The tools on this site exist to answer one question: what does the research actually say about what makes content get cited by AI? We measure your content against indicators drawn from academic studies, with evidence tiers clearly labelled, so you can calibrate how much to act on them.
Seven tools. One operating principle: no tool ships without a published source and a quantifiable output metric. No invented scores. No gut-feel weights. No band labels imposing thresholds the research never derived.
The name derives from psycholinguistics (the field that studies how humans process language, and a foundational discipline in NLP research) and -citable, the property the tools are designed to measure.
Every dimension measured traces to a published study — peer-reviewed or preprint — with a quantifiable output metric and independent pre-verification before build. This is a gate, not a guideline. No waiver mechanism.
Scores are continuous gradients. No "Strong / Moderate / Low" thresholds are imposed: the research didn't derive them, so we don't construct them. Users see a spectrum marker and read their own position from it. That's honest measurement.
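To make the gradient principle concrete, here is a minimal sketch of what a threshold-free renderer might look like, assuming scores arrive normalised to [0, 1]. The function names and the text-bar rendering are illustrative assumptions, not the tools' actual code.

```ts
// A minimal sketch, assuming scores are normalised to [0, 1].

/** Clamp a raw score into [0, 1]; no band label is ever assigned. */
function spectrumPosition(score: number): number {
  return Math.min(1, Math.max(0, score));
}

/** Render a plain-text spectrum bar with a marker at the score. */
function renderSpectrum(score: number, width = 40): string {
  const pos = Math.round(spectrumPosition(score) * (width - 1));
  return Array.from({ length: width }, (_, i) => (i === pos ? "▲" : "─")).join("");
}

console.log(renderSpectrum(0.62)); // marker lands roughly 62% along the bar
```

The design point is what's absent: there is no branch anywhere that maps a score range to a label.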
When tools surface a "Where to focus first" panel, the sort order is: evidence confidence tier first, gap size second. Aggarwal et al. 2024 (peer-reviewed, KDD) always outranks Zhang et al. 2026 (preprint), regardless of which gap is larger.
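A sketch of that two-key ordering follows. The Recommendation shape and the numeric tier encoding (lower number = higher confidence) are illustrative assumptions, not the tools' actual data model.

```ts
// A minimal sketch of the "Where to focus first" ordering.

interface Recommendation {
  dimension: string;
  tier: number; // 1 = peer-reviewed, 2 = preprint (assumed encoding)
  gap: number;  // distance between the measured value and the research indicator
}

function focusOrder(items: Recommendation[]): Recommendation[] {
  // Primary key: evidence confidence tier (ascending).
  // Secondary key: gap size (descending), applied only within a tier,
  // so a peer-reviewed item always outranks a preprint item.
  return [...items].sort((a, b) => a.tier - b.tier || b.gap - a.gap);
}
```

Because the tier comparison short-circuits the gap comparison, a large gap on a preprint dimension can never jump ahead of a small gap on a peer-reviewed one.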
Every tool exposes its research basis, evidence tier, and known limitations. Preprint sources carry advisory notes. Tools built on directional indicators label them as directional. No dimension is presented with more confidence than the underlying research supports.
No email required. No account. No watermarks. No upsell path hiding behind a "free" label. The research that powers these tools is public. Access to the tools built on it should be too.
Each preprint source is assigned a scheduled peer-review status check. Zhang et al. 2026: next check August 2026. If it becomes peer-reviewed, the tools built on it are upgraded. If it is significantly revised or retracted, they are rebuilt. The review cycle is standing, not optional.
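The cycle is simple enough to sketch as data plus a decision rule. The ReviewOutcome values and the schedule record are illustrative assumptions; only the source name and the August 2026 check date come from the text above.

```ts
// A minimal sketch of the standing review cycle for preprint sources.

type ReviewOutcome = "peer-reviewed" | "revised-significantly" | "retracted" | "unchanged";

interface PreprintReview {
  source: string;
  nextCheck: string; // next scheduled status check (year-month from the text; day unspecified)
}

const schedule: PreprintReview[] = [
  { source: "Zhang et al. 2026", nextCheck: "2026-08" },
];

function reviewAction(outcome: ReviewOutcome): string {
  switch (outcome) {
    case "peer-reviewed":
      return "upgrade the tools built on this source";
    case "revised-significantly":
    case "retracted":
      return "rebuild the tools built on this source";
    case "unchanged":
      return "keep the next standing check scheduled";
  }
}
```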
A new tool proposal must pass three criteria before development begins: (1) a published source identified, specific and accessible; (2) a quantifiable output metric in the source; (3) independent pre-verification of the implementation before build. No waiver mechanism exists.
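As a structural sketch, the gate reduces to three booleans ANDed together. Field and function names here are illustrative; the point is that the record carries no waiver field for an exception to live in.

```ts
// A minimal sketch of the three-criteria gate. No waiver mechanism:
// there is no field or flag through which an exception could pass.

interface ToolProposal {
  name: string;
  hasPublishedSource: boolean;    // (1) specific, accessible published source identified
  hasQuantifiableMetric: boolean; // (2) quantifiable output metric in that source
  isPreVerified: boolean;         // (3) independent pre-verification before build
}

function passesGate(p: ToolProposal): boolean {
  return p.hasPublishedSource && p.hasQuantifiableMetric && p.isPreVerified;
}
```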
The peer-reviewed foundation. Statistics, source attribution, and readability identified as the highest-confidence AI citation predictors.
Evidence Density Score and Readability Analyser built on Aggarwal et al. findings. No tool ships without a published source — established as the founding principle.
21,143 citations analysed across ChatGPT, Google AI Overviews, and Perplexity. Six structural absorption dimensions identified. Preprint status noted; tools built on it carry a directional label.
AI Answer Absorption Analyser built on Zhang et al. 2026. "Where to focus first" panels added to EDS and Absorption tools, ranked by evidence confidence tier first and gap size second.
Strong / Moderate / Low labels removed from all tools. Replaced with a gradient spectrum bar and neutral scale note. No invented thresholds imposed on continuous research data.
Scheduled standing review of Zhang et al. 2026 peer-review status. Tools upgraded or rebuilt based on outcome.
A fit check is more useful than a pitch. Here's where we're honest about what these tools do well and where they fall short.