About

An evidence-first content lab for solo creators.

The tools on this site exist to answer one question: what does the research actually say about what makes content get cited by AI? We measure your content against indicators drawn from academic studies, with evidence tiers clearly labelled, so you can calibrate how much to act on them.

Seven tools, one founding principle: no tool ships without a published source and a quantifiable output metric. No invented scores. No gut-feel weights. No band labels imposing thresholds the research never derived.

The name combines psycholinguistics (the field studying how humans process language, and a foundational discipline in NLP research) with -citable, the property the tools are designed to measure.

Operating principles

Seven principles. All rooted in academic research.

P.01 · Evidence before build

No tool ships without a source

Every dimension measured traces to a published study — peer-reviewed or preprint — with a quantifiable output metric and independent pre-verification before build. This is a gate, not a guideline. No waiver mechanism.

P.02 · Continuous over categorical

Spectrums, not band labels

Scores sit on a continuous scale. No "Strong / Moderate / Low" thresholds are imposed: the research never derived them, so we don't construct them. Users see a marker on a spectrum and read their own position from it. That's honest measurement.
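
As a minimal sketch of the idea, with hypothetical names rather than the tools' actual code: a continuous score maps linearly to a marker position, and no branch ever assigns a band label.

    // Illustrative TypeScript sketch; spectrumPosition is a hypothetical name,
    // not the site's implementation. A score on a 0-100 scale maps linearly
    // to a position along the gradient bar. Deliberately no if/else assigning
    // "Strong", "Moderate", or "Low".
    function spectrumPosition(score: number, min = 0, max = 100): number {
      const clamped = Math.min(max, Math.max(min, score));
      return (clamped - min) / (max - min); // 0.0 .. 1.0 along the bar
    }

So spectrumPosition(62) places the marker 62% of the way along the bar; the reader, not the tool, decides what that position means.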

P.03 · Confidence-first prioritisation

Peer-reviewed always ranks above preprint

When tools surface a "Where to focus first" panel, the sort order is: evidence confidence tier first, gap size second. Aggarwal et al. 2024 (peer-reviewed, KDD) always outranks Zhang et al. 2026 (preprint) in priority — regardless of which gap is larger.
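
A minimal sketch of that sort order, assuming hypothetical field names (tier, gap) rather than the tools' actual data model:

    // Illustrative TypeScript sketch; Finding, tier, and gap are hypothetical names.
    type Tier = "peer-reviewed" | "preprint";
    interface Finding { label: string; tier: Tier; gap: number }

    const tierRank: Record<Tier, number> = { "peer-reviewed": 0, preprint: 1 };

    // "Where to focus first" order: evidence confidence tier first,
    // then larger gaps first within the same tier.
    function focusOrder(findings: Finding[]): Finding[] {
      return [...findings].sort(
        (a, b) => tierRank[a.tier] - tierRank[b.tier] || b.gap - a.gap
      );
    }

Under this comparator, a peer-reviewed finding with a small gap still sorts ahead of a preprint finding with a large one, which is exactly the trade the principle commits to.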

P.04 · Transparency by default

Source and confidence always shown

Every tool exposes its research basis, evidence tier, and known limitations. Preprint sources carry advisory notes. Tools built on directional indicators label them as directional. No dimension is presented with more confidence than the underlying research supports.

P.05 · No gates

Free because evidence should be accessible

No email required. No account. No watermarks. No upsell path hiding behind a "free" label. The research that powers these tools is public. Access to the tools built on it should be too.

P.06 · Preprint review cycle

Zhang et al. 2026 is actively monitored

Preprint sources are assigned scheduled peer-review status checks. Zhang et al. 2026: next check August 2026. If it becomes peer-reviewed, the tools are upgraded. If it is revised significantly or retracted, the tools are rebuilt. The review cycle is standing, not optional.

P.07 · Tool admission standard

Three criteria before any new tool is scoped

A new tool proposal must pass three criteria before development begins: (1) a published source identified, specific and accessible; (2) a quantifiable output metric in the source; (3) independent pre-verification of the implementation before build. No waiver mechanism exists.
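
Expressed as a minimal sketch (the type and field names here are hypothetical, not a real internal API), the gate is a plain conjunction with no override parameter:

    // Illustrative TypeScript sketch; ToolProposal and its fields are hypothetical.
    interface ToolProposal {
      sourceIdentified: boolean;   // (1) published, specific, accessible source
      quantifiableMetric: boolean; // (2) quantifiable output metric in the source
      preVerified: boolean;        // (3) independent pre-verification before build
    }

    // All three criteria must hold; there is deliberately no waiver flag.
    const admit = (p: ToolProposal): boolean =>
      p.sourceIdentified && p.quantifiableMetric && p.preVerified;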

Timeline

How the tool set evolved.

2024

Aggarwal et al. 2024 published at KDD

The peer-reviewed foundation. Statistics, source attribution, and readability identified as the highest-confidence AI citation predictors.

2024

First tools scoped against peer-reviewed evidence

Evidence Density Score and Readability Analyser built on Aggarwal et al. findings. No tool ships without a published source — established as the founding principle.

2026

Zhang et al. 2026 preprint published

21,143 citations analysed across ChatGPT, Google AI Overviews, and Perplexity. Six structural absorption dimensions identified. Preprint status noted; tools built on it carry a directional label.

2026

Absorption Analyser and focus panels added

AI Answer Absorption Analyser built on Zhang et al. 2026. "Where to focus first" panels added to EDS and Absorption tools, ranked by evidence confidence tier first and gap size second.

2026

Band labels replaced with continuous spectrum

Strong / Moderate / Low labels removed from all tools. Replaced with a gradient spectrum bar and neutral scale note. No invented thresholds imposed on continuous research data.

Aug 2026

Next Zhang et al. peer-review status check

Scheduled standing review of Zhang et al. 2026 peer-review status. Tools upgraded or rebuilt based on outcome.

Who these tools are for — and who they're not for.

A fit check is more useful than a pitch. Here's where we're honest about what these tools do well and where they fall short.

Good fit

  • Solo content creators publishing long-form content
  • Writers who want evidence-based writing guidance, not intuition
  • Creators optimising for AI Overviews, ChatGPT, or Perplexity visibility
  • Publishers who want to understand which factors their content is missing
  • Anyone willing to act on directional guidance with appropriate caveats

Not a good fit

  • Content that relies on FAQ-only structure — the advisory applies
  • Anyone needing peer-reviewed certainty for every indicator — some remain preprint-only
  • Content where keyword placement is the primary strategy — these tools don't measure it
  • Brands expecting a score to predict exact citation rates — all findings are probabilistic estimates, not guarantees