Built on peer-reviewed research

Turn your content into the source AI systems cite.

Seven free tools that measure what the research says actually drives AI extraction — evidence density, readability, structure, entity coverage, and more. No signup. No paywall. No watermarks.

E.01 · Evidence & Scoring

Evidence Density Score

Statistics, citations, quotations, and readability combined into a single 0–100 score. Peer-reviewed evidence, highest confidence.
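
For readers who want the mechanics, here is a minimal sketch of how a composite like this could be computed. The weights, normalisation caps, and per-dimension units are illustrative assumptions, not the tool's published coefficients.

```python
# Illustrative composite: each dimension's raw signal is normalised to 0-1
# against an assumed cap, weighted, and summed into a 0-100 score.
DIMENSIONS = {
    # name: (assumed weight, assumed cap used for normalisation)
    "statistics":  (0.30, 8.0),  # numeric claims per 1,000 words
    "citations":   (0.25, 6.0),  # outbound source references
    "quotations":  (0.20, 4.0),  # attributed direct quotes
    "readability": (0.25, 1.0),  # already a 0-1 signal in this sketch
}

def evidence_density_score(raw: dict[str, float]) -> float:
    """Collapse per-dimension signals into a single 0-100 score."""
    total = 0.0
    for name, (weight, cap) in DIMENSIONS.items():
        total += weight * min(raw.get(name, 0.0) / cap, 1.0)
    return round(total * 100, 1)

print(evidence_density_score(
    {"statistics": 5, "citations": 3, "quotations": 2, "readability": 0.7}
))  # prints about 58.7 with these assumed weights
```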

E.02 · Evidence & Scoring

AI Answer Absorption Analyser

Six structural properties linked directly to AI answer absorption probability. Directional — preprint source clearly labelled.

S.01 · Structure & Readability

Readability Analyser

Flesch-Kincaid grade, passive voice rate, sentence length distribution — calibrated to AI extraction research.
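
The Flesch-Kincaid grade formula itself is public: 0.39 × (words ÷ sentences) + 11.8 × (syllables ÷ words) − 15.59. A minimal sketch, with a crude vowel-group heuristic standing in for whatever syllable counter the tool actually uses:

```python
import re

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    # Crude syllable estimate: runs of vowels, minimum one per word.
    syllables = sum(
        max(len(re.findall(r"[aeiouy]+", word.lower())), 1) for word in words
    )
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
```

Passive-voice rate and sentence-length distribution are plain counts over the same tokenisation.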

S.02 · Structure & Readability

Heading Visualiser

Heading hierarchy charted against structural absorption dimensions from published research.
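
As a sketch of the chart's underlying data, assuming markdown input; the tool may equally parse HTML heading tags:

```python
import re

def heading_profile(markdown: str) -> dict[int, int]:
    """Count headings per level (1-6) in a markdown document."""
    levels = [len(m.group(1)) for m in re.finditer(r"^(#{1,6})\s", markdown, re.M)]
    return {level: levels.count(level) for level in sorted(set(levels))}

print(heading_profile("# Title\n## Method\n## Results\n### Detail\n"))
# {1: 1, 2: 2, 3: 1}
```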

V.01 · Visibility & Coverage

Entity Prominence Map

Named entity density and position mapped across your content — see what AI systems have to anchor on.
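
A minimal sketch of the positional mapping, using spaCy's off-the-shelf NER as a stand-in for whatever entity pipeline the tool actually runs:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

def entity_prominence(text: str, buckets: int = 10) -> list[int]:
    """Count named entities per positional bucket (bucket 0 = document opening)."""
    counts = [0] * buckets
    for ent in nlp(text).ents:
        position = ent.start_char / max(len(text), 1)  # 0.0 .. 1.0
        counts[min(int(position * buckets), buckets - 1)] += 1
    return counts
```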

V.02 · Visibility & Coverage

Platform Variance Indicator

How ChatGPT, Perplexity, and Google AI Overviews differ in citation behaviour — so you know which platform to optimise for first.

V.03 · Visibility & Coverage

AI Snippet Extractor

Sentence-level extraction probability, highlighted in place. See exactly which sentences AI systems are likely to pull.
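
As a rough illustration of per-sentence scoring. The cue patterns and length band below are assumptions made for the sketch, not the tool's published model:

```python
import re

# Assumed extraction cues: a numeric claim, a definition marker, a comparison.
CUES = (r"\d", r"\b(?:is defined as|refers to|means)\b", r"\bcompared (?:to|with)\b")

def sentence_scores(text: str) -> list[tuple[float, str]]:
    """Score each sentence 0-1 by how many assumed extraction cues it carries."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    scored = []
    for sentence in sentences:
        hits = sum(bool(re.search(cue, sentence, re.I)) for cue in CUES)
        in_band = 8 <= len(sentence.split()) <= 30  # assumed extractable length
        scored.append((hits / len(CUES) * (1.0 if in_band else 0.5), sentence))
    return sorted(scored, reverse=True)  # most extractable sentences first
```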

Full-funnel coverage

From first draft to content AI systems cite.

Stage 01 · Write

Draft with evidence built in

Know before you publish which structural properties research links to AI citation. Build evidence density from paragraph one.

Stage 02 · Score

Measure against benchmarks

Run your content through Evidence Density Score and Absorption Analyser. See where you sit on a continuous spectrum — no artificial band labels.

Stage 03 · Improve

Prioritised fix list

Every scoring tool surfaces a "Where to focus first" panel — deficiencies ranked by evidence confidence, then gap size.
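
The ranking rule is simple enough to state as code. A sketch with made-up deficiencies; the tier ordering mirrors principle P.03 below:

```python
TIER_RANK = {"peer-reviewed": 0, "preprint": 1}  # lower sorts first

deficiencies = [  # illustrative gaps, not real tool output
    {"dimension": "heading density", "tier": "preprint",      "gap": 0.8},
    {"dimension": "statistics",      "tier": "peer-reviewed", "gap": 0.3},
    {"dimension": "quotations",      "tier": "peer-reviewed", "gap": 0.6},
]

# Evidence confidence first, largest gap second: a peer-reviewed deficiency
# outranks a preprint one regardless of gap size.
deficiencies.sort(key=lambda d: (TIER_RANK[d["tier"]], -d["gap"]))
# Order: quotations, statistics, heading density.
```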

Stage 04 · Compound

Structural richness compounds

Headings feed readability. Statistics feed evidence density. Definitions feed absorption. Each improvement amplifies the others.

+31%
Citation probability lift from statistics in content
Aggarwal et al. 2024 · peer-reviewed
11.44×
Length advantage of high-influence vs. low-influence pages
Zhang et al. 2026 · preprint
12.5×
Heading density of AI-absorbed content vs. low-influence pages
Zhang et al. 2026 · preprint
7
Free tools. No email gates. No watermarks. No upsell.
All tools · open access
How we build

Seven principles. None of them negotiable.

P.01 · Evidence before build

No tool ships without a source

Every dimension measured traces to a published study — peer-reviewed or preprint — with a quantifiable output metric. No invented signals. No gut-feel weights.

P.02 · Continuous over categorical

Spectrums, not bands

Scores are shown on a continuous gradient, with no "Strong / Moderate / Low" labels imposing thresholds the research never derived. You infer your own position.

P.03 · Confidence-first prioritisation

Peer-reviewed ranks above preprint

Every "Where to focus first" panel sorts peer-reviewed predictors (Aggarwal et al. 2024) above directional preprint dimensions (Zhang et al. 2026) — regardless of gap size.

P.04 · Transparency by default

Source and confidence always visible

Every tool shows its research basis, evidence tier, and known limitations. No black boxes. No trust-me scores.

P.05 · No gates

Free because evidence should be accessible

No email required. No account. No watermarks on output. No upsell path hiding behind a "free" label. Seven tools, open to anyone.

P.06 · Preprint review

Zhang et al. 2026 is actively monitored

Preprint sources get scheduled peer-review status checks. Zhang et al. 2026: next check August 2026. If peer-reviewed, tools are upgraded. If retracted, tools are rebuilt.

P.07 · Tool admission standard

Three criteria before any new tool is scoped

Published source identified. Quantifiable output metric in the source. Independent pre-verification before build. No waiver mechanism.

Common questions

What the research actually says — and what it doesn't.

What research are these tools built on?

Every dimension measured by these tools traces directly to a published study. Aggarwal et al. (2024) is peer-reviewed and published at KDD. Zhang et al. (2026) is a preprint with a scheduled peer-review status check in August 2026. Each tool labels which evidence properties come from which source tier. No dimension is included without a source.

Are the tools really free?

Yes. No email required. No account. No watermarks on output. No upsell path. The tools are free because the research they're built on is public, and access to it shouldn't require a credit card.

What's the difference between Evidence Density Score and Absorption Analyser?

Evidence Density Score measures the combination of verifiable content properties most associated with AI citation selection — statistics, citations, quotations, readability, structure. It's anchored in the highest-confidence peer-reviewed source (Aggarwal et al. 2024). Absorption Analyser measures six structural properties specifically associated with AI answer absorption — a distinct phenomenon where AI draws from content to shape its generated answer, not just list it as a source. Start with EDS, add Absorption once you've improved your core evidence properties.

How strong is the underlying evidence?

Aggarwal et al. (2024) is the peer-reviewed base, published at KDD 2024 and the highest-confidence source available. Zhang et al. (2026) is a preprint analysing 21,143 citations across ChatGPT, Google AI Overviews, and Perplexity. It has not yet been peer-reviewed. Tools clearly label which properties come from which evidence tier. The next scheduled review of Zhang et al.'s peer-review status is August 2026.

Which tool should I start with?

Start with Evidence Density Score. It covers the highest-confidence research-backed properties and includes a prioritised focus panel that ranks what to fix first. Once you've addressed those fixes, run Absorption Analyser for structural depth. The remaining five tools add specific lenses — use them once the core properties are in order.

Do the tools work for every content type and AI platform?

The tools measure properties linked to AI citation across a broad research base (21,143+ citations analysed in Zhang et al. 2026, covering ChatGPT, Perplexity, and Google AI Overviews). The properties measured — statistics, structure, definitions, comparative language, readability — are content-type agnostic, drawn from both studies above. FAQ-formatted content is flagged separately; narrative content is where these properties are most clearly applicable.