Seven free tools that measure what the research says actually drives AI extraction — evidence density, readability, structure, entity coverage, and more. No signup. No paywall. No watermarks.
Statistics, citations, quotations, and readability combined into a single 0–100 score. Peer-reviewed evidence, highest confidence.
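To make the idea concrete, here is a minimal Python sketch of how statistics, citations, and quotations might roll up into a 0–100 density score. The regexes, equal weighting, and per-sentence normalisation are illustrative assumptions only; the actual tool's formula (including its readability component) is not public and is not reproduced here.

```python
import re

def evidence_density_score(text: str) -> float:
    """Hypothetical 0-100 score from three evidence signals.

    Weights and caps are assumptions for illustration, not the
    real tool's method. Readability is omitted for brevity.
    """
    # Crude sentence split; breaks on abbreviations like "et al."
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n = len(sentences)
    if n == 0:
        return 0.0
    stats = len(re.findall(r"\d+(?:\.\d+)?%?", text))            # numbers / statistics
    cites = len(re.findall(r"\([A-Z][A-Za-z]+ et al\.,? \d{4}\)", text))  # inline citations
    quotes = text.count('"') // 2                                # quotation pairs
    # Per-sentence densities, each capped at 1.0, averaged, scaled to 0-100
    parts = [min(stats / n, 1.0), min(cites / n, 1.0), min(quotes / n, 1.0)]
    return round(100 * sum(parts) / len(parts), 1)
```

Evidence-rich text (a statistic, a citation, a quotation) scores well above plain narrative under this sketch, which is the behaviour the score is designed to reward.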
Six structural properties linked directly to AI answer absorption probability. Directional — preprint source clearly labelled.
Flesch-Kincaid grade, passive voice rate, sentence length distribution — calibrated to AI extraction research.
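The Flesch-Kincaid grade itself is a published formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. A minimal Python sketch, using a crude vowel-group syllable heuristic (production readability tools use pronunciation dictionaries):

```python
import re

def count_syllables(word: str) -> int:
    # Vowel-group heuristic; approximate, not dictionary-accurate
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1  # drop silent final "e"
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    """US school-grade estimate via the standard FK formula."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Short, plain sentences score low (easy); long, polysyllabic ones score high, which is what the grade is meant to capture.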
Heading hierarchy charted against structural absorption dimensions from published research.
Named entity density and position mapped across your content — see what AI systems have to anchor on.
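A rough sense of entity density can be had without a full NER pipeline. The sketch below counts capitalised tokens that do not open a sentence, a deliberately crude proxy; real NER (spaCy, for example) is far more accurate, and this is not the tool's actual method. Position mapping is omitted.

```python
import re

def entity_density(text: str) -> float:
    """Fraction of words that look like named entities.

    Proxy: capitalised tokens excluding sentence-initial ones.
    Illustrative assumption only; misses lowercase entities and
    miscounts proper adjectives.
    """
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    entities = words = 0
    for s in sentences:
        tokens = re.findall(r"[A-Za-z][A-Za-z'-]*", s)
        words += len(tokens)
        # Skip token 0: sentence-initial capitals are usually not entities
        entities += sum(1 for t in tokens[1:] if t[0].isupper())
    return entities / words if words else 0.0
```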
How ChatGPT, Perplexity, and Google AI Overviews differ in citation behaviour — so you know which platform to optimise for first.
Sentence-level extraction probability, highlighted in place. See exactly which sentences AI systems are likely to pull.
Know before you publish which structural properties research links to AI citation. Build evidence density from paragraph one.
Run your content through Evidence Density Score and Absorption Analyser. See where you sit on a continuous spectrum — no artificial band labels.
Every scoring tool surfaces a "Where to focus first" panel — deficiencies ranked by evidence confidence, then gap size.
Headings feed readability. Statistics feed evidence density. Definitions feed absorption. Each improvement amplifies the others.
Every dimension measured traces to a published study — peer-reviewed or preprint — with a quantifiable output metric. No invented signals. No gut-feel weights.
Scores are shown on a continuous gradient. No "Strong / Moderate / Low" labels impose thresholds research never derived. You infer your own position.
Every "Where to focus first" panel sorts peer-reviewed predictors (Aggarwal et al. 2024) above directional preprint dimensions (Zhang et al. 2026) — regardless of gap size.
Every tool shows its research basis, evidence tier, and known limitations. No black boxes. No trust-me scores.
No email required. No account. No watermarks on output. No upsell path hiding behind a "free" label. Seven tools, open to anyone.
Preprint sources get scheduled peer-review status checks. Zhang et al. 2026: next check August 2026. If peer-reviewed, tools are upgraded. If retracted, tools are rebuilt.
Published source identified. Quantifiable output metric in the source. Independent pre-verification before build. No waiver mechanism.
Every dimension measured by these tools traces directly to a published study. Aggarwal et al. (2024) is peer-reviewed and published at KDD. Zhang et al. (2026) is a preprint with a scheduled peer-review status check in August 2026. Each tool labels which evidence properties come from which source tier. No dimension is included without a source.
Yes. No email required. No account. No watermarks on output. No upsell path. The tools are free because the research they're built on is public, and access to it shouldn't require a credit card.
Evidence Density Score measures the combination of verifiable content properties most associated with AI citation selection — statistics, citations, quotations, readability, structure. It's anchored in the highest-confidence peer-reviewed source (Aggarwal et al. 2024). Absorption Analyser measures six structural properties specifically associated with AI answer absorption — a distinct phenomenon where AI draws from content to shape its generated answer rather than merely listing it as a source. Start with EDS, then add Absorption Analyser once you've improved your core evidence properties.
Aggarwal et al. (2024) is the peer-reviewed base — published at KDD 2024, the highest-confidence source available. Zhang et al. (2026) is a preprint analysing 21,143 citations across ChatGPT, Google AI Overviews, and Perplexity. It has not yet been peer-reviewed. Tools clearly label which properties come from which evidence tier. The next scheduled check of Zhang et al.'s peer-review status is August 2026.
Start with Evidence Density Score. It covers the highest-confidence research-backed properties and includes a prioritised focus panel that ranks what to fix first. Once you've addressed them, run Absorption Analyser for structural depth. The remaining five tools add specific lenses — use them once the core properties are in order.
The tools measure properties linked to AI citation across a broad research base (21,143+ citations analysed in Zhang et al. 2026, covering ChatGPT, Perplexity, and Google AI Overviews). The properties measured — statistics, structure, definitions, comparative language, readability — are content-type agnostic, drawn from both studies above. FAQ-formatted content is flagged separately; narrative content is where these properties are most clearly applicable.