Psytable
What is the Platform Variance Indicator?
AI systems are not a single unified target. ChatGPT, Perplexity, and Google AI Overviews each surface content differently and, according to published research, respond to different content signals with different strengths. Optimising for "AI visibility" without accounting for platform differences means optimising for an average that may not match any actual platform.

This page presents documented platform-specific preferences from published research. For ChatGPT and Google AI Overviews, the primary source is Zhang et al. (2026) — a preprint study analysing 21,143 citations. For Perplexity, the strongest data comes from Aggarwal et al. (2024), a peer-reviewed study published in KDD 2024 that includes a real-world validation test on the Perplexity platform.

These findings reflect research conducted at specific points in time on specific query types. Generative engine behaviour changes as models are updated. Use this information as directional guidance for content decisions, not as fixed rules.
PVI

Platform Variance Indicator

Not all AI optimisation is aimed at the same target. Here is what published research documents about platform-specific content preferences across ChatGPT, Perplexity, and Google AI Overviews.

Platform preferences are documented from published research conducted at specific points in time on specific query types. The primary source for ChatGPT and Google AI Overviews platform findings (Zhang et al. 2026) is a preprint and has not yet completed peer review. Generative engine behaviour changes as models are updated. These findings are directional guidance, not guaranteed outcomes.

ChatGPT

Preprint — Zhang et al. 2026

Positive signals

  • Structural richness — significantly more headings in high-influence pages
  • Paragraph density — significantly more paragraphs in high-influence pages
  • Definitional content — approximately +57% absorption for pages with high definitional density
  • Comparative content — approximately +55% absorption for comparative writing
  • Statistics — approximately +61% absorption for statistic-dense pages

Anti-signal

  • Q&A / FAQ formatting — approximately −5.74% relative absorption compared to narrative structure

Source: Zhang et al. (2026), 'From Citation Selection to Citation Absorption'. Preprint — not yet peer-reviewed. All percentages are approximate. These are page-level group comparisons, not controlled experiments.
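As an illustration of how these structural signals might be checked programmatically, the sketch below counts headings, paragraphs, definitional sentences, and statistics in a Markdown page. The regexes and function name are hypothetical heuristics for this page; the study does not publish extraction rules or thresholds.

```python
import re

def chatgpt_signal_profile(markdown: str) -> dict:
    """Crude counts of the structural signals Zhang et al. (2026) associate
    with higher ChatGPT absorption. Illustrative heuristics only; the study
    does not publish thresholds, so these patterns are assumptions."""
    # Headings: lines starting with one to six '#' characters
    headings = len(re.findall(r"^#{1,6}\s", markdown, flags=re.MULTILINE))
    # Paragraphs: blank-line-separated blocks, excluding headings/lists/quotes
    blocks = [b for b in re.split(r"\n\s*\n", markdown) if b.strip()]
    paragraphs = sum(
        1 for b in blocks
        if not b.lstrip().startswith(("#", "-", "*", ">"))
    )
    # Definitional sentences: "X is a/an ..." pattern (a very rough proxy)
    definitions = len(re.findall(r"\b\w[\w\s]*\bis an?\b", markdown))
    # Statistics: percentages, or standalone figures of two or more digits
    statistics = len(re.findall(r"\d+(?:\.\d+)?%|\b\d{2,}\b", markdown))
    return {
        "headings": headings,
        "paragraphs": paragraphs,
        "definitions": definitions,
        "statistics": statistics,
    }
```

A profile like this can flag pages that are thin on the documented signals before deeper analysis, but it says nothing about whether adding more of each signal will change absorption.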

Perplexity

Peer-reviewed — Aggarwal et al. 2024

Positive signals

  • Quotation addition — approximately +22% citation probability (real-world validated)
  • Statistics addition — approximately +37% citation probability (real-world validated)
  • Source citation — approximately +30% citation probability
  • Fluency optimisation — approximately +28% citation probability

Source: Aggarwal et al. (2024), 'GEO: Generative Engine Optimization', KDD 2024 (peer-reviewed). Perplexity-specific figures include a real-world test on the live platform — the most evidentially robust data in this tool suite.

Google AI Overviews

Limited data — Zhang et al. 2026

Positive signals

  • Structural richness — consistent with ChatGPT patterns; headings and paragraph density associated with higher influence

Source: Zhang et al. (2026), preprint. Google AI Overviews findings are less extensively documented than ChatGPT and Perplexity in published research.

RL-025: Google AI Overviews data is less extensively documented in published research than ChatGPT and Perplexity findings. Treat these signals as directional only — the evidence base for this platform is thinner than for the others shown.

Why platform differences matter
The research literature on AI optimisation draws from at least two distinct studies — Aggarwal et al. (2024) and Zhang et al. (2026) — which measure different platforms, different dependent variables (citation selection vs. answer absorption), and different content interventions. Presenting a single "AI optimisation" number without disclosing platform variance conflates findings from different contexts.

The practical implication: if your priority is Perplexity citation probability, statistics and quotations are the strongest choices — they are the two interventions validated on the live platform, supported by peer-reviewed research. If your priority is ChatGPT answer absorption, structural richness (headings, paragraphs, definitional language) is the better signal — though based on a preprint. Google AI Overviews findings are too thin to distinguish confidently from ChatGPT's.
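As a sketch of this decision logic, the hypothetical mapping below pairs each platform with the interventions the cited studies associate with it, ranked by approximate effect size. The structure, names, and helper function are illustrative, not part of any published tooling, and the figures are group comparisons from the cited studies, not guaranteed lifts.

```python
# Hypothetical encoding of the documented findings. Figures are the
# approximate effect sizes reported by the cited studies.
PLATFORM_SIGNALS = {
    "perplexity": {
        "evidence": "peer-reviewed (Aggarwal et al. 2024, KDD)",
        "interventions": {
            "statistics": 0.37,       # ~+37% citation probability (validated)
            "source_citation": 0.30,  # ~+30% citation probability
            "fluency": 0.28,          # ~+28% citation probability
            "quotations": 0.22,       # ~+22% citation probability (validated)
        },
    },
    "chatgpt": {
        "evidence": "preprint (Zhang et al. 2026)",
        "interventions": {
            "statistics": 0.61,   # ~+61% absorption
            "definitions": 0.57,  # ~+57% absorption
            "comparisons": 0.55,  # ~+55% absorption
        },
    },
    "google_ai_overviews": {
        "evidence": "preprint, limited data (Zhang et al. 2026)",
        "interventions": {},  # too thin to distinguish from ChatGPT
    },
}

def top_interventions(platform: str, n: int = 2) -> list[str]:
    """Return the n interventions with the largest documented effect size."""
    signals = PLATFORM_SIGNALS[platform]["interventions"]
    return sorted(signals, key=signals.get, reverse=True)[:n]
```

Ranking by raw effect size is only one possible policy; a real prioritisation would also weigh evidence quality, which is why the mapping records it alongside the numbers.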

The other tools in this suite measure properties relevant across platforms — structural richness, evidence density, and entity prominence are positive signals for all three platforms covered here. Platform-specific targeting is a refinement layer, not a replacement for content quality.
Pair with
Evidence Density Score

Run the full cross-platform evidence analysis — structural richness, content signals, and readability in one composite score.

Try it →