Study Reveals AI Platforms Cite Websites Differently Than Google—And 'AI SEO' Advice Is Wrong
Key Takeaways
- URL-level overlap between Google's top rankings and AI platform citations is low (7.8-32.4%), indicating that Google rankings do not translate to AI citations
- Internal links are the strongest predictor of AI citation, roughly doubling citation odds, while pages with heavy external linking are less likely to be cited
- Commonly recommended "AI SEO" tactics (author attribution, page speed, popup removal, page size optimization) showed no statistically significant correlation with AI citation in this 479-page study
Summary
A comprehensive empirical study analyzing citation patterns across ChatGPT, Claude, Perplexity, and Gemini has challenged the burgeoning "AI SEO" industry, revealing that traditional search engine optimization strategies do not translate to AI platform citations. The researcher crawled 479 pages and measured 26 technical features to determine what actually influences whether AI platforms cite a given source. The findings show that URL-level overlap between Google's top-3 results and AI citations is extremely low (7.8% for ChatGPT, 32.4% for Google's own AI Mode), indicating that ranking well on Google does not guarantee citation by AI platforms.
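The overlap figures above can be understood as simple set arithmetic over URLs. A minimal sketch of one plausible way to compute them; the study's exact methodology is not detailed here, and the URLs below are hypothetical:

```python
def url_overlap(google_top: set[str], ai_citations: set[str]) -> float:
    """Share of Google's top-ranked URLs that also appear among AI citations."""
    if not google_top:
        return 0.0
    return len(google_top & ai_citations) / len(google_top)

# Hypothetical example: one of three top-ranked URLs is also cited by the AI platform.
google_top3 = {"https://a.example/review", "https://b.example/guide", "https://c.example/list"}
ai_cited = {"https://a.example/review", "https://d.example/other"}
print(f"{url_overlap(google_top3, ai_cited):.1%}")  # → 33.3%
```

Measured per query and averaged, this kind of metric yields the 7.8-32.4% range reported across platforms.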
The study identified internal links as the strongest predictor of AI citation (doubling citation odds with each standard deviation increase), while contradicting popular "AI SEO" advice that recommends author attribution, popup removal, page speed optimization, and backlink building. Pages with high external link counts relative to internal links were significantly less likely to be cited. Notably, cited pages actually showed lower author attribution rates (44.0%) compared to non-cited pages (48.7%), directly undermining industry guidance to add author bylines for AI trust.
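"Doubling citation odds with each standard deviation increase" maps directly onto a standardized logistic-regression coefficient: an odds ratio of 2 per +1 SD implies a coefficient of ln 2. A minimal sketch of that relationship (the coefficient is back-derived from the reported effect size, not re-estimated from the study's data):

```python
import math

def odds_multiplier(beta: float, sd_increase: float = 1.0) -> float:
    """Multiplicative change in citation odds for a given standardized coefficient."""
    return math.exp(beta * sd_increase)

# Coefficient implied by "doubling odds per standard deviation" of internal links.
beta_internal_links = math.log(2)
print(odds_multiplier(beta_internal_links))       # odds double per +1 SD
print(odds_multiplier(beta_internal_links, 2.0))  # odds quadruple per +2 SD
```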
The research also revealed architectural differences between AI platforms through server-side HTTP logging: ChatGPT and Claude perform live fetches during conversations with different user agents and discovery mechanisms, while some platforms rely more heavily on training data and cached search snippets. These findings suggest that effective strategies for AI discoverability require a fundamentally different approach than traditional SEO.
Additional findings:
- AI platforms have different architectures and discovery mechanisms: ChatGPT and Claude perform live HTTP fetches, while others rely on training data and cached search results
- Domain-level trust matters more than page-level ranking; AI platforms cite different pages from the same trusted domains that Google ranks
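The server-side logging approach described above can be approximated by scanning access logs for known AI crawler user-agent tokens. A minimal sketch; the tokens listed are commonly documented crawler names (GPTBot, ClaudeBot, PerplexityBot, etc.) and are an assumption here, not the study's exact detection rules:

```python
import re

# Commonly documented AI crawler/fetcher user-agent tokens (assumed, not from the study).
AI_AGENT_TOKENS = ["GPTBot", "ChatGPT-User", "OAI-SearchBot", "ClaudeBot", "PerplexityBot"]
AI_AGENT_RE = re.compile("|".join(map(re.escape, AI_AGENT_TOKENS)))

def classify_hits(log_lines: list[str]) -> dict[str, int]:
    """Count access-log hits per AI agent token found in the user-agent field."""
    counts: dict[str, int] = {}
    for line in log_lines:
        match = AI_AGENT_RE.search(line)
        if match:
            counts[match.group(0)] = counts.get(match.group(0), 0) + 1
    return counts

sample = [
    '1.2.3.4 - - [01/Jan/2025] "GET /post HTTP/1.1" 200 "-" "Mozilla/5.0; GPTBot/1.0"',
    '5.6.7.8 - - [01/Jan/2025] "GET /post HTTP/1.1" 200 "-" "Mozilla/5.0; ClaudeBot/1.0"',
]
print(classify_hits(sample))  # → {'GPTBot': 1, 'ClaudeBot': 1}
```

Distinguishing live in-conversation fetches from background training crawls would additionally require comparing request timing against known conversation times, as the study's logging setup evidently did.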
Editorial Opinion
This research provides much-needed empirical grounding to a space flooded with speculative "AI SEO" advice. The finding that internal linking patterns—not author bios or page speed—predict AI citation represents a genuine insight that challenges the industry's assumption that Google optimization equals AI optimization. However, the study's scope (120 queries, product recommendation focus) suggests results may not generalize uniformly across all query types and use cases, warranting follow-up research on informational and navigational queries. The most valuable contribution is demonstrating that AI platform citation mechanisms are fundamentally different from search engine ranking algorithms, requiring new strategies rather than recycled SEO dogma.