BotBeat
OpenAI · RESEARCH · 2026-05-12

Large-Scale Study Reveals 146,932 Hallucinated Citations in 2025 Scientific Papers, Raising Alarms About LLM Reliability in Research

Key Takeaways

  • A study of 111 million citations identified 146,932 hallucinated references in 2025 alone, indicating a sharp increase in fabricated citations following widespread LLM adoption
  • Hallucinated citations are unevenly distributed, appearing more frequently in papers with AI linguistic signatures, and disproportionately among early-career authors and smaller research teams
  • The errors systematically credit already-prominent and male scholars, potentially exacerbating existing gender and prominence biases in academia
Source: Hacker News · https://arxiv.org/abs/2605.07723

Summary

A comprehensive arXiv research paper analyzing 111 million scientific citations across 2.5 million papers has documented alarming evidence of hallucinated references spreading through academic literature following widespread LLM adoption. The study, which examined papers from arXiv, bioRxiv, SSRN, and PubMed Central, identified a sharp rise in non-existent citations in 2025, with a conservative estimate of 146,932 hallucinated references in that year alone. These errors are not randomly distributed—they disproportionately appear in papers with linguistic signatures of AI-assisted writing, among small and early-career author teams, and in fields with rapid AI uptake.

The research reveals a troubling equity dimension: hallucinated references tend to assign credit to already prominent and male scholars, potentially reinforcing existing inequities in scientific recognition. More concerning is that existing safeguards—preprint moderation and journal publication processes—capture only a fraction of these errors, suggesting that the spread of hallucinated content has significantly outpaced institutional oversight mechanisms. As human researchers and AI systems increasingly draw on existing literature, the infiltration of false citations into the scientific record threatens both the reliability and integrity of future scientific discovery.

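The editorial below calls for "robust citation verification"; as a rough illustration of what such a check could involve, the sketch below queries a public bibliographic index to see whether a cited title resolves to an indexed work. This is a minimal, hypothetical example and not the method used in the study: the reference_looks_real helper, the choice of the Crossref REST API, and the 0.9 title-similarity threshold are all assumptions, and a failed lookup only marks a reference for manual checking rather than proving fabrication.

```python
# Minimal sketch (not the study's method): flag a citation for manual review
# when its title does not closely match any work indexed in Crossref.
# The helper name and the 0.9 similarity threshold are illustrative assumptions.
import difflib

import requests

CROSSREF_API = "https://api.crossref.org/works"


def reference_looks_real(cited_title: str, min_similarity: float = 0.9) -> bool:
    """Return True if Crossref indexes a work whose title closely matches the citation."""
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": cited_title, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        for indexed_title in item.get("title", []):
            similarity = difflib.SequenceMatcher(
                None, cited_title.lower(), indexed_title.lower()
            ).ratio()
            if similarity >= min_similarity:
                return True  # an indexed work closely matches the cited title
    return False  # no close match: a signal to check the reference by hand


if __name__ == "__main__":
    # A genuine, widely indexed paper should pass; a fabricated title likely will not.
    print(reference_looks_real("Attention Is All You Need"))
```

A real verification pipeline would also match authors, venue, and year, and would distinguish works that are simply missing from one index from works that do not exist at all.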

Editorial Opinion

This research exposes a critical vulnerability in scientific knowledge infrastructure at precisely the moment when LLMs are becoming ubiquitous research tools. The scale is staggering—146,000+ false citations injected into the record in a single year—and the trajectory suggests the problem is accelerating. Most troubling is that the errors aren't random: they systematically advantage existing power structures while undermining the work of marginalized researchers. Until LLM developers implement robust citation verification and academic institutions develop detection mechanisms, the integrity of the scientific literature itself remains at serious risk.

Large Language Models (LLMs) · Science & Research · Ethics & Bias · AI Safety & Alignment · Misinformation & Deepfakes
