Google DeepMind Researcher Argues LLMs Cannot Achieve Consciousness
Key Takeaways
- Lerchner argues LLMs lack intrinsic meaning because they depend on humans to pre-organize data into discrete states; they are fundamentally 'mapmaker-dependent' systems
- The paper challenges the 'abstraction fallacy': the belief that sophisticated pattern-matching and symbol manipulation constitute consciousness
- DeepMind's publication of this paper creates tension with its own leadership's AGI claims and suggests hard practical limits on AI capabilities
- Consciousness researchers agree with Lerchner's core arguments but emphasize that the position reflects decades-old philosophical consensus, not new insight
Summary
A senior staff scientist at Google DeepMind, Alexander Lerchner, published a paper titled "The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness," arguing that computational systems will never achieve consciousness. The paper contends that AI systems are "mapmaker-dependent": they require humans to organize continuous reality into discrete, meaningful states, and they therefore lack the intrinsic meaning necessary for consciousness. Lerchner argues that the widespread belief that mimicking sentient behavior through language and image manipulation amounts to actual consciousness is a fundamental category error.
The research creates a notable contradiction within DeepMind itself: while CEO Demis Hassabis claims artificial general intelligence will arrive with transformative impact "10 times the Industrial Revolution," Lerchner's rigorous technical argument suggests that consciousness in such AGI systems is theoretically impossible. Experts in consciousness studies corroborate Lerchner's core claims but note that philosophers and researchers have advanced nearly identical arguments for decades, suggesting the paper may be more a reinvention than a breakthrough, albeit one that carries significant weight because it comes from inside a major AI corporation.
Editorial Opinion
DeepMind's decision to publish Lerchner's work is both commendable and self-undermining. The paper strips away the techno-optimist veneer around AI consciousness claims, yet it also contradicts the company's own commercial narrative about AGI's inevitability. This gap between rigorous research and boardroom rhetoric illuminates a growing credibility problem in AI: the industry's public claims about AGI timelines and capabilities are increasingly difficult to reconcile with peer-reviewed technical work from its own ranks.



