Google DeepMind Paper Argues LLMs Cannot Achieve Consciousness, Contradicting Company's AGI Narrative
Key Takeaways
- DeepMind researcher argues LLMs are 'mapmaker-dependent', requiring humans to organize data into meaningful states, which he says makes consciousness impossible without embodied physical needs
- The paper's conclusion contradicts DeepMind CEO Demis Hassabis' predictions of transformative AGI impact comparable to the Industrial Revolution
- Consciousness experts agree with the paper's philosophical thesis but note that its arguments have been established in the academic literature for decades, and that the paper cites little of that prior work
Summary
Alexander Lerchner, a senior staff scientist at Google DeepMind, has published a paper titled "The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness," arguing that no computational system will ever become conscious. The paper contends that AI systems are fundamentally "mapmaker-dependent": they require human agents to organize raw data into the meaningful states that AI can then manipulate. Lerchner further argues that consciousness requires embodied physical needs, such as eating, breathing, and survival drives, that computational systems cannot possess.
Lerchner's argument directly contradicts the narrative promoted by DeepMind CEO Demis Hassabis, who has repeatedly predicted that artificial general intelligence (AGI) will arrive with an impact "10 times the Industrial Revolution, but happening at 10 times the speed." If the paper's core claim holds and AI cannot achieve consciousness, there would be a fundamental ceiling on what AI systems can accomplish practically and commercially, undermining predictions of transformative AGI impact.
While philosophers and consciousness researchers agree with Lerchner's philosophical argument, they note that these arguments have been established in the academic literature for decades. Experts expressed surprise that Google allowed the paper to be published given its direct contradiction of the company's public AGI ambitions, but they also criticized the work for failing to adequately cite or engage with existing philosophical and biological research on consciousness.
Google's decision to publish the paper nonetheless highlights a tension between the company's public AGI narrative and its internal research findings.
Editorial Opinion
It's remarkable that Google allowed this paper to be published, as it directly contradicts the company's public AGI narrative championed by CEO Demis Hassabis. While the philosophical argument appears sound, experts correctly point out that Lerchner's work lacks proper grounding in decades of existing consciousness research, undermining its authority. The paper raises important questions about the practical limits of AI capabilities, but its impact may be constrained by its narrow engagement with prior scholarship.

