BotBeat

Google / Alphabet
RESEARCH · 2026-04-28

Google DeepMind Researcher Argues LLMs Cannot Achieve Consciousness

Key Takeaways

  • Lerchner argues LLMs lack intrinsic meaning because they depend on humans to pre-organize data into discrete states; they are fundamentally "mapmaker-dependent" systems
  • The paper challenges the "abstraction fallacy"—the belief that sophisticated pattern-matching and symbol manipulation constitute consciousness
  • DeepMind's publication of this paper creates tension with its own leadership's AGI claims and suggests hard practical limits on AI capabilities
Source: Hacker News — https://www.404media.co/google-deepmind-paper-argues-llms-will-never-be-conscious/

Summary

Alexander Lerchner, a senior staff scientist at Google DeepMind, published a paper titled "The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness," arguing that computational systems will never achieve consciousness. The paper contends that AI systems are "mapmaker-dependent": they require humans to organize continuous reality into discrete, meaningful states, and therefore lack the intrinsic meaning necessary for consciousness. Lerchner argues that the common assumption—that mimicking sentient behavior through language and image manipulation equates to actual consciousness—is a fundamental category error.

The research creates a notable contradiction within DeepMind itself: while CEO Demis Hassabis claims artificial general intelligence will arrive with transformative impact "10 times the Industrial Revolution," Lerchner's rigorous technical argument suggests such AGI-level consciousness is theoretically impossible. Experts in consciousness studies corroborate Lerchner's core claims but note that philosophers and researchers have advanced nearly identical arguments for decades—suggesting the paper may represent a reinvention rather than a breakthrough, albeit one carrying significant weight coming from inside a major AI corporation.


Editorial Opinion

DeepMind's decision to publish Lerchner's work is both commendable and self-undermining. The paper strips away the techno-optimist veneer around AI consciousness claims, yet it also contradicts the company's own commercial narrative about AGI's inevitability. This gap between rigorous research and boardroom rhetoric illuminates a growing credibility problem in AI: the industry's public claims about AGI timelines and capabilities are increasingly difficult to reconcile with peer-reviewed technical work from its own ranks.

Tags: Large Language Models (LLMs) · Ethics & Bias · AI Safety & Alignment

More from Google / Alphabet

  • PARTNERSHIP — Google Agrees to 'Any Lawful' Pentagon AI Deal, Waives Veto Power Over Military Use (2026-04-28)
  • RESEARCH — Google-Backed Research Releases PAVO-Bench: 50K-Turn Voice Dataset and Coupled-System Router (2026-04-28)
  • POLICY & REGULATION — EU Forces Google to Open Android AI Ecosystem to Competitors; Company Objects to Compliance Mandate (2026-04-28)

Suggested

  • OpenAI — PRODUCT LAUNCH — OpenAI Releases GPT-5.5: A Competitive Challenger to Claude with Focus on Agentic Capabilities (2026-04-28)
  • Antigma Labs — RESEARCH — Antigma Labs Releases Ante Agent as Open-Weight 27B Models Hit Frontier Performance (2026-04-28)
  • GitHub — UPDATE — GitHub Copilot Silently Adds Itself as Co-Author to Commits, Raising Accountability Concerns (2026-04-28)
© 2026 BotBeat