New Research Explores How Language Shapes AI Reasoning: Testing Dennett's Theory Through Large Language Models
Key Takeaways
- Recent LLM performance supports Dennett's 1996 theory that language qualitatively transforms the nature of mind
- The abstractness and efficiency of linguistic encoding enable LLMs to perform inferential reasoning across multiple domains
- Language may make inference computationally tractable by providing a flexible medium for representing and manipulating abstract concepts
Summary
A new arXiv paper titled "Language and Thought: The View from LLMs" revisits philosopher Daniel Dennett's 1996 hypothesis that adding language fundamentally transforms the nature of mind itself. The research uses recent advances in large language models as an empirical test of Dennett's radical claim, examining whether linguistic training is the key factor enabling AI systems to perform complex inferential reasoning across diverse domains.
The author argues that LLMs' success at reasoning tasks, despite their limitations, provides evidence supporting Dennett's thesis. The core insight is that language's abstractness and computational efficiency make inference tractable for AI systems. By encoding information linguistically rather than through other representational formats, LLMs can generalize reasoning patterns across unrelated problem domains far more effectively than non-linguistic approaches.
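To make that claim concrete, here is a minimal toy sketch in Python (illustrative only, not drawn from the paper): a single inference schema, encoded as a linguistic template, applies unchanged across unrelated domains. All schema text and domain bindings below are hypothetical examples.

```python
# Toy sketch (hypothetical, not from the paper): one linguistically
# encoded inference schema reused verbatim across unrelated domains.

# An abstract syllogism encoded as a language template. Because the
# rule lives at the level of linguistic form, it is independent of
# any particular subject matter.
SCHEMA = (
    "All {plural} are {property}. "
    "{entity} is a {singular}. "
    "Therefore, {entity} is {property}."
)

# Three unrelated domains; only the content words change.
DOMAINS = [
    {"plural": "mammals", "singular": "mammal",
     "property": "warm-blooded", "entity": "Rex"},
    {"plural": "planets", "singular": "planet",
     "property": "spherical", "entity": "Mars"},
    {"plural": "contracts", "singular": "contract",
     "property": "legally binding", "entity": "this lease"},
]

for bindings in DOMAINS:
    # The same abstract pattern yields a valid inference in each domain.
    print(SCHEMA.format(**bindings))
```

The point of the sketch is the ratio of rules to domains: one schema covers arbitrarily many subject matters, which is one way linguistic abstraction could make inference computationally cheap.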
The paper bridges AI research with philosophical inquiry into cognition, suggesting that the mechanisms enabling LLM reasoning may illuminate how language shapes human thought itself. The work contributes to ongoing debates about whether language is merely a tool for expressing pre-existing thoughts or a fundamental cognitive capability that literally changes how minds work.
More broadly, the paper suggests that AI systems offer empirical grounds for testing longstanding philosophical theories about the relationship between language and thought.
Editorial Opinion
This paper represents valuable cross-disciplinary work bridging AI systems analysis with philosophy of mind. By using LLM capabilities as evidence for Dennett's thesis, it suggests that understanding why language works so well in neural networks may genuinely inform us about human cognition. However, the argument's strength depends on whether LLM performance truly reflects the same linguistic mechanisms that underlie human thought, or whether deep learning systems achieve similar reasoning through fundamentally different computational principles, a distinction the paper does not fully resolve.