Brain's Word Prediction Works Differently Than LLMs, New Study Shows
Key Takeaways
- The human brain predicts words by first grouping preceding words into grammatical constituents (chunks), unlike LLMs, which predict word by word sequentially
- Brain predictions are modulated by linguistic structure and surrounding context at the phrase level, not just the immediate next-word context
- The study used MEG brain imaging and Cloze tasks with Mandarin Chinese and English speakers to show these differences hold across languages
Summary
A new study published in Nature Neuroscience reveals that human brains predict words through a more complex process than large language models. Researchers from NYU, the Ernst Struengmann Institute, and Zhejiang University found that the brain predicts words by considering grammatical structure and grouping words into phrases (constituents), rather than predicting the next word sequentially as LLMs do.
Using magnetoencephalography (MEG) to measure brain activity, along with Cloze tests administered to Mandarin Chinese and English speakers, the team found that next-word prediction in the human brain is balanced against the processing of grammatically organized chunks of words. This stands in stark contrast to LLMs, which are trained to predict each next word from its preceding context, with every word's predictive contribution treated independently and equally.
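For readers unfamiliar with the behavioral measure: a word's Cloze probability is simply the fraction of participants who produce that word when asked to complete a sentence frame. A minimal sketch (illustrative only; the sentence frame and responses below are invented, not taken from the study):

```python
from collections import Counter

def cloze_probabilities(responses):
    """Return each completion's Cloze probability: the fraction of
    participants who produced it for a given sentence frame."""
    counts = Counter(r.strip().lower() for r in responses)
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

# Hypothetical frame: "The children went outside to ___"
responses = ["play", "play", "play", "run", "eat"]
probs = cloze_probabilities(responses)
print(probs["play"])  # 0.6 — "play" is the high-Cloze completion
```

High-Cloze words are those the population strongly predicts, which is what makes the measure a useful behavioral anchor for the MEG prediction signals.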
The research suggests that while LLMs and human brains both engage in word prediction, the mechanisms are fundamentally different. The brain's approach is more hierarchical and structure-aware, while LLMs use a flatter, sequential prediction model.
The findings suggest that LLMs may not capture the full grammatical hierarchy the brain uses for language prediction.
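The flat-versus-hierarchical contrast can be caricatured in code. This is a toy illustration of the two kinds of context described above, not the study's actual models: a flat, LLM-style predictor sees every preceding word on equal footing, while a structure-aware predictor organizes its context into completed constituents plus the partial current chunk (the constituent bracketing here is a hypothetical example):

```python
sentence = ["the", "old", "man", "boats"]
# Hypothetical constituent bracketing: [[the old man] [boats]]
constituents = [["the", "old", "man"], ["boats"]]

def flat_context(words, i):
    """LLM-style: all preceding words, available equally and in sequence."""
    return tuple(words[:i])

def chunked_context(chunks, i):
    """Structure-aware: context grouped into closed constituents,
    plus whatever part of the current chunk has been heard so far."""
    ctx, seen = [], 0
    for chunk in chunks:
        if seen + len(chunk) <= i:
            ctx.append(tuple(chunk))          # a completed constituent
            seen += len(chunk)
        else:
            partial = tuple(chunk[: i - seen])  # the in-progress chunk
            if partial:
                ctx.append(partial)
            break
    return tuple(ctx)

print(flat_context(sentence, 3))         # ('the', 'old', 'man')
print(chunked_context(constituents, 3))  # (('the', 'old', 'man'),)
```

The two predictors see the same words, but the chunked one knows which words form a closed unit — the kind of phrase-level organization the study reports modulating the brain's predictions.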
Editorial Opinion
This research should prompt the AI research community to reconsider how LLMs are structured for language prediction. If human cognition leverages hierarchical grammatical structure in ways current LLMs don't, there may be untapped architectural improvements that could make language models more efficient or capable. The gap between human and machine language processing remains significant, and that gap may be where future breakthroughs lie.