BotBeat

Academic Research · 2026-04-24

Brain's Word Prediction Works Differently Than LLMs, New Study Shows

Key Takeaways

  • The human brain groups words into grammatical constituents (chunks) before predicting, unlike LLMs, which predict word by word sequentially
  • Brain predictions are modulated by linguistic structure and surrounding context at the phrase level, not just the immediate next-word context
  • The study used MEG brain imaging and linguistic tasks with Mandarin Chinese and English speakers to show these differences hold across languages
Source: Hacker News (https://www.nyu.edu/about/news-publications/news/2026/april/does-the-brain-work-like-an-llm-in-predicting-words--new-study-s.html)

Summary

A new study published in Nature Neuroscience reveals that human brains predict words through a more complex process than large language models do. Researchers from NYU, the Ernst Struengmann Institute, and Zhejiang University found that the brain predicts words by considering grammatical structure and grouping words into phrases (constituents), rather than predicting the next word sequentially like LLMs.

Using magnetoencephalography (MEG) to measure brain activity and Cloze tests with Mandarin Chinese and English speakers, the team discovered that next-word prediction in the human brain is balanced by consideration of grammatically organized chunks of words. This stands in stark contrast to LLMs, which are trained to predict the next word with each word in the context contributing to the prediction independently and equally.

The research suggests that while LLMs and human brains both engage in word prediction, the mechanisms are fundamentally different. The brain's approach is more hierarchical and structure-aware, while LLMs use a flatter, sequential prediction model.

  • Findings suggest LLMs may not capture the full grammatical hierarchy the brain uses for language prediction
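The contrast the article draws can be sketched with a toy next-unit predictor. This is an illustration only: the corpus, the hand-annotated chunking, and the `bigram_model` helper are invented here, and real LLMs condition on the full preceding sequence via attention rather than bigram counts.

```python
from collections import Counter, defaultdict

def bigram_model(units):
    """Next-unit prediction: each unit conditions only on the unit before it."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(units, units[1:]):
        counts[prev][nxt] += 1
    return {p: {w: c / sum(cs.values()) for w, c in cs.items()}
            for p, cs in counts.items()}

# LLM-style framing: a flat sequence of words, every word an equal context unit.
words = "the old man sat down . the old man stood up .".split()
flat = bigram_model(words)
print(flat["old"])  # → {'man': 1.0}

# Brain-style framing (per the study's description): words are first grouped
# into grammatical constituents, and prediction operates over those chunks.
# The chunk boundaries here are hand-annotated for illustration.
chunks = ["the old man", "sat down", ".", "the old man", "stood up", "."]
chunked = bigram_model(chunks)
print(chunked["the old man"])  # predictions over whole phrases, not single words
```

The point of the sketch is only the difference in the unit of prediction: the first model's states are individual words, the second's are whole constituents, so the second's predictions are inherently phrase-level.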

Editorial Opinion

This research should prompt the AI research community to reconsider how LLMs are structured for language prediction. If human cognition leverages hierarchical grammatical structure in ways current LLMs don't, there may be untapped architectural improvements that could make language models more efficient or capable. The gap between human and machine language processing remains significant, and that gap might be where future breakthroughs lie.

Large Language Models (LLMs) · Natural Language Processing (NLP) · Machine Learning · Science & Research

More from Academic Research

  • Researchers Propose 'Learning Mechanics' as Unified Theory of Deep Learning (2026-04-24)
  • Chain-of-Thought Reasoning May Be 'Brittle Mirage' Beyond Training Data, Research Finds (2026-04-24)
  • Sophia: New Second-Order Optimizer Achieves 2x Speedup in Language Model Training (2026-04-23)

Suggested

  • Meta Introduces Decoupled DiLoCo: Breaking Synchronization Barriers in Distributed LLM Pre-training (2026-04-25)
  • Anthropic Restricts Opus Model Access to Pro Plans With Extra Usage Fee (2026-04-24)
  • OpenAI Launches GPT-5.5 'Spud': A Foundational Model Designed for AI-Powered Computer Control (2026-04-24)
© 2026 BotBeat