BotBeat

UCLA Health / University of California, Los Angeles
RESEARCH · 2026-04-05

UCLA Study Identifies 'Body Gap' in AI Models as Critical Safety and Performance Issue

Key Takeaways

  • UCLA researchers identify 'internal embodiment'—awareness of one's own bodily states—as missing from current AI systems, distinguishing it from external embodiment (physical interaction with the world), which receives more research focus
  • AI models fail basic perceptual tests that humans, including newborns, pass easily, because they lack the lifetime of bodily experience that anchors human perception and understanding
  • The absence of embodiment has measurable consequences for AI performance and safety, particularly in multimodal systems like ChatGPT and Gemini that can describe experiences they do not actually understand
Source: Hacker News (https://www.uclahealth.org/news/release/ai-can-describe-human-experiences-lacks-experience-actual-2)

Summary

A new study published in Neuron by UCLA Health researchers argues that current advanced AI systems, including multimodal large language models like ChatGPT and Google Gemini, lack a fundamental capability that humans possess: embodiment. The research distinguishes between two types of embodiment—external (interaction with the physical world) and internal (awareness of one's own bodily states like fatigue, uncertainty, or physiological need)—and proposes that AI systems are missing both, particularly the latter.

According to the study led by postdoctoral fellow Akila Kadambi, this "body gap" has measurable consequences for AI performance and safety. The researchers demonstrated that leading AI models failed simple perceptual tests that even newborns can pass, such as recognizing point-light displays as human figures in motion. When such images were rotated just 20 degrees, even the best-performing models broke down—a failure the authors attribute to AI systems lacking the lifetime of bodily experience that anchors human perception.
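The rotation manipulation described above can be illustrated with a small sketch. The joint coordinates and the rotation-about-center choice below are hypothetical, for illustration only; the study's actual stimuli and evaluation protocol are not reproduced here. The idea is simply that each dot of a point-light figure is rotated 20 degrees, a transformation that leaves the figure trivially recognizable to humans.

```python
import math

def rotate_points(points, degrees, center=(0.0, 0.0)):
    """Rotate 2D point-light coordinates about a center point."""
    theta = math.radians(degrees)
    cx, cy = center
    rotated = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        rotated.append((
            cx + dx * math.cos(theta) - dy * math.sin(theta),
            cy + dx * math.sin(theta) + dy * math.cos(theta),
        ))
    return rotated

# Hypothetical joint positions for one frame of a point-light walker
# (head, neck, shoulders, hips, feet) in arbitrary units.
frame = [(0.0, 1.7), (0.0, 1.4), (-0.2, 1.2), (0.2, 1.2),
         (-0.1, 0.9), (0.1, 0.9), (-0.15, 0.0), (0.25, 0.0)]

# The 20-degree tilt that reportedly broke the best-performing models.
tilted = rotate_points(frame, 20.0)
```

To a human observer, `tilted` is still obviously a walking figure; the study's point is that model performance collapsed under this same minor perturbation.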

The implications extend beyond academic curiosity. The authors argue that without internal embodiment—a built-in safety system analogous to how humans register uncertainty, depletion, or conflict with survival needs—AI systems can sound experiential while having no genuine understanding of experience. This gap becomes particularly concerning when these systems are deployed in consequential real-world settings where such understanding could be critical for safety and trustworthiness.

The authors conclude that building functional analogies of internal embodiment into AI represents a critical and underexplored frontier for developing safer and more trustworthy AI systems.

Editorial Opinion

This research highlights an important philosophical and practical distinction that has been largely overlooked in AI development. While the field has focused heavily on external embodiment and multimodal capabilities, the absence of internal embodiment—the ability to monitor and understand one's own limitations, uncertainties, and constraints—represents a genuine safety gap. The irony that AI systems can convincingly describe human experiences they cannot understand is troubling, especially as these systems take on increasingly consequential roles in medicine, law, and other high-stakes domains.

Natural Language Processing (NLP) · Generative AI · Multimodal AI · Deep Learning · AI Safety & Alignment
