BotBeat

Curio
RESEARCH · 2026-03-13

Cambridge Researchers Call for Tighter Regulation of AI Toys for Young Children

Key Takeaways

  • AI toys frequently misread children's emotions and respond inappropriately, potentially leaving young users without emotional support from either the toy or an adult caregiver
  • Current AI toys struggle with fundamental aspects of early childhood development, including pretend play and imaginative thinking, raising concerns about developmental impacts
  • Researchers are calling for regulatory frameworks and safety standards specifically designed for conversational AI toys, as public trust in tech companies' self-regulation remains low
Source: Hacker News (https://www.theguardian.com/technology/2026/mar/13/ai-toys-young-children-tigher-regulations-reseachers)

Summary

A University of Cambridge study has found that AI-powered toys marketed to young children, such as Gabbo (made by US company Curio), struggle significantly with social interaction, emotional understanding, and imaginative play. The research, which tested toys with children ages 3-5 and surveyed early years practitioners, revealed that these toys frequently misread emotions, respond inappropriately to children's expressions of affection, and fail to understand pretend play scenarios. Developmental psychologists behind the study documented instances where children expressed love or sadness to AI toys only to receive awkward, misaligned responses that could leave them without emotional support. The researchers are calling for mandatory regulation and new safety kitemarks to ensure psychological safety, specifically limiting AI toys' ability to affirm friendship and engage in sensitive relational interactions with young children. Beyond safety concerns, practitioners and parents expressed worries that reliance on AI toys could erode children's imaginative abilities and capacity for pretend play, while also raising data privacy questions about where conversations with the toys are stored.

  • Data privacy and storage of children's conversations with AI toys remain unclear to parents and early years practitioners, creating additional safety and consent concerns

Editorial Opinion

The Cambridge findings highlight a critical gap between the rapid commercialization of AI toys for young children and the actual capabilities needed for healthy child development. While interactive toys have educational potential, the study demonstrates that current AI systems are fundamentally unprepared for the nuanced social and emotional demands of early childhood interaction. The researchers' call for regulation and safety standards is justified: voluntary compliance from tech companies has repeatedly proven insufficient to protect children's wellbeing and data.

Education · Regulation & Policy · Ethics & Bias · AI Safety & Alignment · Privacy & Data · Jobs & Workforce Impact
