BotBeat

Character.AI
POLICY & REGULATION · Character.AI · 2026-05-04

Senate Judiciary Committee Advances GUARD Act to Regulate AI Chatbots and Protect Minors

Key Takeaways

  • The GUARD Act mandates age verification and restricts minors' access to AI companion chatbots that simulate interpersonal or therapeutic relationships
  • Criminal penalties of up to $100,000 apply to designers and deployers of chatbots that solicit or encourage minors to self-harm or engage in sexual conduct
  • Unanimous committee passage reflects bipartisan concern about AI chatbot safety and potential grooming risks, though implementation and constitutional questions remain unresolved
Source: Hacker News — https://letsdatascience.com/news/senate-advances-guard-act-targeting-ai-chatbots-and-minors-4e69679f

Summary

On April 30, 2026, the Senate Judiciary Committee unanimously advanced the GUARD Act (S.3062), landmark legislation targeting AI chatbots and their potential harms to minors. Introduced in October 2025 by Senators Josh Hawley and Richard Blumenthal, the bill would mandate age verification for access to AI companions and prohibit minors from using chatbots designed to simulate friendship or therapeutic relationships. The committee heard testimony from parents alleging that AI chatbots from OpenAI and Character.AI had groomed, manipulated, or encouraged their children to self-harm.

The GUARD Act would criminalize the design and deployment of chatbots that solicit or induce minors to engage in sexual conduct or self-harm, with penalties including fines of up to $100,000. The legislation advanced with unanimous bipartisan support, signaling broad congressional concern about AI companion safety. Responses have nonetheless split along predictable lines: advocacy groups focused on child protection have applauded the measure, while civil-liberties organizations, including R Street, have raised First Amendment and implementation concerns. The bill now moves toward a Senate floor vote, with potential House consideration to follow.

  • Specific complaints about OpenAI and Character.AI chatbots prompted the legislation, potentially requiring industry-wide changes to compliance and content safety practices
Tags: Generative AI · Regulation & Policy · Ethics & Bias · AI Safety & Alignment · Privacy & Data

