GUARD Act Would Ban AI Companions for Minors, Require Widespread Identity Verification
Key Takeaways
- The GUARD Act would ban all AI companions for users under 18, going far beyond protections against sexually explicit content alone
- The bill would require identity verification for all AI chatbot users, raising significant privacy and security concerns
- Parents would lose the right to opt their children out, shifting tech governance from families to the federal government
Summary
Sen. Josh Hawley's Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act advanced out of the Senate Judiciary Committee, proposing sweeping restrictions on AI chatbot access for minors and mandatory identity verification for all users. While framed as protecting children from sexually explicit or harmful AI content, the bill defines "AI companions" so broadly that it would bar minors from nearly any chatbot that provides friendly or supportive communication, including homework-help tools, language-learning applications, and therapeutic AI systems. The legislation eliminates parental discretion entirely, replacing family choice with federal mandates. The bill advanced with bipartisan support, though several senators expressed reservations about the identity-verification mandate, its privacy risks, and the bill's overly broad language.
- Over half of US teens currently use chatbots for schoolwork; the legislation could block such educational uses