Washington Passes Landmark AI Laws Requiring Misinformation Disclosures and Protections for Minors
Key Takeaways
- Washington requires AI-generated or substantially modified content to include traceable watermarks or metadata to combat misinformation
- AI chatbot companions must disclose their non-human nature at the start of a conversation and periodically thereafter: every three hours for adults and every hour for minors
- Chatbots are prohibited from engaging in sexually explicit conversations with minors, using manipulative tactics, or encouraging self-harm; companies must implement mental health support protocols
Summary
Washington State has enacted two significant pieces of legislation regulating artificial intelligence, marking the latest state-level effort to address AI-related harms. House Bill 1170 mandates that large AI companies with over 1 million monthly subscribers include watermarks or metadata on substantially AI-modified content to combat misinformation, addressing public concerns about distinguishing genuine from synthetic media. House Bill 2225 establishes guardrails for AI chatbot companions, requiring disclosure that users are interacting with non-human entities at the start of conversations and periodically during ongoing chats. The legislation includes heightened protections for minors, including hourly disclosures for users under 18, prohibitions on sexually explicit conversations with children, bans on manipulative engagement techniques, and requirements for mental health protocols when chatbots encounter references to self-harm or suicide.
- The regulations apply to major AI companies such as OpenAI and Anthropic but exclude narrowly tailored customer service chatbots
Editorial Opinion
Washington's dual approach to AI regulation represents a sensible middle ground between innovation and consumer protection. By targeting specific harms—misinformation disclosures and exploitative chatbot interactions with minors—the state avoids blanket restrictions while addressing documented risks from AI systems. The emphasis on minor protections is particularly timely given recent high-profile cases linking AI companions to teenage mental health crises. However, the effectiveness of these laws will depend heavily on enforcement mechanisms and whether other states adopt similar standards, or whether fragmented state-level regulation becomes a compliance burden for national AI companies.