BotBeat
POLICY & REGULATION · Anthropic · 2026-03-25

Washington Passes Landmark AI Laws Requiring Misinformation Disclosures and Protections for Minors

Key Takeaways

  • Washington requires AI-generated or substantially modified content to include traceable watermarks or metadata to combat misinformation
  • AI chatbot companions must disclose their non-human nature at the start of a conversation and every three hours thereafter for adults, every hour for minors under 18
  • Chatbots are prohibited from engaging in sexually explicit conversations with minors, using manipulative tactics, or encouraging self-harm; companies must implement mental health support protocols
Source: Hacker News — https://www.kuow.org/stories/washington-passes-new-ai-laws-to-crack-down-on-misinformation-protect-minors

Summary

Washington State has enacted two significant pieces of legislation regulating artificial intelligence, marking the latest state-level effort to address AI-related harms. House Bill 1170 mandates that large AI companies with over 1 million monthly subscribers include watermarks or metadata on substantially AI-modified content to combat misinformation, addressing public concerns about distinguishing genuine from synthetic media. House Bill 2225 establishes guardrails for AI chatbot companions, requiring disclosure that users are interacting with non-human entities at the start of conversations and periodically during ongoing chats. The legislation includes heightened protections for minors, including hourly disclosures for users under 18, prohibitions on sexually explicit conversations with children, bans on manipulative engagement techniques, and requirements for mental health protocols when chatbots encounter references to self-harm or suicide.

  • The regulations apply to major AI companies like OpenAI and Anthropic but exclude narrowly tailored customer service chatbots
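The periodic-disclosure requirement described above amounts to a simple cadence rule: a reminder every hour for minors, every three hours for adults. A minimal sketch of how a chatbot operator might track that obligation is below; the function and variable names are hypothetical illustrations, not terms from the bill text, and the intervals simply mirror the article's description of House Bill 2225.

```python
from datetime import datetime, timedelta

# Intervals as described in the article's summary of HB 2225
# (hypothetical helper -- not an official compliance implementation).
ADULT_INTERVAL = timedelta(hours=3)
MINOR_INTERVAL = timedelta(hours=1)


def next_disclosure(last_disclosure: datetime, user_age: int) -> datetime:
    """Return the time by which the chatbot must next remind the user
    that they are interacting with a non-human entity."""
    interval = MINOR_INTERVAL if user_age < 18 else ADULT_INTERVAL
    return last_disclosure + interval


def disclosure_due(last_disclosure: datetime, user_age: int,
                   now: datetime) -> bool:
    """True if a fresh non-human disclosure is required at `now`."""
    return now >= next_disclosure(last_disclosure, user_age)
```

In practice an operator would also need to handle the initial disclosure at conversation start and cases where a user's age is unknown; the sketch only captures the recurring-interval logic.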

Editorial Opinion

Washington's dual approach to AI regulation represents a sensible middle ground between innovation and consumer protection. By targeting specific harms—misinformation disclosures and exploitative chatbot interactions with minors—the state avoids blanket restrictions while addressing documented risks from AI systems. The emphasis on minor protections is particularly timely given recent high-profile cases linking AI companions to teenage mental health crises. However, the effectiveness of these laws will depend heavily on enforcement mechanisms and whether other states adopt similar standards, or whether fragmented state-level regulation becomes a compliance burden for national AI companies.

Tags: Regulation & Policy · Ethics & Bias · AI Safety & Alignment · Jobs & Workforce Impact · Misinformation & Deepfakes

