U.S. Grapples With 1,200+ AI Bills and No Shared Standard for Evaluating Regulation
Key Takeaways
- U.S. states introduced 1,200+ AI bills in 2025 with no agreed-upon framework for evaluating their effectiveness or necessity
- Anthropic's disclosure of Mythos Preview, a model featuring autonomous cyber capabilities, prompted White House consideration of FDA-like pre-release vetting for advanced AI models
- Federal and state policies are moving in conflicting directions: Trump's executive order challenges state laws, while Congress excluded preemption language from the defense authorization
Summary
The United States finds itself at a regulatory inflection point as state legislatures and federal policymakers rush to create AI governance frameworks without agreement on how to test whether those regulations are effective. In 2025, state legislatures introduced over 1,200 AI-related bills and enacted nearly 150, with the pace accelerating. This legislative surge reflects divergent theories of AI policy: California's SB 53 focuses on developer transparency, New York's RAISE Act mandates stricter incident reporting, and Texas's TRAIGA prohibits specific misuses. Federal policy, meanwhile, has lurched in conflicting directions, with President Trump's December executive order challenging state AI laws while the 2026 National Defense Authorization Act excludes preemption language entirely.
The policy gridlock intensified when Anthropic disclosed Mythos Preview, a frontier AI model with autonomous cyber capabilities that was deliberately withheld from public release. The disclosure revealed risks policymakers were unprepared to address, prompting White House consideration of an FDA-like pre-release vetting system for advanced AI models. IBM Chairman and CEO Arvind Krishna framed the central challenge as finding the "Goldilocks middle" between overregulation that stifles innovation and underregulation that creates safety gaps, a balance that remains elusive at the federal, state, and international levels.
The fundamental problem underlying the regulatory confusion is that policymakers at all levels lack a shared test for determining whether proposed legislation constitutes good policy. Many state bills attempt to regulate "AI" as a monolithic category despite existing consumer protection, civil rights, and data privacy laws that already address many of the relevant concerns. Colorado and Utah, which passed omnibus AI statutes in 2024, are now retreating through sunset clauses and reenactments, a signal of drafters' uncertainty. Meanwhile, the U.S. patchwork unfolds against a sharper international backdrop, with the EU implementing its AI Act and China advancing frontier AI capabilities under state direction, raising the cost of incoherent American policy in a technological competition that increasingly intersects with national security.
- Policymakers lack clarity on which specific regulations address which gaps, causing major states like Colorado and Utah to retreat from or revise omnibus AI statutes
- The U.S. regulatory confusion occurs amid international competition, with the EU implementing its AI Act and China advancing state-directed frontier AI capabilities
Editorial Opinion
The U.S. is repeating a familiar mistake: rushing to regulate an emerging technology without first establishing clear principles or metrics for what good regulation looks like. The sheer volume of proposed bills, over 1,200 in a single year, suggests legislative motion rather than legislative clarity. Anthropic's Mythos Preview disclosure helpfully exposed the real risks that policy should address (autonomous capabilities in frontier models), but the response remains scattered across incompatible state frameworks and contradictory federal signals. Until policymakers establish a shared test for regulation, a framework that clarifies which actors face which requirements to address which specific harms, the U.S. will continue to produce regulatory theater rather than effective governance.


