A 90-Year-Old Regulatory Model Could Solve AI's Safety Race-to-the-Bottom
Key Takeaways
- Anthropic's decision to abandon pre-deployment safety guarantees exemplifies how competitive pressure creates a race to the bottom on AI safety, motivating the need for coordinated industry standards
- A federally supervised self-regulatory organization (SRO) model, proven over 90 years in financial regulation, could solve the collective action problem that prevents individual AI labs from maintaining high safety standards
- The existing Frontier Model Forum could serve as the foundation for an AI SRO if granted statutory authority, mandatory membership, and government oversight, with regulatory flexibility to keep pace with rapid AI advances
Summary
A new analysis proposes adapting a model from financial regulation, the federally supervised self-regulatory organization (SRO), to AI safety and the competitive pressures pushing companies to cut corners. The article notes that Anthropic recently abandoned its industry-leading safety guarantees for new models, citing competitive pressure from rivals moving faster without stringent safety commitments. OpenAI has similarly reduced its pre-deployment safety testing time. Both moves reflect a classic collective action problem: no individual company can afford to prioritize safety unless its competitors do, even though the industry as a whole would benefit from coordinated standards.
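The collective action problem the article describes has the structure of a two-player prisoner's dilemma, which the sketch below makes concrete. The payoff values are hypothetical illustrations, not figures from the article, chosen only so that moving fast dominates regardless of what a rival does while mutual caution still beats a mutual race.

```python
# A toy payoff matrix for the "collective action problem" described above.
# The numbers are hypothetical, chosen only so that cutting safety corners
# ("fast") is each lab's dominant strategy, while mutual caution ("safe")
# beats a mutual race to the bottom.

# (own payoff, rival payoff), indexed by (own strategy, rival strategy)
PAYOFFS = {
    ("safe", "safe"): (3, 3),  # coordinated standards: best joint outcome
    ("safe", "fast"): (0, 5),  # the cautious lab loses the market
    ("fast", "safe"): (5, 0),  # the corner-cutter wins the market
    ("fast", "fast"): (1, 1),  # race to the bottom: worst joint outcome
}

def best_response(rival_strategy: str) -> str:
    """Return the strategy that maximizes a lab's own payoff,
    given what its rival does."""
    return max(("safe", "fast"),
               key=lambda own: PAYOFFS[(own, rival_strategy)][0])

for rival in ("safe", "fast"):
    print(f"rival plays {rival!r} -> best response: {best_response(rival)!r}")
# Prints 'fast' in both cases: corner-cutting dominates, even though
# ('safe', 'safe') leaves both labs better off than ('fast', 'fast').
```

In this framing, an SRO with mandatory membership works by taking the dominant "fast" strategy off the table, so the mutually preferred ("safe", "safe") outcome becomes enforceable rather than merely aspirational.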
The proposed SRO model, used successfully in financial regulation for nearly a century through organizations like FINRA, would enable the AI industry to govern itself through binding rules subject to government approval and oversight. Part of the infrastructure already exists in the Frontier Model Forum, which coordinates risk management among major frontier labs (Anthropic, OpenAI, Google, Meta, and others, though Elon Musk's xAI remains absent). However, the Forum currently lacks the defining features of a true SRO: statutory authority, mandatory membership, and government oversight.
Any effective AI regulatory framework must address four critical challenges, and the article argues that an SRO model could meet each one:
- Competition creates a race to the bottom on safety; an SRO provides legal cover for coordinated safety standards.
- Regulators face information asymmetry regarding proprietary training data and techniques; an SRO lets industry experts assess that proprietary information.
- AI evolves faster than slower legal frameworks can follow; an SRO's flexible rule-making can keep pace with AI advances.
- Irreversible harms demand ex ante intervention; an SRO can require pre-deployment evaluations before dangerous models reach the public.
Editorial Opinion
The SRO model offers a compelling and often-overlooked institutional middle path that neither pure self-regulation nor heavy-handed government mandates can provide. By giving legal cover to coordinated safety standards, an SRO could break the competitive dynamics that force companies to cut safety corners, a pressure that even well-intentioned executives like those at Anthropic feel unable to resist alone. The proposal's success, however, hinges on two critical unknowns: whether AI companies will genuinely commit to binding rules, and whether government oversight can remain flexible enough to keep pace with rapid technological change without becoming obsolete.


