OpenAI Backs Illinois Bill to Shield AI Companies From Liability for Critical Harms
Key Takeaways
- SB 3444 would shield frontier AI developers from liability for critical harms caused by their models if they acted without intent or recklessness and published safety reports
- OpenAI's support represents an escalation in its legislative strategy, advocating for broader liability protections than in previous bills
- The company frames liability exemptions as necessary to avoid fragmented state regulations and preserve U.S. AI leadership in global competition
Summary
OpenAI has thrown its support behind Illinois state bill SB 3444, which would exempt AI companies from liability when their models cause severe societal harms—such as deaths or injuries affecting 100+ people or $1 billion+ in property damage—as long as the companies did not intentionally or recklessly cause the incident and published safety reports. The bill defines "frontier models" as those trained with over $100 million in computational costs, potentially applying to major AI labs including OpenAI, Google, Anthropic, Meta, and xAI. OpenAI spokesperson Jamie Radice stated the company supports the measure because it focuses on reducing risks from advanced AI systems while avoiding a "patchwork of state-by-state rules" and moving toward national standards.
This marks a notable shift in OpenAI's legislative strategy, a more aggressive liability-shielding approach than the company has previously backed. Caitlin Niedermeyer of OpenAI's Global Affairs team testified in favor of the bill while also advocating for federal AI regulation frameworks, arguing that inconsistent state requirements could "create friction without meaningfully improving safety." However, policy experts and public opinion surveys suggest the bill faces significant hurdles: a poll found 90% of Illinois residents oppose exempting AI companies from liability, and observers note Illinois has a reputation for aggressive technology regulation.
Editorial Opinion
OpenAI's push for liability exemptions raises troubling questions about accountability in an industry wielding increasingly powerful technology. While the company justifies the measure as necessary to avoid regulatory fragmentation, the overwhelming public opposition and the extreme thresholds required to trigger the exemption suggest OpenAI may be prioritizing competitive advantage over genuine safety commitments. The framing of liability protection as essential to U.S. AI leadership is particularly concerning—great nations have historically built trust through accountability, not by allowing powerful actors to operate with reduced responsibility for their harms.