Study Reveals Extensive Regulatory Capture by Major AI Companies
Key Takeaways
- Researchers identified 27 distinct mechanisms of regulatory capture across 5 categories, documenting 249 instances in 100 analyzed news articles
- Discourse & Epistemic Influence (narrative framing) and Elision of Law are the most prevalent capture categories
- Industry narratives like "Regulation stifles innovation," "Red tape," and "National Interest" are frequently invoked to rationalize capture
Summary
A study published on arXiv has systematically documented how major AI companies have captured regulatory processes across multiple domains and jurisdictions. Researchers developed a taxonomy of 27 regulatory capture mechanisms and analyzed 100 news articles, identifying 249 instances of these mechanisms in practice. The most prevalent categories are Discourse & Epistemic Influence (narrative framing in regulatory discussions) and Elision of Law (violations or contentious interpretations of antitrust, privacy, copyright, and labor law).
The analysis reveals that industry narratives such as "Regulation stifles innovation," "Red tape," and "National Interest" are frequently deployed to rationalize regulatory capture. The authors emphasize the breadth and severity of this capture, calling it an emergency requiring urgent attention from policymakers and the public. They propose counter-tactics and draw lessons from other regulated industries that have successfully resisted similar capture.
Editorial Opinion
This research provides empirical grounding for what has long been observed anecdotally: major AI companies have systematically captured regulatory processes. The identification of 27 distinct capture mechanisms, from board interlocks to strategic narrative framing, and the documentation of how these mechanisms operate in practice should be a wake-up call for policymakers. The authors' designation of this as an emergency is appropriate given the stakes of AI governance and the pervasive influence of Big AI over policy outcomes.