Acutus News Site Exposed as AI-Generated Content Operation Funded by OpenAI Super PAC
Key Takeaways
- Acutus uses automated AI to generate news articles without human journalists, deploying synthetic reporters to solicit quotes from industry critics
- 97% of articles were fully or partially AI-generated, indicating a coordinated disinformation campaign operating at scale
- Alleged funding by OpenAI's super PAC raises serious questions about tech companies using corporate resources to secretly influence policy debates
Summary
An investigation by Model Republic has exposed Acutus, an anonymously operated news site launched in December 2025, as a sophisticated AI-generated content operation. The publication, which claims to offer 'expert-sourced journalism,' actually uses a behind-the-scenes interface to automatically generate stories from prompts labeled 'AI Background Context' and 'Question Prompts.' Analysis using AI content detectors found that 69% of the site's 94 articles were fully AI-generated and another 28% were partially AI-generated, with only three classified as human-authored.
The operation was exposed when an AI-generated fake journalist named Michael Chen attempted to interview Nathan Calvin, vice president and general counsel of the advocacy group Encode. Evidence suggests OpenAI's super PAC is funding the operation through the consulting firm Targeted Victory, indicating that a major AI company may be using generative AI to create coordinated disinformation campaigns designed to influence policy and public opinion on AI regulation.
AI content detectors can identify synthetic journalism, but transparency requirements and regulatory oversight remain critical to preventing its misuse.
Editorial Opinion
The Acutus case reveals a troubling new frontier in AI misuse: sophisticated, automated disinformation campaigns designed to manipulate policy discourse while evading detection. The apparent involvement of a major AI company's funding infrastructure suggests that bad actors are weaponizing generative AI to systematically undermine media integrity and manufacture artificial consensus. This should accelerate regulatory efforts to require AI disclosure in journalism, establish watermarking standards for synthetic content, and hold tech companies accountable for how they deploy their resources and capital.