OpenAI's Altman Admits Defense Deal 'Looked Opportunistic and Sloppy' Amid Backlash
Key Takeaways
- OpenAI CEO Sam Altman admitted the company rushed its Defense Department deal, which was announced hours after a federal ban on Anthropic's AI tools
- The revised contract explicitly prohibits use of OpenAI's systems for domestic surveillance of U.S. persons and bars intelligence agencies from using the tools
- The deal came after Anthropic's negotiations with the Pentagon collapsed over ethical safeguards, leading to Anthropic being designated a supply-chain threat
Summary
OpenAI CEO Sam Altman publicly acknowledged that the company moved too hastily on its recent contract with the U.S. Department of Defense, admitting the timing "looked opportunistic and sloppy." The deal was announced on a Friday in March 2026, just hours after President Trump directed federal agencies to stop using competitor Anthropic's AI tools and shortly before U.S. military strikes on Iran. In an internal memo shared on X, Altman outlined significant revisions to the agreement, including explicit language prohibiting the use of OpenAI's systems for domestic surveillance of U.S. persons and confirmation that intelligence agencies such as the NSA would not use the company's tools.
The controversy comes amid a broader dispute between AI companies and the Defense Department over the ethical boundaries of military AI deployment. Anthropic, which had previously been the first AI lab to deploy models on the Pentagon's classified network, failed to reach an agreement with the government after seeking guarantees against uses like domestic surveillance and autonomous weapons without human control. Following the breakdown of those talks, Defense Secretary Pete Hegseth designated Anthropic a supply-chain threat, creating an opening that OpenAI quickly filled.
Altman explained that OpenAI was "genuinely trying to de-escalate things and avoid a much worse outcome" but acknowledged the execution was flawed. The revised contract now includes specific technical safeguards and principle-based limitations, with Altman noting that "there are many things the technology just isn't ready for, and many areas we don't yet understand the tradeoffs required for safety." The incident highlights the complex intersection of commercial AI development, national security interests, and ethical concerns as leading AI companies navigate government partnerships.
- Altman acknowledged the timing appeared opportunistic but said OpenAI was attempting to prevent a worse outcome, noting the company shares ethical red lines similar to Anthropic's
