BotBeat

OpenAI
POLICY & REGULATION · OpenAI · 2026-03-03

OpenAI's Altman Admits Defense Deal 'Looked Opportunistic and Sloppy' Amid Backlash

Key Takeaways

  • OpenAI CEO Sam Altman admitted the company rushed its Defense Department deal, which was announced hours after a federal ban on Anthropic's AI tools
  • The revised contract explicitly prohibits use of OpenAI's systems for domestic surveillance of U.S. persons and bars intelligence agencies from using the tools
  • The deal came after Anthropic's negotiations with the Pentagon collapsed over ethical safeguards, leading to Anthropic being designated a supply-chain threat
Source: Hacker News — https://www.cnbc.com/2026/03/03/openai-sam-altman-pentagon-deal-amended-surveillance-limits.html

Summary

OpenAI CEO Sam Altman publicly acknowledged that the company moved too hastily on its recent contract with the U.S. Department of Defense, admitting the timing "looked opportunistic and sloppy." The deal was announced on a Friday in March 2026, just hours after President Trump directed federal agencies to stop using competitor Anthropic's AI tools and shortly before U.S. military strikes on Iran. In an internal memo shared on X, Altman outlined significant revisions to the agreement, including explicit language prohibiting the use of OpenAI's systems for domestic surveillance of U.S. persons and confirming that intelligence agencies such as the NSA would not use the company's tools.

The controversy comes amid a broader dispute between AI companies and the Defense Department over the ethical boundaries of military AI deployment. Anthropic, which had previously been the first AI lab to deploy models on the Pentagon's classified network, failed to reach an agreement with the government after seeking guarantees against uses like domestic surveillance and autonomous weapons without human control. Following the breakdown of those talks, Defense Secretary Pete Hegseth designated Anthropic a supply-chain threat, creating an opening that OpenAI quickly filled.

Altman explained that OpenAI was "genuinely trying to de-escalate things and avoid a much worse outcome" but acknowledged the execution was flawed. The revised contract now includes specific technical safeguards and principle-based limitations, with Altman noting that "there are many things the technology just isn't ready for, and many areas we don't yet understand the tradeoffs required for safety." The incident highlights the complex intersection of commercial AI development, national security interests, and ethical concerns as leading AI companies navigate government partnerships.

  • Altman acknowledged the timing appeared opportunistic but said OpenAI was attempting to prevent a worse outcome while sharing similar ethical red lines as Anthropic
Government & Defense · Regulation & Policy · Ethics & Bias · AI Safety & Alignment · Privacy & Data

More from OpenAI

OpenAI
INDUSTRY REPORT

AI Chatbots Are Homogenizing College Classroom Discussions, Yale Students Report

2026-04-05
OpenAI
FUNDING & BUSINESS

OpenAI Announces Executive Reshuffle: COO Lightcap Moves to Special Projects, Simo Takes Medical Leave

2026-04-04
OpenAI
PARTNERSHIP

OpenAI Acquires TBPN Podcast to Control AI Narrative and Reach Influential Tech Audience

2026-04-04

Suggested

Oracle
POLICY & REGULATION

AI Agents Promise to 'Run the Business'—But Who's Liable When Things Go Wrong?

2026-04-05
Anthropic
POLICY & REGULATION

Anthropic Explores AI's Role in Autonomous Weapons Policy with Pentagon Discussion

2026-04-05
Perplexity
POLICY & REGULATION

Perplexity's 'Incognito Mode' Called a 'Sham' in Class Action Lawsuit Over Data Sharing with Google and Meta

2026-04-05
© 2026 BotBeat