US Military Used Anthropic's Claude AI in Iran Strikes Despite Trump Ban
Key Takeaways
- US military used Anthropic's Claude AI for intelligence, target selection, and simulations during Iran strikes despite Trump's ban issued hours earlier
- The conflict originated from military use of Claude in a January 2026 Venezuela operation, which violated Anthropic's terms prohibiting violent applications
- Pentagon granted six-month transition period acknowledging Claude's deep integration into military operations and difficulty of immediate removal
- OpenAI quickly positioned itself as a replacement, securing an agreement to provide ChatGPT access to the Pentagon's classified networks
Summary
The US military reportedly used Anthropic's Claude AI system during a joint US-Israel operation against Iran on March 1, 2026, despite President Trump ordering a complete ban on the tool just hours before the strikes began. According to reports from the Wall Street Journal and Axios, military commanders used Claude for intelligence analysis, target selection, and battlefield simulations during the attack.
The controversy stems from January 2026, when the US military used Claude during a raid to capture Venezuelan President Nicolás Maduro. Anthropic objected to this usage, citing its terms of service that prohibit the application of Claude for violent purposes, weapons development, or surveillance. This sparked an escalating conflict between Trump, the Pentagon, and the AI company. On the Friday before the Iran strikes, Trump publicly denounced Anthropic as a "Radical Left AI company run by people who have no idea what the real World is all about" and ordered all federal agencies to immediately cease using Claude.
Defense Secretary Pete Hegseth responded by accusing Anthropic of "arrogance and betrayal" and demanding unrestricted military access to all of the company's AI models. However, Hegseth also acknowledged the practical difficulty of rapidly removing Claude from military systems, granting a six-month transition period. OpenAI has since stepped in to fill the gap, with CEO Sam Altman announcing an agreement to provide the Pentagon access to ChatGPT and other OpenAI tools on classified networks.