BotBeat

Anthropic
POLICY & REGULATION · 2026-03-01

US Military Used Anthropic's Claude AI in Iran Strikes Despite Trump Ban

Key Takeaways

  • The US military used Anthropic's Claude AI for intelligence analysis, target selection, and simulations during the Iran strikes, despite Trump's ban issued hours earlier
  • The conflict originated with military use of Claude in a January 2026 Venezuela operation, which violated Anthropic's terms prohibiting violent applications
  • The Pentagon granted a six-month transition period, acknowledging Claude's deep integration into military operations and the difficulty of immediate removal
Source: Hacker News (https://www.theguardian.com/technology/2026/mar/01/claude-anthropic-iran-strikes-us-military)

Summary

The US military reportedly used Anthropic's Claude AI system during a joint US-Israel military operation against Iran on March 1, 2026, despite President Trump ordering a complete ban on the AI tool just hours before the strikes began. According to reports from the Wall Street Journal and Axios, military command utilized Claude for intelligence analysis, target selection, and battlefield simulations during the attack.

The controversy stems from January 2026, when the US military used Claude during a raid to capture Venezuelan President Nicolás Maduro. Anthropic objected to this usage, citing its terms of service that prohibit the application of Claude for violent purposes, weapons development, or surveillance. This sparked an escalating conflict between Trump, the Pentagon, and the AI company. On the Friday before the Iran strikes, Trump publicly denounced Anthropic as a "Radical Left AI company run by people who have no idea what the real World is all about" and ordered all federal agencies to immediately cease using Claude.

Defense Secretary Pete Hegseth responded with accusations of "arrogance and betrayal" against Anthropic while demanding unrestricted access to all of the company's AI models for military purposes. However, Hegseth also acknowledged the practical challenges of rapidly removing Claude from military systems, allowing a six-month transition period. OpenAI has since stepped in to fill the gap, with CEO Sam Altman announcing an agreement to provide the Pentagon access to ChatGPT and other OpenAI tools for classified networks.

  • OpenAI quickly positioned itself as a replacement, securing an agreement to provide ChatGPT access on the Pentagon's classified networks
Tags: Large Language Models (LLMs) · Government & Defense · Regulation & Policy · Ethics & Bias · AI Safety & Alignment
