BotBeat

POLICY & REGULATION · Anthropic · 2026-03-02

Pentagon Continued Using Anthropic's Claude in Iran Military Operations Despite Trump Administration Ban

Key Takeaways

  • The Pentagon used Anthropic's Claude AI for target selection and intelligence operations in the Iran strikes, hours after Trump banned federal use of the technology
  • Military officials consider Claude superior to competing models, including OpenAI's offerings, despite the political ban
  • The controversy boosted Claude to the #1 most downloaded app on Apple's App Store following the ban announcement
Source: Hacker News (https://sfist.com/2026/03/02/trump-administration-still-used-sfs-anthropic-in-iran-strikes-mere-hours-after-trump-banned-anthropic/)

Summary

In a striking contradiction, the U.S. military continued to use Anthropic's Claude AI system during weekend attacks on Iran, just hours after President Trump issued an executive order banning all federal government use of the San Francisco-based company's technology. The ban came after Anthropic raised concerns that its AI tools could be used for mass surveillance and AI-based mass murder, prompting Trump to label the company as "Radical Left" and staffed by people "who have no idea what the real World is all about."

According to reports from the Wall Street Journal and Axios, the Pentagon utilized Claude for critical military functions including target selection, battlefield simulations, and intelligence assessments during the Iran strikes. Defense officials reportedly view Claude as superior to competing AI models, including those from OpenAI, which quickly offered to fill the contract void left by Anthropic's ban. The incident highlights the Pentagon's dependence on Anthropic's technology despite political tensions.

The controversy had an unexpected business impact for Anthropic: following the Trump administration's ban, Claude became the most downloaded app in the Apple App Store over the weekend, a position it held through Monday morning. The sequence of events, from the Friday afternoon ban, to OpenAI's evening offer to replace Anthropic, to the Iran attacks that same night, underscores the complex intersection of AI technology, national security, and corporate ethics in the current geopolitical landscape.

  • The incident stemmed from Anthropic's ethical concerns about its AI being used for mass surveillance and military applications
Tags: Large Language Models (LLMs) · Government & Defense · Regulation & Policy · Ethics & Bias · AI Safety & Alignment

