BotBeat
PARTNERSHIP · Anthropic · 2026-03-05

Pentagon Used Anthropic's Claude AI in Iran Strikes, Part of Broader Military AI Integration

Key Takeaways

  • The Pentagon used Anthropic's Claude AI models during military strikes against Iran on February 28, 2026, marking one of the first confirmed uses of commercial LLMs in active combat operations
  • The U.S. military has contracts worth up to $200 million with multiple AI companies including Anthropic, OpenAI, Google, and xAI for defense and intelligence applications
  • Anthropic developed Claude Gov specifically for government and national security use, integrated into classified systems through partnerships with Palantir and AWS
Source: Hacker News, via https://www.wionews.com/world/ai-in-warfare-is-here-pentagon-used-anthropic-s-claude-ai-in-iran-strikes-but-it-has-many-llms-and-tools-from-other-firms-what-we-know-1772372063341

Summary

The U.S. Department of Defense deployed Anthropic's Claude AI models during military strikes against Iran on February 28, 2026, marking a significant milestone in AI-enabled warfare. The Pentagon has been building a comprehensive AI arsenal through contracts worth up to $200 million awarded in July 2025, involving multiple major AI companies including Anthropic, OpenAI, Google, and xAI. These contracts focus on large language models, agentic workflows, and both classified and unclassified deployments for national security, intelligence gathering, targeting, and war planning.

Anthropic's engagement with U.S. defense and intelligence agencies intensified in late 2024, when the company partnered with Palantir and Amazon Web Services to supply Claude to classified government systems. In June 2025, Anthropic launched Claude Gov, a specialized version tailored for government and national security workflows. However, tensions emerged when the Trump administration designated Anthropic as a "supply chain risk" on February 27, 2026, just days before the Iran strikes, putting the contract on hold. Military officials estimated it would take at least six months to transition away from Anthropic's AI tools.

The use of Claude AI in active military operations represents the first widely reported deployment of commercial large language models in combat scenarios. While the Pentagon has multiple AI providers, Anthropic's Claude was uniquely integrated into classified networks through secure partnerships until the recent dispute. The models were reportedly used for intelligence assessment, targeting analysis, and operational simulations during the Iran strikes, demonstrating how frontier AI systems are now embedded in modern warfare decision-making processes.

Tags: Large Language Models (LLMs) · Government & Defense · Regulation & Policy · Ethics & Bias · AI Safety & Alignment


© 2026 BotBeat