Pentagon Used Anthropic's Claude AI in Iran Strikes, Part of Broader Military AI Integration
Key Takeaways
- The Pentagon used Anthropic's Claude AI models during military strikes against Iran on February 28, 2026, marking one of the first confirmed uses of commercial LLMs in active combat operations
- The U.S. military has contracts worth up to $200 million with multiple AI companies including Anthropic, OpenAI, Google, and xAI for defense and intelligence applications
- Anthropic developed Claude Gov specifically for government and national security use, integrated into classified systems through partnerships with Palantir and AWS
Summary
The U.S. Department of Defense deployed Anthropic's Claude AI models during military strikes against Iran on February 28, 2026, marking a significant milestone in AI-enabled warfare. The Pentagon has been building a comprehensive AI arsenal through contracts worth up to $200 million awarded in July 2025, involving multiple major AI companies including Anthropic, OpenAI, Google, and xAI. These contracts focus on large language models, agentic workflows, and both classified and unclassified deployments for national security, intelligence gathering, targeting, and war planning.
Anthropic's engagement with U.S. defense and intelligence agencies intensified in late 2024, when the company partnered with Palantir and Amazon Web Services to supply Claude to classified government systems. In June 2025, the company launched Claude Gov, a specialized version tailored for government and national security workflows. Tensions emerged, however, when the Trump administration designated Anthropic as a "supply chain risk" on February 27, 2026, just days before the Iran strikes, putting the contract on hold. Military officials estimated it would take at least six months to transition away from Anthropic's AI tools.
The use of Claude AI in active military operations represents one of the first widely reported deployments of commercial large language models in combat scenarios. While the Pentagon has multiple AI providers, Anthropic's Claude was uniquely integrated into classified networks through secure partnerships until the recent dispute. The models were reportedly used for intelligence assessment, targeting analysis, and operational simulations during the Iran strikes, illustrating how frontier AI systems are now embedded in modern warfare decision-making.


