US Military Continues Using Anthropic's Claude AI in Iran Conflict Despite Government Restrictions and Industry Exodus
Key Takeaways
- Anthropic's Claude AI is actively being used by the US military for targeting decisions in the ongoing conflict with Iran, despite government directives to discontinue use
- The Department of Defense was given six months to wind down Anthropic operations, but a surprise attack on Tehran occurred before the directive could be fully executed
- Major defense contractors including Lockheed Martin are rapidly replacing Claude with competitor AI models, with at least 10 defense-tech startups actively seeking alternatives
Summary
Anthropic finds itself in an unprecedented position as its Claude AI system remains actively deployed in US military operations against Iran, even as the company faces government restrictions and a mass exodus from defense industry clients. President Trump directed civilian agencies to discontinue Anthropic products, and the Department of Defense was given six months to wind down operations. However, a surprise US-Israel attack on Tehran occurred before the directive could be fully executed, leaving Claude AI in active use for targeting decisions in the ongoing conflict.
According to The Washington Post, Anthropic's systems are being used in conjunction with Palantir's Maven system to suggest hundreds of targets, issue precise location coordinates, and prioritize targets according to importance during Pentagon strike planning. Secretary of Defense Pete Hegseth has pledged to designate Anthropic as a supply-chain risk, though no official steps have been taken, leaving no legal barriers to continued system use.
Meanwhile, defense contractors are rapidly replacing Claude with competitor models. Lockheed Martin and other major contractors began swapping out Anthropic's models this week, and a J2 Ventures managing partner reported that 10 portfolio companies have backed off Claude for defense use cases and are actively replacing the service. The result is a paradox: one of the leading AI labs is being pushed out of the defense-tech industry even as its systems remain in use in an active war zone, with a heated legal battle likely if the supply-chain risk designation moves forward.