Pentagon Issues Ultimatum to Anthropic: Remove AI Military Use Restrictions by Friday or Face Forced Compliance
Key Takeaways
- Pentagon issued a Friday ultimatum demanding Anthropic remove all restrictions on military AI use, threatening enforcement under the Defense Production Act
- Anthropic agreed in December to allow AI use for missile and cyber defense, but the Pentagon wants broader access without company-imposed guardrails
- The Defense Department threatened to label Anthropic a "supply chain risk" and ban all defense contracts if the company does not comply
Summary
Defense Secretary Pete Hegseth has given Anthropic CEO Dario Amodei until Friday to allow the company's AI systems to be used for all legal military purposes, or face potential government intervention under the Defense Production Act. The ultimatum escalates weeks of tension between the Pentagon and the AI safety-focused company over guardrails that restrict military applications of its technology.
According to sources, Anthropic had already agreed during December contract negotiations to allow its AI systems to be used for missile and cyber defense. However, Pentagon officials remain unsatisfied with the company's insistence on maintaining restrictions against mass domestic surveillance and direct use in lethal autonomous weapons. During recent negotiations, Defense Department representatives, including Undersecretary Emil Michael, raised hypothetical scenarios such as whether Anthropic's guardrails might impede a U.S. response to an intercontinental ballistic missile attack.
The Pentagon has threatened to invoke the Defense Production Act—which allows presidential control over companies critical to national security—or alternatively label Anthropic as a "supply chain risk" and ban all defense business with the company. Anthropic maintains that its proposed contract language already enables missile defense and similar uses, disputing Pentagon characterizations of the negotiations. The company has built its reputation on AI safety principles, making this confrontation a significant test of whether private AI companies can maintain ethical guardrails when facing government pressure.
This standoff highlights broader tensions in the AI industry between national security imperatives and responsible AI development principles, particularly as the Defense Department seeks to rapidly integrate AI capabilities across military operations.
- Dispute centers on Anthropic's safety restrictions preventing mass surveillance and lethal autonomous weapons applications
- Confrontation tests whether AI companies can maintain ethical principles when facing government national security demands