Pentagon's Anthropic Supply Chain Risk Designation Faces Legal Challenges
Key Takeaways
- The Pentagon designated Anthropic as a supply chain risk following disputes over usage restrictions in its military contracts prohibiting autonomous weapons and mass surveillance
- This marks the first known use of supply chain risk designation authorities against a major domestic AI company; the only prior public case involved a Swiss cybersecurity firm
- Anthropic plans to challenge the designation in court, potentially establishing precedent for government authority to regulate AI companies on national security grounds
Summary
Defense Secretary Pete Hegseth designated AI company Anthropic as a supply chain risk to national security on February 27, 2026, following a directive from President Trump to cease using Anthropic's Claude AI technology across all federal agencies. The designation came after escalating tensions over two usage restrictions in Anthropic's military contract—prohibitions on autonomous weapons and mass surveillance—which conflicted with Hegseth's January directive requiring all DoD AI contracts to adopt standard "any lawful use" language.
Hegseth invoked 10 U.S.C. § 3252, a rarely used procurement authority that allows the Pentagon to exclude vendors from Defense Department contracts and restrict their participation in contractor supply chains. The designation includes a six-month transition period for the military to move away from Anthropic's services. Anthropic has vowed to challenge the designation in court, setting up what could be the first major legal test of these supply chain risk authorities against a domestic AI company.
Legal experts writing in Lawfare argue the designation has serious legal vulnerabilities. According to the analysis, Hegseth's action may exceed statutory authorization, the required findings appear questionable, and his public threats to invoke the Defense Production Act to compel compliance may have undermined the government's litigation position. The case could establish important precedents for how the government can regulate AI companies on national security grounds, particularly when disputes center on ethical usage restrictions rather than foreign ownership or espionage concerns.


