D.C. Circuit Court Declines to Stay DoW's Supply-Chain Risk Designation of Claude, Rejecting Anthropic's Emergency Appeal
Key Takeaways
- The Department of War invoked supply-chain risk authority to bar Anthropic from federal contracts specifically because of the company's refusal to authorize Claude use for domestic surveillance and autonomous warfare
- The D.C. Circuit declined emergency relief but acknowledged the case raises novel constitutional and statutory questions with limited judicial precedent
- Anthropic claims the designation violates the Fifth Amendment and retaliates against the company's protected speech regarding AI safety constraints
Summary
The U.S. Court of Appeals for the D.C. Circuit has declined to grant Anthropic an emergency stay of the Department of War's March 2026 determination that procuring AI goods or services from the company presents a supply-chain risk to national security. The designation resulted from Anthropic's refusal to contractually authorize the Department to use Claude for mass domestic surveillance or lethal autonomous warfare. The Department has since canceled its contracts with Anthropic, begun removing Claude from its systems, and prohibited other contractors from using Anthropic as a subcontractor on DoW work.
In a decision issued by a panel of Judges Karen LeCraft Henderson, Gregory Katsas, and Neomi Rao, the court acknowledged that Anthropic's challenge raises novel and difficult questions about what constitutes a supply-chain risk under federal law and whether the government's actions were constitutional. The panel nonetheless concluded that Anthropic failed to demonstrate that the balance of equities warranted emergency intervention at this stage. The court noted that while Anthropic would suffer financial harm, the company's CEO had stated that the vast majority of Anthropic's customers would remain unaffected, since the designation applies only to direct DoW contracts.
- The court found that Anthropic's primary injury is financial rather than constitutional, and the company has not demonstrated irreparable harm sufficient to warrant a stay pending full review
Editorial Opinion
This case represents a critical inflection point in how the U.S. government exercises control over AI deployment in national security contexts. While the D.C. Circuit's procedural ruling leaves the core issues unresolved, the dispute itself highlights an emerging tension between government authority to secure military AI supply chains and corporate autonomy to set ethical boundaries on technology use. Anthropic's principled refusal to enable mass surveillance and autonomous warfare through Claude—and the government's apparent weaponization of procurement rules in response—raises profound questions about whether national security law is being stretched to punish ethical dissent rather than address genuine security vulnerabilities.