Pentagon's Contract-Based AI Governance Model Faces Structural Limits After Anthropic Standoff
Key Takeaways
- The U.S. Department of Defense relies on bilateral procurement contracts rather than statutory regulation to govern military AI deployment, creating a governance model that lacks democratic accountability and institutional durability
- Pentagon AI contracts often operate under Other Transaction (OT) agreements outside the Federal Acquisition Regulation, meaning guardrails and dispute resolution frameworks are determined by individual negotiations rather than standardized rules
- Secretary of Defense Pete Hegseth's January memo requiring "any lawful use" language and removal of technical safety constraints triggered the Anthropic conflict and exposed tensions between military operational needs and vendor governance policies
Summary
The Pentagon's February 2025 designation of Anthropic as a supply chain risk and subsequent government-wide exclusion of the AI company has exposed fundamental weaknesses in how the U.S. military governs artificial intelligence deployment. Rather than relying on statutes and regulations, the Department of Defense has increasingly adopted a "regulation by contract" approach, where bilateral agreements between individual government agencies and AI vendors serve as the primary governance mechanism. This procurement-based framework lacks the democratic accountability, public deliberation, and institutional durability that statutory regulation provides, and its enforceability depends largely on technical controls vendors can maintain within government systems.
The crisis originated in January when Secretary of Defense Pete Hegseth issued a strategic memo requiring all Defense Department AI contracts to include "any lawful use" language within 180 days, effectively removing vendor-imposed usage restrictions and technical safety constraints. This directive conflicted with Anthropic's content policy restrictions, triggering the exclusion. Meanwhile, OpenAI negotiated a separate Pentagon deal and subsequently amended key terms after public backlash. The standoff reveals that the current governance structure—operating through various contracting vehicles including Other Transaction (OT) agreements outside the Federal Acquisition Regulation—cannot adequately address the complex policy questions surrounding military AI use, domestic surveillance, autonomous weapons, and intelligence oversight.
Because enforcement rests on those technical controls, contract terms remain vulnerable to technical workarounds, a structural weakness that leaves the model insufficient for governing sensitive applications such as autonomous weapons and domestic surveillance.
Editorial Opinion
The revelation that the Pentagon governs AI through ad hoc procurement contracts rather than a coherent statutory framework represents a dangerous governance gap. While procurement flexibility may accelerate military AI adoption, it fundamentally abdicates legislative and regulatory responsibility for decisions that implicate national security, constitutional rights, and global stability. The Anthropic-Pentagon standoff demonstrates that this contractual approach cannot resolve tensions between military operational demands and responsible AI deployment. Only comprehensive statutory governance, public deliberation, and institutional oversight can provide the accountability that such consequential decisions demand.