Trump Orders Federal Agencies to Stop Using Anthropic's AI Over Pentagon Guardrails Dispute
Key Takeaways
- Trump ordered all federal agencies to immediately stop using Anthropic's AI technology, with a six-month phase-out period
- The dispute stems from Anthropic's insistence on guardrails preventing Claude's use in mass surveillance and fully autonomous weapons without human oversight
- The Pentagon threatened to designate Anthropic as a "supply chain risk" if it doesn't drop its proposed restrictions by the deadline
Summary
President Trump announced on Friday that he is directing all federal agencies to immediately cease using Anthropic's AI technology, escalating a dispute between the AI company and the Pentagon over military use restrictions. The directive comes as the Defense Department set a 5 p.m. Friday deadline for Anthropic to drop its proposed guardrails on the military's use of its Claude AI model, threatening to designate the company as a "supply chain risk" if no agreement is reached.
The conflict centers on Anthropic's push for certain limitations on Claude's military applications, including restrictions against mass surveillance of Americans and requirements for human oversight in final targeting decisions during military operations. Anthropic argues these safeguards are necessary because Claude can hallucinate and make errors that could lead to lethal mistakes without human judgment. The company was awarded a $200 million Pentagon contract in July 2025 to develop AI capabilities for national security.
The Pentagon, led by Defense Secretary Pete Hegseth, has insisted on unrestricted access to Anthropic's AI model and claims it has made "very good concessions," including written acknowledgment of existing laws restricting military surveillance of Americans. However, Anthropic maintains that the new contract language "made virtually no progress" on preventing problematic uses and included legal provisions that would allow safeguards to be "disregarded at will." Trump's order gives agencies six months to phase out Anthropic's technology and threatens "major civil and criminal consequences" if the company doesn't cooperate during the transition.
Editorial Opinion
This dispute represents a critical moment in the debate over responsible AI development versus national security imperatives. Anthropic's position reflects growing concerns among AI safety advocates about autonomous weapons and surveillance, while the Pentagon's stance prioritizes operational flexibility and defense readiness. The question of whether private AI companies should have a say in how their technologies are deployed by government agencies goes beyond this specific case and will likely shape future public-private partnerships in the AI sector. The Trump administration's aggressive response suggests a broader tension between Silicon Valley's AI safety culture and government demands for unrestricted access to cutting-edge technology.