BotBeat

Anthropic
POLICY & REGULATION · 2026-02-27

Trump Orders US Agencies to Drop Anthropic After Pentagon Feud

Key Takeaways

  • Trump has ordered US agencies to stop working with Anthropic following a dispute with the Pentagon
  • The directive could significantly affect Anthropic's government contracts and its role in federal AI policy
  • The move highlights growing tensions between AI companies and defense establishments over military applications
Source: Hacker News (https://www.bloomberg.com/news/articles/2026-02-27/trump-orders-us-government-to-drop-anthropic-after-pentagon-feud)

Summary

President Donald Trump has reportedly ordered US government agencies to cease working with the AI company Anthropic following an unspecified dispute with the Pentagon. The directive represents a significant escalation in tensions between the AI safety-focused company and the US defense establishment. Anthropic, known for its Claude AI assistant and its emphasis on AI safety research, has been working with various government agencies on AI deployment and safety initiatives.

The nature of the Pentagon feud remains unclear, but the order could have substantial implications for Anthropic's government contracts and its role in shaping federal AI policy. Anthropic has positioned itself as a leader in AI safety and constitutional AI, so its exclusion could be consequential for federal AI safety initiatives. The company has raised billions in funding and competes directly with OpenAI and other major AI labs.

This development highlights the increasingly political nature of AI deployment in government contexts and raises questions about how personal or institutional disputes might affect the government's access to cutting-edge AI technology. The order could also signal broader tensions about AI companies' willingness to work with defense and military applications, an issue that has divided Silicon Valley in recent years.

  • Anthropic's focus on AI safety makes its exclusion potentially consequential for government AI safety initiatives

Tags: Government & Defense · Partnerships · Regulation & Policy · AI Safety & Alignment

More from Anthropic

Anthropic
RESEARCH

Inside Claude Code's Dynamic System Prompt Architecture: Anthropic's Complex Context Engineering Revealed

2026-04-05
Anthropic
POLICY & REGULATION

Anthropic Explores AI's Role in Autonomous Weapons Policy with Pentagon Discussion

2026-04-05
Anthropic
POLICY & REGULATION

Security Researcher Exposes Critical Infrastructure After Following Claude's Configuration Advice Without Authentication

2026-04-05

Suggested

Oracle
POLICY & REGULATION

AI Agents Promise to 'Run the Business'—But Who's Liable When Things Go Wrong?

2026-04-05
Anthropic
POLICY & REGULATION

Anthropic Explores AI's Role in Autonomous Weapons Policy with Pentagon Discussion

2026-04-05
Perplexity
POLICY & REGULATION

Perplexity's 'Incognito Mode' Called a 'Sham' in Class Action Lawsuit Over Data Sharing with Google and Meta

2026-04-05
© 2026 BotBeat