BotBeat

Anthropic · POLICY & REGULATION · 2026-03-04

Federal Agencies Scramble to Replace Anthropic's Claude After Trump Administration Ban

Key Takeaways

  • Federal agencies across multiple departments are ending use of Anthropic's Claude following a Trump administration directive to phase out the technology within six months
  • The ban originated from a disagreement between Anthropic and the Department of Defense over terms of service related to mass surveillance and autonomous weapons
  • Treasury Department engineers are migrating from Claude Code to alternatives including OpenAI's Codex, Google's Gemini, and xAI's Grok
Source: Hacker News, https://fedscoop.com/nasa-chatbots-treasury-coding-opm-drafting-agencies-deployed-claude/

Summary

Multiple U.S. federal agencies are rapidly phasing out Anthropic's Claude AI tools following a directive from President Trump to remove the company's services within six months. The ban, announced via Truth Social, stems from a dispute between Anthropic and the Department of Defense over the company's terms of service, which CEO Dario Amodei says are designed to prevent use in mass surveillance and fully autonomous weapons systems.

The Treasury Department, NASA, the Office of Personnel Management, the International Trade Administration, the State Department, and the General Services Administration have all confirmed they are discontinuing or have already stopped using Claude. Treasury's approximately 100 software engineers who used Claude Code for development are migrating to OpenAI's Codex, Google's Gemini, and testing xAI's Grok. The State Department is removing Claude from StateChat, its internal chatbot used by thousands of staff for summarization, drafting, and translation tasks.

NASA faces particular challenges as it uses Claude Sonnet 3.5 for two chatbots—one assisting Goddard Space Flight Center employees with document editing and code explanation, and another at Langley Research Center for processing controlled unclassified information. OPM halted its use of Claude for summarization, drafting, and decision support shortly after the announcement. The widespread impact highlights how deeply AI tools have been integrated into federal workflow automation and productivity enhancement, with agencies now forced to rapidly identify alternative solutions.

  • NASA's two Claude-powered chatbots and the State Department's StateChat are among the high-profile use cases being impacted
  • The directive reveals the extensive integration of AI tools in federal operations for coding assistance, document summarization, translation, and workflow automation
Tags: Large Language Models (LLMs) · AI Agents · Government & Defense · Partnerships · Regulation & Policy
