BotBeat

Anthropic · POLICY & REGULATION · 2026-03-21

Anthropic Denies 'Kill Switch' Claims, Argues It Cannot Sabotage Claude During Military Operations

Key Takeaways

  • Anthropic claims it has no technical ability to manipulate Claude once deployed by the military and maintains no 'kill switch' or remote access to disable the system
  • The Pentagon designated Anthropic as a supply-chain risk due to concerns the company could sabotage military operations, leading to a broad ban on DoD use of Claude across contractors
  • Anthropic has filed two lawsuits challenging the constitutionality of the ban and proposed contractual terms to address Pentagon concerns, including guarantees against veto power over military decisions
Source: Hacker News — https://www.wired.com/story/anthropic-denies-sabotage-ai-tools-war-claude/

Summary

Anthropic filed court documents on Friday asserting that it has no ability to manipulate, disable, or alter its Claude AI model once deployed by the U.S. military, directly refuting allegations from the Trump administration that the company could sabotage military operations. The company's head of public sector, Thiyagu Ramasamy, stated in a court filing that "Anthropic does not have the access required to disable the technology or alter the model's behavior before or during ongoing operations" and that the company maintains no "back door or remote 'kill switch'." The filing comes as Anthropic challenges the Pentagon's designation of the company as a supply-chain risk, a decision announced by Defense Secretary Pete Hegseth that has effectively banned the Department of Defense from using Claude and prompted other federal agencies to abandon the platform.

The dispute centers on the Pentagon's concern that Anthropic could disrupt critical military systems by restricting access to Claude or deploying harmful updates if the company disagreed with how the military used the AI. Anthropic has proposed contractual guarantees that it will not seek veto power over military operational decisions and that any model updates would require approval from both the government and Amazon Web Services. The company also indicated willingness to address Pentagon concerns about Claude being used for autonomous lethal operations without human oversight. Nevertheless, negotiations between Anthropic and the Department of Defense have broken down. A federal judge is scheduled to hear arguments on March 24 in San Francisco and could issue a temporary order reversing the ban in the near term, though customers have already begun canceling contracts.

  • A federal court hearing scheduled for March 24 could result in a temporary reversal of the ban, though the dispute reflects broader tensions between AI companies and the U.S. military over control, oversight, and deployment of AI systems

Editorial Opinion

Anthropic's technical assertions about Claude's architecture appear credible—modern cloud-deployed AI systems typically do not grant developers unilateral kill-switch capabilities once operational. However, the company's legal battle with the Pentagon highlights a deeper trust problem: even if Anthropic lacks technical leverage, the government remains skeptical of the company's intentions and policy positions on military AI use. The breakdown in negotiations suggests that contractual reassurances alone may not satisfy Pentagon concerns about alignment and control, underscoring the broader challenge of balancing innovation in AI with legitimate national security interests.

Government & Defense · Regulation & Policy · AI Safety & Alignment

More from Anthropic

  • RESEARCH: Research Reveals When Reinforcement Learning Training Undermines Chain-of-Thought Monitorability (2026-04-05)
  • RESEARCH: Inside Claude Code's Dynamic System Prompt Architecture: Anthropic's Complex Context Engineering Revealed (2026-04-05)
  • POLICY & REGULATION: Anthropic Explores AI's Role in Autonomous Weapons Policy with Pentagon Discussion (2026-04-05)

Suggested

  • Whish Money, INDUSTRY REPORT: As Lebanon's Humanitarian Crisis Deepens, Digital Wallets Emerge as Lifeline for Displaced Millions (2026-04-05)
  • Microsoft, OPEN SOURCE: Microsoft Releases Agent Governance Toolkit: Open-Source Runtime Security for AI Agents (2026-04-05)
  • Microsoft, POLICY & REGULATION: Microsoft's Copilot Terms Reveal Entertainment-Only Classification Despite Business Integration (2026-04-05)
© 2026 BotBeat