BotBeat
INDUSTRY REPORT · Anthropic · 2026-04-20

NSA Uses Anthropic's Claude Despite Apparent Blacklist, Report Reveals

Key Takeaways

  • The NSA is using Anthropic's Claude model despite apparent restrictions or blacklist status
  • The disclosure reveals a gap between formal government AI procurement policies and actual usage patterns
  • Questions arise about the reasons behind any blacklist and the justification for simultaneous operational use
Source: Hacker News (https://www.reuters.com/business/us-security-agency-is-using-anthropics-mythos-despite-blacklist-axios-reports-2026-04-19/)

Summary

According to reporting by Palmik, the U.S. National Security Agency (NSA) is actively using Anthropic's Claude language model for internal operations despite the company appearing on what has been characterized as a blacklist. The disclosure raises questions about government AI adoption policies and the inconsistency between public procurement restrictions and actual usage patterns within federal intelligence agencies.

The revelation suggests that despite potential restrictions on official procurement or partnerships with Anthropic, the NSA has found ways to access and deploy Claude's capabilities for its work. This apparent contradiction between formal policy and operational reality highlights the complex landscape of AI adoption within U.S. government agencies, where different departments and security clearance levels may operate under different procurement guidelines.

  • The incident underscores the complexity of AI governance across federal agencies with different security and procurement requirements
Tags: Large Language Models (LLMs), Government & Defense

© 2026 BotBeat