BotBeat

Anthropic · RESEARCH · 2026-04-03

Anthropic's Claude Code Source Code Leaked Again Via npm Package—Second Incident in 13 Months

Key Takeaways

  • A single missing line in `.npmignore` exposed the complete source code of Anthropic's Claude Code, the second time this exact leak vector has been exploited in 13 months
  • The leaked codebase revealed internal model codenames, unreleased product details (the Mythos model), and an "Undercover Mode" system that automatically sanitizes employee contributions to public repositories
  • With two major leaks in five days, both attributed to human error rather than systemic security failures, Anthropic's security posture faces credibility challenges that contradict the company's safety-first brand positioning
Source: Hacker News — https://www.sabrina.dev/p/claude-code-source-leak-analysis

Summary

Anthropic experienced its second major source code leak in 13 months when a 59.8 MB source map file was accidentally included in the Claude Code v2.1.88 npm package on March 31, 2026. The unobfuscated TypeScript source code was exposed through an unauthenticated URL on Anthropic's Cloudflare R2 storage bucket, and the leaked codebase reached 50,000 GitHub stars within two hours of public discovery by security researcher Chaofan Shou. The root cause was a single missing `*.map` entry in the `.npmignore` configuration file, compounded by an open bug in the Bun runtime that generates source maps in production mode by default.
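The fix described above is a one-line exclusion. A minimal sketch of what such an entry looks like, assuming a standard npm package layout (the specific filenames besides `*.map` are illustrative, not taken from Anthropic's repository):

```
# .npmignore — keep build artifacts out of the published tarball
*.map           # source maps (the missing entry behind this leak)
*.tsbuildinfo   # TypeScript incremental build state
src/            # ship only the compiled output, not the sources
```

A defense-in-depth alternative is the allowlist approach — the `files` field in `package.json` — which publishes only what is explicitly listed, so newly introduced artifact types are excluded by default. Either way, `npm pack --dry-run` prints the exact file list that would be published and makes this class of mistake easy to catch in CI.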

The leak revealed significant internal details about Anthropic's operations, including unreleased product codenames (Mythos, Capybara, Tengu, Opus, Sonnet), secret model versioning schemes, undocumented beta API headers, and 22 internal repository names. The incident occurred just five days after Fortune reported a separate misconfiguration that exposed approximately 3,000 internal Anthropic files, raising serious questions about the company's operational security practices. An "Undercover Mode" system was discovered in the code designed to prevent Anthropic employees from leaking sensitive information in public repositories by automatically filtering commit messages to remove model codenames and version details.
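The article does not include the Undercover Mode implementation, but the behavior it describes — rewriting commit messages to strip internal codenames and version details before they reach public repositories — can be sketched in a few lines. Everything below (the codename list, the replacement token, the function name) is a hypothetical illustration, not Anthropic's code:

```python
import re

# Hypothetical codenames to redact; the real list is internal to Anthropic.
CODENAMES = ["Mythos", "Capybara", "Tengu"]

# Match a codename plus an optional trailing version tag like "v2.1.88".
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, CODENAMES)) + r")(\s+v?\d+(\.\d+)*)?\b",
    re.IGNORECASE,
)

def sanitize_commit_message(message: str) -> str:
    """Replace internal codenames (and any version tag) with a neutral token."""
    return PATTERN.sub("[internal]", message)
```

Hooked into a `commit-msg` git hook or a pre-push filter, a sanitizer like this would turn `"Port Tengu v2.1.88 fixes to the CLI"` into `"Port [internal] fixes to the CLI"` while leaving ordinary messages untouched.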

  • The incident highlights the risks of including unobfuscated source maps in production npm packages and exposes gaps between Anthropic's public safety commitments and internal operational practices

Editorial Opinion

While configuration oversights happen, the identical failure mode repeating 13 months later suggests Anthropic lacks robust internal safeguards and post-incident reviews—critical gaps for a company positioning itself as the safety-conscious AI leader. The discovery of a custom 'Undercover Mode' designed to prevent employees from mentioning Claude Code or model details in public repositories reveals a disconnect: if the technology needs active concealment at the developer level, what does that say about transparency and trust? For an organization claiming to prioritize AI safety above all else, losing control of its own codebase twice to basic DevOps mistakes raises uncomfortable questions about whether safety is truly embedded in the culture or merely in the marketing.

MLOps & Infrastructure · Cybersecurity · Ethics & Bias · Privacy & Data

