BotBeat

POLICY & REGULATION · Anthropic · 2026-04-03

Security Researcher Details How Anthropic's Claude Mythos Leak Occurred Through Unsecured CMS API

Key Takeaways

  • Anthropic's Sanity.io CMS had unauthenticated read access enabled by default, allowing anyone to retrieve draft and published content via API queries
  • The project ID was discoverable through publicly accessible image URLs, making it easy for attackers to target the specific CMS instance
  • Sanity's revision history feature retained all document versions by default with no way to disable this behavior, allowing access to deleted content indefinitely
Source: Hacker News (https://iter.ca/post/claude-cms/)

Summary

A security researcher has published a detailed technical analysis of how Anthropic's draft blog post about an upcoming model called "Claude Mythos" was leaked in March. The leak occurred through an improperly configured Sanity.io content management system (CMS) that Anthropic uses to manage its website. The researcher discovered that Anthropic had left unauthenticated read access enabled on its Sanity API endpoints, allowing anyone to query and retrieve all published and draft content from the CMS, including the unreleased article, which was created in the CMS on March 13 and leaked publicly on March 26.
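To make the attack surface concrete, here is a minimal sketch of how an unauthenticated GROQ query against a Sanity CDN endpoint is constructed. The project ID, dataset, and API version are taken from the article; the GROQ query itself is illustrative (Sanity namespaces draft documents under a `drafts.` ID prefix). This only builds the URL and does not perform any request.

```python
from urllib.parse import quote

# Values reported in the article; the GROQ query below is illustrative.
PROJECT_ID = "4zrzovbb"    # discoverable from image URLs on the site
DATASET = "website"
API_VERSION = "v2021-10-21"

def query_url(groq: str) -> str:
    """Build an unauthenticated GROQ query URL against Sanity's CDN API."""
    base = f"https://{PROJECT_ID}.apicdn.sanity.io/{API_VERSION}/data/query/{DATASET}"
    return f"{base}?query={quote(groq)}"

# Fetch every draft document (drafts live under the "drafts." ID prefix):
url = query_url('*[_id in path("drafts.**")]{_id, _type, title}')
print(url)
```

With unauthenticated read access enabled, a plain GET to such a URL returns the matching documents as JSON, drafts included, which is why this misconfiguration exposed unpublished content.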

The researcher accessed the CMS through the API endpoint https://4zrzovbb.apicdn.sanity.io/v2021-10-21/data/query/website; the project ID embedded in that hostname was discoverable through image URLs served from Anthropic's website. Even more concerning, the researcher found that Sanity's revision history feature allowed access to all previous versions of deleted content, meaning the leaked article could be retrieved even after Anthropic attempted to remove it. The researcher responsibly disclosed the vulnerability to Anthropic on March 26, and the company fixed the issue within hours by disabling all unauthenticated API access.
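The revision-history exposure can be sketched the same way. This follows the general shape of Sanity's documented History API (a GET under `/data/history/<dataset>/documents/<id>`); the host choice, document ID, and revision ID here are assumptions for illustration, not values from the article.

```python
from urllib.parse import urlencode

# Project ID, dataset, and API version are from the article; the document ID
# and revision ID below are hypothetical placeholders.
PROJECT_ID = "4zrzovbb"
DATASET = "website"
API_VERSION = "v2021-10-21"

def history_url(doc_id: str, revision: str) -> str:
    """Build a URL requesting a specific historical revision of a document.

    Assumes the history API is served from the api.sanity.io host rather
    than the apicdn CDN host.
    """
    base = f"https://{PROJECT_ID}.api.sanity.io/{API_VERSION}"
    return f"{base}/data/history/{DATASET}/documents/{doc_id}?" + urlencode({"revision": revision})

print(history_url("post-claude-mythos", "abc123"))  # hypothetical IDs
```

Because revisions are retained by default, deleting a document does not remove its past versions from such queries, which is how the article remained retrievable after Anthropic pulled it.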

  • The leak appeared to result from monitoring of the Sanity WebSockets endpoint for new content changes, with a 13-day gap between content creation and public disclosure
  • Anthropic responded quickly to the responsible disclosure, fixing the vulnerability within hours of being notified
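For context on the monitoring theory above: Sanity's documented real-time API is a `listen` endpoint that streams mutation events for documents matching a GROQ query (the article describes it as a WebSockets endpoint). A sketch of the URL such a watcher would subscribe to, again only constructing the URL rather than opening a connection:

```python
from urllib.parse import urlencode

# Project ID, dataset, and API version are from the article; the query is
# illustrative. A real watcher would hold this connection open and receive
# an event each time a matching document is created or edited.
PROJECT_ID = "4zrzovbb"
DATASET = "website"
API_VERSION = "v2021-10-21"

def listen_url(groq: str) -> str:
    """Build a URL for Sanity's real-time listener endpoint."""
    base = f"https://{PROJECT_ID}.api.sanity.io/{API_VERSION}/data/listen/{DATASET}"
    return base + "?" + urlencode({"query": groq, "includeResult": "true"})

url = listen_url("*[_type == 'post']")
print(url)
```

With unauthenticated access enabled, anyone subscribed this way would see new drafts the moment they were created, consistent with the 13-day head start described above.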

Editorial Opinion

This incident highlights the critical importance of properly configuring third-party services and understanding their default security settings. While Anthropic's quick response to the responsible disclosure is commendable, the initial misconfiguration—leaving unauthenticated API access enabled when the frontend doesn't use it—represents a common but preventable security mistake. The case underscores why AI companies handling sensitive information about unreleased models must implement defense-in-depth security practices and regularly audit their infrastructure configurations.

Cybersecurity · AI Safety & Alignment · Privacy & Data
