BotBeat

Anthropic
POLICY & REGULATION · 2026-04-25

Discord Group Claims Access to Anthropic's Restricted Claude Mythos Model

Key Takeaways

  • Discord users gained unauthorized access to Claude Mythos by guessing its URL location and leveraging insider contractor access, not through sophisticated hacking
  • The breach undermines Anthropic's controlled access model (Project Glasswing), designed to protect a model capable of identifying zero-day vulnerabilities
  • The unauthorized access was discovered to be part of a broader breach, with the group claiming access to multiple unreleased Anthropic models
Source: Hacker News — https://mashable.com/article/discord-group-accesses-claude-mythos-claims

Summary

An anonymous group of Discord users claimed to have accessed Claude Mythos, Anthropic's unreleased and heavily restricted AI model designed to identify and exploit zero-day vulnerabilities in major operating systems and web browsers. According to Bloomberg, the unauthorized access was not the result of sophisticated hacking but of a combination of guessing the model's URL location using patterns discovered in a Mercor data breach and leveraging insider access provided by a contractor working with Anthropic.

Claude Mythos is distributed exclusively through Anthropic's Project Glasswing initiative, which limits access to a select group of tech partners deemed capable of responsibly handling such a powerful tool. The Discord group, which operates a private channel dedicated to discovering information about unreleased models, claimed they used their unauthorized access only for benign tasks such as building simple websites. However, they also claimed to have gained access to multiple additional unreleased Anthropic models, suggesting a broader security gap.

Anthropic confirmed to Bloomberg that it is aware of the breach claim and is investigating the incident. While there is currently no evidence that other unauthorized parties have accessed Claude Mythos, the incident raises significant concerns about Anthropic's security practices for protecting its most sensitive systems, particularly given the company's positioning of the model as a paradigm-shifting security breakthrough.

  • Anthropic is investigating but has found no evidence of broader malicious use or access by other unauthorized parties
Tags: Large Language Models (LLMs) · Cybersecurity · AI Safety & Alignment · Privacy & Data

More from Anthropic

Anthropic · UPDATE
Anthropic Launches Claude Research Capabilities With Multi-Agent System Architecture
2026-04-25

Anthropic · INDUSTRY REPORT
The AI Compute Crunch Is Here—And It's Reshaping the Economy
2026-04-25

Anthropic · UPDATE
Claude Opus 4.7 Criticized for Overly Aggressive Safety Guardrails, Blocking Legitimate Requests
2026-04-25
