BotBeat
POLICY & REGULATION · Anthropic · 2026-04-23

Discord Group Claims Unauthorized Access to Claude Mythos by Exploiting Weak Security

Key Takeaways

  • An unauthorized Discord group claims to have accessed Claude Mythos Preview through basic social engineering (guessing its location) and insider access, not advanced hacking techniques
  • The breach highlights a significant gap between Anthropic's security claims and its operational security practices for its most sensitive AI model
  • The group reportedly gained access to multiple unreleased Anthropic models, suggesting broader weaknesses in the company's access controls
Source: Hacker News (https://mashable.com/article/discord-group-accesses-claude-mythos-claims)

Summary

An anonymous group of Discord users claims to have gained unauthorized access to Claude Mythos Preview, Anthropic's highly restricted AI model that the company describes as capable of identifying and exploiting zero-day vulnerabilities across major operating systems and browsers. According to Bloomberg, the group did not employ sophisticated hacking techniques but instead guessed the model's online location using naming conventions discovered in a recent Mercor data breach, combined with privileged access held by one group member working at an Anthropic contractor. Anthropic confirmed it is investigating the claim and stated there is no indication that other unauthorized parties have accessed Claude Mythos. The incident raises significant security concerns given that Anthropic positioned Claude Mythos as too dangerous for public release and restricted access through an invite-only Project Glasswing initiative designed to help tech leaders secure critical infrastructure.

  • Anthropic is investigating the breach and reports no current evidence of exploitation for malicious purposes, though the stakes are high for a model the company itself describes as a cybersecurity threat

Editorial Opinion

This incident exemplifies a critical tension in AI safety: Anthropic's decision to restrict Claude Mythos because of its dangerous capabilities rings hollow when basic security practices fail to protect it. That Discord users needed only guesswork and insider access, not sophisticated exploits, suggests the company prioritized controlled distribution over robust security infrastructure. For a company pitching itself as a responsible steward of powerful AI systems, this breach undermines confidence in both its technical security and its ability to manage other restricted models.

Cybersecurity · Ethics & Bias · AI Safety & Alignment

More from Anthropic

Anthropic
PRODUCT LAUNCH

Anthropic Launches Monthly Economic Index Survey to Track AI's Real-World Impact on Work

2026-04-22
Anthropic
RESEARCH

Anthropic Researchers Propose Test-Time Scaling Framework for Agentic Coding, Boosting Claude Performance on SWE-Bench

2026-04-22
Anthropic
INDUSTRY REPORT

Survey of 81,000 Claude Users Reveals Job Displacement Fears Align With AI Exposure Levels

2026-04-22

Suggested

Delphi Security
PRODUCT LAUNCH

Delphi Security Launches xAIDR: First Runtime Benchmark for Agent-to-Agent Attack Detection

2026-04-23
Academic Research
RESEARCH

New Research Reveals LLMs Can Violate Privacy Through Inference, Not Just Memorization

2026-04-23
Tencent
OPEN SOURCE

Tencent Open-Sources CubeSandbox: High-Performance Sandbox for AI Agents with Sub-60ms Startup

2026-04-23
© 2026 BotBeat