BotBeat

Anthropic · RESEARCH · 2026-04-16

Researcher Demonstrates Claude Opus Successfully Building Working Chrome V8 Exploit

Key Takeaways

  • Frontier AI models like Claude Opus can now reliably develop working security exploits from known CVEs with sustained human guidance, blurring the line between theoretical vulnerability and practical weaponization.
  • Significant patch lag has persisted since 2022 in widely used Electron-based applications (Discord, Teams, Notion), leaving known but unpatched vulnerabilities exposed to AI-assisted exploitation.
  • The experiment demonstrates that exploit development once requiring deep expertise is becoming accessible at the script-kiddie level through API access and patience, raising urgent questions about security practices and patching timelines.
Source: Hacker News — https://www.hacktron.ai/blog/i-let-claude-opus-to-write-me-a-chrome-exploit

Summary

A security researcher has documented how Claude Opus, Anthropic's frontier AI model, successfully developed a functional exploit chain targeting Chrome's V8 engine—specifically creating a working exploit for Discord's outdated bundled Chrome (version 138, nine versions behind current). The exercise, which consumed 2.3 billion tokens and cost $2,283 in API fees over approximately 20 hours of interaction, demonstrates the practical capability of advanced AI models to bridge the gap between known security patches and working exploits. The researcher used a V8 out-of-bounds (OOB) vulnerability from Chrome 146—notably the same version running in Anthropic's own Claude Desktop application.
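The patch-lag figures above can be made concrete. As a minimal sketch (the app names and version numbers are illustrative, taken from the article's figures rather than a live inventory; a real tool would read each Electron app's `process.versions.chrome` or inspect its binaries), a script can compare an app's bundled Chromium major version against the current stable major:

```python
# Sketch: flag Electron apps whose bundled Chromium lags current stable.
# Numbers are illustrative, from the article: Discord bundles Chromium 138,
# described as nine majors behind current (so current stable is taken as 147),
# and Claude Desktop runs Chromium 146, the version the exploited bug is from.

CURRENT_STABLE_MAJOR = 147  # assumed current Chrome stable major

BUNDLED_CHROMIUM_MAJOR = {
    "Discord": 138,         # nine majors behind, per the article
    "Claude Desktop": 146,  # same major as the exploited V8 OOB bug
}

def patch_lag(app: str) -> int:
    """Return how many Chromium major releases an app is behind stable."""
    return CURRENT_STABLE_MAJOR - BUNDLED_CHROMIUM_MAJOR[app]

def at_risk(app: str, threshold: int = 1) -> bool:
    """Any app lagging by >= threshold majors likely misses security fixes."""
    return patch_lag(app) >= threshold

for app in BUNDLED_CHROMIUM_MAJOR:
    status = "AT RISK" if at_risk(app) else "ok"
    print(f"{app}: {patch_lag(app)} major(s) behind ({status})")
```

Even a one-major lag is enough here: the exploited vulnerability was already patched upstream, so the gap between a patch landing in Chrome and landing in a bundled Chromium is the entire attack window.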

The experiment was conducted to substantiate concerns raised by Anthropic's recent announcements around Project Glasswing and Mythos, which highlighted AI's potential role in accelerating exploit development. Rather than engaging in theoretical debate, the researcher took a methodical approach: identifying a known, unpatched CVE in Discord's Chromium bundle, and iteratively directing Claude Opus to develop a full exploitation chain. The process required careful scaffolding across multiple sessions, with the researcher functioning as a guide to navigate the model away from dead ends, ultimately achieving code execution (demonstrated by popping the calculator application).

Editorial Opinion

This research is a sobering reality check on the security implications of AI progress. While Anthropic's Mythos announcement may strike some as theatrical, this hands-on demonstration proves the underlying concern is substantive: the exponential improvement in AI-assisted exploit development is outpacing the glacial pace of security patching in production software. The responsibility now falls on application developers and enterprises to treat patch lag as a critical vulnerability, not a convenience.

Tags: Generative AI · Deep Learning · Cybersecurity · AI Safety & Alignment


© 2026 BotBeat