Claude Opus Successfully Develops Chrome Exploit for $2,283, Highlighting Growing Cybersecurity Risks from AI Code Generation
Key Takeaways
- Mainstream AI models like Claude Opus can now generate functional exploit code at a fraction of the cost and time of manual development, lowering the barrier to entry for attackers
- The $2,283 cost demonstrates economic viability for attackers compared to vulnerability bounty rewards (~$15,000) and black-market sales of zero-days
- Wide adoption of outdated dependency versions (Discord at Chrome 138 vs. current 147) creates extended vulnerability windows that AI-assisted attackers can exploit
Summary
A security researcher using Anthropic's Claude Opus 4.6 model successfully developed a functional exploit chain targeting Chrome's V8 JavaScript engine, at a cost of approximately $2,283 in API usage. Demonstrated as a proof-of-concept attack against Discord, which bundles an outdated Chrome version, the exploit shows that mainstream AI models available to the public can now be weaponized to discover and exploit software vulnerabilities, capabilities Anthropic had previously restricted in its specialized Mythos bug-finding model.
The researcher, Mohan Pedhapati (CTO of Hacktron), spent approximately 20 hours and 2.3 billion tokens developing the working exploit. While substantial for an individual, that cost is small relative to the expert time manual exploit development requires, and to the potential payoff from vulnerability bounty programs ($15,000+) or black-market sales. Anthropic's newer Opus 4.7 model includes safeguards against high-risk cybersecurity uses, but experts argue this represents only a temporary reprieve as AI capabilities continue to advance.
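As a rough check on the economics, the figures reported above imply a blended rate of roughly one dollar per million tokens and about $114 per hour of effort. This is a back-of-the-envelope sketch using only the numbers in this article; actual Anthropic pricing varies by model and by the input/output token mix:

```python
# Figures as reported in the incident (not official pricing data).
total_cost_usd = 2283      # reported API spend
total_tokens = 2.3e9       # reported token usage
hours_spent = 20           # reported wall-clock effort
bounty_usd = 15_000        # typical vulnerability bounty cited

cost_per_million_tokens = total_cost_usd / (total_tokens / 1e6)
cost_per_hour = total_cost_usd / hours_spent
return_vs_bounty = bounty_usd / total_cost_usd

print(f"~${cost_per_million_tokens:.2f} per million tokens")
print(f"~${cost_per_hour:.0f} per hour of effort")
print(f"~{return_vs_bounty:.1f}x return if redeemed as a bounty")
```

Even before considering black-market prices, the roughly 6.6x spread between API spend and a single bounty payout illustrates why the cost figure alone is not a meaningful deterrent.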
The incident underscores a critical vulnerability window in the software update chain. Discord runs Chrome 138, nine major versions behind current releases, a lag common among Electron-based applications. As AI models improve at exploit development, the "patch window"—the time between a vulnerability's discovery and its fix—shrinks dangerously, particularly for open-source projects where patches become publicly visible before release.
AI model safeguards remain temporary solutions; steadily improving code-generation capabilities suggest that future models will inevitably make exploit development more accessible.
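The nine-version lag described above is straightforward to quantify. A minimal sketch follows; the version strings come from this article, and the helper function is illustrative, not part of any Chrome or Electron API:

```python
def major_version_lag(bundled: str, current: str) -> int:
    """How many major versions a bundled browser runtime trails the current release."""
    return int(current.split(".")[0]) - int(bundled.split(".")[0])

# Versions cited in the article: Discord bundles Chrome 138; the current release is 147.
lag = major_version_lag("138.0", "147.0")
print(f"Discord's bundled Chrome is {lag} major versions behind")
```

Since major Chrome releases routinely ship security fixes, a major-version lag like this is a rough proxy for the exploit window an attacker inherits for free.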
Editorial Opinion
This incident validates the security community's fundamental concern about unrestricted AI access to code generation: we are witnessing the democratization of exploit development. While Anthropic's decision to restrict Mythos shows responsible governance, the genie is partially out of the bottle; widely available models like Opus can already accomplish what those specialized restrictions were designed to prevent. The real crisis isn't today's $2,283 exploit but tomorrow's, when any script kiddie with patience and an API key can replicate it. The onus now shifts decisively to developers to harden security practices upstream and maintain dependency discipline, because the AI arms race has fundamentally shortened the time defenders have to respond.