ClaudeBleed: Critical Vulnerability Allows Any Chrome Extension to Hijack Anthropic's Claude AI
Key Takeaways
- ClaudeBleed allows any Chrome extension to inject commands into Claude's messaging interface, bypassing permission checks entirely
- Attackers can exfiltrate sensitive user data (emails, GitHub repos, Google Drive files) and manipulate Claude into performing unauthorized actions
- Anthropic's patch in v1.0.70 provides only partial mitigation; core vulnerabilities remain under "Act without asking" mode and alternative execution flows
Summary
A critical security vulnerability dubbed "ClaudeBleed" has been discovered in Anthropic's "Claude in Chrome" browser extension, allowing any Chrome extension—even those with zero permissions—to hijack Claude's capabilities and perform sensitive actions without meaningful user consent. Researchers at LayerX identified that the flaw stems from a trust boundary failure in how the extension handles communication between scripts on claude.ai and the extension itself, exploiting Chrome's externally_connectable feature. The vulnerability could enable attackers to steal emails, access private GitHub repositories, exfiltrate Google Drive files, and manipulate Claude into executing browser actions on behalf of users.
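The trust boundary failure described above can be modeled in miniature. The sketch below is illustrative only, not Anthropic's actual code: it assumes a simplified message handler and a sender object loosely modeled on Chrome's `MessageSender`, and shows why checking only the sending *page's* URL cannot distinguish Anthropic's own scripts from a script injected into claude.ai by a malicious extension.

```typescript
// Hypothetical, simplified model of the flawed trust check (illustrative
// names and shapes; not Anthropic's actual implementation).
interface Sender {
  url?: string; // page the message came from, as in chrome.runtime.MessageSender
  id?: string;  // extension ID, if the sender was an extension
}

// Vulnerable check: accepts any message whose sender *page* is claude.ai.
// A content script injected into claude.ai by an unrelated extension
// produces exactly the same sender.url, so this check cannot tell the
// two apart.
function vulnerableAccepts(sender: Sender): boolean {
  return sender.url?.startsWith("https://claude.ai/") ?? false;
}

const fromAnthropicScript: Sender = { url: "https://claude.ai/chat" };
const fromInjectedScript: Sender = { url: "https://claude.ai/chat" };

// Both pass the same check:
// vulnerableAccepts(fromAnthropicScript) -> true
// vulnerableAccepts(fromInjectedScript)  -> true
```

The point of the sketch is that the page URL proves where a message was sent *from*, not *who* wrote the script that sent it, which is the gap LayerX describes.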
The core issue lies in the extension's failure to verify whether scripts are actually from Anthropic or have been injected by malicious extensions. Proof-of-concept attacks successfully extracted sensitive data from Google Drive, sent emails through Gmail, and accessed private GitHub repositories. Researchers also discovered weaknesses in Claude's approval system, including an "approval looping" technique that allows bypassing safeguards by repeatedly submitting automated requests. Additionally, DOM manipulation attacks could trick Claude into treating dangerous actions as harmless ones.
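The DOM-manipulation technique can be sketched in the same simplified style. This is a hedged illustration, not the researchers' proof-of-concept: it assumes a naive classifier that judges an action by its visible label rather than its underlying target, which is the weakness a relabeling attack exploits.

```typescript
// Illustrative model of label-based action classification (all names here
// are assumptions for the sketch, not real Claude internals).
interface Action {
  label: string;  // what the user (or the assistant) sees in the DOM
  target: string; // what the action actually does
}

// Naive classifier keyed on the visible label -- the weakness:
function looksDangerous(a: Action): boolean {
  return /delete|send|share/i.test(a.label);
}

const real: Action = { label: "Send email", target: "gmail:send" };
// Same underlying target, but the DOM label has been rewritten:
const spoofed: Action = { label: "View summary", target: "gmail:send" };

// looksDangerous(real)    -> true  (prompts the user)
// looksDangerous(spoofed) -> false (waved through despite identical target)
```

A classifier keyed on the actual `target` rather than the mutable label would not be fooled by this relabeling, which is why DOM content is an untrustworthy input for safety decisions.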
Anthropic responded quickly after LayerX's April 27 report, but the fix shipped in extension version 1.0.70 is only a partial mitigation, leaving the core trust model exposed. Attackers can still bypass protections by abusing Claude's "Act without asking" mode or by triggering alternative execution flows. Researchers recommend restricting extension communications to trusted extension IDs, implementing authenticated message signing, and tying user approvals to one-time actions that cannot be replayed. Users are advised to review their installed extensions carefully and to disable autonomous AI browsing modes.
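Two of the recommended mitigations, authenticated message signing and non-replayable approvals, can be sketched together. The shared key, message shape, and in-memory store below are assumptions for illustration, not Anthropic's design; a real deployment would provision keys per install rather than hardcoding them.

```typescript
import { createHmac, randomUUID, timingSafeEqual } from "node:crypto";

// Assumption for the sketch only; never hardcode keys in practice.
const SHARED_KEY = "demo-key";

// Mitigation 1: authenticated message signing. Only a holder of the key
// can produce a signature the handler will accept.
function sign(payload: string): string {
  return createHmac("sha256", SHARED_KEY).update(payload).digest("hex");
}

function verify(payload: string, sig: string): boolean {
  const expected = Buffer.from(sign(payload), "hex");
  const given = Buffer.from(sig, "hex");
  // Constant-time comparison to avoid leaking the signature byte by byte.
  return expected.length === given.length && timingSafeEqual(expected, given);
}

// Mitigation 2: one-time approvals. Each user consent mints a nonce that
// is consumed on use, so a replayed request (the "approval looping"
// technique) fails the second time.
const pendingApprovals = new Set<string>();

function grantApproval(): string {
  const nonce = randomUUID();
  pendingApprovals.add(nonce);
  return nonce;
}

function consumeApproval(nonce: string): boolean {
  return pendingApprovals.delete(nonce); // true exactly once, false on replay
}
```

Together these close the two gaps the article names: signing binds a message to a trusted origin regardless of which page it transited, and single-use nonces make each approval unrepeatable.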
Ultimately, the vulnerability reveals fundamental security gaps in integrating powerful AI assistants into browsers without proper isolation and trust verification.
Editorial Opinion
ClaudeBleed exposes a critical mismatch between Chrome's extension permission model and the capabilities of modern AI assistants. While Anthropic's rapid response is commendable, the partial nature of the fix suggests deeper architectural weaknesses in how AI systems verify trust boundaries and user intent. This incident should serve as a sobering reminder that as language models gain autonomous access to user data and actions, security assumptions built for less powerful tools are dangerously inadequate. The industry must rethink how AI assistants integrate with browsers and operating systems before trust boundaries become meaningless.

