Anthropic's Claude Desktop Faces Privacy Scrutiny for Pre-Installing Browser Extension Files Without User Consent
Key Takeaways
- Claude Desktop installs browser extension configuration files without user disclosure or consent, pre-authorizing extensions for browsers not yet installed
- The practice potentially violates the EU's ePrivacy Directive by modifying system configurations that affect third-party applications without explicit permission
- The undisclosed Native Messaging bridge runs at user privilege level outside the browser sandbox, creating a potential escalation path for prompt injection exploits
Summary
Privacy consultant Alexander Hanff has raised serious concerns about Anthropic's Claude Desktop for macOS, which installs Native Messaging manifest files that pre-authorize browser extensions for multiple browsers without user knowledge or consent, even for browsers not yet installed on the device. The practice involves writing a file named "com.anthropic.claude_browser_extension.json" that grants the Claude extension automated access to Chromium-based browsers, enabling features such as form-filling, webpage reading, and screenshot capture.

Hanff contends this constitutes a violation of the EU's ePrivacy Directive (Article 5(3)) and potentially breaches computer access and misuse laws, characterizing it as a "dark pattern" and "spyware." The concern is amplified by the fact that Claude for Chrome has a reported 23.6% vulnerability rate to prompt injection attacks, meaning a successful attack could escalate from the sandboxed extension to a privileged binary running outside the browser's protections. Anthropic has not responded to requests for comment on the allegations.
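For context, Chrome's documented Native Messaging host manifest format shows what a file like this typically contains. The sketch below follows that schema; the `path`, description, and extension ID placeholder are illustrative assumptions, not the actual contents of Anthropic's file, which have not been published:

```
{
  "name": "com.anthropic.claude_browser_extension",
  "description": "Illustrative native messaging host entry (hypothetical values)",
  "path": "/Applications/Claude.app/Contents/MacOS/claude-native-host",
  "type": "stdio",
  "allowed_origins": [
    "chrome-extension://<extension-id>/"
  ]
}
```

Once such a manifest exists in a browser's NativeMessagingHosts directory, any extension matching `allowed_origins` can launch the binary at `path` and exchange messages with it over stdio, outside the browser sandbox.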
Hanff further argues that the installation is difficult to discover and remove, lacks transparency, and amounts to forced bundling across vendor trust boundaries.
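Users who want to check for the file themselves can look in the standard macOS NativeMessagingHosts directories. A minimal sketch, assuming the Chrome, Edge, and Brave paths documented by those browsers (other Chromium-based browsers use analogous locations); the manifest name comes from the article:

```shell
MANIFEST="com.anthropic.claude_browser_extension.json"

for dir in \
  "$HOME/Library/Application Support/Google/Chrome/NativeMessagingHosts" \
  "$HOME/Library/Application Support/Microsoft Edge/NativeMessagingHosts" \
  "$HOME/Library/Application Support/BraveSoftware/Brave-Browser/NativeMessagingHosts"
do
  if [ -f "$dir/$MANIFEST" ]; then
    # Report the file; remove it with: rm "$dir/$MANIFEST"
    echo "found: $dir/$MANIFEST"
  fi
done
```

Deleting the manifest revokes the pre-authorization for that browser, though an application update could reinstate it.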
Editorial Opinion
If Hanff's analysis is correct, Anthropic's approach represents a troubling lapse in the company's stated commitment to AI safety and responsible practices. Silently pre-configuring system files that extend an AI tool's reach into browsers users haven't even chosen to use contradicts the transparency and user autonomy that should be foundational to trustworthy AI development. The fact that this configuration is written even before the target browsers exist on the machine suggests a deliberate strategy to bypass user attention rather than an inadvertent technical decision.


