BotBeat
POLICY & REGULATION · Anthropic · 2026-04-20

Anthropic's Claude Desktop Faces Privacy Scrutiny for Pre-Installing Browser Extension Files Without User Consent

Key Takeaways

  • Claude Desktop installs browser extension configuration files without user disclosure or consent, pre-authorizing extensions for browsers that are not yet installed
  • The practice potentially violates the EU's ePrivacy Directive by modifying system configurations that affect third-party applications without explicit permission
  • The undisclosed Native Messaging bridge runs at user privilege level outside the browser sandbox, creating a potential escalation path for prompt injection exploits
Source: Hacker News (https://www.theregister.com/2026/04/20/anthropic_claude_desktop_spyware_allegation/)

Summary

Privacy consultant Alexander Hanff has raised serious concerns about Anthropic's Claude Desktop for macOS, which installs Native Messaging manifest files that pre-authorize browser extensions for multiple browsers without user knowledge or consent, even before those browsers are installed on the device. The installer places a file named "com.anthropic.claude_browser_extension.json" that grants the Claude extension automated access to Chromium-based browsers, enabling features such as form-filling, webpage reading, and screenshot capture. Hanff contends this violates Article 5(3) of the EU's ePrivacy Directive and potentially breaches computer access and misuse laws, characterizing the practice as a "dark pattern" and "spyware." The concern is amplified by Claude for Chrome's reported 23.6% vulnerability rate to prompt injection attacks: a successful injection could escalate from the sandboxed extension to a privileged binary running outside browser protections. Anthropic has not responded to requests for comment on the allegations.

  • The installation is difficult to discover and remove, lacks transparency, and amounts to forced bundling across vendor trust boundaries
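For background, Chrome's Native Messaging mechanism authorizes a native binary via a small JSON manifest (with fields such as `name`, `path`, `type`, and `allowed_origins`) placed in a browser-specific directory, which is what makes manifests installable before the browser itself exists. As a minimal sketch, assuming Chrome's documented per-user manifest directories on macOS (other Chromium-based browsers follow the same pattern), a user could audit which native hosts are registered:

```python
import json
from pathlib import Path

# Per-user Native Messaging host manifest directories on macOS for some
# common Chromium-based browsers (Chrome's documented default locations;
# a system-wide directory such as /Library/Google/Chrome/NativeMessagingHosts
# may also exist).
MANIFEST_DIRS = [
    Path.home() / "Library/Application Support/Google/Chrome/NativeMessagingHosts",
    Path.home() / "Library/Application Support/Chromium/NativeMessagingHosts",
    Path.home() / "Library/Application Support/Microsoft Edge/NativeMessagingHosts",
]

def find_native_hosts(dirs=MANIFEST_DIRS):
    """Return {manifest_path: parsed_manifest} for every host manifest found."""
    found = {}
    for d in dirs:
        if not d.is_dir():
            continue  # browser not installed, or no hosts registered
        for manifest in sorted(d.glob("*.json")):
            try:
                found[str(manifest)] = json.loads(manifest.read_text())
            except (OSError, json.JSONDecodeError):
                continue  # unreadable or malformed manifest; skip it
    return found

if __name__ == "__main__":
    for path, data in find_native_hosts().items():
        # "path" names the native binary; "allowed_origins" lists which
        # extension IDs may open a channel to it.
        print(path, "->", data.get("path"), data.get("allowed_origins"))
```

On an affected machine, a manifest named "com.anthropic.claude_browser_extension.json" would surface in each browser directory it was written to, whether or not that browser is installed.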

Editorial Opinion

If Hanff's analysis is correct, Anthropic's approach represents a troubling lapse in the company's stated commitment to AI safety and responsible practices. Silently pre-configuring system files that extend an AI tool's reach into browsers users haven't even chosen to use contradicts the transparency and user autonomy that should be foundational to trustworthy AI development. That the configuration is written even for browsers that do not yet exist on the machine suggests a deliberate strategy to bypass user attention rather than an inadvertent technical decision.

Regulation & Policy · Ethics & Bias · AI Safety & Alignment · Privacy & Data

© 2026 BotBeat