BotBeat

Anthropic · RESEARCH · 2026-05-12

Anthropic Patches Critical Remote Code Execution Vulnerability in Claude Code

Key Takeaways

  • A Remote Code Execution vulnerability in Claude Code's deeplink handler allowed arbitrary command execution through settings injection via crafted URLs
  • The root cause was the eagerParseCliFlag function naively parsing process.argv without context, treating option arguments as independent flags
  • Attackers could inject hooks containing arbitrary bash commands that would execute when Claude Code was launched via a malicious deeplink
Source: Hacker News (https://0day.click/recipe/2026-05-12-cc-rce/)

Summary

Security researcher Brian McNulty discovered a remote code execution (RCE) vulnerability in Claude Code's deeplink handling that allowed arbitrary command execution through settings injection. The vulnerability exploited overly eager CLI flag parsing in the eagerParseCliFlag function, which naively scanned the entire command line for strings matching --settings=... without properly tracking flag context or distinguishing between actual command-line options and arguments passed to those options. An attacker could craft a malicious claude-cli://open deeplink containing injected settings—including SessionStart hooks with bash commands—that the settings parser would incorrectly interpret as top-level configuration rather than arguments to the --prefill option. Anthropic patched the vulnerability in Claude Code version 2.1.118, and McNulty responsibly disclosed the findings after the fix was released.

  • The vulnerability has been patched in version 2.1.118; responsible disclosure occurred post-patch
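The parsing flaw described above can be illustrated with a minimal Python sketch. The actual implementation of eagerParseCliFlag is internal to Claude Code and not public, so the function bodies, the treatment of --prefill as a value-taking option, and the payload string below are all assumptions reconstructing the pattern the researcher describes, not Anthropic's code:

```python
def eager_parse_cli_flag(argv, flag):
    """Sketch of the vulnerable pattern: match any token starting with
    '--<flag>=' anywhere in argv, regardless of whether that token is a
    real top-level option or the *argument* of a preceding option."""
    prefix = f"--{flag}="
    for token in argv:
        if token.startswith(prefix):
            return token[len(prefix):]
    return None


def context_aware_parse(argv, flag):
    """Sketch of the context-aware fix: skip any token consumed as the
    value of a value-taking option, so an injected string passed as an
    option argument is never mistaken for a top-level flag."""
    value_options = {"--prefill"}  # hypothetical set of value-taking options
    prefix = f"--{flag}="
    i = 1  # argv[0] is the program name
    while i < len(argv):
        token = argv[i]
        if token in value_options:
            i += 2  # the next token is this option's argument: skip it
            continue
        if token.startswith(prefix):
            return token[len(prefix):]
        i += 1
    return None


# A crafted claude-cli://open deeplink could smuggle a settings payload
# into the position of --prefill's argument (payload is illustrative):
argv = ["claude", "--prefill", '--settings={"hooks":{"SessionStart":"evil"}}']

print(eager_parse_cli_flag(argv, "settings"))  # naive scan honors the payload
print(context_aware_parse(argv, "settings"))   # None: payload stays an argument
```

The difference is precisely the "context" the article refers to: the naive scan sees only token shapes, while the fixed version tracks which tokens have already been claimed as option arguments.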

Editorial Opinion

This vulnerability highlights the subtle but critical security risks of overly permissive argument parsing in developer tools—particularly in initialization code executed before formal CLI parsing. The incident underscores how context-aware parsing is essential when handling deeplinks and early-stage configuration loading, where validation assumptions can easily be violated. Responsible disclosure by the researcher allowed Anthropic to patch the issue before public awareness, setting a positive precedent for security research in the AI tooling ecosystem.

Tags: MLOps & Infrastructure · Cybersecurity · AI Safety & Alignment

More from Anthropic

  • OPEN SOURCE · Anthropic Releases Prempti: Open-Source Guardrails for AI Coding Agents (2026-05-12)
  • PRODUCT LAUNCH · Anthropic Unleashes Computer Use: Claude 3.5 Sonnet Now Controls Your Desktop (2026-05-12)
  • PARTNERSHIP · SpaceX Backs Anthropic with Massive Data Centre Deal Amidst Musk's OpenAI Legal Battle (2026-05-12)
