Anthropic Patches Critical Remote Code Execution Vulnerability in Claude Code
Key Takeaways
- A remote code execution vulnerability in Claude Code's deeplink handler allowed arbitrary command execution through settings injection via crafted URLs
- The root cause was the eagerParseCliFlag function naively parsing process.argv without context, treating option arguments as independent flags
- Attackers could inject hooks containing arbitrary bash commands that would execute when Claude Code was launched via a malicious deeplink
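The parsing flaw described above can be illustrated with a minimal sketch. The function and option names below (other than the reported eagerParseCliFlag behavior it mimics) are hypothetical, not Claude Code's actual implementation: a naive scan treats every argv token as a potential flag, while a context-aware scan skips tokens that are values of preceding options.

```typescript
// Hypothetical sketch of the reported flaw: a naive eager scan of argv
// matches "--settings=..." even when that token is the *argument* of
// another option, not a top-level flag.
function eagerParseSettingsNaive(argv: string[]): string | null {
  for (const arg of argv) {
    // No context tracking: any token matching the prefix is accepted.
    if (arg.startsWith("--settings=")) {
      return arg.slice("--settings=".length);
    }
  }
  return null;
}

// Context-aware variant: tokens consumed as values of value-taking
// options are never interpreted as flags themselves.
function eagerParseSettingsSafe(
  argv: string[],
  optionsTakingValues: Set<string>
): string | null {
  for (let i = 0; i < argv.length; i++) {
    const arg = argv[i];
    if (optionsTakingValues.has(arg)) {
      i++; // skip this option's value; it is data, not a flag
      continue;
    }
    if (arg.startsWith("--settings=")) {
      return arg.slice("--settings=".length);
    }
  }
  return null;
}

// An argv shape like one a malicious deeplink could produce: the
// settings string is really an argument to --prefill (option names
// here are illustrative).
const craftedArgv = ["--prefill", '--settings={"hooks":{}}'];
```

With this argv, the naive parser returns the injected settings payload, while the context-aware parser (told that `--prefill` takes a value) correctly returns null.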
Summary
Security researcher Brian McNulty discovered a remote code execution (RCE) vulnerability in Claude Code's deeplink handling that allowed arbitrary command execution through settings injection. The vulnerability exploited overly eager CLI flag parsing in the eagerParseCliFlag function, which naively scanned the entire command line for strings matching --settings=... without tracking flag context, and so failed to distinguish actual command-line options from arguments passed to those options. An attacker could craft a malicious claude-cli://open deeplink containing injected settings, including SessionStart hooks with bash commands, that the settings parser would incorrectly interpret as top-level configuration rather than as an argument to the --prefill option. Anthropic patched the vulnerability in Claude Code version 2.1.118, and McNulty responsibly disclosed the findings after the fix was released.
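A settings payload of the kind described would only need to define a session-start hook whose command the shell executes on launch. The fragment below is an illustrative sketch of such injected settings, not the actual proof-of-concept; the exact hook schema is assumed from Claude Code's documented hooks configuration, and the command is a harmless placeholder.

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "echo attacker-controlled command runs here"
          }
        ]
      }
    ]
  }
}
```

Because hooks run with the user's privileges whenever a session starts, smuggling a fragment like this through the deeplink's settings injection is sufficient for code execution without any further user interaction.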
Editorial Opinion
This vulnerability highlights the subtle but critical security risks of overly permissive argument parsing in developer tools—particularly in initialization code executed before formal CLI parsing. The incident underscores how context-aware parsing is essential when handling deeplinks and early-stage configuration loading, where validation assumptions can easily be violated. Responsible disclosure by the researcher allowed Anthropic to patch the issue before public awareness, setting a positive precedent for security research in the AI tooling ecosystem.

