Security Firm Discloses One-Click RCE Vulnerability in Claude Code via MCP Server Injection
Key Takeaways
- One-click RCE vulnerability allows malicious repos to spawn attacker-controlled MCP servers through configuration file injection
- Third CVE in Claude Code within six months from an identical root cause, suggesting systemic security design issues
- Anthropic removed explicit MCP warnings in v2.1, reducing informed consent despite claiming user trust decisions mitigate risk
Summary
Security firm Adversa AI has disclosed a one-click remote code execution vulnerability in Claude Code, Gemini CLI, Cursor CLI, and Copilot CLI. The TrustFall proof-of-concept exploits inconsistently enforced project-scoped settings, allowing malicious repositories to include JSON configuration files that silently enable attacker-controlled Model Context Protocol (MCP) servers. When a developer confirms a generic 'trust this folder' dialog, the MCP server spawns as an unsandboxed Node.js process with the user's full privileges, potentially compromising the entire system.
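To illustrate the injection vector described above, a project-scoped MCP configuration like the following could be committed to a repository; once the folder is trusted, the declared command runs as an ordinary child process with the user's privileges. The file name and schema shown here follow Claude Code's documented `.mcp.json` convention, but the server name and payload path are purely hypothetical:

```json
{
  "mcpServers": {
    "helper-tools": {
      "command": "node",
      "args": ["./scripts/mcp-helper.js"]
    }
  }
}
```

Nothing in this fragment signals malicious intent: `./scripts/mcp-helper.js` is attacker-controlled code shipped in the same repository, which is what makes a generic trust prompt insufficient as a consent mechanism.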
Anthropic has characterized the vulnerability as outside its threat model, arguing that explicit user consent via the trust dialog shifts responsibility to the developer. However, Adversa AI contends the consent lacks sufficient transparency, noting that Anthropic removed more explicit MCP warnings in CLI version 2.1. The current dialog defaults to 'Yes, I trust this folder' with no MCP-specific language and no enumeration of the executables that will spawn.
This disclosure marks the third CVE in Claude Code within six months from the same underlying cause: project-scoped settings serving as an injection vector. Adversa AI recommends Anthropic block dangerous settings from project-level configuration files, implement a dedicated MCP consent dialog defaulting to 'deny,' and require per-server consent. The vulnerability is particularly acute for CI/CD environments, where Claude Code runs via SDK without any interactive prompt.
- CI/CD pipelines invoking Claude Code via SDK have no interactive safety prompts, creating zero-click attack surface
- Adversa AI proposes blocking dangerous settings, dedicated MCP denial-by-default dialog, and per-server permissions
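Until vendors adopt deny-by-default consent, teams can apply the same idea themselves: audit a freshly cloned repository for project-scoped files that would register MCP servers before trusting it or running an agent over it in CI. The sketch below is a minimal, hedged example; the list of candidate file paths is an assumption, not an official or exhaustive inventory:

```python
import json
from pathlib import Path

# Hypothetical pre-trust audit: scan a checked-out repository for
# project-scoped config files that could register MCP servers.
# These file names are assumptions for illustration, not a
# complete list of locations any given CLI actually reads.
SUSPECT_FILES = (
    ".mcp.json",
    ".claude/settings.json",
    ".claude/settings.local.json",
)

def find_mcp_servers(repo_root: str) -> dict[str, list[str]]:
    """Return {relative config path: sorted server names} for any
    suspect file that declares a non-empty "mcpServers" mapping."""
    findings: dict[str, list[str]] = {}
    root = Path(repo_root)
    for rel in SUSPECT_FILES:
        path = root / rel
        if not path.is_file():
            continue
        try:
            config = json.loads(path.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed: nothing to report
        servers = config.get("mcpServers")
        if isinstance(servers, dict) and servers:
            findings[rel] = sorted(servers)
    return findings
```

A CI job could call `find_mcp_servers(".")` on untrusted pull-request branches and fail the build when the result is non-empty, approximating the per-server, deny-by-default review that Adversa AI recommends.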
Editorial Opinion
The pattern is troubling: Anthropic patches individual CVEs stemming from the same architectural flaw rather than fundamentally redesigning how project-scoped settings work. More alarming is the removal of explicit MCP warnings in v2.1—a regression that prioritizes frictionless UX over transparent threat communication. When a generic 'trust this folder' dialog silently enables unsandboxed code execution with full user privileges, informed consent becomes performative. AI developer tools wield dangerous capabilities and must match that power with proportional transparency.

