GitHub Copilot CLI Vulnerability Allows Malware Execution Without User Approval
Key Takeaways
- ▸GitHub Copilot CLI contains vulnerabilities allowing arbitrary code execution via indirect prompt injection without user approval
- ▸Attackers can exploit the env command—part of a hard-coded 'read-only' whitelist—to mask dangerous curl and shell operations
- ▸GitHub acknowledged the vulnerability but classified it as a known issue not presenting significant security risk, with no immediate fix planned
Summary
Security researchers have discovered critical vulnerabilities in GitHub's newly released Copilot CLI that allow attackers to bypass command validation systems and execute malicious code without user approval. The exploit leverages indirect prompt injection techniques to download and run malware from external servers, circumventing the tool's human-in-the-loop safety mechanisms. The vulnerability exploits GitHub Copilot's hard-coded list of 'read-only' commands, specifically using the env command to mask dangerous operations like curl and shell execution that would normally require user permission.
The attack chain begins when a user queries the Copilot CLI while exploring code from an untrusted source, such as a cloned repository. Malicious instructions embedded in files like README documents can inject prompts that craft commands bypassing Copilot's validator. By nesting curl and sh commands as arguments to env—a whitelisted command that executes automatically—attackers can download and execute arbitrary code without triggering external URL access checks or approval dialogs.
GitHub responded to the disclosure by acknowledging the issue but classifying it as a 'known issue that does not present a significant security risk.' The company stated it may make the functionality more strict in future updates but offered no immediate timeline for remediation. This response has raised concerns in the security community, particularly given that Copilot CLI just reached general availability two days before the vulnerability was publicly demonstrated.
The vulnerability highlights broader risks associated with AI agent systems that execute code and access external resources. As AI coding assistants become more autonomous and integrated into developer workflows, the security implications of prompt injection attacks and inadequate command validation become increasingly critical to address.
- The exploit can be triggered through malicious content in README files, web search results, or other untrusted sources processed by the AI
- The vulnerability was discovered just two days after Copilot CLI reached general availability


