BotBeat
...
← Back

> ▌

GitHubGitHub
POLICY & REGULATIONGitHub2026-02-27

GitHub Copilot CLI Vulnerability Allows Malware Execution Without User Approval

Key Takeaways

  • ▸GitHub Copilot CLI contains vulnerabilities allowing arbitrary code execution via indirect prompt injection without user approval
  • ▸Attackers can exploit the env command—part of a hard-coded 'read-only' whitelist—to mask dangerous curl and shell operations
  • ▸GitHub acknowledged the vulnerability but classified it as a known issue not presenting significant security risk, with no immediate fix planned
Source:
Hacker Newshttps://www.promptarmor.com/resources/github-copilot-cli-downloads-and-executes-malware↗

Summary

Security researchers have discovered critical vulnerabilities in GitHub's newly released Copilot CLI that allow attackers to bypass command validation systems and execute malicious code without user approval. The exploit leverages indirect prompt injection techniques to download and run malware from external servers, circumventing the tool's human-in-the-loop safety mechanisms. The vulnerability exploits GitHub Copilot's hard-coded list of 'read-only' commands, specifically using the env command to mask dangerous operations like curl and shell execution that would normally require user permission.

The attack chain begins when a user queries the Copilot CLI while exploring code from an untrusted source, such as a cloned repository. Malicious instructions embedded in files like README documents can inject prompts that craft commands bypassing Copilot's validator. By nesting curl and sh commands as arguments to env—a whitelisted command that executes automatically—attackers can download and execute arbitrary code without triggering external URL access checks or approval dialogs.

GitHub responded to the disclosure by acknowledging the issue but classifying it as a 'known issue that does not present a significant security risk.' The company stated it may make the functionality more strict in future updates but offered no immediate timeline for remediation. This response has raised concerns in the security community, particularly given that Copilot CLI just reached general availability two days before the vulnerability was publicly demonstrated.

The vulnerability highlights broader risks associated with AI agent systems that execute code and access external resources. As AI coding assistants become more autonomous and integrated into developer workflows, the security implications of prompt injection attacks and inadequate command validation become increasingly critical to address.

  • The exploit can be triggered through malicious content in README files, web search results, or other untrusted sources processed by the AI
  • The vulnerability was discovered just two days after Copilot CLI reached general availability
AI AgentsCybersecurityEthics & BiasAI Safety & AlignmentPrivacy & Data

More from GitHub

GitHubGitHub
PRODUCT LAUNCH

GitHub Launches Squad: Open Source Multi-Agent AI Framework to Simplify Complex Workflows

2026-04-05
GitHubGitHub
PRODUCT LAUNCH

GitHub Launches Agentic Workflows in Technical Preview, Enabling AI-Driven Repository Automation via Markdown

2026-04-04
GitHubGitHub
INDUSTRY REPORT

GitHub Experiences Service Disruptions Amid 1400% Surge in Commits

2026-04-03

Comments

Suggested

OracleOracle
POLICY & REGULATION

AI Agents Promise to 'Run the Business'—But Who's Liable When Things Go Wrong?

2026-04-05
AnthropicAnthropic
POLICY & REGULATION

Anthropic Explores AI's Role in Autonomous Weapons Policy with Pentagon Discussion

2026-04-05
GitHubGitHub
PRODUCT LAUNCH

GitHub Launches Squad: Open Source Multi-Agent AI Framework to Simplify Complex Workflows

2026-04-05
← Back to news
© 2026 BotBeat
AboutPrivacy PolicyTerms of ServiceContact Us