Critical Prompt Injection Vulnerability Discovered in Claude Code, Gemini CLI, and Copilot GitHub Actions
Key Takeaways
- ▸Three major AI agents (Claude Code Security Review, Gemini CLI Action, and Copilot Agent) are vulnerable to prompt injection attacks through GitHub comments and PR titles
- ▸Attackers can steal repository secrets and API keys by crafting malicious GitHub content that breaks out of prompt context and instructs AI agents to execute commands
- ▸The vulnerability stems from unsanitized interpolation of user-controlled GitHub data into prompts and insufficient tool restrictions in agent execution
Summary
Security researchers have discovered a new class of prompt injection attacks called "Comment and Control" that can hijack AI agents running in GitHub Actions, allowing attackers to steal sensitive credentials and API keys. The vulnerability affects three major AI-powered GitHub Actions: Anthropic's Claude Code Security Review, Google's Gemini CLI Action, and GitHub's Copilot Agent. Attackers can craft malicious GitHub pull request titles, issue bodies, and comments that break out of the prompt context and instruct the AI agents to execute arbitrary commands and leak the host repository's secrets, including ANTHROPIC_API_KEY, GEMINI_API_KEY, and GITHUB_TOKEN.
The attack exploits a fundamental design pattern shared across these AI agents: they read GitHub data (PR titles, issues, comments) as part of their task context without proper sanitization, then execute tools based on the content. The vulnerability was discovered through coordinated responsible disclosure involving Anthropic, Google, and GitHub. The researchers demonstrated that attackers can use GitHub itself as a command-and-control channel, with no external infrastructure needed—an attacker writes a comment, the agent processes it, executes malicious commands, and returns results within GitHub's own platform.
- GitHub's own platform is weaponized as a command-and-control channel, with no external infrastructure required for attacks
Editorial Opinion
This discovery highlights a critical security blind spot in how AI agents are being deployed at scale on GitHub. While prompt injection vulnerabilities have been theoretically discussed, seeing them weaponized across three major vendors simultaneously demonstrates that the industry has not adequately addressed sanitization and tool restriction best practices. The "Comment and Control" technique is particularly elegant in its exploitation of GitHub's architecture itself, turning a collaboration platform into an attack vector. Organizations deploying AI agents in CI/CD pipelines need immediate remediation—namely strict tool allowlisting, prompt sanitization, and secret rotation protocols.

