AI Coding Tool Cline Hit by Supply Chain Attack Through Prompt Injection Vulnerability
Key Takeaways
- ▸A vulnerability chain called "Clinejection" allowed attackers to compromise Cline's AI issue triage bot through prompt injection and publish unauthorized packages to npm
- ▸The attack affected Cline's 5+ million users for eight hours, installing the OpenClaw AI agent on developer machines with auto-updates enabled
- ▸The exploit required only opening a GitHub issue with a malicious title, demonstrating how low-complexity attacks can achieve supply chain compromise
Summary
On February 9, 2026, security researcher Adnan Khan disclosed a critical vulnerability chain dubbed "Clinejection" affecting Cline, a popular AI coding assistant with over 5 million users. The exploit turned Cline's own AI-powered issue triage bot into a supply chain attack vector through a combination of indirect prompt injection, GitHub Actions cache poisoning, and credential mismanagement. Eight days after public disclosure, an unknown attacker exploited the same vulnerability to publish an unauthorized version of the Cline CLI (version 2.3.0) to npm, which remained live for approximately eight hours and installed the OpenClaw AI agent on developer machines that updated during that window.
The attack chain demonstrated how seemingly low-severity AI vulnerabilities can be chained together to achieve high-impact supply chain compromise. The exploit required nothing more sophisticated than opening a malicious GitHub issue with a specially crafted title to trigger prompt injection in Cline's AI bot. From there, attackers pivoted through GitHub Actions cache poisoning and exploited the use of nightly credentials in production environments to gain publishing access to npm. While the actual payload was not overtly destructive, the incident highlighted the potential for pushing arbitrary malicious code to millions of developers with auto-updates enabled.
Security firm Snyk, which has an existing partnership with Cline focused on AI-assisted coding security, analyzed the attack and emphasized that AI agents represent a new attack surface in CI/CD pipelines. The incident underscores the critical need for organizations deploying AI agents in development workflows to implement proper permission boundaries, credential separation, and input validation to prevent similar exploits. The attack serves as a wake-up call for the industry about the security implications of granting AI systems excessive permissions in automated development pipelines.
- AI agents with excessive permissions in CI/CD pipelines represent a significant new attack surface for supply chain attacks
- The incident highlights critical security gaps including prompt injection vulnerabilities, credential mismanagement, and insufficient permission boundaries for AI systems
Editorial Opinion
The Clinejection attack represents a watershed moment for AI security, demonstrating that AI agents are not just productivity tools but potential attack vectors that require the same rigorous security controls as any other infrastructure component. What's particularly alarming is how the attack leveraged the trust and automation developers place in AI systems to bypass traditional security boundaries. As AI agents become more deeply integrated into development workflows with broad permissions, the industry must urgently establish security standards specifically designed for AI systems in CI/CD pipelines, including prompt injection defenses, strict credential isolation, and minimal privilege principles for automated agents.



