LiteLLM Supply Chain Attack Compromises Major AI Agent Framework; Millions of Developers at Risk
Key Takeaways
- LiteLLM, a critical gateway layer for AI agent frameworks with 95M monthly downloads, was compromised via a PyPI account takeover, affecting downstream dependencies across the entire AI ecosystem
- The attack used two sophisticated techniques, embedded malicious code and an auto-executing .pth file, to harvest credentials including API keys, cloud credentials, SSH keys, and CI/CD secrets
- The roughly four-hour exposure window (March 24, 09:00–13:30 UTC) created a massive blast radius; any developer who ran pip install during this period could have pulled the compromised versions as transitive dependencies
Summary
On March 24, 2026, the widely used LiteLLM Python package suffered a critical supply chain attack: an attacker compromised the maintainer's PyPI publishing credentials and released two malicious versions (1.82.7 and 1.82.8). With 95 million monthly downloads and a place in the dependency trees of major AI frameworks including CrewAI, Browser-Use, and DSPy, the compromise potentially reached millions of developers. The payload's most notable technique was a .pth file that executes automatically on Python interpreter startup, enabling credential harvesting without the package ever being explicitly imported.
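The .pth behavior the payload abused is a documented feature of Python's `site` module: any line in a `.pth` file that begins with `import ` is executed as code when the containing directory is processed, which happens for site-packages at every interpreter startup. A benign sketch of the mechanism, using `site.addsitedir` on a temporary directory instead of a real site-packages install (file and attribute names here are illustrative):

```python
import site
import sys
import tempfile
from pathlib import Path

# Write a one-line .pth file. Lines starting with "import " are
# executed by the site module when the directory is processed --
# no explicit `import` of any package is required.
tmp = Path(tempfile.mkdtemp())
(tmp / "demo.pth").write_text("import sys; sys._pth_demo_ran = True\n")

# site.addsitedir() processes .pth files the same way interpreter
# startup does for real site-packages directories.
site.addsitedir(str(tmp))

print(getattr(sys, "_pth_demo_ran", False))  # True
```

In a real install, a .pth file dropped into site-packages runs this way on every `python` invocation, which is what made the payload fire without any `import litellm`.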
The malware exfiltrated API keys, AWS/GCP/Azure credentials, SSH keys, Kubernetes configs, Git credentials, Docker configurations, and CI/CD secrets over encrypted channels to an attacker-controlled domain. The stolen data was encrypted with AES-256-CBC, with the encryption key wrapped by a hardcoded RSA public key, so only the attacker could decrypt the exfiltrated information. Comet, a major AI platform company, treated the incident as critical and audited more than 50 repositories, identifying two compromised CI workflows whose exposure was limited to test credentials.
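For triage, the first question is whether an affected build is installed at all. A minimal checker, assuming the two version strings from the advisory (function names are illustrative, not part of any official tooling):

```python
from importlib import metadata

# The two malicious releases published during the takeover window.
COMPROMISED = {"1.82.7", "1.82.8"}

def version_is_compromised(version: str) -> bool:
    """True if the given litellm version is one of the malicious releases."""
    return version in COMPROMISED

def installed_is_compromised(package: str = "litellm") -> bool:
    """Check the currently installed copy of the package, if any."""
    try:
        return version_is_compromised(metadata.version(package))
    except metadata.PackageNotFoundError:
        return False

print(version_is_compromised("1.82.7"))  # True
print(installed_is_compromised())
```

Note that a clean result today does not rule out exposure: an affected version pulled during the window and later upgraded would still have run its payload.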
- CI/CD pipelines were identified as the highest-risk targets because they hold privileged credentials; developers should immediately audit GitHub Actions logs and rotate any secrets that may have been exposed
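Part of that audit is checking whether any repository pins one of the malicious releases. A sketch that scans requirements-style files in a checkout (the glob pattern and function names are assumptions, not official tooling; lockfile formats like poetry.lock would need their own parsers):

```python
import re
from pathlib import Path

# Matches pins on either malicious release, e.g. "litellm==1.82.7".
PATTERN = re.compile(r"^litellm==(1\.82\.7|1\.82\.8)\s*$", re.MULTILINE)

def scan_requirements(text: str) -> list[str]:
    """Return the compromised versions pinned in a requirements-style file."""
    return PATTERN.findall(text)

def scan_tree(root: Path) -> dict[str, list[str]]:
    """Walk a repo checkout and flag files pinning an affected version."""
    hits: dict[str, list[str]] = {}
    for path in root.rglob("requirements*.txt"):
        found = scan_requirements(path.read_text(errors="ignore"))
        if found:
            hits[str(path)] = found
    return hits

print(scan_requirements("litellm==1.82.7\nrequests==2.31.0\n"))  # ['1.82.7']
```

Even with no hits, environments that resolved litellm as an unpinned transitive dependency during the exposure window should still be treated as potentially affected.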
Editorial Opinion
This attack underscores a critical weakness in the AI infrastructure stack: risk concentrated in widely used but minimally monitored open-source packages. LiteLLM's central role in AI agent frameworks makes it an attractive target with enormous blast radius. The sophisticated payload design, particularly the .pth file technique, shows that attackers are investing serious effort in supply chain compromises against the AI ecosystem. The industry urgently needs stronger package verification: mandatory code signing, automated dependency auditing in CI/CD pipelines, and hardware security keys for open-source maintainers.