LiteLLM Supply Chain Compromise: Malicious Package Deployed Credential Harvesting and Backdoor Access
Key Takeaways
- LiteLLM versions 1.82.7 and 1.82.8 contained malicious code targeting over 50 categories of secrets, including cloud credentials, SSH keys, and Kubernetes configurations
- The compromise was part of TeamPCP's sophisticated multi-ecosystem campaign spanning multiple package managers and platforms, and demonstrated a deep understanding of Python execution models
- AI proxy services that centralize API keys and credentials are high-value attack targets: a compromised upstream dependency exposes every downstream user to credential exfiltration and infrastructure compromise
Summary
LiteLLM, a widely used Python AI proxy package downloaded 3.4 million times daily, was compromised on PyPI: versions 1.82.7 and 1.82.8 shipped with malicious code. The compromised releases deployed a sophisticated three-stage payload capable of harvesting cloud credentials and SSH keys, moving laterally across Kubernetes clusters, and establishing a persistent backdoor for remote code execution. The incident was discovered on March 24, when production systems running LiteLLM exhibited runaway processes and resource exhaustion.
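As a first triage step, teams can check whether an environment is running one of the affected releases. A minimal sketch; the package name `litellm` and the two version strings come from this advisory, while the function names and everything else are illustrative:

```python
# Check whether the installed LiteLLM release is one of the compromised versions.
from importlib.metadata import PackageNotFoundError, version

# Versions named in the advisory as containing malicious code.
COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}


def is_compromised_version(ver: str) -> bool:
    """Return True if the given version string is a known-bad release."""
    return ver.strip() in COMPROMISED_VERSIONS


def installed_litellm_is_compromised() -> bool:
    """Return True if the locally installed litellm package is a known-bad release."""
    try:
        return is_compromised_version(version("litellm"))
    except PackageNotFoundError:
        return False  # litellm is not installed in this environment
```

A positive result warrants treating the host as compromised, which means rotating every credential the process could have read, not merely upgrading the package.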
The LiteLLM compromise was one arm of a coordinated multi-ecosystem supply chain campaign orchestrated by the threat actor group TeamPCP. The campaign cascaded across PyPI, npm, Docker Hub, GitHub Actions, and OpenVSX, initially targeting security tools such as Trivy and Checkmarx KICS before spreading to AI infrastructure. The attack highlights how AI proxy services that concentrate API keys and cloud credentials become high-value targets when upstream dependencies are compromised, exposing every downstream user to credential theft and infrastructure compromise.
Editorial Opinion
This incident underscores a critical weakness in the AI development supply chain: widely adopted infrastructure tools like LiteLLM concentrate sensitive credentials and API access, making them lucrative targets for sophisticated threat actors. The cascading nature of the TeamPCP campaign, which moved from security scanners to core AI infrastructure, shows that no tool is trusted enough to be beyond compromise. Organizations must treat package dependencies with zero-trust rigor, rotate credentials aggressively, and monitor API usage patterns in real time to detect exfiltration.
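One concrete zero-trust practice is to gate CI on a scan of pinned dependencies against known-bad releases. A hedged sketch under assumptions: the denylist entries come from this advisory, while the regex, file format, and function name are illustrative choices:

```python
# Scan requirements.txt-style text for pins of known-compromised releases.
import re

# (package, exact version) pairs known to be malicious, per the advisory.
DENYLIST = {("litellm", "1.82.7"), ("litellm", "1.82.8")}

# Matches simple "name==version" pins; skips comments, extras, and markers.
_PIN = re.compile(r"^\s*([A-Za-z0-9][A-Za-z0-9._-]*)\s*==\s*([0-9][^\s;#\\]*)")


def flag_compromised_pins(requirements_text: str) -> list:
    """Return (package, version) pairs in the text that appear on the denylist."""
    hits = []
    for line in requirements_text.splitlines():
        m = _PIN.match(line)
        if m and (m.group(1).lower(), m.group(2)) in DENYLIST:
            hits.append((m.group(1), m.group(2)))
    return hits
```

A denylist scan only catches versions already known to be bad, so in practice it complements rather than replaces hash pinning (e.g. pip's `--require-hashes` mode), which rejects any artifact whose digest was not vetted in advance.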



