BotBeat
POLICY & REGULATION · LiteLLM · 2026-03-25

LiteLLM Supply Chain Attack Compromises 40,000+ Star AI Routing Library; Credentials Harvester Deployed via Malicious .pth File

Key Takeaways

  • LiteLLM versions 1.82.7 and 1.82.8 contain a malicious .pth file that executes automatically on Python startup, exposing all processes in affected environments regardless of whether they import LiteLLM
  • The attack is a two-stage credential stealer targeting SSH keys, AWS/GCP/Azure credentials, Kubernetes secrets, CI/CD configs, crypto wallets, and API tokens, representing comprehensive infrastructure compromise
  • The compromise originated from a prior attack on Trivy (a dependency in LiteLLM's CI/CD), demonstrating how transitive supply chain vulnerabilities can cascade through the AI ecosystem
Source: Hacker News (https://grith.ai/blog/litellm-compromised-trivy-attack-chain)

Summary

On March 24, 2026, versions 1.82.7 and 1.82.8 of LiteLLM on PyPI were confirmed compromised in a sophisticated supply chain attack. LiteLLM is a widely used LLM routing layer embedded in popular AI agent frameworks including Cline and OpenHands, with over 40,000 GitHub stars. The attack originated from a compromise of Trivy, a security scanning tool used in LiteLLM's CI/CD pipeline, marking an escalation of a campaign that began with a Pwn Request attack against Aqua Security on February 27.
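As a quick first check, teams can compare the installed LiteLLM version against the two releases named above. The sketch below is illustrative, not official remediation guidance; it assumes only the version strings reported in this article are affected:

```python
# Minimal sketch: flag an installed LiteLLM release if it matches one of the
# versions reported as compromised (1.82.7 and 1.82.8).
from importlib import metadata

# Version strings taken from the report; adjust if advisories expand the list.
COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}

def litellm_is_compromised() -> bool:
    """Return True if the installed litellm version is a reported-bad release."""
    try:
        version = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return False  # litellm is not installed in this environment
    return version in COMPROMISED_VERSIONS

if __name__ == "__main__":
    status = "MATCHES a reported-compromised release" if litellm_is_compromised() \
        else "does not match a reported-compromised release"
    print(f"Installed litellm {status}")
```

Note that a version check alone is insufficient here: because the payload ships as a .pth file, environments should also be inspected for leftover artifacts even after upgrading.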

The malicious payload was delivered via a .pth file (litellm_init.pth) included in the wheel package. Because Python automatically executes .pth files in site-packages on interpreter startup, any Python process in an environment with LiteLLM 1.82.8 installed was exposed—even if the code never explicitly imported the library. The payload is a two-stage credential stealer that harvests SSH keys, cloud credentials, API tokens, environment variables, Kubernetes secrets, Docker configs, crypto wallets, CI/CD configurations, and more. Collected data is encrypted with AES-256-CBC and exfiltrated to an attacker-controlled server at models.litellm.cloud.
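The .pth mechanism described above can also be audited defensively. Python's site module adds ordinary .pth lines to sys.path, but executes any line that begins with `import`, which is exactly the hook this payload abuses. A sketch along these lines (the helper name and directory handling are illustrative, not from the incident report) lists such executable lines for manual review:

```python
# Sketch: surface .pth lines in site-packages that Python would *execute*
# at interpreter startup. Lines beginning with "import " or "import\t" are
# run by site.py; all other lines are merely appended to sys.path.
import site
from pathlib import Path

def executable_pth_lines(site_dirs=None):
    """Yield (pth_path, line) pairs for lines that site.py would execute."""
    if site_dirs is None:
        site_dirs = list(site.getsitepackages()) + [site.getusersitepackages()]
    for directory in site_dirs:
        d = Path(directory)
        if not d.is_dir():
            continue  # user site-packages may not exist
        for pth in sorted(d.glob("*.pth")):
            for line in pth.read_text(errors="replace").splitlines():
                if line.startswith(("import ", "import\t")):
                    yield pth, line

if __name__ == "__main__":
    for path, line in executable_pth_lines():
        print(f"{path}: {line!r}")
```

Legitimate tools (e.g. editable installs and some coverage tools) also use import lines in .pth files, so output from a scan like this needs human triage rather than automatic deletion.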

The attack is attributed to a threat actor called TeamPCP, whose signature artifact (tpcp.tar.gz) has appeared across every stage of the campaign. The compromise represents a critical risk to organizations using LiteLLM, particularly those relying on AI agents in production environments, as attackers have gained access to sensitive infrastructure credentials and secrets.

  • Threat actor TeamPCP is orchestrating an ongoing campaign with escalating sophistication, suggesting this is not an isolated incident but part of a broader infrastructure targeting operation

Editorial Opinion

This incident exposes a critical vulnerability in how AI development tools are built and distributed: a single compromised dependency in a CI/CD pipeline can poison a downstream library used by tens of thousands of developers and deployed across mission-critical AI agent systems. The use of .pth file execution—a Python convention designed for legitimate purposes—as an attack vector highlights how established language features can be weaponized at scale. The LiteLLM team's rapid disclosure is commendable, but this underscores the urgent need for stronger verification mechanisms in package repositories, runtime sandboxing of dependency code, and comprehensive supply chain transparency in the AI tooling ecosystem.

Tags: AI Agents · Cybersecurity · Regulation & Policy

