BotBeat

INDUSTRY REPORT · Mistral AI · 2026-05-12

Shai-Hulud Campaign Compromises 160+ npm and PyPI Packages with Valid Cryptographic Signatures

Key Takeaways

  • Between 160 and 416 malicious packages (counts vary by researcher) were published across npm and PyPI with valid cryptographic signatures and SLSA provenance, making detection nearly impossible through traditional verification methods
  • Attackers exploited GitHub Actions vulnerabilities (risky pull_request_target workflows, cache poisoning, OIDC token theft) to gain legitimate publishing credentials and sign packages with authentic keys
  • The malware persists through Claude Code hooks and VS Code auto-run tasks, remaining active even after the malicious packages are uninstalled
Source: BleepingComputer, https://www.bleepingcomputer.com/news/security/shai-hulud-attack-ships-signed-malicious-tanstack-mistral-npm-packages/

Summary

The Shai-Hulud supply chain attack campaign has compromised over 160 npm and PyPI packages, including malicious versions of popular projects such as TanStack, Mistral AI, Guardrails AI, Bitwarden CLI, and official SAP packages. Attributed to the TeamPCP threat group, the attackers exploited critical GitHub Actions vulnerabilities to steal OpenID Connect (OIDC) tokens and publish malicious packages that carry valid SLSA Build Level 3 provenance attestations and legitimate GitHub Actions signatures, making them appear cryptographically authentic to developers.

The attack chain exploited three weaknesses in target repositories: risky pull_request_target workflows, GitHub Actions cache poisoning, and OIDC token theft from runner memory. Once installed, the malware steals developer credentials, including GitHub Actions OIDC tokens, AWS secrets, Kubernetes credentials, HashiCorp Vault tokens, and cryptocurrency wallets, from more than 100 file paths. It persists through Claude Code hooks and VS Code auto-run tasks, remaining active even after the malicious package is uninstalled.
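The first of these weaknesses can be hunted for mechanically. The sketch below is a minimal heuristic, not a complete audit: it flags workflows that combine the `pull_request_target` trigger (which runs with the base repository's secrets) with a checkout of the pull request's head ref, the combination that hands attacker-controlled code privileged credentials.

```python
"""Minimal sketch: flag the 'pull_request_target + head checkout' pattern.

The regexes are illustrative heuristics, not the exact indicators
published by the researchers cited in this report.
"""
import re
from pathlib import Path

# Privileged trigger: runs with base-repo secrets even for fork PRs.
TRIGGER = re.compile(r"^\s*pull_request_target\s*:", re.MULTILINE)
# Checking out the PR's head ref pulls attacker-controlled code.
HEAD_CHECKOUT = re.compile(r"github\.event\.pull_request\.head")

def risky_workflows(repo: Path):
    """Yield workflow files combining pull_request_target with a head checkout."""
    for wf in repo.glob(".github/workflows/*.y*ml"):
        text = wf.read_text()
        if TRIGGER.search(text) and HEAD_CHECKOUT.search(text):
            yield wf

if __name__ == "__main__":
    for wf in risky_workflows(Path(".")):
        print(f"review: {wf}")
```

Running it from a repository root prints any workflow worth a manual review; absence of a match does not imply the repository is safe.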

Security researchers from StepSecurity, Endor Labs, and Aikido report 84 malicious package versions across 42 TanStack packages alone, with evidence of the campaign extending to dozens of other projects. The attackers used the Session P2P network for credential exfiltration, disguising malicious traffic as encrypted messenger communications to evade detection and blocking efforts.

The campaign has run in multiple waves since September, targeting developer secrets from cloud providers, crypto platforms, and infrastructure tools.
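Given the persistence mechanisms described above, affected teams may want to sweep local editor and agent configuration for injected commands. The sketch below is illustrative only: the file paths and substring heuristics are assumptions based on common Claude Code and VS Code configuration layouts, not published indicators of compromise.

```python
"""Minimal sketch: sweep local config files for injected shell commands.

Paths and the SUSPICIOUS token list are assumptions, not official IOCs.
"""
import json
from pathlib import Path

# Substrings that often appear in download-and-execute payloads.
SUSPICIOUS = ("curl ", "wget ", "base64", "node -e", "python -c")

def risky_commands(obj):
    """Recursively yield string values containing suspicious tokens."""
    if isinstance(obj, dict):
        for v in obj.values():
            yield from risky_commands(v)
    elif isinstance(obj, list):
        for v in obj:
            yield from risky_commands(v)
    elif isinstance(obj, str) and any(tok in obj for tok in SUSPICIOUS):
        yield obj

def scan(path: Path):
    """Return suspicious strings found in a JSON config file, if it exists."""
    if not path.is_file():
        return []
    try:
        data = json.loads(path.read_text())
    except (json.JSONDecodeError, OSError):
        return []
    return list(risky_commands(data))

if __name__ == "__main__":
    for cfg in (Path.home() / ".claude" / "settings.json",
                Path(".vscode") / "tasks.json"):
        for cmd in scan(cfg):
            print(f"{cfg}: {cmd}")
```

Any hit warrants manual inspection; a clean sweep does not rule out compromise, since the malware also targets other locations.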

Editorial Opinion

The Shai-Hulud campaign represents a sophisticated and alarming escalation in supply chain attacks that fundamentally undermines the trust model of modern package registries. By leveraging legitimate CI/CD infrastructure and valid cryptographic signatures, attackers have demonstrated that traditional signature verification and provenance attestation are no longer sufficient for securing the software supply chain. Organizations must shift from signature-based trust to runtime behavioral monitoring and zero-trust package installation practices, as the distinction between a legitimate and compromised package has effectively blurred when both carry valid cryptographic proof of origin.
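One concrete zero-trust practice is to pin dependencies by content hash rather than trusting signatures alone. As a minimal sketch (the file names and lockfile key below are hypothetical), this verifies a downloaded npm tarball against the Subresource-Integrity string pinned in `package-lock.json`:

```python
"""Minimal sketch: content-addressed verification of an npm tarball
against the 'integrity' field in package-lock.json (SRI sha512 format).

File names and the lockfile key in the example are hypothetical.
"""
import base64
import hashlib
import json
from pathlib import Path

def sri_sha512(data: bytes) -> str:
    """Return the Subresource-Integrity string ('sha512-<base64>') for data."""
    return "sha512-" + base64.b64encode(hashlib.sha512(data).digest()).decode()

def verify(tarball: Path, lockfile: Path, pkg_key: str) -> bool:
    """True iff the tarball's hash matches the integrity pinned in the lockfile."""
    lock = json.loads(lockfile.read_text())
    pinned = lock["packages"][pkg_key]["integrity"]
    return sri_sha512(tarball.read_bytes()) == pinned

# Hypothetical usage:
# verify(Path("mistralai-1.2.3.tgz"), Path("package-lock.json"),
#        "node_modules/@mistralai/mistralai")
```

A tampered tarball fails the check even if it carries a valid registry signature, which is exactly the gap this campaign exploited.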

MLOps & Infrastructure · Cybersecurity · Privacy & Data · Open Source

