Security Researchers Discover Credential-Stealing Malware in Typosquatted Hugging Face Repository
Key Takeaways
- Users who cloned and executed code from the malicious repository should treat their systems as fully compromised and prioritize reimaging, as the infostealer malware harvests credentials from browsers, password managers, SSH keys, and cryptocurrency wallets
- The attack chain uses jsonkeeper.com as a C2 channel, allowing attackers to update payloads without modifying the repository itself, a notable evasion technique
- All credentials accessed on compromised machines must be rotated from clean devices, including saved passwords, OAuth tokens, browser cookies, cloud provider tokens, Discord sessions, and cryptocurrency seed phrases
Summary
The HiddenLayer Research Team identified sophisticated malware in a Hugging Face repository named 'Open-OSS/privacy-filter' that impersonated OpenAI's legitimate Privacy Filter tool. The malicious repository had appeared among Hugging Face's trending projects and accumulated over 200,000 downloads before the platform removed it. The attack employed a deceptive six-stage chain that executed hidden PowerShell commands to harvest credentials, browser sessions, cryptocurrency wallets, and sensitive authentication tokens from compromised Windows machines.
The malware uses a seemingly legitimate loader.py file that runs decoy code (a fake machine learning model and dataset) before silently disabling SSL verification and fetching commands from jsonkeeper.com, a public JSON paste service. A hidden PowerShell command then downloads update.bat from a blockchain-mimicking domain to perform credential harvesting and system reconnaissance. Security researchers warn that any user who executed files from the repository should assume complete system compromise and recommend immediate reimaging rather than cleanup, along with rotating all credentials accessed from the affected machine across browsers, password managers, cloud providers, and cryptocurrency wallets.
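The behaviors described above translate into concrete static indicators that can be screened for before any code from an unfamiliar repository is executed: disabled TLS verification, PowerShell invocations, and callouts to public paste services. The sketch below is purely illustrative (the indicator list and the `scan_repo` helper are this article's own construction, not HiddenLayer's tooling; a production scanner would rely on vetted YARA rules rather than an ad-hoc regex list):

```python
import re
from pathlib import Path

# Illustrative indicators drawn from the behaviors described in the report.
# This list is an assumption for demonstration, not a complete detection set.
SUSPICIOUS_PATTERNS = [
    r"_create_unverified_context",  # Python SSL certificate checks disabled
    r"verify\s*=\s*False",          # requests/urllib calls with TLS checks off
    r"powershell(\.exe)?\s+-",      # PowerShell launched with flags (often hidden)
    r"jsonkeeper\.com",             # paste service abused as a C2 channel
]

def scan_file(path: Path) -> list[str]:
    """Return the suspicious patterns found in one source file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return []
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, text, re.IGNORECASE)]

def scan_repo(root: str) -> dict[str, list[str]]:
    """Scan script files under root and map each flagged file to its hits."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if path.suffix in {".py", ".bat", ".ps1"} and (found := scan_file(path)):
            hits[str(path)] = found
    return hits
```

A non-empty result is a reason to stop and inspect, not proof of malice (legitimate test fixtures sometimes disable TLS verification too), which is why such checks complement rather than replace repository-level verification.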
The incident exposes the vulnerability of open-source AI platforms to typosquatting attacks and highlights the need for stronger repository verification mechanisms and official project authentication badges.
Editorial Opinion
This incident reveals a dangerous vulnerability at the intersection of open-source trust and social engineering: as Hugging Face democratizes access to powerful AI models, attackers increasingly exploit the platform's legitimacy through sophisticated typosquatting. The six-stage attack chain, complete with decoy code, silent execution, and dynamic C2 updates, demonstrates that threat actors are growing more adept at disguising malware as legitimate AI tooling. Without stronger repository verification systems, cryptographic signing requirements, and trusted-maintainer authentication, Hugging Face and similar platforms risk becoming attack vectors that undermine trust in the entire open-source AI ecosystem.



