Researchers Uncover Supply-Chain Attack Using Invisible Unicode Characters to Bypass Security Defenses
Key Takeaways
- ▸151 malicious packages using invisible Unicode characters were discovered across GitHub, NPM, and Open VSX between March 3-9
- ▸Attackers use Unicode Private Use Area (PUA) characters that are invisible to humans and most security tools but executable in JavaScript interpreters
- ▸Aikido Security and Koi suspect LLMs are being used to generate realistic, high-quality code changes that accompany the malicious payloads, making detection significantly harder
Summary
Aikido Security researchers have discovered a sophisticated supply-chain attack affecting GitHub, NPM, and Open VSX repositories, in which attackers uploaded 151 malicious packages containing code hidden using invisible Unicode characters. The attack leverages Private Use Area (PUA) characters from the Unicode specification, which render as blank space to human reviewers and most security tools but are processed and executed by JavaScript engines. The malicious packages are especially deceptive because their visible portions contain realistic, stylistically consistent changes, such as documentation updates, version bumps, and bug fixes, that pass manual code review.
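To illustrate the general idea, the sketch below hides a payload inside invisible PUA characters appended to an innocuous-looking comment. The byte-to-PUA mapping (offsetting each byte into U+E000) is purely illustrative and is not the exact encoding scheme described by Aikido or Koi.

```javascript
// Sketch: smuggling data in Private Use Area characters.
// hide() maps each UTF-8 byte of the payload to an invisible
// PUA code point (U+E000 + byte); reveal() reverses the mapping.
const hide = (text) =>
  [...new TextEncoder().encode(text)]
    .map((b) => String.fromCodePoint(0xE000 + b))
    .join("");

const reveal = (hidden) =>
  new TextDecoder().decode(
    Uint8Array.from([...hidden], (ch) => ch.codePointAt(0) - 0xE000)
  );

// The hidden suffix renders as blank in most editors and diff views,
// so the carrier line looks like a harmless comment to a reviewer.
const visible = "/* version bump */";
const carrier = visible + hide("alert('payload')");

console.log(reveal(carrier.slice(visible.length))); // alert('payload')
```

A real attack would pair such a decoder with dynamic execution at install or load time; the point here is only that the payload bytes survive round-tripping while contributing nothing visible to the source text.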
Researchers from Aikido Security and security firm Koi suspect the attack group, dubbed Glassworm, is using large language models (LLMs) to generate convincingly legitimate code at scale. The invisible Unicode technique, which was largely dormant until 2024 when hackers began using it to conceal prompts from AI systems, has now evolved into a traditional malware delivery mechanism. The attack underscores a critical vulnerability in current code review and static analysis defenses, which are designed to detect suspicious patterns in visible code but are completely blind to invisible character encodings.
- Traditional defenses including manual code reviews, editors, terminals, and static analysis tools are rendered ineffective against this technique
- The invisible Unicode technique, originally developed decades ago, was repurposed in 2024 to conceal AI prompts before being adapted for traditional malware delivery
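Because these defenses only inspect what renders on screen, a practical countermeasure is to scan source bytes for code points that should never appear in legitimate code. The following is a minimal sketch of such a check; `findPUACharacters` is a hypothetical helper, not a tool from the Aikido or Koi reports, and covers the three PUA ranges defined by the Unicode Standard.

```javascript
// Sketch: flag Private Use Area code points in a source string.
// PUA ranges: U+E000–U+F8FF (BMP), U+F0000–U+FFFFD (plane 15),
// and U+100000–U+10FFFD (plane 16).
function findPUACharacters(source) {
  const hits = [];
  for (const ch of source) { // for…of iterates by code point
    const cp = ch.codePointAt(0);
    if (
      (cp >= 0xE000 && cp <= 0xF8FF) ||
      (cp >= 0xF0000 && cp <= 0xFFFFD) ||
      (cp >= 0x100000 && cp <= 0x10FFFD)
    ) {
      hits.push(cp);
    }
  }
  return hits;
}

const clean = "const version = '1.2.3';";
const tainted = "const version = '1.2.3';\uE001\uE002";

console.log(findPUACharacters(clean).length);   // 0
console.log(findPUACharacters(tainted).length); // 2
```

Wiring a check like this into a CI linter or pre-commit hook would catch the invisible-character trick even though the diff looks clean to a human reviewer.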


