Claude Code Stores Sensitive Credentials in Plaintext, Exposing Developers to Supply Chain Attacks
Key Takeaways
- Claude Code stores plaintext credentials in `.claude/settings.local.json`, including API keys and database passwords from auto-approved commands
- Supply chain attacks bypass `.gitignore` protections entirely, with malware having full disk access to read credentials across the home directory
- Recent documented campaigns like LiteLLM's typosquatting attack have harvested hundreds of thousands of credentials from developer machines through similar plaintext file exfiltration
Summary
A security researcher discovered that Claude Code, Anthropic's AI coding assistant, stores sensitive credentials, including API keys and database passwords, in plaintext within the `.claude/settings.local.json` file on developer machines. The finding reveals a critical vulnerability in how the tool handles auto-approved commands, which can include credentials that persist unencrypted on disk. This discovery comes amid broader concerns about supply chain security, where malicious packages bypass Git's `.gitignore` protections to exfiltrate sensitive data from developer machines, including AWS credentials, SSH keys, shell history, and other secrets stored across the home directory.
The researcher highlighted that while `.gitignore` is treated as a security best practice, it only protects Git repositories and provides no defense against malware with disk access. Recent supply chain attacks, including the LiteLLM typosquatting campaign flagged by former OpenAI/Tesla AI lead Andrej Karpathy, have demonstrated that threat actors can harvest hundreds of thousands of credentials from developer machines by reading well-known plaintext configuration files. The addition of Claude Code's settings file to this attack surface raises questions about how AI coding assistants handle sensitive data and credential management.
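The attack surface described above comes down to a handful of well-known plaintext files under the home directory that any process, benign or malicious, can read. A minimal sketch of that enumeration (the file list is illustrative, drawn from the paths the article mentions):

```python
from pathlib import Path

# Well-known plaintext files that are readable by any process with disk
# access -- .gitignore offers no protection outside a Git repository.
# (Illustrative list; exact paths vary by machine and tooling.)
CANDIDATE_FILES = [
    ".claude/settings.local.json",  # Claude Code auto-approval settings
    ".aws/credentials",             # AWS access keys
    ".ssh/id_rsa",                  # SSH private key
    ".bash_history",                # shell history, may contain secrets
]

def find_exposed_files(home: Path) -> list[Path]:
    """Return the candidate credential files that exist under `home`."""
    return [home / rel for rel in CANDIDATE_FILES if (home / rel).is_file()]
```

Running `find_exposed_files(Path.home())` on a typical developer machine is exactly what a malicious package can do after install; nothing in Git's tooling stands in the way.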
The researcher recommends runtime secret injection and credential management tools, rather than on-disk storage, as a more robust security approach.
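The recommended pattern can be sketched in a few lines: the secret is injected into the process environment at launch by a secrets manager or CI runner and is never persisted to a config file. (The variable name `SERVICE_API_KEY` here is a hypothetical placeholder, not anything Claude Code defines.)

```python
import os

def get_api_key(name: str = "SERVICE_API_KEY") -> str:
    """Fetch a secret injected into the environment at runtime.

    Failing loudly when the variable is absent pushes callers toward
    runtime injection instead of falling back to a plaintext file.
    """
    key = os.environ.get(name)
    if key is None:
        raise RuntimeError(
            f"{name} is not set; inject it at runtime rather than storing it on disk"
        )
    return key
```

Environment variables are still readable by other processes of the same user, so this is a mitigation rather than a cure, but it keeps credentials out of the well-known on-disk files that supply chain malware scans for.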
Editorial Opinion
This discovery exposes a fundamental tension in AI coding assistants: convenience and automation can silently create security liabilities. While Claude Code's auto-approval feature is designed to improve developer workflow, storing plaintext credentials tied to those approvals represents a serious gap in security-first design. As AI tools integrate ever deeper into developer workflows, they must adopt secrets management practices that keep sensitive data off disk entirely, not just hope that `.gitignore` will protect users.