BotBeat

Anthropic | RESEARCH | 2026-04-03

Claude Code Stores Sensitive Credentials in Plaintext, Exposing Developers to Supply Chain Attacks

Key Takeaways

  • Claude Code stores plaintext credentials in `.claude/settings.local.json`, including API keys and database passwords from auto-approved commands
  • Supply chain attacks bypass `.gitignore` protections entirely, since malware with disk access can read credentials across the home directory
  • Recent documented campaigns, such as the LiteLLM typosquatting attack, have harvested hundreds of thousands of credentials from developer machines through similar plaintext file exfiltration
Source: Hacker News, https://rentierdigital.xyz/blog/claude-code-security-secrets-disk

Summary

A security researcher discovered that Claude Code, Anthropic's AI coding assistant, stores sensitive credentials, including API keys and database passwords, in plaintext within the `.claude/settings.local.json` file on developer machines. The vulnerability stems from how the tool records auto-approved commands: those commands can embed credentials, which then persist unencrypted on disk. The discovery comes amid broader concerns about supply chain security, where malicious packages bypass Git's `.gitignore` protections to exfiltrate sensitive data from developer machines, including AWS credentials, SSH keys, shell history, and other secrets stored across the home directory.
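As a rough illustration of the exposure, a short audit script can flag secret-like strings in that settings file before they leak. Only the file path comes from the report; the regex heuristics and the key name used in testing are assumptions for this sketch, not patterns confirmed by Anthropic.

```python
import re
from pathlib import Path

# Path named in the report; adjust for your project layout.
SETTINGS_PATH = Path(".claude/settings.local.json")

# Heuristic patterns for secret-like strings (assumed, not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                    # API-key-style tokens
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key IDs
    re.compile(r"(?i)(?:password|passwd|pwd)\s*=\s*\S+"),  # inline passwords
]

def find_secret_like_values(path: Path) -> list[str]:
    """Return substrings of the settings file that look like credentials."""
    if not path.exists():
        return []
    text = path.read_text()
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits

if __name__ == "__main__":
    for hit in find_secret_like_values(SETTINGS_PATH):
        # Print only a prefix so the audit itself does not leak the secret.
        print("possible plaintext secret:", hit[:8] + "...")
```

A scanner like this catches only known token shapes; it is a stopgap, not a substitute for keeping secrets out of the file in the first place.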

The researcher highlighted that while `.gitignore` is treated as a security best practice, it only protects Git repositories and provides no defense against malware with disk access. Recent supply chain attacks, including the LiteLLM typosquatting campaign flagged by former OpenAI/Tesla AI lead Andrej Karpathy, have demonstrated that threat actors can harvest hundreds of thousands of credentials from developer machines by reading well-known plaintext configuration files. The addition of Claude Code's settings file to this attack surface raises questions about how AI coding assistants handle sensitive data and credential management.
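That attack surface, well-known plaintext files under the home directory, can be enumerated with a small permissions audit. The candidate paths below mirror the examples in the article plus Claude Code's settings file; the list is illustrative, not exhaustive.

```python
import stat
from pathlib import Path

# Well-known plaintext credential locations named in the article, plus
# Claude Code's settings file. Illustrative, not exhaustive.
CANDIDATES = [
    "~/.aws/credentials",
    "~/.ssh/id_rsa",
    "~/.bash_history",
    "~/.claude/settings.local.json",
]

def audit_files(candidates: list[str] = CANDIDATES) -> list[tuple[str, str]]:
    """Report which candidate files exist and whether anyone besides
    the owner can read them."""
    findings = []
    for raw in candidates:
        path = Path(raw).expanduser()
        if not path.exists():
            continue
        mode = path.stat().st_mode
        loose = bool(mode & (stat.S_IRGRP | stat.S_IROTH))
        findings.append((str(path), "group/other-readable" if loose else "owner-only"))
    return findings

if __name__ == "__main__":
    for path, status in audit_files():
        print(f"{path}: {status}")
```

Note that tight permissions only stop other local users: malware running as the developer reads these files regardless, which is the article's core point.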

  • Runtime secret injection and credential management tools (rather than on-disk storage) are recommended as a more robust security approach
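A minimal sketch of the runtime-injection pattern recommended above: the secret is read from the process environment, which a credential manager or CI runner populates at launch, so the value is never written to a configuration file on disk. The variable names here are hypothetical.

```python
import os

class MissingSecretError(RuntimeError):
    """Raised when an expected secret was not injected at launch."""

def get_secret(name: str) -> str:
    """Fetch a secret from the process environment at runtime.

    The environment is expected to be populated when the process starts
    (by a vault agent, CI secret store, or similar), so the value never
    persists in a configuration file on disk.
    """
    value = os.environ.get(name)
    if value is None:
        raise MissingSecretError(f"secret {name!r} was not injected")
    return value

# Hypothetical usage; DB_PASSWORD is an assumed variable name.
# db_password = get_secret("DB_PASSWORD")
```

Failing loudly on a missing secret is deliberate: a silent fallback to a default or an on-disk value would recreate the problem the pattern exists to avoid.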

Editorial Opinion

This discovery exposes a fundamental tension in AI coding assistants: convenience and automation can silently create security liabilities. While Claude Code's auto-approval feature is designed to improve developer workflow, storing plaintext credentials tied to those approvals represents a serious gap in security-first design. As AI tools integrate ever deeper into developer workflows, they must adopt secrets management practices that keep sensitive data off disk entirely, not just hope that `.gitignore` will protect users.

Cybersecurity · AI Safety & Alignment · Privacy & Data

© 2026 BotBeat