Anthropic's Claude Code Stores Unencrypted Session Data and Secrets in Plain Text
Key Takeaways
- Claude Code stores complete, unencrypted session histories and all user prompts in the ~/.claude/ directory, which is absent from Anthropic's official documentation
- Secrets and API credentials passed through tool results are written to plaintext files, creating a credential-exposure risk for developers
- The directory also holds a persistent anonymous identifier and a remotely updated plugin blocklist, raising privacy and security concerns
Summary
A security vulnerability has been identified in Claude Code, Anthropic's AI coding assistant: sensitive user data is stored in plaintext in an undocumented ~/.claude/ directory on each user's machine. The directory contains complete conversation histories, every user prompt, shell environment snapshots, API credentials passed through tool results, and persistent anonymous identifiers, all stored without encryption. Because the directory is absent from Anthropic's official documentation, most users are unaware of what is collected, how it is stored, or where it lives.
The issue was formally reported via a GitHub issue that references a detailed community analysis documenting the directory structure, file formats, and security implications. That analysis shows that secrets passed through tool results are written directly to disk in JSONL and JSON files, creating a significant credential-exposure risk for developers. The ~/.claude/ directory also contains a Statsig stable ID that persists across sessions for anonymous tracking, and a remotely updated plugin blocklist that can be modified during active user sessions.
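Developers who want to gauge their own exposure can audit the directory for credential-shaped strings. The sketch below is illustrative, not authoritative: it assumes JSONL/JSON transcripts live under ~/.claude/ as the community analysis describes, and the secret patterns (an `sk-ant-`-style key, an AWS access key ID, and generic `key=value` assignments) are common examples, not an exhaustive list.

```python
import re
from pathlib import Path

# Illustrative credential patterns; extend for your own environment.
SECRET_PATTERNS = [
    re.compile(r"sk-ant-[A-Za-z0-9_-]{10,}"),            # Anthropic-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID
    re.compile(r"(?i)(api[_-]?key|token|secret)\s*[=:]\s*\S{8,}"),
]

def scan_claude_dir(root: Path) -> list[tuple[Path, int, str]]:
    """Walk JSON/JSONL files under `root`; report (file, line, preview) hits."""
    hits: list[tuple[Path, int, str]] = []
    for path in root.rglob("*.json*"):
        try:
            text = path.read_text(errors="replace")
        except OSError:
            continue  # unreadable file; skip
        for lineno, line in enumerate(text.splitlines(), start=1):
            for pat in SECRET_PATTERNS:
                m = pat.search(line)
                if m:
                    # Truncate the match so the report itself doesn't leak the secret.
                    hits.append((path, lineno, m.group(0)[:12] + "..."))
                    break
    return hits

if __name__ == "__main__":
    for path, lineno, preview in scan_claude_dir(Path.home() / ".claude"):
        print(f"{path}:{lineno}: possible secret ({preview})")
```

Any hit is a candidate for credential rotation, since the underlying file is stored unencrypted.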
This represents both a documentation gap and a material security and privacy concern for Claude Code users, who have no official guidance on what data is being stored, how long it persists, or that it is stored in plaintext. Anthropic has not yet publicly addressed the vulnerability or provided guidance on data protection measures.
- A detailed community analysis is available for Anthropic to incorporate into official documentation, but the company has not yet addressed the security implications or published encryption or data-protection guidance
Editorial Opinion
This disclosure highlights a critical gap between Anthropic's public messaging about AI safety and security and the actual security posture of their deployed products. Storing unencrypted user prompts, conversation histories, and credentials in plaintext on user machines contradicts basic security hygiene and represents a significant privacy risk for the developer community relying on Claude Code. Anthropic must urgently address this by providing encryption for sensitive data at rest, documenting the ~/.claude/ directory in official guidance, and offering users clear controls over data retention and deletion.
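Until such controls exist, a stopgap users can apply today is restricting the directory to owner-only filesystem permissions. This is a minimal sketch, not an Anthropic-provided remedy: it assumes a POSIX system and does not encrypt anything, it only limits which local accounts can read the files.

```python
import stat
from pathlib import Path

def lock_down(root: Path) -> None:
    """Restrict `root` and everything under it to owner-only access.

    Directories become mode 0700 (drwx------), regular files 0600 (-rw-------).
    """
    if not root.exists():
        return
    root.chmod(stat.S_IRWXU)
    for path in root.rglob("*"):
        if path.is_dir():
            path.chmod(stat.S_IRWXU)
        elif path.is_file():
            path.chmod(stat.S_IRUSR | stat.S_IWUSR)

# Example usage (uncomment to apply to the directory discussed above):
# lock_down(Path.home() / ".claude")
```

Note this protects only against other local users; any process running as the owner, including malware, can still read the plaintext files, which is why encryption at rest remains the necessary fix.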