BotBeat

Anthropic
POLICY & REGULATION · 2026-04-04

Anthropic's Claude Code Stores Unencrypted Session Data and Secrets in Plain Text

Key Takeaways

  • Claude Code stores complete, unencrypted session histories and all user prompts in the ~/.claude/ directory without user knowledge or official documentation
  • Secrets and API credentials passed through tool results are written to plaintext files, creating a credential-exposure risk for developers
  • The directory includes persistent anonymous identifiers and remotely updated plugin blocklists, raising privacy and security concerns
Source: Hacker News (https://github.com/anthropics/claude-code/issues/43675)

Summary

A security vulnerability has been identified in Claude Code, Anthropic's AI coding assistant, where sensitive user data is stored in plaintext in an undocumented ~/.claude/ directory on user machines. The directory contains complete conversation histories, all user prompts, shell environment snapshots, API credentials passed through tool results, and persistent anonymous identifiers—all without encryption. Users are unaware of this data collection and storage method because the directory remains completely undocumented in Anthropic's official documentation.

The issue was formally reported via a GitHub issue that references a detailed community analysis documenting the directory structure, file formats, and security implications. According to that guide, secrets passed through tool results are written directly to disk in JSONL and JSON files, creating a significant credential-exposure risk for developers. The ~/.claude/ directory also contains a Statsig stable ID that persists across sessions for anonymous tracking, and a remotely updated plugin blocklist that can be modified during active user sessions.
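Until Anthropic provides official guidance, affected developers can apply a stopgap locally. The sketch below (a hedged example, not official tooling) tightens filesystem permissions on the directory and flags stored transcripts that contain likely credential strings; the `~/.claude/` path comes from the community analysis, and the regex patterns are illustrative rather than an exhaustive secret scanner.

```shell
# Stopgap hardening sketch for Claude Code's local data directory.
# The ~/.claude/ location and JSONL transcript format are as described in
# the community analysis, not in official documentation.
harden_claude_dir() {
  dir="${1:-$HOME/.claude}"
  [ -d "$dir" ] || return 0

  # Remove group/other access so other local users cannot read transcripts.
  chmod -R go-rwx "$dir"

  # List stored files containing likely credential strings (AWS access key
  # IDs, PEM private keys). Patterns are examples only.
  grep -rEl 'AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----' \
    "$dir" 2>/dev/null
}
```

Running `harden_claude_dir` with no argument targets `~/.claude/`; passing a path lets you audit an alternate location. This reduces exposure to other local accounts but does not address the underlying plaintext storage, which only encryption at rest can fix.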

This represents both a documentation gap and a material security and privacy concern for Claude Code users, who have no official guidance on what data is being stored, how long it persists, or the plaintext nature of the storage. Anthropic has not yet publicly addressed the vulnerability or provided guidance on data protection measures.

  • A detailed community analysis is available to incorporate into official documentation, but Anthropic has not yet addressed the security implications or provided encryption/data protection guidance

Editorial Opinion

This disclosure highlights a critical gap between Anthropic's public messaging about AI safety and security and the actual security posture of their deployed products. Storing unencrypted user prompts, conversation histories, and credentials in plaintext on user machines contradicts basic security hygiene and represents a significant privacy risk for the developer community relying on Claude Code. Anthropic must urgently address this by providing encryption for sensitive data at rest, documenting the ~/.claude/ directory in official guidance, and offering users clear controls over data retention and deletion.

Cybersecurity · Ethics & Bias · Privacy & Data
