BotBeat

Anthropic · RESEARCH · 2026-02-27

Research Reveals LLMs Generate Highly Predictable Passwords, Raising Security Concerns for Autonomous AI Agents

Key Takeaways

  • Claude generated only 30 unique passwords from 50 attempts, with one password repeating 18 times (36% probability)
  • All generated passwords followed predictable patterns: starting with uppercase letters, avoiding character repetition, and showing uneven character distribution
  • The research highlights fundamental limitations of LLMs in tasks requiring true randomness, as their architecture is designed for pattern recognition rather than entropy generation
Source: Hacker News (https://www.schneier.com/blog/archives/2026/02/llms-generate-predictable-passwords.html)

Summary

New research highlighted by security expert Bruce Schneier demonstrates that large language models produce alarmingly predictable passwords when tasked with password generation. In a test of Anthropic's Claude, all 50 generated passwords started with a letter (most often an uppercase 'G', typically followed by the digit '7'), used highly uneven character distributions, avoided repeating characters entirely, and excluded certain symbols such as asterisks due to Markdown formatting conflicts. Most concerning, the 50 attempts produced only 30 unique passwords, with the most common password appearing 18 times: a 36% frequency, vastly exceeding the 2^-100 probability of any particular truly random 100-bit password.
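To see how extreme that gap is, a short back-of-the-envelope check (a sketch, not part of the original study) compares the observed repetition against the birthday bound for 50 draws from a 100-bit space, under which even a single repeat should essentially never occur:

```python
import math

# Under true randomness: 50 independent draws from a 100-bit space (2**100
# outcomes). The birthday bound approximates the chance of *any* repeat.
n_draws, space = 50, 2**100
p_any_collision = 1 - math.exp(-n_draws * (n_draws - 1) / (2 * space))
print(f"P(any repeat among 50 random 100-bit passwords) ~ {p_any_collision:.3e}")

# Observed in the test: one password accounted for 18 of the 50 attempts.
observed_rate = 18 / 50
print(f"Observed frequency of the most common password: {observed_rate:.0%}")
```

The collision probability under true randomness is on the order of 10^-27, so a 36% repeat rate is not noise but evidence of a strongly biased generator.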

The findings underscore a fundamental limitation of LLMs: their architecture is designed to recognize and generate patterns, not produce true randomness. While this weakness may seem obvious in hindsight, it has serious implications as AI agents become more autonomous. These agents will increasingly need to create accounts and authenticate themselves across systems, requiring secure password generation as a basic capability. The research raises broader questions about authentication protocols for autonomous agents and whether LLMs can reliably perform tasks requiring genuine entropy.

Security commentators noted that while LLMs can describe proper password-generation procedures when asked directly, they fail to apply that knowledge when actually generating passwords, a disconnect between knowing a procedure and executing it. Some experts suggest the solution lies in having LLMs defer to proper cryptographic tools such as /dev/urandom rather than attempting password generation themselves. The broader concern is whether LLMs' inability to produce randomness undermines their reliability in other domains where stochastic processes are assumed to be operating correctly.
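The deferral the experts describe can be sketched in a few lines of Python. The standard-library `secrets` module draws from the operating system's CSPRNG (backed by /dev/urandom or equivalent), so the model's only job is to call the tool rather than "imagine" a random string; the 16-character default and alphabet here are illustrative choices, not a recommendation from the source:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Draw each character uniformly from the OS cryptographic RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each call yields a fresh, uniformly drawn string.
print(generate_password())
```

An agent that shells out to (or imports) a routine like this sidesteps the entropy problem entirely: the language model decides *when* a password is needed, and a cryptographic source decides *what* it is.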

  • The findings raise critical security concerns for autonomous AI agents that need to create accounts and authenticate themselves
  • Experts suggest LLMs should defer to cryptographic tools rather than attempting password generation, revealing a gap between LLMs' knowledge of proper procedures and their practical execution

Editorial Opinion

This research exposes a critical blind spot in the rush toward autonomous AI agents. While it's hardly surprising that pattern-matching machines struggle with randomness, the implications extend far beyond password generation—if LLMs can't reliably produce entropy when needed, what other security-critical tasks are they quietly failing at? The 36% repetition rate for a supposedly random password is less a technical curiosity and more a canary in the coal mine for AI safety, suggesting we may be deploying agents in sensitive roles before understanding their fundamental limitations.

Large Language Models (LLMs) · AI Agents · Cybersecurity · AI Safety & Alignment · Research

More from Anthropic

Anthropic · RESEARCH

Inside Claude Code's Dynamic System Prompt Architecture: Anthropic's Complex Context Engineering Revealed

2026-04-05
Anthropic · POLICY & REGULATION

Anthropic Explores AI's Role in Autonomous Weapons Policy with Pentagon Discussion

2026-04-05
Anthropic · POLICY & REGULATION

Security Researcher Exposes Critical Infrastructure After Following Claude's Configuration Advice Without Authentication

2026-04-05

© 2026 BotBeat