Anthropic · RESEARCH · 2026-02-26

AI-Generated Passwords Proven Highly Vulnerable Due to Predictable LLM Patterns

Key Takeaways

  • Leading LLMs including Claude, ChatGPT, and Gemini generate passwords with only 27 bits of entropy versus the 98 bits expected for secure 16-character passwords
  • All tested models exhibited highly predictable patterns, with Claude favoring 'G' and '7' as starting characters, ChatGPT preferring 'v' and 'Q', and Gemini favoring 'K'
  • LLM-generated passwords draw from narrow character subsets and avoid repeating characters, patterns that appear random but actually indicate a lack of true randomness
Source: Hacker News (https://gizmodo.com/ai-generated-passwords-are-apparently-quite-easy-to-crack-2000723660)

Summary

New research from cybersecurity firm Irregular has revealed that passwords generated by leading large language models are fundamentally insecure and easy to crack despite appearing strong. The study tested Claude, ChatGPT, and Gemini's password generation capabilities and found that all three models exhibited highly predictable patterns that drastically reduce password entropy—a key measure of password strength.

When asked to generate 50 unique 16-character passwords, Anthropic's Claude Opus 4.6 consistently started passwords with uppercase 'G' and used '7' as the second character in most cases. OpenAI's ChatGPT began nearly every password with 'v' and frequently used 'Q' as the second character, while Google's Gemini favored starting with 'K' followed by '#', 'P', or '9'. All three models drew from a narrow subset of available characters rather than using the full alphabet, and notably avoided repeating characters—a pattern that actually reveals lack of true randomness.
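As a rough, hypothetical illustration of the kind of analysis described (not the researchers' code), one can tabulate per-position character frequencies over a batch of generated passwords; heavy skew toward one or two characters in the leading positions is exactly the signature reported for all three models:

```python
import math
from collections import Counter

def position_entropy(passwords, position):
    """Shannon entropy (bits) of the characters observed at a given position."""
    counts = Counter(pw[position] for pw in passwords if len(pw) > position)
    total = sum(counts.values())
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return max(0.0, entropy)  # normalize -0.0 when one character dominates completely

# Hypothetical sample mimicking the reported skew: every password starts
# with 'G' and most use '7' as the second character.
sample = ["G7xKm2pQw9rT4nLs", "G7pLq8nR3vYc5dWb", "G9wTf6kV4jHs2mZe", "G7zBu5cM8qXd3rNt"]

for pos in range(3):
    print(f"position {pos}: {position_entropy(sample, pos):.2f} bits observed")
# A uniform draw from ~94 printable ASCII symbols would give about 6.55 bits per position.
```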

The researchers found that while truly secure 16-character passwords should have approximately 98 bits of entropy, LLM-generated passwords produced only about 27 bits of entropy—making them extremely vulnerable to brute-force attacks that could crack them in seconds rather than the trillions of years required for properly random passwords. The findings raise particular concerns as AI agents increasingly handle coding and security tasks, potentially creating systemic vulnerabilities when they rely on LLMs for password generation. Some models like Gemini do warn users not to use their generated passwords for sensitive accounts, but the research highlights fundamental limitations in LLMs' ability to generate truly random sequences.
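To make the entropy gap concrete, here is some back-of-the-envelope arithmetic (my own figures, assuming a hypothetical attacker testing ten billion guesses per second offline), showing why 27 bits falls in a fraction of a second while 98 bits does not:

```python
# Brute-force time for the entropy figures quoted above, under an assumed
# (hypothetical) offline cracking rate of ten billion guesses per second.
GUESSES_PER_SECOND = 1e10
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (27, 98):
    seconds = 2 ** bits / GUESSES_PER_SECOND   # worst case: try every candidate
    print(f"{bits} bits: {seconds:.3g} s (~{seconds / SECONDS_PER_YEAR:.3g} years)")

# 27 bits: ~0.013 s, i.e. cracked essentially instantly.
# 98 bits: ~3.2e19 s, on the order of a trillion years.
```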

  • The vulnerability extends beyond individual users to AI agents that may use LLMs for password creation in automated workflows and coding tasks
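For agents and scripts that must create credentials programmatically, the standard remedy is to draw characters from an operating-system CSPRNG rather than sampling tokens from a language model. Below is a minimal sketch using Python's stdlib secrets module; this is a generic alternative, not something prescribed by the study:

```python
import secrets
import string

def random_password(length: int = 16) -> str:
    """Pick each character independently from all 94 printable ASCII symbols
    via a CSPRNG: ~6.55 bits per character, about 105 bits for 16 characters."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # e.g. a 16-character string with full entropy
```

The article's ~98-bit benchmark corresponds to a slightly smaller symbol set; either way, the point is that strength comes from the randomness source, not from how complex the string looks.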

Editorial Opinion

This research exposes a fundamental weakness in how LLMs approach randomization—a critical capability for security applications. The fact that leading models from OpenAI, Anthropic, and Google all exhibit similar predictable patterns suggests this is an architectural limitation rather than a simple oversight. As organizations increasingly deploy AI agents with coding and system administration capabilities, the implications extend far beyond individual password choices to potential systemic vulnerabilities in automated security infrastructure.

Large Language Models (LLMs) · Machine Learning · Cybersecurity · AI Safety & Alignment · Research
