AI-Generated Passwords Are Dangerously Predictable, Security Researchers Warn
Key Takeaways
- Claude, ChatGPT, and Gemini produce passwords with predictable patterns despite appearing complex, with entropy estimates of only 20-27 bits versus 98-120 bits for truly random passwords
- AI-generated passwords can feasibly be cracked in hours using specialized brute-force attacks informed by known LLM patterns, despite online checkers claiming they would take centuries
- Users should rely on dedicated password managers rather than asking AI chatbots to generate passwords, as LLMs lack true randomization capabilities
Summary
Security researchers at Irregular have discovered that popular generative AI tools—Claude, ChatGPT, and Gemini—generate passwords that appear complex but are actually highly predictable and vulnerable to attack. Despite passing standard online password strength checkers, the AI-generated passwords contain consistent patterns that could allow hackers to crack them in hours rather than centuries. The researchers tested Claude's Opus 4.6 model 50 times and found only 30 unique passwords, with most starting and ending with identical characters and none containing repeated characters—telltale signs of non-randomness.
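The kind of non-randomness described here, identical characters in the same positions across many samples, can be measured with per-position Shannon entropy. A minimal Python sketch (the sample passwords below are illustrative, not Irregular's data):

```python
import math
from collections import Counter

def positional_entropy_bits(passwords: list[str]) -> float:
    """Sum of per-position Shannon entropies (bits) across a sample of
    equal-length passwords. A generator that keeps reusing the same
    characters in the same positions scores far below the theoretical
    maximum of log2(alphabet_size) * length."""
    n = len(passwords)
    total = 0.0
    for i in range(len(passwords[0])):
        counts = Counter(pw[i] for pw in passwords)
        total += -sum((c / n) * math.log2(c / n) for c in counts.values())
    return total

# Illustrative sample: shared prefix and suffix, variation only in the
# middle -- measured entropy collapses toward zero.
sample = ["K7#xT" + mid + "Qm9!Z"
          for mid in ["abcdef", "abcdeg", "abcdef", "abddef"]]
print(f"{positional_entropy_bits(sample):.1f} bits")  # → 1.6 bits
```

A truly random sample of 16-character passwords over a 94-symbol alphabet would score near log2(94) × 16 ≈ 105 bits on this measure, which is why the 20-27-bit estimates are alarming.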
Using Shannon entropy analysis, Irregular estimated that 16-character LLM-generated passwords had entropy of only 20-27 bits, compared to 98-120 bits for truly random passwords. This means hackers with knowledge of LLM patterns could feasibly brute-force these passwords within hours on decades-old hardware. Tests on OpenAI's GPT-5.2 and Google's Gemini 3 Flash revealed similar consistency issues, particularly at the beginning of password strings. Google's Gemini 3 Pro notably included a security warning against using its generated passwords for sensitive accounts and recommended third-party password managers instead.
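The scale of that gap is easy to sanity-check with back-of-the-envelope arithmetic; the guess rates below are assumptions for illustration, not figures from the report:

```python
def worst_case_seconds(entropy_bits: float, guesses_per_sec: float) -> float:
    """Time to exhaust a keyspace of 2**entropy_bits at a given guess rate."""
    return 2 ** entropy_bits / guesses_per_sec

# Assumed rates (illustrative): 10**4 guesses/s for decades-old hardware,
# 10**12 guesses/s for a modern GPU cracking rig.
print(f"{worst_case_seconds(27, 1e4) / 3600:.1f} hours")  # → 3.7 hours
years = worst_case_seconds(120, 1e12) / (3600 * 24 * 365.25)
print(f"{years:.1e} years")  # → 4.2e+16 years
```

Even granting an attacker a trillion guesses per second, a 120-bit keyspace remains out of reach, while a 27-bit keyspace falls in an afternoon on antique hardware.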
- The predictability extends beyond text generation: when asked to generate an image of passwords written on a Post-It note, Google's image model produced the same predictable patterns
Editorial Opinion
This research highlights a critical blind spot in generative AI systems: while they excel at pattern recognition and mimicking training data, they fundamentally cannot generate true randomness—a requirement that is mathematically essential for cryptographic security. It's encouraging that Google's Gemini 3 Pro includes explicit security warnings against using its generated passwords, but the fact that all three major AI platforms fail at this relatively straightforward task suggests developers need to better understand the limitations of LLMs before deploying them in security-sensitive contexts.
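For contrast, operating systems already expose the cryptographically secure randomness that LLMs lack, and password managers build on it. A minimal sketch of that approach using Python's standard `secrets` module:

```python
import secrets
import string

# 94 printable ASCII symbols -> log2(94) ≈ 6.55 bits per character,
# so a 16-character password carries roughly 105 bits of entropy.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 16) -> str:
    """Draw each character independently from the OS CSPRNG via secrets."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())
```

Unlike an LLM sampling from learned token distributions, every character here is drawn from an unpredictable hardware-seeded source, which is the property cryptographic security actually requires.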


