AI Chatbots Generate 'Strong' Passwords That Are Actually Crackable in Hours, Study Finds
Key Takeaways
- Claude, ChatGPT, and Gemini generate passwords with predictable patterns that can be cracked in hours, despite appearing strong to standard strength checkers
- LLM-generated 16-character passwords measured only 20-27 bits of Shannon entropy, far short of the 98-120 bits a truly random password of the same length would carry
- In testing, Claude returned 18 identical passwords across 50 attempts and placed character classes in consistent positions, evidence that its output is not truly random
- Only Google's Gemini warned users against using AI-generated passwords for sensitive accounts, recommending a dedicated password manager instead
Summary
A security analysis by AI firm Irregular has revealed that popular generative AI tools (Claude, ChatGPT, and Gemini) produce passwords that appear complex but are highly predictable and vulnerable to brute-force attacks. Although online strength checkers claimed the passwords would take centuries to crack, the researchers found that LLM-generated passwords follow consistent patterns and could feasibly be compromised in just a few hours on legacy hardware. In one test, Claude's Opus 4.6 model generated only 30 unique passwords from 50 attempts, 18 of them identical, and none contained repeating characters, itself a sign of non-random generation, since a truly random string will occasionally repeat a character.
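The duplicate-counting check described above is straightforward to reproduce. Below is a minimal sketch (not the study's actual test harness) that tallies how many unique passwords a batch contains and how often the most-repeated one appears; the sample passwords here are invented purely for illustration:

```python
from collections import Counter

def duplicate_report(passwords: list[str]) -> tuple[int, int]:
    """Return (unique_count, max_repeats) for a batch of passwords.

    A CSPRNG producing 16-character passwords should essentially
    never repeat; repeats in a small batch signal non-randomness.
    """
    counts = Counter(passwords)
    return len(counts), max(counts.values())

# Toy batch: five "generation requests", one password returned three times.
batch = ["aB3$xQ9!", "aB3$xQ9!", "aB3$xQ9!", "Zt7&wL2#", "Mn5@kR8%"]
unique, repeats = duplicate_report(batch)
print(unique, repeats)  # 3 unique passwords; the worst offender appears 3 times
```

Applied to real LLM output, a result like Claude's (30 unique out of 50, with one password appearing 18 times) is an immediate red flag.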
The research employed Shannon entropy analysis to quantify the security gap, finding that LLM-generated 16-character passwords had entropy values of only 20-27 bits, compared to the expected 98-120 bits for truly random passwords. Google's Gemini 3 Pro was the only tool to include a security warning advising against using AI-generated passwords for sensitive accounts and recommending third-party password managers instead. The findings underscore a critical limitation of generative AI: while these models excel at producing text that appears convincing and complex, they fundamentally lack true randomness, making them unsuitable for security-critical applications like password generation.
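A rough version of the entropy comparison can be sketched in a few lines of Python. The function below estimates per-character Shannon entropy from observed character frequencies across a sample and scales it by password length; this is an illustrative approximation, not the study's exact methodology. The 98-120 bit benchmark is consistent with a truly random 16-character password drawn from the ~94 printable ASCII characters, which carries 16 × log₂(94) ≈ 105 bits:

```python
import math
from collections import Counter

def shannon_entropy_bits(samples: list[str]) -> float:
    """Estimate per-character Shannon entropy (in bits) from the
    character frequencies observed across a sample of passwords,
    then scale by password length."""
    chars = "".join(samples)
    total = len(chars)
    counts = Counter(chars)
    h_per_char = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return h_per_char * len(samples[0])

# Theoretical ceiling for a truly random 16-char password over
# 94 printable ASCII characters: inside the cited 98-120 bit range.
print(f"{16 * math.log2(94):.1f}")  # ≈ 104.9
```

Biased output, such as symbols always landing in the same positions, concentrates the character distribution and drags the estimate down toward the 20-27 bit range the researchers observed.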
Editorial Opinion
This research exposes a dangerous blind spot in generative AI capabilities: these models can produce outputs that fool both humans and basic security tools, yet they fundamentally cannot generate the cryptographic randomness that password security requires. The irony is troubling: users who ask an AI for a 'strong' password are actually getting weaker security than a password manager would give them. Until LLMs delegate to proper cryptographic randomness, they should never be relied upon for security-critical functions, and platforms like Claude and ChatGPT should either refuse password generation requests or, at minimum, display warnings as prominent as Google's.
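For contrast, a cryptographically sound generator takes only a few lines in any language with access to a CSPRNG. The sketch below uses Python's standard-library `secrets` module, which draws from the operating system's secure randomness source, the same class of generator that dedicated password managers rely on:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a password using a cryptographically secure RNG.

    secrets.choice() draws each character independently from the OS
    entropy source, so every position is uniform over the alphabet:
    ~6.55 bits per character, ~105 bits for a 16-character password.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. a fresh, unpredictable 16-char password
```

Unlike an LLM's sampled text, this output has no learned patterns to exploit, which is exactly the property a password needs.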

