BotBeat
Anthropic · RESEARCH · 2026-03-13

AI Chatbots Generate 'Strong' Passwords That Are Actually Crackable in Hours, Study Finds

Key Takeaways

  • Claude, ChatGPT, and Gemini generate passwords with predictable patterns that can be cracked in hours, despite rating as strong on standard strength checkers
  • LLM-generated 16-character passwords carry only 20-27 bits of entropy, far short of the 98-120 bits expected from true randomness
  • In testing, Claude produced 18 identical passwords across 50 attempts and placed character types in consistent positions, evidence that its output is not truly random
Source: Hacker News (https://www.theregister.com/2026/02/18/generating_passwords_with_llms/)

Summary

A security analysis by AI firm Irregular has found that popular generative AI tools (Claude, ChatGPT, and Gemini) produce passwords that appear complex but are highly predictable and vulnerable to brute-force attacks. Although online strength checkers rated the passwords as taking centuries to crack, the researchers found that LLM-generated passwords follow consistent patterns and could feasibly be compromised in a few hours, even on legacy hardware. In one test, Claude's Opus 4.6 model generated only 30 unique passwords in 50 attempts, 18 of them identical, and none contained repeating characters; a truly random generator would occasionally repeat a character, so their complete absence is itself a sign of non-random generation.
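The article does not publish Irregular's test harness; the sketch below shows the kind of uniqueness tally it describes, run over a made-up batch shaped like the reported result (50 samples, one string repeated 18 times). The password strings and the `duplication_report` helper are illustrative assumptions, not artifacts from the study.

```python
from collections import Counter

def duplication_report(passwords):
    """Tally a batch of generated passwords: how many distinct strings
    appeared, and how often the most common one recurred."""
    counts = Counter(passwords)
    unique = len(counts)
    _, occurrences = counts.most_common(1)[0]
    return unique, occurrences

# Hypothetical batch of 50 "generated" passwords with heavy repetition,
# loosely mimicking the duplication pattern the study reports.
batch = ["Kx9#mPq2@Lw7$Rt4"] * 18 + [f"Aa1!Bb2@Cc3#Dd{i:02d}" for i in range(32)]
unique, top = duplication_report(batch)
print(unique, top)  # → 33 18
```

A cryptographically random generator sampling 16-character passwords would make even a single collision in 50 draws astronomically unlikely, so any repeat at all is a red flag.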

The research employed Shannon entropy analysis to quantify the security gap, finding that LLM-generated 16-character passwords had entropy values of only 20-27 bits, compared to the expected 98-120 bits for truly random passwords. Google's Gemini 3 Pro was the only tool to include a security warning advising against using AI-generated passwords for sensitive accounts and recommending third-party password managers instead. The findings underscore a critical limitation of generative AI: while these models excel at producing text that appears convincing and complex, they fundamentally lack true randomness, making them unsuitable for security-critical applications like password generation.
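The article does not spell out how Irregular computed its entropy figures; one plausible reading is a character-level Shannon estimate scaled by password length, sketched below. The `shannon_entropy_bits` helper is an assumption on my part, but the ~105-bit figure for a uniformly random 16-character printable-ASCII password is plain arithmetic and lands inside the 98-120-bit range cited above.

```python
import math
from collections import Counter

def shannon_entropy_bits(passwords, length=16):
    """Estimate total password entropy from the empirical character
    distribution: H = -sum(p * log2 p) per character, times length."""
    chars = "".join(passwords)
    total = len(chars)
    counts = Counter(chars)
    h_per_char = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h_per_char * length

# A truly random 16-char password drawn uniformly from the 95 printable
# ASCII characters carries log2(95) * 16 bits of entropy.
print(round(math.log2(95) * 16, 1))  # → 105.1
```

By this measure, biased output is cheap to quantify: if a model leans on a small repertoire of characters and positions, the empirical distribution collapses and the estimate drops to the 20-27-bit range the study reports.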


Editorial Opinion

This research exposes a dangerous blind spot in generative AI capabilities: these models can produce outputs that fool both humans and basic security tools, but they fundamentally cannot supply the cryptographic randomness that password security requires. The irony is troubling: users seeking AI assistance with 'strong' passwords are actually getting weaker security. Until LLMs delegate to proper cryptographic randomness, they should never be relied upon for security-critical functions, and platforms like Claude and ChatGPT should either refuse password generation requests or, at minimum, display warnings as prominent as Google's.
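For contrast, the cryptographic randomness the editorial calls for takes only a few lines of standard-library code; Python's `secrets` module, which draws from the operating system's CSPRNG, is one way to get it. The `random_password` helper below is a minimal sketch, not a substitute for a dedicated password manager.

```python
import math
import secrets
import string

def random_password(length=16):
    """Generate a password by sampling uniformly from letters, digits,
    and punctuation using a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = random_password()
# 94 printable characters -> log2(94) * 16 ≈ 105 bits of entropy.
print(pw, f"~{math.log2(94) * len(pw):.0f} bits")
```

Unlike an LLM's sampled text, every character here is independent and uniform, so the full ~105 bits of entropy is actually realized rather than merely implied by the password's appearance.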

Natural Language Processing (NLP) · Cybersecurity · AI Safety & Alignment
