Your AI-generated password isn't random; it just looks that way
Summary
Generative AI tools produce weak, predictable passwords that merely look complex. Their common patterns can leave them open to brute-force attacks within hours. Avoid using LLMs for password generation.
AI chatbots fail password security tests
AI security firm Irregular found that ChatGPT, Claude, and Gemini generate weak passwords that hackers can easily guess. While these tools produce 16-character strings that appear complex, they follow predictable patterns that undermine their security.
Standard password checkers often label these AI-generated strings as "strong" or "unbreakable," with some estimating that a standard PC would need centuries to crack them. These tools fail because they measure only character variety, not the underlying randomness of the generation process.
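To see why such checkers are fooled, here is a minimal sketch of the kind of rule a typical strength meter applies. The function and the sample string are invented for illustration; real meters vary, but most reward length and character variety without asking how the string was produced.

```python
import string

def naive_strength(pw: str) -> str:
    """A toy strength meter: rewards length and character variety,
    but never questions how the string was generated."""
    classes = sum([
        any(c in string.ascii_lowercase for c in pw),
        any(c in string.ascii_uppercase for c in pw),
        any(c in string.digits for c in pw),
        any(c in string.punctuation for c in pw),
    ])
    return "strong" if len(pw) >= 16 and classes == 4 else "weak"

# A templated 16-character string that an LLM might emit over and over
# still scores "strong", because the meter only sees character variety.
print(naive_strength("Kx9#mPq2$vL8nRt4"))  # strong
```

A string an attacker already knows verbatim would pass this check every time, which is exactly the blind spot the researchers describe.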
Irregular researchers discovered that all three major chatbots rely on specific templates. If a hacker understands these internal patterns, they can significantly reduce the time needed for a brute-force attack. The researchers tested several models to confirm these findings:
- Claude Opus 4.6
- OpenAI GPT-5.2
- Google Gemini 3 Flash
- Google Nano Banana Pro
The study highlights a fundamental flaw in using Large Language Models (LLMs) for security. LLMs are designed to produce plausible and predictable text based on training data. This goal is the exact opposite of secure password generation, which requires absolute randomness.
Claude produces repetitive security strings
Researchers ran 50 separate tests using the Claude Opus 4.6 model to generate individual passwords. They used separate conversation windows for each prompt to ensure the model had no memory of previous outputs. The results showed a startling lack of variety.
Out of the 50 passwords generated, only 30 were unique. The model produced 20 duplicates, and 18 of those duplicates were the exact same string of characters. This level of repetition suggests the model relies on a very narrow set of high-probability character sequences.
The team also noted that the vast majority of these passwords started and ended with the same specific characters. Furthermore, none of the 50 passwords contained repeating characters. While this might look "random" to a human, true randomness often includes clusters and repetitions.
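The duplicate tallies above are easy to reproduce in a few lines. The strings below are invented, since Irregular did not publish the raw model outputs; the distribution simply mirrors the reported counts (50 samples, 30 distinct, one string accounting for 18 redundant copies).

```python
from collections import Counter

# Illustrative reconstruction of the duplicate analysis: one made-up
# password repeated 19 times (18 redundant copies), another repeated
# 3 times, plus 28 one-off strings, for 50 samples in total.
passwords = (
    ["Kx9#mPq2$vL8nRt4"] * 19
    + ["Zw3@jFb7!cH5dYs1"] * 3
    + [f"Uq{i:02d}#pW{i:02d}$mZ{i:02d}xY" for i in range(28)]
)

counts = Counter(passwords)
unique = len(counts)                  # 30 distinct strings
duplicates = len(passwords) - unique  # 20 redundant copies

print(unique, duplicates)  # 30 20
```

With a properly random generator, 50 draws from a 16-character space would be distinct with overwhelming probability, so any duplicate at all is a red flag.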
Irregular found similar issues with Google's Gemini 3 Flash and OpenAI's GPT-5.2. Both models showed consistent patterns at the beginning of their generated strings. Even the Nano Banana Pro image generation model failed the test when asked to "draw" a random password on a Post-it note.
Mathematical entropy reveals hidden weaknesses
The research team used the Shannon entropy formula to calculate the actual strength of these passwords. Entropy measures the level of uncertainty or randomness in a data set. A truly secure 16-character password should have a high bit-count to resist modern computing power.
Irregular used two different methods to estimate the entropy of the AI outputs. They looked at character statistics and log probabilities based on the model's internal patterns. The results showed that AI passwords are significantly weaker than they appear:
- 27 bits: Entropy based on character statistics.
- 20 bits: Entropy based on LLM log probabilities.
- 98 bits: Expected entropy for a truly random 16-character string.
- 120 bits: Expected entropy for a high-security random string.
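The character-statistics method can be sketched roughly as follows. The exact methodology behind Irregular's 27-bit figure is not published, so this is an assumption-laden illustration: pool the characters of many sampled passwords, compute Shannon entropy H = -Σ p·log2(p) over the character frequencies, and scale by password length. A templated generator with fixed prefix and suffix (invented here) scores visibly below the truly random baseline.

```python
import math
import secrets
import string
from collections import Counter

def shannon_entropy_bits(samples: list[str]) -> float:
    """Estimate bits per password from pooled character frequencies:
    H = -sum(p * log2(p)) per character, scaled by password length."""
    pooled = Counter("".join(samples))
    total = sum(pooled.values())
    per_char = -sum((n / total) * math.log2(n / total) for n in pooled.values())
    return per_char * len(samples[0])

# Baseline: truly random 16-character strings over the ~94 printable ASCII
# characters carry about 16 * log2(94) ≈ 105 bits. (The study's 98-bit
# figure implies a slightly smaller alphabet, which isn't specified.)
alphabet = string.ascii_letters + string.digits + string.punctuation
random_samples = ["".join(secrets.choice(alphabet) for _ in range(16))
                  for _ in range(1000)]

# A templated generator with a fixed prefix and suffix: half of every
# password is constant, so the estimate drops sharply.
templated_samples = ["KQ9#" + "".join(secrets.choice(string.ascii_lowercase)
                     for _ in range(8)) + "RT4!" for _ in range(1000)]

print(round(shannon_entropy_bits(random_samples)))     # ≈ 105
print(round(shannon_entropy_bits(templated_samples)))  # far lower
```

Note that pooled character statistics still overstate the strength of templated output, because they miss positional structure; that is consistent with the log-probability method yielding an even lower 20-bit estimate.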
These numbers translate to a massive security gap in the real world. A truly random password might take billions of years to crack. An LLM-generated password could be brute-forced in a few hours, even on a computer that is several decades old.
The gap exists because the LLM is not drawing from a truly random source. It is predicting the next character from learned probabilities, so the output is "low entropy": the choices are limited by the model's training and architecture.
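The scale of that gap is plain back-of-the-envelope arithmetic. The guess rate below is an assumption (the study does not state one); it is in the range of a single modern GPU rig, and even far slower hardware makes short work of a 20-bit keyspace.

```python
# Crack-time arithmetic for the entropy figures above, assuming an attacker
# who knows the model's template and tests 10 billion guesses per second.
GUESSES_PER_SECOND = 10_000_000_000
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def average_crack_time_seconds(entropy_bits: float) -> float:
    # On average the password falls after searching half the keyspace.
    return (2 ** entropy_bits / 2) / GUESSES_PER_SECOND

for bits in (20, 27, 98):
    s = average_crack_time_seconds(bits)
    print(f"{bits:>3} bits: {s:.3e} s ({s / SECONDS_PER_YEAR:.3e} years)")
```

At 20 bits the search completes in a fraction of a second; at 98 bits it takes hundreds of billions of years, which is the "billions of years versus hours" gap the researchers describe.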
Hackers find patterns in GitHub code
The predictability of AI passwords has already created a trail of vulnerable data across the internet. Irregular researchers searched for common AI character sequences on GitHub and the wider web. They found thousands of matches in sensitive locations.
These searches returned test code, setup instructions, and technical documentation. Many developers use AI to generate "placeholder" passwords for open-source projects. Because these passwords follow known patterns, they are no longer secure placeholders but active vulnerabilities.
Anthropic CEO Dario Amodei previously stated that AI will likely write the majority of all computer code in the near future. Irregular warns that if this prediction comes true, the passwords embedded in that code will be fundamentally compromised. This could lead to a new era of automated brute-force attacks targeting AI-written software.
The researchers also tested Gemini 3 Pro, which offers three different output options for password generation. While the "randomized alphanumeric" option appeared more secure, the "high complexity" and "symbol-heavy" options followed the same weak patterns. Gemini 3 Pro was the only model to provide a security warning during the test.
Security experts recommend password managers
Irregular concludes that the weakness of AI-generated passwords is unfixable. No amount of prompting or "temperature" adjustments can force an LLM to be truly random. Users and developers should stop using these tools for any security-related task immediately.
Developers who have used LLMs to generate credentials should rotate those passwords as soon as possible. The industry must recognize that AI-assisted development, often called "vibe coding," introduces risks that traditional security scanners might miss. The gap between how random an LLM's output looks and how random it actually is remains a major concern for cybersecurity.
For secure password generation, experts recommend using dedicated tools rather than chatbots. These tools use cryptographically secure random number generators that do not follow predictable linguistic patterns. Recommended alternatives include:
- 1Password
- Bitwarden
- KeePassXC
- Native managers in iOS and Android
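What these tools do differently can be shown in miniature: every character is drawn from the operating system's cryptographically secure random source rather than from a text-prediction model. The sketch below uses Python's standard-library `secrets` module (which wraps the OS CSPRNG); the function name and alphabet are illustrative choices, not any particular manager's implementation.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Draw each character independently from a cryptographically
    secure random source (the OS CSPRNG via the secrets module)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Unlike an LLM, this generator has no preferred prefixes, no templates, and no aversion to repeated characters; every 16-character string over the alphabet is equally likely.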
Google’s Gemini model even suggested using these third-party managers during the Irregular tests. It also recommended passphrases—long strings of random words—as a more secure and memorable alternative to AI-generated character strings. The lesson is clear: if you need a password, let a machine that doesn't "think" generate it for you.