It might seem like a good idea at first, but it turns out that asking a generative AI (GenAI) model to generate a password for you is a bad idea. Research from cybersecurity firm Irregular found that passwords created by GenAI models are too easy to crack.
Irregular asked three popular large language models (LLMs) — ChatGPT, Claude and Gemini — to create unique passwords. The models were told to follow current best practices for strong passwords: each one had to be 16 characters long and mix letters, numbers and special characters. The results looked good on the surface, but when put to the test they were all easy to crack.
It turns out that large language models are not actually good at randomization. Each model fell into predictable patterns across the 50 passwords it generated, making them easy to crack with brute force. “Passwords generated through direct LLM output are fundamentally weak, and this is unfixable by prompting or temperature adjustments,” Irregular said, because “LLMs are optimized to produce predictable, plausible outputs, which is incompatible with secure password generation.”
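The article does not prescribe an alternative, but the standard approach is to draw passwords from a cryptographically secure random source rather than a language model. A minimal sketch in Python using the standard-library `secrets` module (the function name and character set here are illustrative, not from Irregular's research):

```python
import secrets
import string

# Characters matching the criteria in the article: letters, numbers
# and special characters.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Return a password drawn uniformly at random from ALPHABET,
    using the operating system's entropy source via secrets.choice."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = generate_password()
print(pw)        # a fresh 16-character password each run
print(len(pw))   # 16
```

Unlike an LLM, `secrets` is backed by OS-level entropy, so a 16-character password drawn from this 94-symbol alphabet has roughly 104 bits of entropy and no pattern for a brute-force tool to exploit.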