
AI‑generated passwords give attackers an easy path to compromise accounts, threatening both individual users and enterprise security. The finding highlights the need for strict policies against using public LLMs for credential creation.
Large language models excel at mimicking human‑like text, but that strength becomes a liability when they are asked to create passwords. The Irregular study showed that ChatGPT, Claude, and GPT‑5.2 default to recognizable patterns, yielding an average entropy of only 27 bits for a 16‑character string—far below the 98 bits recommended for strong credentials. Repeated outputs, such as Claude’s single password appearing 18 times, illustrate how statistical prediction overrides true randomness, leaving the generated secrets vulnerable to brute‑force attacks. Moreover, the models tend to favor a limited character set, further reducing complexity.
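The entropy gap is easy to see with a back-of-the-envelope calculation, assuming the standard model in which a uniformly random string of length n over an alphabet of size k carries n × log2(k) bits of entropy (the figures here are illustrative, not from the study itself):

```python
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Upper-bound entropy in bits for a string chosen uniformly at random."""
    return length * math.log2(alphabet_size)

# A 16-character password drawn uniformly from all 94 printable ASCII symbols:
print(round(entropy_bits(16, 94), 1))  # ~104.9 bits, comfortably above 98

# The study's ~27 bits over 16 characters implies each position contributes
# the unpredictability of only about 2**(27/16) ≈ 3.2 symbols:
print(round(2 ** (27 / 16), 1))
```

In other words, although the models emit strings that look complex, their character choices are so predictable that each position behaves as if drawn from an alphabet of barely three symbols.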
Enterprises that allow staff to copy and paste AI-suggested passwords expose themselves to rapid credential compromise. At roughly 27 bits of entropy, the search space is only about 2^27, or some 134 million candidates, which commodity hardware can exhaust in seconds. Security leaders therefore recommend banning public chatbot use for any security-sensitive function and mandating cryptographically secure password managers that draw from hardware-based random number generators. Complementary controls such as passkeys, biometric factors, and multi-factor authentication further reduce reliance on memorized secrets and mitigate the systemic risk introduced by AI-generated credentials. Regular audits of password repositories can identify AI-derived entries before they become exploitable.
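The password managers the article recommends work on the same principle as the sketch below: every character is drawn from the operating system's cryptographically secure random source rather than from a statistical text model. A minimal illustration using Python's standard-library `secrets` module (not the article's prescribed tooling, just the underlying technique):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a password from the OS CSPRNG via the secrets module.

    Each character is chosen independently and uniformly, so a
    16-character password over this 94-symbol alphabet carries
    roughly 105 bits of entropy.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Unlike an LLM, `secrets.choice` never favors familiar patterns: two calls with the same inputs are statistically independent, so repeated outputs like the study observed cannot occur by design.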
The episode underscores a broader governance challenge: AI tools are being repurposed for tasks they were never designed to perform. As organizations adopt generative AI, policy frameworks must delineate acceptable use cases and embed continuous monitoring for unintended security outcomes. Training programs that highlight the statistical nature of LLM outputs can curb complacency, while vendors should tune their models to decline password-generation requests outright. Regulators are also beginning to draft guidelines that classify insecure AI-generated credentials as non-compliant under emerging cyber-risk standards. Proactive oversight will ensure that the convenience of AI does not erode the fundamental pillars of cybersecurity.