Cybersecurity News and Headlines

Cybersecurity Pulse

Using AI to Generate Passwords Is a Terrible Idea, Experts Warn
Enterprise • AI • Cybersecurity

ITPro • February 19, 2026

Companies Mentioned

Google (GOOG)

Why It Matters

AI‑generated passwords give attackers an easy path to compromise accounts, threatening both individual users and enterprise security. The finding highlights the need for strict policies against using public LLMs for credential creation.

Key Takeaways

  • AI chatbots produce low‑entropy, predictable passwords.
  • Claude repeated one password 18 times in tests.
  • Estimated entropy of ~27 bits, versus the 98‑bit standard.
  • Gemini warned users that generated passwords are processed on remote servers.
  • Enterprises should ban AI password generation and mandate password managers.

Pulse Analysis

Large language models excel at mimicking human‑like text, but that strength becomes a liability when they are asked to create passwords. The Irregular study showed that ChatGPT, Claude, and GPT‑5.2 default to recognizable patterns, yielding an average entropy of only 27 bits for a 16‑character string—far below the 98 bits recommended for strong credentials. Repeated outputs, such as Claude’s single password appearing 18 times, illustrate how statistical prediction overrides true randomness, leaving the generated secrets vulnerable to brute‑force attacks. Moreover, the models tend to favor a limited character set, further reducing complexity.
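The entropy gap described above can be sketched numerically. A minimal illustration (the function name and the 95‑character printable‑ASCII alphabet are assumptions for the example; the 27‑bit figure is the study's estimate, not a computed value):

```python
import math

def password_entropy_bits(alphabet_size: int, length: int) -> float:
    """Entropy in bits of a password drawn uniformly at random
    from an alphabet of `alphabet_size` characters."""
    return length * math.log2(alphabet_size)

# A 16-character password drawn uniformly from all 95 printable
# ASCII characters carries roughly 105 bits of entropy:
full = password_entropy_bits(95, 16)

# The study's ~27-bit estimate for LLM output implies an effective
# alphabet of only about 2^(27/16) ~ 3 characters' worth of
# unpredictability per position:
effective_alphabet = 2 ** (27 / 16)
```

In other words, even though the generated strings look like 16 random characters, their predictability makes each position worth only a few bits, which is what puts them within brute‑force range.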

Enterprises that allow staff to copy‑paste AI‑suggested passwords expose themselves to rapid credential compromise. At ~27 bits of entropy (roughly 134 million possibilities), attackers can crack such passwords in seconds using commodity hardware. Security leaders therefore recommend banning public chatbot use for any security‑sensitive function and mandating cryptographically secure password managers that draw from hardware‑based random number generators. Complementary controls—passkeys, biometric factors, and multi‑factor authentication—further reduce reliance on memorized secrets and mitigate the systemic risk introduced by AI‑generated credentials. Regular audits of password repositories can quickly identify any AI‑derived entries before they become exploitable.
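For contrast, cryptographically secure generation is a few lines in most languages. A minimal Python sketch using the standard‑library `secrets` module, which draws from the operating system's CSPRNG (the function name and character set here are illustrative, not taken from the article):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a password using the OS CSPRNG via `secrets`,
    drawing uniformly from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
```

With a 94‑symbol alphabet, a uniform 16‑character password carries about 105 bits of entropy, comfortably above the 98‑bit recommendation cited in the study.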

The episode underscores a broader governance challenge: AI tools are being repurposed for tasks they were never designed to perform. As organizations adopt generative AI, policy frameworks must delineate acceptable use cases and embed continuous monitoring for unintended security outcomes. Training programs that highlight the statistical nature of LLM outputs can curb complacency, while vendors should improve model prompts to refuse password‑generation requests outright. Regulators are also beginning to draft guidelines that classify insecure AI‑generated credentials as non‑compliant under emerging cyber‑risk standards. Proactive oversight will ensure that the convenience of AI does not erode the fundamental pillars of cybersecurity.
