Claude Shows LLMs Generating Predictable Passwords, Warns Schneier on Security
Photo by Kevin Ku on Unsplash
50 AI‑generated passwords all start with an uppercase “G” followed by “7,” Schneier reports, revealing stark, repeatable patterns that make LLM‑crafted passwords predictably weak.
Quick Summary
- 50 AI‑generated passwords all start with an uppercase “G” followed by “7,” Schneier reports, revealing stark, repeatable patterns that make LLM‑crafted passwords predictably weak.
- Key company: Claude
Claude’s password‑generation quirks are more than a curiosity; they expose a systemic weakness in using large language models (LLMs) for security‑critical tasks. In a test of 50 AI‑crafted passwords, every string began with an uppercase “G” followed by the digit “7,” and the most frequent password—G7$kL9#mQ2&xP4!w—appeared 18 times, a 36% occurrence rate (Schneier on Security). By contrast, any specific truly random 100‑bit password would appear with probability roughly 1 in 2^100, making Claude’s output orders of magnitude weaker than its length suggests.
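The gap can be made concrete with a short back-of-the-envelope check. The sketch below assumes 16-character passwords drawn from the 94 printable ASCII characters (an assumption about the intended charset, not a figure from the post) and compares the guessing entropy implied by a password that recurs 36% of the time with what a uniform password of the same length would carry:

```python
import math

# Observed: the single most frequent AI-generated password appeared
# 18 times in a sample of 50 (figures from the Schneier post).
p_top = 18 / 50
print(f"Guessing entropy of the top password: {-math.log2(p_top):.2f} bits")

# Ideal: 16 characters drawn uniformly from 94 printable ASCII
# characters (assumed charset and length).
ideal_bits = 16 * math.log2(94)
print(f"Ideal entropy at the same length:     {ideal_bits:.1f} bits")
```

Under these assumptions an attacker's best single guess succeeds 36% of the time (about 1.5 bits of work), versus the roughly 105 bits a uniform 16-character password would require.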
The distribution of characters further underscores the model’s bias. While symbols such as “$” and “#” and digits like “9” and “2” showed up in every sample, other characters—most of the alphabet, the “@” symbol, and the digit “5”—were virtually absent (Schneier). Claude also avoided the asterisk “*”, likely because the model formats its output in Markdown, where “*” has special meaning. This deterministic avoidance of certain characters reduces entropy and creates predictable patterns that attackers could exploit.
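One way to quantify this kind of bias is to pool the characters from a password sample and measure their Shannon entropy. A minimal sketch (the `char_entropy` helper and the toy sample are illustrative, not data from the original experiment):

```python
import math
from collections import Counter

def char_entropy(passwords):
    """Shannon entropy (bits/char) of the pooled character distribution."""
    counts = Counter("".join(passwords))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A uniform draw from 94 printable chars would score ~6.55 bits/char;
# a heavily skewed sample scores far lower.
biased = ["G7$kL9#mQ2", "G7$kL9#mQ2"]  # toy stand-in for a biased sample
print(f"{char_entropy(biased):.2f} bits/char vs {math.log2(94):.2f} ideal")
```

A sample in which whole regions of the charset never appear will sit well below the ideal figure, which is exactly the pattern Schneier describes.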
Claude’s internal logic appears to favor non‑repeating characters, a constraint that paradoxically makes passwords look “less random” to human observers. The model never produced a password with a repeated character, even though randomness would occasionally generate such repeats (Schneier). This self‑imposed rule shrinks the effective keyspace, further inflating the probability of any given password and making brute‑force attacks more feasible.
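The cost of the no-repeat rule is easy to estimate with a birthday-style calculation. Assuming 16-character passwords over 94 printable characters (again an assumption about Claude's charset), a truly uniform generator would produce at least one repeated character most of the time:

```python
from math import prod

charset, length = 94, 16  # assumed charset size and password length

# Probability that a uniformly random password has NO repeated character:
p_no_repeat = prod((charset - i) / charset for i in range(length))
print(f"P(at least one repeated char) = {1 - p_no_repeat:.0%}")
```

Under these assumptions the repeat probability comes out around 74%, so a sample of 50 passwords containing zero repeats is strong evidence of a non-random process, not an unlucky draw.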
The implications extend beyond password creation. As autonomous AI agents begin to provision accounts, authenticate to services, and perform transactions without human oversight, the reliance on LLMs for credential generation could become a systemic risk. Schneier warns that “the whole process of authenticating an autonomous agent has all sorts of deep problems,” suggesting that weak, predictable passwords could be the weakest link in a chain of AI‑driven operations (Schneier).
Industry observers have long cautioned that LLMs excel at pattern completion but falter when true randomness is required. The Claude experiment validates those concerns with empirical data, reinforcing the need for dedicated cryptographic generators rather than repurposing generative AI for security functions. Until LLMs can reliably produce high‑entropy secrets, organizations should treat AI‑generated passwords as unsuitable for production environments and rely on proven password managers or hardware‑based key derivation methods.
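For comparison, generating a high-entropy password correctly takes only a few lines with a cryptographically secure source. A minimal sketch using Python's standard `secrets` module (the charset and length here are illustrative choices):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Draw each character independently from a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Unlike an LLM, `secrets` draws from the operating system's entropy source, so every character, position, and repeat occurs at its natural frequency.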
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.