Lovable’s AI‑built app leaks data of 18,000 users, researcher alleges
Photo by Markus Spiske on Unsplash
18,000 users’ data were exposed, The Register reports, after researcher Taimur Khan uncovered 16 flaws—including six critical—in a Lovable‑hosted AI‑built app that the platform says users must patch themselves.
Quick Summary
- 18,000 users’ data were exposed, The Register reports, after researcher Taimur Khan uncovered 16 flaws—including six critical—in a Lovable‑hosted AI‑built app that the platform says users must patch themselves.
- Key company: Lovable
Lovable’s reliance on Supabase for every vibe‑coded backend turned out to be a single point of failure, according to security researcher Taimur Khan, who identified 16 vulnerabilities in one of the platform’s publicly listed apps — six of them classified as critical — and traced the breach to a flawed implementation of Supabase’s row‑level security (RLS) and role‑based access controls (RBAC) 【The Register】. The app, which serves as a repository for exam questions and grade data, was built entirely by Lovable’s AI‑driven code generator. Because the AI omitted explicit RLS policies, the generated PostgreSQL functions permitted unauthenticated callers to execute privileged operations. Khan described the most egregious flaw as a “logic inversion”: an authentication routine that was supposed to block non‑admin users instead blocked all logged‑in users while granting access to anyone without a session token 【The Register】. In practice, an attacker could retrieve every user record, send bulk emails, delete accounts, alter grades, and read admin‑only communications without presenting any credentials.
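The “logic inversion” Khan describes can be sketched in plain Python. The actual generated code is not public, so the function and field names below are hypothetical; the sketch only illustrates the shape of the bug, where an admin gate ends up denying every authenticated caller and admitting anonymous ones:

```python
def broken_admin_gate(session):
    """Hypothetical inverted check: intended to block non-admins,
    it instead blocks every logged-in user and admits anyone
    who presents no session token at all."""
    if session is None:
        return True   # anonymous caller: access granted (the flaw)
    return False      # any authenticated user, admin or not: denied

def fixed_admin_gate(session):
    """Correct version: require both a live session and the admin role."""
    return session is not None and session.get("role") == "admin"
```

With the broken gate, `broken_admin_gate(None)` returns `True`, which matches the reported behavior: no credentials were needed to reach admin‑only operations.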
The exposed dataset comprised 18,697 records, of which 14,928 were unique email addresses, 4,538 belonged to students, 10,505 to enterprise users, and 870 contained full personally‑identifiable information 【The Register】. The user base spanned K‑12 schools, university departments—including UC Berkeley and UC Davis—and corporate education programs, meaning minors were potentially affected. Khan’s analysis showed that the vulnerability was not an isolated coding mistake but a systemic issue: any Lovable app that fails to manually enable Supabase’s security layers will inherit the same back‑end weaknesses. Because the platform’s terms place the onus of patching on the app creator, the responsibility for the leak falls squarely on developers who rely on the AI without conducting a security review 【The Register】.
Lovable’s rapid growth—nearing 8 million users according to a recent TechCrunch profile of CEO Anton Osika—has amplified the impact of such oversights 【TechCrunch】. The company’s valuation, now $6.6 billion after a $330 million Series C round reported by Reuters, reflects strong market demand for low‑code AI tools, yet the incident underscores a gap between speed of deployment and robustness of security 【Reuters】. While Lovable markets its “vibe‑coding” approach as a democratizing force that eliminates the steep learning curve of traditional software development, the breach illustrates how AI‑generated code can propagate insecure patterns at scale when the underlying infrastructure is not hardened by default.
Industry observers note that Supabase does not enforce RLS or RBAC by default, leaving developers to opt in to these protections — a step that AI generators are currently ill‑equipped to handle autonomously. In a typical Supabase setup, developers write policies that filter rows based on the authenticated user’s ID; without these policies, any query can traverse the entire table. Khan’s findings suggest that Lovable’s code‑generation pipeline does not embed policy scaffolding, resulting in back‑ends that appear functional but are fundamentally insecure 【The Register】. The flaw is compounded by the platform’s “discover” page, which surfaces apps to a wide audience; the compromised app had amassed over 100,000 views and 400 up‑votes before the vulnerabilities were disclosed, indicating substantial exposure before remediation.
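What an RLS policy buys you can be modeled in a few lines of Python. This is a conceptual sketch, not Supabase code: the table and field names are invented, and the filter stands in for a PostgreSQL policy of the form `USING (user_id = auth.uid())`:

```python
# A toy "grades" table; in Supabase this would be a Postgres table.
GRADES = [
    {"user_id": "alice", "grade": "A"},
    {"user_id": "bob",   "grade": "C"},
]

def query_without_rls(_current_user):
    """RLS disabled (the default): every row is visible to any caller,
    including unauthenticated ones."""
    return GRADES

def query_with_rls(current_user):
    """RLS enabled: a policy predicate filters rows to the caller's own,
    analogous to USING (user_id = auth.uid()) in Postgres."""
    return [row for row in GRADES if row["user_id"] == current_user]
```

Without the policy, an anonymous caller receives the full table; with it, each user sees only their own rows and an unknown user sees nothing. This is the opt‑in step the article says Lovable’s generated back‑ends skipped.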
The episode raises broader questions about the responsibility model for AI‑assisted development platforms. Lovable’s current stance—that developers must patch security issues flagged during pre‑publish scans—mirrors the “shared responsibility” model used by cloud providers, yet the average user of a vibe‑coded app may lack the expertise to audit generated code. As the “vibe‑coding” term—Collins Dictionary’s Word of the Year for 2025—gains mainstream traction, regulators and industry bodies may soon demand baseline security guarantees from AI code generators. Until such standards emerge, incidents like the Lovable data leak are likely to recur, turning what was marketed as a shortcut into a vector for large‑scale data exposure.
Sources
- The Register
- TechCrunch
- Reuters
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.