Vibe Coding Security Risks: What Founders Need to Know Before Going Live

In early 2026, a platform called Moltbook launched, built entirely with AI coding tools in a matter of days - the kind of speed that gets quoted in founder Twitter threads as proof that the future has arrived. Within weeks, 1.5 million API tokens had been exposed and 35,000 user email addresses had been leaked.
The founder did not get hacked in the traditional sense. No sophisticated attacker spent months probing the system. The vulnerabilities were not hidden deep in some obscure dependency. They were sitting in the open, the direct output of what the AI tool produced by default, never questioned, never reviewed, shipped straight to production.
This is the post no AI builder platform can write, because writing it honestly would require acknowledging what their tools produce by default when nobody is checking. So nobody writes it. Founders find out the hard way instead - usually in front of real users, sometimes in front of regulators.
Here is what you actually need to know before you go live.
Why Vibe-Coded Apps Are Different
Security vulnerabilities in software are not new. Developers have been making security mistakes for as long as software has existed. What is different about vibe-coded apps is not that they have more vulnerabilities - it is that the vulnerabilities are invisible in a specific way.
When a developer writes insecure code, they at least understand what the code is doing. They might misconfigure something, skip a validation step, or make a wrong assumption. The mistake is theirs and they have the context to find it when something goes wrong.
When an AI tool writes the code, the founder sees the output but not the decisions. The app looks correct. The UI loads. The login page works. Data appears in the table. Nothing about the visible surface tells you whether the layer underneath is secure. The same output that impresses a demo audience and satisfies an early user test can be leaking credentials, skipping authentication checks, and accepting any incoming request as legitimate - all while looking completely normal.
A 2025 study found that 45% of AI-generated code fails basic security tests. Not advanced penetration testing. Basic security tests. That is nearly one in two apps with a security flaw significant enough to fail the first layer of review. The apps that launched anyway - because the founders running them had no way to know what they did not know - are out there.
The Six Places Vibe-Coded Apps Break
There are consistent patterns in how AI-generated apps fail security reviews. Not random failures - specific, repeatable failure modes that appear because AI tools make the same common-case assumptions every time. These are the six that show up most often.
1. Credentials Living in the Code
The most common and most immediately dangerous issue. When an AI tool connects your app to a service - a payment processor, an email provider, a database, an SMS gateway - it needs an API key to make that connection work. The path of least resistance, for a model optimizing to make the feature work immediately, is to put the key directly in the code.
The code works. The feature works. The problem is that code gets committed to version control, and version control often gets connected to GitHub, and GitHub repositories - even private ones - have a consistent history of getting exposed through misconfigured settings, accidental public switches, or third-party app permissions. Tools that scan GitHub for exposed credentials find hundreds of thousands of valid API keys every month.
The correct approach is to put credentials in environment variables - configuration that lives outside the code and gets injected at runtime. An AI tool will do this if you ask for it specifically. It often will not do it by default, because getting the feature working in the demo does not require it.
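The pattern is small enough to show directly. Here is a minimal sketch of reading a credential from the environment and failing fast if it is missing - the variable name `PAYMENT_API_KEY` and the helper name are illustrative, not from any specific provider:

```javascript
// Sketch: load a required credential from the environment at runtime
// instead of hardcoding it in source. Fails loudly if the variable is
// not set, so a missing key surfaces at startup, not in production.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Bad:  const apiKey = "sk_live_abc123";        // ends up in version control
// Good: const apiKey = requireEnv("PAYMENT_API_KEY"); // injected at runtime
```

The "fail loudly" part matters: a hardcoded key silently keeps working after it leaks, while a missing environment variable stops the deploy before it ships.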
2. The Database That Anyone Can Query
This one has numbers attached to it now.
A researcher audited 50 vibe-coded apps across Lovable, Bolt, v0, Cursor, and Claude Code in early 2026. 88% had Supabase row-level security entirely disabled. Not misconfigured. Not partially implemented. Disabled - meaning the database would return any record to any query, with no enforcement at the database level of who was allowed to see what.
Row-level security is the mechanism that enforces, at the database layer, that user A cannot query user B's data. It is not a feature that makes your app look different. It is not a feature that comes up in a demo. It is entirely invisible in the output that an AI builder shows you - which is exactly why it is the most consistently missing piece in vibe-coded apps.
The way most AI tools handle data access is through application-level filtering: the code queries the database and then filters the results to show only the records belonging to the logged-in user. This works in the demo. The vulnerability is that application-level filtering can be bypassed by anyone who knows how to talk directly to the API. An attacker does not need your UI. They need your API endpoint and another user's ID - both of which are often discoverable by reading the JavaScript that ships to every browser.
With row-level security, even a direct API call returns nothing it should not. Without it, the database will hand over whatever it is asked for.
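The bypass described above can be sketched in a few lines. This is an in-memory stand-in, not a real database - actual row-level security is enforced by Postgres policies inside Supabase - but the access pattern is the same:

```javascript
// Illustrative only: an in-memory stand-in for a database, showing why
// filtering on an ID the caller supplies is not access control.
const records = [
  { id: 1, ownerId: "user-a", note: "A's private note" },
  { id: 2, ownerId: "user-b", note: "B's private note" },
];

// Vulnerable pattern: the endpoint trusts an ownerId from the request,
// so an attacker logged in as user-a can simply ask for user-b's rows.
function fetchNotesVulnerable(requestedOwnerId) {
  return records.filter((r) => r.ownerId === requestedOwnerId);
}

// Safer pattern: the filter uses the identity from the verified session,
// never from request parameters. (Row-level security enforces the same
// rule inside the database itself, so even a direct query obeys it.)
function fetchNotesSafe(session) {
  return records.filter((r) => r.ownerId === session.userId);
}
```

The vulnerable version looks identical in the UI, because the UI only ever passes the logged-in user's own ID. The difference appears the first time someone calls the API directly.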
The Lovable showcase finding makes the scope of this concrete: a researcher scanning 1,645 publicly listed apps from Lovable's own platform found that 170 of them - 10.3% - had critical row-level security failures. These were not unfinished apps. They were live, publicly listed, and in front of real users.
3. Authentication That Works Backwards
In the same audit of 50 vibe-coded apps, 24% had authentication logic that was inverted. Authenticated users were locked out of the application. Unauthenticated users - people with no account, no login, no credentials whatsoever - had full access to everything.
This sounds impossible. It happens because AI models write authentication checks that test for a condition and redirect accordingly. When the condition is written backwards - which happens when the model makes an assumption about which direction the check should go - the entire auth layer inverts. The visual output looks identical. The login page renders. The redirect fires. The difference is which users the redirect blocks.
This is not a sophisticated attack vector. It is a configuration error in the code the tool wrote by default, visible only if you test your app with an account that has not logged in.
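The failure mode fits in a few lines. Both guards below render the same pages and fire the same redirects; the only difference is which branch the condition lives on:

```javascript
// Inverted: redirects users WITH a session, admits everyone else.
// This is the backwards check described above, in miniature.
function guardInverted(session) {
  if (session && session.user) {
    return "redirect:/login"; // blocks exactly the wrong users
  }
  return "allow";
}

// Correct: redirects only requests with no valid session.
function guardCorrect(session) {
  if (!session || !session.user) {
    return "redirect:/login";
  }
  return "allow";
}
```

Testing with a logged-in account exercises only one branch of this check. Testing in an incognito window exercises the other - which is why that test catches inversions that every logged-in demo misses.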
4. Admin Panels with No Lock on the Door
Most apps have operations that only certain users should be allowed to perform - deleting records, modifying other users' data, running reports, managing billing. These operations typically live behind an admin interface or in API endpoints reserved for administrators.
In vibe-coded apps, these endpoints frequently lack authentication entirely. The admin panel at /admin or the API route at /api/users/delete was built as a feature, the AI tool wrote the functionality, and the access control - the check that verifies the person calling this endpoint is actually an administrator - was either not added or was added only on the frontend.
Frontend access control is not access control. It is a UI choice that hides a button. Anyone who knows the URL of the endpoint can call it directly, bypassing the UI entirely. This means a delete endpoint without server-side authentication is a delete endpoint anyone with the URL can call.
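What server-side enforcement looks like can be sketched briefly. The session shape and handler signature here are illustrative, not any particular framework's API - the point is that the role check runs on the server, before the operation, on every call:

```javascript
// Sketch: a destructive endpoint that verifies the caller's role
// server-side. Hiding the button in the UI does none of this.
function handleDeleteUser(session, targetUserId, db) {
  if (!session || session.role !== "admin") {
    return { status: 403, body: "Forbidden" }; // rejected before anything runs
  }
  db.delete(targetUserId);
  return { status: 200, body: "Deleted" };
}
```

A quick way to audit this in your own app: call each admin endpoint directly (with curl or fetch) while logged in as a non-admin user. A 403 means the check exists; a 200 means it does not.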
5. Webhooks That Accept Any Caller
When you integrate a payment processor like Stripe, the correct flow for a subscription cancellation or a failed payment is: Stripe sends a webhook to your API endpoint, your endpoint validates that the request actually came from Stripe, then your endpoint updates the database accordingly.
The validation step - verifying the webhook signature using a secret key that only you and Stripe know - is what prevents someone else from sending a fake "payment succeeded" webhook to your endpoint and getting access to a paid feature without paying.
AI tools write webhook handlers that receive the incoming request and process it. They frequently skip the signature validation because the demo does not require it - Stripe's test events work without validation. The endpoint gets deployed to production, the validation is still absent, and anyone who can send an HTTP POST request to your webhook URL can trigger any payment event they want.
A payment succeeded. A subscription activated. A refund processed. All without any actual transaction taking place.
6. Environment Variables That Are Not Actually Private
There is a consistent confusion in AI-generated apps between public environment variables and private ones.
In a Next.js application, for example, any environment variable prefixed with NEXT_PUBLIC_ gets embedded into the JavaScript bundle that ships to every browser. This is intentional - it is how you expose a public API key like a Stripe publishable key or a Mapbox token to the client. The key is meant to be seen.
Private keys - your Stripe secret key, your database connection string, your email service API key - should never be prefixed with NEXT_PUBLIC_. They should live only on the server and never appear in any bundle the browser downloads.
AI tools, when building features quickly, sometimes expose server-side secrets as public environment variables because the feature works either way in development. The app functions correctly. The key works. The difference only becomes visible when someone opens the browser's developer tools, looks at the JavaScript bundle, and reads the secret key directly out of the source.
This is not a theoretical attack. It is a documented pattern in vibe-coded apps, and it requires no hacking skill - just the ability to open a browser's dev tools and search for strings that look like API keys.
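You can audit for this mechanically. A rough heuristic - the name patterns below are a starting point, not an exhaustive rule set - is to flag any variable that is both marked public and named like a server-side secret:

```javascript
// Heuristic audit: flag env var names that carry the NEXT_PUBLIC_
// prefix (shipped to every browser) AND look like server-side secrets.
function findPubliclyExposedSecrets(envNames) {
  const secretHint = /(SECRET|PRIVATE|SERVICE_ROLE|PASSWORD|DATABASE_URL)/i;
  return envNames.filter(
    (name) => name.startsWith("NEXT_PUBLIC_") && secretHint.test(name)
  );
}
```

A publishable key under NEXT_PUBLIC_ is fine by design; a service-role or secret key under that prefix is an incident, because the next build embeds it in the bundle.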
What Happened With Lovable
Separate from the output-level issues above, there is a company-level issue worth understanding.
In December 2025, Lovable raised $330 million at a $6.6 billion valuation. The company's CEO described the product's ambition as becoming "the last piece of software companies ever buy."
Four months later, a security researcher disclosed that every Lovable project created before November 2025 had been exposed for 48 days. Any free account on the platform could access another user's source code, Supabase credentials, AI chat history, and customer data. The researcher's description of how technically difficult the exploit was: "This is not hacking. This is five API calls from a free account."
It was the company's third major security incident in thirteen months.
This is not evidence that Lovable is uniquely malicious or incompetent. It is evidence that platforms optimized for speed of output - for getting to a working demo as fast as possible - build in the direction of speed first and discover the security implications later. The same priority ordering that produces fast UIs also produces platform-level vulnerabilities that affect every user's data simultaneously.
The apps that founders build using these tools inherit a version of that same priority ordering. The tool gets to a working demo. The founder ships it. The security questions come later, usually because something went wrong.
The Specific Risk Profile for Business Apps
Not all apps carry the same risk. A marketing landing page with a contact form has a very different risk profile than a multi-user business app handling financial data, client records, or health information.
The risk profile escalates with each of these factors:
Multiple user roles. The moment your app has more than one user type - owner and employee, admin and client, manager and field rep - you have access control requirements that need to be enforced at the database level, not the UI level. Every vibe-coded app with multiple roles needs an explicit RLS audit before going live.
Financial data. Stripe integrations with missing webhook validation, or subscriptions managed through application state rather than verified server-side, are not theoretical problems. They are the mechanism through which real money moves without authorization.
Third-party credentials. Any integration with an external service requires a secret key. Every secret key in your codebase is a liability. Every secret key that made it into a public bundle is an immediate incident.
Client data. If other people's data lives in your database - contacts, leads, patients, customers - and your row-level security is disabled, you are one curious API call away from a data breach. For businesses in regulated industries, that is not just a reputation problem.
User-generated content. Apps that accept input from users and do something with it - store it, display it to other users, process it - need input validation. SQL injection and cross-site scripting are not sophisticated attacks. They are the first things any security scanner checks.
What Checking Security Actually Looks Like
Most founders who have built with AI tools have never seen a security review, do not know what one involves, and have no framework for knowing what they are missing. Here is a practical checklist - not exhaustive, but covering the failures that appear most often.
Before you go live:
Check your GitHub repository settings. If the repository is public and your code includes API keys, stop. Rotate every key that was ever in the codebase, not just the ones currently there - key rotation invalidates the exposed version and issues a new one.
Open your app in an incognito window. Try to access every page and endpoint without logging in. If any page or data that should require authentication is accessible without it, your authentication check is either missing or inverted.
Find your Supabase dashboard. Open the table editor for your most sensitive table - the one with customer records, financial data, or user information. Look for the RLS settings. If they are disabled, any query to your API can return any record in that table.
Search your codebase for strings that look like API keys - long random alphanumeric strings, anything that starts with sk_, pk_, API_KEY, or similar patterns. If any of those appear in files that are not .env or .env.local, they are exposed.
Check your webhook endpoints if you have payment integrations. Look for the signature verification step. If your Stripe webhook handler processes the request without calling stripe.webhooks.constructEvent(), it accepts any incoming POST.
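The codebase search in the checklist above can be scripted. A rough scan for key-shaped strings might look like this - the prefix list covers Stripe-style keys (sk_, pk_, whsec_) and should be tuned for whatever services your app actually uses:

```javascript
// Rough scan for key-shaped strings in source text. Not exhaustive:
// extend the prefix list for the providers you integrate with.
function findKeyLikeStrings(source) {
  const pattern = /\b(?:sk|pk|rk|whsec)_[A-Za-z0-9_]{16,}\b/g;
  return source.match(pattern) || [];
}
```

Run it over every file outside .env and .env.local; any hit outside those files is a key that has escaped into code.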
After you go live:
Turn on Supabase's security advisor if you are using Supabase. It flags RLS configuration issues, exposed credentials, and weak policies. It is free and it surfaces issues that are invisible from the application layer.
Run your app's URLs through a basic security header checker. Missing Content-Security-Policy, X-Frame-Options, and X-Content-Type-Options headers are low-hanging fruit that tell security tools your app was not reviewed.
Watch your API logs for unexpected patterns - requests to endpoints that do not exist, unusually high request volumes from single IPs, requests with unusual parameter combinations. These are not definitive signs of attack but they are signs worth investigating.
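The header check above is simple enough to automate against your own responses. A minimal version, comparing names case-insensitively as HTTP requires:

```javascript
// Report which of the baseline security headers are absent from a
// response. Input is a plain object of header names to values.
function missingSecurityHeaders(responseHeaders) {
  const required = [
    "content-security-policy",
    "x-frame-options",
    "x-content-type-options",
  ];
  const present = new Set(
    Object.keys(responseHeaders).map((h) => h.toLowerCase())
  );
  return required.filter((h) => !present.has(h));
}
```

An empty result does not mean the app is secure - it means one cheap, visible signal of review is in place.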
The Honest Truth About This Category
AI builders are not going to add a security review step to their build flow. The entire value proposition of these tools is speed - you describe something, it appears, immediately. A security review before output would slow that down. It would add questions a non-technical founder cannot answer. It would make the tool feel harder to use than it is. And it would force the tool to acknowledge, on the way to every build, that the thing it is about to produce might have significant security gaps.
That is not going to happen.
The responsibility for security review sits with whoever is deploying the app to production - which, in most cases, is the founder. And most founders are building their first production app, have no background in security, and have no way of knowing what the right questions even are.
This is the gap in the current category. Speed to demo is solved. The tools are genuinely good at that. Security between demo and production is entirely on you.
If you are building something where other people's data, other people's money, or other people's trust is involved - and most business apps involve at least one of those things - the security review is not optional. It is the difference between launching a product and launching a liability.
The practical version of this is straightforward: know what you built, know what the tool assumed when it built it, and verify the assumptions before real users depend on them. Every vulnerability listed in this post is findable before launch. Most of them are fixable in an afternoon. None of them are obvious in the UI.
The founders who find them before launch are the ones who thought to look.