Hold the security: a vibe-coding story
Vibe-coded doesn't mean vibe-secure, so we should try to stop the internet filling up with even more broken things

On the morning of Friday 6th Feb, holdtheline.org.uk appeared. It takes your postcode, finds your Labour MP and sends them an email asking them to back the Prime Minister, who was facing challenges to his leadership. Guido Fawkes picked it up within hours. I got curious about who made it: the site says it’s not affiliated with the Labour Party, which only piqued my interest further.
That question turned out to be less interesting than other things.
The site was made with Lovable, one of the new wave of AI-powered tools that let you describe what you want and get a working web application back. Lovable generates quite pretty frontends (in React) and wires them up to Supabase, a hosted database with authentication, an API, and serverless functions. Anyone can go from idea to live website in an afternoon, without having to think about any infrastructure - or indeed, any code. No technical knowledge required.
I want to say from the off that I’m a cautious but growing fan: vibe coding, or perhaps more optimistically put, “AI-assisted engineering”, is an impactful, important technology that’s going to help us do great things. But being impactful also means that it’ll do really terrible things if we’re not careful. And because it’s new, we’re all still learning how to use it effectively, without the terrible things happening.
This story is a case in point: Lovable does a pretty mixed job of making things secure. It does try, but it seems to get confused quite often, leading to the publication of websites with more or less no security.1 In 2025, security researcher Matt Palmer found that over 170 Lovable-built apps had exposed databases due to missing or insufficient security policies. So: it gives you a working app, but it’s your job to make sure it’s actually locked down.
For most websites, the database is a backend detail that you, as the user, never need to think about. But Supabase doesn’t work that way. A Supabase project lets the user’s browser make requests directly to the database, rather than having a web server mediate them. To make that possible, the project has credentials - a key - that your browser uses for those database requests, and that key is designed to be public.
Instead of locking down access to the database entirely, Supabase relies on something called Row-Level Security (RLS) to control what the key actually lets you do. It means the website developer can design a very granular set of permissions, allowing your browser to make database requests for your data, but not anyone else’s.
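To make that concrete, here’s a minimal sketch of what that browser-side access looks like. The project URL, key and table name are placeholders, not the real site’s; the point is just that this code runs in anyone’s browser, with a key anyone can read.

```typescript
import { createClient } from "@supabase/supabase-js";

// The URL and "anon" key below are placeholders. On a real Supabase-backed
// site, both are shipped to every visitor's browser - they're designed to
// be public, because the browser talks to the database directly.
const supabase = createClient(
  "https://example-project.supabase.co",
  "public-anon-key"
);

// A hypothetical table of campaign signups. With sensible RLS policies,
// this query returns only the rows the current user is allowed to see
// (or nothing at all). With RLS missing or misconfigured, it can return
// every row in the table - to anyone who cares to ask.
const { data, error } = await supabase.from("signups").select("*");
console.log(data, error);
```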
Think of it as the difference between giving someone a key to the building and giving them a key to every room inside it. The key is supposed to get you through the front door and into your own room, but no further - so you can’t go into anyone else’s. But on those sites Matt Palmer found, and almost certainly on many more, the key got you into every room.
I don’t know if that was the case for holdtheline.org.uk: without permission, it’s not something that can legally be tested. But, while it would have been illegal to try, it wouldn’t have been at all difficult. This is partly because every Supabase project discloses the structure of its database to the world, by design: if it didn’t, your browser wouldn’t know how to interact with it.
This design is convenient, but from a security perspective quite unwise: it lets an attacker see exactly what is being stored and in what form, even data that’s never used or surfaced by the website. And it’s a feature of every Lovable site, including holdtheline.org.uk.2
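As I understand it, this disclosure happens because Supabase’s REST layer (built on PostgREST) publishes a machine-readable description of the database at the API root. Something like this - with a placeholder project URL and key - is enough to list every exposed table:

```typescript
// Placeholder project URL and public key. PostgREST, which sits behind
// Supabase's REST API, serves an OpenAPI description of the database at
// the API root - table names, column names and types included.
const response = await fetch("https://example-project.supabase.co/rest/v1/", {
  headers: {
    apikey: "public-anon-key",
    Authorization: "Bearer public-anon-key",
  },
});

const schema = await response.json();
// The "definitions" section of the OpenAPI document lists every exposed table.
console.log(Object.keys(schema.definitions ?? {}));
```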
But for holdtheline.org.uk, the exposure of the database wasn’t as concerning as the way emails were being sent to MPs. The site was configured to send emails via Resend, a popular email service. The user’s browser looked up the MP’s email address via the Parliament API and passed it to Resend. And because all of this happened in the browser, it was trivial to manipulate. Anyone who wanted to could send the campaign message to any email address, and make it appear to come from anyone they chose. It would have been easy for an attacker to send thousands of copies to every MP - emails that would have been indistinguishable from genuine ones from constituents.
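I don’t have the site’s actual code, but the vulnerable shape is easy to sketch: the browser calls a serverless function (the function name and fields here are hypothetical) and supplies the recipient and sender details itself. Nothing stops anyone calling the same function from a script with whatever values they like:

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  "https://example-project.supabase.co",
  "public-anon-key"
);

// The intended flow: the browser looks up the MP's address and passes it,
// plus the "constituent's" details, to a serverless function that relays
// the message via Resend. Function name and fields are hypothetical.
await supabase.functions.invoke("send-email", {
  body: {
    to: "mp@parliament.uk",      // whatever address the caller chooses
    fromName: "A. Constituent",  // whatever name the caller chooses
    message: "Please back the PM...",
  },
});
// Because the browser supplies every field, an attacker can loop over a
// list of MPs and send thousands of messages that look like real ones.
```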
I emailed the site’s contact address the evening it appeared and connected with the creator on Bluesky shortly after. To his credit, he responded quickly and positively, engaged constructively with the details, and got all of these issues fixed. The site now appears locked down, with RLS policies in place, open signup disabled, rate limiting on email sending, and the email-sending function moved onto the server instead of running in the browser.
There’s a deeper issue here though: none of this is the site creator’s fault. Someone had a political moment they wanted to respond to, they used an AI tool to ship something overnight, and it worked, and things got done. That’s exactly the promise of these platforms. It’s exactly what they’re for.
But the arrival of these tools has - like every development tool that makes it easier to get more done - raised the security stakes. Tools like Lovable make building and deploying new websites trivial. In this case, one that collects personal data from real people: names, email addresses, postcodes, political opinions. They produce code that works, that looks professional, and that will pass a cursory glance from someone who isn’t a security specialist. But sometimes - often enough that it matters - what they build is wide open, and the tools don’t go to the trouble of checking, or of telling you.
They are, in short, a massive footgun for the uninitiated.
This isn’t really an AI problem. It’s the same problem we’ve always had with any technology that lowers the barrier to entry: the barrier to doing it securely doesn’t lower at anything like the same rate. The difference is that when a team ships something, there’s at least a chance that someone in the room has thought about security before.3 When a non-developer builds something with an AI at four in the morning, there isn’t. And because it’s cheap and easy to build these things there are going to be a lot of them, and thereby, a lot more insecure things on the internet than there already are (which is a lot).
The irony is that these tools are capable of implementing things reasonably securely. The fixes here were straightforward: lock down the security policy, disable open signup, move the MP email lookup server-side so the client can’t manipulate it. If you asked Lovable to do these things, it probably would. In the case of holdtheline.org.uk, it probably did. But you have to know to ask, and if you’re using Lovable to build your website in the first place, you probably don’t.
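For what it’s worth, here’s a sketch of what the server-side version might look like, as a Supabase Edge Function (these run on Deno). Everything here is illustrative - lookupMpEmail stands in for whatever Parliament API call the real site uses - but the point is that the browser only ever sends a postcode; the recipient and the sender are decided on the server, where the client can’t touch them.

```typescript
// Hypothetical Supabase Edge Function (Deno). The browser sends only a
// postcode; the server decides who the email goes to and how it's sent.
Deno.serve(async (req) => {
  const { postcode, message } = await req.json();

  // Stand-in for the real lookup against the Parliament members API -
  // the crucial part is that it happens here, not in the browser.
  const mpEmail = await lookupMpEmail(postcode);
  if (!mpEmail) {
    return new Response("No MP found for that postcode", { status: 400 });
  }

  // Send via Resend's REST API. The API key stays in a server-side secret.
  const res = await fetch("https://api.resend.com/emails", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${Deno.env.get("RESEND_API_KEY")}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      from: "campaign@example.org",
      to: mpEmail,
      subject: "A message from a constituent",
      text: message,
    }),
  });

  return new Response(null, { status: res.ok ? 200 : 502 });
});

// Placeholder: a real implementation would query the Parliament API and
// return the MP's published contact address, or null if none is found.
async function lookupMpEmail(postcode: string): Promise<string | null> {
  console.log("lookup for", postcode);
  return null;
}
```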
Thankfully, these aren’t hard engineering problems. So we should expect the platforms to do something about it: quite a lot more than Lovable is doing at the moment. Secure default settings, implementing features with security in mind, and reviewing code from a security perspective are all things that AIs can do: not perfectly and not comprehensively, but - newsflash - people can’t either. And every bit of effort moves the dial. Until that happens, we’re going to see a lot more of this: well-intentioned projects, built fast, collecting sensitive data, with the doors left open.
Twelve years ago, Quinn Norton wrote one of my favourite articles: Everything Is Broken. Twelve years on, she’s still right. And there’s about to be a whole lot more of it.
1. This is partly because of Supabase's permissive default security settings. The security features are there. They're just not turned on. It's completely maddening that this is still a thing in 2026. Lovable often understands this and makes things secure, but sometimes, it just… doesn’t. Spicy!
2. For holdtheline.org.uk, this showed several tables, including ones storing who had sent emails to which MPs and when, and which MPs had responded. I thought this was a bit alarming, but after speaking to the creator I don’t think it was a real finding, hence its relegation to a footnote. The site’s creator told me they’d decided not to store any logs of the emails sent (Lovable just hadn’t deleted the tables), and that the database was therefore empty from the start of the campaign. A wise instinct and a good principle to follow.
3. On the wind, I can hear my security friends laughing at that idea, but I'm gonna stand by it.
