An AI Vibe Coding Horror Story

And so it happened: my first real-world AI vibe coding horror story, one that affected me personally.

I went to a medical appointment and was greeted by a friendly person. Shortly after the warm welcome, they mentioned watching a video explaining how easy it is for anyone to build software with AI these days. That sparked an idea: why use an industry-proven solution when you could just build your own patient management system?

So they did exactly that. They fired up a coding agent, built a custom patient management application, imported all their existing patient data into it, and published it to the internet. They even added a feature to record conversations during appointments and send the audio to not one, but two AI services for automatic summaries. No more manual note-taking.

Everything that could go wrong, did go wrong.

A few days later, I started poking around the application. Thirty minutes in, I had full read and write access to all patient data. Everything was unencrypted and completely exposed to the open internet. My first move was to notify the person immediately. The response I got was 100% AI-generated, thanking me warmly for reporting it and assuring me they had taken immediate action by adding basic authentication and rotating some access keys.

This person had no idea what they had built, or what the consequences could be. The data wasn't just wide open: it was stored on a US server without a Data Processing Agreement, voice recordings were being sent to major US-based AI companies, and I had never been informed any of this was happening. That is not how medical patient data may be handled.
They almost certainly violated multiple provisions of the nDSG (Switzerland's revised Data Protection Act) and potentially professional secrecy (Berufsgeheimnis) as well, though I'm not a lawyer.

Technical Background

The entire application was a single HTML file with all JavaScript, CSS, and structure written inline. The backend was a managed database service with zero access control configured, no row-level security, nothing. All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one curl command away from anyone who looked.
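To illustrate the missing piece: managed database services of this kind typically expose an auto-generated REST API, and without row-level security that API happily serves every row to anyone holding the public client key. A minimal sketch of what a sane setup would have looked like, assuming a Postgres-backed service in the Supabase style (the `patients` table, the `practitioner_id` column, and the `auth.uid()` helper are illustrative assumptions, not the actual schema):

```sql
-- Enabling row-level security flips the default from
-- "serve everything" to "deny everything":
alter table patients enable row level security;

-- Access is then granted explicitly. This example policy lets an
-- authenticated practitioner read only their own patients:
create policy "practitioners read own patients"
  on patients for select
  using (auth.uid() = practitioner_id);
```

Without something like this, the server enforces nothing, and any "access control" in client-side JavaScript is purely decorative.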

All audio recordings were sent directly to external AI APIs for transcription and summarization.

There was more, but this is already enough to get the idea.

Outlook

That's not the AI future I'm looking forward to. I use AI coding agents myself, but I can understand what they're doing, read the code they produce, and reason about software architecture. People just vibing away without any of that clearly won't give us a happy future.
