If an AI-augmented engineer can build an app in a weekend, what happens to SaaS?
I'm a tech lead for data and integrations at a SaaS company. But I also run Zero Waste Tickets, a small side project with real users.
I see software both from inside a mature product and as a solo operator building from scratch.
The code was never the hard part
I've rebuilt Zero Waste Tickets a few times. Each time the technology changed completely. Different stack, different architecture, different approach. What carried over was the operational knowledge. Everything I'd learned about what goes wrong.
AI coding tools are extraordinary. You can build in a weekend what used to take months. But you can't learn how to operate what you've built at the same pace. The code races ahead of your understanding. The gap between "it works in a demo" and "I'd trust it with someone's money" is where all the interesting problems live.
"Sounds like an edge case"
I recently spoke to someone who had vibe-coded their own ticket-selling application. Looked great. I asked how they prevented overselling. What happens when more people try to buy tickets than are available, all at the same time?
They hadn't thought about it. "Sounds like an edge case."
Overselling is not an edge case in a ticketing system. It's the core integrity problem of the domain. That's like building a banking app and calling incorrect balances an edge case. But this person wasn't careless or incompetent. They just hadn't encountered the problem yet because they hadn't operated the system under real conditions. The LLM that generated their code hadn't raised it either, because they hadn't thought to ask.
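To make the problem concrete, here's a minimal sketch of one way to prevent overselling (my illustration, not the actual Zero Waste Tickets code). The key idea: the stock check and the decrement must be a single atomic operation, so two concurrent buyers can't both pass a separate "is there stock?" check.

```python
import sqlite3

# Assumed toy schema: one `events` row tracking remaining availability.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, available INTEGER)")
conn.execute("INSERT INTO events VALUES (1, 2)")  # only 2 tickets left

def reserve(conn, event_id, qty):
    # Conditional UPDATE: check and decrement happen in one statement,
    # so it only succeeds if enough tickets remain at that instant.
    cur = conn.execute(
        "UPDATE events SET available = available - ? "
        "WHERE id = ? AND available >= ?",
        (qty, event_id, qty),
    )
    conn.commit()
    return cur.rowcount == 1  # 0 rows updated means the reservation failed

print(reserve(conn, 1, 2))  # True: takes the last two tickets
print(reserve(conn, 1, 1))  # False: sold out, no oversell
```

The naive version, a SELECT to check stock followed by a separate UPDATE, is exactly what tends to get generated if you don't ask, and it oversells under concurrent load.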
An LLM will build what you ask for. It won't know which things matter most in production.
The payment timeout lesson
In an earlier iteration of Zero Waste Tickets I had a payment error from a production edge case I hadn't considered during design. A user started buying tickets. They got to the payment step, where the bank sometimes asks for additional verification. Then they walked away from their computer.
Completely reasonable human behaviour. But here's what happened underneath: the system had reserved their tickets. After a long period of inactivity it returned the reservation to the pool, as designed. Those tickets got bought by someone else. Then, hours later, the original payment completed. The bank said yes, money moved, but the order was now invalid because the tickets were gone. I had taken into account many cases, including declined transactions and payment processing delays, but I hadn't considered this particular case where the verification was delayed.
Three systems had each done the correct thing. But collectively it was broken. My reservation pool, my order state, and Stripe's payment intent all behaved correctly in isolation. The fix wasn't just atomic updates to reservations and orders, which I'd already been careful about across all three rebuilds. It was cleaning up the payment intent on Stripe's side when a reservation expired. I had thought about other delays in checkout, but nobody had ever walked away from their screen for that long mid-verification.
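The shape of that fix can be sketched like this. All names here are hypothetical stand-ins (the `PaymentGateway` plays the role of Stripe's payment-intent cancellation); the point is only that expiry must touch both sides, not just the ticket pool.

```python
from dataclasses import dataclass, field

@dataclass
class PaymentGateway:
    # Stand-in for the payment provider. A cancelled intent can no longer
    # complete, even if the bank's delayed verification eventually says yes.
    cancelled: set = field(default_factory=set)

    def cancel_intent(self, intent_id):
        self.cancelled.add(intent_id)

@dataclass
class Reservation:
    tickets: int
    intent_id: str

def expire_reservation(res, pool, gateway):
    # Returning tickets to the pool alone is not enough: the pending payment
    # intent must be cancelled too, or a late "bank says yes" completes an
    # order whose tickets have already been sold to someone else.
    gateway.cancel_intent(res.intent_id)
    return pool + res.tickets

pool = 10
gateway = PaymentGateway()
res = Reservation(tickets=2, intent_id="pi_abc")
pool = expire_reservation(res, pool, gateway)
print(pool)                           # 12: tickets back in the pool
print("pi_abc" in gateway.cancelled)  # True: the late payment can't land
```

Without the `cancel_intent` call, every component still behaves correctly in isolation, which is exactly why the bug was invisible until a real user walked away mid-verification.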
I learned a similar lesson with idempotency keys. Get them wrong and you enable double payments. That sounds like a technical detail until a real person sees two charges on their bank statement and loses trust in your system instantly.
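A minimal sketch of the idempotency idea, using an in-memory store for illustration (a real system would persist keys and scope them per customer):

```python
charges = []  # side effects: actual money movements
seen = {}     # idempotency key -> the result of the first attempt

def charge(key, amount):
    if key in seen:
        # A retry of a request we've already handled: return the original
        # result and move no money. This is what makes network retries safe.
        return seen[key]
    charges.append(amount)  # first time only: actually charge the card
    result = {"status": "succeeded", "amount": amount}
    seen[key] = result
    return result

first = charge("order-42", 2500)
retry = charge("order-42", 2500)  # a client retry replays the same request
print(len(charges))  # 1: the customer was charged exactly once
```

Get the key wrong, for example by generating a fresh one on every retry, and each replay becomes a new charge.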
Perhaps these are things you could anticipate by being smarter. But there will always be things you only learn by operating the system with real users, real money, and real behaviour over years.
What you're actually paying for
This brings me back to the SaaS question. I've worked in many software organisations. A lot of engineering time goes to handling complexity that only reveals itself at scale, over time, across thousands of different customer environments.
When you pay for a mature SaaS product, you're not paying for code. Code is increasingly cheap. You're paying for the operational knowledge baked into that system over years. Every edge case discovered. Every failure mode handled. Every "sounds unlikely" scenario that turned out to happen on the third Tuesday of every month.
Marty Cagan talks about the cost of supporting a product as a key product question. For my side project, this is critical: I have limited time, I want to keep it fun, and I need to be honest about what I can actually operate and support. I've grown Zero Waste Tickets deliberately. Simple first. Real money from day one. Added complexity only as the system proved itself. Invited other event organisers by word of mouth once I was confident it could handle the responsibility.
That deliberate pace isn't a weakness. It's the discipline. Every feature I added, I could also support. I understood the failure modes because I'd lived with the system long enough to encounter them.
This is what I was getting at in my post about overengineering a login form. Agentic coding decouples build speed from operational understanding. That's both its power and its risk. You can generate a system far more complex than you can comprehend, operate, or support. When something goes wrong, you won't have the mental model to diagnose it.
The knowledge that doesn't compress
Is SaaS under threat from AI coding? For simple, low-stakes tools, probably. If the consequences of failure are a minor inconvenience, generating something bespoke might make perfect sense.
But for anything involving money, trust, security, or reliability under pressure? The operational knowledge is the moat. Not because AI can't write the code. It can, and it keeps getting better. But because knowing what code to write requires having encountered the problems that only show up in production, over time, with real users doing unpredictable things.
Security is another example. AI coding agents won't typically add CSRF protection unless you specifically ask. How many other security considerations are you not thinking to ask about? You don't know. That's the point.
The real value of mature software isn't the codebase. It's the deep domain knowledge that gets baked into the system and its operation.
What's next
I'm thinking a lot about where software goes as interactions become increasingly agent-to-agent rather than human-to-human. Headless software where there's no web UI at all, just APIs and agents talking to each other. That changes what "software" even means, and I think it has implications for what matters most: security, monitoring, measuring outcomes, improving over time. But that's a post for another day.
For now, my advice to anyone building with AI coding tools: enjoy the speed. It's genuinely transformative. But respect the gap between what you can build and what you can operate. That gap is where your users get hurt.
If the thing you're building handles someone else's money or trust, maybe consider whether a conversation with someone who's been through the wars might be worth more than the monthly SaaS fee suggests.
I'd love to hear from others who are thinking about this. Drop me a line.