I was a senior developer who resisted AI coding. I was convinced that no matter how much AI claimed to improve, it could never replace a human with a love of the craft and a pursuit of great code.

I have decades of experience writing code. I've shipped production systems and understand what it takes to keep them running. I've been through the hype cycles, the framework wars, the architecture fads. We've seen these bubbles before. But there's something different about this one.

I've now gone deep into agentic coding: building serious workflows around frontier models, constructing harnesses and guardrails, working across conservative production codebases, and, in a completely different way, on side projects where I get much more room for experimentation.

There are three good reasons to resist AI coding, and I've wrestled with all of them personally.

1. Principles

This is the hardest one, and I want to lead with it because it deserves to be taken seriously. It's a real barrier, and I haven't fully resolved it myself.

The tech industry has a consolidation-of-power problem. The major LLMs were trained on the creative and intellectual output of millions of people who never consented to that use. There are legitimate concerns about exploitation: the workers who laboured on data labelling and content moderation under terrible conditions, the open-source developers whose code was absorbed without attribution, the writers and artists whose work was scraped at scale.

Then there are the environmental costs. The energy consumption of training and running these models is enormous, and the companies building them have not been forthcoming about the true scale of it.

And the way these tools are promoted, often with misleading demos, inflated capability claims, and a Silicon Valley triumphalism that treats displacement as progress, makes it easy to feel that using them is participating in something you find morally objectionable.

If that's where you are, I respect it. You're not wrong.

But principled people disengaging doesn't slow anything down. It just means the people shaping how these tools get used don't share those concerns. The field moves on without them. I didn't ask for any of this to happen, and the capability exists whether I use it or not. What I can choose is whether to engage critically and bring my experience and my ethical concerns with me.

2. Ego

This is the one nobody wants to hear, so let me say it about myself first: coding is part of my identity. It has been for years. The feeling of solving a hard problem elegantly. An appreciation for the beauty and craftsmanship. That goes beyond just professional competence. That's who I am.

When a machine starts doing the thing that makes you you, it feels like an existential threat. Maybe not a threat to your job, but a threat to the story you tell yourself about your own value.

And the more senior you are, the worse this gets. A junior developer has less to unlearn and less identity wrapped up in doing things a particular way. A veteran with 25 or 35 years has built an entire self-image around their ability to write code that machines can now approximate in seconds.

I noticed I had built defence mechanisms. I was finding flaws in the generated code (and there are always flaws) and using them as evidence that the whole approach was useless, rather than engaging with what the tools actually did well.

Identity crises are genuinely painful, and this industry has done nothing to prepare people for one. Nolan Lawson captured this beautifully in We Mourn Our Craft, writing about missing the feeling of holding code in your hands and moulding it like clay, the satisfaction of the artist's signature on a GitHub repo. It's an elegy, and it's worth reading because the grief is real.

Recognising the ego component is the first step toward moving past it. Your value was never really in your ability to type syntactically correct code. It was in your judgment, your architectural sense, your understanding of trade-offs, your ability to ask the right questions. And those skills are now more valuable than ever.

3. A skill nobody taught us

Using these tools well is a genuinely new skill. I made the mistake initially of judging the tools on their worst output. From the autocomplete that gets in the way to pasting code snippets out of a chat window, the whole thing seemed overhyped.

When I did start using the frontier models properly, I realised there was a lot more to it than that. Effective agentic coding involves building harnesses, structured workflows that constrain and direct the model. It requires thoughtful context management, feeding the model the right information at the right time, understanding where the models are strong and where they have gaps, and designing processes around that.
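To make that a bit more concrete, here's a minimal sketch of the shape of harness I mean, in Python for brevity. Everything in it is an assumption rather than a description of any particular tool: `generate()` stands in for whichever model API you call, the context builder is deliberately naive, and the verification step is simply the project's test suite. The point is the loop: curate the context, generate a change, verify it mechanically, and feed failures back in.

```python
# Minimal agentic-coding harness sketch (assumptions, not a real tool):
# curate context, ask the model for a patch, verify against the test
# suite, and feed failures back into the next attempt.
import subprocess
from pathlib import Path

MAX_ATTEMPTS = 3


def generate(prompt: str) -> str:
    """Placeholder for whichever model API you use; returns a unified diff."""
    raise NotImplementedError


def build_context(task: str, failure: str | None) -> str:
    # Context management: the task, the relevant source files, and (on a
    # retry) the exact failure output the model needs to address.
    parts = [f"Task: {task}"]
    for path in sorted(Path("src").rglob("*.rs")):
        parts.append(f"--- {path} ---\n{path.read_text()}")
    if failure:
        parts.append(f"The previous attempt failed with:\n{failure}")
    return "\n\n".join(parts)


def run_tests() -> tuple[bool, str]:
    result = subprocess.run(["cargo", "test"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr


def attempt(task: str) -> bool:
    failure = None
    for _ in range(MAX_ATTEMPTS):
        patch = generate(build_context(task, failure))
        applied = subprocess.run(["git", "apply"], input=patch,
                                 capture_output=True, text=True)
        if applied.returncode != 0:
            failure = f"Patch did not apply:\n{applied.stderr}"
            continue
        ok, output = run_tests()
        if ok:
            return True
        failure = output
        # Guardrail: revert anything that doesn't pass before trying again.
        subprocess.run(["git", "checkout", "--", "."], check=True)
    return False
```

A real harness is mostly the unglamorous parts: deciding which files the model actually sees, what counts as passing, and when to stop retrying and hand the problem back to a human.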

None of this is obvious, and none of it resembles the skills we learned before. This technology is inconsistent in ways the technologies we've worked with before aren't, and that non-determinism means we need to develop whole new ways of working to accommodate it. We're not used to that.

If you've spent decades getting good at something, investing time in replacing it with something that isn't as reliable is a hard sell. AI-generated code creates a maintenance burden, and expanding scope without expanding judgment makes worse software. The gap between "I tried it, and it's rubbish" and "I've built workflows that genuinely change what can be accomplished" takes months of deliberate practice to start bridging. Those hard-won lessons from decades of software engineering don't become obsolete. They become the thing that separates good agentic work from bad.

I built perc recently, a CLI that handles the full deployment pipeline for Rust web apps: cross-compilation, OCI images, SSH deployment, database provisioning. It's the kind of project that would have lived permanently on my "someday" list. I've written about this pattern of expanding what's possible before: I previously blogged about taking a hand-built bot-protection system and rebuilding it into something with multiple challenge algorithms, a risk broker, and behavioural analysis.

Where does that leave us?

I've been on both sides of this divide, and not long ago I was firmly on the other side. The models aren't perfect. The ethical concerns are serious. But the capability is here, and the people who engage with it critically and skilfully are going to build things that weren't possible before.

What I didn't expect is that all three of these barriers have to be addressed together. You can't just skill up if you haven't reckoned with the ego part. You can't engage with the tools in good faith if you haven't thought about the ethics. And the ethics conversation needs people who actually understand what the tools can do.

That's what keeps me going. It's not just that the way we work is changing. The scope of what a single person can build has expanded, and that expansion rewards exactly the kind of judgment and taste that experienced developers have spent years developing. The irony is that the people best equipped to use these tools well are often the ones most resistant to picking them up.