Thinking in Code
The thing I realise with AI-assisted coding is just how quickly I would previously have jumped into writing code. That's how I would have naturally thought about and explored problems. Open the editor, start sketching something out, let the shape of the solution emerge through the act of building it.
With AI coding, we have far more leverage at the research and planning phases than we do at implementation.
I'm having to train myself to spend more time planning each change. It feels a bit like procrastination. But I can also see how valuable it is.
The Gap
There's a gap between high-level planning and implementation. In my experience, that gap used to be bridged inside the developer's head. You'd read the requirements, form a mental model, and start coding. The translation from "what needs to happen" to "how it happens in code" was implicit, happening almost unconsciously as you typed.
Thinking in Plans
What works now is different. It's a progressive refinement: requirements, to plan, to detailed plan, to even more detailed plan, to maybe this plan is finally detailed enough, to let's go implement. Each layer adds specificity and reduces ambiguity before the AI ever writes a line of code.
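To make that concrete, here's a rough sketch of what those layers might look like for a small, invented feature. Every name and constraint in it is hypothetical; the point is the increasing specificity, not the content:

```
Requirements:   Users can export a note as a PDF.

Plan:           Add an export action to the notes service; render the note
                server-side; return a downloadable PDF.

Detailed plan:  1. Add POST /notes/:id/export to the API layer.
                2. Render the note's markdown to HTML with the existing renderer.
                3. Convert the HTML to PDF and stream it back with
                   Content-Disposition: attachment.

Ready to build: For step 3: use the PDF library already in the project,
                A4 page size, embedded fonts, return 422 if the note body
                exceeds 2 MB, and add an integration test for the happy path
                and the oversize case.
```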
This is new territory for people who think in code.
Not Big Design Up Front
I know what this sounds like. But it's not Big Design Up Front (BDUF). BDUF happens over weeks or months, tries to anticipate everything, and produces documents that are outdated before implementation begins.
What I'm describing is continuous refinement within a single flow of work. For a substantial build, that planning phase might be a couple of days working with the LLM in different personas to stress-test the requirements for security, performance, implementability, consistency, and compliance. Then refining from requirements to a high-level plan, and down through multiple levels of increasingly concrete detail. Implementation then happens across multiple sessions, working through the detailed plans and checking the code at each point.
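The persona pass can be as simple as re-running the same requirements through different framings. These prompts are invented, not a script, but they give the flavour:

```
"Review these requirements as a security engineer. What's missing or risky?"
"Same requirements, as a performance engineer: where does this fall over at 100x load?"
"Now as the developer who inherits this in a year: what do you wish we had specified?"
```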
A few days to plan and build a system that would have taken weeks before. That's the difference.
You're taking the next piece of work and progressively adding detail until execution becomes so obvious that the AI can't really get it wrong.
And there's a new skill emerging here that I don't think has a name yet: developing an intuition for the right size of a piece of work for an AI to build in one go, and for the level of detail needed to make execution almost inevitable. Too vague and the AI makes bad assumptions. Too large and it loses coherence. Get the granularity and specificity right, and the code practically writes itself. And the quality is higher.
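For a feel of the difference, here's an invented before-and-after. The function names, paths, and numbers are hypothetical; what matters is that the second version leaves almost nothing to assumption:

```
Too vague:    "Add caching to the API."

Right-sized:  "Add an in-memory LRU cache (max 1,000 entries, 60-second TTL)
               in front of getUserProfile() in src/services/users.ts.
               Invalidate the entry from updateUserProfile(). Add unit tests
               for hit, miss, expiry, and invalidation. Don't touch any
               other endpoint."
```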
That intuition is something you can only build through experience. Nobody's teaching it. We're all just stumbling into it.
Feasibility Risk Isn't Dead
Marty Cagan calls out four types of risk in software development:
- value risk (whether customers will buy it or users will choose to use it)
- usability risk (whether users can figure out how to use it)
- feasibility risk (whether our engineers can build what we need with the time, skills, and technology we have)
- business viability risk (whether this solution also works for the various aspects of our business)
There's a position gaining traction in product circles that feasibility risk, once one of the biggest risks in product development, is now irrelevant, and that value risk is what matters most.
AI development has made many more things viable from an implementation perspective. There are things you can build now that would have been impractical two years ago.
But I'm pretty convinced that feasibility risk is still a factor. I'm happy to be wrong about this, but unless you're guiding the AI from an engineering and developer perspective, you're going to end up with an unmaintainable, expensive mess. The AI can produce working code quickly. But working code and code that's maintainable, performant, secure, and coherent with the existing system are very different things.
The feasibility risk hasn't disappeared. It's shifted. It used to be "can we build this?" Now it's "can we plan this so it gets built well?"
And that still requires someone who thinks like an engineer.
I'm getting good results with this approach, but I have a feeling I may be erring on the side of caution with overly detailed plans. I know vibe coders would dismiss a lot of this. Where are you landing? Drop me a line if you want to discuss.