"It Is Difficult to Get a Man to Understand Something When His Salary Depends Upon His Not Understanding It" -- Upton Sinclair

"When the facts change, I change my mind." -- John Maynard Keynes

Last year I was using AI chat and Copilot but hadn't gone all in on coding agents yet. I was seeing AI slop everywhere and watching code review bots fixate on trivia or get completely confused. The tools were useful for research and code completion, but agents felt like more hype than substance. And they were. They genuinely weren't ready.

Then December happened.

It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks but imo coding agents basically didn't work before December and basically work since -- the models have significantly higher quality, long-term coherence and tenacity and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow.

-- Andrej Karpathy

Karpathy nails it. Agents before December were a novelty. After, they actually work. The models got better at holding context over long tasks, at recovering from mistakes, at understanding what you actually want. It wasn't gradual. It was a step change.

Two things pushed me further than I expected.

The first was a friend who wouldn't shut up about spec-driven development. He explained his workflow in detail, and I pushed back, defending my craft. I kept doing things my way.

Then I hit a patch of ice on my bicycle. Broke my elbow, messed up my shoulder, ribs, and wrist. Suddenly, I couldn't type properly. I was forced to lean on AI agents far more heavily than I'd planned, and on dictation software for everything else. No choice but to figure out how to make them actually work. (The dictation, by the way, turned out to be amazing. I'm not sure I'll go back.)

The timing was fortunate, if you can call breaking your elbow fortunate. Opus 4.5 had just dropped, and it's shockingly good at coding. I haven't written any code since the accident, but my output has gone up, not down. That's a strange thing to sit with.

Nolan Lawson put it well in "We Mourn Our Craft": "The worst fact about these tools is that they work." He frames it as grief, not conversion. He had a mortgage, a family, and junior colleagues strapping on "bazooka-powered jetpacks." His change of mind came not from an argument won but from a reality that refused to wait for his permission.

It's OK to mourn our craft. I've permitted myself to do so. But I'm learning to build a new craft on the bones of the old one.

The effectiveness of these tools opens up a huge dilemma. Opting out entirely means giving up any influence over how this goes.

The Contradiction I'm Sitting With

Here's where I'm at: I need to adapt to stay relevant. Accumulated expertise doesn't evaporate overnight, but the speed of change is faster than I expected. At the same time, I genuinely believe the current structure of AI development is concentrating power, replicating the worst patterns of Big Tech, and creating environmental costs we're not seriously reckoning with.

What do you do when you need to use tools that you think are contributing to harmful outcomes?

What would Bucky do?

I've been thinking about this through the lens of Buckminster Fuller, partly because I've been reading his work recently, and partly because he spent a lot of time thinking about exactly this kind of bind. Fuller studied what he called the "Great Pirates", powerful maritime traders who operated across national boundaries, accumulated comprehensive knowledge, and eventually became the invisible power brokers behind modern finance and corporate structures. But he didn't study them to emulate them. He studied them to understand how power concentrates, and how to design alternatives.

Distinguishing the Tool from the Structure

Using AI effectively isn't the same as endorsing the concentration of its development in a few corporations, or the extractive data practices, or the environmental costs. I can be pragmatic about using the tools while being vocal about the structural problems.

Fuller didn't refuse to use electricity because power companies were monopolistic. He designed systems for more distributed energy.

For me this means learning to work with AI while pushing for open-source alternatives, better regulation, and environmental accountability. Being the person in the room who can say "this is impressive technically AND here's why the current trajectory is dangerous."

Deep expertise gives me standing that pure critics don't have.

Sharing Knowledge, Not Hoarding It

Fuller's response to the pirates' legacy was essentially: what if we made all knowledge accessible? What if we designed for everyone's success, not competitive advantage? What if we operated from abundance rather than scarcity?

My expertise becomes more valuable when I give it away, not less. I'm trying to document what I'm learning about AI publicly. The "competitive moat" thinking is pirate logic. Fuller would say security comes from being genuinely useful to the whole system.

New Tools, Old Discipline

The best practices for working with AI aren't new.

Write clear specs before you start coding. Break work into well-defined tasks. Review output carefully. Give good context. Think about architecture before implementation. We were supposed to have been doing all of this for decades.

My friend who kept banging on about spec-driven development? He was right. Writing a proper spec before handing work to an AI agent produces dramatically better results than prompting and hoping. The spec forces you to think first. The thinking was always the valuable bit.

Anthropic published a piece about how their own teams use Claude Code. They treat AI agents like a development team. Give them proper context. Plan before executing. Maintain human oversight. Review before deploying. It reads less like a technology manual and more like a management handbook. Because that's what it is.

The people thriving with these tools aren't the ones who learned prompt engineering from scratch. They're the ones who already valued clear thinking, systematic review, and well-structured work. The craft didn't die. It shifted from typing to thinking. From writing code to specifying intent, reviewing output, and knowing when something's wrong.

That's why accumulated expertise still matters, even as the tools change underneath you. You need to know what good looks like before you can judge whether an AI produced it.

The Economic Argument

The economics of AI are fundamentally broken.

Billions invested in training runs. Models obsolete in months. Massive duplication of effort, with each competing company rebuilding similar capabilities from scratch. Energy and compute wasted on redundant training. Race dynamics forcing premature releases and corner-cutting.

Fuller would see this and say: this is competition-based scarcity thinking producing artificial scarcity while simultaneously creating massive waste. It's exactly backwards. He believed humanity's problems weren't resource problems; they were design and coordination problems. We have enough for everyone if we design efficiently and collaborate.

What if the massive investment were collaborative rather than competitive? Shared base models, openly developed. Companies compete on applications and implementations, not on rebuilding foundation models. We don't have competing internets; we have shared infrastructure with competition at other layers.

What if we designed for longevity rather than obsolescence? Smaller, more efficient models that actually get refined over time. Focus on getting more capability from less compute. Sustainable rather than race-to-the-bottom dynamics.

The current model only "works" because venture capital and tech giants can sustain losses hoping for future monopoly. The race dynamic forces everyone to participate or be left behind. It's a prisoner's dilemma. Everyone would be better off cooperating, but no one can unilaterally stop competing.

Being a Trim Tab

Fuller's favourite metaphor was the trim tab: the small rudder that turns the big rudder that turns the ship. You don't have to move the whole ship yourself. You find the leverage point where a small action creates a larger change.

I can't change that major AI models are controlled by a few companies, or the massive energy consumption, or the global race dynamics. But I can change what problems I work on, how I share knowledge, what tools and alternatives I support, and what voice I lend to which conversations.

What This Means in Practice

For me, it means focusing on problems that actually help people. Not extraction and manipulation. Is this work helping people do more with less? Is it reducing drudgery? Creating genuine value?

The scarcity mindset says, "AI is taking my job, I need to protect my turf." I'm trying to think differently. AI can handle routine work, freeing me up for problems I haven't had capacity to address.

I'm deploying AI across my workflow now, orchestrating multiple agents and refining a process that keeps human oversight at the points with the most leverage. The bottleneck has shifted from writing code to reviewing it. Scaling the human judgement side is the interesting problem.

My expertise isn't a scarce resource to protect. It's a foundation to build something better on.

The Uncomfortable Reality

I don't have this fully resolved. The tension is real. The risks are real. But sitting it out isn't an option either.

The test isn't whether AI is good or bad. It's whether we can shape how it develops and who it benefits. That needs people who understand both the technology and its dangers to actually be in the room.

I'm still concerned. But I'm building with eyes open and values intact. If you're sitting with the same contradiction, I'd genuinely love to hear how you're thinking about it.