"It Is Difficult to Get a Man to Understand Something When His Salary Depends Upon His Not Understanding It" -- Upton Sinclair

"When the facts change, I change my mind." -- John Maynard Keynes

I've been sceptical of AI for a while now. After watching social media concentrate power, degrade discourse, and generally make a mess of things while everyone cheered about connection and democratisation, I wasn't eager to embrace the next wave of transformative technology from the same playbook.

A lot of what I saw reinforced that scepticism. Endless slop. I judged AI by its worst outputs and felt pretty comfortable dismissing the whole thing.

Three things changed my mind.

First, a friend who wouldn't shut up about spec-driven development. He explained his workflow in detail and tried to talk me around. I pushed back to defend my craft and kept doing things my way.

Second, I hit a patch of ice on my bicycle. Broke my elbow, messed up my shoulder, ribs, and wrist. Suddenly, I couldn't type properly. I was forced to lean on AI agents far more heavily than I'd ever planned to. No choice but to figure out how to make them actually work.

Third, Opus 4.5 had dropped. And honestly? It's shockingly good at coding. A genuine step change from what I'd been using before.

Now I'm rethinking. The effectiveness of these tools opens up a huge dilemma: holding to my principles would mean massively disadvantaging myself.

The Contradiction I'm Sitting With

Here's where I'm at: I need to adapt to stay relevant. Years of accumulated expertise don't become worthless overnight, but pretending the landscape hasn't shifted would be foolish. At the same time, I genuinely believe the current structure of AI development is concentrating power, replicating the worst patterns of Big Tech, and creating environmental costs we're not seriously reckoning with.

What do you do when you need to use tools that you think are contributing to harmful outcomes?

I've been thinking about this through the lens of Buckminster Fuller, partly because I've been reading his work recently, and partly because he spent a lot of time thinking about exactly this kind of bind. Fuller studied what he called the "Great Pirates", powerful maritime traders who operated across national boundaries, accumulated comprehensive knowledge, and eventually became the invisible power brokers behind modern finance and corporate structures. But he didn't study them to emulate them. He studied them to understand how power concentrates, and how to design alternatives.

Distinguishing the Tool from the Structure

Using AI effectively isn't the same as endorsing the concentration of its development in a few corporations, or the extractive data practices, or the environmental costs. I can be pragmatic about using the tools while being vocal about the structural problems.

Fuller didn't refuse to use electricity because power companies were monopolistic. He designed systems for more distributed energy.

For me this means learning to work with AI while pushing for open-source alternatives, better regulation, and environmental accountability. Being the person in the room who can say "this is impressive technically AND here's why the current trajectory is dangerous."

Deep expertise gives me standing that pure critics don't have.

Sharing Knowledge, Not Hoarding It

Fuller's response to the pirates' legacy was essentially: what if we made all knowledge accessible? What if we designed for everyone's success, not competitive advantage? What if we operated from abundance rather than scarcity?

My expertise becomes more valuable when I give it away, not less. I'm trying to document what I'm learning about AI publicly. The "competitive moat" thinking is pirate logic. Fuller would say security comes from being genuinely useful to the whole system.

The Economic Argument

Here's what strikes me about the economics of AI: they feel fundamentally broken.

Billions invested in training runs. Models obsolete in months. Massive duplication of effort as competing companies rebuild similar capabilities from scratch. Energy and compute wasted on redundant training. Race dynamics forcing premature releases and corner-cutting.

Fuller would see this and say: this is competition driven by scarcity thinking, producing artificial scarcity and massive waste at the same time. It's exactly backwards.

He believed humanity's problems weren't resource problems—they were design and coordination problems. We have enough for everyone if we design efficiently and collaborate.

What if the massive investment were collaborative rather than competitive? Shared base models, openly developed. Companies would compete on applications and implementations, not on rebuilding foundation models. We don't have competing internets; we have shared infrastructure with competition at other layers.

What if we designed for longevity rather than obsolescence? Smaller, more efficient models that actually get refined over time. Focus on getting more capability from less compute. Sustainable rather than race-to-the-bottom dynamics.

The current model only "works" because venture capital and tech giants can sustain losses hoping for future monopoly. The race dynamic forces everyone to participate or be left behind. It's a prisoner's dilemma—everyone would be better off cooperating, but no one can unilaterally stop competing.
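To make the prisoner's dilemma point concrete, here's a toy sketch of the payoff structure I mean. The numbers are invented for illustration, and the two "moves" (sharing foundational work versus racing to train in private) are my own hypothetical framing, but the shape is the classic dilemma: defecting is individually rational whatever the other side does, even though mutual cooperation pays more overall.

```python
# Toy illustration of the race-to-train prisoner's dilemma.
# Payoff numbers are made up; the point is the structure:
# mutual cooperation beats mutual defection, but defecting is
# always individually better, whatever the other lab does.

PAYOFFS = {
    # (my_move, their_move): (my_payoff, their_payoff)
    ("cooperate", "cooperate"): (3, 3),  # shared base models, less waste
    ("cooperate", "defect"):    (0, 5),  # I share, they race ahead
    ("defect",    "cooperate"): (5, 0),  # I race ahead, they share
    ("defect",    "defect"):    (1, 1),  # everyone burns compute racing
}

def best_response(their_move: str) -> str:
    """Return the move that maximises my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

if __name__ == "__main__":
    for their_move in ("cooperate", "defect"):
        print(f"If they {their_move}, my best response is to {best_response(their_move)}")
    # Prints "defect" both times, even though mutual cooperation
    # yields 3 + 3 = 6 in total versus 1 + 1 = 2 for mutual defection.
```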

Being a Trim Tab

Fuller's favourite metaphor was the trim tab: the small rudder that turns the big rudder that turns the ship. You don't have to move the whole ship yourself. You find the leverage point where a small action creates a larger change.

I can't change that major AI models are controlled by a few companies, or the massive energy consumption, or the global race dynamics. But I can change what problems I work on, how I share knowledge, what tools and alternatives I support, and what voice I lend to which conversations.

What This Means in Practice

For me, it means focusing on problems that actually help people. Not extraction and manipulation. Is this work helping people do more with less? Is it reducing drudgery? Creating genuine value?

The scarcity mindset says, "AI is taking my job, I need to protect my turf." I'm trying to think differently. AI can handle routine work, freeing me up for problems I haven't had capacity to address.

My expertise isn't a scarce resource to protect. It's a foundation to build something better on.

The Uncomfortable Reality

I don't have this fully resolved. The tension is real. The risks are real. But sitting it out isn't an option either.

The test isn't whether AI is good or bad. It's whether we can shape how it develops and who it benefits. That needs people who understand both the technology and its dangers to actually be in the room.

I'm still sceptical. I'm still concerned. But I'm building again, with eyes open and values intact. If you're sitting with the same contradiction, I'd genuinely love to hear how you're thinking about it.