I recently built an internal data tool my team needed. Six months ago it wouldn't have been viable: nobody would have dedicated engineering time to automating the task, so it would have stayed manual. But with Claude Code and Opus 4.6, I built a production-ready tool in days that replaced all of that manual work.

AI changed the economics enough to make it worth doing. Is it worth the engineering investment for a tool that won't be needed long term? Before AI, the answer was no. The tool would never have been built; the work would have stayed manual, with some SQL and Excel.

Anish Acharya at a16z coined the term "disposable software" to describe how software creation used to be constrained by ROI but is now constrained by imagination. His examples are mostly consumer and personal. The enterprise version of this argument is more consequential: internal tools that now cross the ROI threshold because the cost of building them has fallen so far.

The first prototype

It started as a simple Python script. Load data into memory, process, and output the results. This is where the tool would have stopped in the old world. A script on someone's laptop. Maybe a Notion page explaining how to run it. Good enough. Move on.

Fast iteration

Each step below is a decision gate where someone would traditionally ask "is this worth the effort?" None of it would have been worth the effort without AI. With AI, I could iterate through multiple prototypes quickly, without being scared of throwing away code.

"Might as well put a UI on this"

Move to Rust. I'm an experienced Rust developer, so it was a natural move for me, and Claude Code was on hand to quickly translate the code from Python to Rust. Rust is the right tool for a memory-sensitive data processing task. The core logic was well understood from the Python prototype, so rewriting it was cheap. The app now serves up a simple HTML form.

"This needs to scale"

The first version loaded everything into memory. That won't work with real data volumes and the memory constraints of a containerised environment. I needed a streaming diff algorithm. I had a vague idea of how it should work, but I didn't have to spend long working out the details: as I started explaining the idea to Claude Code, it filled in the gaps. I'm directing, not vibe coding.
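To make the idea concrete, a streaming diff can be sketched as a merge-join over two key-sorted record streams. This is a minimal illustration of the approach rather than the tool's actual code; the `Record` type and the sorted-by-key assumption are mine.

```rust
use std::cmp::Ordering;

/// A toy record type; the real tool's schema would be richer.
#[derive(Debug, Clone, PartialEq)]
struct Record {
    key: String,
    value: String,
}

#[derive(Debug, PartialEq)]
enum Diff {
    Added(Record),
    Removed(Record),
    Changed { old: Record, new: Record },
}

/// Merge-join diff over two record streams sorted by key.
/// Only one record from each side is held at a time, so memory
/// use stays constant regardless of input size.
fn streaming_diff<I, J>(old: I, new: J) -> Vec<Diff>
where
    I: IntoIterator<Item = Record>,
    J: IntoIterator<Item = Record>,
{
    let mut old = old.into_iter().peekable();
    let mut new = new.into_iter().peekable();
    let mut out = Vec::new();

    loop {
        match (old.peek(), new.peek()) {
            (Some(o), Some(n)) => match o.key.cmp(&n.key) {
                // Key only on the old side: it was removed.
                Ordering::Less => out.push(Diff::Removed(old.next().unwrap())),
                // Key only on the new side: it was added.
                Ordering::Greater => out.push(Diff::Added(new.next().unwrap())),
                // Key on both sides: compare values.
                Ordering::Equal => {
                    let (o, n) = (old.next().unwrap(), new.next().unwrap());
                    if o.value != n.value {
                        out.push(Diff::Changed { old: o, new: n });
                    }
                }
            },
            (Some(_), None) => out.push(Diff::Removed(old.next().unwrap())),
            (None, Some(_)) => out.push(Diff::Added(new.next().unwrap())),
            (None, None) => break,
        }
    }
    out
}
```

In a real pipeline you would emit each `Diff` incrementally, to a writer or a channel, rather than collecting into a `Vec`; that is what keeps memory use flat end to end.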

"We need to bulk run this"

Bolt on a CLI interface for batch operations. A straightforward addition, but another thing that wouldn't have been worth the effort if I were implementing it manually. It will potentially save a lot more manual work. Who knows? It might not get used, but it was a simple addition to the spec.
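As a sketch of what "bolt on a CLI" amounts to: batch mode is mostly mapping the existing single-run entry point over a list of inputs. `process_one` here is a hypothetical stand-in for the tool's real pipeline, and this uses only the standard library rather than a real argument parser.

```rust
use std::env;

/// Hypothetical stand-in for the tool's existing single-input
/// entry point (the real one runs the diff pipeline on one dataset).
fn process_one(input: &str) -> Result<String, String> {
    if input.is_empty() {
        return Err("empty input path".to_string());
    }
    Ok(format!("processed {input}"))
}

/// Batch driver: run every input and collect per-item results rather
/// than aborting on the first failure, so one bad file doesn't sink
/// the whole run.
fn run_batch<'a>(
    inputs: impl IntoIterator<Item = &'a str>,
) -> Vec<(String, Result<String, String>)> {
    inputs
        .into_iter()
        .map(|i| (i.to_string(), process_one(i)))
        .collect()
}

/// Wire it to the command line, e.g. `mytool file1.csv file2.csv`.
fn batch_main() {
    let args: Vec<String> = env::args().skip(1).collect();
    for (input, result) in run_batch(args.iter().map(String::as_str)) {
        match result {
            Ok(msg) => println!("{input}: {msg}"),
            Err(e) => eprintln!("{input}: FAILED: {e}"),
        }
    }
}
```

Collecting per-item results instead of short-circuiting is the one design decision worth making deliberately in a batch tool: a partial run with a clear failure list is far more useful than an aborted one.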

"This needs to run in production"

There's a lot that needs adding to turn a simple tool into a production-ready one: container config, observability, workers, cloud infrastructure, IAM roles, secrets management. The boilerplate of getting something actually running.

This is where I used another AI coding pattern called "style transfer": point Claude Code at an existing production service and say "make it like that."

Infrastructure config encodes institutional knowledge: how your organisation structures its configuration, what your deployment conventions are, how you handle secrets. AI pattern-matches against existing services without you having to write a docs page or copy-paste configs manually. You get something that follows your org's conventions because it learned them from a working example.

What the research is showing

Anthropic's 2026 Agentic Coding report describes tasks that required weeks of cross-team coordination becoming focused working sessions. MIT Technology Review reported on developers surrendering control over individual lines and focusing on overall architecture.

Rust as an AI-augmented development language

The Rust compiler is essentially a second reviewer for AI-generated code. When Claude Code writes Rust, it gets immediate, precise, actionable feedback from cargo check. Memory safety, lifetime issues, ownership violations, all caught at compile time, not in production.

The tight feedback loop matters enormously for AI agents. The compiler doesn't just say "error." It says what's wrong, where, and often how to fix it. That's ideal for an agentic coding tool iterating in a loop.
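A toy illustration of the kind of guarantee the compiler hands an agent for free: encode an invariant in the type system, and an entire class of wrong code fails `cargo check` with a precise span and a suggested fix. The newtypes here are my example, not from the tool.

```rust
/// Newtype wrappers: confusing these two quantities becomes a
/// compile-time type error instead of a production bug.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Bytes(u64);

#[derive(Debug, Clone, Copy, PartialEq)]
struct RowCount(u64);

/// Only accepts Bytes. Passing a RowCount is rejected by `cargo check`
/// with the offending span and a note about the expected type.
fn within_memory_budget(used: Bytes, budget: Bytes) -> bool {
    used.0 <= budget.0
}

// within_memory_budget(RowCount(10), Bytes(100)); // <- would not compile
```

An agent that writes `within_memory_budget(RowCount(10), budget)` gets an immediate, located type error to iterate on, rather than a silent unit confusion that surfaces at runtime.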

AI plus Rust's compiler creates a verification pipeline that lets you trust AI-generated code faster. I wrote previously about the importance of verification in my process. The compiler is an automated verification step that runs on every iteration, and Rust's error messages are excellent, helping AI coding agents track down and fix issues much faster.

Language choice for AI-augmented development should optimise for the strength of the automated verification feedback loop.

Others are arriving at the same conclusion independently. Adam Benenson argues in "The Compiler Is the Harness" that Rust's strictness is what makes it easy for AI agents. Agentic coding lives or dies on feedback loops. If code compiles, it has already satisfied a whole class of nontrivial constraints. Mykhailo Chalyi makes a complementary point in "Rust Is Winning the AI Code Generation Race": the writability problem of Rust disappears with AI agents, while the readability, type safety, and performance advantages remain.

Anthropic's own C compiler project is telling here too. Sixteen parallel Claude agents producing 100,000 lines of Rust. The choice of Rust was deliberate. The type system and ownership model serve as natural guardrails, and test-driven development with tight feedback loops was the critical enabler.

The shifting economics

The tool I made is temporary and won't be needed for long. In the pre-AI world, it would never have been built. But it has saved a lot of manual, error-prone work that would otherwise have been necessary.

The point isn't that existing software is cheaper to build. That's true, but it's not the interesting part. The interesting part is that new categories of short-lived, purpose-built internal tools become viable: things your team needs but nobody would dedicate engineering time to. Data migration utilities, reconciliation jobs, debugging aids, one-off reporting tools.

I couldn't have justified building the final product upfront. I had to discover the requirements iteratively. AI made each iteration cheap enough to keep going.

Observations

The compiler is your safety net. When choosing a stack for AI-augmented work, optimise for the quality of the automated verification loop. Rust's borrow checker and type system aren't friction. They're a trust accelerator for AI-generated code.

Style transfer for infrastructure. You already have production services that encode your org's patterns. Use them as templates. Point the AI at a working example and say "match that." This is where AI saves the most tedious, error-prone time.

Iterative discovery over upfront planning. AI makes it cheap to explore the design space. Start with a script. See if it's useful. Add a UI. Hit scaling limits. Redesign. Each pivot is cheap. You learn what you actually need by building.

The ROI threshold has moved. Recalibrate what's "worth building." Short-lived internal tools, data migration utilities, debugging aids. Things your team needs but nobody would dedicate engineering time to. These are now viable.

What's worth building now?

The interesting question isn't "how fast can AI write code?" It's "what becomes worth building when the cost drops this much?"

I think we're still in the early days of answering that. The threshold has moved more than most people realise.