
The Taste Moat

Published 12/03/2026

#AI · #software engineering · #dev productivity · #agentic coding · #career

If speed isn't the main advantage of AI-assisted coding, what is?

That's where I left things last time. The research showed the speed gains are messier than the headlines suggest, and the way you use the tools matters more than whether you use them. But I didn't address the bigger question.

I've been thinking about it a lot. Here's where I've landed.

Code is getting cheaper

This isn't a prediction. It's already happening.

Andrej Karpathy said AI coding agents have made programming "unrecognisable." Boris Cherny - who built Claude Code at Anthropic - said AI wrote every line of code for his team in a month. 200 pull requests, no IDE required. The floor for producing functional code has dropped through the basement.

But there's a detail in the Karpathy story that doesn't get shared as often. When he built his own project - Nanochat - he hand-wrote the whole thing. The agents "just didn't work well enough" for what he needed. The person who coined "vibe coding" chose not to vibe code when it mattered to him.

The floor dropped. The ceiling didn't.

Producing code that works is increasingly cheap. Producing code that's right - architecturally sound, maintainable, appropriate for the context - still requires someone who can tell the difference.

The part that's hard to automate

There's a story from the Linux kernel community that's stuck with me. A maintainer rejected an AI-generated patch - technically correct, passed the tests, would have worked fine. They rejected it because it introduced unnecessary complexity, made architectural assumptions that didn't fit the codebase's direction, and buried constraints that would cause problems later.

The code worked, but it didn't belong.

That distinction - between "does it work?" and "should it exist?" - is what I keep coming back to. Steve Jobs (you may not have heard of him, he was pretty underground in the tech scene) had a framing for this: taste comes from exposing yourself to the best things humans have done, understanding why they're good, and bringing that understanding forward.

In engineering terms, it's exposure to well-designed systems, an understanding of long-term consequences, and the pattern recognition to know when something feels off before you can fully articulate why.

None of that is something you can delegate to a model. The model doesn't know your codebase's trajectory. It doesn't know which trade-offs your team has already made or why. It doesn't know that the technically elegant solution is wrong because it assumes a data model you're planning to migrate away from next quarter.
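Here's a toy sketch of that last point. The names and the data model are hypothetical, invented for illustration, but the shape of the problem is the one described above: both functions pass the same tests, and only someone who knows where the codebase is headed can tell which one belongs.

```python
# Hypothetical example: two implementations that both "work",
# but only one belongs in a codebase planning to migrate away
# from a legacy nested-dict user schema next quarter.

LEGACY_USER = {"name": "Ada", "prefs": {"theme": "dark"}}  # schema slated for removal

def get_theme_coupled(user: dict) -> str:
    # Technically correct today: reaches straight into the legacy
    # nested structure. Every caller like this is one more place
    # the migration has to touch, and one more place it can break.
    return user["prefs"]["theme"]

def get_theme_isolated(user: dict) -> str:
    # Same behaviour, but the schema knowledge lives in one place
    # and missing prefs get a sensible default. When the data model
    # changes, this is the only function that needs updating.
    prefs = user.get("prefs") or {}
    return prefs.get("theme", "light")
```

Any test suite that checks the theme comes back as "dark" passes for both. The difference only shows up when you ask "should this exist?" rather than "does this work?".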

Every review is a taste exercise

Coming back to The Perception Gap: the Anthropic study found that developers who used AI for comprehension - asking why, interrogating the output - scored dramatically higher than those who just delegated. I think what those developers were actually doing was building taste.

Every time you review AI-generated code and think "this works but I don't like it" - that's your evaluation function running. You're comparing what's in front of you against an internal standard built from years of reading code, shipping code, debugging code at 2am and fighting code that someone else thought was fine.

OpenAI's Codex team writes about 30% of their code by hand. That 30% isn't random - it's the parts where judgment matters most. The architecture decisions, the integration points, the parts where getting it wrong would compound.

That ratio will shift over time. But the need for someone who knows which 30% matters? That's not going anywhere.

What the moat is actually made of

I wrote in my first post that engineering feels like it's heading toward an "architect/code reviewer/agent line manager hybrid." The research from The Perception Gap backs that up. This post is the third piece: if that's the role, taste is the core skill.

Not taste in the aesthetic sense - nobody's asking you to pick fonts (unless, y'know, they do. In which case, have fun). Taste in the engineering sense: systems thinking, architectural judgment, the ability to look at something that passes every test and say "no, this isn't right, here's why."

"I know React" becomes less valuable when Claude Code writes React as well as you do. "I can design systems that are reliable under load" is more valuable than ever because that requires judgment the tools don't have. Years of experience writing code starts to matter less than years of experience making good calls.

The moat doesn't get cheaper

The tools are going to keep getting better at producing code. That's not a threat if you're getting better at knowing what good looks like.

The developers in the Anthropic study who asked "why does this work this way?" weren't just learning more effectively. They were developing the evaluation function that lets them look at AI output - or anyone's output - and make a judgment call. They were building taste.

That's the moat. And unlike code generation, it doesn't get cheaper over time.