14 Comments
Niko · Oct 15 (edited)

Warhammer 40k coming early. We'll all be tech-priests soon 🙏.

Thanks for another insightful article! I wonder how you think senior engineers should prepare for the near- to medium-term future.

I have ~12 years of professional experience, spanning data engineering to web development. I personally haven't experienced much of a productivity increase from AI coding tools. Outside of small scripts and starting points for things you shouldn't be implementing yourself anyway (e.g. JSON float parsing), most of my experiments with AI coding tools end in frustration.

When I'm in a mature code base that I'm familiar with, I already have a couple of solutions in my head. Typing them out is not the problem. On the contrary, the process itself forces me to interact with the code base and shows me which of the existing patterns make the new implementation difficult. I can then choose a different approach or refactor first. As a side effect I become intimately familiar with that code base, and that familiarity is where the real productivity gains come from. This is something that most managers I've met under-appreciate, causing them to under-value retention and team stability. You are at your most useless when you inherit a legacy code base. With AI-written code, it always feels that way.

During the early stages of your engineering career, you are mostly trying to get the thing to work at all. In that stage, AI can help, but I'm worried that it also robs you of the opportunity to think through hard problems yourself. However, as a senior you should be choosing from several working solutions and considering the impact on the rest of the system and the team. AI is bad at that. I was hoping that it could speed up prototyping of ideas, but that doesn't seem to work very well in larger code bases where a feature touches many different places. It just creates a huge mess. It never simplifies, always complicates, breaks stuff, and then capitulates. (Or, my personal favorite, claiming to fix a bug by placing a comment that says: "// Here we fix the bug that ...")

My personal take-away is to focus on strengthening my fundamentals, so that future me is in a better position to deal with the inevitable problems that the AI is unable to fix, or that the AI itself caused. To the extent that AI tools – mostly the chat kind – can help with that, I will use them. However, I don't feel that I have much of an edge over less experienced developers "driving" the AI coding agents, so I'm trying to avoid it as much as possible. Unfortunately, that is not a stance that I can comfortably publicize in my organization. It feels a bit like I'm undercover in a cult.

You probably know the Principal Skinner meme: "Am I so out of touch? No, it's the children who are wrong!" Well, nobody wants to be that guy, so I intend to check in on the latest tools every few months. Would you say that I'm missing the boat by not committing to AI coding? How would you strike a good balance? Also, should I recommend that juniors rely less on AI tools even if that means delivering less? That might hurt their performance reviews in the current climate.

Denis Stetskov

This is one of the sharpest takes I’ve read on this.

You’re absolutely right: familiarity is the real productivity gain, and AI strips it away.

What I’m seeing now is that juniors are skipping the “painful familiarity” phase entirely.

They ship faster, but they don’t build intuition, the thing that makes seniors valuable in the first place.

In the short term, that looks efficient.

In the long term, it’s how we lose engineering as a discipline and turn it into prompt assembly.

Ken Granville

I strongly agree with you, Denis. I believe there is a way to address this problem. It involves treating AI as a tool and assessing it on its real merits. Upon doing so, it should become clear to business leaders that there is a better path that is more human aligned.

Denis Stetskov

That’s the sane path, but it requires leaders to value time horizons longer than a quarter.

AI isn’t misaligned because of the tech. It’s misaligned because incentives are.

Ken Granville

I actually think there is a gap in innovative solutions. In short, programming languages lack the semantic or meaning basis needed to create the human alignment I refer to. Bridging that gap has inherent attributes that enable human alignment. That’s not a leadership problem as much as it is a technology and philosophy challenge. It’s about first-principles thinking regarding what both humans and machines operate from. The former is driven by intent and meaning. The latter by meaning translated into instructions.

Denis Stetskov

That’s a great point: intent vs. instruction is exactly where alignment fails.

Machines don’t understand meaning; they only understand hierarchy. Humans design hierarchies based on incentives, not intent.

So when language becomes the interface, we don’t get alignment — we get amplification of whatever drives the prompt.

And right now, that’s profit, not purpose.

Ken Granville

Correct. It absolutely matters what the goals are from the outset. In order to create a solution that is transcendent, it cannot be constrained by design decisions that won’t work. It begins with a question. If 99.9% of humans don’t use programming languages and machines don’t either, what would work for both of them? It needs to be a precision approach. Not guessing. There must be transparency and control. In other words, not a language model.

Ken Granville

BTW, I think an intent-driven approach has bigger economic potential than code-driven approaches because it unlocks ideas. A new marketplace for ideas without the wasted energy, complexity, and other problems that come from building on top of substrate that behaves like quicksand.

Ken Granville

As far as language being the interface goes, it can work by being based on semantics and dialog. It also has to be comprehensive and flexible. These are big topics to consider.

Cozmopolit

As a software engineer with over 35 years in the industry who uses AI agents heavily, I couldn't agree more.

One of my favorite quotes on the issue is:

“It’s like getting the mushroom in Super Mario Kart — it makes you go faster, but it doesn’t make you a better driver.”

Joseph Carson (Black Hat USA 2025, 1Password Panel)

Denis Stetskov

That’s a perfect analogy.

AI amplifies direction, not competence.

And when the direction is wrong, acceleration only makes the collapse come faster.

"Supervising AI feels like managing 50 literal junior engineers at once — fast, obedient, and prone to hallucinations. You can’t out-code them. You must out-specify them." My quote from https://techtrenches.substack.com/p/supervising-an-ai-engineer-lessons

M Suliman

I'm almost afraid to say that this sort of de-labouring feels intentional. Between 2010 and the time I resigned in 2019 I'd say more than 85% of the headcount at my company was gone. COVID bought some time, but the local office just closed. Now the company is just a cloud licence mill that outsources deployment and development to the lowest bidder.

Denis Stetskov

Feels that way because it is.

It’s cheaper to rent talent than to build it, until there’s no one left who remembers how things actually work.

Tim Hardcastle

There's a tragedy-of-the-commons situation here, though. Even if everyone at a given company recognizes that only juniors can grow up to be seniors, they can't stop those seniors from going and working for someone else. So it's in their interest that lots of people should be trained, but not that they should do it themselves.
