Software Engineering in times of AI and LLMs


Software is not a goal. Solving complex technical problems is not a goal. Writing bug-free code is not a goal. Using sexy programming languages is not a goal. Add revenue. Reduce costs. Those are your only goals.
(Don’t Call Yourself A Programmer, And Other Career Advice by Patrick McKenzie)

After spending two months last year coding with AI in Cursor, using suggestions only, in recent weeks I explored Claude Code's "prompting" approach, and I have to admit the experience has had a fundamentally transformative effect on me. Whereas I was still undecided after the Cursor "suggestions" experience, this time Claude Code fully sold me on AI coding.

Essentially, the way I look at coding with AI now is that, used in the right way, it supercharges the speed and immediacy of the learning feedback loop at the heart of every successful software engineering project. "Used in the right way" means to me: directing the AI so that it produces feedback as quickly as possible on the thoughts, ideas and hypotheses you put forward yourself, to test and validate them, and then, in a dialogue of asking and criticizing, developing and iterating them into a production solution. I think of it as directing a horde of senior devs who generate feedback very quickly: you do the thinking and directing, they do the fact checking and typing.

Without going into too much boring detail, Claude Code supported me in developing a small project in my free time: fleshing out a fully functional Angular UI, Rust backend and Ethereum smart contract, integrating it all, writing tests and doing impressive work in debugging (it's actually fun!), all based on small, focused architectural specs as well as a high-level vision describing the features and the terminology of the project. I did it in a fraction of the time I would have needed without AI, and it is a project I probably would never have been able to tackle, given my demanding full-time job and having recently become a father. Below is how I approached the development using Claude Code, which also shows what I think it means to use AI in the right way:

  • You can’t start with nothing: you need good foundational knowledge and enough experience, such as architecture (systems, application) and programming. This is the stuff you’re gonna leverage in your AI interactions, and I believe the broader your background (with a good T-shape), the more effective you will become.
  • Learn how to learn with AI: learn how to talk to and interact with it so that you get the most learning out of it. Ask questions more than you give commands. I found that Claude Code did a remarkable job explaining and answering questions like:
    • Why did you do X?
    • Why did you do X and Y but not Z?
    • So you are saying A, then am I understanding it correctly that B and C? But what about D?
  • Invest in reading and learning architectural concepts such as systems and application architecture - from actual books.
  • While reading and learning these concepts, use AI to discuss them, asking further questions or requesting clarification. I felt it did a good job there. This helps you clarify and structure your own thoughts and clear up potential confusion.
  • When you want to get into a new programming language like Rust or Go or Solidity, don’t use AI to write the code for you; go the hard way and learn it from scratch, make the investment. AI can always help clarify thoughts and answer questions.
  • When you use AI to piece together a project via prompting, make absolutely sure you invest time in properly reviewing and understanding what it produced.
  • Discuss and critique the stuff the AI generated for you and let it explain it to you, and more importantly: explain your understanding back to the AI. That is the fundamental learning loop.
  • Always use Git or some other version control and go step by step, so you are always able to throw away the changes the AI made in case it gets stuck (which happens).
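The version-control point above can be sketched as a commit-per-step workflow. This is a minimal illustration (file names and commit messages are made up, not from the project): each reviewed AI change becomes its own commit, so a failed iteration can be discarded with a single command.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "dev"
git config user.email "dev@example.com"

# Step 1: the AI produced a change, you reviewed it, commit it.
echo "reviewed change" > feature.txt
git add feature.txt
git commit -q -m "ai: add feature (reviewed)"

# Step 2: the next iteration goes wrong and mangles the file.
echo "broken change" > feature.txt

# Throw the unstaged AI edits away and return to the last good state.
git checkout -- feature.txt
cat feature.txt    # prints "reviewed change"
```

If a whole committed step turns out to be bad, `git reset --hard HEAD~1` rolls it back entirely; either way, nothing the AI did is ever more than one command away from being undone.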

I have had a lot of discussions with devs about AI, and it surprises me how negatively it is often perceived and judged, with a lot of anger and denial involved. An often-raised argument is that AI makes stupid mistakes like incorrectly counting letters in a word or claiming that 8.8 - 8.11 = -0.31 (instead of 0.69). My point is that this doesn’t matter and the argument is beside the point: you are at fault for using AI for these (trivial) tasks; better make sure to match the tool to the problem, not the other way round. Also, I feel that when devs complain about AI producing garbage, it is generally because they fell into the trap of outsourcing their thinking to the AI, that is, letting it take over fully, which is exactly the wrong way and will get you (and humankind) into trouble sooner or later (see below).

I believe the wide use of AI is inevitable, because of the economic incentives and because everyone is doing it. Based on my subjective experience in various fields such as academia, startups and larger enterprises, I think we will see the following changes:

  • A fundamental change in developers’ roles and how they work. The role of a developer was a very different one at the beginning of the 2010s, as in the 2000s, let alone in the early 90s; I think a similar transformation will happen now. Therefore I highly doubt that developers will become obsolete anytime soon, as someone still needs to talk to the AI, feed it the right things in the right way and check whether the results are correct.
  • Price pressure on software service companies. Devs can do more in the same time using AI, or put otherwise: you can do the same with (far) fewer devs.
  • More startups, as 2-3 people can now tackle what took 7-10 before.
  • Consulting will come under increasing pressure to provide true value, which has to be unique experience, project management and people skills, not just providing knowledge.
  • It’s not about the code anymore, which hopefully will lead us to a radical and unapologetic (re)focus on value generation. This is the emphasis of the quote at the top: the job of a dev is to produce value for the customer by whatever means necessary, be it deciding to write no software, using AI, or doing something else entirely. It is not about the dev or the software but about the end product that generates value for the customer.
  • T-shaped generalists are gonna win this revolution. Unless you are some extreme hardcore expert, I assume certain specialist knowledge and/or skills might become obsolete or be provided much faster and cheaper by AI, making the specialists vulnerable to being replaced.
  • The end of offshoring, especially the kind that requires highly specific and detailed up-front instructions simply to be executed by the offshore devs. In that case you are better off with a capable AI, as it gives you the benefit of a tremendously fast feedback loop.
  • Companies that have outsourced software development are put in a stronger negotiating position when they use AI themselves: analyzing code bases, quality-checking delivered artifacts, AI-supported effort estimation.
  • Unis will adapt their curricula, fully embracing AI. The introductory semesters will be hard-core foundation building, doing programming on paper and theory as well as learning concepts, while at the same time using AI for understanding and tutoring; later semesters will use AI to build stuff. The profile of the teacher will change fundamentally, focusing on teaching students how to become effective learners, that is, how to learn learning.

Maybe what we are seeing right now is a massive AI bubble that is gonna burst very soon, because it can’t live up to its promises and expectations, with current models plateauing, only wasting electricity, and AI companies unable to price in the actual costs, as that would drive away customers. However, I don’t think this is gonna happen: yes, it is very much a hype, but I doubt it is a bubble, because we are seeing far too many compelling use cases that already generate tremendous value for customers. And besides, we have opened Pandora’s box, never having come this far in AI before (the Turing Test is basically solved), so I am afraid there is no turning back anymore; humankind needs to deal with the spirits it has conjured. I expect humankind to split three ways:

  1. People who embrace AI and leverage it to supercharge their learning feedback loops to reach unimaginable productive heights.
  2. People who delegate their thinking to AI, sinking into brain rot.
  3. People who reject AI completely and live happily ever after, as long as a strong welfare state is gonna provide for them.

With this three-way split it is clear that we are set up for big changes, and I believe we truly are at a crossroads for humankind, one that is gonna be more far-reaching than the printing press and gunpowder revolutions. In essence I see AI as a mirror of our personalities: it is the biggest temptation humankind has ever faced, namely to reach paradise without hard work and without hard thinking; but that paradise is an illusion, and if we fall for it, hell will follow. Therefore, paraphrasing a dear friend: stay strong, maintain eye contact and assert dominance. Only then might we usher in a new utopian era of humankind.