This is an opinion piece about the future of software engineering. It was not written as a prediction but as a conversation starter.

2025 has been the year of agentic AI in software engineering. Copilot, Claude Code, Codex, Cursor… There is no shortage of tools to assist software engineers, and they are getting better every month. A friend recently told me that with the latest Codex app and gpt 5.4, writing code manually was definitely over.

What do software engineers do?

Does that mean that software engineering is over? It would if the job only involved writing code. Fortunately, it does not. I would even dare to say that writing the actual code is probably the easiest part of the job. It is what you do once everything else is figured out: modeling, system design, algorithms, software patterns…
For example, I have been working on a FastAPI backend over the last few months, where I adopted a clean architecture pattern. Once that is in place and you have clearly defined an object like a booking, do you really need to write CRUD endpoints by hand? Of course not; AI does that for you just fine. Luckily, the value of a SWE is not in writing CRUD endpoints but in designing a data model, a coding pattern, and clear naming conventions that make sense to the business. Once those are settled, the actual endpoints are mostly boilerplate.
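To make the "mostly boilerplate" point concrete, here is a minimal, framework-agnostic sketch of the kind of CRUD layer an AI generates in seconds once the model is defined. The Booking fields here are hypothetical placeholders; in the real app this store would sit behind FastAPI routes.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional
from uuid import uuid4


@dataclass
class Booking:
    # Hypothetical fields; the real domain model is the part a SWE designs.
    guest_name: str
    room_id: str
    id: str = field(default_factory=lambda: uuid4().hex)


class BookingStore:
    """In-memory CRUD layer: pure boilerplate once the model is settled."""

    def __init__(self) -> None:
        self._items: Dict[str, Booking] = {}

    def create(self, booking: Booking) -> Booking:
        self._items[booking.id] = booking
        return booking

    def read(self, booking_id: str) -> Optional[Booking]:
        return self._items.get(booking_id)

    def update(self, booking_id: str, **changes) -> Optional[Booking]:
        booking = self._items.get(booking_id)
        if booking is None:
            return None
        for key, value in changes.items():
            setattr(booking, key, value)
        return booking

    def delete(self, booking_id: str) -> bool:
        return self._items.pop(booking_id, None) is not None
```

None of this required a design decision; every decision that mattered (what a booking *is*, how it is named, where this layer sits) happened before the first line was typed.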
So yes, maybe writing code is over, or soon will be. Is that a problem? Well, as usual, it depends. As I said, if you are in a senior position, your value is not in actually writing the code but in system design and understanding business needs. So you may not write code by hand anymore, but that just means you will be faster. For juniors, however, it is an entirely different story. Juniors usually do not have the experience to reason in terms of systems. They start by executing a senior's vision, writing the actual code. That part is clearly at risk: I think AI can already write code better than most juniors, and most seniors too, by the way. The problem is that today's juniors are tomorrow's seniors. So if AI replaces juniors, the pipeline that produces seniors dries up, and AI will need to replace seniors as well in the near future. Which means it will need to be able to do everything I mentioned above.

AI that does everything

Will AI be able to do system design, understand business requirements, reason about budgets and trade-offs, and all the rest? Most likely yes. Actually, it can already do some of that, in addition to writing code. My take is that AI will be able to replace SWEs entirely in the next few years, regardless of seniority.
I am not an AI doomer, though, and I have struggled with that thought lately. But if I am being honest, the pace of AI progress is astonishing, and a lot of things we thought were impossible have been tackled and solved within a few months. So if I set my ego aside and try to be realistic rather than act as a gatekeeper, I can only admit that AI will be able to do a SWE's job entirely. Not now, maybe not next year, but within the next five years? Highly likely.
This hypothesis raises a lot of questions and reactions. Distrust, for one, and it is visible in the community: you don't have to look far to read about AI deleting a production database, or developers complaining about poor AI performance.
However, let's keep that hypothesis: AI will be able to fill any technical position. In that case, not only are SWEs at risk, but so is every job that involves doing something on a computer, which is a lot of jobs. Still, let's focus on SWEs here.
In that world, whole apps can be created in a few minutes by prompting. Vibe coding becomes the de facto way to build anything digital. Let's think about that for a minute. You are a founder with little to no technical knowledge. You can tell an AI "Build an application that does this and that" in a simple chat interface, and it does. The app works, it scales, everything is good.
And one day production breaks. Not because of poor AI code. The code is actually really good: it follows best practices and implements relevant trade-offs; it is actually brilliant. No, production breaks because that's what production does. The best systems are not the ones that never break; they are the ones that are resilient, recover fast, and have great monitoring and alerting. Nevertheless, they do break, because everything eventually does. Because an edge case was not anticipated, because a library or a third party did not behave as expected, or for any other reason. You get my point: systems break.
You could argue that AI would be able to build systems that never break. I don't believe that for a second. I agree that AI could build systems that break far less than current ones. Where current apps break 0.1% of the time, AI-built apps could break 0.01%, 0.001% of the time, or even less. But it will never be 0.
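Those percentages sound negligible until you convert them into time. A quick back-of-the-envelope calculation (treating "breaks X% of the time" as plain unavailability, which is a simplification) shows how much downtime each rate implies over a year:

```python
# Yearly downtime implied by each unavailability rate.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for rate in (0.001, 0.0001, 0.00001):  # i.e. 0.1%, 0.01%, 0.001%
    downtime_minutes = rate * MINUTES_PER_YEAR
    print(f"{rate:.3%} unavailability ≈ {downtime_minutes:.1f} minutes of downtime per year")
```

At 0.1%, that is roughly 525 minutes, almost nine hours of outage a year. Even at 0.001%, you still get around five minutes, and as we will see, a few minutes is enough to matter.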
What does this founder do when production breaks? Probably ask AI to repair it. And it will. Maybe the founder won't even have to ask: AI will monitor the logs continuously and fix issues without any human intervention. The founder will not even know. The users, however, will.
If the system is well architected, the impact may be limited to failed requests, increased latency, perhaps side features switched off. But the bigger and more complex the system, the more parts can break. Downtime may still happen.
So even when everything is automated, there will still be issues. And issues are expensive in production, even a few minutes of downtime. Consider how much an outage of that length costs Google or Amazon.

Who is responsible?

When users experience issues, they are usually not happy. They lose time, money, and reputation, and they want to blame someone. At first, they blame the system that is down. But a system is not accountable for itself. So they blame the people behind it. In this world where a founder builds an app on their own with AI, they blame the founder.
The problem is that the founder has no technical knowledge, so they don't want to be accountable for technical issues. Will they blame the AI then? Yes, certainly. Will that hold? Absolutely not. AI is not a person; it is not accountable for anything. So they will blame the AI providers. If the app Claude Code generated for you breaks, can you blame Anthropic? You can try, but it does not hold legally, because when you use Anthropic's products you accept their Terms of Use, which clearly state that you accept responsibility.
So the founder is the only person who would be legally responsible. Sometimes that represents millions in losses. But the founder does not want that. They are not technical; why should they be responsible for technical issues?
My hypothesis is that when these kinds of issues arise (and remember, they will), founders will start hiring technical people to oversee AI in production. Not to write any code or do anything technical themselves, but to be responsible. And to be responsible, they will need to understand what the system actually does. These are the future CTO roles.
So when production breaks again (because, again, it always does), these CTOs will be responsible. Founders will still take some heat, but they will have an actual person with legal accountability to blame. If there is one thing I am sure of, it is that people will never stop looking for someone to blame. CTOs will not have the technical-illiteracy excuse. However, as codebases grow larger, they will need to understand more and more. How could you accept being accountable for something you don't even understand? This accountability will come at a cost; I suspect these CTOs won't be cheap. Even today, we say that C-level executives are not paid for what they do but for the level of responsibility they assume.
But as codebases grow, no amount of money will convince a CTO to bear responsibility for everything. So they will need to hire other technical roles, each accountable for a specific part of the codebase, something resembling the staff engineers of today. And this will go on and on as codebases grow larger.

The future of SWE

As I thought about all this, I reckoned it was kind of a shame: a tool so powerful it can automate an entire aspect of your life, yet one you cannot entirely trust because of legal accountability. This led me to think about self-driving cars.
We have been hearing about this technology for about two decades. It was a super hot topic ten years ago. The technology is supposedly ready and drives better and more safely than the majority of humans. But how many self-driving cars do you see on the streets and highways today? None. Even with Teslas, you are supposed to stay aware and watchful at all times (and obviously everyone does). We are barely at level 3 out of 5 of autonomous driving. Why is that? Because of accidents.
Like production failures, road accidents will always happen; there are just too many unpredictable variables. Even when self-driving cars are safer than human drivers, there will sadly still be crashes with victims (hopefully far fewer). The problem is then the same as above: who is responsible? The car? Not a person. The manufacturer? Their reputation will take a hit for sure, but they explicitly state that you must stay alert and remain legally responsible, except under very specific conditions. The driver? Their hands were not on the wheel, so they don't want the blame, but legally it leans toward them, yes.
I think that is why you don't see self-driving cars today, even though the tech performs well in test environments. It is not a technical problem; it is a legal and ethical problem. I believe the same is true for AI-generated systems in production. There won't be victims (hopefully), but there will be losses.

Just because AI will be able to do the job of a SWE does not mean it will, not entirely on its own anyway. Because there is a level of risk people just cannot accept, not without someone to blame.
I realize this paints a rather bleak picture of the future of SWE roles: being the fall guy for companies' technical issues. But it is a strong possibility. I assume there will be fewer roles, and they will be very senior. However, as I said earlier, you still need juniors to become the seniors of tomorrow. This will be a long-term issue: to secure their future, companies will still need to hire and train juniors.
Will the job be easier, though? After all, there is no need to write code, design systems, or even translate business requirements anymore. I am not so sure. In order to be accountable, SWEs will need to understand what is happening. They may not write or design, but they will need to understand the outputs. Given the level of AI today, and that it will only get better, one can assume the code will follow rigorous patterns and use clever algorithms and cutting-edge libraries and tools… Understanding all of that is no easy task.

Conclusion

So will software engineers disappear? My take is they won't, not for as long as there is software. Not because anyone will need them to write it, maintain it, or even deploy it, but because we will always need someone to understand what is going on. Because no one wants to take the blame for something they do not even understand.