Software Engineering Isn’t Dead Yet
When developers returned from Christmas 2025, many of them were shaken. They had spent the holiday experimenting with a new version of Claude Code, and watched it autonomously build projects that would have taken them weeks. One senior Google engineer reported that it recreated a year’s worth of their work in an hour. A developer on LinkedIn documented building a functional Slack clone, real-time messaging, channels, threads, file uploads and search included, in fourteen days. The conclusion many people drew is obvious: if a machine can do all of this, what exactly is left for the human?
The people closest to the technology seem to agree that the answer is: not much. Boris Cherny, the creator of Claude Code at Anthropic, said on a podcast in February that he has not manually edited a single line of code since November, that coding has effectively been “solved,” and that the job title software engineer will “start to go away” by the end of this year. Dario Amodei, Anthropic’s CEO, told The Economist that the industry may be just six to twelve months away from AI handling most or all of software engineering work end to end. An OpenAI researcher made it pithier: “Programming always sucked. I don’t write code anymore.” Jack Dorsey then cut half of Block’s ten thousand employees, watched the stock jump 24%, and told the market that within a year most companies will do the same.
These are not random pundits. These are the people who built the tools. The anxiety among working engineers is legitimate, the layoffs are real, and anyone who waves all of this away as hype is not paying attention. The question is not whether something significant is happening. It is whether what is happening is what these people say it is.
Consider who is speaking, and why
Boris Cherny built Claude Code and works for a company that is preparing for an IPO. Dario Amodei runs that company. Jack Dorsey needed a compelling narrative for investors on the night he halved his payroll, and the stock market gave him a 24% reward for providing one. This does not make any of them wrong, but it should prompt the same scepticism you would apply to a bank’s assessment of its own loan book, or a pharmaceutical company’s summary of its own drug trial. Bloomberg has already labelled the Block announcement “AI-washing,” and analysts have noted that Block’s gross margins are significantly below comparable payments companies like Visa and Mastercard, suggesting a business that needed to restructure for ordinary financial reasons and found AI a convenient explanation.
The broader pattern is worth naming. Block grew from 3,835 employees at the end of 2019 to over 10,000 before the recent cuts, and is now returning to roughly its pre-pandemic size. Meta, Amazon, Microsoft and Salesforce have all conducted large layoffs over the past two years, most citing AI as a contributing factor. All of them also massively over-hired during the pandemic years, when near-zero interest rates made burning cash on headcount look rational. The Federal Reserve held rates at 0-0.25% from March 2020 until early 2022, and the Bank of England held at 0.1% for the same period; both subsequently raised rates to 5.25% by mid-2023. That swing in the cost of capital is a sufficient explanation for the hiring collapse on its own, without requiring AI to do any of the work. The truth is probably that both forces are operating simultaneously, and AI provides the better press release.
What is genuinely true
Having said all of that, it would be intellectually dishonest to conclude that nothing has changed. The mechanical act of translating a precise specification into working code, the part that required knowing syntax, design patterns, API documentation and boilerplate, is now largely automatable for a wide range of problems. Microsoft reports roughly 30% of code is now AI-generated, whilst Anthropic reports a company-wide figure between 70% and 90%. A study published in Science this year found that around 29% of Python functions on GitHub in the US are now AI-written. The Slack clone built in fourteen days was not a toy: it ran on Node.js, PostgreSQL, Socket.io and React, with 93 commits and a proper deployment story, and it took one person a fortnight to complete something that would previously have required a small team and several months.
Claude Code is genuinely impressive. Anyone who has used it for a sustained project has experienced the specific vertiginous feeling of watching something that used to take days materialise in hours. That experience is real, and the people who have had it are not wrong to find it unsettling.
Coding and engineering were never the same thing
Here is where the death narrative makes its mistake. There is a useful way to think about what a software engineer actually does, which has nothing to do with typing. The job is to translate ambiguous human requirements into a specification so precise and unambiguous that a machine can deterministically execute it. This has historically been the hard part. A compiler does not negotiate, does not infer intent, does not fill in gaps charitably. Every ambiguity in the requirements must be resolved by the engineer before the machine will cooperate. Writing the code was never the bottleneck; it was the residue left over after all the genuinely difficult thinking had already been done.
What Claude Code has changed is which ambiguities you actually need to resolve. For commodity behaviour such as authentication flows, pagination, error handling and form validation, Claude draws on a vast statistical model of how software typically works, and its defaults are usually correct. You no longer need to specify these things because the answer genuinely is close to what most systems do, and Claude knows that. The ambiguities that remain are the ones specific to your situation: the business rule that looks odd until you understand the regulatory history behind it, the edge case that only matters for your particular user base, the architectural decision that is wrong for everyone except you. These are precisely the cases where a statistical model of general software behaviour is useless, where being wrong is most expensive, and where the gap between what Claude produces and what you actually needed is invisible until something fails in production.
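The distinction can be made concrete with a small sketch. Everything in it is invented for illustration: the function names, the refund rule and the regulatory backstory are hypothetical, not drawn from any real system.

```python
def paginate(items, page, page_size=20):
    """Commodity behaviour: an AI's statistical defaults are usually right here.
    Clamp the page number, slice the list, report whether more remains."""
    page = max(1, page)
    start = (page - 1) * page_size
    return {
        "items": items[start:start + page_size],
        "page": page,
        "has_more": start + page_size < len(items),
    }


def can_refund(order):
    """Organisation-specific behaviour: no statistical model of how software
    usually works can supply this rule, because it encodes a (hypothetical)
    regulatory history unique to one business."""
    # Refunds after 30 days are allowed ONLY for the enterprise tier --
    # say, a concession made after a compliance settlement. That context
    # lives in the organisation, not in any training data.
    if order["days_since_purchase"] <= 30:
        return True
    return order["customer_tier"] == "enterprise"
```

The first function is the kind of thing Claude's defaults get right unprompted; the second is invisible to it unless someone who knows the history writes it into the specification.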
Teams that have already made the transition to AI-first development are discovering this in practice. The bottleneck that has emerged is not a shortage of code; it is a shortage of people who can write precise, well-reasoned specifications. Getting Claude to build the right thing turns out to require exactly the skills that distinguished good engineers from mediocre ones in the first place, just expressed differently.
The abstraction stack
Every engineer is always operating at some level of abstraction above the machine. The most junior are thinking about individual functions and whether they work. Mid-level engineers think about how modules fit together. Senior engineers think about systems, architecture and operational consequences. The best think about users and business problems and work backwards from there. This hierarchy has always existed; what has changed is how far up it AI can now reach.
The answer, for the lower levels, is: quite far. Given a clear specification of what a module should do, Claude will produce something that works with impressive reliability. The problems emerge as you move upward, because the higher levels of the stack are exactly where organisational context lives. Architectural decisions require knowing what this system needs to do in the future, not just today. User experience decisions require knowing how this specific user population actually behaves, not how users behave in general. Business logic decisions require knowing why the constraints exist, not just what they are. These are not things you can retrieve from a statistical model of how software is generally built.
The implication is that the engineers most at risk are those whose value was concentrated at the levels AI now handles credibly: the people for whom writing the code was the primary contribution. The engineers operating higher up the stack are not threatened; they are more powerful, because the cost of executing their decisions has dropped dramatically. A Staff engineer who previously needed a team of five to implement an architectural vision can now achieve the same outcome with Claude and one other person. That is a productivity gain for the Staff engineer and a structural reduction in demand for the people below them, which is uncomfortable but not the same thing as the profession being permanently eradicated.
Why context is not a soft argument
The most obvious challenge to this argument is that AI will simply get better at context too, and the organisational knowledge advantage will evaporate. It is worth being precise about why this is harder than it sounds, because the answer is not a vague appeal to human intuition but a concrete technical limitation.
A large language model working on a software problem has access to an enormous amount of general world knowledge, but a practically limited working context: somewhere in the range of 200,000 tokens before recall quality degrades significantly, even in the largest current models. Against that, consider what a software engineer with three years at a company actually carries: the decision history of the codebase, the reasons the odd architectural choice in the payments module exists, the knowledge of which enterprise client demanded the edge case that nobody can remove, the understanding of why the data model was designed the way it was even though it looks wrong, the memory of every production incident and what caused it. Capturing that in digital form and feeding it to a model at the right moment is not a solved problem; it is where the frontier of AI research and tooling is currently concentrated, and progress is slow precisely because the context is tacit, distributed and often never written down at all.
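The arithmetic behind that limitation can be sketched on the back of an envelope. The figures below are illustrative assumptions, not measurements: roughly four characters per token is a common rule of thumb, and the corpus sizes are invented for a hypothetical mid-size product.

```python
CHARS_PER_TOKEN = 4        # common rule of thumb, not a property of any model
CONTEXT_TOKENS = 200_000   # practical working context cited in the text


def fits_in_context(total_chars, context_tokens=CONTEXT_TOKENS):
    """Fraction of a text corpus that fits in one context window."""
    capacity_chars = context_tokens * CHARS_PER_TOKEN  # ~800,000 characters
    return min(1.0, capacity_chars / total_chars)


# Hypothetical mid-size product: 2 MB of source, plus years of design docs,
# incident reports and chat history -- say 50 MB of recorded context, which
# itself omits everything that was never written down at all.
codebase = 2 * 1024 * 1024
recorded_context = 50 * 1024 * 1024

print(f"{fits_in_context(codebase):.0%}")                      # → 38%
print(f"{fits_in_context(codebase + recorded_context):.1%}")   # → 1.5%
```

Under these assumptions, less than half the source alone fits in a single window, and under two percent of the recorded organisational context does. The engineer with three years at the company is, in effect, a retrieval system over the other ninety-eight percent, including the part that exists only in people's heads.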
What the employment data actually shows
Stepping back from the theoretical argument, the numbers do not support a profession in collapse. The most recent data from the US Bureau of Labor Statistics, published in February 2026, shows more than 6.6 million workers currently employed in tech occupations across the United States, with an unemployment rate of 3.6% against a national rate of 4.3%. Tech workers are unemployed at a meaningfully lower rate than the general population.
Job postings are down roughly 35% from their 2022 peak, and this is the number that tends to drive the panic. The problem with using 2022 as a baseline is that 2022 was one of the most anomalous hiring environments in the history of the industry. VC investment hit all-time records, digital transformation spending was at fever pitch, and virtually free money meant that over-hiring was rational. Comparing current postings to that peak is like measuring the decline of restaurant attendance by comparing to the last Saturday before a lockdown. The more informative comparison is to pre-pandemic levels, and on that measure the market is softer but far from collapsing. The underlying demand for people who can specify, architect and take responsibility for complex software systems has not gone away. What has gone away is the demand for large volumes of graduates to do the mechanical implementation work that sits below it.
The junior market will clear, eventually
The most acute and legitimate pain in the current market is at the entry level. Graduate hiring at the fifteen largest US tech companies is down 55% since 2019, according to SignalFire. The junior developer market in the UK is running about 40% below its November 2022 levels. This is real, and it is causing genuine hardship for people who made decisions based on a pre-AI world.
The likely resolution is a new equilibrium at a substantially lower entry-level wage, structured more like medicine than the gold rush of the past decade. A doctor in the United States spends four years in undergraduate education, four years in medical school at a total cost that can exceed $300,000, and then earns $60,000 to $80,000 a year during a residency lasting three to seven years depending on speciality. This arrangement is tolerated because of a simple calculation: the path to an attending physician's salary, which reaches $400,000 or more in competitive specialities, is real and credible. An engineering apprentice in the new equilibrium, earning perhaps $30,000 to $40,000 a year while developing the organisational context and judgment that AI cannot replicate, is in a less punishing position than a medical resident: cash flow positive from day one, no six-figure educational debt, and a path to senior compensation that is rising rather than falling as the premium for genuine expertise grows.
The title is going, the job is not
Cherny’s specific prediction, that the title software engineer will disappear and be replaced by something like “builder,” is probably correct about the label and wrong about the implication. The role that is emerging from AI-augmented development is broader than the traditional software engineer, not narrower. It requires understanding users well enough to specify what they actually need, decomposing complex problems into pieces that an AI can execute faithfully, reviewing and validating the result with enough technical depth to catch the plausible-but-wrong outputs, and understanding the operational, security and business consequences of what gets shipped. That is more than the traditional software engineer role required of most practitioners, not less.
The argument that this somehow represents the end of software engineering rests on conflating the typing with the thinking. By the same logic, the invention of the calculator was the end of mathematics, spreadsheet software was the end of accounting, and computer-aided design was the end of engineering. In each case the tools automated the laborious mechanical work while the underlying discipline became more important and more powerful. The abstraction floor rose, the profession moved with it, and the people who understood what they were actually doing found themselves with more leverage than before. Software engineering in 2026 is at the same inflection point. The people who understood what they were actually doing were never just writing code.