English is the new programming language
Here is a thesis that will annoy a lot of programmers: English is becoming a programming language, and code as we know it is becoming an intermediate representation that humans rarely need to read.
This sounds like the kind of breathless AI hype that evaporates on contact with reality, so let me be precise about what I mean. C is a specification language with a deterministic compiler. You write your intent in C, a compiler translates it to machine code, and you debug at the level of C when things go wrong. You do not need to understand the assembly output, and you certainly do not need to understand how NAND gates switch on your processor. The abstraction holds.
English is becoming a specification language with a probabilistic compiler. You write your intent in English, an LLM translates it to code, and you debug at the level of English when things go wrong. The abstraction, increasingly, holds.
The abstraction holds for debugging
The obvious objection is that the abstraction breaks down when something goes wrong. When your C program crashes, you debug in C. When your vibe-coded system crashes, surely you have to descend into the generated code, read it, understand it, and fix it manually?
In practice, no. When a vibe-coded system breaks, you tell the LLM: “When I do X, Y happens, but I want Z to happen.” This is a statement that the system does not faithfully implement the specification. The LLM generates a fix. You test again. The loop closes in English, just as it closes in C.
This is not hypothetical. Claude Opus 4.5 paired with Claude Code is, right now, making complex changes to large brownfield codebases. METR estimates that it can complete tasks that take humans nearly five hours. On SWE-bench Verified, which tests the ability to resolve real GitHub issues in production repositories, Opus 4.5 scores 80.9%, surpassing both GPT-5.2 and Gemini 3 Pro. These are not toy problems or greenfield prototypes; they are real bugs in real systems.
The historical pattern
When compilers first emerged, programmers complained that the generated assembly was ugly and inefficient. A skilled assembly programmer could do better. This was true, and it was also irrelevant. What mattered was correctness and performance at an acceptable level, not whether the intermediate representation was aesthetically pleasing to human eyes. The correct response to “the generated code is ugly” has always been “so what?”
We do not write assembly anymore because we do not need to. The abstraction to C (and later, higher-level languages) held well enough that the generated assembly became someone else’s problem, which is to say, nobody’s problem. The same transition is happening now, one level up the stack.
Specification was always the hard part
There is a common misconception that programming is hard because translating ideas into code is hard. This gets the difficulty backwards. Specification is the hard part; it always was. The challenge is knowing what you want, decomposing a vague problem into precise requirements, designing systems that can be meaningfully tested, and understanding a domain deeply enough to formalise it.
The translation step was not hard so much as slow and expensive. It required humans who had memorised syntax, APIs, and idioms, and who could type accurately for hours. This created a bottleneck that made the translation step feel important. LLMs have removed the bottleneck, and in doing so, revealed what was always underneath: specification and verification are the actual work.
Consider what remains hard even with LLMs as probabilistic compilers. You still need to know what you want. You still need to decompose systems into testable components. You still need to design verification strategies that cover the behaviour space. You still need to understand tradeoffs between competing constraints. None of this has gotten easier; if anything, it has become more exposed now that the typing is near-free.
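The claim that specification and verification are the actual work can be made concrete: an English requirement can often be written down directly as an executable check, against which any generated implementation is judged. Here is a toy sketch in Python; the `slugify` function and its properties are invented for illustration, standing in for whatever behaviour you might specify:

```python
import random
import string

def slugify(title):
    # A candidate implementation (imagine it was generated, not hand-written):
    # lowercase, keep alphanumerics, collapse everything else into single hyphens.
    parts, current = [], []
    for ch in title.lower():
        if ch.isalnum():
            current.append(ch)
        elif current:
            parts.append("".join(current))
            current = []
    if current:
        parts.append("".join(current))
    return "-".join(parts)

def check_specification(impl, trials=1000):
    """The specification itself: properties any correct slugify must satisfy."""
    rng = random.Random(0)
    alphabet = string.ascii_letters + string.digits + " _.,!?-"
    for _ in range(trials):
        title = "".join(rng.choice(alphabet) for _ in range(rng.randrange(30)))
        slug = impl(title)
        assert slug == slug.lower()                         # lowercased
        assert all(c.isalnum() or c == "-" for c in slug)   # url-safe characters only
        assert "--" not in slug                             # no doubled separators
        assert not slug.startswith("-") and not slug.endswith("-")
        assert impl(slug) == slug                           # idempotent
    return True

print(check_specification(slugify))  # True
```

Note where the engineering lives here: not in the fifteen lines of implementation, which are interchangeable, but in `check_specification`, which encodes what "correct" means. That is the part an LLM cannot supply for you, because it is the part that is your intent.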
The unbundling of software engineering
Civil engineers do not lay bricks. Mechanical engineers do not operate lathes. Electrical engineers do not solder every connection. The engineering is the specification, the design, the verification, the systems thinking. The translation to physical artifact has always been a separate concern, handled by trades and manufacturing.
Software was strange because the engineer was the manufacturing line. You designed the system and then you typed it into existence. The two activities got conflated, and we started believing that writing code was the engineering. It was not. It was simply that the translation step happened to require the same person.
LLMs unbundle software engineering from coding in the same way that CAD unbundled mechanical engineering from drafting. The field does not get easier; it gets purified. The engineering was always precise problem decomposition, system design under constraints, verification strategy, tradeoff analysis, and domain modelling. We used to charge for that bundled with ten thousand hours of typing. Now the typing is essentially free.
The skill distribution shifts

If you accept this framing, the implications for who thrives are significant. People who were excellent at coding but mediocre at systems thinking lose their moat. The value of knowing syntax and APIs declines toward zero. The value of deeply understanding a domain, of being able to specify precisely what correct behaviour looks like, of designing systems that are testable at all, goes up.
This reframes the last two decades of tech hiring. If most of the job was translation, taking a spec from a product manager and turning it into code, then you did not need deep domain expertise or years of education. You needed people who could type accurately, learn syntax quickly, and churn through tickets. Bootcamps made sense. “Learn to code in 12 weeks” made sense. Hiring 22-year-olds at mass scale made sense. The industry optimised for the bottleneck, and the bottleneck was typing.
If the bottleneck shifts to specification and verification, the skill profile inverts. Healthcare software needs people who understand healthcare. Financial systems need people who understand finance. The value of “I can learn any tech stack in two weeks” drops; the value of “I have spent fifteen years understanding how hospitals actually work” rises. The undermining of traditional education in favour of bootcamps and self-taught programmers may turn out to have been a local phenomenon, tied to a temporary bottleneck that is now disappearing.
The job title “software engineer” is finally becoming accurate. Engineering has always meant applying human ingenuity to societal problems, producing precise solutions that can be compiled and verified. Civil engineers do not lay bricks; they specify structures. Software engineers, increasingly, do not type code; they specify systems. The mechanical translation is outsourced to the probabilistic compiler, leaving the actual engineering behind.
The remaining objection
The one remaining objection with any teeth is reliability. A deterministic compiler produces the same output every time. A probabilistic compiler does not. Run the same prompt twice, get different code.
This matters less than it first appears. You do not need reproducibility at the generation step; you need reproducibility at the behaviour step. If the generated code passes your tests and meets your specification, it does not matter that a second generation would have produced different code that also passes your tests and meets your specification. Correctness is defined by behaviour, not by the specific tokens in the intermediate representation.
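The distinction between generation-level and behaviour-level reproducibility is easy to demonstrate. Below is a toy illustration (the function names and the spec, "return the unique items of a list, keeping first-occurrence order," are invented for the example): two syntactically different implementations, as two runs of a probabilistic compiler might produce, are indistinguishable to the test suite that defines correctness.

```python
# Two different "generations" of the same English specification:
# "return the unique items of a list, keeping first-occurrence order."

def unique_v1(items):
    # Generation A: explicit loop with a seen-set.
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def unique_v2(items):
    # Generation B: dict keys preserve insertion order (Python 3.7+).
    return list(dict.fromkeys(items))

def behaves_correctly(impl):
    # The behaviour-level check: this is the layer that must be reproducible.
    assert impl([]) == []
    assert impl([3, 1, 3, 2, 1]) == [3, 1, 2]
    assert impl(["a", "a", "a"]) == ["a"]
    assert impl(list(range(5))) == list(range(5))
    return True

print(behaves_correctly(unique_v1), behaves_correctly(unique_v2))  # True True
```

The two bodies share almost no tokens, yet both satisfy the specification, and that is the only equivalence the abstraction requires.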
The deeper question is whether the probabilistic compiler becomes reliable enough, fast enough, across a wide enough range of complexity. Opus 4.5 scores 37.6% on ARC-AGI-2, more than doubling GPT-5.1’s score of 17.6%. This is a test of fluid intelligence and novel problem-solving, not memorisation. The trajectory is steep, and if it continues, the complexity ceiling will keep rising.
English as engineering
I want to be clear that none of this makes software engineering easy. Calculators did not make mathematics easy; they made arithmetic easy, which revealed that the hard part of mathematics was always problem formulation, knowing which operations to apply, and interpreting results.
LLMs do not make software engineering easy. They make coding easy, which reveals that the hard part was always figuring out what to build, how to decompose it, and how to verify that it is correct. This is still a highly technical discipline. You still need to understand distributed systems, eventual consistency, authentication models, failure modes, and a thousand other concepts. You simply express that understanding in English rather than in Python.
Engineering, in the broadest sense, is the application of human ingenuity to societal problems, producing precise solutions that can be compiled (probabilistically or otherwise) and verified. That definition does not require any particular syntax. It requires clear thinking, domain expertise, and the ability to specify what you want. English, it turns out, is sufficient for that.