The End of Programming as We Know It


There’s a lot of chatter in the media that software developers will soon lose their jobs to AI. I don’t buy it.

It is not the end of programming. It is the end of programming as we know it today. That is not new. The first programmers connected physical circuits to perform each calculation. They were succeeded by programmers writing machine instructions as binary code to be input one bit at a time by flipping switches on the front of a computer. Assembly language programming then put an end to that. It let a programmer use a more human-readable language to tell the computer to move data to locations in memory and perform calculations on it. Then the development of even higher-level compiled languages like Fortran, COBOL, and their successors C, C++, and Java meant that most programmers no longer wrote assembly code. Instead, they could express their wishes to the computer using higher-level abstractions.



Betty Jean Jennings and Frances Bilas (right) program the ENIAC in 1946. Via the Computer History Museum

Eventually, interpreted languages, which are much easier to debug, became the norm. 

BASIC, one of the first of these to hit the big time, was at first seen as a toy, but soon proved to be the wave of the future. Programming became accessible to kids and garage entrepreneurs, not just the back office priesthood at large companies and government agencies.

Consumer operating systems were also a big part of the story. In the early days of the personal computer, every computer manufacturer needed software engineers who could write low-level drivers that performed the work of reading and writing to memory boards, hard disks, and peripherals such as modems and printers. Windows put an end to that. It didn’t just succeed because it provided a graphical user interface that made it far easier for untrained individuals to use computers. It also provided what Marc Andreessen, whose company Netscape was about to be steamrollered by Microsoft, dismissively (and wrongly) called “just a bag of drivers.” That bag of drivers, fronted by the Win32 APIs, meant that programmers no longer needed to write low-level code to control the machine. That job was effectively encapsulated in the operating system. Windows and macOS, and for mobile, iOS and Android, mean that today, most programmers no longer need to know much of what earlier generations of programmers knew.

There were more programmers, not fewer

This was far from the end of programming, though. There were more programmers than ever. Users in the hundreds of millions consumed the fruits of their creativity. In a classic demonstration of elasticity of demand, as software was easier to create, its price fell, allowing developers to create solutions that more people were willing to pay for.

The web was another “end of programming.” Suddenly, the user interface was made up of human-readable documents, shown in a browser with links that could in turn call programs on remote servers. Anyone could build a simple “application” with minimal programming skill. “No code” became a buzzword. Soon enough, everyone needed a website. Tools like WordPress made it possible for nonprogrammers to create those websites without coding. Yet as the technology grew in capability, successful websites became more and more complex. There was an increasing separation between “frontend” and “backend” programming. New interpreted programming languages like Python and JavaScript became dominant. Mobile devices added a new, ubiquitous front end, requiring new skills. And once again, the complexity was hidden behind frameworks, function libraries, and APIs that insulated programmers from having to know as much about the low-level functionality that had been essential for them to learn only a few years before.

Big data, web services, and cloud computing established a kind of “internet operating system.” Services like Apple Pay, Google Pay, and Stripe made it possible to do formerly difficult, high-stakes enterprise tasks like taking payments with minimal programming expertise. All kinds of deep and powerful functionality was made available via simple APIs. Yet this explosion of internet sites and the network protocols and APIs connecting them ended up creating the need for more programmers.
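To make that concrete, here is a minimal sketch of what “taking payments” can look like today, assuming Stripe’s Python library; the key, amount, and identifiers are placeholders, and a real integration involves considerably more than this:

```python
# A minimal sketch (not production code): charging a customer through a
# hosted payments API. The card-network, fraud, and compliance work that
# once required specialist teams sits behind a single call.
import stripe

stripe.api_key = "sk_test_..."  # placeholder test key

intent = stripe.PaymentIntent.create(
    amount=2000,      # $20.00, expressed in cents
    currency="usd",
)
print(intent.id, intent.status)
```

A few lines like these stand in for work that once took a dedicated engineering team, which is exactly why so many more people could build businesses on top of them.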

Programmers were no longer building static software artifacts updated every couple of years but continuously developing, integrating, and maintaining long-lived services. Even more importantly, much of the work at these vast services, like Google Search, Google Maps, Gmail, Amazon, Facebook, and Twitter, was automated at vast scale. Programs were designed and built by humans, not AI, but much of the work itself was done by special-purpose predecessors to today’s general purpose AIs. The workers that do the bulk of the heavy lifting at these companies are already programs. The human programmers are their managers. There are now hundreds of thousands of programmers doing this kind of supervisory work. They are already living in a world where the job is creating and managing digital co-workers.

“Google, Facebook, Amazon, or a host of more recent Silicon Valley startups…employ tens of thousands of workers. If you think with a twentieth century factory mindset, those workers spend their days grinding out products, just like their industrial forebears, only today, they are producing software rather than physical goods. If, instead, you step back and view these companies with a 21st century mindset, you realize that a large part of the work of these companies – delivering search results, news and information, social network status updates, and relevant products for purchase – is done by software programs and algorithms. These are the real workers, and the programmers who create them are their managers.”—Tim O’Reilly, “Managing the Bots That Are Managing the Business,” MIT Sloan Management Review, May 21, 2016

In each of these waves, old skills became obsolescent—still useful but no longer essential—and new ones became the key to success. There are still a few programmers who write compilers, thousands who write popular JavaScript frameworks and Python libraries, but tens of millions who write web and mobile applications and the backend software that enables them. Billions of users consume what they produce.

Might this time be different?

Suddenly, though, it is seemingly possible for a nonprogrammer to simply talk to an LLM or specialized software agent in plain English (or the human language of your choice) and get back a useful prototype in Python (or the programming language of your choice). There’s even a new buzzword for this: CHOP, or “chat-oriented programming.” The rise of advanced reasoning models is beginning to demonstrate AI that can generate even complex programs with a high-level prompt explaining the task to be accomplished. As a result, there are a lot of people saying “this time is different,” that AI will completely replace most human programmers, and in fact, most knowledge workers. They say we face a wave of pervasive human unemployment.
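As a hedged illustration of what that chat-oriented loop looks like under the hood, here is a sketch using the OpenAI Python SDK; the model name and prompt are placeholders, and any chat-capable model would do:

```python
# A rough sketch of "chat-oriented programming": describe the task in plain
# English, ask the model for a working Python prototype, then review it.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

task = (
    "Write a small Python script that reads a CSV of expenses and prints "
    "total spending per category."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model works
    messages=[{"role": "user", "content": task}],
)

# The reply contains the generated prototype, which a human still runs,
# inspects, and refines.
print(response.choices[0].message.content)
```

That loop of describe, generate, inspect, and refine is exactly what leads so many people to predict that programmers, and knowledge workers generally, are about to be replaced.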

I still don’t buy it. When there’s a breakthrough that puts advanced computing power into the hands of a far larger group of people, yes, ordinary people can do things that were once the domain of highly trained specialists. But that same breakthrough also enables new kinds of services and demand for those services. It creates new sources of deep magic that only a few understand.

The magic that’s coming now is the most powerful yet. And that means that we’re beginning a profound period of exploration and creativity, trying to understand how to make that magic work and to derive new advantages from its power. Smart developers who adopt the technology will be in demand because they can do so much more, focusing on the higher-level creativity that adds value.

Learning by doing

AI will not replace programmers, but it will transform their jobs. Eventually much of what programmers do today may be as obsolete (for everyone but embedded system programmers) as the old skill of debugging with an oscilloscope. Master programmer and prescient tech observer Steve Yegge argues that it is not junior and mid-level programmers who will be replaced but those who cling to the past rather than embracing the new programming tools and paradigms. Those who acquire or invent the new skills will be in high demand. Junior developers who master the tools of AI will be able to outperform senior programmers who don’t. Yegge calls it “The Death of the Stubborn Developer.”

My ideas are shaped not only by my own past 40+ years of experience in the computer industry and the observations of developers like Yegge but also by the work of economic historian James Bessen, who studied how the first Industrial Revolution played out in the textile mills of Lowell, Massachusetts during the early 1800s. As skilled crafters were replaced by machines operated by “unskilled” labor, human wages were indeed depressed. But Bessen noticed something peculiar by comparing the wage records of workers in the new industrial mills with those of the former home-based crafters. It took just about as long for an apprentice craftsman to reach the full wages of a skilled journeyman as it did for one of the new entry-level unskilled factory workers to reach full pay and productivity. The workers in both regimes were actually skilled workers. But they had different kinds of skills.

There were two big reasons, Bessen found, why wages remained flat or depressed for most of the first 50 years of the Industrial Revolution before taking off and leading to a widespread increase of prosperity. The first was that the factory owners hoarded the benefits of the new productivity rather than sharing them with workers. But the second was that the largest productivity gains took decades to arrive because the knowledge of how best to use the new technology wasn’t yet widely dispersed. It took decades for inventors to make the machines more robust, for those using them to come up with new kinds of workflows to make them more effective and to create new kinds of products that could be made with them, for a wider range of businesses to adopt the new technologies, and for workers to acquire the necessary skills to take advantage of them. Workers needed new skills not only to use the machines but to repair them, to improve them, and to invent the future that they implied but had not yet made fully possible. All of this happens through a process that Bessen calls “learning by doing.”

It’s not enough for a few individuals to be ahead of the curve in adopting the new skills. Bessen explains that “what matters to a mill, an industry, and to society generally is not how long it takes to train an individual worker but what it takes to create a stable, trained workforce” (Learning by Doing, 36). Today, every company that is going to be touched by this revolution (which is to say, every company) needs to put its shoulder to the wheel. We need an AI-literate workforce. What is programming, after all, but the way that humans get computers to do our bidding? The fact that “programming” is getting closer and closer to human language, that our machines can understand us rather than us having to speak to them in their native tongue of 0s and 1s, or some specialized programming language pidgin, should be cause for celebration.

People will be creating, using, and refining more programs, and new industries will be born to manage and build on what we create. Lessons from history tell us that when automation makes it cheaper and easier to deliver products that people want or need, increases in demand often lead to increases in employment. It is only when demand is satisfied that employment begins to fall. We are far from that point when it comes to programming.

Not surprisingly, Wharton School professor and AI evangelist Ethan Mollick is also a fan of Bessen’s work. This is why he argues so compellingly to “always bring AI to the table,” to involve it in every aspect of your job, and to explore “the jagged edge” of what works and what doesn’t. It is also why he urges companies to use AI to empower their workers, not to replace them. There is so much to learn about how to apply the new technology. A business’s best source of applied R&D is the explorations of its own people as they use AI to solve their problems and seek out new opportunities.

What programming is will change

Sam Schillace, one of the deputy CTOs at Microsoft, agreed with my analysis. In a recent conversation, he told me, “We’re in the middle of inventing a new programming paradigm around AI systems. When we went from the desktop into the internet era, everything in the stack changed, even though all the levels of the stack were the same. We still have languages, but they went from compiled to interpreted. We still have teams, but they went from waterfall to Agile to CI/CD. We still have databases, but they went from ACID to NoSQL. We went from one user, one app, one thread, to multi distributed, whatever. We’re doing the same thing with AI right now.”

Here are some of the technologies that are being assembled into a new AI stack. And this doesn’t even include the plethora of AI models, their APIs, and their cloud infrastructure. And it’s already out of date!

“AI Engineering Landscape,” via Marie-Alice Blete on GitHub

But the explosion of new tools, frameworks, and practices is just the beginning of how programming is changing. One issue, Schillace noted, is that models don’t have memory the way humans have memory. Even with large context windows, they struggle to do what he calls “metacognition.” As a result, he sees the need for humans to still provide a great deal of the context in which their AI co-developers operate.
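Here is a sketch of what “providing the context” can mean in practice: because the model has no persistent memory of a project, the human, or the tooling around the model, packs conventions, prior decisions, and relevant code into each request. The helper below is illustrative, not any particular product’s API:

```python
# Illustrative only: the human (or tooling) supplies the "memory" the model
# lacks by assembling project context into every prompt it sees.

PROJECT_CONTEXT = """
Conventions: Python 3.12, type hints everywhere, pytest for tests.
Architecture decision: all database access goes through repository classes;
services never talk to the database directly.
"""

RELEVANT_CODE = '''
class InvoiceRepository:
    def fetch_open(self) -> list[dict]: ...
'''

def build_prompt(task: str) -> str:
    """Combine standing project context, relevant code, and the task at hand."""
    return (
        f"{PROJECT_CONTEXT}\n"
        f"Relevant code:\n{RELEVANT_CODE}\n"
        f"Task: {task}\n"
        "Follow the conventions above."
    )

# This assembled string is what actually reaches the model; without it, the
# model cheerfully reinvents (or violates) the project's conventions.
prompt = build_prompt("Add a method to fetch overdue invoices.")
print(prompt)
```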

Schillace expanded on this idea in a recent post. “Large language models (LLMs) and other AI systems are attempting to automate thought,” he wrote. “The parallels to the automation of motion during the industrial revolution are striking. Today, the automation is still crude: we’re doing the cognitive equivalent of pumping water and hammering—basic tasks like summarization, pattern recognition, and text generation. We haven’t yet figured out how to build robust engines for this new source of energy—we’re not even at the locomotive stage of AI yet.”

Even the locomotive stage was largely an expansion of the brute force humans were able to bring to bear when moving physical objects. The essential next breakthrough was an increase in the means of control over that power. Schillace asks, “What if traditional software engineering isn’t fully relevant here? What if building AI requires fundamentally different practices and control systems? We’re trying to create new kinds of thinking (our analog to motion): higher-level, metacognitive, adaptive systems that can do more than repeat pre-designed patterns. To use these effectively, we’ll need to invent entirely new ways of working, new disciplines. Just as the challenges of early steam power birthed metallurgy, the challenges of AI will force the emergence of new sciences of cognition, reliability, and scalability—fields that don’t yet fully exist.”

The challenge of deploying AI technologies in business

Bret Taylor, formerly co-CEO of Salesforce, one-time chief technology officer at Facebook, and, long ago, leader of the team that created Google Maps, is now the CEO of AI agent developer Sierra, a company at the heart of developing and deploying AI technology in businesses. In a recent conversation, Bret told me that he believes that a company’s AI agent will become its primary digital interface, as significant as its website, as significant as its mobile app, perhaps even more so. A company’s AI agent will have to encode all of its key business policies and processes. This is something that AI may eventually be able to do on its own, but today, Sierra has to assign each of its customers an engineering team to help with the implementation.

“That last mile of taking a cool platform and a bunch of your business processes and manifesting an agent is actually pretty hard to do,” Bret explained. “There’s a new role emerging now that we call an agent engineer, a software developer who looks a little bit like a frontend web developer. That’s an archetype that’s the most common in software. If you’re a React developer, you can learn to make AI agents. What a wonderful way to reskill and make your skills relevant.”

Who will want to wade through a customer service phone tree when they could be talking to an AI agent that can actually solve their problem? But getting those agents right is going to be a real challenge. It’s not the programming that’s so hard. It’s deeply understanding the business processes and rethinking how they can be transformed to take advantage of the new capabilities. An agent that simply reproduces existing business processes will be as embarrassing as a web page or mobile app that simply recreates a paper form. (And yes, those do still exist!)

Addy Osmani, the head of user experience for Google Chrome, calls this the 70% problem: “While engineers report being dramatically more productive with AI, the actual software we use daily doesn’t seem like it’s getting noticeably better.” He notes that nonprogrammers working with AI code generation tools can get out a great demo or solve a simple problem, but they get stuck on the last 30% of a complex program because they don’t know enough to debug the code and guide the AI to the correct solution. Meanwhile:

When you watch a senior engineer work with AI tools like Cursor or Copilot, it looks like magic. They can scaffold entire features in minutes, complete with tests and documentation. But watch carefully, and you’ll notice something crucial: They’re not just accepting what the AI suggests…. They’re applying years of hard-won engineering wisdom to shape and constrain the AI’s output. The AI is accelerating their implementation, but their expertise is what keeps the code maintainable. Junior engineers often miss these crucial steps. They accept the AI’s output more readily, leading to what I call “house of cards code” – it looks complete but collapses under real-world pressure.

In this regard, Chip Huyen, the author of the new book AI Engineering, made an illuminating observation in an email to me:

I don’t think AI introduces a new kind of thinking. It reveals what actually requires thinking.

No matter how manual, if a task can only be done by a handful of those most educated, that task is considered intellectual. One example is writing, the physical act of copying words onto paper. In the past, when only a small portion of the population was literate, writing was considered intellectual. People even took pride in their calligraphy. Nowadays, the word “writing” no longer refers to this physical act but the higher abstraction of arranging ideas into a readable format.

Similarly, once the physical act of coding can be automated, the meaning of “programming” will change to refer to the act of arranging ideas into executable programs.

Mehran Sahami, the chair of Stanford’s CS department, put it simply: “Computer science is about systematic thinking, not writing code.”

When AI agents start talking to agents…

…precision in articulating the problem gets even more important. An agent as a corporate frontend that provides access to all of a company’s business processes will be talking not just to consumers but also to agents for those consumers and agents for other companies.

That entire side of the agent equation is far more speculative. We haven’t yet begun to build out the standards for cooperation between independent AI agents! A recent paper on the need for agent infrastructure notes:

Current tools are largely insufficient because they are not designed to shape how agents interact with existing institutions (e.g., legal and economic systems) or actors (e.g., digital service providers, humans, other AI agents). For example, alignment techniques by nature do not assure counterparties that some human will be held accountable when a user instructs an agent to perform an illegal action. To fill this gap, we propose the concept of agent infrastructure: technical systems and shared protocols external to agents that are designed to mediate and influence their interactions with and impacts on their environments. Agent infrastructure comprises both new tools and reconfigurations or extensions of existing tools. For example, to facilitate accountability, protocols that tie users to agents could build upon existing systems for user authentication, such as OpenID. Just as the Internet relies on infrastructure like HTTPS, we argue that agent infrastructure will be similarly indispensable to ecosystems of agents. We identify three functions for agent infrastructure: 1) attributing actions, properties, and other information to specific agents, their users, or other actors; 2) shaping agents’ interactions; and 3) detecting and remedying harmful actions from agents.
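To make the first of those functions, attributing actions, slightly more tangible, here is a purely hypothetical sketch, not drawn from the paper, of an agent signing each action record with a key bound to an authenticated user so that a counterparty can check who is accountable:

```python
# Hypothetical sketch of action attribution: each agent action is signed with
# a key issued to an authenticated user, so counterparties can verify which
# user stands behind which agent behavior.
import hashlib
import hmac
import json
import time

USER_AGENT_KEY = b"key-issued-after-user-authentication"  # e.g., via an OpenID flow

def signed_action(agent_id: str, user_id: str, action: dict) -> dict:
    record = {
        "agent_id": agent_id,
        "user_id": user_id,
        "action": action,
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(USER_AGENT_KEY, payload, hashlib.sha256).hexdigest()
    return record

# A counterparty holding the same key (or a trusted verification service)
# can recompute the signature to confirm the record was not forged.
example = signed_action(
    agent_id="acme-support-agent",
    user_id="user-1234",
    action={"type": "issue_refund", "order": "A-987", "amount_usd": 42},
)
print(example["signature"][:16], "...")
```

Real agent infrastructure would need far more than this, from key issuance and revocation to shared standards that independent parties actually adopt.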

There are huge coordination and design problems to be solved here. Even the best AI agents we can imagine will not solve complex coordination problems like this without human direction. There is enough programming needed here to keep even AI-assisted programmers busy for at least the next decade.

In short, there is a whole world of new software to be invented, and it won’t be invented by AI alone but by human programmers using AI as a superpower. And those programmers need to acquire a lot of new skills.

We are in the early days of inventing the future

There is so much new to learn and do. So yes, let’s be bold and assume that AI co-developers make programmers ten times as productive. (Your mileage may vary, depending on how eager your developers are to learn new skills.) But let’s also stipulate that once that happens, the “programmable surface area” of a business, of the sciences, of our built infrastructure will rise in parallel. If there are 20x the number of opportunities for programming to make a difference, we’ll still need twice as many of those new 10x programmers!

User expectations are also going to rise. Businesses that simply use the greater productivity to cut costs will lose out to companies that invest in harnessing the new capabilities to build better services.

As Simon Willison, a longtime software developer who has been at the forefront of showing the world how programming can be easier and better in the AI era, notes, AI lets him “be more ambitious” with his projects.

Take a lesson from another field where capabilities exploded: It may take as long to render a single frame of one of today’s Marvel superhero movies as it did to render the entirety of the first Pixar film, even though CPU/GPU price and performance have benefited from Moore’s Law. It turns out that the movie industry wasn’t content to deliver low-res, crude animation faster and more cheaply. The extra cycles went into thousands of tiny improvements in realistic fur, water, clouds, reflections, and many, many more pixels of resolution. The technological improvement resulted in higher quality, not just cheaper/faster delivery. There are some industries made possible by choosing cheaper/faster over higher production values (consider the explosion of user-created video online), so it won’t be either-or. But quality will have its place in the market. It always does.

Imagine tens of millions of amateur AI-assisted programmers working with AI tools like Replit and Devin or enterprise solutions like those provided by Salesforce, Palantir, or Sierra. What is the likelihood that they will stumble over use cases that will appeal to millions? Some of them will become the entrepreneurs of this next generation of software created in partnership with AI. But many of their ideas will be adopted, refined, and scaled by existing professional developers.

The journey from prototype to production

In the enterprise, AI will make it much more possible for solutions to be built by those closest to any problem. But the best of those solutions will still need to travel the rest of the way on what Shyam Sankar, the CTO of Palantir, has called “the journey from prototype to production.” Sankar noted that the value of AI to the enterprise is “in automation, in enterprise autonomy.” But as he also pointed out, “Automation is limited by edge cases.” He recalled the lessons of Stanley, the self-driving car that won the DARPA Grand Challenge in 2005: able to do something remarkable but requiring another 20 years of development to fully handle the edge cases of driving in a city.

“Workflow still matters,” Sankar argued, and the job of the programmer will be to understand what can be done by traditional software, what can be done by AI, what still needs to be done by people, and how you string things together to actually accomplish the workflow. He notes that “a toolchain that enables you to capture feedback and learn the edge cases to get there as quickly as possible is the winning tool chain.” In the world Sankar envisions, AI is “actually going to liberate developers to move into the business much more and be much more levered in the impact they deliver.” Meanwhile, the top-tier subject matter experts will become programmers with the help of AI assistants. It is not programmers who will be out of work. It will be the people—in every job role—who don’t become AI-assisted programmers.

This is not the end of programming. It is the beginning of its latest reinvention.


On April 24, O’Reilly Media will be hosting Coding with AI: The End of Software Development as We Know It—a live virtual tech conference spotlighting how AI is already supercharging developers, boosting productivity, and providing real value to their organizations. If you’re in the trenches building tomorrow’s development practices today and interested in speaking at the event, we’d love to hear from you by March 5. You can find more information and our call for presentations here.


