The end of programming as we know it

This is not the end of programming. It is the end of programming as we know it today. The first programmers wired up physical circuits to perform each calculation. They were replaced by programmers who wrote machine instructions as binary code, entered one bit at a time using switches on the front of the computer. Then assembly language programming ended that. It allowed the programmer to use human-like language to tell the computer to move data into specific memory locations and perform calculations on them. Then the development of even higher-level compiled languages such as Fortran, COBOL, and their successors C, C++, and Java meant that most programmers no longer wrote assembly code. Instead, they could express their wishes to the computer using higher-level abstractions.

Eventually, interpreted languages became the norm; they were much easier to debug.

BASIC, one of the first interpreted languages to achieve widespread success, was initially dismissed as a toy, but it soon proved to herald a new wave. Programming became accessible to children and to entrepreneurs in garages, not just to the back-office staff of large companies and government agencies.

Consumer operating systems also played a big role in this history. In the early days of personal computing, every computer manufacturer needed software engineers who could write the low-level drivers that did the work of reading and writing to memory circuits, hard drives, and peripherals like modems and printers. Windows put an end to that. It succeeded not only because it provided a graphical user interface that made computers much easier for untrained users to operate, but also because it was what Marc Andreessen, whose company Netscape was about to be crushed by Microsoft, dismissively (and wrongly) called “just a bag of drivers.” That bag of drivers, hidden behind the Win32 API, meant that programmers no longer had to write low-level code to control the machine. That task was effectively handled by the operating system. Because of Windows and macOS, and iOS and Android (for mobile devices), most programmers today no longer need to know much of what previous generations of programmers knew.

There were more programmers, not fewer

This was far from the end of programming. There were more programmers than ever before, and hundreds of millions of users consumed the fruits of their creativity. It was a classic example of the elasticity of demand: as software became easier to create, its price fell, and developers could build solutions that more people were willing to pay for.

The Internet became another “end of programming.” Suddenly, the user interface consisted of human-readable documents displayed in a browser, with links that could in turn call programs on remote servers. Anyone could create a simple “app” with minimal programming skills. “No code” became the buzzword. Soon, everyone needed a website. Tools like WordPress made it possible for non-programmers to create websites without coding. However, as technology grew more powerful, successful websites became more and more complex. There was an increasing separation between “front-end” and “back-end” programming. New interpreted programming languages like Python and JavaScript became dominant. Mobile devices added a new, ubiquitous front-end that required new skills. Once again, complexity was hidden behind frameworks, function libraries, and APIs that insulated programmers from having to know as much about the low-level functionality as they had needed to know just a few years earlier.

Big data, web services, and cloud computing have created a kind of “Internet operating system.” Services like Apple Pay, Google Pay, and Stripe have made it possible to perform previously complex, high-risk enterprise tasks like accepting payments with minimal programming knowledge. All sorts of deep, powerful functionality has become available through simple APIs. But this explosion of websites and the network protocols and APIs that connect them has ultimately created a need for more programmers.
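
To appreciate how much heavy lifting now hides behind a simple API call, consider a payment flow. The sketch below uses Stripe’s Python library to create a test-mode payment; the key and the amounts are placeholders, and the exact call shape may vary between library versions, so treat it as illustrative rather than production code.

    # A minimal sketch of accepting a payment through a high-level API.
    # Assumes the official `stripe` Python package; the key and values are placeholders.
    import stripe

    stripe.api_key = "sk_test_..."  # a test-mode secret key, never a real credential in source code

    # One call stands in for what once required bank integrations, certified payment
    # hardware, and a dedicated compliance effort.
    intent = stripe.PaymentIntent.create(
        amount=2000,  # amount in the smallest currency unit, here $20.00
        currency="usd",
        automatic_payment_methods={"enabled": True},
    )

    print(intent.status)  # e.g. "requires_payment_method" until a card is attached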

Programmers were no longer creating static software artifacts that were updated every couple of years, but were continually developing, integrating, and maintaining long-lived services. More importantly, much of the work in these huge services like Google Search, Google Maps, Gmail, Amazon, Facebook, and Twitter was automated on a massive scale. The programs were designed and built by humans, not AI, but much of the work itself was done by specialized precursors to today’s general-purpose AIs. The workers who do most of the heavy lifting in these companies are already programs. Human programmers are their managers. There are now hundreds of thousands of programmers doing this kind of supervisory work. They already live in a world where work involves creating and managing digital colleagues.

In each of these waves, old skills became obsolescent: still useful, but no longer essential. New skills became the key to success. There are still a few programmers writing compilers, and thousands writing popular JavaScript frameworks and Python libraries, but tens of millions are writing web and mobile apps and the back ends that power them. Billions of users consume what they produce.

Could things be different this time?

Suddenly, a non-programmer can simply talk to an LLM or a specialized software agent in plain English (or the human language of their choice) and get back a useful prototype in Python (or the programming language of their choice). There’s even a new buzzword for this: CHOP, or “chat-oriented programming.” Advanced models are starting to show that AI can generate even complex programs from a high-level prompt explaining the task to be performed. As a result, many are saying, “This time is different,” that AI will completely replace most human programmers and, indeed, most knowledge workers. They say we are facing a wave of pervasive unemployment.
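
For readers curious what that looks like in practice, here is a minimal sketch of the chat-oriented loop, assuming the OpenAI Python client; the model name and the prompt are stand-ins, and any generated code still needs to be read, run, and refined by a human.

    # A minimal sketch of "chat-oriented programming": describe the task in plain
    # English, get candidate Python back. Assumes the `openai` package; the model
    # name and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in for any capable code-generation model
        messages=[
            {"role": "system", "content": "You write small, well-commented Python scripts."},
            {"role": "user", "content": "Read expenses.csv and print total spending per month."},
        ],
    )

    # The reply is a prototype to inspect and iterate on, not a finished program.
    print(response.choices[0].message.content)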

I still don’t believe it. When a breakthrough occurs that puts advanced computing power into the hands of a much larger group of people, yes, ordinary people can do things that were once the preserve of highly skilled specialists. But that same breakthrough also opens up new kinds of services and demand for those services. It creates new sources of deep magic that few understand.

The magic that is happening now is the most powerful of all. And that means we are entering a deep period of exploration and creativity, trying to figure out how to make this magic work and extract new benefits from its power. Smart developers who embrace this technology will be in demand because they can do so much more by focusing on a higher level of creativity that adds value.

Learning by doing

AI won’t replace programmers, but it will transform their work. Ultimately, much of what programmers do today may be as obsolete (for anyone but embedded programmers) as the old skill of debugging with an oscilloscope. Programmer and technology visionary Steve Yegge notes that it won’t be junior and mid-level programmers who will be replaced, but rather those who cling to the past and don’t embrace new tools and programming paradigms. Those who learn or invent new skills will be in greater demand. Junior developers who master AI tools will be able to outperform senior programmers who don’t. Yegge calls this “The Death of the Stubborn Developer.”

My ideas are shaped not only by my own 40 years of experience in the computer industry and the observations of developers like Yegge, but also by the work of economic historian James Bessen, who studied how the first industrial revolution played out in the textile mills of Lowell, Massachusetts, in the early 1800s. As skilled artisans were replaced by machines operated by “unskilled” labor, wages did indeed decline. But Bessen noticed something odd when he compared the pay of workers in the new industrial mills with that of the former home-based artisans. It took about as long for an apprentice artisan to reach the full wage of a skilled journeyman as it took a new entry-level factory worker to reach full pay and productivity. The workers in both regimes were, in fact, skilled workers. But they had different kinds of skills.

Bessen found that there were two main reasons why wages remained stagnant or low for much of the first 50 years of the Industrial Revolution before it took off and led to widespread prosperity. The first was that factory owners hoarded the benefits of the new productivity rather than sharing them with their workers. The second was that it took decades for the biggest gains in productivity to materialize, because the knowledge of how best to use the new technology was not yet widely shared. It took decades for inventors to make the machines more reliable, for those who used them to devise new kinds of work processes that made them more efficient, for new kinds of products to be created with them, for a wider range of businesses to adopt the new technology, and for workers to acquire the skills to take advantage of it. Workers needed new skills not just to use the machines but to repair them, to improve them, to invent the futures they implied but had not yet made fully possible. All of this happens through a process that Bessen calls “learning by doing.”

It is not enough for a few people to be ahead of the curve in learning new skills. Bessen explains that “what matters for a plant, an industry, and society as a whole is not how long it takes to train an individual worker, but how long it takes to create a stable, trained workforce” (Learning by Doing, 36). Today, every company that will be affected by this revolution (that is, every company) must lend a helping hand. We need a workforce that is proficient in AI. What is programming, after all, if not the way humans make computers do our bidding? The fact that “programming” is getting closer and closer to human language, that our machines can understand us rather than us having to speak to them in their native language of ones and zeros or some specialized programming language, should be cause for celebration.

People will create, use, and improve more software, and new industries will emerge to manage and grow what we create. History tells us that when automation makes it cheaper and easier to deliver products that people want or need, the increase in demand often leads to an increase in employment. Only when demand is met does employment begin to fall. We are far from that point when it comes to programming.

It’s no surprise that Wharton professor and AI evangelist Ethan Mollick is also a fan of Bessen’s work. That’s why he makes such a compelling case for always inviting AI to the table, incorporating it into every aspect of your work and exploring the “jagged frontier” of what it does well and what it doesn’t. That’s also why he encourages companies to use AI to empower their workers, not replace them. There’s so much to learn about how to apply new technologies. The best research a business can do is to study its own employees as they use AI to solve their problems and find new opportunities.

What programming is will change

Sam Schillace, one of Microsoft’s deputy CTOs, agreed with my analysis. In a recent conversation, he told me, “We’re in the process of inventing a new programming paradigm around AI systems. When we moved from the desktop to the internet, everything in the stack changed, even though all the layers of the stack stayed the same. We still have languages, but they moved from compiled to interpreted. We still have teams, but they moved from waterfall to agile to CI/CD. We still have databases, but they moved from ACID to NoSQL. We moved from single user, single application, single thread to distributed everything. We’re doing the same thing with AI right now.”

A whole array of new technologies is being assembled into the new AI stack, and that doesn’t even include the many AI models, their APIs, and their cloud infrastructure. And the list is already out of date! The explosion of new tools, frameworks, and practices is just the beginning of how programming is changing. One problem, Schillace noted, is that models don’t have the kind of memory humans have. Even with large context windows, they struggle to do what he calls “metacognition.” As a result, he sees a need for humans to continue to provide much of the context in which their AI co-developers operate.

Schillace expanded on this idea in a recent post. “Large language models (LLMs) and other AI systems are trying to automate thinking,” he wrote. “The parallels with the automation of movement during the Industrial Revolution are striking. Today, the automation is still crude: we are doing the cognitive equivalent of pumping water and hammering, basic tasks like summarizing, recognizing patterns, and generating text. We haven’t yet figured out how to build reliable engines for this new energy source; we’re not even at the locomotive stage of AI yet.”

Even the locomotive stage was largely an extension of the raw power humans could use to move physical objects. The next big step was to improve the means of controlling that power. Schillace asks, “What if traditional software engineering isn’t quite right for this? What if building AI requires fundamentally different practices and control systems? We’re trying to build new kinds of thinking (our analog to movement): high-level, metacognitive, adaptive systems that can do more than just repeat pre-designed patterns. To use them effectively, we’ll need to invent entirely new ways of working, new disciplines. Just as the problems of early steam power gave birth to metallurgy, the problems of AI will give rise to new sciences of cognition, reliability, and scalability, fields that don’t yet fully exist.”

The problem of implementing AI technologies in business

Bret Taylor, the former co-CEO of Salesforce, the former CTO of Meta, and the former leader of the team that built Google Maps, is now the CEO of the AI agent company Sierra, which puts him at the center of developing and deploying AI technology in business. In a recent conversation, Bret told me that he believes a company’s AI agent will become its primary digital interface, as significant as its website, as significant as its mobile app, perhaps even more so. A company’s AI agent will have to encode all of its key business policies and processes. This is something AI will eventually be able to do on its own, but today Sierra has to assign each of its customers an engineering team to help with the implementation.

“That last mile, where you take a cool platform and a bunch of your business processes and build an agent, is actually quite difficult,” Bret explained. “There’s a new role emerging now that we call an agent engineer, a software developer who’s a little bit like a front-end developer. That’s the archetype that’s most common in software. If you’re a React developer, you can learn how to build AI agents. What a great way to reskill and make your skills relevant.”

Who wants to wade through a customer service phone tree when they could talk to an AI agent that can actually solve their problem? But getting these agents right will be a real challenge. It’s not the programming that’s hard; it’s developing a deep understanding of existing business processes and of how the new capabilities might transform them. An agent that simply replicates existing business processes will be as awkward as a web page or mobile app that simply recreates a paper form. (And yes, those still exist!)

Addy Osmani, Google Chrome’s head of developer experience, calls this the 70% problem: “While engineers report significant performance gains with AI, the actual software we use every day doesn’t seem to be getting noticeably better.” He notes that non-programmers working with AI code-generation tools may produce a great demo or solve a simple problem, but they get stuck on the last 30% of a complex program because they don’t know enough to debug the code and guide the AI to the right solution. Meanwhile:

When you watch a senior engineer work with AI tools like Cursor or Copilot, it looks like magic. They can build entire features in minutes, complete with tests and documentation. But watch closely, and you’ll notice something important: they’re not just accepting what the AI suggests… They’re applying years of hard-won engineering wisdom to shape and constrain the AI’s output. The AI speeds up their implementation, but their expertise is what makes the code maintainable. Junior engineers often skip these important steps. They accept the AI’s output more readily, resulting in what I call a “house of cards” of code: it looks complete, but it crumbles under the pressure of the real world.

In this regard, Chip Huyen, author of the new book AI Engineering, made the following insightful observation in an email to me:

I don’t think AI introduces a new type of thinking. It shows what actually requires thinking.

No matter how manual the work, if it can only be done by a handful of the most educated people, it is considered intellectual. One example is writing, the physical act of copying words onto paper. In the past, when only a small portion of the population was literate, writing was considered intellectual. People even took pride in their calligraphy. Today, the word “writing” no longer refers to this physical act, but to the higher abstraction of organizing ideas into a readable format.

Likewise, once the physical process of coding becomes automated, the meaning of the word “programming” will change to mean the process of organizing ideas into executable programs.

Mehran Sahami, head of the computer science department at Stanford University, put it simply: “Computer science is about thinking systematically, not writing code.”

When AI agents start talking to agents…

…precision in formulating the problem correctly becomes even more important. An agent that serves as a company’s front end, providing access to all of its business processes, will communicate not only with consumers but also with those consumers’ agents and with the agents of other companies.

This whole side of the agent equation is much more speculative. We haven’t even begun to develop standards for collaboration between independent AI agents! A recent paper on the need for agent infrastructure notes:

Current tools are largely insufficient because they are not designed to shape how agents interact with existing institutions (e.g., legal and economic systems) or actors (e.g., digital service providers, humans, other AI agents). For example, alignment techniques by their nature do not assure counterparties that some human will be held accountable when a user instructs an agent to perform an illegal action. To fill this gap, we propose the concept of agent infrastructure: technical systems and shared protocols external to agents that are designed to mediate and influence their interactions with and impacts on their environments. Agent infrastructure comprises both new tools and reconfigurations or extensions of existing tools. For example, to facilitate accountability, protocols that tie users to agents could build upon existing systems for user authentication, such as OpenID. Just as the Internet relies on infrastructure like HTTPS, we argue that agent infrastructure will be similarly indispensable to ecosystems of agents. We identify three functions for agent infrastructure: 1) attributing actions, properties, and other information to specific agents, their users, or other actors; 2) shaping agents’ interactions; and 3) detecting and remedying harmful actions from agents.

There are huge coordination and design problems to solve here. Even the best AI agents we can imagine won’t solve such complex coordination problems without human guidance. There’s so much programming involved that even AI-enabled programmers will be busy for at least the next decade.
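
To make the first of those functions a little more concrete, here is a purely hypothetical sketch of attributing an agent’s action to the user behind it with a signed record. Every name in it is invented for illustration; a real protocol would involve an identity provider and proper key management rather than a shared secret.

    # Hypothetical sketch: wrap each agent action in a signed attribution record so a
    # counterparty can check which user and agent stand behind it. All names are
    # invented; a real system would rely on an identity provider and per-user keys.
    import hashlib
    import hmac
    import json
    import time

    SHARED_SECRET = b"issued-after-an-openid-style-login"  # placeholder credential

    def attribute_action(user_id: str, agent_id: str, action: dict) -> dict:
        """Wrap an action in a record naming the agent and its user, then sign it."""
        record = {"user": user_id, "agent": agent_id, "action": action, "ts": int(time.time())}
        payload = json.dumps(record, sort_keys=True).encode()
        record["sig"] = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
        return record

    def verify_attribution(record: dict) -> bool:
        """Check that the record is intact and was signed with the shared secret."""
        unsigned = {k: v for k, v in record.items() if k != "sig"}
        payload = json.dumps(unsigned, sort_keys=True).encode()
        expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(record["sig"], expected)

    order = attribute_action("user-123", "shopping-agent-7", {"type": "purchase", "sku": "B0042"})
    print(verify_attribution(order))  # True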

In short, there is a whole world of new software that needs to be invented, and it will be invented not just by AI, but by human programmers using AI as a superpower. And these programmers need to learn a lot of new skills.

We are in the early stages of inventing the future. There’s so much to learn and do. So yes, let’s be bold and assume that AI co-developers make programmers 10x more productive. (Your mileage may vary, depending on how eagerly your developers embrace the new tools and skills.) But let’s also assume that once that happens, the “programmable surface” of business, of science, of our built infrastructure will grow in parallel. If there’s 20x as much programming to be done, we’ll still need twice as many of those new 10x programmers!

User expectations will also rise. Companies that simply use greater productivity to cut costs will lose out to companies that invest in developing new capabilities to create better services.

Simon Willison, a veteran software developer who has been at the forefront of showing the world how programming can be easier and better in the age of AI, notes that AI allows him to “be more ambitious” in his projects.

Take a lesson from another area where the possibilities have exploded: rendering a single frame of one of today’s Marvel superhero movies can take as long as rendering the entire first Pixar film, even though CPU/GPU price and performance have benefited from Moore’s Law. It turns out that the film industry wasn’t content to deliver low-res, crude animation faster and cheaper. The extra cycles went toward thousands of small improvements in the realism of fur, water, clouds, reflections, and countless other details. Technological improvement led to higher quality, not just cheaper and faster delivery. Some industries were made possible by choosing cheaper and faster over higher production values (think of the explosion of user-generated video on the internet), so it won’t be an either/or choice. Quality will have its place in the marketplace. It always has.

Imagine tens of millions of amateur programmers working with AI, using tools like Replit and Devin or enterprise solutions like those from Salesforce, Palantir, or Sierra. What are the chances that they’ll stumble upon use cases that appeal to millions? Some of them will become the entrepreneurs of this next generation of AI-powered software. But many of their ideas will be adopted, improved, and scaled by existing professional developers.

The journey from prototype to production

In business, AI will greatly improve the ability of those closest to any given problem to create solutions. But the best of those solutions will still have to make the rest of the journey, what Shyam Sankar, Palantir’s CTO, calls “the journey from prototype to production.” Sankar noted that the value of AI in the enterprise is “in automation, in the autonomy of the enterprise.” But, he also noted, “automation is limited by edge cases.” He recalled the lessons of Stanley, the self-driving car that won the DARPA Grand Challenge in 2005: it could do something remarkable, but it took nearly another 20 years of development to master the edge cases of everyday urban driving.

“Workflow is still important,” Sankar argues, and the programmer’s job will be to figure out what can be done with traditional software, what can be done with AI, what still needs to be done by humans, and how to tie it all together to actually execute the workflow. He notes that “a toolchain that allows you to get feedback and explore edge cases to get to the goal as quickly as possible is a winning toolchain.” In the world Sankar envisions, AI “will actually free up developers to be much more involved in the business and much more engaged in the impact they have.” Meanwhile, top subject-matter experts will become programmers with the help of AI assistants. It won’t be programmers who are out of work; it will be the people, in every role, who don’t become programmers with the help of AI.