2022-12-03 Paul Williamson
There has been some fretting lately among programmers about the future of their jobs in a world where AI/ML seems capable of writing code automatically. To think about this, we need a realistic model of AI/ML capabilities. If we just assume that AI/ML can be as smart as a human programmer, maybe smarter, definitely faster, and much cheaper to reproduce and operate, then the whole game is over. In that world, humans are outcompeted by machines in every important way, and programmers will have lots of company in the unemployment lines. Humans will have bigger things to worry about than jobs. That scenario is so far “out there” that even the SF writers haven’t found a way to understand it without artificially crippling the AIs in some way. If we really believe that is coming to pass, we (humans) ought to seriously consider getting started on the Butlerian jihad against AI.
Instead, let’s think about the consequences of AI/ML as it is currently starting to exist in the programming world: as a sophisticated search engine and an improved method of facilitating code re-use. AI/ML applications like ChatGPT are (starting to be) able to find some existing code that seems responsive to a query. That’s the search engine part. Let’s assume that they are then (getting better at) combining multiple pre-existing examples in a “smart” way that seems likely to preserve the good parts of each example in such a way as to solve the requested problem. That’s the code re-use part.
Today’s programmer is very familiar with the efficiency of solving coding problems by searching Stack Overflow for related solutions. With a bit of luck, the search will yield something close enough to the required solution to be re-used in solving the problem at hand. For common well-defined problems, it might be a complete drop-in solution. For less common problems, it might just have some of the parts worked out, and we have to pick and choose and maybe write a bit of new logic to glue it all together. Only when nothing similar is found will we have to write something entirely from scratch. To programmers, this feels a lot like what we imagine the AI does when it coughs up some mutated open-source code in response to a simple query. Because we systematically underestimate how hard it is for a computer to do the kind of logical reasoning that comes easily to human programmers, we imagine that computers are on the edge of surpassing them.
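As a toy illustration of that workflow, suppose the problem at hand is uploading records in fixed-size batches. A search turns up the familiar chunking idiom almost verbatim, and the only genuinely new code is the glue that adapts it to our data (the record IDs here are invented for the example):

    # Found code: essentially verbatim from a typical search result.
    def chunks(lst, n):
        """Yield successive n-sized chunks from lst."""
        for i in range(0, len(lst), n):
            yield lst[i:i + n]

    # Glue code: the part we still write ourselves, adapting the
    # snippet to the actual problem (hypothetical record IDs).
    record_ids = list(range(1, 23))
    for batch in chunks(record_ids, 10):
        print("uploading batch:", batch)

The found part embodies the detail that was worth searching for; the glue is trivial by comparison.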
Searching and code re-use have been around for a long time. A lot of computer science research has gone into looking for ways to make code re-use easier. That’s why we have high-level languages, and extensive libraries of built-in functions, and Stack Overflow. These things have made it possible for humans to create more and more complex software systems, by reducing the number of details that have to be thought about all at once. So far, this has not resulted in any net loss of programming jobs. Rather the opposite, in fact.
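Here’s a tiny sketch of what that reduction in detail looks like in practice, with Python’s standard library standing in for decades of this research:

    from collections import Counter

    text = "the quick brown fox jumps over the lazy dog the fox"

    # By hand: all of the bookkeeping details are ours to get right.
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1

    # With a built-in library: the same result, with the details
    # thought through once, by someone else, and re-used here.
    assert counts == Counter(text.split())

Every library call like this is a small batch of details that nobody has to think about anymore.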
Some kinds of jobs do seem less important now. When we think about programming jobs today, we probably aren’t thinking much about people who are skilled in writing assembly language. Those jobs still exist, albeit in specialized fields such as writing compilers for higher-level languages. They are just embedded in a much larger ecosystem of programming jobs that no longer require thinking at such a low level on a routine basis.
It might be instructive to go back to one of the earliest ideas computer scientists used to think about reaching higher levels of abstraction: that computer programs are like electronic circuit designs. Computer scientists noticed that EE productivity went through the roof with the advent of the integrated circuit. That’s a module that only has to be designed once, with a well-defined documented function and well-defined documented interfaces, easy to mass produce in whatever quantity is required, all functionally identical, and relatively easy to glue together into more complex circuits. Maybe if software could be like that, it would also realize huge increases in productivity. That worked out pretty well, though a true “software IC” technology still seems elusive, and probably always will.
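In software terms, the nearest thing we have looks something like this: a component designed once, with a documented function and a documented interface, meant to be composed rather than re-derived. The names in this sketch are invented purely for illustration:

    from typing import Protocol

    class TemperatureSensor(Protocol):
        """The documented interface: any conforming part drops in."""
        def read_celsius(self) -> float: ...

    class FakeSensor:
        """One interchangeable implementation; a real driver could be another."""
        def read_celsius(self) -> float:
            return 20.0

    def fahrenheit(sensor: TemperatureSensor) -> float:
        """Higher-level code is wired to the interface, not to the part."""
        return sensor.read_celsius() * 9 / 5 + 32

    print(fahrenheit(FakeSensor()))  # 68.0

This captures the interface discipline of the IC, but not the IC’s guarantee that every part off the line behaves identically, which is roughly where the analogy stops holding.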
If you look at the job market for electrical engineers today, decades after the successful introduction of the integrated circuit, what do you see? On the digital side, you see a lot fewer listings for people who can design logic circuits out of transistors, or ALUs out of gates, or even CPUs out of ALUs and gates. The common board-level digital design engineer is more involved in picking existing devices and combining them than in actually designing circuits. Likewise, the FPGA or ASIC designer will often be able to get pretty far by integrating existing cores. The work is mostly at a higher level of abstraction. The engineer ideally needs to be aware of exactly how far they can trust the abstractions to hold, and what to do about it when the abstractions leak. Lots of adequate designs can be churned out with imperfect understanding of these issues.
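Software engineers live with exactly the same trade-off. Even an abstraction as basic as “numbers” leaks; knowing why this little Python sketch prints False is the difference between trusting the abstraction blindly and knowing when not to:

    # Floating point presents the abstraction of real-number arithmetic,
    # but the abstraction leaks: addition is not associative.
    a = (0.1 + 0.2) + 0.3
    b = 0.1 + (0.2 + 0.3)
    print(a)       # 0.6000000000000001
    print(b)       # 0.6
    print(a == b)  # False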
If you look at the market for RF engineers, you see what’s perhaps a later stage of the same evolution. Not only are most RF designs done by using off-the-shelf RF modules, but it is getting very hard to find qualified RF designers at all. As the cohort of RF engineers who had to learn component-level design ages out and retires, it is followed by a group of engineers who were mostly able to avoid worrying about that. Only relatively few picked up the lowest-level circuit design skills. Those who did are highly sought after by companies with RF design requirements.
This trajectory for hardware engineers corresponds to the trend in software engineering toward the use of higher-level languages and environments and away from programming down to the metal. We can expect this trend to continue. Programmers with deep expertise at any level of abstraction will still be needed and employable. The fraction of jobs at the lower layers of abstraction will decrease, but so will the fraction of programmers with extensive lower-layer skills.
This is a problem for the companies that need to hire engineers to work on projects requiring lower-layer expertise, but it’s a boon for those who are able to use higher-layer skills to turn re-usable code into useful applications for the growing market in high-complexity systems. As the scope of the software we can re-use without much thought grows to include bigger chunks of functionality, our power to create highly functional systems increases. When the job is to deploy a system that would have been unthinkably complex just a few years ago, the availability of larger re-usable components can make it possible.
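For a sense of how big those chunks have gotten: a working web server, once a substantial project in its own right, is now a few lines of glue around a standard-library component (the port choice here is arbitrary):

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # A complete, working file server: nearly all the functionality is
    # re-used from the standard library; only the wiring is ours.
    server = HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler)
    print("Serving the current directory at http://localhost:8000 ...")
    server.serve_forever()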
These new higher-layer jobs are not easier. They may be a better match for the kind of intelligence that human programmers have, so that they seem easy and pleasant to work on. That probably makes them a worse match for AI/ML capabilities. There’s no current reason to believe that AI/ML technology is on the verge of making a bot that’s able to understand how things need to fit together in the same way as a human. It seems clear that the current generation of AI/ML experiments is not a giant step in that direction. As an improvement over simple search-engine technology, they have potential to make it easier to collect the background knowledge we need to have at our fingertips. To the extent they’re able to turn a vague query into concrete code, they have potential to reduce the amount of rote typing we have to do.
But these robots are not on the verge of being able to do the whole job, working as a programmer without human intervention. As a programmer, you are not about to lose your job creating computer programs. Instead, your job is about to get easier and more fun, as the AI/ML takes on the more tedious parts. The main thing you have to worry about today is that this transformation is still going to take a long time to arrive.
Paul Williamson