In 2001 I interviewed the late Gordon Moore, the co-founder of Intel. He was in Cambridge to attend the opening of a new library that he and his wife, Betty, had endowed. We met in the university’s central library and had an agreeable chat about the history of the tech industry and the role that he had played in it. As ever, he was wearing a tacky digital watch that served as a cue for a party trick he liked to play on people. He would ask them what they thought it had cost, and most would suggest a trivial sum – $10, say. Nope, he’d reply. The actual cost was $15m: that was what it had cost Intel to get into – and out of – the market for digital watches. And one of the lessons he drew from that was that his company should stay away from selling consumer goods.
Moore was world famous because of an observation he had made in the early days of the semiconductor industry that Intel once dominated. In 1965 he had noticed that the number of transistors on an integrated circuit (or chip) had been doubling every year, and he predicted that this would continue for years to come – a rate he later revised to a doubling roughly every two years. Inevitably, this became known as “Moore’s law”, as if it were a law of physics rather than just an empirical observation and an extrapolation into the future.
Even so, it turned out to be an accurate prediction of how the business of making silicon chips would evolve. Since transistor density is correlated with processing power, it meant that computing power doubled roughly every couple of years until about 2010, after which it began to level off, largely because of the physical limits on how many transistors can be fitted on to a tiny rectangle of silicon. (Not that this has prevented Apple from getting 19bn of them on to the A17 Pro chip inside my iPhone.)
But although Moore’s “law” was bound to run out of steam eventually, it shaped an entire industry and – more importantly – changed the way we thought about computing. In particular, it fostered a hubristic mindset: a confidence that if a problem could in principle be solved by computing, then even if today’s machines weren’t powerful enough for it, Moore’s law guaranteed that it would become soluble really soon.
As the ancient Greeks knew only too well, after hubris comes nemesis. In the computing world it arrived in the form of the computational demands of machine learning, which were orders of magnitude greater than those of conventional serial processing – computing in sequence, one thing at a time (albeit at astonishing speeds). In one of those happy accidents, there was already a part of the computer industry – gaming – that needed processors able to do many calculations simultaneously, or “in parallel”, so that fast-changing scenes could be rendered realistically. And one particular company, Nvidia, had become the prominent caterer to this esoteric requirement by providing what became known as graphics processing units (GPUs).
At some point Jensen Huang, the smart cookie who is Nvidia’s founder and chief executive, realised that his company had the technology that the burgeoning new field of machine learning (later rebranded as AI) needed, and he pivoted his entire company to focus on it. The rest, as they say, is history. Irrational exuberance about AI took over the tech industry, fuelling a gold rush in which Huang became the premier supplier of picks and shovels, and his company is now the second most valuable corporation on the planet, just behind Apple.
In 2018 Huang started brooding on the rate at which the computational power of GPUs was increasing. He noted that Nvidia’s GPUs were “25 times faster than five years ago”, whereas Moore’s law would have led you to expect only about a tenfold increase. GPU performance, in other words, was more than tripling every two years – significantly faster than Moore would have predicted.
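For readers who like to check the arithmetic, here is a rough back-of-envelope sketch (mine, not Huang’s or Nvidia’s) of where those figures come from: a 25-fold gain over five years works out at about 3.6x every two years, while the “tenfold” baseline corresponds to the popular 18-month-doubling version of Moore’s law.

```python
# Back-of-envelope check of the growth figures quoted above (illustrative only).

def factor(years: float, doubling_period_years: float) -> float:
    """Improvement factor after `years` if performance doubles every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

gpu_gain = 25   # Huang's figure: GPUs 25 times faster than five years earlier
period = 5      # years

# Implied GPU improvement per two-year window: 25 ** (2/5) ≈ 3.6x ("more than tripling")
print(f"GPU gain per two years: {gpu_gain ** (2 / period):.1f}x")

# Moore's-law baselines over the same five years
print(f"Doubling every two years:  {factor(period, 2.0):.1f}x")   # ≈ 5.7x
print(f"Doubling every 18 months:  {factor(period, 1.5):.1f}x")   # ≈ 10.1x – the 'tenfold' figure
```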
Inevitably, this empirical observation has now become “Huang’s law”. The comparison with Moore’s law is misleading, though, because Moore was referring just to individual chips, whereas Nvidia’s GPUs are dense clusters of numerous components with associated software, which makes them more like miniature supercomputers than single processors.
If, as seems likely, GPUs do become the basic building blocks of next-generation computers, then Huang’s prediction of another exponential acceleration in computing power – Moore’s law on steroids – will fuel a new wave of hubristic conviction that there is nothing that cannot be done with technology. Except of course the things that are really worth doing if humans are to survive into the next millennium. Those whom the gods wish to destroy, they first make complacent.
What I’ve been reading
Down with the boss
A timely essay on the problems of entrepreneurship by the political science professor Alex Gourevitch on the Law and Political Economy Project blog.
Double vision
An interesting Substack post by Dan Gardner wondering why American political discourse is so radically different from people’s daily lives.
Gun law
A remarkable essay by sociologist Kieran Healy on how reacting to a mass shooting has become one of the rituals in which US schoolchildren now have to participate.