Fifty years ago Gordon E. Moore, the co-founder of computer chip leviathan Intel, sat down at his typewriter and lit the blue touch paper that would ignite the information revolution.
Volume 38 of Electronics magazine, published on 19 April 1965, contained the dry but prophetic words that would enter the computing canon as Moore’s law.
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year [...] Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.
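Moore's "65,000" figure is straightforward doubling arithmetic. As a sketch (the 1959 starting point and single-component baseline are assumptions inferred from his reference to the planar transistor, not figures from the quote itself), yearly doubling over the 16 years to 1975 gives almost exactly his number:

```python
# Moore's 1965 extrapolation: component counts double every year.
# Assumed baseline: roughly one component in 1959, the planar
# transistor era Moore said he looked back to.
start_year, end_year = 1959, 1975

components = 2 ** (end_year - start_year)  # 16 doublings
print(components)  # 65536 -- close to Moore's "65,000" figure
```

The point of the sketch is how unforgiving the exponent is: shift the start year by one and the prediction is off by a factor of two.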
As Webopedia puts it:
[Moore's law is the] observation made in 1965 by Gordon Moore...that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. Moore predicted that this trend would continue for the foreseeable future. In subsequent years, the pace slowed down a bit, but data density has doubled approximately every 18 months, and this is the current definition of Moore's Law, which Moore himself has blessed.
Of course Moore’s law isn’t a law at all, at least not in the sense that we would normally use the word law to represent a scientific principle. It was an observation or, as he told IEEE Spectrum, a wild extrapolation:
At the time I wrote the article, I thought I was just showing a local trend ... I looked back to the beginning of the technology I considered fundamental — the planar transistor — and noticed that the [number of components] had about doubled every year. And I just did a wild extrapolation saying it's going to continue to double every year for the next 10 years.
Moore was asked by Electronics magazine to predict the future and he did what most of us do when we’re asked that: he looked at the past. He noticed that the number of components that could fit onto a silicon chip had doubled every year, and he used his expert judgement to predict that the trend could continue until 1975.
People who work with computers make predictions and educated guesses all the time.
Most of them aren’t as memorable as Thomas J. Watson’s “I think there is a world market for about five computers” or the quote that Bill Gates got lumbered with but probably didn’t say: “…640K ought to be enough for anyone”, but most of them are about as accurate.
Despite those disastrous mispredictions, IBM and Microsoft both did OK, and if Moore had been wrong, Intel would have been OK too.
Nobody would have cared, nobody would have noticed and nobody would have held it against him – we overlook the countless bad predictions and move on.
But Moore wasn’t wrong.
To begin with, he wasn’t wrong because he knew what he was talking about, or perhaps because he was lucky. Later on, he wasn’t wrong because we decided he wasn’t going to be.
In the end, Moore wasn’t wrong about chip development in the same way that Kennedy wasn’t wrong when he predicted that a man would land on the moon.
Both made predictions about things they understood better than the rest of us and could influence directly, and, after early and sustained success, both made millions of us believe them.
Once we decided that Moore’s law was a reliable prophecy, it became a simple, powerful, ambitious and self-fulfilling one for an entire industry, and it stayed that way for 50 years.
In 2010 Moore predicted that his law had two or three generations of chip development left in it, giving us up to 20 years before progress has to continue down another path.
In terms of size [of transistor] you can see that we're approaching the size of atoms which is a fundamental barrier, but it'll be two or three generations before we get that far ... We have another 10 to 20 years before we reach a fundamental limit. By then they'll be able to make bigger chips and have transistor budgets in the billions.
Fifteen more years of exponential improvements will give us another great leap in technological progress and it’ll probably give us some serious security headaches too.
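To put a rough number on "another great leap": assuming the popular 18-month doubling period (rather than Moore's original 12 months), 15 more years works out to about a thousandfold improvement:

```python
# How much improvement do 15 more years of Moore's law buy?
# Assumption: the commonly quoted 18-month doubling period.
years = 15
doublings = years / 1.5   # 10 doublings in 15 years

factor = 2 ** doublings
print(factor)  # 1024.0 -- roughly a thousandfold improvement
```

A thousandfold is only an order-of-magnitude sketch, but it frames the security concerns that follow.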
Every year our ability to crack passwords improves as the speed of computers improves in line with Moore’s law, but year after year we show little improvement in our ability to choose passwords. The gap between the passwords we choose and what’s crackable on unremarkable hardware is closing fast.
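The closing gap can be illustrated with some back-of-the-envelope arithmetic. All the numbers here are hypothetical, chosen only to show the shape of the problem: a password of fixed strength gets steadily cheaper to attack while the password itself never improves.

```python
# Illustration only: an 8-character lowercase password has a fixed
# keyspace of 26**8 candidates, while cracking hardware is assumed
# to start at 1 billion guesses/second and double every 18 months.
keyspace = 26 ** 8        # ~209 billion candidates, fixed forever
rate_today = 1e9          # assumed starting guess rate (guesses/sec)

for years in (0, 3, 6, 9):
    rate = rate_today * 2 ** (years / 1.5)
    seconds = keyspace / rate  # worst-case time to exhaust the space
    print(f"after {years} years: {seconds:.0f} seconds")
```

The keyspace term in the numerator never changes unless we choose better passwords; only the denominator grows, so the crack time shrinks exponentially.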
With a little more exponential improvement in transistor density surveillance could look very different in the near future.
As computer speed improves, our ability to store and process Big Data improves too. The progress of Moore’s law narrows the difference between the speed at which we do things and the speed at which those who gather data about us can crunch it all and figure out what we’re doing and thinking, or even what we’re about to do.
Moore’s law will continue to drive down the size and cost of devices like GPS chips and CCTV cameras until it’s possible to install one on every signpost, street lamp, traffic light, bridge and underpass. When every inch of road has an ANPR camera pointed at it, you won’t need a government chip in your car (but you might want a bicycle).
The wildest extrapolation of all, though, is the idea of the singularity – the point in the not-so-distant future beyond which predictions are impossible because an Artificial Intelligence has broken free of its creator and is wondering what to do with all those Homo sapiens.
Gordon E. Moore, Chemical Heritage Foundation [CC BY-SA 3.0], via Wikimedia Commons