Transistors are nothing short of a miracle. Phil Rhodes recently looked at how they work. Now, let's see where they're going to take us.
I feel I have a special connection with technology. It might be because I was born around the start of human space exploration. My dad was an aerospace materials scientist, and he encouraged me to follow the space race in microscopic detail. The digital displays on the Lunar Module's primitive computer transfixed me. We're used to seeing the number 4 on seven-segment numeric displays now, but back then it looked like a character from an alien alphabet.
Valves were still the default active device in electronics a mere decade before that moon shot. You can make a valve act as a switch: even though these thermionic devices are about as analogue as you can get, it's easy to make them transition abruptly when a voltage crosses a predetermined threshold. You can make a computer from valves, but they're big, slow, hot and power-hungry. They certainly don't scale well.
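To make that idea concrete, here's a toy sketch in Python of a threshold switch and a NAND gate modelled on top of it. NAND is one of the universal gates from which, in principle, any digital computer can be assembled. The threshold value and the model itself are illustrative assumptions, not a circuit design.

```python
# A toy model of a threshold-switching device, valve or transistor:
# it conducts only once its input rises above a set threshold.
# The 0.5 threshold is an arbitrary, illustrative choice.

def switch(level, threshold=0.5):
    """Abrupt transition: 'on' above the threshold, 'off' below it."""
    return 1 if level > threshold else 0

def nand(a, b):
    """A NAND gate modelled on top of the switch. NAND is universal:
    any logic function can, in principle, be built from it."""
    return switch(1 - (a * b))

for a in (0, 1):
    for b in (0, 1):
        print(f"NAND({a}, {b}) = {nand(a, b)}")
# Prints 1, 1, 1, 0 - and from here, adders, memory and whole
# computers follow.
```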
So it's fortunate that transistors arrived on the scene. Early examples were much smaller than valves but big enough to easily solder into simple circuits like transistor radios and amplifiers.
Integrated circuits shrank transistors dramatically, and the trend continued from the first example in 1959. Just twelve years later, in 1971, Intel announced the 4004, a four-bit processor that was ideal for pocket calculators. It was shortly followed by the 8-bit Intel 8008, although Texas Instruments' TMX 1795 has a reasonable claim to being first.
In the 70s and 80s, it was hard to believe that integrated circuits could contain so many transistors, especially with valves such a recent memory. So it would have seemed not just unbelievable but certifiably insane to suggest that, a mere 50 years later, ICs would contain tens of millions of times as many semiconductor devices.
And it's because of that astonishing progress that we live in a completely different world. Without that almost unremitting rate of change, we wouldn't have the internet, AI would be a dodgy science fiction concept, and we would - without any doubt whatsoever - be shooting films on, you know, film.
The world is based upon tiny switches
Let's go back to the notion that our entire electronic world is based on tiny switches. It quickly stops making intuitive sense when you zoom in on the idea. These switches - transistors - are the devices that allow us to shoot breathtaking digital images. But how can a bunch of on/off devices capture an image of, say, a rose, or a landscape, or a staggeringly lifelike portrait?
Many of us are familiar with analogue-to-digital conversion, sampling, quantisation and the complementary reverse processes needed to make digital images visible in the real world. Invented decades ago, the process is now so refined that, done well, it's as good as any analogue rendition.
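As a rough illustration of sampling and quantisation, here's a minimal Python sketch. The 1 kHz sine wave, 48 kHz sample rate and 8-bit resolution are all arbitrary choices for the example, not a statement about any particular standard.

```python
import math

SAMPLE_RATE = 48_000   # samples per second (an illustrative choice)
BITS = 8               # quantiser resolution
LEVELS = 2 ** BITS     # 256 discrete levels

def analogue_signal(t):
    """A stand-in for a continuous real-world signal: a 1 kHz sine."""
    return math.sin(2 * math.pi * 1_000 * t)

def quantise(value):
    """Map a value in the range [-1, 1] onto one of LEVELS integer codes."""
    return round((value + 1) / 2 * (LEVELS - 1))

# Sampling: measure the signal at regular instants; quantisation turns
# each measurement into a plain integer - something switches can store.
samples = [quantise(analogue_signal(n / SAMPLE_RATE)) for n in range(8)]
print(samples)  # the waveform is now just a short list of integers
```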
More transistors on a chip meant more powerful processing. It's roughly analogous to staffing an office: small companies can function with ten or twelve people, while giant global organisations need tens or hundreds of thousands. Not only can you do more with more, but what you can do becomes increasingly abstract. Let me break that down.
With more transistors, you can do more maths. Maths is at the core of digital signal processing - digital audio and video, for example.
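For a flavour of that maths, here's one of the simplest operations in digital signal processing: a three-point moving-average filter, a crude low-pass filter. The input numbers are invented purely for illustration.

```python
def moving_average(samples, width=3):
    """Each output value is the mean of `width` neighbouring inputs -
    the kind of sum-and-divide arithmetic DSP performs millions of
    times per second on audio samples or pixel values."""
    return [
        sum(samples[i:i + width]) / width
        for i in range(len(samples) - width + 1)
    ]

noisy = [0.0, 0.9, 0.2, 1.1, 0.1, 1.0, 0.0]
print(moving_average(noisy))  # a smoother version of the input
```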
It's hard to think of a beautiful image or a symphony as data and maths, but we're on the verge of far greater levels of abstraction, courtesy of even more transistors. AI is poised to become a prominent - and eventually dominant - factor in our everyday lives. It will be a mainstay of our civilisation. And it will be able to do this because of the number of transistors available to us.
The numbers are truly staggering. The latest Apple Silicon chips have over fifty billion transistors. A computer with that many "switches" made from valves would cover an entire country and drain the national grid. And yet we have these chips in our laptops. This vast progress has taken only sixty years. It is impossible to predict what the next sixty years will bring, and, alarmingly, it's only slightly easier to guess where we'll be just ten years from now. Next year? Who knows.
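That valve comparison isn't pure rhetoric. ENIAC's widely published figures - roughly 17,468 valves, about 167 square metres of floor space and around 150 kW of power - allow a crude back-of-envelope scaling. It ignores wiring, cooling and everything else, so treat it as a thought experiment rather than engineering.

```python
# Crude linear scaling of ENIAC's published figures to fifty billion
# valves. A loose thought experiment, not an engineering estimate.

ENIAC_VALVES = 17_468
ENIAC_AREA_M2 = 167        # roughly 1,800 square feet
ENIAC_POWER_KW = 150

scale = 50e9 / ENIAC_VALVES
area_km2 = ENIAC_AREA_M2 * scale / 1e6     # m^2 to km^2
power_gw = ENIAC_POWER_KW * scale / 1e6    # kW to GW

print(f"Area: ~{area_km2:,.0f} km^2")      # hundreds of square kilometres
print(f"Power: ~{power_gw:,.0f} GW")       # several national grids' worth
```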
We are starting to see the effects of technology moving so fast that it makes everything a blur. It's like looking out of the window of a train travelling at 125 mph: the smaller stations flash past before you can even read their names.
Spontaneous AI
One example made me sit bolt upright as I read it. The October 2021 edition of New Scientist magazine carried an article by Mordechai Rorvig reporting that new capabilities start to emerge merely by increasing the scale of AI computing platforms. The magazine cited the case of a natural-language-processing AI system that had spontaneously taught itself to do arithmetic. It was built to translate between pairs of languages, but it somehow worked out how to solve equations as well. No one trained it or taught it to do this; it taught itself. Scale itself, not hardware specialisation, led to this surprising development.
What this suggests is a phenomenon that will make it even harder to predict the future. That phenomenon is the increasing power of software. Even if hardware stopped improving right now, which, despite the apparent demise of Moore's law, it won't, progress would continue at lightning speed because of the self-improving nature of AI software.
Which leaves us in a curious, perhaps unique position. Advanced technology has the potential to save the planet and all of us who live here, but it is also on the verge of running out of control. The transistor has moved our civilisation faster and further within a single lifespan than anything else in history. What happens next will be the biggest test of imagination, kindness and self-restraint that humanity has ever faced.