
👾 Econ 07 | The Hardware Revolution Behind AI

In the previous article, we discussed the rapid advancement of software and the advantages that made it possible. However, the advances in hardware, from the earliest days right up to the present, are an equally important part of the AI story. Let's get into the details, as they have both technological and economic implications.

The British inventor and engineer Charles Babbage can be seen as the first person ever to attempt to build a computer - a general-purpose programmable device - aided by his compatriot, the mathematician and writer Ada Lovelace, who is regarded as the world's first programmer.

Ada Lovelace is regarded as the world's first programmer

However, this was around the mid-1800s, and despite all the advances of the Industrial Revolution in Victorian Britain, the physical technology simply wasn't ready: it was an entirely mechanical endeavor - electronics hadn't been invented yet! Babbage and Lovelace were a full century ahead of their time.

It was almost exactly a century later that technology arrived at a point of feasibility. There are many stars in this constellation, but I'll mention two stalwarts: the Hungarian-American mathematician and polymath John von Neumann, and the British mathematician and cryptanalyst Alan Turing.

Turing, the man who broke the German Enigma code for the Allied forces during WWII - perhaps turning the tide of the war - famously proposed the universal computing machine we now call the Turing machine, the yardstick behind 'Turing completeness'. To put it extremely simply: if the set of instructions a machine accepts is computationally universal, the device is, in essence, a real computer, as opposed to, for example, a single-purpose, non-programmable device such as a calculator.

A key player in the technological leap around this time is the transistor, the very device that makes it possible to manipulate electrical signals and to build circuit boards and microprocessors. This is when it all took off!

In the 1950s and 1960s, computers were massive machines built from vacuum tubes - a bulky predecessor to the transistor - and they took up all the space in a large room. Following the invention of the transistor, the Integrated Circuit (a chip, as we also call it) was the innovation that brought all these interconnected electronic components - transistors, along with capacitors, resistors and suchlike - together on a single device, typically made of silicon. Hence the use of that word in computing contexts, including the nickname 'Silicon Valley'.

Silicon Valley gets its name because Integrated Circuits are made of silicon

Advancements in how the IC was fabricated meant that, over time, it became possible to pack more and more transistors onto a single chip. Improvements in the design of these electronic components, which made them ever smaller, were an important part of this progress.

This led Gordon Moore, co-founder and later CEO of Intel, to observe that the number of transistors in an integrated circuit doubles roughly every two years. This prediction, which we call Moore's Law, has largely stood the test of time (with some minor revision) for several decades.
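
To get a feel for what doubling every two years compounds to, here is a quick back-of-the-envelope sketch in Python; the starting year and transistor count are illustrative assumptions (roughly the Intel 4004 era), not figures from this article.

# Moore's Law as simple compounding: a doubling every two years.
# The starting figures are illustrative assumptions, not measured data.
start_year, start_count = 1971, 2_300
for year in range(start_year, 2031, 10):
    doublings = (year - start_year) / 2
    print(year, f"~{start_count * 2 ** doublings:,.0f} transistors (projected)")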

Extending this concept even more broadly, the inventor and futurist Ray Kurzweil, in his proposed Law of Accelerating Returns, notes that the best achieved price-performance, measured in computations per second per constant 2023 dollar, improved from 0.001 in 1950 to 100,000,000,000 (100 billion) in 2023 - an increase of 10¹⁶ % (10 quadrillion percent)!
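
That percentage follows directly from the two endpoints quoted above; a quick sanity check in Python:

# Sanity-check the quoted jump in price-performance
# (computations per second per constant 2023 dollar).
old, new = 0.001, 100_000_000_000     # the 1950 and 2023 figures quoted above
factor = new / old                    # 1e14: a hundred-trillion-fold improvement
percent = (new - old) / old * 100     # ~1e16 %, i.e. ten quadrillion percent
print(f"{factor:.0e}x improvement, {percent:.0e} % increase")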

Among the many other breakthroughs, there are a couple worth touching upon here (I might take up the rest in a more detailed treatment in another article).

The main device for executing software has been the CPU (Central Processing Unit), implemented as a microprocessor on an integrated circuit. Going back to breakthroughs pioneered by the aforementioned John von Neumann, the CPU was designed to carry out serial processing - in essence, one software instruction at a time (later optimizations notwithstanding, the spirit remains the same). This contrasts with the parallel-processing designs that emerged in AI approaches such as neural nets.
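
To make the serial-versus-parallel contrast concrete, here is a toy sketch of my own in Python (it assumes the NumPy library, which the article doesn't mention): the same dot product computed one element at a time, and then as a single bulk operation of the kind parallel hardware is built to accelerate.

import numpy as np

rng = np.random.default_rng(0)
a, b = rng.random(100_000), rng.random(100_000)

# Serial spirit: one instruction at a time, element by element.
total = 0.0
for x, y in zip(a, b):
    total += x * y

# Parallel spirit: one bulk operation the hardware can spread across many units.
total_vec = a.dot(b)

print(np.isclose(total, total_vec))  # True: same result, very different execution style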

On a separate thread of innovation, not originally connected to AI, came the development - with Nvidia as a significant player - of the GPU (Graphics Processing Unit), designed, as the name hints, to render graphics on a computer screen, with a particular focus on gaming. In the 2010s, it was discovered that the GPU's massively parallel approach was eminently better suited to AI workloads than the serial-oriented CPU, leading to revolutionary advances in AI capabilities.
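
As a rough illustration - a sketch that assumes the PyTorch library and a CUDA-capable Nvidia GPU, neither of which this article prescribes - the matrix multiplication at the heart of neural networks can be shifted from the CPU to the GPU in a line or two:

import torch

n = 2048
a, b = torch.rand(n, n), torch.rand(n, n)

# On the CPU: the multiply runs on a handful of general-purpose cores.
c_cpu = a @ b

# On the GPU: the same multiply fans out across thousands of simpler cores.
if torch.cuda.is_available():
    c_gpu = a.cuda() @ b.cuda()
    torch.cuda.synchronize()  # wait for the asynchronous GPU work to finish
    print(torch.allclose(c_cpu, c_gpu.cpu(), rtol=1e-3, atol=1e-3))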

And finally, for now, there arose another phenomenon called virtualization: building a complete, functional computer using software. Called 'virtual machines', these software applications - which still run on underlying hardware like any other program - abstract away the details of that hardware (this is again a very simplistic outline, but it captures the essence).

This has led, over the last couple of decades, to cloud computing: with virtualization, it's possible to combine large clusters of hardware into large (or small) virtual machines and the related software setup, whose scale and behavior are essentially independent of the underlying hardware.

These advances combined to allow hardware capabilities to be ramped up to significantly larger scales, leading - after more than half a century of painstaking research and multiple AI winters - to AI finally becoming viable.

The rest is history, as one might say - but stay tuned, because there are many more interesting stories yet to be covered.

Subscribe to FF Daily to get the next article in this series delivered straight to your inbox.

About the author

Ash Stuart

Engineer | Technologist | Hacker | Linguist | Polyglot | Wordsmith | Futuristic Historian | Nostalgic Futurist | Time-traveler
