👾 Econ 09 | AI, Energy, and Economic Efficiency

This is a series about the economics of AI. The economic question is closely tied to the ecological one - the environmental impact. Let’s dig deeper into these related themes of energy, efficiency, economy, and ecology, for they are, I’m sure you’ll agree, very important.

In conventional media, you’ve no doubt come across articles about how energy-intensive the data centers that run AI are. While the underlying concern is very much valid, I’d propose that such a view misses the overall context, for a crucial question remains: what is the corresponding resource consumption of the pre-AI or non-AI alternatives for the same output?

So first, what does AI cost? Specifically, what does Generative AI cost - in the form of large language models (LLMs) that produce human-like output, which has been the focus of our discussion?

There are two main processes, and corresponding costs, involved: first building these LLMs, and then running them for user access. We’ve discussed some of the details in my first series on AI Concepts (see here), but here’s a quick recap.

Building an LLM entails, as a one-off, feeding a neural net (an algorithm capable of ‘learning’ patterns) large amounts of text data - in the latest cases, practically the entire content of the Internet - and subjecting it to further training and reinforcement, including reinforcement learning from human feedback (RLHF). Reaching the scale of the latest LLMs, and the emergent capabilities that arise at that scale - which is what makes Gen AI viable - is a very energy-intensive process, with costs running into the millions, and reportedly hundreds of millions, of dollars for the latest frontier models.

And once such an LLM is ready, it’s deployed in a large data center, typically housing the equivalent of at least a few hundred thousand of the computers you have at home. Here again, compared with the cost of running, say, Office software on your local machine, the cost of inference (the LLM processing a request and producing an answer) is significantly higher. Related costs include cooling and maintenance, which at the scale of these data centers can also be substantial.

Within the current architecture of Generative AI, both of these costs are so high that only a handful of very large firms with billions of dollars in cash can attempt this at all. This is what articles in traditional media refer to when they discuss the high energy costs of AI. But, as I hinted, let’s put this in its wider context, because this by itself is not the whole picture.

So what is the cost of comparable human labor for the same productivity, the same output?

A few months ago, a video went viral of a PhD student who, while testing one of the latest LLMs, found that within an hour or so it was able to independently recreate the solution to his PhD problem - a solution that had taken him about two years to come up with. And the new solution, he admitted, was elegant, and it worked pretty much on the first go!

This is perhaps a more drastic scenario, but let’s take even the most mundane or conservative comparison of human labor versus AI. 

Some of the latest AI models have been compared, in capability and productivity, to a mid-level professional in fields such as engineering, medicine, or law. I will revisit this comparison with specific figures, but let’s take a cursory glance at the human resource investment behind such a professional:

  • Years of education and training from childhood to graduation

  • Corresponding physical infrastructure (schools, universities, offices)

  • Transportation and commuting

  • Daily sustenance and support systems, including work benefits and the like
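The tally above can be sketched as a back-of-envelope comparison. To be clear, every figure in this sketch is a hypothetical placeholder chosen purely for illustration - not real data - but the structure of the comparison (a lifetime of per-person investment versus a one-off cost amortized over many users) is the point:

```python
# Back-of-envelope comparison: lifetime investment behind one mid-level
# professional vs. an amortized per-user share of an LLM's costs.
# ALL figures are hypothetical placeholders, for illustration only.

human_costs = {
    "education_and_training": 300_000,   # hypothetical: ~20 years of schooling
    "physical_infrastructure": 100_000,  # hypothetical: schools, offices share
    "transport_and_commuting": 50_000,   # hypothetical
    "sustenance_and_benefits": 400_000,  # hypothetical
}

llm_training_cost = 100_000_000      # hypothetical one-off build cost
concurrent_users_served = 1_000_000  # hypothetical: software is non-rival
inference_cost_per_user = 500        # hypothetical serving share per user

human_total = sum(human_costs.values())
ai_total_per_user = (llm_training_cost / concurrent_users_served
                     + inference_cost_per_user)

print(f"Human professional (lifetime investment): ${human_total:,}")
print(f"AI, amortized per user served:            ${ai_total_per_user:,.0f}")
```

Change any of the placeholder numbers and the gap shifts, but the asymmetry remains: the human investment must be repeated for every single professional, while the AI’s largest cost is paid once and divided across everyone it serves.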

There are a few other factors to note, some of which we have covered in previous articles such as Econ 06. Software is non-rival: you can make multiple copies, say of an LLM, at negligible additional cost. What about the knowledge and experience of, say, our mid-level professional? As of today, it is not possible to clone one mid-level professional into many!
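The economics of non-rivalry can be made concrete with a tiny sketch: a one-off fixed cost spread over ever more copies, with a marginal copy cost of effectively zero. The fixed-cost figure here is again a hypothetical placeholder, not a real number:

```python
# Non-rival goods: the LLM's fixed build cost is spread over every copy,
# so average cost per copy falls toward the (near-zero) marginal copy cost.
# The fixed cost below is a hypothetical placeholder, not a real figure.

FIXED_BUILD_COST = 50_000_000  # hypothetical one-off training cost
MARGINAL_COPY_COST = 0         # copying software is effectively free

def average_cost_per_copy(n_copies: int) -> float:
    """Fixed cost amortized over n copies, plus marginal cost per copy."""
    return FIXED_BUILD_COST / n_copies + MARGINAL_COPY_COST

for n in (1, 1_000, 1_000_000):
    print(f"{n:>9,} copies -> ${average_cost_per_copy(n):,.2f} each")
```

No analogous function exists for the human professional: training the thousandth doctor costs roughly as much as training the first.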

Elsewhere we’ve touched upon the fact that AI doesn’t take sick leave (save for some small, usually scheduled and predictable, maintenance downtime), and it certainly won’t get drunk or have family issues that lead to distraction and reduced productivity.

An AI’s productivity won’t be hampered by falling in love or getting drunk!

Even without exact dollar figures against each of these costs, it’s not hard to see where I’m going with this. And this is based on the state of the technology today: Generative AI is now where flight was in the time of the Wright brothers. It’s undoubtedly going to improve, likely exponentially. Newer architectures could obviate the need for such high training and inference costs, and there may be other improvements we cannot even imagine. Meanwhile, it’s safe to say that the human skull, and thus the brain encased in it, are not going to grow, say, ten-fold by biological evolution to make humans that much smarter any time soon!

There’s a lot more to be said on all this, and I expect to return to some of it in future articles, but let me end by adding another dimension to this, perhaps another ‘e’ - ethical: 

AI is a democratizing force: by decreasing the cost of intelligence, as Andrew Ng has wisely pointed out, we can make the fruits of intelligence - advanced medical treatment, ambitious scientific research, recourse to law and justice - accessible to one and all, across the globe, in countries rich and poor. And isn’t that alone worth the cost of running a data center?

—

Subscribe to FF Daily for more content from Ash Stuart.

About the author

Ash Stuart

Engineer | Technologist | Hacker | Linguist | Polyglot | Wordsmith | Futuristic Historian | Nostalgic Futurist | Time-traveler
