
Abstract

The dream of Artificial Intelligence has existed for over a century: a machine that can learn and understand concepts the way a human does, building skills like problem solving, decision making, and general intuition. While we are closer than ever to that dream of a man-made intelligence that can operate on those levels, we are still years (at least) away from reaching it. However, we are still living with, and paying for, the investment to create newer and better AI systems, whether we like it or not. This article goes over what the costs of AI are, and how we as a society can use AI well to offset some of the massive cost that we have paid and are still paying. So, what is AI anyway?

What is AI?

The definition of AI is rather muddled, and the current landscape of the tech industry trying to integrate AI into everything is certainly not helping. Before we really begin talking about ‘AI’, we need to understand what it is that we are actually talking about. Wikipedia defines Artificial Intelligence as follows:

Artificial intelligence (AI) is the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making.

This definition is as good a starting point as any. It outlines the dream of what AI might become one day, but as many readers might catch, AI is not currently capable of any of the things listed in that definition on its own. Rather, it is able to take data and find patterns that emulate things like reasoning, problem-solving, and perception in the manner a human might express the same ideas. This is because the popular AI systems of today tend to be Large Language Models (LLMs). Great…so, what is an LLM? Again, referencing Wikipedia:

A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) and provide the core capabilities of modern chatbots. LLMs can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on.

This definition is a bit more complicated, but it essentially says that LLMs are designed to understand and generate text that emulates human writing. They can also be fine-tuned, or guided through prompt engineering, to tailor their results to a specific task. The last key point is that they inherit the biases and inaccuracies of the data they were trained on.
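To make "prompt engineering" a little more concrete, here is a minimal sketch. It assumes the OpenAI Python SDK purely for illustration (any LLM API would look similar), and the model name and prompt text are placeholders rather than recommendations:

    # Minimal sketch of prompt engineering, assuming the OpenAI Python SDK.
    # The model name and prompt text are placeholders, not recommendations.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    # The "system" message tailors the model's behavior to a specific task;
    # changing it changes the style and focus of the output without retraining anything.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a terse technical editor. State your assumptions."},
            {"role": "user", "content": "Summarize what a large language model is in two sentences."},
        ],
    )
    print(response.choices[0].message.content)

Swapping only the system message is enough to turn the same model into a different "tool", which is why prompt engineering is so widely used as a cheap alternative to fine-tuning.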

The Cost of AI

Over the past few years, as AI has risen in prominence, the costs of these AI systems have grown, and not just monetarily. A massive amount of time, energy, and raw material is being used for AI and AI research, and the demand is still increasing under the guise of creating better AI. These costs are ordered by a small group of people, but everyone gets to pay them. Below are three examples of non-monetary costs that society gets to pay to receive the “benefits” of modern and future AI systems.

Computational Cost

This one sounds a bit silly; of course AI uses computation, but does that actually cost anything? Yes: there is the obvious cost of the computational power required to create and run the models in data centers, but also the added resources required to serve the websites being scraped by AI systems, and the cost to consumer devices trying to use online resources. This seems like a trivial cost to consider: what is the cost of visiting or serving a resource online? Well, that depends on what the resource is doing, how much processing needs to be done, and where that processing is occurring. Small static HTML websites load quickly and are not difficult to serve, since it is just throwing text files between machines. Something like a WordPress site, however, is much more complex and requires far more processing and resource loading. That takes time, both for the clients visiting the website and for the server trying to serve it. AI scrapers have become so prevalent, and so egregious, that they are starting to cause major problems even for well-established, large-scale websites like Wikipedia. As much fun as it would be for me to write out an explanation of what is happening, this episode of 2.5 Admins will likely explain it better than I would, especially in written format.

Environmental Cost

Another major problem with AI is the environmental cost of creating, training, and running the models; the US Department of Energy has released a report regarding the increasing energy demands coming from data centers. To be fair, the increased energy demand from data centers is not entirely attributable to AI; cryptocurrency mining operations are still around, consuming massive amounts of energy to run GPUs, and even the ‘normal’ things we do on the Internet, such as email and web browsing, require some energy. However, AI, and specifically the training of models, can be held responsible for a large portion of the increased demand. The specifics of how AI training works are well outside my expertise, let alone the scope of this article, but the process relies heavily on parallel processing, which is something GPUs do quite well at the expense of drawing more power. With companies constantly training and creating new models, this demands far more power from data centers than previous growth metrics indicated. I am going to leave some more resources on how AI training actually works at the bottom of this article for anyone who is interested. It is an interesting, though complex, topic that provides a lot of insight into why AI behaves the way it does.
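To give a rough sense of why GPUs are involved at all, here is an illustrative sketch, not real training code: the core operation in training a neural network is large matrix multiplication, repeated over and over. The sizes below are made up for the example.

    # Illustrative sketch (not real training code): the workhorse operation in
    # training a neural network is large matrix multiplication.
    import numpy as np

    batch = np.random.rand(512, 4096)      # a batch of inputs (hypothetical sizes)
    weights = np.random.rand(4096, 4096)   # one layer's parameters

    # Each multiply-add in this product is independent of the others, which is
    # why GPUs, built for massive parallelism, handle it so well -- and why the
    # power draw adds up when this is repeated across many layers, many batches,
    # and many GPUs for weeks at a time.
    activations = batch @ weights
    print(activations.shape)  # (512, 4096)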

Economic Cost

Economic cost is one of the most obvious complaints when criticizing the industry that has formed around AI; billions of dollars are being spent to create what are now called “agents” that marginally increase quality of life for…someone? There are obviously people who use AI for everything from email to writing code; the worry, however, is that AI is going to replace entire jobs that a human could otherwise do. This worry is even more substantiated in areas such as creative work (artists and writers) and entry-level positions; many people agree that AI is not quite ready for that yet, but there is a world in which management begins to think that AI is “good enough” to replace those people. For the affected group this would be devastating, but it also creates an interesting problem in the long term. Over time, the ability to hire a senior person would degrade simply because there would be no senior people left in a given field. Most people cannot move quickly from a junior position into a senior position with senior-level command of the field; the reason is usually not a lack of knowledge, but a lack of wisdom. That is something that will be extremely difficult for AI to emulate given how most models are currently trained: models only absorb knowledge, and they rarely understand the “whys” behind that knowledge, which is often what separates juniors from seniors. This has not been a problem with the traditional career pipeline, because people who are good stick around, learn the wisdom of those who came before them, and improve things in their own right. That does not happen with AI, at least not with the current implementations we are using. Once the junior positions are gone, the senior positions are on a ticking clock; and when that knowledge is gone, it will be gone forever.

Living with AI

Unfortunately, the average person has very little control over whether this investment in AI is made; that is something big tech investors and CEOs decide. So, what can the average person do to responsibly get something back for the investment that they will ultimately be paying for? That question is not particularly easy to answer in generalities because people naturally have different interests, desires, and concerns. Different people also value different qualities of output on the various topics they may use an AI for, and there is not really a consensus on what ‘responsible’ AI might look like. Even if there were, how would we verify it? It would end up being very similar to companies saying “we respect your privacy”…

Given that the costs of AI have largely been pre-paid, what are some uses of AI that would not be considered inherently harmful, or at least would be as minimally harmful as possible? Ultimately, I think it comes down to saving time so that a human can spend it on something they actually care about. For example, when writing for my blog, one of the least fun tasks is going out and finding resources on whatever topic I am covering; this is especially true for more niche topics where documentation is difficult to find. At times it would be extremely helpful to have an AI go out and find resources for those niche topics rather than me spending hours fruitlessly digging around the Internet for documentation. I can then read those resources myself, as well as chase down any tangential material mentioned in the documentation the AI found. Another thing AI can be useful for is brute-forcing new ideas; we have seen this recently with Google’s AlphaFold, where AI was able to assist with major research problems such as protein folding. The major point to remember when using AI is that it does hallucinate, and nothing can be fully trusted until it is verified, which is how research should be conducted anyway. Blindly trusting an AI will often run you into problems, just as blindly trusting the Internet will.
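Since I keep coming back to verification, here is a rough sketch of the most mechanical first step: checking that the sources an AI hands back actually exist. The URLs below are hypothetical placeholders, and a link resolving says nothing about whether its content actually supports a claim; that part still requires a human to read it.

    # Sketch: given a list of sources an AI suggested (hard-coded placeholders
    # here), confirm that each URL actually resolves before reading and citing
    # it. This quickly filters out hallucinated references, but it does NOT
    # verify that the content supports any particular claim.
    import requests

    suggested_sources = [
        "https://en.wikipedia.org/wiki/Large_language_model",
        "https://example.com/a-paper-the-ai-may-have-invented",
    ]

    for url in suggested_sources:
        try:
            resp = requests.head(url, allow_redirects=True, timeout=10)
            status = resp.status_code
        except requests.RequestException as err:
            status = f"error: {err}"
        print(f"{url} -> {status}")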

More Resources

I am not an expert on the topic of AI, so I want to give the reader an opportunity to learn more on their own about how AI works, along with some other blog posts from people who specialize more in this area:

Thanks

I want to thank Starbursts for assisting me with this topic and for pointing out that I needed to revisit the presentation of this article.