“AI Safety” is a Purposeful Distraction


A kilometre away the Ministry of Truth, his place of work, towered vast and white above the grimy landscape…

The Ministry of Truth—Minitrue, in Newspeak—was startlingly different from any other object in sight. It was an enormous pyramidal structure of glittering white concrete, soaring up, terrace after terrace, 300 metres into the air. From where Winston stood it was just possible to read, picked out on its white face in elegant lettering, the ~~three~~ four slogans of the Party: 

WAR IS PEACE 
FREEDOM IS SLAVERY
IGNORANCE IS STRENGTH
MACHINE LEARNING IS <INSERT SCI-FI AI CHARACTER>

George Orwell’s 1984 (note: I may have taken some liberties with the original passage)

Perhaps many people in the tech world will remember that “AI” was not always used the way it’s used now. 

In fact, I still remember that feeling of annoyance I had when “AI” was at its peak buzzword level, around 2016 or so. Every new startup that went beyond the most basic if-statements and for-loops was an “AI” startup. And every new breakthrough in machine learning was first and foremost a breakthrough in “AI.” Being constantly bombarded by this new lexical usage, I would always think to myself: “The technology they’re using is machine learning (or just plain code). Why are we suddenly calling it AI?”

If it is 3:02 PM and someone asks me “what time is it?” I’ll probably say “3” or “3 o’clock.” On some rare occasions, I may say “3:02.” But I’ll never say “the afternoon.” In other words, there’s a reasonable, productive level of granularity to speak at, which we used to have in the case of machine learning and other topics in AI. But this was being slowly distorted.

It’s easy to chalk this distortion up to “marketing.” Which is true. And certainly much of this marketing was straightforward and not exactly “Orwellian.” For example, many startups used the banner of Artificial Intelligence as a way to attract investment and employees, and to give an increased feel of technological sexiness. (I interned at one startup, part of the TechStars Boulder incubator, that perfectly exemplified the mismatch between marketed technological sexiness and actual technological sexiness.)

However, when we look at the big players in the ML space, such as Google (including DeepMind) and Facebook (aka “Meta”), we see that their role and purpose in this “marketing” may not have been so straightforward. There may have been something important that they were trying to hide, and which they are still trying to hide.


“AI”: Before and After

But first, let’s consider the different usages and meanings of “AI” or “Artificial Intelligence,” and look at how each of these may or may not have shifted over the last decade.

The term “AI” is used in two major senses:

  1. As a field of study and technological tooling
  2. As a type of fictional character, typically in science-fiction movies, TV shows, or books

Each and every one of us ties the term “AI” to both of these concepts. This was true before and this is true now. Of course, it’s also important to note: each of us, while using the same term, has a different conceptualization of it. For example, in the latter sense (of sci-fi characters), our conceptualization will heavily depend on what movies, TV shows, and/or books we’ve personally experienced, and which left lasting impressions. And certainly, technologists may have a very different conceptualization of “AI” as a field of study and technological tooling than, say, marketers, product managers, or CEOs.

Also, this may be going off on too much of a tangent, but when considering fictional AI characters, we should also consider what role they play within those stories. One role is literal: a sense of technological possibility that may serve to inspire our imagination. But another, often even more important role (depending on the work of fiction), is as a narrative tool. As a narrative tool, these characters, which are typically acted or voiced by people, are really human-like in many ways but deficiently human-like in others. And this serves to illustrate what it means to be truly human, or to show which traits of people (exemplified by the real human characters in the story) are admirable or perhaps not-so-admirable.

Anyways, as far as I know, this use of “AI” to refer to fictional characters has not changed much in the last decade. But it’s important to discuss because this second sense is crucial to the “marketing” strategy of the major corporate users of machine learning.

What has changed (or where our usage has changed) is around this first sense: “AI” as a field of study and technological tooling.

Of course, throughout the last decade we have also had great increases in the power of machine learning technology, so it’s important to distinguish how much of the shift may be due to pure technological development and how much may be due to marketing and corporate incentives. So, with this in mind, let me first briefly show that a shift has happened, which could be due to any number of reasons, and then show why corporate incentives are likely the main culprit behind it.

To show just one example of the shift, consider the syllabus from Carnegie Mellon University’s class titled “Artificial Intelligence” from the Fall of 2014.

In that syllabus, there are many topics in learning, but there are many other topics, such as search, game theory, and even the study of human computation, which is more a topic of science than of technological tooling. At the same time, in the Fall of 2014, there was an additional, separate class titled “Machine Learning.”

Now, in 2022, there is still a class titled “Machine Learning,” but there is no class titled “Artificial Intelligence.” Not to say this in itself is any kind of problem (and CMU, luckily, remains accurate in its usage of terminology). It merely shows, to some extent, the shift. And now we can see that when we refer to “AI,” we are very rarely referring to the topics of search, game theory, or human computation. We are almost universally referring to learning.

Which raises the question: why not just use the term “ML”? Its acronym is still two letters. And in fact, its full name, “Machine Learning,” is shorter than “Artificial Intelligence” whilst also being more accurate. Is our usage of “AI,” then, just to make the tech sound sexier? Or is there some deeper reason?


Why Corporations Would Rather You Think of “AI” than “ML”

There is a clear motive for why corporations such as Google and Facebook would rather we think of “AI” than “ML.” It is that “Machine Learning,” as a term, gives us a better sense of what’s actually going on.

Specifically, any ML system or application has four major components (made concrete in the sketch after this list):

  1. The person or organization who is using the ML
  2. The purpose or application for which it is being used
  3. The model that is being learned (and which will be used to make predictions for the particular application)
  4. The data, most often user data (i.e., human behavioral data), that is used to learn or “train” the model
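To make these four components concrete, here is a minimal, hypothetical sketch in Python. Every name and number in it is invented for illustration (no real company or dataset is being described); the point is only to show where the organization, the purpose, the model, and the behavioral data each sit.

```python
# A hypothetical sketch of the four components of any ML system.
# All names and data below are invented for illustration.
from sklearn.linear_model import LogisticRegression

# 1. The person or organization who is using the ML
organization = "ExampleCorp"

# 2. The purpose or application for which it is being used
purpose = "predict which posts keep a user scrolling"

# 3. The model that is being learned
model = LogisticRegression()

# 4. The data: human behavioral data gathered from users
#    (toy features: [hours on app per day, days since signup])
user_features = [[0.5, 30.0], [4.0, 7.0], [1.0, 90.0], [6.0, 2.0]]
kept_scrolling = [0, 1, 0, 1]  # observed engagement labels

# The model is learned ("trained") from the behavioral data...
model.fit(user_features, kept_scrolling)

# ...and then pointed at the organization's purpose, e.g. ranking
# content by each user's predicted probability of staying engaged.
print(organization, "|", purpose)
print(model.predict_proba([[3.0, 14.0]]))
```

Notice that nothing in this sketch resembles a self-directed character: the model is a statistical artifact, and the purpose and the data are chosen entirely by the organization.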

If we think of these four components in the context of a particular application of ML, say powering the Facebook or Instagram news feed, or powering Facebook’s advertising system, we may immediately be struck by several questions: Is this a good use of technology? What data do they have on me and my family? Does this organization really have our best interests at heart?

If, however, we speak in terms of “AI,” a very different effect can take hold. For one, “AI” does not imply the use of user data, so we can be distracted from that fact. And secondly, “AI” in its fictional-character sense most often refers to fully self-directed characters. So it distracts from the fact that, well, some organization is actually targeting this software toward a particular purpose. A purpose which is always aimed at the growth of the company, but which is not always in the best interest of the general public.

Furthermore, this allows (even if only in a subtle way) for a delusion to form even among researchers and engineers in the field of machine learning itself. Specifically, it can give the idea that the field of machine learning is on a quest to create some character of fiction. Which kind of character is of course never specified. So we’re left to perhaps imagine our favorite character.

Or, it may occasionally be hinted, by corporate marketers, that it’s one of the scary characters. This fear mongering is a classic propaganda technique. And it’s clear that it can, in many cases, benefit corporate interests. First of all, for the reasons we’ve mentioned: it distracts from the real four components of any machine learning system. But also, if this technology is scary and powerful, well, it must work pretty damn well. And thus the company that’s making it must be a good investment; it must be worth a lot because it owns that powerful technology.

For a clear example of such fear mongering by a corporate marketing mastermind, consider Elon Musk’s appearance on the Joe Rogan podcast, where he spoke of his “fears” of “AI.” And consider also the insane growth of Tesla stock, far detached from Tesla’s actual sales. Marketing, the manipulation of people’s ways of thinking and even terminology, can be quite profitable. I mean, Elon Musk has more wealth than the combined wealth of tens of millions of Americans. And of course, Elon Musk is a smart guy. He knows that what’s behind “AI” is actually “ML.” And he knows those four components of ML. His company is doing all of those things, after all.

ML, in its largest, most profitable use-cases, is always a tool wielded by corporations for corporate purposes. And these purposes have so far never involved the attempt to create a character resembling those of narrative fiction.

Now, ML may, in some cases, be used for the automation of certain human functions, and thus ML can be “human-like” in the sense that it is targeted towards mimicking human behavior on some specific tasks (given enough behavioral data, of course). But this purpose, it’s important to note, is very different from the purpose of AI as a fictional character. As we mentioned, AI as a narrative tool is used to highlight aspects of human nature. Automation, on the other hand, is used to replace humans in certain areas of the economy. Not to say that automation is about “killing jobs” or anything silly like that. Certainly not! But it does, at the very least, tend to increase wealth inequality (because we as people do not own these tools of automation, these means of production). Also, it’s worth taking a very careful look at what exactly ML companies are trying to automate. For example, is it just those tedious things we don’t want to do? Or does it include many things that we do want to do?


“AI Safety”: An Extra Layer of Distraction

Following up on our previous discussion, “AI Safety” as a field and a topic certainly sounds like fear mongering. This would make a lot of sense given the incentives involved. 

That’s of course not to say that there aren’t dangers in the use of machine learning technology. There are many very real dangers, which include, among other things, questions of fairness and bias. These are absolutely important societal problems that we all need to work together to solve, and which we should absolutely not ignore. 

But the real question is: are those dangers of ML with the model itself? Or are they more with those who wield the model? Perhaps with the human data that is captured to train such models? Or with the growth-at-all-costs incentives that investor-driven companies are tied to?

I mean, every day, TikTok is trying to improve its deep learning models, its data-capturing methods, and its metrics to make more young people “engaged” with its product. And it has been quite successful thus far, recently crossing 1 billion monthly active users, most of whom are under the age of 24.

And yet, in the meantime, I still don’t see anyone working on <INSERT FAVORITE SCI-FI AI CHARACTER>.

So we can only wonder: is the next “breakthrough” in “AI” going to lead us closer to our sci-fi dreams? (Dreams which, again, vary from person to person and have never been well-defined.) Or is it instead going to help TikTok, Google, and Facebook addict even more members of our future generations?

1 Comment

  1. dkdk01 says:

    Well said. Reinforcement learning and A/B testing are also forms of machine learning that have caused far more harm than any of the tools currently being hyped as AI (which are also promised to be on a path to AGI).

    These mathematical gadgets, like most other mathematical gadgets, are only good or bad insofar as they’re utilized. Using them to increase corporate revenue by making addictive digital products is obviously bad. Using them to map out protein structures is much more beneficial because in theory it is possible to use these new protein structures to create better therapeutic drugs.
