The Commoditization of Social Interaction and Other Progress

Societally speaking, the rise of social media has come with both many positive aspects and many negative aspects.

To name just a few on the positive side: being able to conveniently keep in touch with distant friends and family, having access to new, often valuable sources of information and entertainment, and having access to new places to speak up about important social or political issues.

Many legitimately progressive movements were made possible that might not have been possible otherwise, as were many wonderful “social experiments” like r/place.

To name just a few on the negative side (which are at least partly due to the popular platforms): doomscrolling, increased political divisiveness, decreased face time, and increased mental health troubles especially amongst teenage girls. 

Given this mixed bag of striking positives and striking negatives, it’s easy to either defend or attack these platforms. For example, the corporate owners of these platforms can easily point to the value they’re creating, whilst either ignoring the downsides or brushing them under the rug. Naysayers on the other side can point to any one of a number of pressing concerns and shout, arguably quite rationally, “Down with the whole system!”

But rather than entering into this endless back-and-forth, more interesting questions would be: Just how much negative do we need to keep the positive? Can we imagine a world that maintains these positive benefits while removing much of the negative? Or, if we trace the historical development of social media, can we identify any root causes that are more tied to the negative aspects, so we can perhaps stamp these out?

Well, this essay is an attempt to, at least in part, answer these questions. And concretely, it’s my view that there is a common thread to the specifically detrimental aspects, mirror images of which can be seen all throughout society. It’s that, over time and through a characteristic process, social media has led to the large-scale commoditization of social interaction.

What do I mean by this? Well, let’s start from the beginning, with the Internet.

1. The Internet

The Internet itself is really just a communication protocol. It’s a way for Alice to send messages to Bob, and for Bob to send messages to Alice. Like a telephone, only more generalized. 

The power of the Internet—an unprecedented ability to communicate over distances—is precisely its limitation if we’re speaking of how the Internet affects social interaction: distance. We all know what “long distance” means and the limitations it entails. But at the same time, distance still allows for much legitimate communication because, if nothing else, we can communicate with language, a human medium that is unbounded in its possibilities for expression.

So if we consider being “social” as a human process of experiencing and expressing our experience to another person who is also experiencing and expressing their experience, we could say the Internet itself, barring the limitations of physical separation, allows for legitimate social interaction.

In turn, early online media platforms too enabled legitimate social interaction. These early platforms mostly aimed to centralize the Internet’s communication capabilities, to allow groups of people to communicate. One example, predating even the World Wide Web, was Usenet bulletin boards. But even Facebook at its onset was quite social. Superficial, yes, but social in the sense that it enabled and incentivized expression via the holistic human mediums of language and photography, and didn’t incentivize much else.

But alas, such natural communication could not dominate forever. And with the invention of the Like Button, we saw the invention of what would become the first massively popular digital social commodity.

2. Social Commodities

A Like, like all commodities, requires some labor to produce. In this case, it’s the very simple labor of pressing a button and possibly reading a message or consuming a piece of content. A Like is also useful to its recipient, since social acknowledgement (even such simple social acknowledgement) is always positively received. Furthermore, Likes are uniform: they always express a fixed quantity of acknowledgement. We can certainly try to “read into” a particular Like knowing who it came from, but the Like itself is a uniform item of exchange. 

All this to say that a Like carries all the characteristics of a commodity in its usual economic sense.

The Like, though, was just the first; we went on to create many more: the Love, the Wow, the Share, etc. For example, Slack, the workplace communication tool, created a whole smorgasbord of social commodities in its Emoji feature. We can send and receive a Rocket Ship, a Banana, a Weird Meme Face. We can even create our own.

Importantly, though, giving and receiving social commodities is quite different from normal language communication. To name just a few differences, it’s always quantified, always uniform, and always unstructured. A collection of social commodities is just that: a collection. There is, for example, no composable grammar, and this is true no matter how many varieties we have to trade. 

Language, in its textual form, may appear similar, since it too is transmitting discrete symbols (and certainly language also has its limitations). However, language, by its structure, flexibility, and shared understanding, is able to communicate magnitudes more by what is not said than what is said. By the space between the lines. 

Being able to respond with a single Like or Emoji is efficient and adds color, but it gives us much less general capability for expressing our thoughts or experience. And for this reason, we could say that the transacting of social commodities is inherently less social than communication via language (or other structured, unbounded mediums).

But let’s not get ahead of ourselves. These commodities themselves are just extra ways to communicate. (Well, mostly, since screen real estate and convenience do play a role, even at the onset.) We can still write messages or send photos, but we can also give these simple digital goods. And it remains our individual freedom to choose how to communicate. Furthermore, it’s worth noting that the initial decision to create social commodities—for example, with the Like Button—was almost certainly a user-centric decision. Users, overwhelmed with all their online acquaintances and unable to keep up, needed a streamlined method. An ability to send a quick gift or token of appreciation.

Nevertheless, although social commodities were introduced as mere extras, they did not stay that way.

3. Social Economies

Usenet bulletin boards are called “bulletin boards” for a reason. They’re like a digital equivalent to the town square bulletin board. A place to ephemerally gather and chat which also has a permanent wall for holding messages and advertisements.

When a town square introduces the trading of goods, though, it becomes more than just a gathering place. It becomes a market, an economy. And the evolution of the social economies which resulted from the introduction of social commodities can be seen as analogous to the evolution of normal economies. For example, one could read Marx’s Capital and replace every instance of “commodity” with “social commodity” (e.g. Likes and Followers) and “money” with “exposure” or “attention” and have a pretty good idea of how social media has developed and many of the pathologies it has produced. But let’s dive deeper into what I mean by this.

In these social economies, money can be thought of as exposure, otherwise known as “eyeballs” or “attention”. By exposure, I mean other people seeing content or messages that one posts. And the reason this is money is because it can be used to buy social commodities. The analogy to money is not perfect, though, because the mechanics differ from platform to platform. For example, on some platforms, such as Facebook and Twitter, social commodities can also be sold to gain exposure (because the poster will see that you reacted to their content); however, on platforms such as YouTube or Reddit, the giving of social commodities is an anonymous act of patronage. Even in the cases where the analogy is looser, though, it is still useful.

First, we should note a few important traits of this social money that is exposure:

  1. Every commodity transaction corresponds to one instance of exposure, so all else being equal, more transactions implies more total exposure.

  2. If the corporate owner of the social media platform is monetized via advertisements, then all else being equal, more exposure implies more money (real money) for the corporation.

  3. If the owner of a social media account monetizes their account via advertisements (i.e. by making some of their posts or some parts of their posts advertisements), then all else being equal, more exposure implies more real money for the account owner. 

  4. Ditto for an account owner if, rather than seeking money, they are seeking influence.

There is one other important trait of exposure, which is the fact that, again all else being equal, exposure lends itself to gaining more exposure. This happens both through people (followers of an account) who may share the content outside of the scope of its original exposure, and also through algorithms, which on the popular platforms do a similar kind of sharing, most commonly promoting already popular content. 

Thus, there is a rich-get-richer aspect to exposure, and especially to Follower counts, which guarantee exposure over time. In other words, so long as we have a pool of users who are still following new accounts—that is, a pool of people who are willing to give more of their attention—guaranteed exposure has the ability to expand itself. And thus we have not only social money but social capital: social money used not merely to obtain social commodities but to gain more social money.

Of course, when I say “all else being equal” I am hiding the fact that the accounts that become popular absolutely need to produce some kind of interesting content, content that users will want to pay for or be a patron of with their digital goods and their attention. But independent of, or conditioned on, content quality, there is a value to exposure itself.

4. Incentives

As individuals we have the freedom to choose what we focus on, what our goals are. But at the same time, on a population scale, there are dangers to the above incentive structure. Actually, “dangers” is putting it too lightly: it’s an incentive structure where the financial success of both the corporate owner and the account owners is directly tied to the amount of human attention they can hoard. And where the quantity of attention can be increased by increasing the transaction volume of social commodities, whether or not meaningful communication comes along for the ride (and certainly whether or not face-to-face interaction comes along for the ride).

To describe the problem of flawed incentives, let me quote the great computer scientist Alan Kay (a quote I got from the excellent book by Wendy Liu):

Computing is terrible. People think — falsely — that there’s been something like Darwinian processes generating the present. So therefore what we have now must be better than anything that had been done before. And they don’t realize that Darwinian processes, as any biologist will tell you, have nothing to do with optimization. They have to do with fitness. If you have a stupid environment you are going to get a stupid fit.

5. What Can Go Wrong

Before getting into mechanisms, let’s discuss some of the dangers of participating in a social economy with the above incentives—that is, a social economy that rewards exposure (i.e. attracting attention) and even rewards it such that exposure naturally gains more exposure.

The first danger is simply in how we use our time and what information we consume. Every act of consumption is an act of self-education. It changes us, ever so slightly, and this influences our future consumption choices and therefore our future self-education, and so on. So there is a concrete concern on both an individual level and a society level: the more we relinquish control over what we consume, or don’t question the systematic forces fueling what we consume, the more we are giving ourselves to a system of massive, unknown reeducation. And given the aforementioned incentive structure, should we really trust that this system is acting in our best interest?

Secondly, there is a danger of manipulative actors in such a system. One of these potential actors is the product owner itself; account owners are another. Again, we all have freedom, but none of us are perfect. And clever actors—marketers, propagandists, etc.—can occasionally trick us into listening to them against our best interests. And they can especially prey on specific, vulnerable subgroups of people.

Importantly, though, a digital platform may or may not enable people to combat bad actors within the digital world itself (especially not if the bad actor is the owner). In the physical world, we can, if needed, gather together to rein in wrongdoers. But in the digital world, we are fully limited by the design of the product.

Thirdly, and perhaps most importantly, is the danger of seeking social commodities as ends in themselves. Let’s discuss this in more detail.

6. Living in One Dimension

With the incentive structure discussed in section 3, there is an incentive to pursue Follower counts and View counts independent of, or in addition to, factors such as communication and content quality. This incentive isn’t in terms of what’s necessarily best for us as human beings—in terms of the intrinsic joys of creation and connection—but in terms of social influence, or money (since, as we mentioned, exposure can be converted to real dollars and cents via advertisements).

But because of this incentive, and because anything quantified can be compared (which can play into our natural and/or cultural notions of status, performance, and self-worth), many people can and do begin to pursue social commodities as ends in themselves.

The problem with such a pursuit, though, is that we ourselves become like a commodity. We become numericized, quantified. Our identity, at least in part, disconnects from our intrinsic nature and assumes the role of an economic agent in an economy of superficial pointlessness.

As sad as it is, it’s no wonder that this could lead to the increased rates of depression and suicide amongst teenage girls who use social media heavily. If our self-worth becomes perfectly quantified, then there is no textured meaning to life, and if our quantity is low and destined to stay low, then it becomes hopeless too. And the more time we spend online, the less time we have to connect offline in a more natural, less quantified environment, so the less likely we are to learn to value our own experience and expression.

And it’s worth mentioning: even if you and I can avoid falling into this trap of quantitative comparison, it can happen to vulnerable people who may seek to find a quick fix, a hack, to improve their self-confidence. And this can especially happen when we’ve been raised to value, hack, and tie our self-worth to metrics, such as with exam scores and admissions to the same highly ranked universities (universities who themselves seek to hack their rankings).

7. Feedback Loops

While it would be convenient to put all the blame on the creator for any toxic effects a platform produces, it’s important to note that online platforms can evolve even without constant intervention from the creator. To a large extent, the culture and dynamics are controlled by the users, the digital civilians.

For example, assuming that some social commodities have already been introduced in a product, users may create feedback loops which make them more and more emphasized. One loop comes through a culture of reciprocity, or repayment. If you Like my post, I may feel like I should Like your post. Or if you Follow me, I may feel socially obligated to Follow you. 

Of course, there is no law of nature that says such a culture is “fair” or “right”, or that I’m a “bad friend” for not following you back. Arguably, a more honest culture would have people simply doing what they want to do and not voluntarily indebting themselves.

Another loop is related: it comes through what we could call proportional giving. If I’m more likely to give a Like or a Follow based on my perception of how much others value a Like or a Follow, and if the amount others value Likes and Follows increases the more Likes and Follows they receive (or see others on the platform receiving), then this creates an acceleration in the volume of commodity transactions. It creates a cycle of giving and receiving more and more. And again, this can happen whether or not we continue to actually communicate (e.g. with language, with both parties actively engaged).
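To make this loop concrete, here is a toy simulation of proportional giving, with all rates and parameters invented purely for illustration: each round, the volume of Likes given is proportional to how much a Like is currently valued, and that perceived value rises with the running total already received.

```python
# Toy model of "proportional giving" (all parameters are illustrative).
# Each round, the Likes given scale with the current perceived value of
# a Like, and that value rises with the accumulated total received.
def simulate(rounds=10, base_rate=1.0, sensitivity=0.5):
    total = 0.0        # Likes received so far
    value = 1.0        # current perceived value of a Like
    volumes = []       # Likes given per round
    for _ in range(rounds):
        given = base_rate * value
        total += given
        value = 1.0 + sensitivity * total  # value grows with accumulation
        volumes.append(given)
    return volumes

volumes = simulate()
```

Under these assumptions the per-round volume grows faster and faster, which is exactly the acceleration in transaction volume described above.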

Actually, given the rich-get-richer aspect of Followers and exposure, it’s natural (as with an economy) that social capital will accumulate into a relatively small number of hands. This creates what’s known mathematically as power law distributions. Or what’s known politically as oligarchy. But in terms of healthy social interaction, this almost certainly decreases the amount of total communication because it creates a huge asymmetry.
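The rich-get-richer dynamic can be sketched with a minimal preferential-attachment simulation, the standard mechanism behind many power-law models (the parameters here are invented for illustration, not a model of any real platform): each new user follows an existing account chosen with probability proportional to that account’s current follower count.

```python
import random

# Minimal preferential-attachment sketch (parameters are illustrative).
# Each new user follows an existing account chosen with probability
# proportional to that account's current follower count, so early
# exposure compounds into a heavy-tailed, power-law-like distribution.
def simulate_followers(n_users=10000, seed=0):
    rng = random.Random(seed)
    followers = [1, 1]   # two seed accounts, one follower each
    targets = [0, 1]     # multiset: account i appears followers[i] times
    for _ in range(n_users):
        chosen = rng.choice(targets)       # proportional to follower count
        followers[chosen] += 1
        targets.append(chosen)
        followers.append(1)                # the newcomer's own account
        targets.append(len(followers) - 1)
    return followers

counts = sorted(simulate_followers(), reverse=True)
# Share of all followers held by the top 1% of accounts:
top_share = sum(counts[: len(counts) // 100]) / sum(counts)
```

Even in this tiny model, a small fraction of accounts ends up holding a greatly disproportionate share of all followers, the asymmetry described above.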

Most pairs of (follower, creator) or (friend, friend) have one of the two parties with many more followers than the other. The party with more followers likely has too much demand and too little supply to communicate with everyone. Yet the other party is devoting their attention to them, possibly with an unrequited love or envy (well, unrequited save for commodity repayment).

There are no longer ‘dancers’, the possessed. The cleavage of men into actor and spectators is the central fact of our time.

Jim Morrison, The Lords

The final feedback loop to discuss is fairly obvious but crucial: the loop that brings more and more people onto (or off of) a particular platform in the first place. People want to be where other people are, so adoption (and abandonment) naturally snowballs.

The danger, though, when everyone moves their focus to a new online world, is that there’s less focus on everything else (including the offline world). And if our focus is captured for long enough, the everything-else can decay, which gives us less reason to shift our focus back in the future.

Really, such shifting of focus to the new is natural and shouldn’t be feared. But well-capitalized actors have often sought to game this desire. For example, some investing groups have developed a whole Machiavellian science of “network effects” that they’ve used to grow online platforms. “Network effects” is really just another name for these feedback loops. And as for how to create them, it’s most often a matter of pouring money into aggressive marketing to create a collective delusion of social fun, in the attempt to generate self-reinforcing FOMO.

8. Algorithms

Behind every digital media product is a complex mixture of business strategy, design, code, and marketing. Different types of people work in these different areas, but within a corporation, they’re organized to all work together towards common aims.

I mention this because it’s important to note what we’re talking about when we use the word “algorithm”. For example, earlier I briefly mentioned how “algorithms” share popular content outside of the scope of their original exposure (e.g. sharing a post from a popular account to people who don’t follow that account). This sharing mechanism could be called one product “feature”. 

So what does such a feature consist of?

Well, first of all, such a feature would not be “launched” in the product unless it served a business purpose and fit into the business strategy—that is, unless it supported the short- or long-term growth of the product/company. Secondly, such a user-visible feature is designed to look a certain way and behave a certain way that is conducive to the overall user experience of the product. Thirdly, there is a lot of code logic that makes such a feature possible. There is the frontend and backend logic to make it look the way it should, trigger the way it should, and send the right data back and forth. There is also the ranking or recommendation logic—what’s often known as the recommender system—to decide what popular content to show to which people and when.

The general word “algorithm” just means a procedure, like the steps of a cooking recipe. So there are algorithms everywhere, be it in frontend, backend, databases, design, or business strategy. Though, in the software world, the word generally refers to the code part and specifically procedures which are complex or mathematical. So in the social media context, when we speak of “algorithm” we are specifically referring to (1) features that involve a recommender system somewhere in the mix, (2) recommender systems themselves, or both.

9. Commoditized Algorithms

So what do these recommender systems do? How do they work? Well, it’s shifted over time.

I gained an excellent view of this shift while working on recommender systems at Google (using both “old” and “new” techniques). But it took me a long time to realize what was really driving it. So let me share my perspective.

I did two stints at Google, the first from late 2015 to early 2019, and the second one a brief period at the beginning of this year, 2022. I worked on the same team both times, the first stint mainly on Google News and the second stint mainly on Google Discover, both media products. Not exactly “social” media but focused on similar, commoditized interactions such as Like, Dislike, Click, and Share.

My team’s heritage was in Google Search, which is also a recommendations product (given a query, what are the most relevant webpages?). Historically, Search used what is called information retrieval, which is a loose science for developing signals or heuristics and combining these into a formula for scoring webpages (one of the first major heuristics being the famous PageRank algorithm). To simplify, the methodology usually starts from identifying some subsets of queries which are qualitatively underperforming (keyword: “qualitative”), and then developing some heuristic to fix it, which is evaluated quantitatively through A/B tests and metrics, and also qualitatively through human review. 
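As a hypothetical illustration of this signals-and-formula style (the signal names and weights below are invented for illustration, not any real Google formula): several hand-designed signals are combined by a hand-tuned weighting into a single ranking score, and a human curator adjusts the signals and weights based on qualitative review of underperforming queries.

```python
# Hypothetical signals-and-heuristics scorer. Every signal name and
# weight here is invented for illustration.
def score(doc):
    # Each signal is a number in [0, 1] produced by its own heuristic.
    relevance = doc["query_term_overlap"]
    authority = doc["link_score"]        # e.g. a PageRank-like signal
    freshness = doc["recency"]
    # A human curator chooses and re-tunes these weights by hand.
    return 0.6 * relevance + 0.3 * authority + 0.1 * freshness

docs = [
    {"id": "a", "query_term_overlap": 0.9, "link_score": 0.2, "recency": 0.5},
    {"id": "b", "query_term_overlap": 0.4, "link_score": 0.9, "recency": 0.9},
]
ranked = sorted(docs, key=score, reverse=True)
```

The key property is that the formula is opinionated and inspectable: a human can read it, explain it, and change it when some class of queries qualitatively underperforms.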

Similarly, given our heritage, my team started with these methodologies in our media products. 

In both Search and our media products, machine learning, a category of predictive or statistical models, was used historically too. But mostly to create particular signals, which fit into a larger, human-curated framework.

Over time, though, such human curators (i.e. designers or implementers of user-facing heuristics) were slowly phased out and replaced with predictive models. This has increased in prevalence since deep learning arrived on the scene and became more and more capable, but importantly, the trend started long before. And in advertisements (which is also a kind of recommendation product), it was mostly predictive well before DL became what it is today.

So how does ML work in these recommendation products?

Well, any application of ML always optimizes for a particular metric, and if we’re speaking of supervised ML (its most common form, and the main form used in media products), it always relies on labeled data. In other words, input-output pairs, where the goal is to be able to accurately predict the output given any input. For example, if the input is “features” describing a credit card applicant—e.g. zip code, income, and credit score—and the output is whether the applicant defaulted on their payments, the model could be optimized to predict whether any new applicant (represented by their features) would default on their payments in the future.
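The credit example can be sketched in a few lines of pure Python, using a tiny logistic-regression model trained by gradient descent (the data points are made up for illustration): the inputs are applicant features, the output label is whether they defaulted, and the fitted model predicts that label for new applicants.

```python
import math

# Minimal supervised-learning sketch of the credit example in the text.
# The data is fabricated for illustration: (income in $10k,
# credit_score / 100) -> did the applicant default?
data = [
    ((2.0, 5.0), 1), ((3.0, 5.5), 1), ((2.5, 4.8), 1),
    ((8.0, 7.5), 0), ((9.0, 8.0), 0), ((7.5, 7.2), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.1, epochs=2000):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = p - y                  # gradient of the log-loss
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

w, b = train(data)

def predict(x1, x2):
    # Predict whether a new applicant (represented by features) defaults.
    return sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5
```

The model has no opinion about credit or people; it simply minimizes a prediction metric over labeled input-output pairs, which is the sense in which it is a commodity.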

Deep learning—that is, artificial neural network modeling—is no different in these regards. It is still predictive, still using the same kinds of metrics, and still reliant on input-output pairs. The difference is in the fact that before the prediction layer, it can automatically do “feature engineering”. It can convert complex data structures (e.g. images, soundwaves, or text), which traditional statistical models would not be able to handle, into numerical features, which they can. 

Deep learning often surrounds itself with an air of mystery and complexity. But at the end of the day, if we’re referring to how DL is used, rather than how to make it work, all it is is feature engineering plus prediction. It’s that simple.

The complexity really is a reflection of the complexity of the natural world. Data, when resulting from the unbounded expression of nature, can take an unlimited number of forms. So no single model can predict arbitrary outputs from arbitrary inputs. But given a well-defined task and data type, the aim is to predict outputs as well as possible given inputs. In other words, the goal is to create a commodity algorithm, one which performs perfectly and deterministically (or as perfectly and deterministically as possible).

ML models do not seek to be expressive or opinionated (as with signals and heuristics). They seek to be pristine commodities, perfect predictors.

And in the case of ML for media products, what they’re now predicting (most predominantly) are precisely the social commodities: Likes, Dislikes, Clicks, Shares, Followers. Specifically, their metrics optimize for the accurate prediction of commodity transactions (e.g. will this user Like this other user’s content?) or to directly increase future trading volume. The motivation for this comes back to incentives 1 and 2 from section 3: more transactions, all else being equal, means more exposure, and more exposure, all else being equal, means more money.
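A hypothetical sketch of what this looks like in a feed (the predictor below is a stand-in stub, not any platform’s real model): score each candidate item by its predicted probability of a Like, then show the highest-scoring items first.

```python
# Hypothetical feed ranking by predicted commodity transactions.
# predicted_like_probability is a stand-in stub for a trained model.
def predicted_like_probability(user, item):
    # Invented scoring rule for illustration only.
    return item["past_like_rate"] * user["affinity"].get(item["topic"], 0.1)

def rank_feed(user, candidates):
    return sorted(
        candidates,
        key=lambda item: predicted_like_probability(user, item),
        reverse=True,
    )

user = {"affinity": {"memes": 0.9, "news": 0.3}}
candidates = [
    {"id": "n1", "topic": "news", "past_like_rate": 0.5},
    {"id": "m1", "topic": "memes", "past_like_rate": 0.4},
]
feed = rank_feed(user, candidates)
```

Note what the objective is: nothing in this loop measures communication or content quality directly; the feed simply maximizes expected transaction volume.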

10. Growth

So what is behind this shift from signals and heuristics, which are (at least somewhat) subjectively expressive or opinionated, to predictive models, which are not (at least not in themselves)?

One reason is certainly the fact that statistical modeling has become more and more capable (and there is a feedback loop where the more we rely on predictors, the better we make them, so the more we rely on them, etc.). But this is far from the whole story.

For an already popular media product (one that is already capturing a large amount of user data), the benefit to the bottom line is larger when moving from a human-curated system to shallow ML than when moving from shallow ML to deep ML. And it’s larger moving from shallow ML to deep ML than moving from “basic” deep ML to more advanced deep ML (e.g. deep reinforcement learning or sequence models). And yet companies such as Google were quite hesitant to move toward shallow ML in the first place (and are still often hesitant). The hesitance also varies by product: as I mentioned, in advertisements there was never as much fuss over being overly quantitative or insufficiently qualitative, even though ads too are user-facing.

In reality, the core driving force behind the commoditization of algorithms is precisely the same as the driving force behind the development of the social commodities: growth. The origin of the Like Button was in many users being overwhelmed with keeping up with all their online acquaintances. Similarly, as a media product grows to include many different types of users—possibly from many different countries with drastically different cultures—it becomes harder for a software development team to create tailored heuristics for all of them. The more hand-crafted approach starts to see diminishing returns in its ability to grow the user base and their engagement levels. And thus, there becomes a need for a more widely-applicable, common denominator solution.

The thing about growth, though, is that it knows no bounds. It’s not a discrete thing: small → big. And similarly, commoditization is not a discrete thing: no commoditization → commoditization. With more growth comes more commoditization: larger volumes and more accumulation.

Let’s consider the incentives from section 3 again: all else being equal, more commodity transaction means more exposure, which, all else being equal, means more financial growth. Now it is true also that, all else being equal, more users on your platform means more communication on your platform. But it’s not necessarily true that it means more communication in society as a whole. 

And it’s also not true that more transaction implies more communication. We can’t even assume “all else being equal” in this situation because transaction directly affects communication. It can be used as an extra, but it can also be used as a substitute. So it’s possible that transaction can come at the expense of communication.

And in fact, with enough growth, it can eventually be the case that communication is an active hindrance to further transaction and growth. If, for example, users are spending “too much” time chatting, well, they could be spending more time scrolling and clicking. And this really is the danger of unbounded growth under such an imperialistic incentive structure.

So what encourages our growth to be unbounded? And what encourages our incentive structures to be imperialistic? Well, we’ll get to those questions in a moment. First let’s look at similar commoditization processes happening elsewhere.

11. Similar Processes Everywhere

Just to clarify again: there’s nothing wrong with commodities in themselves. The safety, security, and luxuries we all cherish in the modern world would not be possible without commodities. Without them, most of us would immediately starve and perish.

That being said, not all commodities are made equal and not all commoditization stays the same over time. I would rather my potatoes be commoditized than my social interaction. And I would rather 5% of my social interaction be commoditized than 50%.

The gravest devastation from commoditization, at least in my view, comes from the following:

  1. When we humans ourselves are pressured to become commodities (e.g. it’s commonly understood that this can happen with labor, but as we’ve seen, it happens even outside of “work”; we may effectively be slaving away for parties we do not even realize we’re serving)

  2. When meaningful human experiences or acts of expression are commoditized and it becomes harder to have or do these in non-commoditized form (the more this happens, the clearer it becomes that industrialization is no longer serving the people it exists to serve in the first place)

  3. When the economic drive to greater efficiency and/or consumption manifests in misleading narratives and mythologies to distract us from what’s really going on, or to trick us into thinking some outdated or net-harmful trend is actually net-beneficial (or even our path to salvation)

  4. When it “goes too far” and especially when it is aggressively taking advantage of young people or future generations (e.g. many media platforms right now are explicitly targeting young people)

Let’s discuss a couple of examples of commoditization processes in other realms. 

The first is science, specifically how many major scientific institutions have evolved such that their primary product—papers—must have a statistically significant p-value, which is often used as a binary indication of “truth” or value. And furthermore, how they discourage any kind of negative result or any iota of subjectivity. This is the pure commoditization of science or truth-seeking, the delusion of positivism in its most streamlined form. And again, it comes from growth, and also because science can serve industrialization even when it produces results which are not that interesting or empowering to individual people. Luckily, though, some statisticians are fighting this anti-scientific usage of statistical significance.

In my opinion, science that’s empowering to people should be conceptual, not purely statistical. But meaningful concepts are often at least somewhat subjective (not because their “truth” is subjective, merely because it can, in many cases, be difficult to demonstrate and validate a full concept without calling upon others’ lived experience). Thus, it’s very hard to publish humanistic science in such a positivist setting. All the meaning has to be snuck in either through lectures, conjectures, offhand comments in papers, or through books that scientists write in their own time. 

Great scientists certainly do all this, but it’s not exactly incentivized. And in fact, it’s often actively disincentivized, with legitimately interesting but somewhat subjective theories being cast aside as “pseudoscience”, whilst many purely predictive non-theories masquerade as “science”.

Secondly, let’s consider SSRIs (selective serotonin reuptake inhibitors), a popular commoditized treatment for depression. It’s recently become more widely known, thanks to the work of Joanna Moncrieff and others, that SSRIs do not really treat the mechanisms of depression at all, since depression is not caused by any kind of serotonin abnormality.

Considering a simplified view, if A is a healthy mental state and B is a depressed mental state, SSRIs do not convert B back to A (B → A), rather they induce an alternative state C (B → C), which may have fewer depressive symptoms than B but which may also have massive side effects, and withdrawal symptoms if a patient stops using them.

In fact, Moncrieff suggests (as common sense suggests too) that the root cause of depression is not in the brain. Instead it’s social or environmental:

An over-emphasis on looking for the chemical equation of depression may have distracted us from its social causes and solutions. We suggest that looking for depression in the brain may be similar to opening up the back of our computer when a piece of software crashes: we are making a category error and mistaking problems of the mind for problems in the brain.

We have an absolute need for treatments for depression (in part because social media appears to be driving more depression). And it’s easy to imagine how pill treatments can be the product of growth (e.g. demand for therapists or other holistic, tailored treatments exceeding supply). But this example illustrates a common problem with our present (liberal capitalist) incentive structures. Clearly the pharmaceutical companies that pushed, and continue to push, SSRIs, making tens of billions of dollars each year, are not incentivized to actually fight the mechanisms of depression, to get to its conceptual roots.

And in fact, this is largely the fault of an FDA approval system that relies predominantly on statistically significant changes in symptom metrics. Symptom metrics, which create a layer of alienation from patients and from disease mechanisms, allow this kind of problem to arise.

It’s quite analogous to the alienation that arose in my time at Google when we switched from developing heuristics—that is, concepts—and validating them with metrics (which is great!) to optimizing for metrics directly. In making our “northstar” quantitative rather than qualitative, we made our customers more a means to the end of growth, with programmers and leadership becoming more detached from those customers.

12. Unsustainable Growth

People throw around the term “unsustainable” a lot. And yet, we do a pretty good job of sustaining our unsustainable growth. But what these people really mean is that it’s unsustainable without massive destruction and harm to people.

So what encourages or even forces unsustainable growth? Well, it’s easy to chalk it up to simplistic notions like “capitalism” or “competition”. But simplistic notions like that are not so useful (unless two parties already agree on a viewpoint) because the reality is that “capitalism” and “competition” are of course not absolutely bad or absolutely good. And they’re also not even unitary concepts: they follow processes which evolve and these processes are ultimately the result of individual human actions and choices. 

Even Marx, for example, who wrote the most thoroughly scathing critique of the “capitalist mode of production,” was in many ways a big fan of “capitalism.” He felt that without the productive forces of capitalism, which created immense surplus value, the socialist revolution he pushed for would not even be possible. His historical analysis was just that, eventually, the capitalist mode of production becomes outdated, or more precisely, so overly exploitative and counterproductive to people that a drastic change is needed.

The reality of unsustainable growth today is much more specific than mere “competition”. There are specific structures and cultures in place which force growth at consistent rates. Here are a few specifically related to social media tech companies:

  1. Venture capital investment (e.g. it’s needed to grow at an incredible pace to continue receiving more investment money)

  2. Widely distributed ownership (e.g. a single owner or leader obviously has its risks, but at least a single owner has the power to decide where to “draw the line”; distributed ownership by investors, on the other hand, is not just risky but demands growth at an unsustainable rate)

  3. The stock market (e.g. the “value” of a company is tied to speculation, which is dependent on the expectation of growth)

  4. Media outlets (e.g. TechCrunch serves financialization by encouraging everyone to put money and investment on a pedestal; it’s always “Company X raised a $Y Series B” and never “A Critical Review of the Features of Product X” or “What I Love and What I Don’t Love About Product X”; in Marxist terms, it’s all about exchange-value and rarely about use-value)

These structures and cultures exist, but we all have the freedom to either actively bolster them, passively participate, passively abstain, or actively fight against them. And their continued existence and development is, at the end of the day, solely the result of the actions and choices of individual people.

13. Narratives to Defend the Status Quo

It is not enough to destroy one’s own and other people’s experience. One must overlay this devastation by a false consciousness inured, as Marcuse puts it, to its own falsity.

Exploitation must not be seen as such. It must be seen as benevolence.

R.D. Laing, The Politics of Experience

Let’s briefly discuss again my experience at Google and the shift away from hand-crafted methods to automation in digital media products. As I mentioned, it took me a while to figure out what was really behind the shift. And this was due (besides my own naivete) to many narratives that were actively hiding the reality.

One of these narratives was the idea that we were not directing or controlling consumer behavior; we were simply “adapting to changing user preferences”. Quite possibly, this narrative was simply a projection of our own impotence as employees. But the reality, of course, is that the creator of a product, and the product itself, are active agents in the world, causing tangible (if not fully predictable) change in the people they affect.

Of course, consumers too are active agents and can abstain or even push back. But to acknowledge the agency of consumers while absolving oneself of agency and responsibility is simply inaccurate. A coping mechanism.

Related to this narrative is a misconception about data. There is a widespread idea among those developing media products that we are mere passive collectors of data, when in reality, we are actively capturing these pieces of information. And from the vast, unbounded world of information, we are capturing very specific information, information which is also largely the product of our own actions. In other words, we are not passive collectors but active creators and capturers. As Laing put it, data could more accurately be called capta.

There are other misleading narratives too. For example, when contrasting automation with heuristics, there’s no frank acknowledgement that “We’re automating because it’s what’s needed for further growth.” Instead, we need to put automation on a pedestal and disparage more user-facing or human-crafted approaches. For example, if you categorize user needs based on user research (e.g. what types of news people like to read) and want to build heuristics based on those categories, colleagues and leadership may say, “Well, your categories are not perfect.” Never mind that doing product development behind a wall of metrics, with no users in sight, is also guaranteed to be imperfect, whilst additionally guaranteeing alienation (alienation which, by the way, can hide any widespread harm we may be doing).

At least fields like finance are able to be a bit more honest in this regard. There’s no hiding the fact that the primary goal is growth and money in finance. The corporations of Silicon Valley, however, continue to try to convince us that their primary goal is “making the world a better place.”

14. Mythologies to Defend the Status Quo

In addition to smaller narratives, there are also broader mythologies to defend and even encourage further commoditization. Complete with heroes, gods, and visions of future Paradise.

One of these mythologies I just mentioned. It’s the idea that the “technology” coming out of Silicon Valley by default benefits the general public. Never mind that the biggest accomplishments of technology we tend to cite came from decades ago (e.g. the automobile or the washing machine), and that our modern definition of “technology” has shifted to include companies like WeWork, various advertising and marketing tools, NFTs, and all forms of hype.

This mythology comes in many flavors, many sub-religions. But some specific manifestations can point us to what this mythology really stands for. For example, consider the myth of “AGI” as presented in the OpenAI Charter. It presents a vision of a vague future technology, “AGI”, which is guaranteed to appear at a specific moment in time and greatly “benefit humanity”. (Though, we also need to be concerned about our “safety” with respect to this new technology.)

What OpenAI has done thus far is improve deep learning models (i.e. code which performs feature engineering plus prediction). So it’s unclear how this could lead to a singular moment of Paradise. But a historical analysis of mythology will help to put this modern-day Millenarian or Messiah myth into perspective (for example, as Adorno and Horkheimer did in their work, Dialectic of Enlightenment). 

In ancient polytheistic religions, the gods were often associated with elements of nature (e.g. the sky, the sun, the ocean, the weather, fertility, and harvest). Later this shifted more to monotheistic religions where God was made in the image of man, and man in turn was to “have dominion over the fish of the sea, and over the fowl of the air, and over every living thing that moveth upon the earth.” This shift, according to Adorno and Horkheimer, reflected man’s increasing mastery over nature through technology, science, production, and population growth.

So let’s consider the myth of “AGI” in this context. In this myth, the machine is God. And, at least in my view, this is a clear reflection of our subservience to the machine. Furthermore, given that the “AGI” OpenAI (and other companies, such as Google DeepMind) are building is just a set of corporately owned tools for digital automation, it is really a reflection of our subservience to wealthy technocrats and the digital technology they produce and own.

Zooming out, it seems silly that people could believe such a vague and archetypal myth, and especially one that puts machines above people (given that we are people, after all, and anything we build should be to support us). Partly it’s due to marketing and bad actors (and the present environment often rewards bad actors). But that’s only part of the equation. 

Actually, this belief can arise in a similar way to how people can become obsessed with Likes and Follower counts on social media: if we, as technologists, tie our own success and self-worth to furthering the cause of the present tech industry and the present process of industrialization, then it’s natural that we will come to identify with this role. We disconnect, at least in part, from our intrinsic nature and assume the role of an economic agent, even if the economy becomes more and more exploitative and capital becomes more and more concentrated.

And it’s easy to do this (or rather, have this happen) because most of the time, these beliefs just sit at the backs of our minds. We don’t have to think about them actively, and especially not if we’re in a bubble, surrounded by people with similar beliefs. Furthermore, we can always find some historical and modern examples of positive outcomes of industrialization, and focus on those, not zooming out to look at the big picture. Usually, it’s only when we hit a wall in our own progress that we start zooming out or reassessing our path.

15. Change

Let’s revisit a question we asked at the very beginning: Can we imagine a world that maintains the positive benefits [of social media] while removing much of the negative?

The answer is: I think we can. Not to say getting there won’t involve a battle, but we’re speaking of imagination for now.

For example, if our present environment allows for and even rewards large-scale parasitic incentives—incentives that will almost certainly cost us in the future—then we can simply constrain this environment. And we can certainly do this in a targeted way which does not remove positive incentives. Laws and rules are quite flexible.

Furthermore, if the toxic effects of social media (and other products/industries) mainly come in the later stages of unbounded, unsustainable growth, and we have specific structures in place which not only encourage unsustainable growth but force it, then we need to change those structures. And to spread the word, since these structures often depend on our collective support (i.e. our choices and actions) and that support often comes through simple naivete.

Finally, we should again repeat (to ourselves and everyone around us) that society results from the individual choices and actions of people. Societies do not “just evolve” or “fix themselves”, and there are many different possible ways societies can develop and change, as history shows. 

For example, the aggressive industrialization and exaltation of objective rationality during the “Enlightenment” period in Europe was countered by the “Counter-Enlightenment” and “Romantic” periods, which in many ways tried to meld innovation in the external, objective realm with the internal, subjective realm. To not only master nature but to value nature, including our own human nature. In other eras, though, similar circumstances gave rise to fascism and genocide. (Simply because the fascists acted, organized, and spread their beliefs more aggressively than the non-fascists.)

We can take a Martian view and suppose that things will fix themselves. And they almost certainly will if we take this Martian view. But in what way? 

Or we can say resignedly, “What’s the use in trying to change the environment? I have no power.” But acknowledging our own agency is not the same as saying we as individuals should solve every problem, or take on huge challenges. We obviously have to collaborate to solve “big problems”. And there are local organizations everywhere that are actively working together, actively seeking joy and accomplishment in the tearing down of counterproductive structures (mostly those unique outgrowths of late-stage liberal capitalism) and the creation of new movements, cultures, and ideas.

Importantly, if we want to save nature, both our own Human Nature and the wider Nature that we’re a part of, well, we have to take action. But also—perhaps even first—we need to learn to love Nature, and to learn to not love not-Nature. And certainly, we need to try to determine the difference between the two.

The False Prophecy of “AGI”: A Quick TL;DR

Since my previous essay, OpenAI and the False Prophecy of “AGI”, was quite long, I figured I should make an internet-friendly, read-on-your-coffee-break version to summarize its main conclusions.

1. “AGI,” as presented in OpenAI’s Charter, is an archetypal Millenarian myth (or Messiah myth)

With the Messiah or Paradise-inducing event being, of course, “AGI.” These myths have been found across cultures for thousands of years, and this one fits the archetype perfectly. It contains all the core assumptions: (1) the event (“AGI”) will happen, no matter what, in the near future, (2) it will be a discrete event (one moment in time), and (3) it will benefit humanity immensely. 

Meanwhile, all the basic concepts—what “AGI” is, how it will “benefit humanity,” how it will be “autonomous,” why it has a risk of “harming humanity”—are left vague or undefined.

Useful links for cross-reference: Charter, Messianic and millenarian myths (Encyclopaedia Britannica), Millenarianism (Wikipedia)

2. Many, but not all, people working on “AGI” believe the myth quite literally (others, though, may believe a different kind of myth)

My assessment of believers in the OpenAI millenarian myth comes from a small number of conversations with OpenAI employees. But importantly, ML practitioners more widely, while they may not believe that exact myth, may hold another subtle belief at the back of their minds: that the field of ML is on the path to something resembling a sci-fi AI character, when in fact the direction of the field is driven by its most profitable applications (e.g. advertising and media, labor automation, and financial prediction).

3. The myth is made believable by the subtle manipulation of language and indirect PR

This manipulation of language is quite Orwellian. And there are a few specific ways the language manipulation is done. 

One is through the use of terms with multiple meanings. For example, “AI” is connected both to the technology/field of study, as well as fictional sci-fi AI characters. This allows marketers to introduce subtle assumptions. For example, they refer to technology having a risk of “harming humanity” or being “unsafe” without saying what could be harmful or unsafe. Although these assumptions are completely unfounded, it can work because we have prior ideas about “evil AI” from movies. And through the assumption, it also gives a misleading idea of what the technology is even supposed to be (e.g. it implies it should be something like in the movies).

Secondly, it plays on imprecise but not completely inaccurate terms like “autonomous.”

As for the PR, it is always cool, but it’s not reflective of the real business, which is where the incentives for developing the technology come from in the first place. For any given profit-making tech, it’s probably possible to devise some cool and innocuous-seeming application of the tech to mislead and encourage research for the profit-making use.

4. The manipulation of language is used often in Silicon Valley as a whole

For example, the most basic manipulation is with the term “technology.” The fact that we use such a blanket term allows marketers/investors to subtly trick us into thinking an NFT is in the same category as a washing machine. Or that a marketing analytics tool is in the same category as an automobile. Or even that WeWork is a “technology” company.

Instead, we should judge products and companies on a case-by-case basis (and look to their incentives and motivations). There really is no such thing as “technology,” merely individual people and organizations doing and building certain things, which may or may not be useful or empowering.

5. Any “AGI” research that compares machine “performance” to humans contains a notable fallacy, the “task fallacy”

First of all, the whole obsession with “performance” and “outperforming humans” should show how such companies are not creating humanistic products. Rather, they’re following the incentives of capital accumulation which are largely human-antagonistic.

And speaking purely logically, “performance” on well-defined “tasks” that are measured by some predictive metric—no matter how many tasks it is—is never a good measurement of what it means to be human, or any kind of independent being. This is the fundamental flaw of “AGI” research. 

But of course the research, again, is not motivated to create a sci-fi character, or a “being.” It’s motivated, rather, to create a greater hyperdivision of labor and more bureaucracy, to make for a more efficient economy. The zweckrational.

6. The real ML behind OpenAI’s “AGI” is for various forms of digital automation—this will not “benefit humanity” (at least not in our current economic environment)

For one, it will accumulate more capital into the hands of the already wealthy, since the automation tech is not owned by the general public. Furthermore, it may necessarily increase bureaucracy across society. More bureaucracy makes us smaller cogs in bigger machines. And makes our economic experiences (such as going to the doctor or flying on a plane) more choppy and impersonal, less human.

Of course, if our economic system is regulated or altered, this could be different.

7. Silicon Valley, at least at the present moment, is more focused on marketing and creating silly myths to hide its drive towards capital accumulation than on creating human-empowering products

OpenAI is an extreme case. But not that extreme. It’s just common startup advice to create a “cult” or a “movement.” And big companies are even worse. This is just basic incentives, or how our unregulated economic system works.

Of course, there are many exceptions to this tendency, even by large corporations. People that choose to empower other people to build visionary products (or people that fight to do it themselves without anyone’s permission). And thank god for that. But so long as the overarching incentives stay the same, the general tendency will stay the same too.

8. The reason there are so many “cultlike” companies in Silicon Valley is because of differences between digital markets and physical markets

If you dominate physical infrastructure—like say oil fields or a transportation network—you don’t need to spread myths because, well, you’ll dominate either way. People need you. Similarly, if you’re in a market that’s obviously useful and ethical like, say, cancer research, you don’t need to spread myths because, well, “curing cancer” sells itself.

But if you have a digital product (where you don’t also dominate some physical land or infrastructure) and what you’re building is not obviously useful, then you can only dominate by spreading misleading ideologies. Rather than territorializing land, you have to territorialize people’s psyches.

9. If we want ML research (or software development in general) to have any chance of being broadly beneficial, we need more regulation and/or to fight the predominant economic system

Right now, every ML “breakthrough” is going towards making addictive media products more addictive and towards advertising that exerts more control over consumers. So this needs to change. 

And in fact, the whole direction of ML research is driven by this use case and a small number of other use-cases (e.g. digital automation, as in the case of OpenAI, and various financial predictions). There are, of course, great use-cases of ML, but we need regulation to choke off the bad use-cases; otherwise, every breakthrough may be net-negative for the general public (though we may, individually, still be able to make a fat paycheck).

Luckily, regulation is doable! As is fighting the system. I mean, it won’t be easy, but local organizations everywhere, such as social democratic organizations, are working together to push for positive economic change. And they’re quite social.

OpenAI and the False Prophecy of “AGI”

Prior to the digital age, dominating a market was mostly a physical matter, since to produce physical goods or services it was (and is) important to own physical land or infrastructure. In the digital realm, though, and especially on the internet, the rules are different. This is because, for one, the internet has no finite territory. And for two, digital products are built largely from software, and software can be replicated for next to no cost.

This means that the strategy for dominating digital markets has shifted, especially for more purely digital markets (ones that do not have a major physical world component). It is no longer enough in these cases to merely own land and resources because we’re operating in a world without finite land and finite resources. Instead, what is of greater strategic importance are (1) the portals through which we enter our digital worlds, and (2) our desires to do certain things and go to certain places in these digital worlds. 

Portals are important, but these are controlled by a select few. A more commonly accessible and finite “resource,” though, is people. And so the strategy to dominate many digital markets has shifted from the territorialization of real estate and inanimate materials towards the territorialization of people’s desires and psyches. (This, of course, is not new to the digital/internet age, but its importance and prevalence have increased.) Desires, for example, can be swayed through positive messaging and imagery, so this is a crucial strategic component. But desires are often fleeting, so it’s useful too to establish ideologies and beliefs that will stick around for a longer term.

Hence the rise of the modern “cultlike” technology company, one whose strategy involves the establishment of some set of beliefs (in their customers, employees, investors, etc.). This term “cult” gets thrown around a lot to refer to tech companies, and rightfully so. But it’s important to note that there is a large variance in the level of “religiosity” of such companies. And this variance is related to the market incentives we were discussing. 

Specifically, the need to be cultlike is inversely proportional to both (1) how much your product is tied to physical infrastructure, and (2) how obviously useful your product is. For example, if you are a company like Amazon, which has digital portals (e.g. the Amazon shopping website and the AWS dashboard) but whose dominance is largely dependent on physical infrastructure (e.g. transportation and data centers), then you have less need to establish some kind of misleading belief system. And if your product is obviously useful and ethical, such as a test or treatment for cancer, you have less need as well. “Curing cancer” kind of sells itself, after all.

However, if your product or your market involves neither of these—it’s neither physically-tied nor obviously useful—then, well, establishing an ideology may be the only path to dominance. 

Which brings us to the topic of this essay: OpenAI. In a world of cultlike companies, they—if we look just a little below the PR surface—stand out as truly Davidian. And this is due largely to their market characteristics: They do not own physical infrastructure, which is why they’ve partnered closely with Microsoft and Microsoft’s Azure cloud. Also, they are not building something obviously useful. In fact, they do not make it clear what exactly this “AGI” they are building is even supposed to be. And yet, they are doing quite well thus far through a combination of flashy PR and deceptive ideology.

What is this deceptive ideology? Well, let’s take a look at the scripture itself: OpenAI’s Charter.

1. The Charter

The Charter, released in early 2018, is quite a remarkable document. It consists of merely one paragraph (well, two if we include the announcement paragraph) followed by eight bullet points, with each bullet point consisting of one to two sentences. And yet it contains a heavenly multitude of bullshit and misdirection. 

So much so that it’s going to take us a while to sift through all of it. But let’s go ahead and start from the beginning, from the announcement paragraph.

1.1. The Opening Paragraphs

We’re releasing a charter that describes the principles we use to execute on OpenAI’s mission. This document reflects the strategy we’ve refined over the past two years, including feedback from many people internal and external to OpenAI. The timeline to AGI remains uncertain, but our charter will guide us in acting in the best interests of humanity throughout its development.

The first thing we could notice, which is just a minor observation, is simply the fact that most companies do not have a publicized “charter” like this. They do have charter documents and bylaws (i.e. formal legal documents) for how their board is to be legally governed. But they do not have an informal publicized “charter.” 

On the other hand, many companies do have a publicized “mission statement,” or perhaps a “white paper,” a “manifesto,” or a set of “core values.” All of those phrases indicate “these are our principles.” “Charter,” though, has a different connotation: that OpenAI is bound (e.g. legally) to follow whatever is contained in the document.

In reality, though, the company is not bound at all. But who may in fact be bound are the employees of OpenAI, since adherence to the Charter—and even holding others in the organization accountable for adhering to the Charter—are crucial criteria for being promoted (this coming from the exposé by Karen Hao).

Anyways, more important than the minor issue of “charter” is the last sentence of the paragraph: “The timeline to AGI remains uncertain, but our charter will guide us in acting in the best interests of humanity throughout its development.”

This sentence contains several notable assumptions: that “AGI” is an inevitability, that “AGI” is a well-defined “thing,” and that there will be a discrete moment in time where we can say that this “thing” exists.

But if “AGI” is not well-defined—if instead (and this is purely a hypothetical) it’s a conceptual play-doh to be molded as they see fit—well, these assumptions would serve to absolve OpenAI of responsibility for what they’re building. Because it says, effectively, this “thing” is getting built, whether it’s us or someone else.

But wait! In the next paragraph, they do define “AGI.” Surely this will clear things up.

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles:

“Highly autonomous systems that outperform humans at most economically valuable work.” A lot to unpack here. 

First, sadly, it seems that what I said about the play-doh was accurate. This definition is effectively defining “AGI” as that which performs some “economically valuable” work. An amorphous definition that allows them to seek profit in any technologically possible way.

Secondly, I want to point out that the phrase “highly autonomous” is highly suspect, especially in light of OpenAI’s current products (i.e. their cloud APIs and Github Copilot). Why? Well, behind any modern “AI” application is machine learning, or more specifically, deep learning. And deep learning, like any piece of software, is simply a tool targeted by a person or organization for a particular purpose. Deep learning, in fact, is absolutely no different than any other software tool in this regard. 
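To ground this claim, here is a toy sketch of what training a “model” actually is: ordinary code in which the developer chooses the data, the objective, and the stopping point (the linear model and the numbers here are illustrative assumptions, not anything OpenAI ships):

```python
def fit(xs, ys, lr=0.01, steps=2000):
    # Fit y = w*x + b by gradient descent on mean squared error.
    # The objective, the learning rate, and the step count are all
    # decisions made by the developer, not by the "model".
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# The "training data" is likewise supplied by the developer.
w, b = fit([1, 2, 3, 4], [3, 5, 7, 9])  # underlying rule: y = 2x + 1
print(round(w, 2), round(b, 2))
```

Deep learning scales this same pattern up enormously, but nothing in the scaling adds autonomy: every objective and dataset is still chosen by a person or organization.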

Now, to be fair, there are some specific products or applications that could rightfully be called “autonomous.” A self-driving car would be one example, though in that case “self-driving” is more precise than “autonomous,” given that such a car is still a tool to take you from A to B as you specify. If my coffee pot is programmed to start brewing before I wake up, that too could be said, with some degree of accuracy, to be “autonomous.” But this kind of autonomy is quite a different matter from “intelligence.” It’s more a reflection of the product or application being a kind of moving, functional device. And it’s important to note that none of OpenAI’s current products could be said to be “autonomous.”

For example, though GPT-3 could be said to be “smarter” than GPT-2, it is not more “autonomous.” That’s not to say OpenAI won’t build more “autonomous” applications in the future. But given their current focus on the cloud, to which Microsoft is presumably directing enormous resources (a $1 billion investment), it seems likely that this phrase “highly autonomous” is there mostly for the sake of misdirection.

It’s worth noting, by the way, that the reason for using this phrase may be similar to the reasons for using the term “AI” instead of the more precise terms ML and DL (machine learning and deep learning), or the term “model” when referring to a particular instantiation of ML software (as ML researchers and engineers would refer to it). In marketing (by OpenAI as well as Google, Facebook, and others), the blanket term “AI” is always preferred because it helps to distract from what we were just discussing: the fact that these ML models are always targeted by an organization for a particular purpose (and always using some data). “AI,” by tying into our common sci-fi notions of AI characters, makes us think that this software is more “autonomous” than it really is.

Or, to put what I’m saying another way: the only real autonomy in these corporate applications is that of the investors, executives, and researchers/engineers.

Importantly too, and perhaps surprisingly, the overuse of “AI” and “autonomous” even aids in establishing a subtle belief in the minds of many expert ML researchers and engineers: that the field is on a quest to create some sci-fi character. Which kind of character is, of course, never specified. And the reality of the research agenda is that it’s mostly being driven to increase profits in specific application areas (e.g. advertisement and media products, the automation of labor tasks, and financial prediction).

It may seem surprising that even expert researchers could hold such a belief, but it’s possible because the belief is only very rarely at the forefront of our minds. Most of the time, we’re working on a specific model for a specific application, which is often an engaging intellectual task. So it’s enough for it to be merely plausible that such work could contribute to some kind of positive sci-fi future. Sadly, though, such a belief is nothing short of a leap of faith given the current profit-driven research agenda.

Anyways, this may sound like quite a lot of information merely on the terms “autonomous” and “AI,” but this matter is crucial to the whole narrative of OpenAI (and much of Silicon Valley, for that matter).

This also brings us to another important topic in this paragraph: “safety.” As just discussed, this software is fully wielded and fully targeted by OpenAI (and the clients of their cloud API service) for their own purposes. And yet we speak of “safety.” This use of language further gives the false idea that the software is an entity-in-itself. But it is also a simple case of fear mongering, a classic propaganda technique.

Fear mongering, in this case, serves multiple purposes. For one, speaking of the idea of “inevitability” we discussed earlier, if we’re fearful of someone building some scary thing, then we’ll think: We must also build this scary thing. It’s the only way to defend against the other scary thing. Though, in this case, no one has even defined what the “thing” really is.

Secondly, fear mongering also serves to increase the speculative value of companies and to attract attention. Because, well, if this technology is powerful and scary, it must work pretty damn well. Not to say that OpenAI’s technology doesn’t “work well.” I mean, it certainly does some new and flashy things at the simple payment of a few dollars (with DALL-E being like the digital equivalent of a toy capsule vending machine).

But what exactly are we being fearful of? I mean, if we are to be fearful, should we really be fearful of these models, simple figments of software? Or should we perhaps be fearful of OpenAI and other corporations that are targeting them for various purposes? Or the data about us that they are storing and using?

One final thing to discuss about this paragraph before moving on is the idea of automation and whether it will “benefit humanity” as they say. Technically, they did not use the word “automation,” but in saying “outperform[s] humans at most economically valuable work,” it is implied. This is a complicated question, though, and separate from our analysis of the Charter, so let’s save it to be revisited later (section 3).

1.2. The “Principles”

Let’s take a look now at the eight bullet-points or “principles.”

Broadly Distributed Benefits

* We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

* Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

“Avoid enabling uses…that harm humanity or unduly concentrate power.” First of all, this point about unduly concentrating power is quite hypocritical given that they’re partnered with Microsoft and that their investors include some of the wealthiest people in the world. But then again, they likely have a different definition of “unduly” than the general public. After all, Sam Altman, the CEO of OpenAI, has said: “We need to be ready for a world with trillionaires in it.” So we can see where his priorities lie.

Secondly, in light of the fact that they, again, did not clearly define “AGI” in the first place, and that they did not tell us in what ways “AGI” could “harm humanity,” this quote about harming humanity sounds quite out-of-place. Even creepy. It almost sounds like they’re preparing us, or giving themselves permission, to harm humanity. I mean, why else are we even talking about harming humanity? You didn’t tell us what could even be harmful.

In other words, a literal reading of the Charter finds no description of what “AGI” is exactly or why it could possibly be harmful. And thus, a literal reading would find the mention of “harm[ing] humanity” bizarrely unfounded. Appearing out of nowhere. However, the reality is that they’re not intending for readers to do a purely literal reading of this document.

Instead, they’re playing on our preexisting notions of “evil AI” from science fiction. Skynet and the Terminator. HAL 9000. The Borg. If we come in with these prior connections, then harming humanity may seem a perfectly reasonable concern. But, as discussed earlier, the “AI” we’re developing with deep learning is utterly unrelated to sci-fi AI characters. (And in fact, the real Borg is more likely to be OpenAI—and wider forces of Silicon Valley capitalism—indoctrinating us into drones and fusing us with addiction-optimized digital technology.)

Next, let’s look at the second bullet and specifically “needing to marshal substantial resources to fulfill our mission.” This too is basically giving themselves permission to do things like take massive investment from Microsoft or turn what was once a non-profit into a for-profit. But this permission that they’re giving themselves knows no bounds. And that’s because their “mission” is not well-defined. Let’s look back to a quote from the second paragraph: “We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.” If their mission includes anything that aids others, what is its stopping point?

There is a big assumption throughout the Charter that “AGI” is a discrete event. But here in these bullets they refer to “AI or AGI,” which undercuts the idea that they are necessarily working towards some discrete event.

And, in light of this and everything else we’ve discussed, if we take an honest look at what they’re saying, we find that none of this (not the “mission,” not the “AGI,” not the “benefiting humanity,” not the “autonomy”) has any strict meaning at all. All of these empty terms serve to give the semblance that OpenAI is a moral entity, even if their actions and incentives suggest otherwise. Even if they’re primarily seeking 100x returns for their investors, even if they encourage employees to police each other for sins against the Charter, well, it’s all in the service of the greater good. Whatever that greater good may be.

Long-Term Safety

* We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.

* We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”

This is more permission-granting. “Driving the broad adoption of such [AI safety] research across the AI community.” As I argued, the whole idea of “safety” is both fear mongering and a distraction from the fact that the only real autonomy in any ML system is the autonomy of the makers. But here, they are giving themselves permission to spread this misleading ideology further.

The second bullet appears to give them permission to be acquired by some other corporate entity and again reinforces the unfounded assumptions that “AGI” is both inevitable and will be a discrete event.

Technical Leadership

* To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient.

* We believe that AI will have broad societal impact before AGI, and we’ll strive to lead in those areas that are directly aligned with our mission and expertise.

These bullets, which speak to OpenAI’s focus on technology, seem, at first glance, obvious: Of course OpenAI is seeking to improve their technology. But upon a deeper look, they contain some notable subtleties. For one, this is the first mention of “policy and safety advocacy.” Sneaking this phrase in here plants the assumption: Oh, we will do policy and safety advocacy too. In other words, we may fund or lobby for favorable policies and ideologies.

Furthermore, coming back to the idea of absolving themselves of responsibility for working on “AGI,” these bullets again serve that purpose. According to this document, “AGI,” that ever-elusive concept, is a certainty, and simply waiting on the sidelines is not an option.

Cooperative Orientation

* We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges.

* We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.

These final bullets are again giving themselves more permissions. In this case, to “cooperate” with other institutions of power, as well as to, at any point, stop making their research public.

2. The Disciples/Employees

If we take a step back, we can see that OpenAI’s Charter is a simple Millenarian or Messiah myth: The Messiah (“AGI”) will come in the near future. We will prepare for the coming, but no matter what, it will happen. Every action we do, however it may appear, is all to bring about this divine intervention. And though we may collect profits, though we may spread falsehoods, trust us: it is all for the future Paradise that the Messiah will bring.

In other words, an ancient and universal human myth that’s appeared in countless cultures. But in this case, given that it’s operating under the investor-controlled, grow-forever capitalism of Silicon Valley, it’s merely a way to hide aimless capital accumulation behind a veil.

So, given all this, the deceptiveness and folly of OpenAI’s ideology, a natural follow-up question is: do people at OpenAI really believe it? 

Well, I recently talked to a few OpenAI employees and probed at exactly this question. And the result, at least from my small sample, is a mixed bag: many but not all employees do buy into this religion of “AGI.”

One employee, an engineer, was quite practical. He said effectively: “The company will talk about ‘AGI’ but from my perspective the business is what matters. For me, I’m interested in making my own company in the future and the OpenAI brand will help me.”

Another employee, though, gave me concern. Actually, it was these concerning answers to simple questions that first led me to take a closer, more skeptical look at OpenAI. At the time, I didn’t know much; I thought they simply did some interesting research. But after talking to some employees, I was left with a weird feeling in my stomach, like something was very “off” here.

Anyways, this employee was a recruiter. She mentioned, “we want to make sure our AGI is safe.” So I simply asked: “What are you making the AGI for exactly? Because the safest ‘AGI’ is no ‘AGI,’ right?” She stumbled through a few possible answers and eventually got to “Well, why did we make the Industrial Revolution?” After discussing further for a few minutes, though, she too seemed genuinely curious to have the answer and pointed me to someone else to ask.

So, yes, definitely a strange answer. But we should be careful to not poke fun at this particular recruiter because this belief, or one very similar, is one that many people in Silicon Valley hold, even expert technologists. The belief that Silicon Valley is by default making the world a better place as did the Industrial Revolution. And given our terminology, it makes a kind of sense: after all, Silicon Valley is producing “technology” as did the innovators of the Industrial Revolution. And early “technology” made the world a “better place.” So why wouldn’t ours also?

If you question this belief, a common response may be: Would you rather live 200 years ago?

But, unfortunately, this is not sound logic. And the reason, as with the Charter, comes back to a misuse of terminology. First of all, although we have a phrase for the “Industrial Revolution,” it’s wrong to treat that as corresponding to a discrete event when in reality it describes a whole historical period. A period of time that involved many individual people’s choices and actions. And also, even if the “Industrial Revolution” did “make the world a better place,” we still need to carefully compare the kinds of “technology” we are developing today to the kinds early industrial innovators developed. I mean, does an NFT have as much utility as a washing machine? What about a marketing analytics tool versus, say, an automobile?

Finally, I talked to one fairly high up director at the company (the person the recruiter pointed me to). He again mentioned “safe AGI” so I asked him the same question I asked the recruiter—”why do you want to make this ‘AGI’ if it is so ‘dangerous’ as you say?” He answered, “Well, it will create abundance.”

Unfortunately, though, he wasn’t able to tell me what he exactly meant by “abundance,” and he promptly shifted the conversation. At this point in the conversation, though, I couldn’t just move on. And I wasn’t really sure if this was just a marketing spiel of his or a real belief, so I asked him directly: “This ‘AGI’ stuff, is it just marketing? Or do people at the company really believe it?” He looked confusedly taken aback and paused for a moment before saying, “No, people really believe it.”

3. Digital Automation, Not the Promised Land

At this point, we should be hesitant to trust anything OpenAI is doing. For one, they’re spreading this strange mythology. If they truly believe it, we should be concerned. And if they don’t truly believe it (and are actively crafting it for the sake of manipulation), we should also be concerned. And secondly, despite their “Open” name they are now tied to market incentives and seeking first and foremost returns for their already wealthy/powerful investors and executives. 

But I know that even upon hearing all this, many people who have been impressed by OpenAI’s technological feats (for example, GPT-3 and DALL-E) may still think: Well, they must be doing something right. We should encourage them to keep going. This technology could be really useful.

So it’s worth also considering this more practical question. 

First, though, a clarification: as a former ML engineer, I will readily admit that what OpenAI has done from a pure technological difficulty perspective is incredibly impressive. I mean, deep learning, although it has become more standardized over the years, is lightyears from easy for many, many reasons. Getting a model to properly fit a large dataset is a subtle art. And to be honest, although I got better at it over the years at Google and PathAI, I never had the “magic touch” that some people who really understand all the subtleties and nuances do.

Also, another important clarification: technology itself is absolutely not the problem. I mean, I personally love programming. And there’s no doubt that it’s an elegant and creative craft with great transformative potential. For example, the book Masters of Doom tells a wonderful story of the power of software visionaries who are not motivated primarily by money. And, for example, although I find social media when it’s directly optimized for youth addiction quite abhorrent, I find our new voice assistants quite cool and useful. All this being said, we cannot simply group all “technology” together. An NFT, whatever NFT investors may tell us, is categorically different from a washing machine. A marketing analytics tool is different from an automobile. And thus, particular products should be judged on a case-by-case basis. 

Of course, the establishment wants to brand “technology” as one monolith (one that even includes such companies as WeWork) and to not encourage critical assessment. But we should rate products the way we rate movies or the way we rate restaurants. It’s a bit subjective. We can’t always predict what a company will do. But it’s good to be skeptical. And furthermore, it’s good to look at motives and incentives because, as we can see with the Charter and most other PR and marketing, we can’t simply trust what corporations tell us.

Anyways, this brings us to OpenAI’s aim to build some digital software (“AGI”) that “outperforms humans at most economically valuable work.” In other words, to develop some forms of digital automation.

First of all, not even considering the software itself, their aim to “outperform humans,” coupled with their requirement to create massive ROI, sounds quite clearly antagonistic towards humankind. Let’s not forget that it’s also possible to empower people: through better education, better training, and better tools, people can be made better. I love the film First Man about Neil Armstrong because it shows exactly this point. Through training, self-study, passion, and opportunity, he became an impressively competent human being, mentally and physically. And I really believe all people are capable of that. Also, we can strive to make our society a better place for human thriving.

Doesn’t that sound like a better goal for technology than “outperforming humans”?

But let’s also consider the software OpenAI is making. It may sound like they are antagonistic towards people, but GPT-3 and DALL-E seem cool, seem fun. How could those be antagonistic?

Well, one thing to keep in mind is that the examples we see in press releases never match the real profit-making use cases. If we imagine a pie chart denoting where the money comes from, the PR use-cases make up a tiny sliver. Or, in many cases, like with OpenAI’s Dota bots, no sliver at all (they are not seeking to make money from Dota directly, after all).

More practically, automation where the means of production are owned by a corporation rather than the general public generally serves to accumulate capital to the owners, rather than the general public. There are narratives that suggest that such capital can “trickle down” to the general public, but believing those requires yet another unfounded leap of faith. 

Luckily, some grassroots organizations such as EleutherAI are working to make open source equivalents to OpenAI’s tech. It’s possible that tech can be owned and shared by the general public. I mean, that was supposedly the original idea behind “Open”AI, but if they won’t do it, luckily others will fight for it.

Speaking of automation and “outperforming humans,” there is also a serious problem with the notion of “outperformance” that is found in the “AGI” research of OpenAI, DeepMind, and others. It’s what I call the task fallacy.

The idea is that although machines can outperform humans at certain well-defined tasks (and the number of such tasks and degree of outperformance will only increase in the future), we should be careful not to conflate such outperformance on well-defined, controlled tasks, with some holistic notion of “performance.” In other words, with the holistic notion of how technology can increase human thriving, or improve the overall “user experience” of living in the world (that is what our goal should be, right?).

Let’s consider an example: digital pathology, which I worked on at PathAI. There are many studies comparing a DL model’s performance on classifying cell types to an “average doctor” in a controlled setting. In many cases, the predictive accuracy of the models has been shown to be higher than the average doctor’s and furthermore, that doctors have a high rate of disagreement between them. Now, it may be the case that this indicates we should use this model in clinical settings. But let’s be mindful of the precise facts here. 

First of all, as we said before, we could work to make the “average doctor” better. Secondly, although doctors may disagree, we have many practices that take this into account. For example, we most often have multiple doctors checking results (and doctors have the power to bring in other specialists as needed). Furthermore, “predictive accuracy” is just one metric. There are a multitude of other considerations that go into any assessment of a real person’s condition. And especially for complicated cases, people need to be able to do holistic assessments, understand concepts, and dig deeper (e.g. maybe we don’t even have the right data).
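To make the “predictive accuracy is just one metric” point concrete, here’s a toy sketch (all labels invented, not real pathology data) in which a model “outperforms” a doctor on overall accuracy while catching none of the tumors:

```python
# Toy illustration with made-up labels: a model can beat the "average doctor"
# on overall accuracy while being useless on the cases that matter most.

def accuracy(truth, pred):
    # Fraction of all patches labeled correctly.
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

def sensitivity(truth, pred, positive="tumor"):
    # Of the truly positive patches, how many did this rater catch?
    pairs = [(t, p) for t, p in zip(truth, pred) if t == positive]
    return sum(t == p for t, p in pairs) / len(pairs)

# 10 tissue patches: 8 benign, 2 tumor (class imbalance, as on real slides).
truth  = ["benign"] * 8 + ["tumor"] * 2
model  = ["benign"] * 10                                   # always predicts the majority class
doctor = ["tumor"] * 3 + ["benign"] * 5 + ["tumor"] * 2    # over-calls tumors, but misses none

print(accuracy(truth, model), sensitivity(truth, model))    # 0.8 0.0
print(accuracy(truth, doctor), sensitivity(truth, doctor))  # 0.7 1.0
```

The model’s 80% accuracy here comes entirely from predicting the majority class; on the measure that matters clinically (sensitivity to tumors), it scores zero. Which metric to optimize is itself a judgment call that no single “outperformance” number settles.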

And finally, there is one more important aspect to the task fallacy, which is related to its practical usage in the world. Specifically, if we break everything, such as treating a patient, into a series of predictive tasks without being mindful of the overall “user experience,” well, the result is simply a bureaucratic nightmare. What could be a pleasant, holistic experience becomes choppy and impersonal. And if money shifts to the pockets of the owners of digital doctors, that will likely mean less money in the pockets of doctors, and fewer doctors there to improve the system over time and provide that human element. 

In theory, technology can be used in brilliantly ergonomic, empowering ways, and we should fight for these uses. But importantly, growth-oriented companies are not directly incentivized to optimize for empowerment. So again, we need to be skeptical of individual companies and products, and we need to work towards better incentives.

4. The Bigger Picture

OpenAI is an extreme company. I mean, they’re literally trying to indoctrinate us with a modern day Millenarian or Messiah myth. They’re quite literally creating a kind of religion, fueled by over a billion dollars of capital from Microsoft. And their doctrine speaks of “benefiting humanity” whilst all logic suggests otherwise.

And yet, it’s not that extreme. Not for Silicon Valley. In fact, it’s just common, run-of-the-mill startup advice that one should try to start a “cult.” Or a “movement.” To quote the well-known venture capitalist, David Sacks:

The best founders talk eloquently about their mission and the change they want to make in the world. They speak about something larger than dollars and cents. They articulate a vision of the future that attracts adherents. They create a movement.

He wrote this in his essay, Your Startup is a Movement, which is essentially a playbook of Machiavellian marketing techniques. One that never mentions the basic concept of, you know, making a product that’s actually fucking useful.

Meanwhile, another notable venture capitalist, Chamath Palihapitiya, has openly referred to Silicon Valley as “a dangerous, high stakes Ponzi scheme.” One where, as always, working class people are set to lose the most.

And it’s not just startups. The problems with big corporations are even worse. At companies like Google, Facebook, and TikTok, deep learning is being used most profitably towards better harnessing user data for advertisement and for media products. With the media products being directly optimized towards “engagement.” In other words, towards territorializing our desires, or what could simply be called “addiction.” Of course, they try to make sure that the precious human time of ours that they harness is somewhat worth our while. That people don’t regret it too much. But their primary aim, as I’ve seen first hand, is “engagement.”

Mind you, not every online, social platform or forum does this. It’s not a problem with technology itself (those inanimate tools that are always wielded by people for particular purposes). Merely with the incentives of an unregulated market.

Now, I think we can all agree that this is not technology at its best. And that it’s not the kind of technology people naturally are motivated to build of their own accord. Merely what’s encouraged or required in the current system.

But luckily, there are solutions—if we’re willing to fight for a better system.

5. A Better Path Forward

I respect the work of groups that seek to make software like what OpenAI is developing actually open, actually owned by the general public. But if we want a better world, we have to go farther and question the incentives that are driving the development of specific technologies in the first place. 

So here are a few large-scale action items to improve the current state of affairs. Mind you, these are just a few of my own, possibly naive, ideas. So I encourage you to do your own assessment. And of course, I’m happy to discuss and work together (and there are many local groups already working together towards similar kinds of change).

Item 1: Regulate Data-Driven Media Products and Advertisement

If we make a pie chart representing the money made from deep learning, we’ll find that an inordinately large portion comes from advertising and “engagement”-optimized media products (and even OpenAI’s research and tools can be used towards these aims). These use-cases are the lifeblood of the ML juggernauts Google, Facebook, and TikTok, after all.

How is deep learning applied to such use-cases? Well, the basic idea is that massive amounts of personal user data can, through advanced statistical modeling, be turned into approximate models of real consumers. And those models can then be used to direct—to some extent, control—consumer behavior.
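As a deliberately simplified sketch of that pattern (all names and numbers invented), here is the skeleton: interaction logs are fitted into a per-user statistical model, which is then used to rank content by predicted engagement. Real systems use deep models over far richer features, but the incentive structure is the same:

```python
from collections import defaultdict

# Invented toy stand-in for "massive amounts of personal user data":
# (user, topic, clicked) interaction logs.
logs = [
    ("u1", "politics", 1), ("u1", "politics", 1), ("u1", "politics", 0),
    ("u1", "cooking", 0),  ("u1", "cooking", 0),  ("u1", "sports", 1),
]

def fit_user_model(logs):
    """The 'approximate model of a consumer': per-(user, topic) click rates."""
    clicks, shown = defaultdict(int), defaultdict(int)
    for user, topic, clicked in logs:
        shown[(user, topic)] += 1
        clicks[(user, topic)] += clicked
    return {key: clicks[key] / shown[key] for key in shown}

def rank_feed(model, user, candidates):
    """Directing behavior: show first whatever is predicted to be clicked most."""
    return sorted(candidates, key=lambda topic: model.get((user, topic), 0.0),
                  reverse=True)

model = fit_user_model(logs)
print(rank_feed(model, "u1", ["cooking", "politics", "sports"]))
# -> ['sports', 'politics', 'cooking']
```

Note that nothing in this loop asks whether the ranked content is good for the user; the only objective in sight is predicted engagement.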

Importantly, though, so long as this use-case makes up a huge portion of the pie chart, we have to be wary that any new ML “breakthrough” could simply allow for more consumer control. Actually, we should not even bother being wary. We can be certain that it will happen. 

It’s a sad fact but it’s simply our reality until something changes. And although I have no doubt it can change, the corporations are not going to change it out of the goodness of their hearts. They, in fact, are actively pushing in the opposite direction. So far in the other direction that companies like Facebook will acquire and use a fake VPN service to spy on teens without their knowledge.

Item 2: Funding With Better Incentives

It’s great that Silicon Valley funds technologists to build things. But again, “technology” is not a monolith. If I build a machine to paint the sky purple, well, maybe that is “technology,” but is it worthwhile? What if I build the latest and greatest in pump and dump scams? 

In other words, although we have a lot of capital ostensibly funding “innovation,” and it’s easy to see this, from a personal perspective, as a world of opportunity (I could make loads of money by innovating!), the reality is more problematic.

First, by taking equity, the investors can end up controlling the majority of a company. Even if they, as in the case of Google, do not own the majority of the voting shares, they can still own the majority of the board seats and thus effectively have control. It’s important to note that this total focus on equity-based investing is a relatively new phenomenon. If funding is absolutely needed, it would be better to have some debt-based funding, which lets founders maintain control and skin in the game. And which also wouldn’t require the company to keep growing forever.

The company Basecamp and their founders are great proponents of this kind of philosophy. And they are quite financially successful too, so I recommend reading and listening to what they’ve said on this.

It could also be better to pivot to government-funded innovation for hardware and software, like a computing equivalent of the NIH. This of course would come with a whole new set of concerns (and would certainly still end up tied to corporate interests), but it could be an improvement if implemented well.

Item 3: Regulate Corporate Political Donations

Ever since the Citizens United v. FEC ruling, there has been no limit on corporate spending to influence political campaigns. And given the ubiquity of advertising and the fact that many of us don’t put a ton of research into every political candidate, this means that, to a large extent, politicians can be bought by corporate entities and/or wealthy individuals.

Naturally, this allows the wealthy and powerful to maintain the status quo and even shift more capital away from the general public and into their hands. So we have got to put a stop to this legalized bribery (though, regulating data-driven advertisement would also help by taking some power and incentive away from advertisement).

Perhaps the connection between this and the problems with technology is not obvious, but it should be: Tech corporations and/or investing groups can effectively buy politicians, who then prevent any systematic changes or regulation that would go against them. I mean, why, in the face of incredible evidence (and simple common sense) pointing to mental health problems due to social media—problems on a never before seen societal scale, problems which continue to increase at a rapid pace—is there never any regulation in America?

Item 4: Software Against the System

I’ve found an interesting pattern: software that is the most ethical—or the most rooted in our social reality—is often the most interesting and elegant software too. For example, the free and open-source software movements have developed incredible, extensible tools. And Bitcoin, although it spun off into many get-rich-quick crazes, was created by cypherpunk hackers seeking an alternative system in the wake of the 2008 financial crash. They weren’t seeking fame or funding, and if they had been, they wouldn’t have come up with such a breakthrough.

So we need more hacking like that, and really, there are a million ideas and things to do. Software to increase transparency and fight false narratives. Software to increase accountability. Software to empower normal people. Software to better organize together. Software to give real independent nonprofits a voice. Or just lovely, user-centric products to show that products can be better when they’re not focused primarily on money and metrics.

Conclusion: If we really want to transform society for the better, we can’t just pray for a Messianic intervention. We can’t just believe in silly myths and shut off our critical thinking. No, we’ve gotta face up to real problems honestly and critically, and work together.

“AI Safety” is a Purposeful Distraction

A kilometre away the Ministry of Truth, his place of work, towered vast and white above the grimy landscape…

The Ministry of Truth—Minitrue, in Newspeak—was startlingly different from any other object in sight. It was an enormous pyramidal structure of glittering white concrete, soaring up, terrace after terrace, 300 metres into the air. From where Winston stood it was just possible to read, picked out on its white face in elegant lettering, the three four slogans of the Party: 


George Orwell’s 1984 (note: I may have taken some liberties with the original passage)

Perhaps many people in the tech world will remember that “AI” was not always used the way it’s used now. 

In fact, I still remember that feeling of annoyance I had when “AI” was at its peak buzzword level, around 2016 or so. Every new startup that went beyond the most basic if-statements and for-loops was an “AI” startup. And every new breakthrough in machine learning was first and foremost a breakthrough in “AI.” Being constantly bombarded by this new lexical usage, I would always think to myself, The technology they’re using is machine learning (or just plain code). Why are we suddenly calling it AI? 

If it is 3:02 PM and someone asks me “what time is it?” I’ll probably say “3” or “3 o’clock.” On some rare occasions, I may say “3:02.” But I’ll never say “the afternoon.” In other words, there’s a reasonable, productive level of granularity to speak at, which is what we previously had for machine learning, AI, and related topics. But this was being slowly distorted.

It’s easy to chalk this distortion up to “marketing.” Which is true. And certainly much of this marketing was straightforward and not exactly “Orwellian.” For example, many startups used the banner of Artificial Intelligence as a way to attract investment and employees, and to give an increased feel of technological sexiness. (I interned at one startup, part of the TechStars Boulder incubator, that perfectly exemplified the mismatch between marketed technological sexiness and actual technological sexiness.)

However, when we look at the big players in the ML space, such as Google (including DeepMind) and Facebook (aka “Meta”), we see that their role and purpose in this “marketing” may not have been so straightforward. That there may have been something important that they were trying to hide, and which they are still trying to hide.

“AI”: Before and After

But first, let’s consider the different usages and meanings of “AI” or “Artificial Intelligence” and look at how each of these may or may not have shifted over the last decade.

The term “AI” is used in two major senses:

  1. As a field of study and technological tooling
  2. As a type of fictional character, typically in science-fiction movies, tv shows, or books

Each and every one of us ties the term “AI” to both of these concepts. This was true before and this is true now. Of course, it’s also important to note: each of us, while using the same term, has a different conceptualization of it. For example, in the latter sense (of sci-fi characters), our conceptualization will heavily depend on which movies, tv shows, and/or books we’ve personally experienced, and which left lasting impressions. And certainly, technologists may have a very different conceptualization of “AI” as a field of study and technological tooling than, say, marketers, product managers, or CEOs. 

Also, this may be going on too much of a tangent, but considering fictional AI characters, we should also consider what roles they play within those stories. One role is as a literal depiction of technological possibility, which may serve to inspire our imagination. But another, often even more important role (depending on the work of fiction) is as a narrative tool. As a narrative tool, these characters, which are typically acted or voiced by people, are really human-like in many ways but deficiently human-like in others. And this serves to illustrate what it means to be truly human, or to show which traits of people (exemplified by the real human characters in the story) are admirable or perhaps not-so-admirable.

Anyways, as far as I know, this use of “AI” to refer to fictional characters has not changed much in the last decade. But it’s important to discuss because this second sense is crucial to the “marketing” strategy of the major corporate users of machine learning.

What has changed (or where our usage has changed) is around this first sense: “AI” as a field of study and technological tooling.

Of course, throughout the last decade, we have also had great increases in the power of machine learning technology, so it’s important to distinguish how much of the shift may be due to pure technological development and how much may be due to marketing and corporate incentives. So, with this in mind, let me first briefly show that a shift has happened (which could, in principle, be due to any number of reasons), and afterwards argue why corporate incentives are likely the main culprit behind it.

To show just one example of the shift, consider the syllabus from Carnegie Mellon University’s class titled “Artificial Intelligence” from the Fall of 2014:

As we can see, there are many topics in learning, but there are many others too, in areas such as search, game theory, and even the study of human computation, which is more a topic of science than of technological tooling. At the same time, in the Fall of 2014, there was an additional, separate class titled “Machine Learning.”

Now, in 2022, there is still a class titled “Machine Learning,” but there is no class titled “Artificial Intelligence.” Not to say this in itself is any kind of problem (and CMU, luckily, remains accurate in their usage of terminology). Merely, this shows the shift to some extent. And now, we can see that when we refer to “AI” we are very rarely referring to the topics of search, game theory, or human computation. We are almost universally referring to learning.

Which raises the question: why not just use the term “ML”? Its acronym is still two letters. And in fact, its full name, “Machine Learning,” is shorter than “Artificial Intelligence” whilst also being more accurate. Is our usage of “AI,” then, just to make the tech sound sexier? Or is there some deeper reason?

Why Corporations Would Rather You Think of “AI” than “ML”

There is a clear motive for why corporations such as Google and Facebook would rather we think of “AI” than “ML.” And this is that “Machine Learning” as a term gives us a better sense of what’s actually going on.

Specifically, any ML system or application has four major components:

  1. The person or organization who is using the ML
  2. The purpose or application for which it is being used
  3. The model that is being learned (and which will be used to make predictions for the particular application)
  4. The data, most often user data (i.e. human behavioral data), that is used to learn or “train” the model
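To make the framing concrete, these four components can be written down as a simple data structure. Here’s a minimal sketch, where the field names and example values are hypothetical, not any company’s actual schema:

```python
from dataclasses import dataclass

# A toy schema for the four components listed above. The field names
# and the example values are my own, purely for illustration.
@dataclass
class MLSystem:
    operator: str       # 1. the person or organization using the ML
    purpose: str        # 2. the application it is being used for
    model: str          # 3. the model being learned
    training_data: str  # 4. the (usually human behavioral) data it learns from

# A hypothetical instance, in the spirit of an ad-targeting system:
ad_targeting = MLSystem(
    operator="a social media company",
    purpose="targeted advertising",
    model="click-through-rate prediction model",
    training_data="users' clicks, likes, and browsing behavior",
)
print(ad_targeting.purpose)  # targeted advertising
```

The point of writing it this way is that none of the four fields can be left blank: every deployed ML system has an operator, a purpose, and a source of data, whether or not the marketing mentions them.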

If we think of these four components and a particular application of ML, say to power the Facebook or Instagram news feed, or to power Facebook’s advertising system, we may immediately be struck with several questions, like: is this a good use of technology? What data do they have on me and my family? Does this organization really have our best interests at heart?

If, however, we speak in terms of “AI,” a very different effect can happen. For one, “AI” does not imply that it uses user data, so we can be distracted from this fact. And secondly, “AI” in its fictional character sense refers most often to fully self-directed characters. So it distracts from the fact that, well, some organization is actually targeting this software towards a particular purpose. A purpose which is always towards the growth of the company but which is not always in the best interest of the general public. 

Furthermore, this allows (even if only in a subtle way) for a delusion to form even among researchers and engineers in the field of machine learning itself. Specifically, it can give the idea that the field of machine learning is on a quest to create some character of fiction. Which kind of character is of course never specified. So we’re left to perhaps imagine our favorite character.

Or, it may be occasionally hinted, by corporate marketers, that it’s one of the scary characters. This fear mongering is a classic and historical propaganda technique. And it’s clear that it can, in many cases, benefit corporate interests. First of all, for the reasons we’ve mentioned (it distracts from the real four components of any machine learning system). But also, if this technology is scary and powerful, well, it must work pretty damn well. And thus, this company that’s making it must be a good investment, this company must be worth a lot because it owns that powerful technology.

For a clear example of such fear mongering by a corporate marketing mastermind, consider Elon Musk’s appearance on the Joe Rogan podcast, where he spoke of his “fears” of “AI.” And consider also the insane growth of Tesla stock, far detached from Tesla’s actual sales. Marketing, the manipulation of people’s ways of thinking and even terminology, can be quite profitable. I mean, Elon Musk has more wealth than the combined wealth of tens of millions of Americans. And of course, Elon Musk is a smart guy. He knows that what’s behind “AI” is actually “ML.” And he knows those four components of ML. His company is doing all of those things, after all.

ML, in its largest, most profitable use-cases, is always a tool wielded by corporations for corporate purposes. And these purposes have so far never involved the attempt to create a character resembling those of narrative fiction. 

Now, ML may, in some cases, be used for the automation of certain human functions, and thus ML can be “human-like” in the sense that it is targeted towards mimicking human behavior on some specific tasks (given enough behavioral data, of course). But this purpose, it’s important to note, is very different from the purpose of AI as a fictional character. As we mentioned, AI as a narrative tool is used to highlight aspects of human nature. Automation, on the other hand, is used to replace humans in certain areas of the economy. Not to say that automation is about “killing jobs” or anything silly like that. Certainly not! But it does, at the very least, tend to increase wealth inequality (because we as people do not own these tools of automation, these means of production). Also, it’s worth taking a very careful look at what exactly ML companies are trying to automate. For example, is it just those tedious things we don’t want to do? Or does it include many things that we do want to do?

“AI Safety”: An Extra Layer of Distraction

Following up on our previous discussion, “AI Safety” as a field and a topic certainly sounds like fear mongering. This would make a lot of sense given the incentives involved. 

That’s of course not to say that there aren’t dangers in the use of machine learning technology. There are many very real dangers, which include, among other things, questions of fairness and bias. These are absolutely important societal problems that we all need to work together to solve, and which we should absolutely not ignore. 

But the real question is: are those dangers of ML with the model? Or are they more so with those who wield the model? Perhaps with the human data that is captured to train such models? Or with the growth-at-all-costs incentives that investor-driven companies are tied to?

I mean, every day, TikTok is trying to improve their deep learning models, their data capturing methods, and their metrics to make more young people “engaged” with their product. And they’ve been thus far quite successful, recently crossing 1 billion monthly active users, most of whom are under the age of 24.

And yet, in the meantime, I still don’t see anyone working on <INSERT FAVORITE SCI-FI AI CHARACTER>.

So we can only wonder: is the next “breakthrough” in “AI” going to lead us closer to our sci-fi dreams? (Dreams which, again, vary from person to person and have never been well-defined.) Or is it more likely to help TikTok, Google, and Facebook addict even more members of the next generation?

The Society of the Statistic: Where Does Science Fit?

One of my favorite films is Dead Poets Society. It’s a beautiful story of the fight for the human experience in a system that treats people as a means to an end. 

It’s also an honest story. Such a fight isn’t easy (the system, after all, is where much of the power rests), and it necessarily comes with real human costs. In DPS, these costs are illustrated by the suicide of Neil Perry, a boy with a talent and love for acting but whose father would accept nothing but his becoming a doctor. (As for why Neil’s father holds such a simplistic metric of success for his child, it’s presumably related to economic status or security. The Perry family is not as wealthy as the other families that send their children to Welton, the prestigious private school where DPS is set.) 

Overall, though, DPS is an optimistic story. In the end, many students have learned to think for themselves and value themselves, and the system has lost some of its sway. This is shown in the final scene, where a majority of the students stand on their desks in homage to Keating, the teacher, and in defiance of Mr. Nolan, the headmaster.

Mr. Nolan, in this story, is the embodiment of the system: serious, austere, unempathetic. And his opening speech to a crowd of students (all of whom are white males, by the way) and parents shows how such a system uses statistics and metrics as its guiding light. It’s the very opening lines of the film:

In her first year, Welton Academy graduated five students. Last year we graduated fifty-one. And more than seventy-five percent of those went on to the Ivy League!

Welton may teach some valuable skills. But ultimately, the school and Mr. Nolan are guided by that Ivy League percentage, that one number. It doesn’t matter if its practices are empowering to the students as individuals or not. This Ivy League percentage is what the parents, the customers of Welton, are looking for.

Keating (played by the late, great Robin Williams), on the other hand, is the embodiment of what it means for people to be ends in themselves, rather than just means to an end. His guiding light is a meaningful human experience for each student. Certainly, what such an experience would entail differs from student to student, and thus Keating’s education is largely an attempt to encourage the students to think for themselves. But he also opens their eyes to the joys of poetry, a human passion largely forgotten by the system (forgotten because, well, there’s not much economic value in poetry). 

One of my favorite scenes of Keating is the following. He’s speaking to his classroom full of students in his usual loud and boisterous manner:

No matter what people tell you, words and ideas can change the world. Now, I see that look in Mr. Pitts’ eye, like 19th century literature has nothing to do with going to business school or medical school. Right? Maybe. 

Mr. Hopkins, you may agree with him, thinking: Yes, we should simply study our Mr. Pritchard and learn our rhyme and metre and go quietly about the business of achieving other ambitions. 

Well, I’ve a little secret for you. Huddle up. Huddle up!

The students gather around Keating, who crouches, and continues in a softer, more intimate voice:

We don’t read and write poetry because it’s cute. We read and write poetry because we are members of the human race. And the human race is filled with passion.

Now medicine, law, business, engineering, these are noble pursuits, and necessary to sustain life. But poetry, beauty, romance, love, these are what we stay alive for.

To quote from Whitman: ‘Oh me! Oh life! of the questions of these recurring, Of the endless trains of the faithless, of cities fill’d with the foolish… What good amid these, O me, O life? Answer. That you are here—that life exists and identity, That the powerful play goes on and you may contribute a verse.’

It’s a wonderful scene and a refreshing philosophy. But I do have one thing to add to Keating’s list. I too enjoy poetry, but being the predominantly “left brained” person that I am, I love science even more. And science, at its best and truest, can also be what we stay alive for.

But is there even a need to speak of the value of science? I mean, in the way that Keating speaks of the value of poetry? Science, after all, has much more economic value than poetry, so it’s already valued by our current society, right? 

Well, it would seem so, and it’s often discussed as if it is so, but I’m not so sure. And the reason why stems from the fact that science is not one monolithic entity. There are multiple aspects to it, some of which are valued and some of which are not. For example, Richard Feynman, a true “Keating of Physics,” spoke of these multiple values of science in a speech of his:

The first way science is of value is familiar to everyone. It is that scientific knowledge enables us to do all kinds of things and to make all kinds of things…

Another value of science is the fun called intellectual enjoyment which some people get from reading and learning and thinking about it, and which others get from working in it.

He goes on to describe the intellectual enjoyment aspect further:

The same thrill, the same awe and mystery, come again and again when we look at any problem deeply enough. With more knowledge comes deeper, more wonderful mystery, luring one on to penetrate deeper still. Never concerned that the answer may prove disappointing, but with pleasure and confidence we turn over each new stone to find unimagined strangeness leading on to more wonderful questions and mysteries…

Feynman, then, spoke of two values of science in the above quotes. Building on what he said, we can consider three values:

  1. Economic usefulness: the ability to do all kinds of things and make all kinds of things that enable the economy to grow
  2. Individual usefulness: the ability for people to utilize their scientific understanding to do all kinds of things and make all kinds of things (for intrinsic purposes)
  3. Intellectual enjoyment: the experience of fun, awe, mystery we get in the pursuit, discovery, and usage of scientific insight

Looking at it in this way, we can see that what economic forces will tend to incentivize is economic usefulness. They won’t directly incentivize the other two; though, this is not necessarily a problem. If, for example, economic growth could only be obtained by encouraging a significant amount of intellectual enjoyment and the development of a significant amount of individual usefulness, then everyone, scientists and capitalists alike, would be happy. The economy could grow and people could find passion in the pursuit of understanding.

If not, though, the costs are obvious: our scientific society would be like Mr. Nolan.

The Rise of Statistics and the Fall of Science (the Joyful Part)

It’s easy to mistake statistics for science. After all, both are logical and detail-oriented and perhaps performed by the same kinds of logical, detail-oriented people. At the very least, we could imagine that they seem more related to each other than, say, to poetry or music.

But that would be an illusion. If we bring in our separate components of science (economically useful, individually useful, and intellectually enjoyable), I’d claim that the latter two are actually more related to poetry and music than they are to statistics. 

Because while poetry and music are certainly more emotionally evocative than science, they do share a most important experiential character with, for example, the intellectually enjoyable science (the science of the “idea” and of awe and mystery that Feynman spoke of), which is that they’re all holistic and conceptual.

To put it another way: science, like statistics, involves breaking things down, but intellectually enjoyable science involves also putting the things back together. Finding how the things all relate to form a cohesive, meaningful picture.

Statistics/metrics, on the other hand, are one-dimensional. They’re broken down, but they’re not meant to come back together, to form a whole. (Or to be more precise, they can be combined, but only into another single number, a scalar, or into an unstructured collection.) There are no mechanistic relationships between them because they’re just measurements. Not to disparage measurements, of course! Science absolutely depends on measurements! On experimentation and validation. But what I’m speaking of is when statistics transcend their measurement or validator status and become the end itself, the goal that we seek.

This, sadly, happens often. Especially when it comes to economic entities. For example, corporations may aim for a certain growth percentage (e.g. of customers or profit), or investors may aim for a certain ROI. Even for corporations that deal with artistic mediums, such as film studios, metrics tend to be the goal. 

In an ideal world, this could merely be a useful abstraction. The business men and women at the top of an organization could deal with numbers and logistics, while still empowering the inventors or artists below to “do their thing.” But in practice, it doesn’t work this way. The number-fixation trickles down. And the reason, I believe, is that such a system has a fundamental requirement: in order for the top level to guarantee that it can achieve its goal statistic, it needs to be able to convert each component of the lower layer into a statistic as well. Ditto for the layer below, and the one below that. So in any organizational structure, be it a single corporation, an entire industry, or an entire economy, if the top-level goal is a metric, then at some point going down the hierarchy of organization, there needs to be a translation from numbers to people.

And such a thing is impossible. You can attempt, futilely, to approximate people and their experiences/concepts with numbers, but you will always lose the holistic human element. That goal statistic at the top will always create an unbridgeable alienation. And, if we look at the history of modern statistics, we see that, actually, alienation is precisely why it was created.

Eugenics, Capitalism, and The Birth of Modern Statistics

Statistics has always had a dirty little secret (one you certainly won’t find in any high school statistics course). The secret is that the “father of modern statistics,” R.A. Fisher, and many other forefathers of statistics, such as Pearson and Galton, were eugenicists, racists. And not just of the “casual,” societal norm kind. Actively so.

Now, I admit, this fact alone doesn’t prove anything about statistics. I mean, as a pure mathematical construct, it is ethically agnostic. Also, it’s easy to imagine that, for example, eugenics and racism were more normalized at the time (late 19th century – early/mid 20th century). And it’s easy too to imagine that Fisher may have just been a kind of “twisted genius” (that the mathematical part of his brain was on full throttle, but he didn’t have a balanced connection to humanity). 

But the relationship between statistics and racism runs deeper. Even the eminent statistician Andrew Gelman has recently noted this relationship. In a blog post commenting on the article “RA Fisher and the science of hatred” by Richard J. Evans, he says this: 

Fisher’s personal racist attitudes do not make him special, but in retrospect I think it was a mistake for us in statistics for many years to not fully recognize the centrality of racism to statistics, not just a side hustle but the main gig.

It’s clear from Evans’ analysis that, at least for Fisher, eugenics was not merely a “side interest” but crucial to his whole life’s motivation. Although we can’t be sure how he came to such a worldview, it seems quite possible that his work in statistics and genetics was, to a large extent, motivated by the desire to give rational support to his racist beliefs (this assumption coming from the fact that statistics and genetics can be used quite successfully to support such beliefs; of course, in a deluded way that misses the human picture).

But speaking of statistics as a whole, its relationship with racism is a murkier question. Gelman, for example, has a sense of the depth of the relationship. We can see, for example, that statistics can be used to support racist views, to speak of averages of different populations and the like. But what exactly is the relationship?

Viewing this through an economic lens, I believe, makes the relationship more clear. 

Consider first the use-cases of statistics/metrics (as goals, rather than measurements):

  1. Processing large quantities of people or things
  2. Organizing people around a common goal (e.g. in lieu of a holistic vision, a common metric is a way to organize people—even large, scattered groups of people—around a common goal)
  3. Removing human/qualitative concerns; reducing them to numerical concerns (e.g. this can be useful for hiding human concerns, because perhaps we don’t want to see them, or numericizing human concerns for the sake of perceived simplification)

Statistics can be used for any of these use-cases, but if you have a “problem” that benefits from all three of these use-cases, then statistics is the perfect tool! 

Though, if you only care about two out of three of these use-cases, there are other, more effective tools. For example, if you are running a government, you absolutely need to handle a large population of people and organize them around some common goals. But that doesn’t imply that we need to remove human concerns or convert them to numerical concerns. Earlier we mentioned how an organizational structure based on statistics requires the conversion of each lower layer into statistics as well. But that’s not the only way to organize and to delegate. For example, a government may delegate by geographic region (in a hierarchical fashion). This is great. It does “partition” people, but partitioning doesn’t inherently remove the human element. The human element is only removed when we convert subgroups of people to a mere collection of numbers or properties.

For R.A. Fisher, though, this third use-case, removing human concerns, was crucial. Because his eugenicist goals involved the dehumanization of a specific subpopulation of people. Similarly, our economic system often has a drive for this third use-case as well, as Marx so clearly illustrated.

But, as we said, metrics are not the only way to delegate and organize. And in the case of novel production or innovation (e.g. science, technology, art), as opposed to commodity production, such a forced metric-fixation is counterproductive. Because it disincentivizes innovation in the holistic human realm. As we can see now, for example, we’re quite effective at “innovating” in the realm of financialization (developing new ways of turning big numbers into even bigger numbers), but it’s questionable how much meaningful human innovation we’re doing.

It is worth asking the question, though: is a tool bound by its original purpose? Modern statistics, which revolves around testing the “significance” of numbers, may have been designed explicitly for alienation. But can it be repurposed for nobler causes? 

Well, in some cases, yes. I mean, a chainsaw can be used for ice sculpting in addition to cutting trees. But a tool that is tailored for a specific purpose will naturally lend itself to similar purposes.

So, in the case of statistics, what are these similar purposes? And, more importantly, how did statistics grow from satisfying a relatively small niche of eugenicists to dominating nearly all realms of science and technology as it does today?

Computers and the Early Rise of Statistics

Computers as a tool were not developed for statistics. Or for alienation. As far as I can tell, there’s nothing inherently alienating about a computer. And I, for one, love programming and believe computing as a medium has unlimited potential for creativity, empowerment, and fun. (For example, the book Masters of Doom shows a brilliant example of how people can combine computational mastery, creativity/art, and courage to produce something wholly new and exciting.)

However, computers, being at their core number-crunchers, lend themselves well to the processing of statistics. And thus, economic entities can easily wield computers in order to wield statistics. And even, perhaps, convince the public that what they’re doing is “science.”

To better understand the role of computers in the rise of statistics (and to understand exactly what the scare quotes I’m putting around “science” mean), let’s consider an example. Specifically, a statistical model that is well-cited within academia: the Five Factor Model of personality (also known as the Big Five or OCEAN). 

Now, the FFM is interesting, so I don’t want to diss it too hard. But it’s important to note the properties of a statistical model like this, contrasted to a more conceptual or mechanistic theory of personality.

First, let’s discuss the history of the FFM. Its development began in 1884 with the “Lexical Hypothesis” of Francis Galton (one of the forefathers of statistics we mentioned earlier). This hypothesis states that any important traits of personality should be encoded in words (e.g. adjectives) of any language. For example, quiet, friendly, gregarious, visionary, creative, resourceful, etc. This hypothesis was a crucial breakthrough for what would become the FFM because it allows us to perform an interesting kind of alchemy: converting people to a set of numbers (in this case, because converting to a fixed set of words is equivalent to converting to a set of numbers).

People began using the lexical hypothesis to develop personality questionnaires throughout the early 1900s. But a big breakthrough came in the 1950s and 1960s with the convergence of computer technology and statistical methods. Specifically, Raymond Cattell, who had trained under Charles Spearman, developed new methods of factor analysis (a statistical method) and applied them to personality questionnaire data. Cattell originally arrived at 16 factors of personality. But later, with the work of Costa, McCrae, and others throughout the 70s, 80s, and 90s, the methodology was refined, and 5 factors were found to be more stable and comprehensible.
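To make this pipeline concrete, here is a minimal numpy sketch of the kind of reduction involved, using principal components (via SVD) as a crude stand-in for proper factor analysis, and simulated rather than real questionnaire data; every size and value here is an illustrative assumption:

```python
import numpy as np

# A toy illustration of the two "breakthroughs": (1) the alchemy of
# converting people to numbers, and (2) compressing those numbers
# into a few factors.
rng = np.random.default_rng(0)
n_people, n_adjectives, n_factors = 500, 20, 5

# Step 1: each person becomes a vector of adjective ratings
# ("quiet", "friendly", ...). We simulate ratings driven by a few
# hidden traits plus noise.
traits = rng.normal(size=(n_people, n_factors))
loadings = rng.normal(size=(n_factors, n_adjectives))
ratings = traits @ loadings + rng.normal(scale=0.5, size=(n_people, n_adjectives))

# Step 2: reduce the rating matrix to its top components -- the
# population-level summary that a model like the FFM offers.
centered = ratings - ratings.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt[:n_factors].T

print(scores.shape)  # each person is now just 5 numbers: (500, 5)
```

The point of the sketch is what it doesn’t contain: no probing of any individual person, just a matrix of numbers and its compression.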

So what does this model give us as people and as a society? Well, it gives us a way to convert people to a descriptive set of five numbers and traits, with these five traits being those that most efficiently summarize the full population of people (or people-numbers). Knowing what the five traits are is interesting. But the assumption is that such traits are static, unchangeable, so it’s not very actionable from an individual perspective (except perhaps to “know your place” relative to others). From an economic standpoint, though, this population summarization could still be useful, for example, in better allocating labor. 

Although the theory pertains to personality, its key breakthroughs were in (1) data generation (the alchemy of converting people to numbers) and (2) data analysis. In other words, the breakthroughs were not in the study of people (psychology), as neither step involved actually probing into or investigating people in any deep way.

Now, again, don’t get me wrong. This doesn’t mean that the breakthroughs are not interesting, or that they’re not actual breakthroughs. The study of data and statistics is in itself interesting and constitutes its own field of inquiry. But importantly, these are breakthroughs in applied statistics, not psychology. And applied statistics is more a discipline of engineering than of science (as Chomsky has stated many times on this same topic).

Let’s contrast the FFM with a more psychological theory of personality, Carl Jung’s theory of “cognitive functions.” The watered-down and bastardized version of this theory, the MBTI, gets a lot of flak (for good reason, since it strips the mechanics from a mechanistic model to make it more marketable; though, statistically speaking, the MBTI correlates closely with the FFM). But the original theory, from Jung’s massive tome, Psychological Types, is much more interesting and conceptual. Jung’s theory came from the deep study of people in his personal practice of psychoanalysis, as well as from his attempts to explain the differences of mentality among the leaders of the new psychoanalytic movement (e.g. Adler and Freud). For Jung, it started simple, with the concepts of introversion and extraversion (now ubiquitous). But he slowly built up his concepts over time to explain more aspects of how people think (not just categorizing people but explaining their mechanisms of thinking, and observing that the same mechanisms are used interchangeably by different types of people).

Is Jung’s theory “true”? Well, can we even say that Newton’s theory of gravitation is “true”? I mean, given that the “accuracy” of his model was displaced by Einstein’s theory of general relativity? Are the particles of quantum mechanics “real”? What are they exactly?

In other words, it’s difficult to comment on the “truth” of Jung’s theory, but what I can say is that his concepts are, to a good extent, empowering and representative of meaningful aspects of the way people think. Also, there is a beautiful logic to the whole system. It’s largely analogous, in my opinion, to Newton’s theory of gravitation. Newton’s theory is not a statistical theory. He didn’t design it to optimize for predictive accuracy. But his holistic concept of gravity remains as representative and explanatory as ever. And, for example, what Einstein’s theory added to Newton’s picture was not so much better predictive accuracy as it was the wholly new concept of spacetime.

Some people claim that non-statistical theories like the ones we discussed are like astrology. But this is a funny analogy considering that astrology is absolutely not a conceptual or mechanistic theory. Astrology claims that things just are a certain way. Why are they that way? I don’t know. They just are. And in this sense of being non-mechanistic, astrology is actually much more similar to statistical models.

I’ve digressed, but coming back to our main points: a good theory should come with predictive accuracy, but predictive accuracy should not be our goal (and predictive accuracy in lieu of a holistic concept is no theory at all, even though it may still have economic usefulness). Also, suffice it to say that even in the early days of computers, computers were already lending themselves to the spread of statistical modeling.

Big Data and Meta-Statistics

Today, we are well beyond the early days of computers. It hasn’t been so many years in the grand scheme of history, but already computers are ubiquitous and have drastically altered the way we live. Among the many effects of this societal shift, one important effect is—you guessed it—more widespread statistical modeling, and specifically, more usage of statistics/metrics as goals: the primary things we seek.

But why exactly has the ubiquity of computers and smartphones led to greater usage of statistics? I can’t speak to the full picture of what has happened, but one crucially important factor is the rise of Big Data, or more precisely, Big User Data.

Over time, we’ve made numerous breakthroughs in the speed and scale of computation, as well as in data storage. And now we have the ability to save and process vast swaths of information on how people behave. Furthermore, with the invention of Deep Learning, we have developed powerful methods of what we could call meta-statistics: the ability to take any dataset and create a generic pattern matcher or pattern generator for that data.
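The “generic” part is what makes this meta-statistics: the same fitting procedure can be pointed at any dataset. A toy illustration, using nothing but NumPy: a tiny neural network trained by gradient descent learns XOR, a pattern no linear statistical model can capture, without anyone encoding knowledge of XOR into it.

```python
import numpy as np

# A generic pattern matcher: one hidden layer + gradient descent.
# Nothing here is specific to XOR; swap in other (X, y) and the same
# procedure fits whatever pattern is present.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):              # plain full-batch gradient descent
    h = np.tanh(X @ W1 + b1)        # hidden layer
    p = sigmoid(h @ W2 + b2)        # predicted probability
    dp = p - y                      # cross-entropy gradient at the logit
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h**2)   # backprop through tanh
    dW1, db1 = X.T @ dh, dh.sum(0)
    for P, G in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 0.1 * G

print(np.round(p.ravel()))  # approaches [0., 1., 1., 0.] as training converges
```

Scale this basic recipe up by many orders of magnitude in data and parameters, and you have the core of the deep learning systems discussed below.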

So what do these developments mean? Well, there are many ways that these technologies can be combined for profitable economic purposes. For example, being able to statistically mimic human behavior means that we can automate certain human tasks or functions. (Not to say that the model does them as well as people. Likely not, since it’s mimicry. But if a statistical model can approximate a task well enough, cheaply enough, then it can be used for automation in many cases.) Also, being able to model people’s behavior means being able, to some extent, to direct consumer behavior. A very lucrative skill.

Not to say that all of our new applications of statistical modeling are manipulative. Certainly not! For example, I love our new voice assistants, which, as far as I can tell, were designed more to assist than to automate or control (also, voice assistants use statistics not at the top level but merely to perform well-defined subtasks, such as converting speech to text and text to speech). Sadly, though, these truly assistive use-cases appear to be less profitable overall in our present economy.

Coming back to the rise of statistics, these profitable use-cases have encouraged corporations to invest in new and better ways of statistical modeling (especially deep learning). And to invest in their development and application across all fields of science (even in many places where they perhaps don’t really belong).


I hope it’s clear by now: there is an inherent alienating effect to statistics/metrics. This effect was part of their design, and it continues to lend itself to similar use-cases (many of which could be called “imperialistic”). And this is bad for most everyone.

For scientists and technologists, though, there is an additional danger. A more personal danger. When our Keating spirit is destroyed and only Mr. Nolan remains, our very sense of mystery can disappear. I myself have felt this at times, wondering things like: have we already discovered most of what there is to discover? Will future generations have anything important left to discover? And I’ve seen others express similar thoughts.

For example, we may speak of science as a fractal. Or we may speak of low-hanging fruit and high-hanging fruit, the metaphor of fruit on a tree. These metaphors, however, which are rooted in finiteness and territory, come from our identification as economic agents, from thinking of our own value or success as deriving from our contribution to the wider economic system. Capital is finite and territorial, so capitalist science is finite and territorial too.

But speaking of humanist science, a much more natural metaphor is the one Newton gave centuries ago, towards the end of his life:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.

We’ve found many more smooth pebbles and pretty shells since Newton’s time, but the ocean remains. And that’s because unlike capitalist science, real human science, the science that we stay alive for, is unbounded.

AI Research: the Corporate Narrative and the Economic Reality

In his widely-circulated essay, The Bitter Lesson, the computer scientist Rich Sutton argued in favor of AI methods that leverage massive amounts of data and compute power, as opposed to human understanding. If you haven’t read it before, I encourage you to do so (it’s short, well-written, and important, representing a linchpin in the corporate narrative around machine learning). But I’ll go ahead and give some excerpts which form the gist of his argument:

The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.

In computer chess, the methods that defeated the world champion, Kasparov, in 1997, were based on massive, deep search. At the time, this was looked upon with dismay by the majority of computer-chess researchers who had pursued methods that leveraged human understanding of the special structure of chess. When a simpler, search-based approach with special hardware and software proved vastly more effective, these human-knowledge-based chess researchers were not good losers…

A similar pattern of research progress was seen in computer Go… Enormous initial efforts went into avoiding search by taking advantage of human knowledge, or of the special features of the game, but all those efforts proved irrelevant, or worse, once search was applied effectively at scale.

In speech recognition… again, the statistical methods won out over the human-knowledge-based methods… as in the games, researchers always tried to make systems that worked the way the researchers thought their own minds worked—they tried to put that knowledge in their systems—but it proved ultimately counterproductive, and a colossal waste of researcher’s time, when, through Moore’s law, massive computation became available and a means was found to put it to good use.

This is a big lesson. As a field, we still have not thoroughly learned it, as we are continuing to make the same kind of mistakes.

We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.

Now before getting to the many problems with Sutton’s argument, I have a caveat to add, and I think it’s best to add it up front: I am one of the “not good losers” that he mentions. I, like many others, colossally wasted my time making a Go AI that utilized understanding. It felt like fun, like a learning experience, simultaneously exploring the mysteries of Go and programming. But now, I realize it was just a sad squandering of calories and minutes. If only I had devoted myself to amassing a farm of 3,000 TPUs and making a few friends at the local power grid station. If only I had followed the money and cast my pesky curiosity aside, I could’ve—just maybe—gotten to AlphaGo first! Then I wouldn’t be sitting here, in this puddle of envy, writing today. But alas, here I am.

In all seriousness, though, my first criticism of Sutton’s piece is this: it’s too “winnerist.” Everything is framed in terms of winners and losers. A zero-sum game. But I question whether that is an accurate and productive view of science, technology, and even of games like Chess and Go.

Let’s start with the games. I’m more familiar with Go, so I’ll speak on that. Although Sutton cites the 70-year history of AI research, Go has existed for thousands of years. And it’s always been valued as an artistic and even spiritual pursuit in addition to its competitive aspect. For example, in ancient China, it was considered one of the four arts along with calligraphy, painting, and instrument-playing. And many Go professionals today feel the same way. The display of AlphaGo marketed Go as an AI challenge, a thing to be won—nay, conquered—rather than a thing to be explored and relished (and the benefits to DeepMind and Google of marketing it in such a way are clear). But historically, that has not been the game’s brand within the Go world. Nor was that even its brand in the Go AI world (which consisted largely of Go nerds who happened to also be programming nerds).

For players, these games are about competition, yes, but also about learning, beauty, and mystery. For corporations, they’re just a means to an end of making a public display of their technology (one completely disconnected, by the way, from the actual profit-making uses of said technology). And for AI researchers, well, that’s up to them to decide. If they’re also players, they will likely value Go or Chess for Go or Chess’s sake. But if not, it makes sense to pursue what is most rewarded in the research community. 

But we don’t have to look far to see that what is rewarded in the ML research community is largely tied to corporate interests. After all, many eminent researchers (such as Sutton himself) are employed by the likes of Google (including DeepMind), Facebook (a.k.a. Meta), Microsoft, OpenAI, etc. Of the three giants of deep learning—Hinton, Bengio, and LeCun—only Bengio is “purely academic.” And even in the academic world, funding/grants largely come from corporations, or are indirectly tied to corporate interests.

This brings me to a conjecture. It’s related both to winnerism and to what has been “most effective” in being rewarded by the ML/AI research community. The conjecture is that both are reflections not of an ideal scientific attitude, but merely of the unfettered capitalist society that we find ourselves in.

Are we, as Sutton says, attempting to discover our own “discovery process”? In the hopes of, I don’t know, creating <INSERT FAVORITE SCI-FI AI CHARACTER>? Or are we perhaps discovering new profit processes for the corporations that have the most data and compute power? 

The Economic Reality

Either way, it stands to reason that if we want to judge what is “most effective” as a technology, we should look beyond a small set of famed AI challenges and to its predominant economic use cases. What are these funders actually using machine learning for? Forget the marketing, the CEO-on-a-stage stuff. Where are the dollars and cents actually coming from? 

Well, from what I’ve seen in the industry, the most profitable use-cases appear to fall into two main categories. Unfortunately, neither involves creating my favorite sci-fi AI character. Instead, they involve (1) automation and (2) directing consumer behavior.

Let’s start with the first: automation. This one should not surprise any historian of business, as the automation of labor is as old as capitalism itself. But what is perhaps unique about software, and especially machine learning, is that we’re no longer limited to the automation of physical labor. We can automate mental labor as well. (Hence the assault on human understanding, which has now become a competitor.)

One example is the automation of doctors, or of certain functions of doctors. I saw an instance of this at PathAI, a Boston-based healthcare company. But examples are now widespread in the healthcare industry (and even Google and Amazon have moved into healthcare). PathAI’s use of deep learning was actually principled and humanistic. Specifically, deep learning was used to segment pathology images, which are huge gigapixel images that no human can fully segment (though doctors do have effective sampling procedures). Once segmented, the images were converted to “human-interpretable features” which doctors and scientists could use to aid their search for new treatments. It truly empowered doctors, scientists, and programmers.
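To give a flavor of what “human-interpretable features” means here, consider this hedged sketch (the class names and feature definitions are invented for illustration, not PathAI’s actual pipeline): once a model has produced a per-pixel class mask, simple summary quantities that a pathologist can reason about fall out directly.

```python
import numpy as np

# Hypothetical tissue classes a segmentation model might output.
CLASSES = {0: "background", 1: "stroma", 2: "tumor", 3: "lymphocyte"}

def interpretable_features(mask):
    """mask: 2D integer array of per-pixel class labels.

    Returns human-readable tissue-composition fractions, the kind of
    quantity a doctor can sanity-check and reason about directly.
    """
    tissue = mask > 0                  # everything that isn't background
    tissue_area = tissue.sum()
    feats = {}
    for label, name in CLASSES.items():
        if label == 0:
            continue
        feats[f"{name}_fraction"] = (mask == label).sum() / max(tissue_area, 1)
    return feats

# A tiny 3x3 stand-in for a gigapixel slide.
mask = np.array([[0, 2, 2],
                 [1, 2, 3],
                 [1, 1, 0]])
print(interpretable_features(mask))
```

The point of this design is that the black box is confined to a well-defined subtask (segmentation), while the outputs handed to humans remain quantities they can verify and interpret.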

But it’s not all peaches and cream. Many applications are not empowering, and there is a lot of manipulative media bias. Like with Go, winnerist marketing has overtaken healthcare. For example, there are many widely publicized studies that compare (in a highly controlled environment) the prediction accuracy of doctors with that of an ML model in classifying cell types from pathology images. The implication is that if the model performs better, we should prefer having the model look at our pathology images. Such a limited metric in such a highly controlled environment, though, does not prove much. And more importantly, you will not see corporations trying to quantify and publicize the many things that doctors can do that the machines can’t do. Nor trying to improve the skills of doctors. No. If anything, they want humans to look like bad machines so their actual machines will shine in comparison.

Now, I hope it’s clear from this example that technology is not the problem. I’m not arguing to go back to the golden ages of bloodletting and tobacco smoke enemas. What I am arguing, or simply highlighting, is the fact that corporations do not optimize for empowerment. They optimize for growth, and they market in any and every way that aids that growth.

Which brings me to our second category: directing consumer behavior. Two clear examples of this are recommender systems (for media products) and advertising. Both are the lifeblood of ML juggernauts like Google, Facebook, and TikTok.

The idea is that with enough consumer data and smart algorithms, we can build accurate models of real consumers. Then, using those models, we can to some extent direct consumer behavior. To sell more shit. (Sorry, I’m still annoyed by my purchase, through Facebook, of a Casper pillow many years ago.) 

I mean, it’s nothing new to say: these social media products are like cigarettes. Yada yada. How can we make tech that’s not cigarettes? Somehow, though, we brush the issue under the rug most of the time. We convince ourselves that ML advancements are going towards machine translation to communicate with displaced refugees. But honestly, we should be more bothered, especially about products targeted towards young people.

My last stint at Google (working in recommender systems) was a bit disturbing, actually. I guess I was just sheltered in the past. But this experience was striking because I worked there, left for about three years, then came back to the same team. And I could see just how much had changed—for the worse—in Google’s frenzied attempt at neverending growth.

For one, my team had shifted far away from its old, understanding-centric approaches and towards the black-box optimization of primarily engagement- and growth-related metrics. This helped with scale, but it alienated us programmers from the product and users, which makes it much harder to develop a product that’s actually useful.

Even worse, Google is very worried now about the fact that young people are not using Search as much. And they are trying to “solve” this problem through some very stupid and unethical means. (I mean, if being unethical is your only option and you’re honest about it, then, you know, it is what it is; but in this case the organization was being stupidly and dishonestly unethical, a truly remarkable set of traits.)

The Downsides of Solutions Without Understanding

Anyways, coming back to Sutton’s examples—Chess, Go, and speech recognition—we can see that they’re all constrained domains with well-defined measures of “success.” Winning in the case of Chess and Go, and word prediction accuracy in the case of speech recognition. I did argue that these are not true measures of success from a human perspective. But, to some extent, it works for these examples, the domains that make for good AI challenges.

Most products that are meaningful to people, however, are not as constrained as Chess, Go, and speech recognition. They can’t be boiled down to a single metric or set of metrics that we can optimize towards (even if we were to use solely user-centric and not company-centric metrics). And the attempt to do so as a primary strategy leads to products with less craftsmanship and less humanism. Or worse. 

For one, relying on the model to do the “understanding,” rather than ourselves, disincentivizes us from developing new understanding. Of course, we’re free to pursue understanding in our own time, but it’s less funded and therefore economically disincentivized. We put all our eggs into the model basket and the only path forward is more data, more compute power, and incremental improvements to model capacity. This hinders innovation in any domain that’s not machine learning itself. And, in fact, the corporate media that bolsters such approaches hinders innovation even more because would-be innovators get the impression that this is just how things are done, when in fact, it may merely be in the best interests of those with the most data and compute power. 

This fact is easy for ML/AI researchers to miss because they are pursuing understanding (of how to make the best models). But it can do a grave disservice to people researching other domains if (1) ML is applied in an end-to-end fashion that doesn’t rely on understanding and (2) it’s heavily marketed and publicized in a winnerist way. This flush of marketing and capital shifts the culture of those domains to more of a sport (one that consists of making stronger and stronger programs, like BattleBots for billionaires), rather than a scientific pursuit.

Secondly, as with the quote “one death is a tragedy, a million deaths is a statistic,” the overuse of metrics can actually be a way for corporations to hide the harm they may be doing to users (from their own employees). This is clear with recommender systems and media products for example. It’s much easier to look at a metric that measures youth engagement than to look at some possibly unsavory content that real young people are consuming. If we attempt to solve a user problem, then validate it with metrics, that’s just good science. But when metrics become our “northstar,” then the metrics are necessarily separating us from users.

It’s important to note, by the way, that none of this relies on the people running our corporations being “bad people” (though they may be). It’s just the nature of investor-controlled companies where the investors are focused on ROI. (I mean, speaking of how metrics can separate us from users, consider the effect of the ROI metric, especially when investments are spread across many companies.) When push comes to shove and growth expectations are not met, whatever standards you have are subject to removal. I certainly saw this at Google, which, at least from my limited perspective, was more focused on utility in the past. The pressure trickles down the chain of command. Misleading narratives are formed. Information is withheld. The people who don’t like what they see leave. And what you’re left with is an unfettered economic growth machine. The only thing preventing this development is resistance/activism from the inside (employees) and resistance/activism from the outside (which can lead to new regulations).

This is the real “bitter lesson.” As a field, we have most certainly not learned it yet, simply because the corporations that benefit the most from ML control the narrative. But there is, I believe, a “better lesson” too. 

The Better Lesson

Luckily, none of these downsides I’ve mentioned have to do with the technology itself. Machine learning and deep learning are incredibly useful and have beautiful theories behind them. And programming in general is an elegant and creative craft, clearly with great transformative potential.

Technology, though, ML included, always exists for a purpose. It always has to be targeted towards a goal. As Richard Feynman put it in his speech “The Value of Science,” science and technology provide the keys to both the gates of heaven and the gates of hell, but no clear instructions on which gate is which. So it’s up to us, not as scientists and technologists but as human beings, to figure that out. And to spread the word when we do.