The Commoditization of Social Interaction and Other Progress

The rise of social media has brought many positive aspects and many negative ones, societally speaking.

To name just a few on the positive side: being able to conveniently keep in touch with distant friends and family, having access to new, often valuable sources of information and entertainment, and having access to new places to speak up about important social or political issues.

Many legitimately progressive movements were made possible that might not have been otherwise. As were many wonderful “social experiments” like r/place.

To name just a few on the negative side (which are at least partly due to the popular platforms): doomscrolling, increased political divisiveness, decreased face time, and increased mental health troubles especially amongst teenage girls. 

Given this mixed bag of striking positives and striking negatives, it’s easy to either defend or attack these platforms. For example, the corporate owners of these platforms can easily point to the value they’re creating, whilst either ignoring the downsides or sweeping them under the rug. Naysayers on the other side can point to any one of a number of pressing concerns and shout, arguably quite rationally, “Down with the whole system!”

But rather than entering into this endless back-and-forth, more interesting questions would be: Just how much negative do we need to keep the positive? Can we imagine a world that maintains these positive benefits while removing much of the negative? Or, if we trace the historical development of social media, can we identify any root causes that are more tied to the negative aspects, so we can perhaps stamp these out?

Well, this essay is an attempt to, at least in part, answer these questions. And concretely, it’s my view that there is a common thread to the specifically detrimental aspects, mirror images of which can be seen all throughout society. It’s that, over time and through a characteristic process, social media has led to the large-scale commoditization of social interaction.

What do I mean by this? Well, let’s start from the beginning, with the Internet.

1. The Internet

The Internet itself is really just a communication protocol. It’s a way for Alice to send messages to Bob, and for Bob to send messages to Alice. Like a telephone, only more generalized. 

The power of the Internet—an unprecedented ability to communicate over distances—is precisely its limitation if we’re speaking of how the Internet affects social interaction: distance. We all know what “long distance” means and the limitations it entails. But at the same time, distance still allows for much legitimate communication because, if nothing else, we can communicate with language, a human medium that is unbounded in its possibilities for expression.

So if we consider being “social” as a human process of experiencing and expressing our experience to another person who is also experiencing and expressing their experience, we could say the Internet itself, barring the limitations of physical separation, allows for legitimate social interaction.

In turn, early online media platforms too enabled legitimate social interaction. These early platforms mostly aimed to centralize the Internet’s communication capabilities, to allow groups of people to communicate. One example, predating even the World Wide Web, was Usenet bulletin boards. But even Facebook at its onset was quite social. Superficial, yes, but social in the sense that it enabled and incentivized expression via the holistic human mediums of language and photography, and didn’t incentivize much else.

But alas, such natural communication could not dominate forever. With the invention of the Like Button came what would become the first massively popular digital social commodity.

2. Social Commodities

A Like, like all commodities, requires some labor to produce. In this case, it’s the very simple labor of pressing a button and possibly reading a message or consuming a piece of content. A Like is also useful to its recipient, since social acknowledgement (even such simple social acknowledgement) is always positively received. Furthermore, Likes are uniform: they always express a fixed quantity of acknowledgement. We can certainly try to “read into” a particular Like knowing who it came from, but the Like itself is a uniform item of exchange. 

All this to say that a Like carries all the characteristics of a commodity in its usual economic sense.

The Like, though, was just the first; we went on to create many more: the Love, the Wow, the Share, etc. For example, Slack, the workplace communication tool, created a whole smorgasbord of social commodities in its Emoji feature. We can send and receive a Rocket Ship, a Banana, a Weird Meme Face. We can even create our own.

Importantly, though, giving and receiving social commodities is quite different from normal language communication. To name just a few differences, it’s always quantified, always uniform, and always unstructured. A collection of social commodities is just that: a collection. There is, for example, no composable grammar, and this is true no matter how many varieties we have to trade. 

Language, in its textual form, may appear similar, since it too is transmitting discrete symbols (and certainly language also has its limitations). However, language, by its structure, flexibility, and shared understanding, is able to communicate magnitudes more by what is not said than what is said. By the space between the lines. 

Being able to respond with a single Like or Emoji is efficient and adds color, but it gives us much less general capability for expressing our thoughts or experience. And for this reason, we could say that the transacting of social commodities is inherently less social than communication via language (or other structured, unbounded mediums).

But let’s not get ahead of ourselves. These commodities themselves are just extra ways to communicate. (Well, mostly, since screen real estate and convenience do play a role, even at the onset.) We can still write messages or send photos, but we can also give these simple digital goods. And it remains our individual freedom to choose how to communicate. Furthermore, it’s worth noting that the initial decision to create social commodities—for example, with the Like Button—was almost certainly a user-centric decision. Users, overwhelmed with all their online acquaintances and unable to keep up, needed a streamlined method. An ability to send a quick gift or token of appreciation.

Nevertheless, although social commodities were introduced as mere extras, they did not stay that way.

3. Social Economies

Usenet bulletin boards are called “bulletin boards” for a reason. They’re like a digital equivalent to the town square bulletin board. A place to ephemerally gather and chat which also has a permanent wall for holding messages and advertisements.

When a town square introduces the trading of goods, though, it becomes more than just a gathering place. It becomes a market, an economy. And the evolution of the social economies which resulted from the introduction of social commodities can be seen as analogous to the evolution of normal economies. For example, one could read Marx’s Capital and replace every instance of “commodity” with “social commodity” (e.g. Likes and Followers) and “money” with “exposure” or “attention” and have a pretty good idea of how social media has developed and many of the pathologies it has produced. But let’s dive deeper into what I mean by this.

In these social economies, money can be thought of as exposure, otherwise known as “eyeballs” or “attention”. By exposure, I mean other people seeing content or messages that one posts. And the reason this is money is that it can be used to buy social commodities. The analogy to money isn’t perfect, though, because it differs from platform to platform. For example, on some platforms, such as Facebook and Twitter, social commodities can also be sold to gain exposure (because the poster will see that you reacted to their content); on platforms such as YouTube or Reddit, however, the giving of social commodities is an anonymous act of patronage. Even in the cases where the analogy is looser, though, it is still useful.

First, we should note a few important traits of this social money that is exposure:

  1. Every commodity transaction corresponds to one instance of exposure, so all else being equal, more transactions implies more total exposure.

  2. If the corporate owner of the social media platform is monetized via advertisements, then all else being equal, more exposure implies more money (real money) for the corporation.

  3. If the owner of a social media account monetizes their account via advertisements (i.e. by making some of their posts or some parts of their posts advertisements), then all else being equal, more exposure implies more real money for the account owner. 

  4. Ditto for an account owner if, rather than seeking money, they are seeking influence.

There is one other important trait of exposure, which is the fact that, again all else being equal, exposure lends itself to gaining more exposure. This happens both through people (followers of an account) who may share the content outside of the scope of its original exposure, and also through algorithms, which on the popular platforms do a similar kind of sharing, most commonly promoting already popular content. 

Thus, there is a rich-get-richer aspect to exposure, and especially to Follower counts, which guarantee exposure over time. In other words, so long as we have a pool of users who are still following new accounts—that is, a pool of people who are willing to give more of their attention—guaranteed exposure has the ability to expand itself. And thus we have not only social money but social capital: social money used not merely to gain social commodities but to gain more social money.

Of course, when I say “all else being equal” I am hiding the fact that the accounts that become popular absolutely need to produce some kind of interesting content, content that users will want to pay for or be a patron of with their digital goods and their attention. But independent of, or conditioned on, content quality, there is a value to exposure itself.

4. Incentives

As individuals we have the freedom to choose what we focus on, what our goals are. But at the same time, on a population scale, there are dangers to the above incentive structure. Actually, “dangers” is putting it too lightly: it’s an incentive structure where the financial success of both the corporate owner and the account owners is directly tied to the amount of human attention they can hoard. And where the quantity of attention can be increased by increasing the transaction volume of social commodities, whether or not meaningful communication comes along for the ride (and certainly whether or not face-to-face interaction comes along for the ride).

To describe the problem of flawed incentives, let me quote the great computer scientist Alan Kay:

Computing is terrible. People think — falsely — that there’s been something like Darwinian processes generating the present. So therefore what we have now must be better than anything that had been done before. And they don’t realize that Darwinian processes, as any biologist will tell you, have nothing to do with optimization. They have to do with fitness. If you have a stupid environment you are going to get a stupid fit.

5. What Can Go Wrong

Before getting into mechanisms, let’s discuss some of the dangers of participating in a social economy with the above incentives—that is, a social economy that rewards exposure (i.e. attracting attention) and even rewards it such that exposure naturally gains more exposure.

The first danger is simply in how we use our time and what information we consume. Every act of consumption is an act of self-education. It changes us, ever so slightly, and this influences our future consumption choices and therefore our future self-education, and so on. So there is a concrete concern on both an individual level and a society level: the more we relinquish control over what we consume, or don’t question the systematic forces fueling what we consume, the more we are giving ourselves to a system of massive, unknown reeducation. And given the aforementioned incentive structure, should we really trust that this system is acting in our best interest?

Secondly, there is a danger of manipulative actors in such a system. One of these potential actors is the product owner itself, but account owners can be manipulative too. Again, we all have freedom, but none of us are perfect. And clever actors—marketers, propagandists, etc.—can occasionally trick us into listening to them against our best interests. And they can especially prey on specific, vulnerable subgroups of people.

Importantly, though, a digital platform may or may not enable people to combat bad actors within the digital world itself (especially not if the bad actor is the owner). In the physical world, we can, if needed, gather together to rein in wrongdoers. But in the digital world, we are fully limited by the design of the product.

Thirdly, and perhaps most importantly, is the danger of seeking social commodities as ends in themselves. Let’s discuss this in more detail.

6. Living in One Dimension

With the incentive structure discussed in section 3, there is an incentive to pursue Follower counts and View counts independent of, or in addition to, factors such as communication and content quality. This incentive isn’t in terms of what’s necessarily best for us as human beings—in terms of the intrinsic joys of creation and connection—but in terms of social influence, or money (since, as we mentioned, exposure can be converted to real dollars and cents via advertisements).

But because of this incentive, and because anything quantified can be compared (which can play into our natural and/or cultural notions of status, performance, and self-worth), many people can and do begin to pursue social commodities as ends in themselves.

The problem with such a pursuit, though, is that we ourselves become like a commodity. We become numericized, quantified. Our identity, at least in part, disconnects from our intrinsic nature and assumes the role of an economic agent in an economy of superficial pointlessness.

As sad as it is, it’s no wonder that this could lead to the increased rates of depression and suicide amongst teenage girls who use social media heavily. If our self-worth becomes perfectly quantified, then there is no textured meaning to life, and if our quantity is low and destined to stay low, then it becomes hopeless too. And the more time we spend online, the less time we have to connect offline in a more natural, less quantified environment, so the less likely we are to learn to value our own experience and expression.

And it’s worth mentioning: even if you and I can avoid falling into this trap of quantitative comparison, it can happen to vulnerable people who may seek a quick fix, a hack, to improve their self-confidence. And this can especially happen when we’ve been raised to value, hack, and tie our self-worth to metrics, such as exam scores and admissions to the same highly ranked universities (universities that themselves seek to hack their rankings).

7. Feedback Loops

While it would be convenient to put all the blame on the creator for any toxic effects a platform produces, it’s important to note that online platforms can evolve even without constant intervention from the creator. To a large extent, the culture and dynamics are controlled by the users, the digital civilians.

For example, assuming that some social commodities have already been introduced in a product, users may create feedback loops which make them more and more emphasized. One loop comes through a culture of reciprocity, or repayment. If you Like my post, I may feel like I should Like your post. Or if you Follow me, I may feel socially obligated to Follow you. 

Of course, there is no law of nature that says such a culture is “fair” or “right”, or that I’m a “bad friend” for not following you back. Arguably, a more honest culture would have people simply doing what they want to do and not voluntarily indebting themselves.

Another loop is related: it comes through what we could call proportional giving. If I’m more likely to give a Like or a Follow based on my perception of how much others value a Like or a Follow, and if the amount others value Likes and Follows increases the more Likes and Follows they receive (or other people on the platform are receiving), then this creates an acceleration in the amount of commodity transaction. It creates a cycle of giving and receiving more and more. And again, this can happen whether or not we continue to actually communicate (e.g. with language and where both parties are actively engaged).

Actually, given the rich-get-richer aspect of Followers and exposure, it’s natural (as with an economy) that social capital will accumulate into a relatively small number of hands. This creates what’s known mathematically as a power law distribution. Or what’s known politically as oligarchy. But in terms of healthy social interaction, this almost certainly decreases the amount of total communication because it creates a huge asymmetry.
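This rich-get-richer dynamic is easy to see in a toy simulation. The sketch below (all numbers are made up for illustration) implements preferential attachment: each new follow goes to an account with probability proportional to its current follower count.

```python
import random

def simulate_follows(n_accounts=1000, n_follows=20000, seed=0):
    """Preferential attachment: each new follow goes to an account with
    probability proportional to its current follower count (everyone
    starts with one follower, so everyone has a chance). In other
    words, exposure begets exposure."""
    rng = random.Random(seed)
    followers = [1] * n_accounts
    for _ in range(n_follows):
        target = rng.choices(range(n_accounts), weights=followers, k=1)[0]
        followers[target] += 1
    return sorted(followers, reverse=True)

counts = simulate_follows()
top_share = sum(counts[:10]) / sum(counts)  # share held by the top 1%
```

Under a uniform allocation the top 1% of accounts would hold exactly 1% of all followers; in typical runs of this simulation they hold several times that, and the skew only grows with more rounds.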

Most pairs of (follower, creator) or (friend, friend) have one party with many more followers than the other. The party with more followers likely has too much demand and too little supply to communicate with everyone. Yet the other party is devoting their attention to them, possibly with an unrequited love or envy (well, unrequited save for commodity repayment).

There are no longer ‘dancers’, the possessed. The cleavage of men into actor and spectators is the central fact of our time.

Jim Morrison, The Lords

The final feedback loop to discuss is fairly obvious but crucial. It’s the loop that brings more and more people on (or off) a particular platform in the first place. Everyone wants to participate in social gatherings and new products, so it’s only natural that this effect can happen.

The danger, though, when everyone moves their focus to a new online world, is that there’s less focus on everything else (including the offline World). And if our focus is captured for long enough, the everything-else can decay, which gives us less reason to shift our focus back in the future.

Really, such shifting of focus to the new is natural and shouldn’t be feared. But well-capitalized actors have often sought to game this desire. For example, some investing groups have developed a whole Machiavellian science of “network effects” that they’ve used to grow online platforms. “Network effects” is really just a term for feedback loops. And as for how to create these feedback loops, it’s most often a matter of pouring money into aggressive marketing to create a collective delusion of social fun or FOMO, in the attempt to generate more FOMO.

8. Algorithms

Behind every digital media product is a complex mixture of business strategy, design, code, and marketing. Different types of people work in these different areas, but within a corporation, they’re organized to all work together towards common aims.

I mention this because it’s important to note what we’re talking about when we use the word “algorithm”. For example, earlier I briefly mentioned how “algorithms” share popular content outside of the scope of their original exposure (e.g. sharing a post from a popular account to people who don’t follow that account). This sharing mechanism could be called one product “feature”. 

So what does such a feature consist of?

Well, first of all, such a feature would not be “launched” in the product unless it served a business purpose and fit into the business strategy—that is, unless it supported the short- or long-term growth of the product/company. Secondly, such a user-visible feature is designed to look a certain way and behave a certain way that is conducive to the overall user experience of the product. Thirdly, there is a lot of code logic that makes such a feature possible. There is the frontend and backend logic to make it look the way it should, trigger the way it should, and to send the right data back and forth. There is also the ranking or recommendation logic—what’s often known as the recommender system—to decide what popular content to show to what people and when.

The general word “algorithm” just means a procedure, like the steps of a cooking recipe. So there are algorithms everywhere, be it in frontend, backend, databases, design, or business strategy. Though, in the software world, the word generally refers to the code part and specifically procedures which are complex or mathematical. So in the social media context, when we speak of “algorithm” we are specifically referring to (1) features that involve a recommender system somewhere in the mix, (2) recommender systems themselves, or both.

9. Commoditized Algorithms

So what do these recommender systems do? How do they work? Well, it’s shifted over time.

I got an excellent perspective of this shift working on recommender systems at Google (using both “old” and “new” techniques). But it took me a long time to realize what was really driving this shift. So let me share my perspective.

I did two stints at Google, the first from late 2015 to early 2019, and the second one a brief period at the beginning of this year, 2022. I worked on the same team both times, the first stint mainly on Google News and the second stint mainly on Google Discover, both media products. Not exactly “social” media but focused on similar, commoditized interactions such as Like, Dislike, Click, and Share.

My team’s heritage was in Google Search, which is also a recommendations product (given a query, what are the most relevant webpages?). Historically, Search used what is called information retrieval, which is a loose science for developing signals or heuristics and combining these into a formula for scoring webpages (one of the first major heuristics being the famous PageRank algorithm). To simplify, the methodology usually starts from identifying some subsets of queries which are qualitatively underperforming (keyword: “qualitative”), and then developing some heuristic to fix it, which is evaluated quantitatively through A/B tests and metrics, and also qualitatively through human review. 
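To give a flavor of what a signals-and-heuristics approach looks like (the signal names, values, and weights below are hypothetical illustrations, not any actual formula), scoring typically boils down to a hand-tuned weighted combination:

```python
def score_page(signals, weights):
    """Hand-tuned scoring: each signal (e.g. a PageRank-like authority
    score, query-term match, freshness) is a number in [0, 1], and
    human curators choose the signals and tune the weights."""
    return sum(weights[name] * value for name, value in signals.items())

# Hypothetical weights and signal values for two candidate pages
weights = {"authority": 0.5, "term_match": 0.4, "freshness": 0.1}
page_a = {"authority": 0.9, "term_match": 0.3, "freshness": 0.2}
page_b = {"authority": 0.4, "term_match": 0.8, "freshness": 0.9}

ranked = sorted([("a", score_page(page_a, weights)),
                 ("b", score_page(page_b, weights))],
                key=lambda kv: kv[1], reverse=True)
```

The “heuristic” part is in choosing which signals exist and how heavily each counts, which is exactly the kind of qualitative, opinionated decision that predictive models later absorbed.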

Similarly, given our heritage, my team started with these methodologies in our media products. 

In both cases, machine learning, a category of predictive or statistical models, was used historically too. But mostly to create particular signals, which fit into a larger, human-curated framework. 

Over time, though, such human curators (i.e. designers or implementers of user-facing heuristics) were slowly phased out and replaced with predictive models. This has increased in prevalence since deep learning arrived on the scene and became more and more capable, but importantly, the trend started long before. And in advertisements (which is also a kind of recommendation product), it was mostly predictive well before DL became what it is today.

So how does ML work in these recommendation products?

Well, any application of ML always optimizes for a particular metric, and if we’re speaking of supervised ML (its most common form, and the main form used in media products), it always relies on labeled data. In other words, input-output pairs, where the goal is to be able to accurately predict the output given any input. For example, if the input is “features” describing a credit card applicant—e.g. zip code, income, and credit score—and the output is whether the applicant defaulted on their payments, the model could be optimized to predict whether any new applicant (represented by their features) would default on their payments in the future.
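As a minimal sketch of this supervised setup (toy numbers, not a real credit model), a logistic regression trained by gradient descent learns to predict the labeled outcome from the input features:

```python
import math

def train_logreg(data, lr=0.1, epochs=500):
    """Tiny logistic regression via stochastic gradient descent:
    learns to map input features to the probability of the label."""
    n_features = len(data[0][0])
    w, b = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for x, y in data:
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log-loss with respect to z
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, x):
    """Probability of the outcome (here: default) for a new input."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy labeled pairs: (income in $100k, credit score / 850) -> defaulted?
data = [([0.3, 0.50], 1), ([0.4, 0.55], 1), ([0.5, 0.60], 1),
        ([1.0, 0.85], 0), ([1.2, 0.90], 0), ([1.5, 0.95], 0)]
w, b = train_logreg(data)
```

Once trained, `predict` can be applied to any new applicant’s features; everything the model “knows” comes from the input-output pairs it was fit to.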

Deep learning—that is, artificial neural network modeling—is no different in these regards. It is still predictive, still using the same kinds of metrics, and still reliant on input-output pairs. The difference is in the fact that before the prediction layer, it can automatically do “feature engineering”. It can convert complex data structures (e.g. images, soundwaves, or text), which traditional statistical models would not be able to handle, into numerical features, which they can. 

Deep learning often surrounds itself with an air of mystery and complexity. But at the end of the day, if we’re referring to how DL is used, rather than how to make it work, all it is is feature engineering plus prediction. It’s that simple.
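The “feature engineering plus prediction” view can be sketched as a two-stage pipeline. The embedding below is a crude character hash standing in for learned features; a real deep network learns this stage rather than hard-coding it, but the shape of the pipeline (complex input, then numeric features, then prediction) is the same:

```python
import math

def embed(text, dim=8):
    """Stand-in for learned feature extraction: map raw text to a
    fixed-length numerical vector (here a crude normalized character
    hash, purely to illustrate the shape of the pipeline)."""
    v = [0.0] * dim
    for i, ch in enumerate(text):
        v[(ord(ch) + i) % dim] += 1.0
    total = sum(v) or 1.0
    return [x / total for x in v]

def predict_head(features, weights, bias):
    """The prediction layer is the same as in 'shallow' ML: a weighted
    sum pushed through a squashing (sigmoid) function."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# The pipeline: complex input -> numeric features -> prediction
p = predict_head(embed("some piece of content"), weights=[0.1] * 8, bias=0.0)
```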

The complexity really is a reflection of the complexity of the natural world. Data, when resulting from the unbounded expression of nature, can take an unlimited number of forms. So no model can predict any output given any input. But given any well-defined task and data-type, the aim is to predict outputs as well as possible given inputs. In other words, the goal is to create a commodity algorithm, which performs perfectly and deterministically (or as perfectly and deterministically as possible). 

ML models do not seek to be expressive or opinionated (as with signals and heuristics). They seek to be pristine commodities, perfect predictors.

And in the case of ML for media products, what they’re now predicting (most predominantly) are precisely the social commodities: Likes, Dislikes, Clicks, Shares, Followers. Specifically, their metrics optimize for the accurate prediction of commodity transactions (e.g. will this user Like this other user’s content?) or to directly increase future trading volume. The motivation for this comes back to incentives 1 and 2 from section 3: more transactions, all else being equal, means more exposure; and more exposure, all else being equal, means more money.
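In its simplest form, this objective amounts to ranking candidate content by a model’s predicted probability of a commodity transaction. A minimal sketch (the posts and probabilities are hypothetical):

```python
def rank_feed(candidates, p_like):
    """Order candidate posts by the model's predicted probability that
    this user will Like each one -- the commodity-prediction objective
    in its simplest form."""
    return sorted(candidates, key=p_like, reverse=True)

# Hypothetical predicted Like-probabilities for three candidate posts
scores = {"post_a": 0.12, "post_b": 0.55, "post_c": 0.31}
feed = rank_feed(list(scores), scores.get)  # most "transactable" first
```

Real systems combine many such predictions (clicks, watch time, shares) into one score, but the principle is the same: the feed is sorted by expected transaction, not by expected communication.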

10. Growth

So what is behind this shift from signals and heuristics, which are (at least somewhat) subjectively expressive or opinionated, to predictive models, which are not (at least not in themselves)?

One reason is certainly the fact that statistical modeling has become more and more capable (and there is a feedback loop where the more we rely on predictors, the better we make them, so the more we rely on them, etc.). But this is far from the whole story.

The benefit to the bottom line of an already popular media product (which is already capturing a large amount of user data) is larger when moving from a human-curated system to shallow ML than when moving from shallow ML to deep ML. And it’s larger moving from shallow ML to deep ML than moving from “basic” deep ML to more advanced deep ML (e.g. deep reinforcement learning or sequence models). And yet companies such as Google were quite hesitant to move towards shallow ML in the first place (and are still often hesitant). And they are more or less hesitant depending on the product (e.g. as I mentioned, in advertisements there was never as much fuss over being overly quantitative, or not qualitatively-focused enough, even though ads too are user-facing).

In reality, the core driving force behind the commoditization of algorithms is precisely the same as the driving force behind the development of the social commodities: growth. The origin of the Like Button was in many users being overwhelmed with keeping up with all their online acquaintances. Similarly, as a media product grows to include many different types of users—possibly from many different countries with drastically different cultures—it becomes harder for a software development team to create tailored heuristics for all of them. The more hand-crafted approach starts to see diminishing returns in its ability to grow the user base and their engagement levels. And thus, there becomes a need for a more widely-applicable, common denominator solution.

The thing about growth, though, is that it knows no bounds. It’s not a discrete thing: small → big. And similarly, commoditization is not a discrete thing: no commoditization → commoditization. With more growth comes more commoditization: larger volumes and more accumulation.

Let’s consider the incentives from section 3 again: all else being equal, more commodity transaction means more exposure, which, all else being equal, means more financial growth. Now it is true also that, all else being equal, more users on your platform means more communication on your platform. But it’s not necessarily true that it means more communication in society as a whole. 

And it’s also not true that more transaction implies more communication. We can’t even assume “all else being equal” in this situation because transaction directly affects communication. It can be used as an extra, but it can also be used as a substitute. So it’s possible that transaction can come at the expense of communication.

And in fact, with enough growth, it can eventually be the case that communication is an active hindrance to further transaction and growth. If, for example, users are spending “too much” time chatting, well, they could be spending more time scrolling and clicking. And this really is the danger of unbounded growth under such an imperialistic incentive structure.

So what encourages our growth to be unbounded? And what encourages our incentive structures to be imperialistic? Well, we’ll get to those questions in a moment. First let’s look at similar commoditization processes happening elsewhere.

11. Similar Processes Everywhere

Just to clarify again: there’s nothing wrong with commodities in themselves. The safety, security, and luxuries we all cherish in the modern world would not be possible without commodities. Without them, most of us would immediately starve and perish.

That being said, not all commodities are made equal and not all commoditization stays the same over time. I would rather my potatoes be commoditized than my social interaction. And I would rather 5% of my social interaction be commoditized than 50%.

The gravest devastation from commoditization, at least in my view, comes from the following:

  1. When we humans ourselves are pressured to become commodities (e.g. it’s commonly understood that this can happen with labor, but as we’ve seen, it even happens outside of just “work”; we may be effectively slaving away for parties that we do not even realize we’re slaving away for)

  2. When meaningful human experiences or acts of expression are commoditized and it becomes harder to have these or do these in non-commoditized form (the more this happens, the more it becomes clear that industrialization is no longer serving the people it exists to serve in the first place)

  3. When the economic drive to greater efficiency and/or consumption manifests in misleading narratives and mythologies to distract us from what’s really going on, or to trick us into thinking some outdated or net-harmful trend is actually net-beneficial (or even our path to salvation)

  4. When it “goes too far” and especially when it is aggressively taking advantage of young people or future generations (e.g. many media platforms right now are explicitly targeting young people)

Let’s discuss a couple of examples of commoditization processes in other realms. 

The first is science, specifically how many major scientific institutions have evolved such that their primary product—papers—must have a statistically significant p-value, which is often used as a binary indication of “truth” or value. And furthermore, how they discourage any kind of negative result or any iota of subjectivity. This is the pure commoditization of science or truth-seeking, the delusion of positivism in its most streamlined form. And again, it comes from growth, and also because science can serve industrialization even when it produces results which are not that interesting or empowering to individual people. Luckily, though, some statisticians are fighting this anti-scientific usage of statistical significance.

In my opinion, science that’s empowering to people should be conceptual, not purely statistical. But meaningful concepts are often at least somewhat subjective (not because their “truth” is subjective, merely because it can, in many cases, be difficult to demonstrate and validate a full concept without calling upon others’ lived experience). Thus, it’s very hard to publish humanistic science in such a positivist setting. All the meaning has to be snuck in either through lectures, conjectures, offhand comments in papers, or through books that scientists write in their own time. 

Great scientists certainly do all this, but it’s not exactly incentivized. And in fact, it’s often actively disincentivized, with legitimately interesting but somewhat subjective theories being cast aside as “pseudoscience”, whilst many purely predictive non-theories masquerade as “science”.

Secondly, let’s consider SSRIs (selective serotonin reuptake inhibitors), a popular commoditized treatment for depression. It’s recently become more widely known, thanks to the work of Joanna Moncrieff and others, that SSRIs do not really treat the mechanisms of depression at all, since depression is not caused by any kind of serotonin abnormality.

Considering a simplified view, if A is a healthy mental state and B is a depressed mental state, SSRIs do not convert B back to A (B → A), rather they induce an alternative state C (B → C), which may have fewer depressive symptoms than B but which may also have massive side effects, and withdrawal symptoms if a patient stops using them.

In fact, Moncrieff suggests (as common sense suggests too) that the root cause of depression is not in the brain. Instead it’s social or environmental:

An over-emphasis on looking for the chemical equation of depression may have distracted us from its social causes and solutions. We suggest that looking for depression in the brain may be similar to opening up the back of our computer when a piece of software crashes: we are making a category error and mistaking problems of the mind for problems in the brain.

We have an absolute need for treatments for depression (in part because social media appears to be driving more depression). And it’s easy to imagine how pill treatments can be the product of growth (e.g. demand for therapists or other holistic, tailored treatments exceeding supply). But this example illustrates a common problem with our present (liberal capitalist) incentive structures. Clearly the pharmaceutical companies that pushed, and continue to push, SSRIs, making tens of billions of dollars each year, are not incentivized to actually fight the mechanisms of depression. To get to its conceptual roots.

And in fact, this is largely the fault of the FDA system that relies predominantly on statistically significant changes to symptom metrics. Symptom metrics, which create a layer of alienation from patients and from disease mechanisms, allow for this kind of problem to arise.

It’s quite analogous to the alienation that arose in my time at Google when we switched from developing heuristics—that is, concepts—and validating them with metrics (which is great!), to optimizing for metrics directly. In making our “northstar” quantitative rather than qualitative, we made our customers more a means to the end of growth, with programmers and leadership becoming more detached from these customers.

12. Unsustainable Growth

People throw around the term “unsustainable” a lot. And yet, we do a pretty good job of sustaining our unsustainable growth. But what these people really mean is that it’s unsustainable without massive destruction and harm to people.

So what encourages or even forces unsustainable growth? Well, it’s easy to chalk it up to simplistic notions like “capitalism” or “competition”. But simplistic notions like that are not so useful (unless two parties already agree on a viewpoint) because the reality is that “capitalism” and “competition” are of course not absolutely bad or absolutely good. And they’re also not even unitary concepts: they follow processes which evolve and these processes are ultimately the result of individual human actions and choices. 

Even Marx, for example, who wrote the most thoroughly scathing critique of the “capitalist mode of production”, was in many ways a big fan of “capitalism”. He felt that without the productive forces of capitalism which created immense surplus value, the socialist revolution he pushed for would not even be possible. His historical analysis was just that, eventually, the capitalist mode of production becomes outdated, or more precisely, it becomes so overly exploitative and counterproductive to people that a drastic change is needed.

The reality of unsustainable growth today is much more specific than mere “competition”. There are specific structures and cultures in place which force growth at consistent rates. Here are a few specifically related to social media tech companies:

  1. Venture capital investment (e.g. a company needs to grow at an incredible pace to continue receiving more investment money)

  2. Widely distributed ownership (e.g. a single owner or leader obviously has its risks, but at least a single owner has the power to decide where to “draw the line”; distributed ownership by investors, on the other hand, is not just risky but demands growth at an unsustainable rate)

  3. The stock market (e.g. the “value” of a company is tied to speculation, which is dependent on the expectation of growth)

  4. Media outlets (e.g. TechCrunch serves financialization by encouraging everyone to put money and investment on a pedestal; it’s always “Company X raised a $Y Series B” and never “A Critical Review of the Features of Product X” or “What I Love and What I Don’t Love About Product X”; in Marxist terms, it’s all about exchange-value and rarely about use-value)

These structures and cultures exist, but we all have the freedom to either actively bolster them, passively participate, passively abstain, or actively fight against them. And their continued existence and development is, at the end of the day, solely the result of the actions and choices of individual people.

13. Narratives to Defend the Status Quo

It is not enough to destroy one’s own and other people’s experience. One must overlay this devastation by a false consciousness inured, as Marcuse puts it, to its own falsity.

Exploitation must not be seen as such. It must be seen as benevolence.

R.D. Laing, The Politics of Experience

Let’s briefly discuss again my experience at Google and the shift from hand-crafted methods to automation in digital media products. As I mentioned, it took me a while to figure out what was really behind the shift. And this was due (besides my own naivete) to many narratives that were actively hiding the reality.

One of these narratives was the idea that we were not directing or controlling consumer behavior; we were simply “adapting to changing user preferences”. Quite possibly, this narrative was simply a projection of our own impotence as employees. But the reality, of course, is that the creator of a product, and the product itself, are active agents in the world, causing tangible (if not fully predictable) change in the people they affect.

Of course, consumers too are active agents and can abstain or even push back. But to acknowledge the agency of consumers while absolving oneself of agency and responsibility is simply inaccurate. A coping mechanism.

Related to this narrative is a misconception about the idea of data. There is a widespread idea among those developing media products that we are mere passive collectors of data, when in reality, we are actively capturing these pieces of information. And from the vast, unbounded world of information, we are capturing very specific information, information which is also largely the product of our own actions. In other words, we are not passive collectors but active creators and capturers. As Laing put it, data could more accurately be called capta.

There are other misleading narratives too. For example, when contrasting automation with heuristics, there’s no honest acknowledgement that “We’re automating because it’s what’s needed for further growth.” Instead, automation gets put on a pedestal while more user-facing, human-crafted approaches get disparaged. For example, if you categorize user needs based on user research (e.g. what types of news people like to read) and want to build heuristics on top of those categories, colleagues and leadership may say “Well, your categories are not perfect.” Never mind that doing product development behind a wall of metrics, with no users in sight, is also guaranteed to be imperfect, whilst additionally guaranteeing alienation (alienation which, by the way, can hide any widespread harm we may be doing).

At least fields like finance are able to be a bit more honest in this regard. There’s no hiding the fact that the primary goal is growth and money in finance. The corporations of Silicon Valley, however, continue to try to convince us that their primary goal is “making the world a better place.”

14. Mythologies to Defend the Status Quo

In addition to smaller narratives, there are also broader mythologies to defend and even encourage further commoditization. Complete with heroes, gods, and visions of future Paradise.

One of these mythologies I just mentioned. It’s the idea that the “technology” coming out of Silicon Valley by default benefits the general public. Never mind that the biggest accomplishments of technology we tend to cite came from decades ago (e.g. the automobile or the washing machine) and that our modern definition of “technology” has shifted to include companies like WeWork, various advertising and marketing tools, NFTs, and all forms of hype.

This mythology comes in many flavors, many sub-religions. But some specific manifestations can point us to what this mythology really stands for. For example, consider the myth of “AGI” as presented in the OpenAI Charter. It presents a vision of a vague future technology, “AGI”, which is guaranteed to appear at a specific moment in time and greatly “benefit humanity”. (Though, we also need to be concerned about our “safety” with respect to this new technology.)

What OpenAI has done thus far is improve deep learning models (i.e. code which performs feature engineering plus prediction). So it’s unclear how this could lead to a singular moment of Paradise. But a historical analysis of mythology will help to put this modern-day Millenarian or Messiah myth into perspective (for example, as Adorno and Horkheimer did in their work, Dialectic of Enlightenment). 

In ancient polytheistic religions, the gods were often associated with elements of nature (e.g. the sky, the sun, the ocean, the weather, fertility, and harvest). Later this shifted more to monotheistic religions where God was made in the image of man, and man in turn was to “have dominion over the fish of the sea, and over the fowl of the air, and over every living thing that moveth upon the earth.” This shift, according to Adorno and Horkheimer, reflected man’s increasing mastery over nature through technology, science, production, and population growth.

So let’s consider the myth of “AGI” in this context. In this myth, the machine is God. And, at least in my view, this is a clear reflection of our subservience to the machine. Furthermore, given that the “AGI” that OpenAI (and other companies, such as Google DeepMind) are building is just a set of corporately-owned tools for digital automation, it really is a reflection of our subservience to wealthy technocrats and the digital technology they produce and own.

Zooming out, it seems silly that people could believe such a vague and archetypal myth, and especially one that puts machines above people (given that we are people, after all, and anything we build should be to support us). Partly it’s due to marketing and bad actors (and the present environment often rewards bad actors). But that’s only part of the equation. 

Actually, this belief can arise in a similar way to how people can become obsessed with Likes and Follower counts on social media: if we, as technologists, tie our own success and self-worth to furthering the cause of the present tech industry and the present process of industrialization, then it’s natural that we will come to identify with this role. We disconnect, at least in part, from our intrinsic nature and assume the role of an economic agent, even if the economy becomes more and more exploitative and capital becomes more and more concentrated.

And it’s easy to do this (or rather, have this happen) because most of the time, these beliefs just sit at the backs of our minds. We don’t have to think about them actively, and especially not if we’re in a bubble, surrounded by people with similar beliefs. Furthermore, we can always find some historical and modern examples of positive outcomes of industrialization, and focus on those, not zooming out to look at the big picture. Usually, it’s only when we hit a wall in our own progress that we start zooming out or reassessing our path.

15. Change

Let’s revisit a question we asked at the very beginning: Can we imagine a world that maintains the positive benefits [of social media] while removing much of the negative?

The answer is: I think we can. Not to say getting there won’t involve a battle, but we’re speaking of imagination for now.

For example, if our present environment allows for and even rewards large-scale parasitic incentives—incentives that will almost certainly cost us in the future—then we can simply constrain this environment. And we can certainly do this in a targeted way which does not remove positive incentives. Laws and rules are quite flexible.

Furthermore, if the toxic effects of social media (and other products/industries) mainly come in the later stages of unbounded, unsustainable growth, and we have specific structures in place which not only encourage unsustainable growth but force it, then we need to change those structures. And to spread the word, since these structures often depend on our collective support (i.e. our choices and actions) and that support often comes through simple naivete.

Finally, we should again repeat (to ourselves and everyone around us) that society results from the individual choices and actions of people. Societies do not “just evolve” or “fix themselves”, and there are many different possible ways societies can develop and change, as history shows. 

For example, the aggressive industrialization and exaltation of objective rationality during the “Enlightenment” period in Europe was countered by the “Counter-Enlightenment” and “Romantic” periods, which in many ways tried to meld innovation in the external, objective realm with the internal, subjective realm. To not only master nature but to value nature, including our own human nature. In other eras, though, similar circumstances gave rise to fascism and genocide. (Simply because the fascists acted, organized, and spread their beliefs more aggressively than the non-fascists.)

We can take a Martian view and suppose that things will fix themselves. And they almost certainly will if we take this Martian view. But in what way? 

Or we can say resignedly “What’s the use in trying to change the environment? I have no power.” But acknowledging our own agency is not the same as saying we as individuals should solve every problem, or take on huge challenges. We obviously have to collaborate to solve “big problems”. And there are local organizations everywhere that are actively working together, actively seeking joy and accomplishment in the tearing down of counterproductive structures (mostly those unique outgrowths of late-stage liberal capitalism) and the creation of new movements, cultures, and ideas.

Importantly, if we want to save nature, both our own Human Nature and the wider Nature that we’re a part of, well, we have to take action. But also—perhaps even first—we need to learn to love Nature, and to learn to not love not-Nature. And certainly, we need to try to determine the difference between the two.

AI Research: the Corporate Narrative and the Economic Reality

In his widely-circulated essay, The Bitter Lesson, the computer scientist Rich Sutton argued in favor of AI methods that leverage massive amounts of data and compute power, as opposed to human understanding. If you haven’t read it before, I encourage you to do so (it’s short, well-written, and important, representing a linchpin in the corporate narrative around machine learning). But I’ll go ahead and give some excerpts which form the gist of his argument:

The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.

In computer chess, the methods that defeated the world champion, Kasparov, in 1997, were based on massive, deep search. At the time, this was looked upon with dismay by the majority of computer-chess researchers who had pursued methods that leveraged human understanding of the special structure of chess. When a simpler, search-based approach with special hardware and software proved vastly more effective, these human-knowledge-based chess researchers were not good losers…

A similar pattern of research progress was seen in computer Go… Enormous initial efforts went into avoiding search by taking advantage of human knowledge, or of the special features of the game, but all those efforts proved irrelevant, or worse, once search was applied effectively at scale.

In speech recognition… again, the statistical methods won out over the human-knowledge-based methods… as in the games, researchers always tried to make systems that worked the way the researchers thought their own minds worked—they tried to put that knowledge in their systems—but it proved ultimately counterproductive, and a colossal waste of researcher’s time, when, through Moore’s law, massive computation became available and a means was found to put it to good use.

This is a big lesson. As a field, we still have not thoroughly learned it, as we are continuing to make the same kind of mistakes.

We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.

Now before getting to the many problems with Sutton’s argument, I have a caveat to add, and I think it’s best to add it up front: I am one of the “sore losers” that he mentions. I, like many others, colossally wasted my time making a Go AI that utilized understanding. It felt like fun, like a learning experience, simultaneously exploring the mysteries of Go and programming. But now, I realize it was just a sad squandering of calories and minutes. If only I had devoted myself to amassing a farm of 3,000 TPUs and making a few friends at the local power grid station. If only I had followed the money and cast my pesky curiosity aside, I could’ve—just maybe—gotten to AlphaGo first! Then I wouldn’t be sitting here, in this puddle of envy, writing today. But alas, here I am.

In all seriousness, though, my first criticism of Sutton’s piece is this: it’s too “winnerist.” Everything is framed in terms of winners and losers. A zero-sum game. But I question whether that is an accurate and productive view of science, technology, and even of games like Chess and Go.

Let’s start with the games. I’m more familiar with Go, so I’ll speak on that. Although Sutton cites the 70-year history of AI research, Go has existed for thousands of years. And it’s always been valued as an artistic and even spiritual pursuit in addition to its competitive aspect. For example, in ancient China, it was considered one of the four arts along with calligraphy, painting, and instrument-playing. And many Go professionals today feel the same way. The display of AlphaGo marketed Go as an AI challenge, a thing to be won—nay, conquered—rather than a thing to be explored and relished (and the benefits to DeepMind and Google of marketing it in such a way are clear). But historically, that has not been the game’s brand within the Go world. Nor was that even its brand in the Go AI world (which consisted largely of Go nerds who happened to also be programming nerds).

For players, these games are about competition, yes, but also about learning, beauty, and mystery. For corporations, they’re just a means to an end of making a public display of their technology (one completely disconnected, by the way, from the actual profit-making uses of said technology). And for AI researchers, well, that’s up to them to decide. If they’re also players, they will likely value Go or Chess for Go or Chess’s sake. But if not, it makes sense to pursue what is most rewarded in the research community. 

But we don’t have to look far to see that what is rewarded in the ML research community is largely tied to corporate interests. After all, many eminent researchers (such as Sutton himself) are employed by the likes of Google (including DeepMind), Facebook, Microsoft, OpenAI, etc. Of the three giants of deep learning—Hinton, Bengio, and LeCun—only Bengio is “purely academic.” And even in the academic world, funding/grants largely come from corporations, or are indirectly tied to corporate interests.

This brings me to a conjecture. It’s related both to winnerism and to what has been “most effective” in being rewarded by the ML/AI research community. The conjecture is that both are reflections not of an ideal scientific attitude, but merely of the unfettered capitalist society that we find ourselves in.

Are we, as Sutton says, attempting to discover our own “discovery process”? In the hopes of, I don’t know, creating <INSERT FAVORITE SCI-FI AI CHARACTER>? Or are we perhaps discovering new profit processes for the corporations that have the most data and compute power? 

The Economic Reality

Either way, it stands to reason that if we want to judge what is “most effective” as a technology, we should look beyond a small set of famed AI challenges and toward its predominant economic use cases. What are these funders actually using machine learning for? Forget the marketing, the CEO-on-a-stage stuff. Where are the dollars and cents actually coming from?

Well, from what I’ve seen in the industry, the most profitable use-cases appear to fall into two main categories. Unfortunately, neither involves creating my favorite sci-fi AI character. Instead, they involve (1) automation and (2) directing consumer behavior.

Let’s start with the first: automation. This one should not surprise any historian of business, as the automation of labor is as old as capitalism itself. But what is perhaps unique about software, and especially machine learning, is that we’re no longer limited to the automation of physical labor. We can automate mental labor as well. (Hence the assault on human understanding, which has now become a competitor.)

One example can be seen in the automation of doctors, or of certain functions of doctors. I saw an example of this at PathAI, a Boston-based healthcare company. But examples are now widespread in the healthcare industry (and even Google and Amazon have moved into healthcare). PathAI’s use of deep learning was actually principled and humanistic. Specifically, it was used to segment pathology images—huge gigapixel images that no human can fully segment (though doctors do have effective sampling procedures). Once segmented, the images were converted to “human-interpretable features” which could be used by doctors and scientists to aid in their search for new treatments. It genuinely empowered doctors, scientists, and programmers.
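
To illustrate the pattern (this is my own toy sketch, not PathAI’s actual pipeline; the class labels and feature names are hypothetical), the key move is reducing a model’s per-pixel segmentation output to summary features a pathologist can directly read and sanity-check:

```python
# Toy per-pixel class labels, as a segmentation model might output.
# Labels are illustrative, not any company's real schema.
TUMOR, STROMA, NECROSIS = 1, 2, 3

def interpretable_features(mask):
    """Reduce a per-pixel class mask to human-interpretable slide features."""
    flat = [label for row in mask for label in row]
    total = len(flat)
    return {
        "tumor_fraction": flat.count(TUMOR) / total,
        "stroma_fraction": flat.count(STROMA) / total,
        "necrosis_fraction": flat.count(NECROSIS) / total,
    }

mask = [
    [TUMOR, TUMOR, STROMA],
    [TUMOR, NECROSIS, STROMA],
    [STROMA, STROMA, STROMA],
]
features = interpretable_features(mask)  # e.g. tumor_fraction = 3/9
```

The point of the design is that a doctor can check a figure like “tumor_fraction” against their own judgment, whereas an end-to-end prediction offers no such handle.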

But it’s not all peaches and cream. Many applications are not empowering, and there is a lot of manipulative media bias. As with Go, winnerist marketing has overtaken healthcare. For example, there are many widely publicized studies that compare (in a highly controlled environment) the prediction accuracy of doctors to that of an ML model in classifying cell types from pathology images. The implication is that if the model performs better, we should prefer having the model look at our pathology images. Such a limited metric in such a highly controlled environment, though, does not prove much. And more importantly, you will not see corporations trying to quantify and publicize the many things that doctors can do that the machines can’t. Nor trying to improve the skills of doctors. No. If anything, they want humans to look like bad machines so their actual machines will shine in comparison.

Now, I hope it’s clear from this example that technology is not the problem. I’m not arguing to go back to the golden ages of bloodletting and tobacco smoke enemas. What I am arguing, or simply highlighting, is the fact that corporations do not optimize for empowerment. They optimize for growth, and marketing in any and every way that aids that growth.

Which brings me to our second category: directing consumer behavior. Two clear examples of this are recommender systems (for media products) and advertising. Both are the lifeblood of ML juggernauts like Google, Facebook, and TikTok.

The idea is that with enough consumer data and smart algorithms, we can build accurate models of real consumers. Then, using those models, we can to some extent direct consumer behavior. To sell more shit. (Sorry, I’m still annoyed by my purchase, through Facebook, of a Casper pillow many years ago.) 

I mean, it’s nothing new to say: these social media products are like cigarettes. Yada yada. How can we make tech that’s not cigarettes? Somehow, though, we brush the issue under the rug most of the time. We convince ourselves that ML advancements are going towards machine translation to communicate with displaced refugees. But honestly, we should be more bothered, especially about products targeted towards young people.

The Downsides of Solutions Without Understanding

Anyways, coming back to Sutton’s examples—Chess, Go, and speech recognition—we can see that they’re all constrained domains with well-defined measures of “success.” Winning in the case of Chess and Go, and word prediction accuracy in the case of speech recognition. I did argue that these are not true measures of success from a human perspective. But, to some extent, it works for these examples, the domains that make for good AI challenges.

Most products that are meaningful to people, however, are not as constrained as Chess, Go, and speech recognition. They can’t be boiled down to a single metric or set of metrics that we can optimize towards (even if we were to use solely user-centric and not company-centric metrics). And the attempt to do so as a primary strategy leads to products with less craftsmanship and less humanism. Or worse. 

For one, relying on the model to do the “understanding,” rather than ourselves, disincentivizes us from developing new understanding. Of course, we’re free to pursue understanding in our own time, but it’s less funded and therefore economically disincentivized. We put all our eggs into the model basket and the only path forward is more data, more compute power, and incremental improvements to model capacity. This hinders innovation in any domain that’s not machine learning itself. And, in fact, the corporate media that bolsters such approaches hinders innovation even more because would-be innovators get the impression that this is just how things are done, when in fact, it may merely be in the best interests of those with the most data and compute power. 

This fact is easy for ML/AI researchers to miss because they are pursuing understanding (of how to make the best models). But it can do a grave disservice to people researching other domains if (1) ML is applied in an end-to-end fashion that doesn’t rely on understanding and (2) it’s heavily marketed and publicized in a winnerist way. This flush of marketing and capital shifts the culture of those domains toward more of a sport (one that consists of making stronger and stronger programs, like a billionaire’s BattleBots), rather than a scientific pursuit.

Secondly, as with the quote “one death is a tragedy, a million deaths is a statistic,” the overuse of metrics can actually be a way for corporations to hide the harm they may be doing to users, even from their own employees. This is clear with recommender systems and media products, for example. It’s much easier to look at a metric that measures youth engagement than to look at some possibly unsavory content that real young people are consuming. If we attempt to solve a user problem, then validate it with metrics, that’s just good science. But when metrics become our “northstar,” then the metrics are necessarily separating us from users.

It’s important to note, by the way, that none of this relies on the people running our corporations being “bad people” (though they may be). It’s just the nature of investor-controlled companies where the investors are focused on ROI. (I mean, speaking of how metrics can separate us from users, consider the effect of the ROI metric, especially when investments are spread across many companies.) When push comes to shove and growth expectations are not met, whatever standards you have are subject to removal. I certainly saw this at Google, which at least from my limited perspective, was more focused on utility in the past. The pressure trickles down the chain of command. Misleading narratives are formed. Information is withheld. The people who don’t like what they see leave. And what you’re left with is an unfettered economic growth machine. The only thing preventing this development is resistance/activism from the inside (employees) and resistance/activism from the outside (which can lead to new regulations).

This is the real “bitter lesson.” As a field, we have most certainly not learned it yet, simply because the corporations that benefit the most from ML control the narrative. But there is, I believe, a “better lesson” too. 

The Better Lesson

Luckily, none of these downsides I’ve mentioned have to do with the technology itself. Machine learning and deep learning are incredibly useful and have beautiful theories behind them. And programming in general is an elegant and creative craft, clearly with great transformative potential.

Technology, though, ML included, always exists for a purpose. It always has to be targeted towards a goal. As Richard Feynman put it in his speech on The Value of Science, science and technology provide the keys to both the gates of heaven and the gates of hell, but not clear instructions on which gate is which. So it’s up to us, not as scientists and technologists but as human beings, to figure that out. And to spread the word when we do.