AI Research: the Corporate Narrative and the Economic Reality


In his widely circulated essay, The Bitter Lesson, the computer scientist Rich Sutton argued in favor of AI methods that leverage massive amounts of data and compute power, as opposed to human understanding. If you haven’t read it before, I encourage you to do so (it’s short, well-written, and important, representing a linchpin in the corporate narrative around machine learning). But I’ll go ahead and give some excerpts that form the gist of his argument:

The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.

In computer chess, the methods that defeated the world champion, Kasparov, in 1997, were based on massive, deep search. At the time, this was looked upon with dismay by the majority of computer-chess researchers who had pursued methods that leveraged human understanding of the special structure of chess. When a simpler, search-based approach with special hardware and software proved vastly more effective, these human-knowledge-based chess researchers were not good losers…

A similar pattern of research progress was seen in computer Go… Enormous initial efforts went into avoiding search by taking advantage of human knowledge, or of the special features of the game, but all those efforts proved irrelevant, or worse, once search was applied effectively at scale.

In speech recognition… again, the statistical methods won out over the human-knowledge-based methods… as in the games, researchers always tried to make systems that worked the way the researchers thought their own minds worked—they tried to put that knowledge in their systems—but it proved ultimately counterproductive, and a colossal waste of researcher’s time, when, through Moore’s law, massive computation became available and a means was found to put it to good use.

This is a big lesson. As a field, we still have not thoroughly learned it, as we are continuing to make the same kind of mistakes.

We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.

Now before getting to the many problems with Sutton’s argument, I have a caveat to add, and I think it’s best to add it up front: I am one of the “sore losers” that he mentions. I, like many others, colossally wasted my time making a Go AI that utilized understanding. It felt like fun, like a learning experience, simultaneously exploring the mysteries of Go and programming. But now, I realize it was just a sad squandering of calories and minutes. If only I had devoted myself to amassing a farm of 3,000 TPUs and making a few friends at the local power grid station. If only I had followed the money and cast my pesky curiosity aside, I could’ve—just maybe—gotten to AlphaGo first! Then I wouldn’t be sitting here, in this puddle of envy, writing today. But alas, here I am.

In all seriousness, though, my first criticism of Sutton’s piece is this: it’s too “winnerist.” Everything is framed in terms of winners and losers. A zero-sum game. But I question whether that is an accurate and productive view of science, technology, and even of games like Chess and Go.

Let’s start with the games. I’m more familiar with Go, so I’ll speak on that. Although Sutton cites the 70-year history of AI research, Go has existed for thousands of years. And it has always been valued as an artistic and even spiritual pursuit in addition to its competitive aspect. In ancient China, for example, it was considered one of the four arts, along with calligraphy, painting, and instrument-playing. And many Go professionals today feel the same way. The AlphaGo spectacle marketed Go as an AI challenge, a thing to be won—nay, conquered—rather than a thing to be explored and relished (and the benefits to DeepMind and Google of marketing it that way are clear). But historically, that has not been the game’s brand within the Go world. Nor was it even its brand in the Go AI world (which consisted largely of Go nerds who happened to also be programming nerds).

For players, these games are about competition, yes, but also about learning, beauty, and mystery. For corporations, they’re just a means to the end of making a public display of their technology (one completely disconnected, by the way, from the actual profit-making uses of said technology). And for AI researchers, well, that’s up to them to decide. If they’re also players, they will likely value Go or Chess for its own sake. But if not, it makes sense to pursue what is most rewarded in the research community.

But we don’t have to look far to see that what is rewarded in the ML research community is largely tied to corporate interests. After all, many eminent researchers (such as Sutton himself) are employed by the likes of Google (including DeepMind), Facebook (a.k.a. Meta), Microsoft, OpenAI, etc. Of the three giants of deep learning—Hinton, Bengio, and LeCun—only Bengio is “purely academic.” And even in the academic world, funding and grants largely come from corporations, or are indirectly tied to corporate interests.

This brings me to a conjecture. It’s related both to winnerism and to what has been “most effective” in being rewarded by the ML/AI research community. The conjecture is that both are reflections not of an ideal scientific attitude, but merely of the unfettered capitalist society that we find ourselves in.

Are we, as Sutton says, attempting to discover our own “discovery process”? In the hopes of, I don’t know, creating <INSERT FAVORITE SCI-FI AI CHARACTER>? Or are we perhaps discovering new profit processes for the corporations that have the most data and compute power? 


The Economic Reality

Either way, it stands to reason that if we want to judge what is “most effective” as a technology, we should look beyond a small set of famed AI challenges and toward its predominant economic use cases. What are these funders actually using machine learning for? Forget the marketing, the CEO-on-a-stage stuff. Where are the dollars and cents actually coming from?

Well, from what I’ve seen in the industry, the most profitable use cases appear to fall into two main categories. Unfortunately, neither involves creating my favorite sci-fi AI character. Instead, they involve (1) automation and (2) directing consumer behavior.

Let’s start with the first: automation. This one should not surprise any historian of business, as the automation of labor is as old as capitalism itself. But what is perhaps unique about software, and especially machine learning, is that we’re no longer limited to the automation of physical labor. We can automate mental labor as well. (Hence the assault on human understanding, which has now become a competitor.)

One example is the automation of doctors, or of certain functions of doctors. I saw this firsthand at PathAI, a Boston-based healthcare company, but examples are now widespread across the healthcare industry (even Google and Amazon have moved into healthcare). PathAI’s use of deep learning was actually principled and humanistic. Specifically, deep learning was used to segment pathology images, which are huge gigapixel images that no human can fully segment (though doctors do have effective sampling procedures). Once segmented, the images were converted into “human-interpretable features” that doctors and scientists could use in their search for new treatments. It really did empower doctors, scientists, and programmers.
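
To make that pipeline concrete, here is a minimal sketch of the general pattern: tile the slide, segment each tile, then reduce the masks to a handful of human-readable numbers. This is my own illustration, not PathAI’s code; the segmentation model, the class labels, and the feature names are all placeholders.

```python
# Minimal sketch of a "segment, then derive interpretable features" pipeline.
# Purely illustrative: the model interface, class labels, and feature names
# are placeholders, not any company's actual implementation.

import numpy as np

TILE = 1024
CLASSES = {0: "background", 1: "tumor", 2: "lymphocyte", 3: "stroma"}

def iter_tiles(slide):
    """Yield fixed-size patches from a slide held as an (H, W, 3) array.

    A real pipeline would stream tiles from a whole-slide image file
    rather than loading gigapixels into memory.
    """
    h, w = slide.shape[:2]
    for r in range(0, h, TILE):
        for c in range(0, w, TILE):
            yield slide[r:r + TILE, c:c + TILE]

def interpretable_features(masks):
    """Reduce per-tile class masks to a few slide-level, human-readable numbers."""
    counts = np.zeros(len(CLASSES), dtype=np.int64)
    for mask in masks:
        counts += np.bincount(mask.ravel(), minlength=len(CLASSES))
    tissue = max(int(counts[1:].sum()), 1)  # avoid division by zero
    return {f"{CLASSES[i]}_area_fraction": counts[i] / tissue
            for i in range(1, len(CLASSES))}

def slide_report(segment, slide):
    """`segment` is any function mapping an RGB tile to an integer class mask."""
    masks = [segment(tile) for tile in iter_tiles(slide)]
    return interpretable_features(masks)
```

The point of structuring it this way is that the model’s output feeds into quantities a pathologist can sanity-check, rather than terminating in a bare end-to-end prediction.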

But it’s not all peaches and cream. Many applications are not empowering, and there is a lot of manipulative media coverage. As with Go, winnerist marketing has overtaken healthcare. For example, there are many widely publicized studies that compare, in a highly controlled environment, the prediction accuracy of doctors to that of an ML model in classifying cell types from pathology images. The implication is that if the model performs better, we should prefer having the model look at our pathology images. Such a limited metric in such a highly controlled environment, though, does not prove much. And more importantly, you will not see corporations trying to quantify and publicize the many things that doctors can do that the machines can’t, nor trying to improve the skills of doctors. No. If anything, they want humans to look like bad machines so their actual machines will shine in comparison.

Now, I hope it’s clear from this example that technology is not the problem. I’m not arguing that we go back to the golden ages of bloodletting and tobacco smoke enemas. What I am arguing, or simply highlighting, is that corporations do not optimize for empowerment. They optimize for growth, and for whatever marketing aids that growth.

Which brings me to our second category: directing consumer behavior. Two clear examples are recommender systems (for media products) and advertising, both of which are the lifeblood of ML juggernauts like Google, Facebook, and TikTok.

The idea is that with enough consumer data and smart algorithms, we can build accurate models of real consumers. Then, using those models, we can to some extent direct consumer behavior. To sell more shit. (Sorry, I’m still annoyed by my purchase, through Facebook, of a Casper pillow many years ago.) 
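
Mechanically, the core loop is not mysterious: fit a model that predicts some engagement signal from logged user and item data, then surface whatever that model scores highest. Below is a toy sketch of that pattern; the data is synthetic, every name is hypothetical, and real systems at these companies are enormously more elaborate.

```python
# Toy illustration of "model the consumer, then rank for engagement."
# Synthetic data; every feature and name here is hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fake interaction log: 4 user features + 4 item features per row, plus
# whether the user engaged (clicked, watched, bought, ...).
X = rng.normal(size=(10_000, 8))
y = (X[:, 0] + X[:, 5] + 0.5 * rng.normal(size=10_000) > 0).astype(int)

engagement_model = LogisticRegression().fit(X, y)

def rank_candidates(user_features, candidate_items):
    """Order candidate items by predicted engagement for this user."""
    rows = np.hstack([np.tile(user_features, (len(candidate_items), 1)),
                      candidate_items])
    scores = engagement_model.predict_proba(rows)[:, 1]
    return np.argsort(-scores)

# Pick what to show a user from five candidate items.
user = rng.normal(size=4)
items = rng.normal(size=(5, 4))
print(rank_candidates(user, items))
```

Notice that nothing in this loop asks whether the engagement was good for the user; that question lives entirely outside the quantity being optimized.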

I mean, it’s nothing new to say: these social media products are like cigarettes. Yada yada. How can we make tech that’s not cigarettes? Somehow, though, we brush the issue under the rug most of the time. We convince ourselves that ML advancements are going towards machine translation to communicate with displaced refugees. But honestly, we should be more bothered, especially about products targeted towards young people.

My last tenure at Google (working in recommender systems) was a bit disturbing, actually. I guess I was just sheltered in the past. But this experience was striking because I had worked there, left for about three years, then come back to the same team. And I could see just how much had changed—for the worse—in Google’s frenzied attempt at never-ending growth.

For one, my team had shifted far away from its old, understanding-centric approaches and towards the black-box optimization of primarily engagement- and growth-related metrics. This helped with scale, but it alienated us programmers from the product and its users, which made it much harder to develop a product that’s actually useful.

Even worse, Google is very worried now about the fact that young people are not using Search as much. And they are trying to “solve” this problem through some very stupid and unethical means. (I mean, if being unethical is your only option and you’re honest about it, then, you know, it is what it is; but in this case the organization was being stupidly and dishonestly unethical, a truly remarkable set of traits.)


The Downsides of Solutions Without Understanding

Anyways, coming back to Sutton’s examples—Chess, Go, and speech recognition—we can see that they’re all constrained domains with well-defined measures of “success”: winning in the case of Chess and Go, and word-level transcription accuracy in the case of speech recognition. I did argue that these are not true measures of success from a human perspective. But, to some extent, they work for these examples, the kinds of domains that make for good AI challenges.

Most products that are meaningful to people, however, are not as constrained as Chess, Go, and speech recognition. They can’t be boiled down to a single metric or set of metrics that we can optimize towards (even if we were to use solely user-centric and not company-centric metrics). And the attempt to do so as a primary strategy leads to products with less craftsmanship and less humanism. Or worse. 

For one, relying on the model to do the “understanding,” rather than doing it ourselves, disincentivizes us from developing new understanding. Of course, we’re free to pursue understanding on our own time, but it’s less funded and therefore economically disincentivized. We put all our eggs into the model basket, and the only path forward is more data, more compute power, and incremental improvements to model capacity. This hinders innovation in any domain that’s not machine learning itself. And, in fact, the corporate media that bolsters such approaches hinders innovation even more, because would-be innovators get the impression that this is just how things are done, when in fact it may merely be in the best interests of those with the most data and compute power.

This fact is easy for ML/AI researchers to miss, because they are pursuing understanding (of how to make the best models). But it can do a grave disservice to people researching other domains if (1) ML is applied in an end-to-end fashion that doesn’t rely on understanding and (2) it’s heavily marketed and publicized in a winnerist way. This flush of marketing and capital shifts the culture of those domains toward more of a sport (one that consists of making stronger and stronger programs, like a billionaire’s BattleBots), rather than a scientific pursuit.

Secondly, as with the quote “one death is a tragedy, a million deaths is a statistic,” the overuse of metrics can actually be a way for corporations to hide, from their own employees, the harm they may be doing to users. This is clear with recommender systems and media products, for example. It’s much easier to look at a metric that measures youth engagement than to look at some possibly unsavory content that real young people are consuming. If we attempt to solve a user problem and then validate it with metrics, that’s just good science. But when metrics become our “north star,” then the metrics are necessarily separating us from users.

It’s important to note, by the way, that none of this relies on the people running our corporations being “bad people” (though they may be). It’s just the nature of investor-controlled companies where the investors are focused on ROI. (I mean, speaking of how metrics can separate us from users, consider the effect of the ROI metric, especially when investments are spread across many companies.) When push comes to shove and growth expectations are not met, whatever standards you have are subject to removal. I certainly saw this at Google, which, at least from my limited perspective, was more focused on utility in the past. The pressure trickles down the chain of command. Misleading narratives are formed. Information is withheld. The people who don’t like what they see leave. And what you’re left with is an unfettered economic growth machine. The only things preventing this development are resistance/activism from the inside (employees) and resistance/activism from the outside (which can lead to new regulations).

This is the real “bitter lesson.” As a field, we have most certainly not learned it yet, simply because the corporations that benefit the most from ML control the narrative. But there is, I believe, a “better lesson” too. 


The Better Lesson

Luckily, none of these downsides I’ve mentioned have to do with the technology itself. Machine learning and deep learning are incredibly useful and have beautiful theories behind them. And programming in general is an elegant and creative craft, clearly with great transformative potential.

Technology, though, ML included, always exists for a purpose. It always has to be targeted towards a goal. As Richard Feynman put it in his speech “The Value of Science,” science and technology provide the keys to both the gates of heaven and the gates of hell, but no clear instructions on which gate is which. So it’s up to us, not as scientists and technologists but as human beings, to figure that out. And to spread the word when we do.
