Since my previous essay, OpenAI and the False Prophecy of “AGI”, was quite long, I figured I should make an internet-friendly, read-on-your-coffee-break version to summarize its main conclusions.
1. “AGI,” as presented in OpenAI’s Charter, is an archetypal Millenarian myth (or Messiah myth)
With the Messiah or Paradise-inducing event being, of course, “AGI.” These myths have been found across cultures for thousands of years, and this one fits the archetype perfectly. It contains all the core assumptions: (1) the event (“AGI”) will happen, no matter what, in the near future, (2) it will be a discrete event (one moment in time), and (3) it will benefit humanity immensely.
Meanwhile, all the basic concepts—what “AGI” is, how it will “benefit humanity,” how it will be “autonomous,” why it has a risk of “harming humanity”—are left vague or undefined.
Useful links for cross reference: Charter, Messianic and millenarian myths (Encyclopædia Britannica), Millenarianism (Wikipedia)
2. Many, but not all, people working on “AGI” believe the myth quite literally (others, though, may believe a different kind of myth)
My assessment of believers in the OpenAI millenarian myth comes from a small number of conversations with OpenAI employees. More importantly, though, ML practitioners at large, even if they don't believe that exact myth, may hold a subtler belief at the back of their minds: that the field of ML is on the path to something resembling a sci-fi AI character. In fact, the direction of the field is driven by its most profitable applications (e.g. advertising and media, labor automation, and financial prediction).
3. The myth is made believable by the subtle manipulation of language and indirect PR
This manipulation of language is quite Orwellian. And there are a few specific ways the language manipulation is done.
One is the use of terms with multiple meanings. "AI," for example, refers both to a real technology/field of study and to fictional sci-fi characters. This lets marketers smuggle in subtle assumptions: they describe the technology as having a risk of "harming humanity" or being "unsafe" without ever saying what could be harmful or unsafe. Though these assumptions are completely unfounded, the trick works because we carry prior ideas about "evil AI" from the movies. The assumption also gives a misleading impression of what the technology is even supposed to be (i.e. it implies it should be something like what we see in film).
Another is the use of imprecise but not entirely inaccurate terms like "autonomous."
As for the PR, it is always cool, but it doesn't reflect the real business, which is where the incentives for developing the technology come from in the first place. For any profit-making technology, it is probably possible to devise some cool, innocuous-seeming application that misleads the public and encourages research serving the profit-making use.
4. The manipulation of language is used often in Silicon Valley as a whole
For example, the most basic manipulation is with the term “technology.” The fact that we use such a blanket term allows marketers/investors to subtly trick us into thinking an NFT is in the same category as a washing machine. Or that a marketing analytics tool is in the same category as an automobile. Or even that WeWork is a “technology” company.
Instead, we should judge products and companies on a case-by-case basis (and look to their incentives and motivations). There really is no such thing as “technology,” merely individual people and organizations doing and building certain things, which may or may not be useful or empowering.
5. Any “AGI” research that compares machine “performance” to humans contains a notable fallacy, the “task fallacy”
First of all, the whole obsession with "performance" and "outperforming humans" should show that such companies are not creating humanistic products. Rather, they're following the incentives of capital accumulation, which are largely human-antagonistic.
And speaking purely logically, “performance” on well-defined “tasks” that are measured by some predictive metric—no matter how many tasks it is—is never a good measurement of what it means to be human, or any kind of independent being. This is the fundamental flaw of “AGI” research.
But of course the research, again, is not motivated by creating a sci-fi character, or a "being." It is motivated more by creating a greater hyperdivision of labor and more bureaucracy in the name of a more efficient economy. The zweckrational.
6. The real ML behind OpenAI’s “AGI” is for various forms of digital automation—this will not “benefit humanity” (at least not in our current economic environment)
For one, it will concentrate more capital in the hands of the already wealthy, since the automation technology is not owned by the general public. Furthermore, it will likely increase bureaucracy across society. More bureaucracy makes us smaller cogs in bigger machines, and makes our economic experiences (such as going to the doctor or flying on a plane) choppier, more impersonal, less human.
Of course, if our economic system is regulated or altered, this could be different.
7. Silicon Valley, at least at the present moment, is more focused on marketing and creating silly myths to hide its drive towards capital accumulation than on creating human-empowering products
OpenAI is an extreme case, but not that extreme. It's common startup advice to create a "cult" or a "movement," and big companies are even worse. This follows from basic incentives; it's how our unregulated economic system works.
Of course, there are many exceptions to this tendency, even among large corporations: people who choose to empower others by building visionary products (or who fight to do so themselves without anyone's permission). And thank god for that. But so long as the overarching incentives stay the same, the general tendency will stay the same too.
8. The reason there are so many “cultlike” companies in Silicon Valley is because of differences between digital markets and physical markets
If you dominate physical infrastructure (say, oil fields or a transportation network), you don't need to spread myths because, well, you'll dominate either way. People need you. Similarly, if you're in a market that's obviously useful and ethical, like cancer research, you don't need to spread myths because, well, "curing cancer" sells itself.
But if you have a digital product (where you don’t also dominate some physical land or infrastructure) and what you’re building is not obviously useful, then you can only dominate by spreading misleading ideologies. Rather than territorializing land, you have to territorialize people’s psyches.
9. If we want ML research (or software development in general) to have any chance of being broadly beneficial, we need more regulation and/or to fight the predominant economic system
Right now, every ML “breakthrough” is going towards making addictive media products more addictive and towards advertising that exerts more control over consumers. So this needs to change.
And in fact, the whole direction of ML research is driven by this use-case and a small number of others (e.g. digital automation, as in the case of OpenAI, and various kinds of financial prediction). There are, of course, great use-cases of ML, but we need regulation to choke off the bad ones; otherwise, every breakthrough may be net-negative for the general public (though we may, individually, still be able to make a fat paycheck).
Luckily, regulation is doable! As is fighting the system. I mean, it won’t be easy, but local organizations everywhere, such as social democratic organizations, are working together to push for positive economic change. And they’re quite social.