OpenAI and the False Prophecy of “AGI”

Prior to the digital age, dominating a market was mostly a physical matter, since producing physical goods or services requires owning physical land or infrastructure. In the digital realm, though, and especially on the internet, the rules are different. This is because, for one, the internet has no finite territory. And for two, digital products are built largely from software, and software can be replicated at next to no cost.

This means that the strategy for dominating digital markets has shifted, especially for purely digital markets (ones without a major physical-world component). In these cases it is no longer enough to merely own land and resources, because we’re operating in a world without finite land and finite resources. Instead, what matter strategically are (1) the portals through which we enter our digital worlds, and (2) our desires to do certain things and go to certain places in these digital worlds.

Portals are important, but these are controlled by a select few. A more commonly accessible and finite “resource,” though, is people. And so the strategy to dominate many digital markets has shifted from the territorialization of real estate and inanimate materials towards the territorialization of people’s desires and psyches. (This, of course, is not new to the digital/internet age, but its importance and prevalence have increased.) Desires, for example, can be swayed through positive messaging and imagery, so this is a crucial strategic component. But desires are often fleeting, so it’s useful too to establish ideologies and beliefs that will stick around for a longer term.

Hence the rise of the modern “cultlike” technology company, one whose strategy involves the establishment of some set of beliefs (in their customers, employees, investors, etc.). This term “cult” gets thrown around a lot to refer to tech companies, and rightfully so. But it’s important to note that there is a large variance in the level of “religiosity” of such companies. And this variance is related to the market incentives we were discussing. 

Specifically, the need to be cultlike is inversely proportional to both (1) how much your product is tied to physical infrastructure, and (2) how obviously useful your product is. For example, if you are a company like Amazon, which has digital portals (e.g. the Amazon shopping website and the AWS dashboard) but whose dominance is largely dependent on physical infrastructure (e.g. transportation and data centers), then you have less need to establish some kind of misleading belief system. And if your product is obviously useful and ethical, such as a test or treatment for cancer, you have less need as well. “Curing cancer” kind of sells itself, after all.

However, if your product or your market involves neither of these—it’s neither physically-tied nor obviously useful—then, well, establishing an ideology may be the only path to dominance. 

Which brings us to the topic of this essay: OpenAI. In a world of cultlike companies, they—if we look just a little below the PR surface—stand out as truly Davidian. And this is due largely to their market characteristics: They do not own physical infrastructure, which is why they’ve partnered closely with Microsoft and Microsoft’s Azure cloud. Also, they are not building something obviously useful. In fact, they do not make it clear what exactly this “AGI” they are building is even supposed to be. And yet, they are doing quite well thus far through a combination of flashy PR and deceptive ideology.

What is this deceptive ideology? Well, let’s take a look at the scripture itself: OpenAI’s Charter.

1. The Charter

The Charter, released in early 2018, is quite a remarkable document. It consists of merely one paragraph (well, two if we include the announcement paragraph) followed by eight bullet points, with each bullet point consisting of one to two sentences. And yet it contains a heavenly multitude of bullshit and misdirection. 

So much so that it’s going to take us a while to sift through all of it. But let’s go ahead and start from the beginning, from the announcement paragraph.

1.1. The Opening Paragraphs

We’re releasing a charter that describes the principles we use to execute on OpenAI’s mission. This document reflects the strategy we’ve refined over the past two years, including feedback from many people internal and external to OpenAI. The timeline to AGI remains uncertain, but our charter will guide us in acting in the best interests of humanity throughout its development.

The first thing to notice, though it’s a minor observation, is that most companies do not have a publicized “charter” like this. They do have charter documents and bylaws (i.e. formal legal documents) governing how their board operates. But they do not have an informal publicized “charter.”

On the other hand, many companies do have a publicized “mission statement,” or perhaps a “white paper,” a “manifesto,” or a set of “core values.” All of those phrases indicate these are our principles. “Charter,” though, has a different connotation: that OpenAI is bound (e.g. legally) to follow whatever is contained in the document. 

In reality, though, the company is not bound at all. But who may in fact be bound are the employees of OpenAI, since adherence to the Charter—and even holding others in the organization accountable for adhering to the Charter—are crucial criteria for being promoted (this coming from the exposé by Karen Hao).

Anyways, more important than the minor issue of “charter” is the last sentence of the paragraph: “The timeline to AGI remains uncertain, but our charter will guide us in acting in the best interests of humanity throughout its development.”

This sentence contains several notable assumptions: that “AGI” is an inevitability, that “AGI” is a well-defined “thing,” and that there will be a discrete moment in time where we can say that this “thing” exists.

But if “AGI” is not well-defined—if instead (and this is purely a hypothetical) it’s a conceptual play-doh to be molded as they see fit—well, these assumptions would serve to absolve OpenAI of responsibility for what they’re building. Because the message, effectively, is: this “thing” is getting built, whether by us or by someone else.

But wait! In the next paragraph, they do define “AGI.” Surely this will clear things up.

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles:

“Highly autonomous systems that outperform humans at most economically valuable work.” A lot to unpack here. 

First, sadly, it seems that what I said about the play-doh was accurate. This definition effectively defines “AGI” as whatever performs some “economically valuable” work. An amorphous definition that allows them to seek profit in any technologically possible way.

Secondly, I want to point out that the phrase “highly autonomous” is highly suspect, especially in light of OpenAI’s current products (i.e. their cloud APIs and GitHub Copilot). Why? Well, behind any modern “AI” application is machine learning, or more specifically, deep learning. And deep learning, like any piece of software, is simply a tool targeted by a person or organization for a particular purpose. Deep learning, in fact, is no different from any other software tool in this regard.

Now, to be fair, there are some specific products or applications that could rightfully be called “autonomous.” A self-driving car would be one example, though, in that case “self-driving” is more precise than “autonomous” given that such a car is still a tool to take you from A to B as you specify. If my coffee pot is programmed to start brewing before I wake up, that too could be said, with some degree of accuracy, to be “autonomous.” But this notion of autonomous-ness is quite a different matter from “intelligence.” It’s more a reflection of the product or application being a kind of moving, functional device. And it’s important to note that none of OpenAI’s current products could be said to be “autonomous.” 

For example, though GPT-3 could be said to be “smarter” than GPT-2, it is not more “autonomous.” That’s not to say OpenAI will never build more “autonomous” applications in the future. But given their current focus on the cloud, to which, presumably, Microsoft is directing enormous resources (a $1 billion investment), it seems likely that this phrase “highly autonomous” is there more for the sake of misdirection.
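The coffee-pot sense of “autonomy” mentioned above can be made concrete with a minimal sketch (purely illustrative, not any real product’s code): the device acts without moment-to-moment input, but every behavior was specified in advance by a person.

```python
from datetime import datetime, time

# The owner, not the pot, chooses the schedule.
BREW_TIME = time(6, 30)

def should_brew(now: datetime) -> bool:
    """The pot 'decides' nothing: it checks a human-set schedule."""
    return now.time() >= BREW_TIME

print(should_brew(datetime(2022, 1, 1, 7, 0)))  # True
print(should_brew(datetime(2022, 1, 1, 5, 0)))  # False
```

This kind of “autonomy” is just deferred instruction-following, which is why it says nothing about “intelligence.”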

It’s worth noting, by the way, that the reason for using this phrase may be similar to reasons for using the term “AI” as opposed to the more precise terms of ML and DL (machine learning and deep learning), or the term “model” when referring to a particular instantiation of ML software (as all ML researchers and engineers would refer to it). In marketing—by OpenAI as well as Google, Facebook, and others—it is preferred to always use the blanket term “AI” because it may help to distract from what we were just discussing: the fact that these ML models are always targeted by an organization for a particular purpose (and always using some data). “AI,” by tying into our common sci-fi notions of AI characters, helps to distract from all that by making us think that this software is more “autonomous” than it really is.

Or, to put what I’m saying another way: the only real autonomy in these corporate applications is that of the investors, executives, and researchers/engineers.
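To make that point concrete, here is a minimal sketch (hypothetical, not any company’s actual code) of what a “model” is underneath: parameters fit to data an operator chose, toward an objective the operator chose. Nothing in it acts on its own.

```python
def train_linear_model(data, steps=2000, lr=0.01):
    """Fit y = w * x by gradient descent on mean squared error."""
    w = 0.0
    for _ in range(steps):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# The operator picks the dataset (here: y = 2x) and the objective (MSE);
# the "model" is just the number w that falls out.
dataset = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train_linear_model(dataset)
print(round(w, 3))  # 2.0
```

A production model has billions of parameters instead of one, but the structure is the same: data, objective, and deployment are all human choices.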

Importantly too, and perhaps surprisingly, the overusage of “AI” and “autonomous” even aids in establishing a subtle belief in the minds of many expert ML researchers and engineers: that the field is on a quest to create some character of sci-fi. Which kind of character is, of course, never specified. And the reality of the research agenda is that it’s mostly being driven to increase profits in specific application areas (e.g. advertisement and media products, automating labor tasks, and financial prediction).

It may seem surprising that even expert researchers could hold such a belief, but it’s possible because the belief is only very rarely at the forefront of our minds. Most of the time, we’re working on a specific model for a specific application, which is often an engaging intellectual task. So it’s enough to merely have some plausibility that such work could contribute to some kind of positive sci-fi future. Sadly, though, such a belief is nothing short of a leap of faith given the current profit-driven research agenda.

Anyways, this may sound like quite a lot of information merely on the terms “autonomous” and “AI,” but this matter is crucial to the whole narrative of OpenAI (and much of Silicon Valley, for that matter).

This also brings us to another important topic in this paragraph: “safety.” As just discussed, this software is fully wielded and fully targeted by OpenAI (and the clients of their cloud API service) for their own purposes. And yet, we speak of “safety.” This use of language is one way to further give the false idea that the software is an entity-in-itself. But this is also a simple case of fear mongering, a classic and historical propaganda technique. 

Fear mongering, in this case, serves multiple purposes. For one, speaking of the idea of “inevitability” we discussed earlier, if we’re fearful of someone building some scary thing, then we’ll think: We must also build this scary thing. It’s the only way to defend against the other scary thing. Though, in this case, no one has even defined what the “thing” really is.

Secondly, fear mongering also serves to increase the speculated value of companies and to attract attention. Because, well, if this technology is powerful and scary, it must work pretty damn well. Not to say that OpenAI’s technology doesn’t “work well.” I mean, it certainly does some new and flashy things for the payment of a few dollars (with DALL-E being like the digital equivalent of a toy capsule vending machine).

But what exactly are we being fearful of? I mean, if we are to be fearful, should we really be fearful of these models, simple figments of software? Or should we perhaps be fearful of OpenAI and other corporations that are targeting them for various purposes? Or the data about us that they are storing and using?

One final thing to discuss about this paragraph before moving on is the idea of automation and whether it will “benefit humanity” as they say. Technically, they did not use the word “automation,” but in saying systems that “outperform humans at most economically valuable work,” it is implied. This is a complicated question, though, and separate from our analysis of the Charter, so let’s save it to be revisited later (section 3).

1.2. The “Principles”

Let’s take a look now at the eight bullet-points or “principles.”

Broadly Distributed Benefits

* We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

* Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

“Avoid enabling uses…that harm humanity or unduly concentrate power.” First of all, this point about unduly concentrating power is quite hypocritical given that they’re partnered with Microsoft and that their investors include some of the wealthiest people in the world. But then again, they likely have a different definition of “unduly” than the general public. After all, Sam Altman, the CEO of OpenAI, has said: “We need to be ready for a world with trillionaires in it.” So we can see where his priorities lie.

Secondly, in light of the fact that they, again, did not clearly define “AGI” in the first place and the fact that they did not tell us in what ways “AGI” could “harm humanity,” this quote about harming humanity sounds quite out-of-place. Even creepy. It almost sounds like they’re preparing us, or giving themselves permission, to harm humanity. I mean, why else are we even talking about harming humanity? You didn’t tell us what could even be harmful. 

In other words, a literal reading of the Charter finds no description of what “AGI” is exactly and why it could possibly be harmful. And thus, a literal reading would find the mention of “harm[ing] humanity” bizarrely unfounded. Appearing out of nowhere. However, the reality is that they’re not intending for readers to do a purely literal reading of this document.

Instead, they’re playing on our preexisting notions of “evil AI” from science fiction. Skynet and the Terminator. HAL 9000. The Borg. If we come in with these prior connections, then harming humanity may seem a perfectly reasonable concern. But, as discussed earlier, the “AI” we’re developing with deep learning is utterly unrelated to sci-fi AI characters. (And in fact, the real Borg is more likely to be OpenAI—and wider forces of Silicon Valley capitalism—indoctrinating us into drones and fusing us with addiction-optimized digital technology.)

Next, let’s look at the second bullet and specifically “needing to marshal substantial resources to fulfill our mission.” This also is basically giving themselves permission to do things like take massive investment from Microsoft or turn what was once a non-profit into a for-profit. But this permission that they’re giving themselves knows no bounds. And this is because their “mission” is not well-defined. Let’s look back to a quote from the second paragraph: “We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.” If their mission includes anything that aids others, what is its stopping point?

The Charter rests on a big assumption that “AGI” is a discrete event. But here in these bullets they refer to “AI or AGI,” which undercuts the idea that they are necessarily working towards some discrete event.

And, in light of this and everything else we’ve discussed, if we take an honest look at what they’re saying, we find that none of this—not the “mission,” not the “AGI,” not the “benefiting humanity,” not the “autonomousness”—has any strict meaning at all. All of these empty terms are there to give the semblance that OpenAI is a moral entity, even if their actions and incentives may suggest otherwise. Even if they’re primarily seeking 100x returns for their investors, even if they encourage employees to police each other for sins against the Charter, well, it’s all in the service of the greater good. Whatever that greater good may be.

Long-Term Safety

* We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.

* We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”

This is more of the same: granting themselves permissions. “Driving the broad adoption of such [AI Safety] research across the AI community.” As I argued, the whole idea of “safety” is both a distraction from the fact that the only real autonomy in any ML system is the autonomy of its makers, and a piece of fear mongering. But here, they are giving themselves permission to spread this misleading ideology further.

The second bullet appears to give them permission to be acquired by some other corporate entity and again reinforces the unfounded assumptions that “AGI” is both inevitable and will be a discrete event.

Technical Leadership

* To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient.

* We believe that AI will have broad societal impact before AGI, and we’ll strive to lead in those areas that are directly aligned with our mission and expertise.

These bullets, which speak to OpenAI’s focus on technology, seem, at first glance, obvious: Of course OpenAI is seeking to improve their technology. But upon a deeper look, they contain some notable subtleties. For one, this is the first mention of “policy and safety advocacy.” Sneaking this phrase in here plants the assumption that, Oh, we will do policy and safety advocacy too. In other words, we may fund or lobby towards favorable policies and ideologies.

Furthermore, coming back to the idea of absolving their responsibility for working on “AGI,” these bullets again serve to absolve that responsibility. According to this document, “AGI,” that ever elusive concept, is a certainty, and simply waiting on the sidelines is not an option.

Cooperative Orientation

* We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges.

* We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.

These final bullets are again giving themselves more permissions. In this case, to “cooperate” with other institutions of power, as well as to, at any point, stop making their research public.

2. The Disciples/Employees

If we take a step back, we can see that OpenAI’s Charter is a simple Millenarian or Messiah myth: The Messiah (“AGI”) will come in the near future. We will prepare for the coming, but no matter what, it will happen. Every action we do, however it may appear, is all to bring about this divine intervention. And though we may collect profits, though we may spread falsehoods, trust us: it is all for the future Paradise that the Messiah will bring.

In other words, an ancient and universal human myth that’s appeared in countless cultures. But in this case, given that it’s operating under the investor-controlled, grow-forever capitalism of Silicon Valley, it’s merely a way to hide aimless capital accumulation behind a veil.

So, given all this, the deceptiveness and folly of OpenAI’s ideology, a natural follow-up question is: do people at OpenAI really believe it? 

Well, I recently talked to a few OpenAI employees and probed at exactly this question. And the result, at least from my small sample, is a mixed bag: many but not all employees do buy into this religion of “AGI.”

One employee, an engineer, was quite practical. He said effectively: “The company will talk about ‘AGI’ but from my perspective the business is what matters. For me, I’m interested in making my own company in the future and the OpenAI brand will help me.”

Another employee, though, gave me concern. Actually, it was these concerning answers to simple questions that first led me to take a closer, more skeptical look at OpenAI. At the time, I didn’t know much. I thought they simply did some interesting research. But after talking to some employees, I was left with a weird feeling in my stomach, like something was very “off” here.

Anyways, this employee was a recruiter. She mentioned “we want to make sure our AGI is safe.” So I simply asked: “what are you making the AGI for exactly? Because the safest ‘AGI’ is no ‘AGI,’ right?” She stumbled through a few possible answers and eventually got to “Well, why did we make the Industrial Revolution?” After discussing further for a few minutes, though, she too seemed genuinely curious to have the answer and pointed me to someone else to ask.

So, yes, definitely a strange answer. But we should be careful to not poke fun at this particular recruiter because this belief, or one very similar, is one that many people in Silicon Valley hold, even expert technologists. The belief that Silicon Valley is by default making the world a better place as did the Industrial Revolution. And given our terminology, it makes a kind of sense: after all, Silicon Valley is producing “technology” as did the innovators of the Industrial Revolution. And early “technology” made the world a “better place.” So why wouldn’t ours also?

If you question this belief, a common response may be: Would you rather live 200 years ago?

But, unfortunately, this is not sound logic. And the reason, as with the Charter, comes back to a misuse of terminology. First of all, although we have a phrase for the “Industrial Revolution,” it’s wrong to treat it as a discrete event when in reality it describes a whole historical period. A period that involved many individual people’s choices and actions. And also, even if the “Industrial Revolution” did “make the world a better place,” we still need a careful comparison of the kinds of “technology” we are developing today to the kinds early industrial innovators developed. I mean, does an NFT have as much utility as a washing machine? What about a marketing analytics tool versus, say, an automobile?

Finally, I talked to one fairly high-up director at the company (the person the recruiter pointed me to). He again mentioned “safe AGI,” so I asked him the same question I asked the recruiter: “Why do you want to make this ‘AGI’ if it is so ‘dangerous’ as you say?” He answered, “Well, it will create abundance.”

Unfortunately, though, he wasn’t able to tell me what he exactly meant by “abundance,” and he promptly shifted the conversation. At this point in the conversation, though, I couldn’t just move on. And I wasn’t really sure if this was just a marketing spiel of his or a real belief, so I asked him directly: “This ‘AGI’ stuff, is it just marketing? Or do people at the company really believe it?” He looked confusedly taken aback and paused for a moment before saying, “No, people really believe it.”

3. Digital Automation, Not the Promised Land

At this point, we should be hesitant to trust anything OpenAI is doing. For one, they’re spreading this strange mythology. If they truly believe it, we should be concerned. And if they don’t truly believe it (and are actively crafting it for the sake of manipulation), we should also be concerned. And secondly, despite their “Open” name they are now tied to market incentives and seeking first and foremost returns for their already wealthy/powerful investors and executives. 

But I know, even upon hearing all this, many people who have been impressed by OpenAI’s technological feats (for example, GPT-3 and DALL-E) may still think: Well, they must be doing something right. We should encourage them to keep going. This technology could be really useful.

So it’s worth also considering this more practical question. 

First, though, a clarification: as a former ML engineer, I will readily admit that what OpenAI has done from a pure technological difficulty perspective is incredibly impressive. I mean, deep learning, although it has become more standardized over the years, is lightyears from easy for many, many reasons. Getting a model to properly fit a large dataset is a subtle art. And to be honest, although I got better at it over the years at Google and PathAI, I never had the “magic touch” that some people who really understand all the subtleties and nuances do.

Also, another important clarification: technology itself is absolutely not the problem. I mean, I personally love programming. And there’s no doubt that it’s an elegant and creative craft with great transformative potential. For example, the book Masters of Doom tells a wonderful story of the power of software visionaries who are not motivated primarily by money. And, for example, although I find social media when it’s directly optimized for youth addiction quite abhorrent, I find our new voice assistants quite cool and useful. All this being said, we cannot simply group all “technology” together. An NFT, whatever NFT investors may tell us, is categorically different from a washing machine. A marketing analytics tool is different from an automobile. And thus, particular products should be judged on a case-by-case basis. 

Of course, the establishment wants to brand “technology” as one monolith (one that even includes such companies as WeWork) and to not encourage critical assessment. But we should rate products the way we rate movies or the way we rate restaurants. It’s a bit subjective. We can’t always predict what a company will do. But it’s good to be skeptical. And furthermore, it’s good to look at motives and incentives because, as we can see with the Charter and most other PR and marketing, we can’t simply trust what corporations tell us.

Anyways, this brings us to OpenAI’s aim to build some digital software (“AGI”) that “outperforms humans at most economically valuable work.” In other words, to develop some forms of digital automation.

First of all, even before considering the software itself, their aim to “outperform humans,” coupled with their requirement to create massive ROI, sounds quite clearly antagonistic towards humankind. Let’s not forget that, you know, it’s also possible to empower people. Through better education, better training, and better tools, people can be made better. I love the film First Man about Neil Armstrong because it shows exactly this point. Through training, self-study, passion, and opportunity, he became an impressively competent human being, mentally and physically. And I really believe all people are capable of that. Also, we can strive to make our society a better place for human thriving.

Doesn’t that sound like a better goal for technology than “outperforming humans”?

But let’s also consider the software OpenAI is making. It may sound like they are antagonistic towards people, but GPT-3 and DALL-E seem cool, seem fun. How could those be antagonistic?

Well, one thing to keep in mind is that the examples we see in press releases never match the real profit-making use cases. If we imagine a pie chart denoting where the money comes from, the PR use-cases make up a tiny sliver. Or, in many cases, like with OpenAI’s Dota bots, no sliver at all (they are not seeking to make money from Dota directly, after all).

More practically, automation whose means of production are owned by a corporation rather than the general public tends to accumulate capital for the owners, not the public. There are narratives suggesting that such capital will “trickle down” to the general public, but believing them requires yet another unfounded leap of faith.

Luckily, some grassroots organizations such as EleutherAI are working to make open source equivalents to OpenAI’s tech. It’s possible that tech can be owned and shared by the general public. I mean, that was supposedly the original idea behind “Open”AI, but if they won’t do it, luckily others will fight for it.

Speaking of automation and “outperforming humans,” there is also a serious problem with the notion of “outperformance” found in the “AGI” research of OpenAI, DeepMind, and others. It’s what I call the task fallacy.

The idea is that although machines can outperform humans at certain well-defined tasks (and the number of such tasks and degree of outperformance will only increase in the future), we should be careful not to conflate such outperformance on well-defined, controlled tasks with some holistic notion of “performance.” In other words, with the holistic notion of how technology can increase human thriving, or improve the overall “user experience” of living in the world (that is what our goal should be, right?).

Let’s consider an example: digital pathology, which I worked on at PathAI. There are many studies comparing a DL model’s performance on classifying cell types to an “average doctor” in a controlled setting. In many cases, the predictive accuracy of the models has been shown to be higher than the average doctor’s and furthermore, that doctors have a high rate of disagreement between them. Now, it may be the case that this indicates we should use this model in clinical settings. But let’s be mindful of the precise facts here. 

First of all, as we said before, we could work to make the “average doctor” better. Secondly, although doctors may disagree, we have lots of practices that take this into account. For example, we most often have multiple doctors checking results (and doctors have the power to bring in other specialists as needed). Furthermore, “predictive accuracy” is just one metric. There are a multitude of other considerations that go into any assessment of a real person’s condition. And especially for complicated cases, people need to be able to do holistic assessments, understand concepts, and dig deeper (e.g. maybe we don’t even have the right data).
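The narrowness of these metrics can be shown with a toy sketch (hypothetical labels, not data from any real study): a model can beat an individual doctor on accuracy, and two doctors can disagree with each other, while neither number says anything about the rest of care.

```python
def accuracy(preds, truth):
    """Fraction of labels matching ground truth -- one narrow metric."""
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

def agreement(a, b):
    """Fraction of cases where two annotators give the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Hypothetical labels for five cases.
truth    = ["tumor",  "benign", "tumor",  "benign", "tumor"]
model    = ["tumor",  "benign", "tumor",  "tumor",  "tumor"]
doctor_a = ["tumor",  "tumor",  "benign", "benign", "tumor"]
doctor_b = ["benign", "tumor",  "tumor",  "benign", "tumor"]

print(accuracy(model, truth))         # 0.8
print(accuracy(doctor_a, truth))      # 0.6
print(agreement(doctor_a, doctor_b))  # 0.6 -- doctors also disagree
```

The model “outperforms” on this one metric, but the metric captures none of the second opinions, deeper workups, or questions about whether we even collected the right data.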

And finally, there is one more important aspect to the task fallacy, which is related to its practical usage in the world. Specifically, if we break everything, such as treating a patient, into a series of predictive tasks without being mindful of the overall “user experience,” well, the result is simply a bureaucratic nightmare. What could be a pleasant, holistic experience becomes choppy and impersonal. And if money shifts to the pockets of the owners of digital doctors, that will likely mean less money in the pockets of doctors, and fewer doctors there to improve the system over time and provide that human element. 

In theory, technology can be used in brilliantly ergonomic, empowering ways, and we should fight for these uses. But importantly, growth-oriented companies are not directly incentivized to optimize for empowerment. So again, we need to be skeptical of individual companies and products, and we need to work towards better incentives.

4. The Bigger Picture

OpenAI is an extreme company. I mean, they’re literally trying to indoctrinate us with a modern-day millenarian or messianic myth. They’re quite literally creating a kind of religion, fueled by over a billion dollars of capital from Microsoft. And their doctrine speaks of “benefiting humanity” whilst all logic suggests otherwise.

And yet, it’s not that extreme. Not for Silicon Valley. In fact, it’s just common, run-of-the-mill startup advice that one should try to start a “cult.” Or a “movement.” To quote the well-known venture capitalist, David Sacks:

The best founders talk eloquently about their mission and the change they want to make in the world. They speak about something larger than dollars and cents. They articulate a vision of the future that attracts adherents. They create a movement.

He wrote this in his essay, Your Startup is a Movement, which is essentially a playbook of Machiavellian marketing techniques. One that never mentions the basic concept of, you know, making a product that’s actually fucking useful.

Meanwhile another notable venture capitalist, Chamath Palihapitiya, has openly referred to Silicon Valley as “a dangerous, high-stakes Ponzi scheme.” One where, as always, working-class people are set to lose the most.

And it’s not just startups. The problems with big corporations are even worse. At companies like Google, Facebook, and TikTok, deep learning is most profitably used to better harness user data for advertising and for media products, with the media products directly optimized for “engagement.” In other words, for territorializing our desires, or what could simply be called “addiction.” Of course, they try to make sure that the precious human time of ours that they harness is somewhat worth our while. That people don’t regret it too much. But their primary aim, as I’ve seen firsthand, is “engagement.”

Mind you, not every online, social platform or forum does this. It’s not a problem with technology itself (those inanimate tools that are always wielded by people for particular purposes). Merely with the incentives of an unregulated market.

Now, I think we can all agree that this is not technology at its best. And that it’s not the kind of technology people naturally are motivated to build of their own accord. Merely what’s encouraged or required in the current system.

But luckily, there are solutions—if we’re willing to fight for a better system.

5. A Better Path Forward

I respect the work of groups that seek to make software like what OpenAI is developing actually open, actually owned by the general public. But if we want a better world, we have to go farther and question the incentives that are driving the development of specific technologies in the first place. 

So here are a few large-scale action items to improve the current state of affairs. Mind you, these are just a few of my own, possibly naive, ideas. So I encourage you to do your own assessment. And of course, I’m happy to discuss and work together (and there are many local groups already working together towards similar kinds of change).

Item 1: Regulate Data-Driven Media Products and Advertisement

If we made a pie chart representing the money that is made from deep learning, we’d find that an inordinately large portion comes from advertising and “engagement”-optimized media products (and even OpenAI’s research can be used towards these aims, as well as some of their tools). These use-cases are the lifeblood of the ML juggernauts Google, Facebook, and TikTok, after all.

How is deep learning applied to such use-cases? Well, the basic idea is that massive amounts of personal user data can, through advanced statistical modeling, be turned into approximate models of real consumers. And those models can then be used to direct—to some extent, control—consumer behavior.
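As a rough illustration of what such a “model of a consumer” can look like (every feature name and weight here is invented for illustration, not any company’s actual system), consider a toy logistic model that turns a user’s behavioral profile into an estimated probability of clicking a given ad:

```python
import math

# Hypothetical sketch: a user's behavioral data feeding a logistic
# model that estimates P(click) for a fitness ad. Feature names and
# weights are invented; real systems learn weights from logged data.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# In a real system these weights would be learned from billions of
# logged interactions; here they are made up for the example.
weights = {
    "hours_on_platform_per_day": 0.8,
    "past_click_rate": 2.5,
    "topic_affinity_fitness": 1.2,
    "bias": -3.0,
}

def click_probability(user):
    """Estimate the click probability from a user's behavioral features."""
    z = weights["bias"]
    for name, value in user.items():
        z += weights.get(name, 0.0) * value
    return sigmoid(z)

user = {
    "hours_on_platform_per_day": 3.0,
    "past_click_rate": 0.4,
    "topic_affinity_fitness": 0.9,
}
print(click_probability(user))  # roughly 0.81
```

The point of the sketch is the direction of inference: the more behavioral data the model ingests, the better its per-user estimates become, and the more precisely behavior can be steered by choosing what each user is shown.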

Importantly, though, so long as this use-case makes up a huge portion of the pie chart, we have to be wary that any new ML “breakthrough” could simply allow for more consumer control. Actually, we should not even bother being wary. We can be certain that it will happen. 

It’s a sad fact, but it’s simply our reality until something changes. And although I have no doubt it can change, the corporations are not going to change it out of the goodness of their hearts. They are, in fact, actively pushing in the opposite direction. So far in the other direction that companies like Facebook will acquire and use a fake VPN service to spy on teens without their knowledge.

Item 2: Funding With Better Incentives

It’s great that Silicon Valley funds technologists to build things. But again, “technology” is not a monolith. If I build a machine to paint the sky purple, well, maybe that is “technology,” but is it worthwhile? What if I build the latest and greatest in pump and dump scams? 

In other words, although we have a lot of capital that is ostensibly funding “innovation,” and it’s easy to see this, from a personal perspective, as a world of opportunity—I could make loads of money by innovating!—the reality is more problematic.

First, by taking equity, investors can end up controlling the majority of a company. Even if, as in the case of Google, they do not own a majority of the voting shares, they can still hold a majority of the board seats and thus effectively have control. It’s important to note that this total focus on equity-based investing is a relatively new phenomenon. If funding is absolutely needed, it would be better to have some debt-based funding, which lets founders maintain control and skin in the game. And which also wouldn’t require the company to keep growing forever.

The company Basecamp and their founders are great proponents of this kind of philosophy. And they are quite financially successful too, so I recommend reading and listening to what they’ve said on this.

It could also be better to pivot to government funded innovation for hardware and software, like a computing equivalent to the NIH. This of course would come with a whole new set of concerns (and would certainly still end up tied to corporate interests), but it could be an improvement if implemented well. 

Item 3: Regulate Corporate Political Donations

Ever since the Citizens United v. FEC ruling, there has been no limit on independent corporate spending on political campaigns. And given the ubiquity of advertising and the fact that many of us don’t put a ton of research into every political candidate, this means that, to a large extent, politicians can be bought by corporate entities and/or wealthy individuals.

Naturally, this allows the wealthy and powerful to maintain the status quo and even shift more capital away from the general public and into their own hands. So we have got to put a stop to this legalized bribery (though regulating data-driven advertising would also help by taking some power and incentive away from it).

Perhaps the connection between this and the problems with technology is not obvious, but it should be: tech corporations and/or investing groups can effectively buy politicians, who then block any systemic changes or regulation that would go against them. I mean, why, in the face of incredible evidence (and simple common sense) pointing to mental health problems due to social media—problems on a never-before-seen societal scale, problems which continue to increase at a rapid pace—is there never any regulation in America?

Item 4: Software Against the System

I’ve found an interesting pattern: software that is the most ethical—or the most rooted in our social reality—is often the most interesting and elegant software too. For example, the free and open-source software movements have developed incredible, extensible tools. And Bitcoin, although it spun off into many get-rich-quick crazes, was created by cypherpunk hackers seeking an alternative system in the wake of the 2008 financial crash. They weren’t seeking fame or funding, and if they had been, they wouldn’t have come up with such a breakthrough.

So we need more hacking like that, and really, there are a million ideas and things to do. Software to increase transparency and fight false narratives. Software to increase accountability. Software to empower normal people. Software to better organize together. Software to give real independent nonprofits a voice. Or just lovely, user-centric products to show that products can be better when they’re not focused primarily on money and metrics.

Conclusion: If we really want to transform society for the better, we can’t just pray for a Messianic intervention. We can’t just believe in silly myths and shut off our critical thinking. No, we’ve gotta face up to real problems honestly and critically, and work together.

