“I saw the best minds of my generation destroyed by madness” -Allen Ginsberg, Howl
AIpocalypse?
In their recent documentary, The A.I. Dilemma, Tristan Harris and Aza Raskin of the Center for Humane Technology make a heartfelt plea to imagine a world where "Gollem Class AIs" conspire to replace humanity, saying that this is nothing short of a potential extinction-level event. Generative Large Language Multimodal Models (GLLMMs, aka Gollems) are the class of AI of which ChatGPT is just one example. Harris and Raskin correctly claim that if a technology confers power, it will start a race, and that race is clearly visible in the frantic pace at which Microsoft, Meta, Google, and OpenAI are competing to become the dominant player. All of them already have very sophisticated AI, and all are embracing GLLMMs to augment their current capabilities.
There has been a great deal of other breathless commentary about the dangers of AI. Geoffrey Hinton quit his role at Google in order to warn the world about those dangers. Max Tegmark penned an open letter, now with almost 30,000 signatories, that "call[s] on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4" and warns that if the labs do not comply voluntarily, "governments should step in and institute a moratorium." The proposal is that the future of AI be carefully managed by independent experts, who would use the pause to craft a regulatory framework governing the safe use and development of AI.
Lex Fridman has been busy interviewing numerous players in the space, most recently Stephen Wolfram (Wolfram Research), but also Manolis Kellis (MIT), Max Tegmark (MIT), Eliezer Yudkowsky (Machine Intelligence Research Institute, MIRI), and Sam Altman (OpenAI). The message is fairly consistent: AI has the potential to be a force for good, but it is also currently an existential threat because no one really understands how it works or the extent of what it actually 'knows'.
Exponential Growth
Part of the panic stems from what Harris and Raskin call the "double exponential" rate of progress: the progress itself is exponential, and the rate of improvement of that rate is also exponential. Since no one understands how GLLMMs do what they do, the risk is that they will get away from us and eventually develop autonomously into Artificial General Intelligence (AGI). At that point, AI could compete with, or even outcompete, humans, which is the trigger that they claim could very easily lead to our demise. I agree that the progress is currently exponential, but like all natural growth, the exponential curve will eventually bend toward a plateau. Natural growth follows an exponential curve in its early phases and then saturates, much like the sigmoid function used in neural networks themselves. That is not to say that AI could not develop dangerous capabilities before hitting the plateau, only that the double exponential is half the story.
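To make that concrete, here is a minimal numeric sketch (the constants k, t0, and L are assumed, purely for illustration) comparing pure exponential growth with logistic growth. The two curves are nearly indistinguishable in the early phase, which is exactly when the "double exponential" alarm sounds loudest, yet one runs away forever while the other levels off.

```python
# Logistic (sigmoid) growth vs. pure exponential growth.
# All constants are assumed for illustration only.
import math

k, t0, L = 0.8, 12.0, 1.0  # growth rate, inflection time, plateau

def logistic(t):
    """Sigmoid growth: ~exponential for t << t0, saturating for t >> t0."""
    return L / (1.0 + math.exp(-k * (t - t0)))

def exponential(t):
    """Pure exponential matched to the sigmoid's early phase."""
    return L * math.exp(k * (t - t0))

for t in range(0, 25, 4):
    print(f"t={t:2d}  exponential={exponential(t):12.4f}  logistic={logistic(t):8.4f}")
```

Early on the two columns are nearly identical; by t = 24 the exponential has blown past 14,000 while the logistic sits just under its plateau of 1.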
Tegmark compares AI to a human-created asteroid heading directly for Earth. A frequent refrain in his conversation with Lex is that GLLMMs are an example of Moloch, the agent of chaos in Ginsberg's "Howl" and an ancient god associated with human sacrifice. This framing seems tailor-made to invoke fear. In essence, Tegmark is saying GLLMMs are a kind of boogeyman.
There is an element of truth to the comparison. We have already seen an agent of chaos that creates division in the current generation of AIs. Twitter's and Meta's algorithms were originally designed to facilitate human connection, but after being optimized for advertiser outcomes, they ended up sowing division and manufacturing outrage. GLLMMs are vastly more powerful, so upgrading the Twitter and Facebook algorithms to more modern AI could amplify those divisive dynamics.
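A toy sketch, with entirely made-up posts and engagement scores, shows why no malice is required for this outcome: if divisive content reliably attracts more clicks, a feed that simply sorts by predicted engagement will promote it.

```python
# Hypothetical posts and engagement scores, for illustration only.
posts = [
    {"title": "Neighborhood bake sale photos", "predicted_engagement": 0.08},
    {"title": "Outrage bait on a hot-button issue", "predicted_engagement": 0.31},
    {"title": "Nuanced long-form explainer", "predicted_engagement": 0.12},
]

# Rank the feed purely by the engagement objective.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
for rank, post in enumerate(feed, start=1):
    print(rank, post["title"])

# The divisive post lands at rank 1. No one coded "sow division";
# the objective function did it on its own.
```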
Pandora Unchained
Meta released their LLM, called LLaMA, ostensibly only to bona fide researchers, but the weights were immediately leaked and improved upon. GPT-class technology is now out in the wild. A leaked document from Google worries that the open-source versions of these models are improving at a faster pace, and on far fewer resources, than the LLMs at hyperscale companies. What was once possible only with a datacenter and millions of dollars of compute time is now accessible to a researcher with a single machine and a commodity graphics card. For better or worse, Pandora's box is open and the technology is now widely distributed for anyone to use.
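To give a sense of how low the barrier has become, here is a minimal sketch of loading an open-weight model on a single machine with the Hugging Face transformers library. The model id is a placeholder for whatever open-weight checkpoint you have locally, and 8-bit quantization (via the bitsandbytes integration) is one common trick for fitting billions of parameters onto a consumer GPU.

```python
# A sketch of single-machine inference with an open-weight model.
# "path/to/open-weight-model" is a placeholder, not a real repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/open-weight-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread layers across available GPU/CPU memory
    load_in_8bit=True,   # quantize weights to fit a commodity graphics card
)

prompt = "The genie is out of the bottle because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```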
A Breather
At the 51-minute mark of The A.I. Dilemma, Harris asks us to take a "genuine breath" because of how difficult the material is. I will join him in that breath, but for a different reason. The apocalyptic language and the coordinated messaging about the threat of AI trigger my spidey sense. The more I've dug into this messaging, the more skeptical I've become. Let's explore.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44663d6b-5401-45a6-8e55-bf42cb002b79_1024x1024.png)
Asteroids and Gollems and Molochs, oh my
When else have we heard an apocalyptic message, promulgated by experts and thought leaders, calling for emergency intervention by the government? The timing and uniformity of the messaging are themselves interesting. When people in positions of power call for emergency action, there is often more to the story. China ostensibly locked down because of covid, but there have been cogent arguments that the lockdowns were related, at least in part, to curbing inflation by suppressing aggregate demand. So what could be the second-order effect of pausing for six months to create a regulatory framework around the development of AI? One possibility is that it would concentrate access to the technology in the hands of a small cadre of established authorities and researchers, creating a kind of AI priesthood where access is gated by existing members. (AIristocracy, perhaps?) Why might they want that? Raskin and Harris warn that AI confers unprecedented power. Concentrating that power in the hands of the few, regardless of their original intentions, could become a tool for oppression and control just as easily as it could serve more noble purposes. Who might want to control that kind of power?
AInoculation
Regardless of whether this moratorium is adopted, GLLMMs will certainly be used by some humans to try to shape the behavior of others, even if they never develop into a humanity-ending AGI. The potential for abuse is massive. As individuals, we can inoculate our minds against manipulation simply by remaining skeptical of the things we see and hear. If we thought the world was full of misinformation before, just wait until GLLMMs are deployed in advertising and political campaigns. If you find yourself having a strong emotional response to a message, it might be time to pause, take a breath, and consider who would benefit from making you feel that way.
I think it is also important, perhaps more important, to be skeptical of the message this illustrious and credentialed group of experts is telling us. It could be a genuine attempt to save humanity from annihilation. It could also be a run-of-the-mill power grab.
People in power have used fear to control the thinking of the population since time immemorial. The antidote to fear is courage. GLLMMs will surely change the world, but our place in it is probably in more danger from each other than from the machines.