
A Phase-Change in History? How Quickly AI Could Have Transformative Impact

AI could change things fast, and that means preparing now is a good idea

Apparently, humanity has always seen itself as on the brink of “the end of history.”

Pretty much every religious tradition has beliefs about the end of the present age, epoch, or world itself—doctrines known as eschatology (don’t worry, nobody else knows how to pronounce it either). For example, a common characterization of the historical Jesus is that he was “an apocalyptic preacher”, given that his message revolved around the impending kingdom of heaven. Verses about the coming last days are ubiquitous in the Quran. Abrahamic religions don’t have a monopoly on the apocalypse, either: records indicate that the ancient Aztecs were just as fascinated by it. There’s a strong case to be made that almost all the oldest and deepest worldviews shared by humanity place a heavy emphasis on preparation for radical and imminent changes in our situation in the world. It’s a universal human obsession.

Often, secular worldviews have stood in contrast to these views, seeing the sweep of history as a story of gradual progress, and the future as the continuation of this trend.

Then there’s the AI community.

In contrast with the business-as-usual outlook that characterizes much of the scientific perspective on the future, the intellectual milieu surrounding the field of artificial intelligence has given rise to a more radical stance: that we find ourselves on the precipice of history—on the brink of growth so transformative and fast that there are no words to describe it other than a technological singularity. It’s a perspective that, for all its apparent presumptuousness, might sound vaguely reminiscent of the prophecies of traditional religion.

Unlike its eschatological predecessors, though, the expectation of an AI-inaugurated “end of history” is based on quantitative prediction—a rational forecast from the expected dynamics of a world shaped by general artificial intelligence. Interestingly, it’s an analysis that has converged on a view of the human situation that is philosophically similar to that of many traditional worldviews. It’s a perspective that many at BYU are in a unique position to understand at an abstract level, though the details will undoubtedly be unfamiliar.

Here I’m interested in laying out the basis for the belief among AI experts that advances in AI have the potential to very quickly and very radically change the world: is it just the expression of an archetypal human expectation, not necessarily grounded in fact, or is it justified? To put my cards on the table, I believe this analysis is basically correct, and that there are strong arguments for expecting that conditional upon development of human-level AI, things will get very different very fast. If this is the case, the implications are profound.

...

[Figure: world GDP over the past two millennia]

One of the most remarkable statistical graphs I have ever seen plots world GDP over the past two millennia. It doesn’t look like much at first glance—that's because it can take a second to even see the line.

You’ve heard of exponential growth: behold a super-exponential curve. If you’re like me, then after looking at this for three seconds, you wonder: what the heck happened in the 1950s? And then, even more curiously: what wasn’t happening in the previous two millennia? There is no one answer to these questions, other than “a lot of things”. But a compelling perspective is that the thing that produced that graph is a certain kind of positive feedback loop between population and technological progress. It’s an explanation known as the endogenous growth model, and the details are relevant: these kinds of positive feedback loops are at the heart of why the AI community expects artificial intelligence to send civilization down the fast track of progress at a blistering pace.

Here’s the core of the positively reinforcing cycle: more resources → more ideas and innovation → more resources... and so forth.

Prehistoric human societies spent their time hunting and gathering in small bands of around 40 people. The limiting factor on their population was food: there simply wasn’t enough of it for more people. But, eventually and somewhat miraculously, someone thought up agricultural techniques. This innovation allowed more food to be produced more efficiently, supporting much larger populations. More people meant more ideas, and those innovations allowed civilization to leverage more resources, supporting yet larger populations, eventually looping into the unprecedented number of people on the planet today and the correspondingly massive economic growth rate. That compounding growth accounts for the reversed-L shape of the graph above: the compounding effects don’t seem to be doing much until they get a foothold and then explode into the stratosphere.
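To make the dynamic concrete, here is a minimal simulation of that loop. The functional forms and constants are made-up assumptions chosen only to produce the qualitative shape (a long plateau followed by explosive growth); they are not taken from any published endogenous growth model.

```python
# Toy illustration of the population <-> technology feedback loop described
# above. All constants are arbitrary; only the qualitative shape matters.

def simulate(years=3000):
    population, technology = 1.0, 1.0
    snapshots = []
    for year in range(years + 1):
        if year % 500 == 0:
            snapshots.append((year, population, technology))
        ideas = 0.0005 * population            # more people -> more ideas
        technology += ideas                    # ideas accumulate as technology
        population *= 1 + 0.0005 * technology  # better technology supports faster growth
    return snapshots

for year, pop, tech in simulate():
    print(f"year {year:5d}: population index {pop:10.1f}, technology index {tech:8.1f}")
```

Run it and the printout looks a lot like the GDP graph: almost nothing seems to happen for most of the simulated history, and then the numbers take off, because each variable feeds the growth rate of the other.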

Here’s the kicker: if human-level machine intelligence is indeed possible, then that same positive feedback loop will be instantiated again, except the loop will be even tighter and more powerful.

At this point it’s reasonable to ask: hold up, is human-level AI possible? It is an empirically open question whether current machine learning techniques can scale all the way to human cognition, but the evidence is certainly favorable. There’s an observation called “universality”: artificial neural networks and biological brains appear to independently learn similar circuits, which is strong evidence that current methods in deep learning are sufficient for modeling human cognition even if they’re suboptimal. On top of that, at a basic level, what’s happening under the hood of fancy machine learning systems is the approximation of data sources—AI models learn how to reproduce the stuff we throw at them. Technically speaking, neural networks are universal function approximators, and with enough data and compute, a sufficiently large network should in principle be able to produce outputs indistinguishable from human outputs. And yes, that would include reasoning, creativity, and, crucially, innovation. But that’s a whole different rabbit hole: the point here is that, if human-level intelligence is instantiated on silicon, it will share the same underlying dynamics that produced the super-exponential curve of historical GDP growth.
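As a toy illustration of what “approximating a data source” means in practice, here is a small sketch using scikit-learn; the target function, network size, and settings are arbitrary choices made just for the demo. A small neural network learns to reproduce a function purely from noisy examples of it:

```python
# A toy "data source" (a noisy sine wave) and a small neural network that
# learns to imitate it from examples alone.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))                  # inputs we "throw at" the model
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=2000)   # noisy outputs to imitate

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
model.fit(X, y)

X_test = np.linspace(-3, 3, 7).reshape(-1, 1)
for x, pred in zip(X_test[:, 0], model.predict(X_test)):
    print(f"x = {x:5.2f}   true sin(2x) = {np.sin(2 * x):6.3f}   network output = {pred:6.3f}")
```

The network never sees the formula, only examples, yet its outputs track the underlying function. The argument above is that the same principle, scaled enormously, is what lets large models imitate the human-generated data they are trained on.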

Here’s how it could look. If AIs capable of making scientific progress are trained, they could then be run in parallel by the millions. This is because the compute (i.e., the number of mathematical operations, like adding two numbers) needed to train a system is orders of magnitude more than the compute needed to run the resulting system (kind of like how simulating evolutionary history would be way more difficult than simulating the brain which results from it). In addition to strength in numbers, AIs would also have speed. Like, a lot of it. AI models can, and will increasingly be able to, process information much faster than humans. Our brains are limited by the constraints of biology: action potentials sent between neurons are quite slow in comparison to electrons moving through transistors. This body of super-fast AI researchers could then make both algorithmic progress, developing better machine learning techniques to train more capable AI models, and hardware progress, more efficiently converting matter into chips which can run more of those models. Progress in both of these directions would compound, allowing for an even bigger population of AI systems to make even more progress. Imagine a country of scientists doing nothing all day every day but researching how to make more and better scientists, and succeeding. That’s the kind of thing the endogenous growth model predicts about AI.
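A rough back-of-envelope calculation shows the directional logic of “run many copies in parallel.” Every number below is an illustrative assumption rather than a measurement of any real system; the point is only that training compute dwarfs the compute needed to run one copy.

```python
# Back-of-envelope: if the hardware that trained a model were repurposed to
# run finished copies of it, how many copies could it keep running at once?
# All three inputs are illustrative assumptions, not real measurements.

training_flop          = 1e25   # assumed total compute spent on training
training_days          = 90     # assumed wall-clock duration of the training run
inference_flop_per_sec = 1e15   # assumed compute to run one copy in real time

cluster_flop_per_sec = training_flop / (training_days * 24 * 3600)
parallel_copies = cluster_flop_per_sec / inference_flop_per_sec

print(f"Training cluster throughput: {cluster_flop_per_sec:.2e} FLOP/s")
print(f"Copies runnable in parallel on that same hardware: {parallel_copies:,.0f}")
```

With these made-up numbers, the training hardware alone could keep roughly a thousand copies running around the clock, before counting any new hardware, efficiency gains, or the serial speed advantage described above.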

This story rests on a few premises. The first, as mentioned before, is that human-equivalent cognition can be instantiated on a computational substrate. Another is that existing AI systems will be steered towards developing AI further. Why should we believe the second premise? The short answer? Because there’s money in it. Basically every economic model you can shake a stick at predicts that once this feedback loop is possible, economic incentives will drive it until it hits some kind of limit. Because innovation drives economic growth, economies that want to maintain a competitive advantage will have to invest in expanding their cognitive workforce by reallocating gains from AI back into improving AI.

All the while, though, there will likely be enough slack for a fraction of the gains from an intelligence explosion to be diverted into more general technological progress. Some have forecast that the result could be world economic output doubling in less than a year. Open Philanthropy's compute-centric model suggests that, under these dynamics, AI systems capable of automating 20% of human cognitive tasks would be able to perform 100% of them a mere 3 years later. See also Epoch’s information-theoretical approach and Ajeya Cotra’s Biological Anchors report.
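For a sense of scale, here is the simple arithmetic behind those doubling-time claims. The 3% baseline is an approximate recent world growth rate; the rest is just compound-growth math.

```python
# Doubling times implied by different constant annual growth rates.

import math

def doubling_time_years(annual_growth_rate):
    """Years for output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

for rate in (0.03, 0.30, 1.00):
    print(f"At {rate:4.0%} annual growth, world output doubles every "
          f"{doubling_time_years(rate):4.1f} years")
```

At roughly 3% annual growth, world output doubles about every 23 years; doubling in under a year requires sustained growth above 100% per year, which is what makes the forecasts above so striking.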

And what happens after that? At the very least, it means that we could find ourselves in an increasingly sci-fi future, filled with technologies we might have naively expected to take many centuries to develop. Homo sapiens owe our mastery of the natural world, and the changes we have imposed on it, to our intelligence. We gained an expanded neocortex and roughly three times the brain mass of chimpanzees, and unlocked the capacity for creating wonders our evolutionary ancestors couldn’t even comprehend. Perhaps we are in as poor a position to predict what a post-intelligence-explosion world would look like as they are. But it seems clear that even if AI guarantees an acceleration of the future, a force-multiplier of huge magnitude, the direction of this acceleration is very much a free variable. In the words of Sam Altman, CEO of OpenAI: “I think the good case for AI is just so unbelievably good that you sound like a crazy person talking about it... I think the worst case is lights-out for all of us.”

This is a wild view of the future. If it is even approximately correct, we’re not ready for it. It means that we should be feverishly at work preparing social structures and frameworks for the impending wave. It means that unsolved technical problems such as the alignment problem (a solution to which would guarantee that AI systems are reliably steerable) are among the most important causes in the world. At a broader scale, it means that we in the 21st century find ourselves at a pivotal time in history, a time in which individuals have an exceptional amount of leverage over the direction of the future, and in which critical decisions could reverberate indefinitely far into the future.

From a psychological perspective, the apocalyptic traditions which have characterized humanity’s beliefs about the future since antiquity can be seen as saying something profound and simple: lift your eyes above myopia and live in preparation for what is important and real and impactful, the stuff that will matter in a thousand years. It strikes me as a deeply relevant injunction to our current situation, because if there is even a chance that we find ourselves in the face of an impending phase-shift in the human condition, then preparation for it is the essential task of our age.