AI could change things fast, and that means preparing now is a good idea
Apparently, humanity has always seen itself as on the brink of “the end of history.”
Pretty much every religious tradition has beliefs about the end of the present age, epoch, or world itself—doctrines known as eschatology (don’t worry, nobody else knows how to pronounce it either). For example, a common characterization of the historical Jesus is that he was “an apocalyptic preacher,” given that his message revolved around the impending arrival of the kingdom of God.
Often, secular worldviews have stood in contrast to these views, seeing the sweep of history as a story of gradual progress, and the future as the continuation of this trend.
Then there’s the AI community.
In contrast with the business-as-usual that characterizes much of the scientific perspective on the future, the intellectual milieu surrounding the field of artificial intelligence has given rise to a more radical stance: that we find ourselves on the precipice of history—on the brink of growth so transformative and fast that there are no words to describe it other than a technological singularity. It’s a perspective that, for all its apparent presumptuousness, might sound vaguely reminiscent of the prophecies of traditional religion.
Unlike its eschatological predecessors, though, the expectation of an AI-inaugurated “end of history” is based on quantitative prediction—a rational forecast from the expected dynamics of a world shaped by general artificial intelligence. Interestingly, it’s an analysis that has converged on a view of the human situation that is philosophically similar to that of many traditional worldviews. It’s a perspective that many at BYU are in a unique position to understand at an abstract level, though the details will undoubtedly be unfamiliar.
Here I’m interested in laying out the basis for the belief among AI experts that advances in AI have the potential to very quickly and very radically change the world: is it just the expression of an archetypal human expectation, not necessarily grounded in fact, or is it justified? To put my cards on the table, I believe this analysis is basically correct, and that there are strong arguments for expecting that conditional upon development of human-level AI, things will get very different very fast. If this is the case, the implications are profound.
...

One of the most remarkable statistical graphs I have ever seen plots world GDP over the past two millennia.
You’ve heard of exponential growth: behold a super-exponential curve. If you’re like me, then after looking at this for three seconds, you wonder: what the heck happened in the 1950s? And then, even more curiously: what wasn’t happening in the previous two millennia? There is no one answer to these questions, other than “a lot of things”. But a compelling perspective is that the thing that produced that graph is a certain kind of positive feedback loop between population and technological progress. It’s an explanation known as the endogenous growth model.
Here’s the core of the positively reinforcing cycle: more resources → more ideas and innovation → more resources... and so forth.
Prehistoric human societies spent their time hunting and gathering in small bands of around 40 people. The limiting factor on their population was food: there simply wasn’t enough of it for more people. But, eventually and somewhat miraculously, someone thought up agricultural techniques. This innovation allowed more food to be produced more efficiently, supporting much larger populations. More people implied more ideas, and these innovations allowed civilization to leverage more resources, supporting yet larger populations, eventually looping into the unprecedented number of people on the planet today and the correspondingly massive economic growth rate. That compounding growth accounts for the reversed-L shape of the graph above: the compounding effects don’t seem to be doing much until they get a foothold and then explode into the stratosphere.
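The feedback loop above can be sketched in a few lines of code. This is a toy model, not the actual endogenous growth model from the economics literature: the rates and the linear coupling between population and ideas are my own illustrative assumptions. The point is just to show that when the growth rate itself feeds on accumulated ideas, you get the hockey-stick curve—slow for ages, then explosive.

```python
def simulate(pop=1.0, tech=1.0, steps=50, idea_rate=0.02, growth_rate=0.05):
    """Toy loop: more people -> more ideas (tech) -> more resources -> more people.

    All parameters are illustrative assumptions, not empirical estimates.
    """
    history = []
    for _ in range(steps):
        tech += idea_rate * pop          # more people produce more ideas
        pop += growth_rate * pop * tech  # better tech supports faster growth
        history.append(pop)
    return history

traj = simulate()
# Growth accelerates: each step's growth factor (1 + growth_rate * tech)
# is larger than the last, because tech only ever accumulates.
```

Plot `traj` and you get a miniature of the world-GDP graph: nearly flat at first, then bending sharply upward once the compounding takes hold.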
Here’s the kicker: if human-level machine intelligence is indeed possible, then that same positive feedback loop will be instantiated again, except the loop will be even tighter and more powerful.
At this point it’s reasonable to ask: hold up, is human-level AI possible? It is an open empirical question whether current machine learning techniques can scale all the way to human cognition, but the evidence is certainly favorable. There’s this observation called “universality.”
Here’s how it could look. If AIs capable of making scientific progress are trained, they could then be run in parallel by the millions. This is because the compute (aka, the number of mathematical operations, like adding two numbers) needed to train a system is orders of magnitude larger than the compute needed to run it—so the same hardware that trained a model can immediately run enormous numbers of copies of it.
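A quick back-of-envelope makes the train-versus-run gap concrete. Every figure below is a round-number assumption chosen for easy arithmetic, not an estimate of any real system; the point is only the ratio.

```python
# All figures are illustrative round-number assumptions, not real estimates.
TRAIN_FLOPS = 1e25            # assumed: total compute spent on training
RUN_FLOPS_PER_SEC = 1e15      # assumed: compute to run one copy in real time
CLUSTER_FLOPS_PER_SEC = 1e20  # assumed: throughput of the training cluster

# Once training ends, the same cluster can run this many copies at once:
parallel_copies = CLUSTER_FLOPS_PER_SEC / RUN_FLOPS_PER_SEC

# And the training run itself only tied up the cluster briefly:
training_days = TRAIN_FLOPS / CLUSTER_FLOPS_PER_SEC / 86_400

print(f"{parallel_copies:,.0f} parallel copies")   # 100,000 parallel copies
print(f"{training_days:.1f} days of training")     # 1.2 days of training
```

Swap in different assumptions and the qualitative conclusion survives: whatever hardware is powerful enough to train such a system is, by the same token, powerful enough to run a small civilization’s worth of copies of it.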
This story rests on a few premises. The first, as mentioned before, is that human-equivalent cognition can be instantiated on a computational substrate. Another is that existing AI systems will be steered towards developing AI further. Why should we believe the second premise? The short answer? Because there’s money in it. Basically every economic model you can shake a stick at predicts that once this feedback loop is possible, economic incentives will drive it forward.
All the while, though, there will likely be enough slack for a fraction of the gains from an intelligence explosion to be diverted into more general technological progress. Some have forecasted that the result could quickly mean world economic output doubling in under a year. Open Philanthropy’s compute-centric model is one attempt to map out these dynamics quantitatively.
And what happens after that? At the very least, it means that we could find ourselves in an increasingly sci-fi future, filled with technologies we might have naively expected to take many centuries to develop. We Homo sapiens owe our mastery of the natural world, and the changes we have imposed on it, to our intelligence. We gained a neocortex and a brain just three times the size of a chimpanzee’s, and that difference was enough to take us from foraging to spaceflight.
…
This is a wild view of the future. If it is even approximately correct, we’re not ready for it. It means that we should be feverishly at work preparing social structures and frameworks for a world transformed by AI.
From a psychological perspective, the apocalyptic traditions which have characterized humanity’s beliefs about the future since antiquity can be seen as saying something profound and simple: lift your eyes above myopia and live in preparation for what is important and real and impactful, the stuff that will matter in a thousand years. It strikes me as a deeply relevant injunction to our current situation, because if there is even a chance that we stand on the brink of a phase shift in the human condition, then preparation for it is the essential task of our age.