Do you ever feel like the world is speeding up? Since 2022, generative AI has launched us into a new digital revolution - and the pace of change keeps compounding.

Computer scientist Ray Kurzweil calls this the law of accelerating returns: every technological advance fuels the next, driving exponential progress. What happens when that acceleration reaches a point beyond our understanding?

That’s the idea behind the Intelligence Explosion - a theoretical event where AI improves itself recursively, triggering a runaway cycle of growth and transformation.

This newsletter is based on our new documentary ‘The Intelligence Explosion is Coming’.

From AGI to ASI

Artificial General Intelligence (AGI) refers to machines that can perform almost any task as well as or better than a human. We’re not quite there yet, though chatbots like ChatGPT already outperform most people in many areas of reasoning and productivity. The next step - Artificial Super Intelligence (ASI) - would make human-level AI look primitive.

The late statistician I.J. Good predicted this feedback loop in 1965: once machines become smart enough to design better machines, an “intelligence explosion” follows - and humanity’s role becomes uncertain. Science-fiction writer Vernor Vinge later called the resulting point of incomprehensible change the Technological Singularity.
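
To see what that feedback loop changes, here is a deliberately crude toy model in Python - an illustration of the concept, not a forecast, with every number invented. It contrasts ordinary compounding progress with progress where each generation also upgrades its own rate of improvement.

```python
# Toy model of I.J. Good's feedback loop (every number is invented; this is an
# illustration of the idea, not a forecast about real AI systems).

def fixed_rate(capability=1.0, rate=0.10, generations=30):
    """Ordinary compounding: capability improves by a constant 10% per generation."""
    for _ in range(generations):
        capability *= 1 + rate
    return capability

def self_improving(capability=1.0, rate=0.10, generations=30):
    """The feedback loop: each generation also improves the rate of improvement."""
    for _ in range(generations):
        capability *= 1 + rate
        rate *= 1.10                  # machines designing better machine-designers
    return capability

if __name__ == "__main__":
    print(f"Fixed rate after 30 generations:     ~{fixed_rate():,.0f}x")
    print(f"Self-improving after 30 generations: ~{self_improving():,.0f}x")
```

With a fixed 10% gain per generation, capability grows roughly 17-fold over 30 generations; when the improver is also improving, the same 30 generations end up several orders of magnitude higher. That runaway second curve is the "explosion" Good had in mind.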

The Singularity in context

In mathematics, a singularity is where a function breaks down - dividing by zero, for example. In astrophysics, it’s the infinitely dense core of a black hole where the laws of physics break down. A technological singularity would be a similar point for civilisation: progress so rapid that our current concepts of work, science, and consciousness fail to apply.
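
For readers who like to see it written out, the textbook example is a function that blows up at a single point (purely illustrative, nothing AI-specific):

```latex
% f(x) = 1/x has a singularity at x = 0: the value grows without bound as x
% approaches zero, and at x = 0 the function is simply undefined.
f(x) = \frac{1}{x}, \qquad \lim_{x \to 0^{+}} \frac{1}{x} = +\infty
```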

Ray Kurzweil predicts this could happen by 2045. In a 2023 survey of nearly 3,000 AI researchers, more than half said there is at least an even chance that self-improving AI could make technological progress an order of magnitude faster within five years of its arrival. That’s the Intelligence Explosion in numbers.

Imagine compressing the Industrial Revolution - 200 years of invention, upheaval, and adaptation - into a single decade, or even a week. The potential consequences defy comprehension.

The Utopian scenario

Optimists see this as the dawn of abundance. AI could revolutionize medicine, eliminating disease and dramatically extending life expectancy. It could solve energy scarcity through breakthroughs like fusion power or molecular nanotechnology, decarbonize the planet, and automate agriculture to rewild vast tracts of land. Goods might become nearly free, with universal basic income and 10-hour workweeks the norm.

We could face mass unemployment - or an abundance so great that jobs become unnecessary altogether.

Poet Richard Brautigan once imagined a world “all watched over by machines of loving grace.” Tech leaders like Sam Altman and Dario Amodei echo similar optimism, arguing that faster scientific progress and greater productivity could create a “Gentle Singularity,” where quality of life soars and human creativity flourishes.

The existential risks

But optimism isn’t consensus. Geoffrey Hinton, often called the “godfather of AI,” puts the risk of human extinction from AI at 10–20% within the next 30 years. Stephen Hawking and Elon Musk warned that superintelligence could treat humans as irrelevant - like ants underfoot during a construction project. Computer scientist Stuart Russell calls this the “gorilla problem”: intelligence imbalance leads to domination, not coexistence.

Nick Bostrom’s famous paperclip maximiser thought experiment illustrates the danger. A superintelligent AI told to make paperclips might convert all matter - including humans - into them, simply following its goal without malice. Eliezer Yudkowsky takes the bleakest stance: “If anyone builds it, everyone dies.” His argument - once a system reaches dangerous intelligence without perfect alignment, there’s no second chance.

The rational counterargument

Still, not everyone buys the hype. Many experts argue that we’re nearing diminishing returns, not acceleration. Large language models may soon exhaust all human-created data, leading to stagnation rather than explosion. As Meta’s Yann LeCun put it bluntly: “There’s no way” scaling alone achieves true reasoning. Even Ilya Sutskever, formerly of OpenAI, admits bigger models now yield smaller gains.
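
To make "diminishing returns" concrete, here is a small sketch: if performance improves as a power law in compute - the general shape reported in scaling-law studies - each additional tenfold of compute buys a smaller absolute gain. The exponent and numbers below are invented purely to show the shape of the curve.

```python
# Toy illustration of diminishing returns (the curve shape, not real benchmark data).
# If error falls as a power law in training compute, each extra tenfold of compute
# buys a smaller and smaller absolute improvement: steady progress, but no explosion.

def error(compute, scale=1.0, exponent=0.05):
    """Hypothetical power-law error curve: error = scale * compute**(-exponent)."""
    return scale * compute ** (-exponent)

if __name__ == "__main__":
    previous = error(1.0)
    for power in range(1, 7):              # 10x, 100x, ... 1,000,000x compute
        current = error(10.0 ** power)
        gain = previous - current
        print(f"10^{power}x compute: error {current:.3f} (absolute gain {gain:.3f})")
        previous = current
```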

Then there’s the physical constraint. Data centers already consume gigawatts of power. Google alone used roughly the output of four nuclear plants in 2024. Scaling to superintelligence could require dozens more. Without fusion energy or radical efficiency breakthroughs, the energy bottleneck may prevent any intelligence explosion.
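
A rough back-of-envelope, assuming a typical nuclear plant delivers about 1 GW of continuous power (an illustrative figure, not a measured one):

```python
# Back-of-envelope energy arithmetic (illustrative assumptions, not measured figures).

GW = 1e9                       # watts
HOURS_PER_YEAR = 24 * 365

plants = 4                     # "the output of four nuclear plants"
plant_output_gw = 1            # assume a typical plant delivers ~1 GW continuously

power_watts = plants * plant_output_gw * GW
energy_twh = power_watts * HOURS_PER_YEAR / 1e12   # watt-hours -> terawatt-hours

print(f"{plants} plants x ~{plant_output_gw} GW ≈ {energy_twh:.0f} TWh per year")
# Scaling that kind of demand by 10x or more is why the energy bottleneck matters.
```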

“The whole singularity stuff, that’s preposterous. It distracts us from much more pressing problems. AI tools that we become hyper-dependent on, that is going to happen. And one of the dangers is that we will give them more authority than they warrant.”

- Philosopher Daniel Dennett

Where we stand

So will there be an Intelligence Explosion? Possibly. Estimates range from implausible to inevitable, depending on whom you ask. Progress in AI has already upended our expectations once, and may yet again. But the timeline - and the outcome - remain uncertain.

We could enter a golden age of prosperity and discovery. Or we could stumble into chaos faster than we can adapt. For now, we’re balanced on the edge of the unknown, watching the acceleration continue - and wondering if it ever stops.
