In April 2025, a report called "AI 2027" exploded through AI circles and even became a topic of discussion in the White House. It laid out a scenario where AI achieves fully autonomous coding by 2027, triggering an intelligence explosion that could end with human extinction by 2030.

Then in early January 2026, Daniel Kokotajlo, one of the report's authors, quietly revised his predictions. Autonomous coding has been pushed to the early 2030s. Superintelligence to 2034. "Things seem to be going somewhat slower than the AI 2027 scenario," he wrote on X.

Meanwhile, the CEOs of OpenAI, Google DeepMind, and Anthropic have all predicted that AGI could arrive within five years, yet Yann LeCun, Meta's recently departed chief AI scientist, has said there's "no way" that AGI will be reached through scaling up large language models.

So who's right?

The definition problem

AGI stands for Artificial General Intelligence, and broadly means an AI that can do anything a human can do. Write code. Conduct scientific research. Run a business. Some researchers have called it the last invention humanity will ever need to make.

The problem is that if you ask five AI researchers what AGI actually means, you'll get five different answers.

Google DeepMind's framework describes it as "an AI system that outperforms 99% of skilled adults at most cognitive tasks." But what does "most" mean? 51%? 90%? Does that include emotional intelligence? Physical reasoning? Common sense?

Henry Papadatos, executive director of the French AI nonprofit SaferAI, has said: "The term AGI made sense from far away, when AI systems were very narrow. Now we have systems that are quite general already and the term does not mean as much."

In other words, we might already have passed what we once thought AGI would look like. And yet the goalposts keep moving.

What the market is telling us

If AGI were actually imminent, you'd expect companies to be throwing money at AI systems to replace human workers at scale.

Dwarkesh Patel made this point in a December 2025 blog post. Knowledge workers around the world earn tens of trillions of dollars in wages each year. If AI systems were genuinely approaching human-level capability, companies would be spending trillions buying AI tokens to replace them. The actual revenues of AI labs are nowhere close. The market is telling us something.
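
A back-of-the-envelope calculation makes the gap concrete. The sketch below uses rough, assumed figures — the wage total and the revenue total are illustrative placeholders, not reported numbers:

```python
# Illustrative back-of-envelope only; both figures are rough assumptions.
knowledge_worker_wages = 40e12  # global knowledge-worker wages, ~tens of trillions USD/year (assumed)
ai_lab_revenue = 20e9           # combined AI lab revenue, order of tens of billions USD/year (assumed)

# If AI were genuinely substituting for human-level knowledge work,
# spending on AI would be a meaningful fraction of that wage bill.
share = ai_lab_revenue / knowledge_worker_wages
print(f"AI revenue as a share of knowledge-worker wages: {share:.3%}")
# -> 0.050%: a rounding error, not a workforce replacement.
```

Even if you multiply the revenue figure several times over, the conclusion doesn't change.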

The fundamental problem, Dwarkesh argues, is that AI systems don't get better over time the way a human would. He calls this the "continual learning" problem. If you hire a human editor to edit your videos, they learn your preferences. They notice what resonates with your audience. They pick up improvements and efficiencies as they practice. After six months, they're dramatically better than when they started.


AI systems don't do this. You can fiddle with the system prompt, but that doesn't produce the kind of learning and improvement that human employees experience.

Dwarkesh offers an analogy. Imagine teaching a child to play saxophone. The moment they make a mistake, you send them away and write detailed instructions about what went wrong. Then a completely different child reads your notes and tries to play Charlie Parker cold. When they fail, you refine the instructions for the next child.

No child is going to learn saxophone this way. But this is the only way we can currently "teach" AI systems anything.
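
To make the analogy concrete, here is a minimal sketch of what that "teaching" loop looks like in code. Nothing here is a real provider API; `call_model` is a hypothetical stand-in for any LLM call:

```python
# Minimal sketch of today's only "teaching" loop: accumulate written notes
# and hand them to a fresh, stateless model instance on every attempt.

def call_model(instructions: str, task: str) -> str:
    """Hypothetical stand-in for an LLM API call; the model's weights are
    fixed, so every call starts from the same place."""
    return f"attempt at {task!r} using {len(instructions)} characters of notes"

instructions = "House style: short sentences, no jargon."

for attempt in range(1, 4):
    output = call_model(instructions, "edit this week's video")
    # A human reviews the output and writes down what went wrong...
    feedback = f"Note {attempt}: the last attempt missed a preference."
    # ...and the only lever available is to append more notes. Nothing
    # analogous to a human editor's accumulated skill ever builds up.
    instructions += "\n" + feedback
```

Every pass through the loop is the next child reading the notes cold.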

Technical limits

For the past decade, AI progress has followed a simple formula: more data, more compute, better results. But there are signs this approach is hitting its limits.

Ilya Sutskever was one of the architects of OpenAI's scaling strategy. Since leaving the company in May 2024, he has argued publicly that simply enlarging models is yielding diminishing returns. The next breakthroughs, he suggests, will require fundamentally different approaches.

LLMs have been trained on essentially the entire written output of human civilisation. By some estimates, they will have consumed all available high-quality human-generated text by 2028.

Then there's the power question. Leopold Aschenbrenner projects that AI training clusters could require 10 to 100 gigawatts of power by the end of the decade. Google's data centres drew an average of roughly 3.5 gigawatts in 2024, equivalent to the output of four large nuclear power plants. Meeting those projections would require dozens of new power plants within a few years.
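
As a rough sanity check on those figures (treating one large nuclear reactor as roughly one gigawatt of capacity, which is an approximation):

```python
# Rough sanity check on the projected power demand; figures are approximate.
reactor_gw = 1.0                 # a large nuclear reactor produces on the order of 1 GW
projected_demand_gw = [10, 100]  # Aschenbrenner's projected range for training clusters

for demand in projected_demand_gw:
    print(f"{demand} GW of training compute ~ {demand / reactor_gw:.0f} large reactors")
# i.e. somewhere between ten and a hundred reactors' worth of new,
# dedicated generation by the end of the decade.
```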

A history of missed predictions

In August 1955, a group of prominent computer scientists submitted a proposal to the Rockefeller Foundation, believing that "a significant advance can be made" if researchers worked together for a summer at Dartmouth College. Fifty years later, John McCarthy reflected: "My hope for a breakthrough towards human-level AI was not realized at Dartmouth."

In 1965, AI pioneer Herbert Simon predicted that machines would be capable of doing any work a man can do within 20 years. In 1970, Marvin Minsky promised human-level AI within a generation. That was 55 years ago.

Sam Altman's positions have shifted dramatically. In late 2024, he told a Y Combinator podcast that AGI might be achieved in 2025. In January 2025, he wrote that OpenAI was "now confident we know how to build AGI." By August 2025, he was calling AGI "not a super useful term." The goalposts don't just move. Sometimes they disappear entirely.

The human boundary will be breached quickly

If AGI is human-level intelligence, why have we set the boundary there at all? Reaching that level is effectively the trigger for an intelligence explosion: human-level minds that can be copied and run in parallel could, in theory, scale up their collective intelligence exponentially. Almost as soon as we reach AGI, we're into something completely different. So perhaps there is no such thing as AGI. There is AI and there is superintelligence, and the threshold between them could be breached so suddenly that it becomes irrelevant.

AI is powerful. It's remarkably useful. But there's a difference between "remarkably useful" and "human-level general intelligence arriving in five years." The latter claim requires the definition problem to be solved, the continual learning problem to be solved, the data constraints to be overcome, and decades of missed predictions to suddenly prove accurate.

When the people making the boldest predictions have hundreds of billions of dollars riding on you believing them, a little scepticism seems warranted.
