In December 2025, Sam Altman casually claimed on the Big Technology Podcast that AGI had "gone whooshing by" and that OpenAI had already built it. Then in February 2026, four researchers published a commentary in Nature arguing that by any reasonable standard, artificial general intelligence is already here.
And yet a survey by the Association for the Advancement of Artificial Intelligence found that 76% of the leading AI researchers it polled considered it unlikely that current approaches will ever yield AGI. How is that possible?
It’s because nobody actually agrees on what AGI means. That disagreement shapes everything, from trillion-dollar investment decisions to whether you should be worried about your job.
The term has a surprisingly scrappy origin story. In 1997, a grad student named Mark Gubrud used the phrase ‘artificial general intelligence’ in a paper about autonomous weapons. Nobody noticed.
Five years later, computer scientist Ben Goertzel and colleagues reinvented it while naming a new field; Shane Legg, who would go on to co-found DeepMind, suggested the phrasing. Gubrud was technically first to the term, but his career never came close to the heights of Legg's. "I am a 66-year-old with a worthless PhD and no name and no money and no job," he said.
Six definitions, six different futures
OpenAI's charter defines AGI as systems that "outperform humans at most economically valuable work." Critics note this conveniently aligns with their business model. Google DeepMind proposed a framework of ‘Levels of AGI,’ measuring capability across five tiers from ‘Emerging’ to ‘Superhuman.’
François Chollet, creator of the ARC-AGI benchmark, defines intelligence as "skill-acquisition efficiency" - how quickly a system learns new things from minimal examples. By his measure, current AI models still struggle with abstract reasoning that's trivial for most humans.
Mustafa Suleyman proposed ‘Artificial Capable Intelligence,’ with a practical test: give an AI $100,000 and see if it can legally turn it into $1 million.
Dan Hendrycks and a team including Gary Marcus and Eric Schmidt grounded their definition in the Cattell-Horn-Carroll model of human cognition, scoring AI across ten cognitive domains. GPT-4 scored 27%. GPT-5 scored 57%. Rapid progress, but with a "jagged" profile — strong on knowledge, weak on memory, perception, and world modelling.
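One way to read that overall percentage is as an equal-weighted composite of the ten domain scores. Here is a minimal sketch of that arithmetic in Python, assuming equal weighting; the domain names and per-domain numbers are illustrative placeholders, not figures from the Hendrycks paper.

```python
# Sketch of an equal-weighted composite score across cognitive domains.
# Domain names and per-domain scores are illustrative assumptions,
# not values reported by Hendrycks et al.

domain_scores = {
    "knowledge": 0.9,
    "reading_writing": 0.8,
    "maths": 0.7,
    "reasoning": 0.6,
    "working_memory": 0.3,
    "long_term_memory": 0.1,
    "visual_perception": 0.2,
    "auditory_perception": 0.2,
    "speed": 0.5,
    "world_modelling": 0.4,
}

# Equal weighting: each domain contributes one tenth of the overall score,
# so a "jagged" profile (strong in some domains, near zero in others)
# still drags the composite down.
overall = sum(domain_scores.values()) / len(domain_scores)
print(f"Overall score: {overall:.0%}")  # prints "Overall score: 47%" for these placeholders
```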
And then there's an academic paper arguing AGI is an "unscientific myth" built on three fallacies. The definition you choose determines whether AGI has already arrived, is five years away, or is fundamentally impossible.
The case for and against
The Nature commentary presents the strongest case that AGI exists now. Current models demonstrate gold-medal maths performance, solve PhD-level problems across fields, write complex code, and assist with frontier research.
The researchers argue these achievements exceed what we see from HAL 9000 in 2001: A Space Odyssey, and that resistance comes from a "toxic cocktail" of vague definitions and fear of displacement.
On the other side, Gary Marcus points to economic evidence: AI can reliably perform only a small fraction of typical job tasks. Economist Daron Acemoglu estimates that AI-driven automation will increase productivity by no more than 0.66% over a decade. If we had really built AGI, that number would be considerably higher.
Yann LeCun argues that a four-year-old child understands more about physics and cause and effect than all the text on the internet contains. "We can't even build cat-level intelligence yet," he's said.
And as Dwarkesh Patel observed, if AI were genuinely approaching human-level capability, companies would be spending trillions on AI tokens to replace knowledge workers. They're not even close.
Why the definition actually matters
If you accept Altman's framing, the conversation immediately pivots to superintelligence: a system that could outperform any human at being president, running a company, or leading a scientific lab.
That's a significant escalation for safety and governance. If you accept the Hendrycks framework, you're looking at specific bottlenecks that may or may not yield to current approaches.
The definition also matters because of money. OpenAI's deal with Microsoft includes provisions that activate differently depending on whether AGI has been achieved. A concept nobody can agree on is literally worth tens of billions of dollars. When the people making the boldest predictions have that much riding on your belief, scepticism seems warranted.
Altman's own goalposts have shifted repeatedly. In late 2024, AGI might arrive in 2025. In January 2025, OpenAI was "confident" they knew how to build it. By August, AGI was "not a super useful term." By December, it had already happened without anyone noticing. The goalposts don't just move. Sometimes they disappear entirely.
AGI is probably best understood not as a destination but as a direction. The term was coined in a university basement by a grad student writing about weapons, reinvented by researchers who needed a book title, and is now the stated objective of companies collectively worth trillions.
After nearly three decades, nobody can tell you what it means.