The moment we had all been waiting for. It roared out from the belly of the Earth last Thursday and social media went… what is this? It’s not what you said it would be… it’s not good enough.
That was the broad vibe I got from reading about GPT-5. As an aside, it takes a special model to get its own whole number from OpenAI, given how daftly the rest of the lineup is named. GPT-4o mini. What even is that? It’s product naming for geeks that no one really understands.
In the build-up, Lord High Hype Sam Altman, of course, was pumping the atmosphere full of hot air.
There are moments in the history of science, where you have a group of scientists look at their creation and just say, you know: ‘What have we done?’
But GPT-5 was a disappointment. No doubt about it. The baying digital crowds were expecting Optimus Prime: an AI that’s better than you, or me, or anyone really. They were expecting something akin to ‘AGI’. But GPT-5 ain’t it.
What is AGI?
Artificial General Intelligence is better known by its acronym of AGI, but it’s a term so woolly that it could be the world’s greatest sheepskin rug. Frankly, no one can really put their finger on it.
Sam Altman keeps banging on about it: that it’ll be reached soon and OpenAI will be a kind of Messiah figure that frees the world from poverty, disease and indeed work. On that last point, there’s a conspicuous lack of acknowledgement of the increasing likelihood of severe disruption to established incomes.
What happens when people on good salaries in the West are laid off and replaced by increasingly advanced AI? It’s a ticket to a kind of civic breakdown that will make Trump 2.0 look like the sunlit uplands. We’ll be very fortunate to reach utopia. This is called the ‘accumulative risk hypothesis’ - and Mustafa Suleyman’s The Coming Wave covers it in meticulous detail.
Then let’s take Google DeepMind’s attempt at defining AGI:
(AGI is an) AI system (or collection of systems) that outperforms 99% of skilled adults at most cognitive tasks.
‘Most cognitive tasks’ - huh? That is so broad it’s non-committal. What is ‘most’? How do we define a ‘cognitive task’? The sheepskin rug has doubled in size, if not grown exponentially.
Artificial ‘Capable’ Intelligence
Mustafa Suleyman has a more practical take. He wrote in MIT Technology Review two years back that we should instead consider ‘ACI’, or Artificial Capable Intelligence, with his ‘Modern Turing Test’ being to tell an AI to turn a $100k investment into $1m, with little more in its prompt than ‘just do it.’ In hindsight, he rather overegged how close that could be. After all, Anthropic’s attempt to set up a Claude agent to run an ecommerce store failed the test in fairly hilarious ways.
What Suleyman is getting at is that there’s probably a bridge, an evolutionary step, before we reach AGI, and wealth-generating agents are a neat conceptualisation of it. But is it realistic?
The case against
Having worked, with some frustration, with agents and automation for most of this year (I don’t deny they are good, just not that game-changing in many cases yet), I can say with some certainty that AI is still a long way from fundamentally changing everyone’s bank balance or global GDP.
Signal President Meredith Whittaker pretty much delivered the master of all takedowns of all things agent in this video. Just watch it: it covers all the reasons you were already suspecting that rapid AI transformation probably won’t happen.
Julian Togelius also wrote a great MIT Press Essential Knowledge book on AGI. My main takeaway? It’s very difficult to define what ‘intelligence’ is. Can an AI really be that clever if it has no lived experience? Self-driving Teslas kind of do, I guess, but then do they have any emotional intelligence? Are they conscious? Almost every AI book I’ve read this year has a chapter on that last point and goes around the same philosophical circle. Maybe, maybe not.
There’s also Chief Genius at Meta Yann LeCun, who is probably the most amusing anti-AI-hype person in the world. It’s funny because he shouldn’t be, given his position. His response to being asked if LLMs can lead to AGI: ‘There’s no way.’
The interviewer then asked, ‘In your opinion?’
‘There’s no way.’ I’d never heard of him until I saw this interview. After watching it I thought: this guy is making a lot of sense. Perhaps Meta just wants to throw down the gauntlet over Sam Altman’s claims? It’s a strongly contrasting position, which is funny because Meta now appear to be poaching all of OpenAI’s best engineers.
I’m not as intelligent as anyone mentioned above (or maybe I am?) but I have a few thoughts on AGI. Plainly, it isn’t really a thing and it probably never will be. There will be no single AGI moment.
As an aside, I do think we will have an ‘intelligence explosion’ in the next 15 years, but that is not AGI. It is way beyond it. Superintelligence is something where the limits of knowledge are governed by the laws of physics, not the cognitive power of humans - and with that come eye-watering existential risks.
Meanwhile, I feel like I come across advanced AI most days, just from running prompts. I came to a strange realisation just last week: an agent I’d prompted performed a pretty superhuman analysis in 5 minutes, on a task that would have taken me more than 5 days (and would not have been commercially worth it).
To me this sort of thing will just be a gradual evolution, and hopefully we retain our jobs long enough to make it happen, just before it figures out how to deliver us all a healthy Universal Basic Income. Is that the moment of AGI? I don’t know. I’ll probably be sat on a deckchair, drinking a margarita, thankful that I never have to look at my phone again, as every other human struggle is ticked off its giant list. But such hopes hang in a delicate balance, and with them comes a rising fear.
Slightly late this week. Running on fumes.