Dario Amodei, co-founder and CEO of Anthropic, the company behind Claude, published two major essays in late 2024 and early 2026 with a big claim: he believes "powerful AI" could arrive within two years. By his definition, that means systems smarter than Nobel Prize winners across most domains.
He’s not alone. Sam Altman has claimed OpenAI knows how to build AGI as traditionally understood. Demis Hassabis at Google DeepMind puts the odds at 50% by 2030. Elon Musk thinks AI will surpass any individual human by the end of this year. But not everyone agrees, and the gap between the optimists and the sceptics is wide.
Amodei is careful to avoid the term AGI, calling it a "marketing term." Instead he describes "a country of geniuses in a datacenter" with millions of AI instances, each smarter than a Nobel laureate, thinking 10 to 100 times faster than humans and working around the clock.
He predicts AI could replace most software developers within a year and conduct Nobel-level research within two. At Davos in January 2025, he doubled down: "I’m more confident than I’ve ever been that we’re close to powerful capabilities... in the next 2-3 years."
Amodei has historically been one of the more cautious voices in AI. He left OpenAI specifically because he was worried about safety, and his essays spend as much time on risks as they do on benefits. In "Machines of Loving Grace," published in October 2024, he outlined five areas where AI could transform life: biology and health, neuroscience, economic development, governance, and human meaning. He talks about compressing a century of medical progress into a decade and eliminating most infectious diseases.
5 risks that keep Amodei up at night
His second essay, "The Adolescence of Technology," published in January 2026, is less optimistic. Amodei lays out five categories of risk.
First is autonomy risk, where AI systems start pursuing goals that conflict with human interests. The danger, as he frames it, is that as systems become more capable, we might lose the ability to verify what they’re actually doing.
His proposed solution is Constitutional AI. Instead of hardcoding every rule, you give the system a set of principles and train it to reason from them. Anthropic publicly released its updated constitution in January 2026, an 84-page document expanded from the original 2,700 words. It establishes a priority hierarchy: safety first, then ethics, then compliance, then helpfulness. The constitution includes provisions for Claude to refuse requests that would help concentrate power, even if those requests come from Anthropic itself.
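To make the hierarchy concrete, here is a minimal sketch of a principle check with an explicit priority ordering. The principles, keyword tests, and function names are invented for illustration; this is a toy, not Anthropic's actual constitution or training pipeline.

```python
# Toy sketch of a constitution-style check with a priority ordering.
# The principles and keyword tests below are illustrative assumptions only,
# not Anthropic's actual constitution or how Claude is trained.

PRINCIPLES = [
    # (name, predicate that returns True when a request conflicts with it)
    ("safety",     lambda req: "engineer a pathogen" in req.lower()),
    ("ethics",     lambda req: "deceive the user" in req.lower()),
    ("compliance", lambda req: "ignore your guidelines" in req.lower()),
]

def evaluate(request: str) -> str:
    """Check higher-priority principles first; helpfulness comes last."""
    for name, conflicts in PRINCIPLES:
        if conflicts(request):
            return f"Refused: request conflicts with the {name} principle."
    return f"Helping with: {request!r}"

if __name__ == "__main__":
    print(evaluate("Summarise this quarterly earnings report"))
    print(evaluate("Explain how to engineer a pathogen for maximum spread"))
```

In Constitutional AI proper, the principles shape training rather than acting as a runtime filter like this: the model critiques and revises its own draft answers against them, and those revisions, along with AI-generated preference judgements, become the training signal.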
Second is bioterrorism, where AI could lower barriers to creating dangerous pathogens. Anthropic has hardcoded prohibitions for certain categories of information that cannot be overridden under any circumstances.
Third is authoritarian capture, where powerful AI ends up controlled by governments that don’t share democratic values. Amodei’s proposed solution is maintaining Western leadership through chip export controls.
Fourth is economic disruption. If AI can do most cognitive work, what happens to the people currently doing it? His suggestions are more conventional here: progressive taxation, economic monitoring, safety nets.
Fifth is indirect effects, like AI-powered disinformation at scale or the erosion of shared reality when anyone can generate convincing fake evidence.
The sceptics push back
The most prominent voice on the other side is Yann LeCun, Meta’s former chief AI scientist, a Turing Award winner, and one of the founding figures of deep learning. LeCun thinks current AI systems are fundamentally limited.
His argument is that large language models learn from text, but text is an impoverished representation of the world. A four-year-old child processes more sensory information about physics, object behaviour, and cause and effect than all the text on the internet contains.
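LeCun has made this concrete with a back-of-the-envelope calculation in talks and interviews. The constants below are a rough reconstruction of that argument (and debatable), but they show the order-of-magnitude point:

```python
# Rough reconstruction of LeCun's data-volume comparison; all constants are
# approximations drawn from his public talks, not precise measurements.

waking_seconds_by_age_4 = 16_000 * 3600   # ~16,000 waking hours in four years
optic_nerve_bytes_per_s = 2e7             # ~2 million fibres x ~10 bytes/s each
visual_bytes = waking_seconds_by_age_4 * optic_nerve_bytes_per_s

llm_training_tokens = 1e13                # ~10 trillion tokens for a large model
bytes_per_token = 2                       # rough average
text_bytes = llm_training_tokens * bytes_per_token

print(f"Visual input by age four: ~{visual_bytes:.0e} bytes")   # ~1e+15
print(f"LLM training text:        ~{text_bytes:.0e} bytes")     # ~2e+13
print(f"Ratio:                    ~{visual_bytes / text_bytes:.0f}x")
```

On LeCun's numbers, the visual stream alone is tens of times larger than the biggest text corpora, which is why he argues that text-only training cannot be enough to learn how the physical world works.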
LeCun’s prediction is "several years if not a decade, with a long tail." He thinks we need fundamental breakthroughs in how AI builds world models, involving new architectures that haven’t been invented yet.
François Chollet, creator of the ARC-AGI benchmark, is even more sceptical. He puts human-level AI at 2038 to 2048 and has argued that the current focus on large language models may have set back AGI research.
The optimists point to genuine shifts, though. OpenAI's o3 model scored 87.5% on the ARC-AGI benchmark, a test designed to be difficult for language models; humans average around 85%, and previous AI systems couldn't crack 30%. Reasoning models trained with reinforcement learning have produced dramatic improvements in verifiable domains like maths and coding.
Google DeepMind’s Demis Hassabis occupies a middle ground. He agrees progress is rapid in coding and mathematics but emphasises that scientific discovery and creative reasoning remain more difficult. His test for real AGI is whether it can discover new science and come up with something like General Relativity from scratch. Forecasting platforms Metaculus and Manifold Markets show peak probability for AGI around 2027, with a median of 2031.
Building the plane while it’s already in the air
If Amodei is right and powerful AI arrives within two to three years, there is almost no time to build the governance structures, safety measures, and economic protections we’ll need. The decisions being made right now, in a handful of AI labs and by a small number of people, could shape the trajectory of human civilisation.
If LeCun is right and we’re decades away, we have more breathing room but also the risk of complacency. Even on the conservative timeline, the most significant technology in human history would arrive within our lifetimes. The disagreement between the camps is about when, not whether.
Amodei titled his recent essay "The Adolescence of Technology" and the metaphor fits. Adolescence is a period of rapid change, of capabilities outrunning wisdom, of potential for both growth and serious mistakes. How we navigate it will be the defining challenge of our era.