Something strange happened while I watched an AI agent work for 15 minutes. Google Antigravity built exactly what I wanted without asking for my approval. It made decisions, corrected errors, and delivered. I thought: this is AGI.
After posting a video on the case against AGI, several viewers said they think we've already reached it. Venture capital firm Sequoia Capital recently published "2026: This is AGI," opening with: "While the definition is elusive, the reality is not. AGI is here, now."
Sequoia simplified AGI's definition to "the ability to figure things out." They argue three ingredients have converged: baseline knowledge from pre-training, reasoning from inference-time compute, and iteration through long-horizon agents.
The evidence is mounting
Research from METR shows the length of tasks AI agents can complete doubling roughly every seven months. If the trend holds, agents could complete tasks that take human experts a full day by 2028, and a full year by 2034.
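To get a feel for how that doubling compounds, here is a minimal back-of-the-envelope sketch. The one-hour baseline horizon, the 8-hour working day, and the 2,000-hour working year are my own round numbers for illustration; only the seven-month doubling time comes from the reported trend, so the exact calendar years depend on METR's baseline and success threshold.

```python
from math import log2

# Back-of-the-envelope extrapolation of a doubling trend.
# Assumptions (illustrative, not METR's exact figures): a baseline task
# horizon of ~1 hour, an 8-hour working day, a ~2,000-hour working year,
# and the reported doubling time of 7 months.
DOUBLING_MONTHS = 7
BASELINE_HOURS = 1.0

def months_to_reach(target_hours: float) -> float:
    """Months of steady doubling needed to grow the horizon to target_hours."""
    doublings = log2(target_hours / BASELINE_HOURS)
    return doublings * DOUBLING_MONTHS

for label, hours in [("a full working day", 8), ("a full working year", 2000)]:
    print(f"{label} ({hours}h): {log2(hours / BASELINE_HOURS):.1f} doublings, "
          f"~{months_to_reach(hours):.0f} months at the current rate")
```

Under these assumptions the sketch gives roughly 21 months to a day-long horizon and 77 months to a year-long one; whether that lands on 2028 and 2034 depends on the starting point and reliability threshold you choose.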
Dario Amodei, CEO of Anthropic, has predicted that powerful AI could come as early as 2026. Anthropic told the White House in March 2025 that powerful AI systems will emerge in late 2026 or early 2027, with capabilities matching Nobel Prize winners.
At Davos, Amodei said he has engineers who no longer write code themselves: they let the model write it and then edit the output. He predicted we might be six to 12 months from the model doing most of what those engineers do, end to end.
The definition debate becomes irrelevant
Models like GPT-5 and Gemini 3 are correct more often. If you measured the improvement against the hallucination-prone early years of ChatGPT, wouldn't you call it AGI?
They aren't seen that way because they haven't been let out of their box. Regular paid chatbot use still hasn't reached a majority of people. Only when these models are placed in multi-step workflows, with permission to act, do you see their potential.
In August 2025, Sam Altman said AGI is not a "super useful term" because it means different things to different people. Intelligence itself is not clearly defined: we struggle to pin down human intelligence with precision. IQ tests capture something, but nobody believes they capture everything.
If we can't define human intelligence, perhaps expecting a rigorous definition of AGI was always unrealistic.
The gentle singularity
In June 2025, Sam Altman published "The Gentle Singularity." His point is that we perceive the Singularity as a specific event, when we may already be moving through it.
When people live through historical moments, they rarely feel as seismic as they will look in retrospect. Historians bracket events with tidy dates, but reality doesn't work like that.
AGI development is likely more evolutionary than sudden. It feels like the generative AI era started with ChatGPT in November 2022, but generative AI existed before that, and AI research stretches back to the 1950s. Such shifts arrive gradually.
Sequoia's functional approach sidesteps this problem. They ask whether systems can figure things out autonomously. By that measure, the definition debate becomes irrelevant. The systems work or they don't.
My experience with Google Antigravity illustrated this. The system worked autonomously for 15 minutes, made decisions, corrected errors, and delivered.
The skeptics still make valid points
Yann LeCun, former Chief AI Scientist at Meta and Turing Award winner, remains critical. In mid-2025 he said there was "absolutely no way" we'd reach human-level AI by scaling up LLMs. He doubled down in December: "There is no such thing as general intelligence. It's an illusion."
Google DeepMind CEO Demis Hassabis said LeCun was "plain incorrect," yet Hassabis himself puts AGI more likely five years away. Both work, or have worked, at Big Tech corporations, which tend to give longer timeframes than smaller AI-focused companies. The hype from the newer companies may be investment-driven.
Tim Dettmers of Carnegie Mellon argues that AGI and superintelligence are physically impossible. He says we're near the limits of the current architecture, a point echoed by OpenAI's former Chief Scientist Ilya Sutskever.
I also find the human-level definition of AGI philosophically difficult. Human intelligence is broad: only a minority of people are good at coding, and far fewer are Nobel Prize winners. If we create a "country of geniuses in a datacenter," it wouldn't take long to blow past human level.
In Superintelligence, Nick Bostrom writes: "The train may not stop, pause or even decelerate at Humanville Station." So powerful AI is not AGI; it's superintelligence.
In my previous video, I presented the case against AGI and found it persuasive. The definition problem, technical constraints, and missed predictions remain true.
But something shifted in my direct experience. Watching an AI work autonomously for 15 minutes, making decisions and correcting errors felt different from using a tool.
The Sequoia thesis isn't that we've solved consciousness. Their claim is simpler: AI systems can now figure things out. They work autonomously, iterate on mistakes, and solve problems they weren't trained for. By that definition, AGI isn't coming. AGI is here.
Whether that's cause for celebration or concern depends on what happens next. Altman talks about a "gentle singularity." Yudkowsky warns that if anyone builds it, everyone dies. Both could be right. What seems clear is we've crossed a threshold. The debate about when AGI arrives may have been settled while we argued about what it means.