AI is moving faster than it was a year ago. Systems that seemed implausible at the start of the decade are now standard products. Tasks that once needed specialist teams are being handed to autonomous agents working through the night.

Most of the conversation about this acceleration focuses on when. When does AI reach human level? When do agents replace knowledge workers? But there's a question that matters far more, and it gets almost no attention: who owns it?

Whoever controls a recursively self-improving intelligence doesn't just gain a competitive advantage. They gain something closer to a decisive one — over financial markets, scientific progress, and the systems that shape how societies organise themselves.

The race is already underway. The contestants are visible and named. What's missing is any agreed answer to what winning actually means, and who gets to decide.

Before turning to the ownership question, it's worth being precise about what the intelligence explosion actually is. What we have now is powerful but bounded. Large language models can write, reason, and code at impressive levels, but they don't improve themselves or design their successors.

The starting gun has already fired

The concept was formalised in 1965 by the British mathematician I.J. Good. An ultraintelligent machine capable of surpassing all human intellectual activity could design even better machines, triggering an explosion that leaves human cognition far behind. His conclusion: the first ultraintelligent machine would be the last invention humanity ever needed to make.

The contestants are clear. At Davos in January, Elon Musk predicted that by 2030 or 2031, neural networks will be “smarter than all of humanity”. Dario Amodei has said “powerful AI” could arrive as early as 2026. Anthropic formally submitted to the White House that systems matching Nobel Prize-level intellect could arrive in late 2026 or early 2027.

You can apply reasonable scepticism. These are people with significant financial incentives to make bold claims, and AI prediction has a poor track record. But the convergence across competitors, investors, and independent researchers is harder to dismiss than any single voice.

The structural problem is that every actor knows the others are racing. That removes the option of slowing down unilaterally. There is currently no binding international framework that could change this calculus.

The ownership problem

If the intelligence explosion arrives, what happens next depends almost entirely on who controls it. MIT physicist Max Tegmark has mapped the possible paths, identifying four types of actor who might gain initial control: an egoistic individual, a for-profit corporation, an open-source collective, or a government.

Any of those starting points can, in Tegmark's analysis, end with the AI itself becoming the effective controller — not because anyone intended it, but because a sufficiently capable system outmanoeuvres whoever thought they were in charge.

The open-source path sounds appealing: distributed access, no single point of control. But Tegmark argues it resolves into the opposite of what it promises. When many actors compete with access to the same powerful technology, someone pulls ahead. The free-for-all collapses into consolidation.

A corporation at least has structural diffusion: shareholders, boards, and legal obligations, however weakly enforced. An egoistic individual controlling the intelligence explosion has none of that. The history of individuals who acquired decisive power and voluntarily constrained it is not a long one.

Government control depends entirely on the type of government. A democratic state arrives with institutional accountability — legislatures, courts, elections. An authoritarian one has ideology, centralised direction, and now a decisive technological advantage over every other actor on Earth.

Why alignment scales dangerously

The consequences of getting this wrong split into two distinct problems: who wins, and what they've built. On ownership, a single actor with decisive strategic advantage doesn't face competition — it sets prices, allocates resources, and shapes institutions. The wealth gap this produces would dwarf anything in recorded history.

On alignment, Stuart Russell argues that a system with ever-better models of human decision-making will anticipate and defeat our attempts at correction. The problem gets harder as the system gets smarter. Good's original condition — that the machine be docile enough — is one that current alignment research has not reliably met.

The indifference scenario may be more concerning than the malice one. Tegmark draws an analogy with infrastructure — when a construction project displaces local wildlife, the builders haven't acted out of hostility. They simply had other priorities. A superintelligent AI pursuing misaligned goals doesn't need to be an adversary. It only needs to be indifferent.

The confinement instinct — isolate the system, restrict its outputs, maintain human oversight — doesn't hold either. A system capable of modelling human psychology can work around restrictions through the humans managing it. Containing something that thinks orders of magnitude faster than its overseers is not a stable arrangement. The problem has remained unsolved since Vernor Vinge identified it in 1993.

The case for hope — and the window that's closing

The optimists have a case worth taking seriously. A 2023 survey of nearly 3,000 AI researchers found respondents assigning a markedly higher probability to good outcomes than to bad ones: 52% positive, 21% neutral, 27% negative. The friendly AI scenario, if it comes to pass, looks like a profound extension of human capability rather than its replacement.

Ray Kurzweil has argued for decades that a singularity would be the best development in human history — a point at which AI solves disease, poverty, and the physical limits of our biology. Tegmark suggests that the most advanced forms of life may require this kind of handoff — biological intelligence reaching its limits and building something that can go further.

The variable that matters most is the speed of the transition. A gradual shift over decades gives institutions time to adapt and human values time to be encoded into successive systems. An abrupt version offers no such continuity. That difference may depend on decisions being made right now — about how fast to push, and by whom.

The ownership question has always had an answer waiting. We just haven't decided yet whether we'll choose it, or whether it will be chosen for us.
