Am I an AI doomer? My latest video may suggest I am. But no, I'm not; rather, I'm conscious of the risks that come with transformative technology. AI will probably drive the greatest technological change in history, and not all of that change will be positive.

But here’s a doomer stat: nearly half of AI researchers believe there is at least a 1 in 10 chance their work could lead to human extinction. That finding comes from a 2023 survey of 2,778 researchers, the largest study of its kind. These are the people building the technology.

But when asked what worried them most, it wasn’t a superintelligence seizing control. Over 70% expressed serious concern about misinformation, manipulation and authoritarian misuse. The slow erosion of truth and freedom concerned them more than extinction.

When we see the Pentagon leaning on Anthropic over its red lines on mass surveillance and autonomous weapons, we can see AI's ominous potential.

In a 2024 paper, Carnegie Mellon professor Atoosa Kasirzadeh gave this concept a name: the Accumulative AI x-Risk Hypothesis. Her argument is that we have been too focused on a single catastrophic event, a rogue superintelligence or a robot uprising, while ignoring a more plausible pathway: one where smaller disruptions accumulate and converge, each individually manageable, but together eroding the structures that hold society together.

Kasirzadeh organises these risks into six overlapping categories under the acronym MISTER:

  1. Manipulation

  2. Insecurity

  3. Surveillance

  4. Trust erosion

  5. Economic destabilisation

  6. Rights infringement

None of them are hypothetical. All six are already visible.

When truth becomes optional

Trust in media has sunk to an all-time low, according to Gallup polling. Video and image generators now produce output at near parity with real photography, and social media platforms are struggling to keep up. In Italy, Prime Minister Giorgia Meloni had her face swapped into a pornographic video that was viewed millions of times. Hong Kong pro-democracy campaigner Carmen Lau discovered that AI images depicting her as a sex worker had been sent to her neighbours.

The problem extends beyond individual deepfakes. A recent analysis found that social media posts describing London as “dangerous” spiked from 874 in 2008 to over 258,000 in 2024, many originating from new accounts with AI-generated profile pictures. London’s homicide rate, meanwhile, just hit its lowest level in years.

AI hallucinations are now entering official decision-making. Lawyers and judges have cited court cases that turned out never to have existed. In November 2025, West Midlands Police banned Israeli football fans from a match at Villa Park based on an intelligence report referencing a previous fixture that never happened. The report was AI-generated. By January 2026, it had cost the Chief Constable his job.

Insecurity and the new hybrid warfare

In September 2025, Anthropic detected what it described as the first large-scale cyberattack executed without substantial human intervention. A Chinese state-sponsored group had manipulated Anthropic's Claude Code tool to autonomously infiltrate around thirty targets, including technology companies, financial institutions and government agencies. The AI performed 80-90% of the campaign, making thousands of requests, often several per second, at a pace human hackers could not match.

Britain has experienced something of a hacking epidemic, with Marks and Spencer, Harrods, Heathrow Airport and Jaguar Land Rover all hit in recent years. In September 2025 a cyberattack took Jaguar Land Rover offline, with knock-on effects on supply chains and the wider British economy. At the same time, mysterious drone incursions into European airspace closed airports and scrambled Danish and Polish fighter jets.

The effect is to destabilise democratic governments, which appear slow to react and unable to identify real culprits. The public loses faith in institutions that seem powerless against threats they can barely comprehend.

The economic fault line

In May 2025, Anthropic CEO Dario Amodei warned that AI could eliminate half of all entry-level white-collar jobs and push unemployment to 10-20% within one to five years. AI companies and governments need to stop “sugar-coating” what’s coming, he said: mass job displacement across technology, finance, law and consulting.

LinkedIn’s chief economic opportunity officer Aneesh Raman wrote in the New York Times that AI is already breaking “the bottom rungs of the career ladder,” with junior developers, paralegals and analysts at highest risk. Amodei himself acknowledged the technology could cure diseases and generate extraordinary growth, but also painted a bleaker possibility: “Cancer is cured, the economy grows at 10% a year, the budget is balanced, and 20% of people don’t have jobs.”

If unemployment reached 20%, we would be approaching territory not seen since the Great Depression, when the rate peaked at 25% in 1933. That era brought bread lines, bank failures and the rise of political extremism across Europe and America.

In The Coming Wave, DeepMind co-founder Mustafa Suleyman imagined containment as a narrow path, wreathed in fog, with a precipice on either side. Total openness to all experimentation leads to catastrophe. Total surveillance and closure leads to dystopia. The only way through is to maintain balance, step by step. Whether national AI safety institutes, responsible scaling policies and international declarations will prove sufficient is the defining question of the next decade.
