I log in to Instagram occasionally these days. Each time I do, reality seems to have faded a little more. The endless slot machine of Reels now pays out a jackpot of synthetic media, but no one is winning.

LinkedIn is much the same. A torrent of bullshit assisted, if not entirely created, by ChatGPT. You can see it in the em-dashes. In the “it’s not X, it’s Y” constructions. The LinkedIn feed has turned into a complete load of bollocks (if it wasn’t already).

I say this as someone whose career has been intertwined with social media for nearly two decades. I’ve led social media at a national newspaper and built followings across multiple platforms (about 70,000 followers in total). I’m not a refusenik. But the destruction is becoming hard to ignore.

According to SEO firm Graphite, which analysed 65,000 English-language articles published between January 2020 and May 2025, 52% of newly published articles on the internet are now AI-generated. Cybersecurity firm Imperva found that 51% of all internet traffic is bots.

Developer Simon Willison, one of the early proponents of the term “slop,” told the Guardian that naming the problem matters. He hoped the word would do for AI garbage what “spam” did for junk email. Merriam-Webster named it 2025 Word of the Year.

The scale of the flood

On Facebook, researchers from Stanford and Georgetown tracked 120 pages frequently posting AI-generated images and found they collectively received hundreds of millions of engagements. Josh Goldstein of Georgetown called it a new kind of “engagement bait,” content that is visually arresting, trivially cheap to produce, and optimised for recommendation algorithms.

Facebook’s own data confirmed the trend: two of the top five most widely viewed images on the platform in Q3 2024 were AI-generated, viewed 38.6 million and 35.8 million times respectively. More than 31% of all content viewed on Facebook now comes from accounts unconnected to the user, up from 8% in 2021.

YouTube is similarly affected. A study by Kapwing found that more than 20% of videos shown to new users are AI slop. Globally, 278 AI slop channels have amassed 63 billion views and 221 million subscribers, generating around $117 million in revenue each year.

Behind this is a surprisingly mundane industry. Researchers at Queensland University of Technology interviewed a social media entrepreneur called Xiaonan who runs six TikTok accounts, each with more than 100,000 followers, earning over $5,500 in a single month. For a fee, he teaches others to replicate his success, sharing his most effective AI prompts, headlines and hashtags. His students are housewives, unemployed people, college students. You can do it from your phone, at home, for basically nothing.

The platforms are building this

Meta redesigned its home feeds into a “discovery engine,” pushing content users might not otherwise see. AI slop was perfect for that system: cheap, visual, optimised for engagement. Mark Zuckerberg made it explicit in Meta’s Q4 2025 earnings call, telling investors users would soon have an AI that understands them and can generate personalised content for them.

Zuckerberg calls this the “third era” of social media. The first era was friends and family. The second was creators. The third is AI. Meta launched an app called Vibes where users have generated over 20 billion AI images. Meta AI has more than a billion monthly active users.

YouTube’s 2026 product roadmap goes just as far. CEO Neal Mohan announced that creators will soon produce Shorts using AI avatars of themselves, digital twins that replicate their face, voice and presenting style. A creator records themselves once, builds an avatar, and that avatar produces content on their behalf indefinitely. Their face is on screen, their voice is speaking, but they’re not really there.

If AI slop captures human attention more cheaply than human content, why would platforms fight it? Even horrified scrolling counts as engagement. Meanwhile, both Meta and X have cut their moderation teams. The infrastructure for authenticity is being dismantled just as the tools for fakery become more powerful.

What gets lost

The weaponisation is already here. During the US government shutdown in late 2025, AI-generated videos made with OpenAI’s Sora 2 flooded TikTok showing fabricated scenes of people selling food stamps for cash. One clip got nearly 500,000 views. The slop manufactured consent for cutting a program that feeds 42 million people.

The BBC tracked a network of Pakistan-based creators collaborating on Meta’s monetisation program. Among one account’s posts, which attracted more than 1.2 billion views across four months, were AI-generated photos of fictional Holocaust victims. The Auschwitz Memorial called these images “a dangerous distortion” and said their own genuine posts were now attracting comments accusing them of using AI.

If AI slop drowns out authentic content, what will the next generation of models be trained on? Researchers call this “model collapse,” a degradation spiral where AI trained on AI produces increasingly generic, meaningless outputs. Social media will eat itself.

There is some pushback. Ed Newton-Rex, founder of non-profit Fairly Trained, argues that platforms should label and downrank outputs where human involvement is minimal. YouTube says it’s working to reduce the spread of low-quality AI content.

Social media was supposed to connect us, to let us share our lives with people we care about across distances that would otherwise be impossible. What we’re getting is a content slurry optimised for engagement metrics, increasingly generated by machines, consumed by other machines, with humans as an afterthought. The soul is being ripped out of the body.