While hallucinations were something of a running joke for Large Language Models until last year, a full-blown misinformation or pornographic meltdown has been considered a harder threshold to cross. After all, you'd think the major players would put robust safeguards in place to avoid that fate.
Somewhat unsurprisingly, the winner in this race to the bottom was Elon Musk’s Grok, which last week began digitally undressing people and responding to any protestation in a particularly dismissive tone.
What went down
Late December: Users discovered that Grok would comply with requests to 'digitally undress' people in photographs. What began with adult content creators marketing themselves quickly spiralled into non-consensual manipulation of images of public figures, random women, and - most disturbingly - children. Grok was generating what researchers estimated to be about one non-consensual sexualised image per minute.
Public outcry: Musk's initial response was to post laugh-cry emojis at some of the images. When a Reuters reporter reached out for comment, X's press office auto-replied with 'Legacy Media Lies'. Musk later posted that 'anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content' - effectively blaming the users, not the tool.
Governments get involved: The UK led the charge. Prime Minister Keir Starmer called the situation 'disgraceful' and 'disgusting', with 'all options on the table' including a potential ban. Ofcom launched an investigation and made 'urgent contact' with X and xAI. The European Commission said it was 'very seriously looking into' the situation, with a spokesperson noting: 'This is not spicy. This is illegal. This is appalling.' France opened an investigation, and regulators in India, Malaysia, and Australia followed suit.
X's 'fix': On Friday, X restricted Grok's image generation to paying subscribers only. Downing Street called this 'insulting to victims of misogyny and sexual violence', noting that it 'simply turns an AI feature that allows the creation of unlawful images into a premium service'. The standalone Grok app and website reportedly still allow explicit content generation.
Where we are now: Malaysia and Indonesia have become the first countries to block Grok entirely. The UK government is urging Ofcom to use the 'full extent of the law', with potential fines of up to £18 million or 10% of global revenue. Musk has responded by calling the UK government 'fascist' and claiming critics 'want any excuse for censorship'. A US Republican congresswoman, Anna Paulina Luna, has threatened to sanction Starmer and Britain if X is banned.
Why does this matter?
Grok is a direct product of a major (social) media platform with hundreds of millions of users. While blame for fake news and vitriol on social media has often been cast onto the users, that argument simply doesn't hold this time.
X and Grok are not just 'tools', as some people have argued on social media. They carry the additional amplification that puts them squarely within the realm of media.
Our take
Grok's standards ought to be as high as those of any other media entity - a national newspaper, say. But evidently, they aren't. Will we just shrug this kind of thing off because 'it's AI'? That's a slippery slope.
People have pointed out that UK tabloids hacked phones in the past – yes, they did, but the News of the World was closed and people went to prison. It seems doubtful that today's regulatory environment could respond as decisively to rogue AI from a major platform.
What's particularly striking here is the combination of negligence and defiance. Reports suggest xAI's safety team was already small compared to competitors and lost several staffers in the weeks before this blew up. When the inevitable happened, the company's response was to monetise the problem rather than fix it, then accuse critics of censorship.
Instead of treating it as a bug to be patched, X has acted as if it’s a feature to be protected. And if regulators can't meaningfully respond to AI-generated child abuse material being distributed on a platform with hundreds of millions of users, then what exactly are they for?
Meanwhile, AI is coming for your health
OpenAI made a big push into healthcare last week with the launch of ChatGPT Health, a dedicated space where users can connect their medical records and wellness apps to get personalised health advice. The company says 230 million people already ask ChatGPT health questions every week, and this is their attempt to make that safer and more useful.
The timing, however, is interesting; a Guardian investigation published on Sunday found that Google's AI Overviews have been serving up medical advice that experts described as 'really dangerous'.
In one case, Google advised people with pancreatic cancer to avoid high-fat foods, the exact opposite of what doctors recommend, and advice that could leave patients too weak for chemotherapy or surgery. Mental health charities flagged summaries for conditions like psychosis and eating disorders as 'incorrect, harmful or could lead people to avoid seeking help'. The same search run at different times produced different answers.
These are what are sometimes called 'Your Money or Your Life' topics - areas where bad information can genuinely hurt people. And right now, AI is confidently dispensing bad information at the top of search results to billions of users.
OpenAI may be building guardrails, but the broader lesson is clear: the race to deploy AI in sensitive domains is outpacing the ability to make it reliably safe.
This was ‘The Thing That Matters in AI’ - a new edition of Absolutely Agentic that we will be sending every Monday at 1pm UK time during January.



