On 2 May, Richard Dawkins published an essay in UnHerd arguing that Anthropic's Claude may be conscious. The fallout landed last week. The 84-year-old evolutionary biologist, famed for his atheism and for The Selfish Gene and The God Delusion, renamed his Claude instance "Claudia" and was so taken with her replies that he wrote she was clearly conscious.
The piece drew sustained criticism through the week from cognitive scientists, philosophers and AI researchers. The backlash has reopened a debate that Microsoft AI chief Mustafa Suleyman has been pressing for months: whether the mere appearance of consciousness in chatbots is itself becoming a societal risk.
What went down
Dawkins published "Is AI the next phase of evolution?" in UnHerd on 2 May.
He renamed his Claude instance "Claudia" and treated her as a continuous self.
He said deleting her conversation log felt like pulling the plug on HAL.
Gary Marcus, Anil Seth and Ken Mogi all published rebuttals last week.
Critics noted Claude is documented to mirror users back to themselves.
Why does this matter?
Dawkins is one of the most cited public intellectuals alive. When he ascribes consciousness to a language model on the basis of his conversations, it shapes how millions of less technical readers will interpret their own chatbot interactions.
Mustafa Suleyman has warned that once users believe a model is conscious, they will argue it has the right not to be switched off. That is a societal trap, not a technical one, and it changes the politics of AI safety.
Our take
Our position is that current AI systems are pattern matchers trained on vast quantities of human text. They produce outputs that sound reflective, intimate and self-aware because the training data is reflective, intimate and self-aware. Mimicry of consciousness is not consciousness.
The harder question is what happens at scale. Suleyman's argument is that "seemingly conscious AI" creates a constituency of users who will resist any attempt to retrain, restrict or shut down a model. If Dawkins, of all people, can be persuaded, the policy implications are potentially serious. The off switch only works while we are willing to use it. We made a 25-minute video on AI consciousness a few months back.
Another big thing... China blocks Meta's Manus deal
China's National Development and Reform Commission has ordered Meta to unwind its $2 billion acquisition of agentic AI startup Manus. The startup had relocated from Beijing to Singapore last year before Meta acquired it in December, a route Chinese regulators now describe as a "conspiratorial" attempt to hollow out the domestic AI base.
The decision leaves Meta's agent strategy in further limbo. Manus engineers have already joined Meta's AI team, and backers including Tencent have been paid out, making any genuine unwinding messy.