Since ChatGPT’s debut in November 2022, Google has looked like something of a laggard in AI. It’s either the uncool uncle or the twilight of the evil empire. We can’t quite decide.
The most confusing thing for us is just how many vocal people within the SEO community have shunned Google AI products in favour of ChatGPT. It’s a curious phenomenon given that they are otherwise wedded to Google products like Drive, Gmail and Analytics.
But last week the empire struck back at the Google I/O conference. The uncle was suddenly hip. AI was everywhere, across everything. The Gemini 2.5 Pro model took centre stage with improvements in logic, memory, multilingual fluency, and speed (via the lightweight Gemini Flash).
A new ‘Gemini Live’ experience promises natural, two-way conversations, including voice interactions that feel closer to sci-fi than software.
Perhaps most striking was Project Astra, DeepMind’s prototype AI agent that can see, hear, and remember the world around it. Demonstrated through smartphone and glasses interfaces, Astra points toward a near-future assistant that is always present, always perceiving, and increasingly proactive.

Meanwhile, Android XR - a mixed-reality platform built in partnership with Samsung - marked Google’s deeper commitment to spatial computing, with smart glasses powered by Gemini offering real-time contextual overlays and object recognition.
In parallel, Google Search is undergoing its most radical overhaul yet. AI Mode now delivers full-page, Gemini-generated responses by default - a shift that could change how we discover, learn, and navigate online.
See the excellent write-up from Jessie Williams and Shelby Blackley for more on that one. It seemed to strike an existential chord with those working in search.
In the creative sphere, Google's Veo 3 and Flow tools showcased AI’s growing ability to generate hyper-realistic videos, audio, and multimedia content from simple prompts.
Lip syncing and the general appearance and interactivity of AI-generated humans in Google Veo 3 were giant steps forward. It is now a long way ahead of any other video generator.
The quiet colonisation of reality
Beneath the product announcements, Google’s true ambition at I/O 2025 was more existential: it wants to integrate AI so deeply into our environments that it becomes invisible. With Project Astra and Android XR, Google isn’t just building tools. It’s building the lens through which we’ll soon see and experience the world.
This is not the clunky ‘smart glasses’ of the past. What Google showed - and what OpenAI seems to be exploring through its reported hardware partnership with Jony Ive - is a world where reality itself becomes annotatable, filterable, and co-navigated by AI.
Your surroundings are no longer static; they’re dynamic canvases for suggestion and assistance. AI doesn’t just answer questions, it decides which questions are worth asking.

Jony Ive and OpenAI’s Sam Altman discussing their collaboration in a recent promo video.
The changing form of Search
This year’s I/O also made something else unmistakably clear: Google is no longer treating Search as the heart of the internet. At least, not in the form we know it.
With AI Overviews and Gemini-generated answers becoming the default experience, traditional search is being quietly phased out in favour of something more conversational, curated, and closed-loop. You don’t get ten blue links anymore; you get one opinionated synthesis, drawn from the web but often without pointing back to it.
For SEO, this marks a seismic shift. If fewer users are clicking through to websites - and more are receiving answers directly from AI - the traditional game of rankings, keywords, and traffic funnels becomes increasingly irrelevant.
Google is absorbing the web’s knowledge and re-serving it inside its own interface. The open web, long governed by search and discoverability, risks becoming an API for Google’s answer machine. The irony? The very company that organised the web is now abstracting it away.
Are we ready?
As Gemini becomes ever more multimodal - understanding voice, text, video, and intent - it positions itself as the intermediary between human cognition and the digital universe. You don’t Google things anymore. You just ask. Or more likely, you don’t even need to - Astra remembers, predicts, and responds before you think to.
It’s frictionless. And maybe a little eerie.
Because when AI starts perceiving your world, the boundaries between public and private blur. The power dynamic subtly shifts: it’s no longer just about what you want to know, but about what it decides you should know.
Who gets visibility? Whose version of truth surfaces first? When memory and perception are outsourced to predictive systems owned by ad-driven corporations, that’s not just a UX change - it’s a philosophical one.
Google’s announcements aren’t just about smarter devices. They’re signals of a broader shift: from using the internet, to living inside it.



