Google rolled out Gemini 3 on Wednesday, calling it its most advanced artificial intelligence model yet and highlighting strengths in reasoning, video production, and writing code. Company officials emphasized that the release is meant to be integrated into existing offerings rather than function only as a chat service, and they suggested the model, which begins appearing across Google products today, could help bolster revenue from Search.
“We are the engine room of Google, and we're plugging in AI everywhere now,” Demis Hassabis, CEO of Google DeepMind, an AI-focused subsidiary of Google’s parent company, Alphabet, said ahead of the announcement.
Hassabis acknowledged signs of froth in the AI sector, noting that a number of startups have attracted multibillion-dollar valuations without proven business models. Big tech firms, Google among them, are spending heavily to erect the data centers needed to train and operate large models, a build-out that has raised concerns about the possibility of a market correction.
But Hassabis argued that Google has buffers in place should a pullback occur. The company is already applying AI to established products such as Google Maps, Gmail, and Search. “In the downside scenario, we will lean more on that,” Hassabis said. “In the upside scenario, I think we've got the broadest portfolio and the most pioneering research.”
The company has been shipping consumer-facing tools that showcase practical uses of its systems. NotebookLM can auto-generate podcasts from written material, and AI Studio is intended to speed up the prototyping of applications. Google is experimenting with embedding the technology into gaming and robotics, areas Hassabis says might yield significant returns over the coming years regardless of broader market swings.
Gemini 3 is available immediately through the Gemini app and via AI Overviews, a Search feature that synthesizes information alongside standard results. In demonstrations ahead of the launch, Google showed how certain queries — such as a request about the three-body problem in physics — can trigger Gemini 3 to build a custom interactive visualization on the fly.
Robby Stein, vice president of product for Google Search, said at a briefing ahead of the launch that the company has seen “double-digit” year-over-year growth in queries phrased in natural language, the kind most likely aimed at AI Overviews. The firm also reported a 70 percent spike in visual search, a use case that depends on Gemini’s ability to analyze photographs.
Even after years of investment and key technical advances — including inventing the transformer model that powers most large language models — Google was rattled by ChatGPT’s sudden prominence in 2022. The chatbot propelled OpenAI into the spotlight for AI research and offered a different, sometimes easier, route for people seeking information online.
Apprehension that generative systems would rapidly displace classic search results has cooled as Google closes gaps with competitors. The company is reportedly nearing a deal with Apple to supply Gemini for Siri, according to Bloomberg. Nano Banana, an image-generation and editing tool, has drawn user attention. Crucially, generative features do not yet appear to be eroding Google’s core search franchise: Alphabet said in its quarterly earnings this July that AI Overviews had driven a 10 percent increase in search queries.
OpenAI’s most recent frontier model, GPT-5, disappointed some observers when it arrived in August; critics described it as underwhelming, and several users complained about a shift toward a more formal persona in its outputs.
Google claims Gemini 3 outpaces GPT-5 and other rivals on a number of benchmark leaderboards, including LMArena, a popular site for model scoring. The company says the new model handles simulated reasoning tasks better by breaking problems into parts and planning across longer horizons, improvements that can lift the performance of agent systems that call external tools and consult the web.
“This is our most intelligent model,” Koray Kavukcuoglu, CTO of Google DeepMind, said during the prelaunch briefing. “It is the best model in the world for multimodal understanding.”
Kavukcuoglu pointed to data and reach as advantages that feed model improvement. The Gemini app draws 650 million monthly users, Google reports; 13 million developers work with Google’s models; and 2 billion people consult AI Overviews each month. As people interact with chatbots or AI-powered apps, their responses can serve as training signals, revealing areas where a model needs stronger expertise. Kavukcuoglu added that Google's in-house chip design work and its large data center footprint help the company operate its systems at scale. “We have a very differentiated full-stack approach,” he said.
Google plans to roll Gemini 3 out to subscribers of Google AI Plus and Google AI Pro, who pay $19.99 and $249.99 per month, respectively, over the next few weeks. The company is also introducing Antigravity, a new AI programming tool powered by the model.
Bubble talk aside, Hassabis framed Gemini 3 as a foundation for successive advances in capability. “I still think we are five to 10 years away from what I would call proper full AGI,” he said. “And that may require one or two breakthroughs on top of the models that are just getting better and better.”

