Gemini, Google and AI
Google’s Gemini Diffusion demo didn’t get much airtime at I/O, but its blazing speed—and potential for coding—has AI insiders speculating about a shift in the model wars.
Google says the release version of 2.5 Flash is better at reasoning, coding, and multimodality, while using 20–30 percent fewer tokens than the preview version. It is now live in Vertex AI, AI Studio, and the Gemini app, and will become the default model in early June.
Google’s AI models have a secret ingredient that’s giving the company a leg up on competitors like OpenAI and Anthropic. That ingredient is your data, and Google has only just scratched the surface of how it can use your information to “personalize” Gemini’s responses.
At its I/O presentation, the company revealed AI assistants of all kinds, smart glasses and headsets, and state-of-the-art AI filmmaking tools.
Google CEO Sundar Pichai said the company's Gemini AI chatbot app has more than 400 million monthly active users ahead of Google I/O 2025.
Google’s AI models are learning to reason, wield agency, and build virtual models of the real world. The company’s AI lead, Demis Hassabis, says all this—and more—will be needed for true AGI.
I pitched the new Google Gemini against ChatGPT for AI image generation – and the results shocked me
That's because the AI image generation trend took off after ChatGPT got a serious image upgrade in March, while at the time Gemini was still relying on Imagen 3, which had some limitations.
Shopping in Google's new AI Mode integrates Gemini's capabilities into Google's existing online shopping features, allowing consumers to use conversational phrases to find the perfect product.