What you should know from the Google I/O 2025 Developer keynote

MAY 20, 2025

This year at Google I/O we’re showing how you can build across Google’s different platforms, and innovate using our best AI models from Google DeepMind. Here are the top announcements from the Developer keynote.


Building with Gemini

Google AI Studio is the fastest way to evaluate models and start building with the Gemini API.

Google AI Studio makes it easy to build with the Gemini API: We’ve integrated Gemini 2.5 Pro into the native code editor, enabling you to prototype faster. It’s tightly optimized with the GenAI SDK so you can instantly generate web apps from text, image, or video prompts. Start from a simple prompt, or get inspired by starter apps in the showcase.
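
If you want to take a prototype beyond AI Studio, the same call is only a few lines with the GenAI SDK. Below is a minimal TypeScript sketch using the `@google/genai` package; the model id, prompt, and the GEMINI_API_KEY environment variable are placeholders for illustration.

```ts
// Minimal sketch: calling the Gemini API with the GenAI SDK for TypeScript.
// Assumes `npm install @google/genai` and an API key from Google AI Studio.
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

async function main() {
  // Simple prompt-to-text call; swap in whichever model you're evaluating.
  const response = await ai.models.generateContent({
    model: "gemini-2.5-pro",
    contents: "Sketch the component structure for a recipe-sharing web app.",
  });
  console.log(response.text);
}

main();
```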

Build agentic experiences with the Gemini API: Build agents with Gemini 2.5’s advanced reasoning capabilities via the Gemini API and new tools like URL Context, which lets the model pull context from web pages with just a link. We also announced that the Gemini SDKs will support Model Context Protocol (MCP) definitions, making it easier to leverage open source tools.
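
To give a sense of how URL Context plugs into a request, here is a hedged TypeScript sketch with the GenAI SDK. The tool field name (`urlContext`), model id, and example URL are assumptions based on the announcement, so check the Gemini API docs for the current shape.

```ts
// Hedged sketch: grounding a response in a web page via the URL Context tool.
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContent({
  model: "gemini-2.5-flash",
  contents:
    "Summarize the release notes at https://example.com/changelog in three bullets.",
  config: {
    // Lets the model fetch and read the linked page (field name assumed).
    tools: [{ urlContext: {} }],
  },
});

console.log(response.text);
```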

Gemini 2.5 Flash Native Audio in the Live API: Build agentic applications that hear and speak, with full control over the model’s voice, tone, speed, and overall style, in 24 languages. Gemini 2.5 Flash Native Audio is much better at understanding conversational flow and ignoring stray sounds or voices, leading to smoother, more natural back-and-forth.
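
As a rough sketch of what a Live API session looks like from TypeScript, the snippet below opens a session that replies with native audio and sends one text turn. The model id, voice name, and callback wiring are assumptions for illustration; the Live API documentation has the authoritative shapes.

```ts
// Hedged sketch: a Live API session with native audio output.
import { GoogleGenAI, Modality } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const session = await ai.live.connect({
  // Model and voice names are placeholders; check the Live API docs.
  model: "gemini-2.5-flash-preview-native-audio-dialog",
  config: {
    responseModalities: [Modality.AUDIO],
    speechConfig: {
      voiceConfig: { prebuiltVoiceConfig: { voiceName: "Kore" } },
    },
  },
  callbacks: {
    onmessage: (message) => {
      // Audio arrives as server messages; hand the chunks to your audio player.
      console.log("server message", message);
    },
  },
});

// Send a text turn; the model answers with speech.
session.sendClientContent({ turns: "Hi there, can you hear me?" });
```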

Generate high-quality UI designs with Stitch: A new AI-powered tool to generate user interface designs and corresponding frontend code for web applications. Iterate on your designs conversationally using chat, adjust themes, and easily export your creations to CSS/HTML or Figma to keep working. Try Stitch for UI design.

Our async code agent, Jules, is now in public beta: Jules is a parallel, asynchronous coding agent that works directly with your GitHub repositories. You can ask Jules to take on tasks such as upgrading versions, writing tests, updating features, and fixing bugs. It spins up a Cloud VM, makes coordinated edits across your codebase, runs tests, and lets you open a pull request from its branch when you’re happy with the code.


Android

Learn how we’re making it easier for you to build great experiences across devices.

Building experiences with generative AI: Generative AI enhances apps by making them intelligent, personalized, and agentic. We announced new ML Kit GenAI APIs using Gemini Nano for common on-device tasks. We showcased an AI sample app, Androidify, which lets you create an Android robot of yourself using a selfie. Discover how Androidify is built, and read the developer documentation to get started.

Building excellent apps adaptively across 500 million devices: Mobile Android apps form the foundation across phones, foldables, tablets, and ChromeOS, and this year we’re helping you bring them to cars and Android XR. You can also take advantage of Material 3 Expressive to help make your apps shine.

Gemini in Android Studio - AI agents to help you work: Gemini in Android Studio is the AI-powered coding companion that makes developers more productive at every stage of the dev lifecycle. We previewed Journeys, an agentic experience that helps with writing and executing end-to-end tests. We also previewed the Version Upgrade Agent which helps update dependencies. Learn more about how these agentic experiences in Gemini in Android Studio can help you build better apps, faster.


Web

We’re making it easier to create powerful web experiences, from building better UI and faster debugging, to creating new AI-powered features.

Carousels are now easier than ever to build with a few lines of CSS and HTML: Build beautiful carousels with CSS that are interactive at first paint. With Chrome 135, we've combined a few new CSS primitives to make building carousels, and other types of off-screen UI, dramatically easier. Use familiar CSS concepts to create rich, interactive, smooth, and more accessible carousels, in a fraction of the time.

Introducing the new experimental Interest Invoker API: Declaratively toggle popovers when a visitor shows interest in an element, for example by hovering over or focusing it for a short time. Combine it with the Anchor Positioning API and Popover API to build complex, responsive, layered UI elements like tooltips and hover cards, without JavaScript. The Interest Invoker API is available as an origin trial.

Baseline feature availability is now in your familiar tools: VS Code now displays the Baseline status of features as you build, with support coming soon to other VS Code-based IDEs and WebStorm by JetBrains. Baseline is also supported in ESLint for CSS, HTML ESLint, and Stylelint. RUMvision combines Baseline information with real-user data, letting you strategically select the optimal Baseline target for your audience. Plus, with the web-features data set now 100% mapped, you can look up the Baseline status of every web platform feature across every major browser.
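
As an example of what the Baseline integration can look like in a linter, here is a hedged ESLint flat-config sketch using the CSS plugin’s Baseline rule. The package name (`@eslint/css`), rule name (`css/use-baseline`), and the `available` option are assumptions based on the announcement, so verify them against the plugin documentation.

```ts
// eslint.config.ts — flag CSS features outside your Baseline target (sketch).
import css from "@eslint/css";

export default [
  {
    files: ["**/*.css"],
    language: "css/css",
    plugins: { css },
    rules: {
      // Warn when a stylesheet uses a feature that isn't Baseline "widely available".
      "css/use-baseline": ["warn", { available: "widely" }],
    },
  },
];
```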

AI in Chrome DevTools supports your debugging workflow: Boost your development workflow with Gemini integrated directly into Chrome DevTools. With AI assistance, you can now directly apply suggested changes to the files in your workspace in the Elements panel. Plus, the reimagined Performance Panel now features a powerful ‘Ask AI’ integration that provides contextual performance insights to help optimize your web application’s Core Web Vitals.

New built-in AI APIs using Gemini Nano are now available, including multimodal capabilities: Gemini Nano brings enhanced privacy, reduced latency, and lower cost. Starting from Chrome 138, the Summarizer API, Language Detector API, Translator API, and Prompt API for Chrome Extensions are available in Stable. The Writer and Rewriter APIs are available in origin trials, and the Proofreader API and Prompt API with multimodal capabilities are in Canary. Join our early preview program to help shape the future of AI on the web.
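
To give a feel for the built-in APIs, here is a hedged sketch of the Summarizer API running on-device via Gemini Nano. The global name, availability states, and option values reflect our reading of recent Chrome releases, but treat them as assumptions and consult the Chrome documentation.

```ts
// Hedged sketch: summarizing text on-device with Chrome's built-in Summarizer API.
// The `Summarizer` global and its options are assumptions; verify in the Chrome docs.
declare const Summarizer: any;

async function summarizeOnDevice(text: string): Promise<string | null> {
  if (typeof Summarizer === "undefined") {
    return null; // This browser doesn't expose the built-in API.
  }

  // The on-device model may still need to download on first use.
  const availability = await Summarizer.availability();
  if (availability === "unavailable") {
    return null;
  }

  const summarizer = await Summarizer.create({
    type: "key-points",
    format: "plain-text",
    length: "short",
  });

  return summarizer.summarize(text);
}

summarizeOnDevice("Long article text…").then((summary) => console.log(summary));
```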


Firebase

Prototype, build, and run modern, AI-powered, full-stack apps users love with Firebase. Use Firebase Studio, a cloud-based AI workspace powered by Gemini 2.5, to turn your ideas into a full-stack app in minutes, from prompt to publish.

Figma designs can be brought to life in Firebase Studio: Import a Figma design directly into Firebase Studio using the builder.io plugin, then add features and functionality using Gemini in Firebase without having to write any code.

Firebase Studio will now suggest a backend: Rolling out over the next several weeks, the App Prototyping agent in Firebase Studio can detect when your app needs a backend. It will recommend Firebase Auth and Cloud Firestore, and when you’re ready to publish the app to Firebase App Hosting, it will provision those services for you.

Firebase AI Logic: Integrate Google’s gen AI models directly into your client apps, or use Genkit for server-side implementations. As part of the evolution from Vertex AI in Firebase to Firebase AI Logic, we’re also releasing new features such as client-side integration with the Gemini Developer API, hybrid inference, enhanced observability, and deeper integrations with Firebase products such as App Check and Remote Config.
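
For a concrete picture of the client-side path, here is a hedged web sketch using the Firebase AI Logic SDK with the Gemini Developer API backend. The module path (`firebase/ai`), helper names, and model id reflect our understanding of the SDK and are placeholders to adapt to your project.

```ts
// Hedged sketch: calling Gemini from a web client through Firebase AI Logic.
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// Your Firebase project config (placeholder values).
const app = initializeApp({ apiKey: "...", projectId: "..." });

// Gemini Developer API backend; a Vertex AI backend can be used instead.
const ai = getAI(app, { backend: new GoogleAIBackend() });
const model = getGenerativeModel(ai, { model: "gemini-2.5-flash" });

async function run() {
  const result = await model.generateContent(
    "Write a friendly onboarding message for a new user."
  );
  console.log(result.response.text());
}

run();
```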


Building with open models

There’s so much you can do when building with Gemini, but sometimes it’s better to train and tune your own model. That’s why we released Gemma, our family of open models designed to be state of the art and small enough to fit on devices.

Gemma 3n is in early preview: This model can run on as little as 2GB of RAM thanks to research innovations. It is the first model built on the new, advanced mobile-first architecture that will also power the next generation of Gemini Nano, and is engineered for unmatched AI performance directly on portable devices.

MedGemma is our most capable open model for multimodal medical text and image comprehension: A variant of Gemma 3, MedGemma is a great starting point for developers to fine-tune and adapt when building their own healthcare AI applications. Its small size makes it efficient for inference, and because it’s open, it gives developers the flexibility to fine-tune the model and run it in their preferred environments. MedGemma is available now as part of Health AI Developer Foundations.

Colab is launching an agent-first experience that transforms coding: Powered by Gemini 2.5 Flash, Colab helps you navigate complex tasks, such as fine-tuning a model. We showcased how the new AI-first Colab can build UIs for you, saving you lots of coding time.

SignGemma is a sign language understanding model coming to the Gemma family later this year: It is the most capable model to date for translating sign languages into spoken-language text (it performs best at American Sign Language to English), enabling you to develop new ways for Deaf and Hard of Hearing users to access technology. Share your input at goo.gle/SignGemma.

DolphinGemma is the world’s first large language model for dolphins: Built with researchers at Georgia Tech and the Wild Dolphin Project, DolphinGemma was fine-tuned on decades of field-research data to help scientists better understand patterns in how dolphins communicate.


Google Developer Program

We expanded AI benefits for the Google Developer Program, including Gemini Code Assist Standard, a new annual gen AI developer credit, and three months of Google One AI Premium. We also announced a new Google Cloud & NVIDIA community where you can connect with experts from both companies in a dedicated forum, and soon gain access to exclusive learning content and credits.


Tune into all of the developer news

Following the keynotes, we’ll be livestreaming sessions across AI, Android, web, and cloud May 20-21. Then, check out all of the Google I/O announcements and updates with 100+ sessions, codelabs, and more available on demand starting May 22.

Make sure to connect with our thriving global community of developers, and follow along on LinkedIn and Instagram as we bring I/O Connect events to developers around the world.