Thank you for joining us at this year's Google I/O. AI is fundamentally changing what we build and how we go about building it. We’re committed to making AI accessible and helpful for every developer by providing the tools needed to innovate in this new reality. Read on to learn more about how we’re doing this across the full development stack.
Delivering models and APIs to build incredible AI-powered applications.
Streamline workflows and optimize AI-powered applications with 1.5 Flash, our model for high-frequency tasks, accessible through the Gemini API in Google AI Studio. Gemini 1.5 Flash and 1.5 Pro are now available in public preview in over 200 countries and territories, including the European Economic Area (EEA), the UK, and Switzerland. Developers can also join the waitlist in Google AI Studio to preview our breakthrough 2 million token context window in 1.5 Pro.
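For reference, a minimal Python call to 1.5 Flash through the Gemini API looks roughly like the sketch below, assuming the google-generativeai package is installed and you have an API key from Google AI Studio:

```python
import google.generativeai as genai

# Configure the SDK with an API key created in Google AI Studio.
genai.configure(api_key="YOUR_API_KEY")

# Gemini 1.5 Flash is tuned for high-frequency, lower-latency tasks.
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
    "Summarize the key Gemini API updates from Google I/O in three bullets."
)
print(response.text)
```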
Parallel function calling and video frame extraction are now supported by the Gemini API. And with the new context caching feature, coming next month, you’ll be able to streamline workflows for large prompts by caching frequently used context files at lower cost. This is ideal for scenarios like brainstorming content ideas based on your existing work, analyzing complex documents, or summarizing research papers and training materials.
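Here is a rough sketch of parallel function calling using the Python SDK's automatic function calling mode; the get_weather and get_time tools are hypothetical stand-ins for your own functions:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Hypothetical tools; names, signatures, and return values are illustrative only.
def get_weather(city: str) -> str:
    """Returns a short weather summary for the given city."""
    return f"Sunny and 22°C in {city}"

def get_time(city: str) -> str:
    """Returns the current local time for the given city."""
    return f"14:05 in {city}"

# Passing plain Python functions as tools lets the SDK derive function
# declarations from their signatures and docstrings.
model = genai.GenerativeModel("gemini-1.5-flash", tools=[get_weather, get_time])

# Automatic function calling runs the requested tools and feeds the results back
# to the model; a single turn can request multiple calls in parallel.
chat = model.start_chat(enable_automatic_function_calling=True)
reply = chat.send_message("What's the weather and the local time in Zurich?")
print(reply.text)
```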
We’re thrilled with the community’s response to our Gemma family of open models, which are built from the same research and technology as Gemini. Earlier this year we added CodeGemma and RecurrentGemma, and today we’re introducing PaliGemma for multimodal vision-language tasks. We also shared a sneak peek of Gemma 2, previewing a 27B-parameter model that outperforms models twice its size and runs on a single TPU v5e.
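As a quick illustration of how Gemma plugs into existing tooling, KerasNLP exposes the models as presets; the preset name below is an assumption and may vary with your library version:

```python
import keras_nlp

# Load a pretrained Gemma checkpoint as a Keras model (preset name illustrative).
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")

# Generate a short completion from the pretrained weights.
print(gemma_lm.generate("The best thing about open models is", max_length=64))
```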
Harness optionality and flexibility at every layer of the AI stack with our open ecosystem of tools. Use Keras to run workflows on top of TensorFlow, PyTorch, or JAX; fine-tune your models with LoRA using Keras in Colab; supercharge training speeds with OpenXLA; or accelerate data workloads in Colab with RAPIDS cuDF.
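To make the LoRA fine-tuning path concrete, here is a minimal sketch in the spirit of the Colab workflow, assuming KerasNLP on a JAX backend and a placeholder two-example dataset; swap in your own data and preset:

```python
import os
# Keras 3 lets you pick the backend (jax, tensorflow, or torch) before importing Keras.
os.environ["KERAS_BACKEND"] = "jax"

import keras
import keras_nlp

# Load a Gemma preset (name illustrative).
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")

# Enable LoRA: only small low-rank adapter weights are trained, keeping memory
# requirements low enough for a single Colab accelerator.
gemma_lm.backbone.enable_lora(rank=4)

# Placeholder dataset; substitute your own instruction/response strings.
train_data = [
    "Instruction: Say hello in French.\nResponse: Bonjour!",
    "Instruction: Say hello in Spanish.\nResponse: ¡Hola!",
]

gemma_lm.preprocessor.sequence_length = 128
gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(learning_rate=5e-5),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
gemma_lm.fit(train_data, epochs=1, batch_size=1)
```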
Deploy ML to edge environments, including mobile and web. Whether you need ready-to-use ML tasks, popular LLMs that run fully on-device, or the ability to bring your own custom models or model pipelines, you'll find a streamlined suite of tools in Google AI Edge. Expanded TensorFlow Lite support lets you bring PyTorch models directly to your mobile users, and further improvements to TensorFlow Lite make bringing AI on-device easier than ever. [Blog]
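As a rough sketch of the PyTorch-to-mobile path, the ai-edge-torch preview package converts a torch.nn.Module into a model you can ship with the TensorFlow Lite runtime; exact package and function names may differ as the preview evolves, and ResNet-18 is used purely as an example:

```python
import torch
import torchvision
import ai_edge_torch  # PyTorch-to-TFLite converter from Google AI Edge (preview).

# Any torch.nn.Module can be converted; ResNet-18 with random weights keeps this self-contained.
model = torchvision.models.resnet18(weights=None).eval()
sample_inputs = (torch.randn(1, 3, 224, 224),)

# Convert to a TFLite-backed edge model and export it for on-device deployment.
edge_model = ai_edge_torch.convert(model, sample_inputs)
edge_model.export("resnet18.tflite")
```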
Join the Gemini API Developer Competition and create groundbreaking applications using the Gemini API for a chance to win a 1981 custom electric DeLorean and other exceptional prizes. We're excited to see how your innovative use of the Gemini API will redefine the boundaries of AI and shape a brighter future. Whether your app focuses on making a positive impact, providing practical solutions, or pushing the boundaries of creativity, this is your opportunity to make your mark on the AI landscape. [Blog]
Enabling excellent, AI-enhanced experiences for Android, and boosting developer productivity with powerful APIs, tools, and guidance.
Last year, we introduced Studio Bot as your AI coding companion for Android. Thanks to your feedback, we evolved our models, expanded to over 200 countries and territories, released it to the Stable channel, and brought it into the Gemini ecosystem last month with the introduction of Gemini in Android Studio. It’s designed to make it easier for you to build high-quality Android apps, faster. Later this year, Gemini in Android Studio will support multimodal inputs using Gemini 1.5 Pro. [Blog]
Run Gemini Nano, our most efficient model for on-device tasks, directly on users’ mobile devices, enabling low-latency responses and enhanced data privacy regardless of cellular network coverage. This is made possible by AICore, a system service that manages on-device foundation models and removes the need to manually manage large language model distribution. Both are currently available on the Pixel 8 Pro and Samsung Galaxy S24 series, with support for additional devices coming later this year.
Boost your productivity by sharing your app’s business logic across platforms and taking advantage of new first-class Android support for Kotlin Multiplatform (KMP). Select Jetpack libraries, like DataStore and Room, are now supported, with more coming later this year. [Blog]
Build stunning, adaptive user experiences, optimize performance, create seamless transitions, and embrace Material guidance-driven APIs for layouts that effortlessly adjust across devices. Streamline input handling, including AI-powered stylus handwriting recognition, and build customizable widgets with Jetpack Glance. Test confidently with the Resizable Emulator and Compose UI check mode, and boost widget discoverability with Android 15's generated previews. [Blog]
Your path to better development: a more powerful web, made easier.
Harness the power of on-device AI with WebGPU, WebAssembly, and now — Gemini Nano integration in Chrome desktop to deliver new, built-in AI features. Build across a massive device range with scalability, affordability, and enhanced privacy. Join our early preview program and help shape the future of accessible AI development with new web APIs. [Blog]
Eliminate tedious page loads and enable fast, seamless browsing experiences with the Speculation Rules API, which requires just a few lines of code to implement. The API enables pre-fetching and pre-rendering of pages in the background, so pages load in milliseconds. For further optimization, AI can be used to intelligently predict navigation patterns, maximizing the efficiency of resource preloading.
Unlock smooth, fluid navigation experiences across diverse website architectures, thanks to a significant upgrade now available for multi-page apps in Chrome Canary 126. Combined with Speculation Rules and AI, View Transitions API delivers near-instant, seamless page transitions, redefining the possibilities of web app interactions for all developers.
Take advantage of AI-powered insights within the Chrome DevTools Console. Gemini will provide explanations and solutions to DevTools errors and warnings, significantly streamlining your debugging process.
Build, test, and ship AI-powered, full-stack apps that run well across all the platforms your users need.
Enjoy a streamlined development experience for full-stack, multi-platform, AI-powered apps, now open to everyone without a waitlist. Easily start with pre-loaded templates, import existing projects, or begin from scratch. IDX now includes crucial new integrations like Chrome DevTools, Lighthouse, and Cloud Run for simplified multi-region deployment. [Blog]
Unlock big graphics and app performance improvements with Flutter 3.22 and Dart 3.4. Try Impeller on Android for up to 30% faster rasterization performance. Deliver stunning visuals and efficient AI model execution on the web with support for WASM compilation. Try a new, experimental language feature, Dart Macros, meant to make the Dart developer experience even more productive. [Blog]
Connect your app to a PostgreSQL database on Cloud SQL using Firebase Data Connect. Quickly ship modern web apps with the security and scalability of Google Cloud and streamlined deployments from GitHub with Firebase App Hosting. Try Firebase Genkit to build and monitor production-ready AI features that work out of the box with Gemini and Gemma models. Our collaboration with NVIDIA optimizes Gemma inference on RTX GPUs, so you can run Genkit locally with Ollama and Gemma for increased performance. [Blog]
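Genkit itself is a JavaScript/TypeScript framework, but the local Ollama-plus-Gemma setup it builds on can be exercised directly. As a minimal sketch, assuming Ollama is running locally and a Gemma model has already been pulled (ollama pull gemma):

```python
import json
import urllib.request

# Ollama exposes a local REST endpoint; "gemma" must already be pulled locally.
payload = json.dumps({
    "model": "gemma",
    "prompt": "Write a haiku about on-device inference.",
    "stream": False,
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

# The non-streaming response returns the full completion in the "response" field.
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```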
Simplify your app privacy and compliance workflow with Checks, Google’s AI-powered compliance platform. Checks Code Compliance monitors and detects compliance issues as you’re writing code – helping to ensure the safety and quality of your applications. iOS and Android developers can access Checks today. [Blog]
Bringing together the best of Google’s resources, training, and scale to enhance the developer experience.
Explore new program benefits, like no-cost access to Gemini for developers to learn, search, and chat with Google documentation. If you’re an IDX user, you’ll be able to create 3 additional workspaces, for a total of 5. And if you’re also opted into the Google Cloud Innovators community, you’ll get learning credits for interactive labs on Google Cloud Skills Boost. Sign up for the Google Developer Program today.
We're on a mission to help turn your big ideas and existing projects into reality. Through continued innovation of tools and platforms, let’s build the future together.
Check out all of the Google I/O announcements and updates with 150+ sessions and learning content available on demand starting May 16 at 8 am PT. And the magic of Google I/O continues, so join an I/O Connect or I/O Extended event in a town near you.