One year ago, we introduced the world to Gemini, our family of frontier multimodal models that set a high bar, achieving state-of-the-art results on major AI benchmarks. In the past 12 months, we've collaborated with ML experts and developers to build amazing things with our AI models. Today, on the first anniversary of Gemini's launch, we’re taking a minute to reflect on the progress we’ve made together.
Millions of developers around the world are using Google AI Studio and the Gemini API to innovate—launching groundbreaking new applications and enhancing existing ones with powerful AI capabilities.
We’re particularly inspired by the ways you’re making AI helpful for your own users. Thousands of you entered our Gemini API Developer Competition and built impactful, creative, and useful apps, like the grand prize-winning Jayu personal assistant and the people's choice award winner Vite Vere, an app that helps people achieve greater independence.
The evolution of Gemini is a direct result of your feedback and the applications you’ve built. While Gemini 1.5 Pro delivered impressive performance and a long context window, we recognized your need for a faster, more cost-effective option for your apps. That’s why we introduced Gemini 1.5 Flash, which has rapidly become our most popular model. We’re also accelerating our learning by continuously releasing experimental models to understand what serves you best. And your response to Gemini Nano’s on-device capabilities has been overwhelmingly positive, with thousands joining our Chrome hackathon and Android preview.
To empower developers, we've improved both our models and our tools. The Gemini API is now even more powerful with the addition of function calling and search grounding. Plus, Google AI Studio now supports a wider range of models, starting with the ability to quickly evaluate Gemma open models, and we’ll be adding more soon.
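To give a feel for how function calling fits together, here is a minimal sketch: you declare a function the model may invoke, and when the model responds with a function call instead of text, you route it to your own code. The `get_order_status` function, its toy data, and the `dispatch` helper are illustrative assumptions, not part of the Gemini API itself; the declaration below loosely follows the REST-style tool schema.

```python
def get_order_status(order_id: str) -> dict:
    """Toy local function the model can ask the app to call."""
    statuses = {"A100": "shipped", "B200": "processing"}
    return {"order_id": order_id, "status": statuses.get(order_id, "unknown")}

# Tool declaration sent alongside the prompt. Given a matching question,
# the model can reply with a structured function call (name + arguments)
# rather than free text.
ORDER_TOOL = {
    "function_declarations": [{
        "name": "get_order_status",
        "description": "Look up the shipping status of an order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    }]
}

def dispatch(function_call: dict) -> dict:
    """Route the model's function call back to local code,
    then return the result to the model as the next turn."""
    handlers = {"get_order_status": get_order_status}
    return handlers[function_call["name"]](**function_call["args"])
```

In a real app, you would pass the tool declaration to the Gemini API with your request, run `dispatch` on any function call in the response, and send the returned dict back to the model so it can compose its final answer.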
We're committed to making powerful AI accessible to everyone. That's why we launched Gemma, a family of open models built on the same foundation as Gemini. Gemma gives you the freedom to customize models with your own data and the flexibility to run them on your own hardware, thanks to a range of accessible size options. Gemma 2, available in 2B, 9B, and 27B parameter versions, offers impressive performance, even outperforming larger models, while remaining widely accessible—the 2B model even runs on mobile devices! We're constantly working to make Gemma models even more useful, releasing new innovative research models like DataGemma, GemmaScope, and most recently, PaliGemma 2.
Models like Navarasa, a community Gemma variant tuned on 9 Indic languages, are giving developers access to language-specific models for their users.
It's heartening to see the vibrant community that's grown around Gemma language models, with over 50,000 model variations now available on Hugging Face. This collaborative spirit is driving innovation across industries, and it's particularly exciting to see how Gemma is helping break down language barriers. The Gemma tokenizer makes it possible to fine-tune the model for the world’s languages, enabling the Kaggle community to create Gemma models to foster global understanding.
In addition to providing direct access to Gemini and Gemma models, we integrated Gemini throughout our developer tools to help you be more productive and build better apps. Android Studio, Chrome DevTools, Colab, Firebase, Google Cloud, and IDX all use Gemini models in a variety of ways to help you write high-quality code, answer questions, and more. Additionally, Gemini is available in your favorite IDE with Gemini Code Assist, including now in GitHub Copilot.
We're expanding the role of AI beyond code assistance to support the entire development lifecycle. For instance, Gemini in Android Studio can automate routine tasks, generate boilerplate code, and even predict potential bugs.
The future of AI in software development is incredibly bright, with more innovation to come in the days, weeks and months ahead. Our experimental data science agent, showcased at I/O this year, offers a glimpse into this future and we're actively working to bring more to you.
We're committed to pushing the boundaries of what's possible with AI, and we have some exciting developments in the pipeline that we can't wait to share with you soon.