JULY 11, 2024 / Gemma
Google has developed a number of technologies you can use to start exploring the potential of generative AI for processing data that needs to stay private.
MAY 7, 2024 / Kaggle
Now you can publish your fine-tuned models directly from the Keras API to either Kaggle or Hugging Face.
MAY 20, 2025 / AI Edge
Google AI Edge advancements include new Gemma 3 models, broader model support, and features like on-device RAG and Function Calling to enhance on-device generative AI capabilities.
OCT. 22, 2024 / Gemma
KerasHub is a new unified library for pretrained models, fostering a more cohesive ecosystem for developers.
DEC. 6, 2024 / Gemini
The Gemini family of models has expanded in the past year in response to developer needs, introducing faster and more cost-effective models and enhancing tools in Google AI Studio.
JUNE 24, 2025 / Kaggle
KerasHub enables users to mix and match model architectures and weights across different machine learning frameworks, allowing checkpoints from sources like Hugging Face Hub (including those created with PyTorch) to be loaded into Keras models for use with JAX, PyTorch, or TensorFlow. This flexibility means you can leverage a vast array of community fine-tuned models while maintaining full control over your chosen backend framework.
MAY 14, 2024 / Mobile
Thank you for joining us at this year's Google I/O. Check out all of the Google I/O announcements and updates, with 150+ sessions and learning content available on demand.
JUNE 27, 2024 / Mobile
Here are some highlights from what we announced both on stage in Berlin and elsewhere in the world in celebration of this event!
MAY 20, 2025 / Android
Top announcements from Google I/O 2025 focus on building across Google platforms and innovating with AI models from Google DeepMind, highlighting new tools, APIs, and features designed to enhance developer productivity and create AI-powered experiences with Gemini, Android, Firebase, and the web.
DEC. 5, 2024 / Gemma
PaliGemma 2, the next evolution in tunable vision-language models, comes with new features such as scalable performance, long captioning, and expanded capabilities. Get started with pre-trained models, documentation, and tutorials.