  • OCT. 1, 2025 / AI

    Gemini for Home: Expanding the Platform for a New Era of Smart Home AI

    Google Home is enabling new Gemini-powered features for our partners’ devices and launching a new program to help them build the next generation of AI cameras.

  • SEPT. 30, 2025 / AI

    Introducing Tunix: A JAX-Native Library for LLM Post-Training

    Tunix is a new JAX-native, open-source library for LLM post-training. It offers comprehensive tools for aligning models at scale, including SFT, preference tuning (DPO), advanced RL methods (PPO, GRPO, GSPO), and knowledge distillation. Designed for TPUs and seamless JAX integration, Tunix emphasizes developer control and shows a 12% relative improvement in pass@1 accuracy on GSM8K.

  • SEPT. 29, 2025 / AI

    Gemma explained: EmbeddingGemma Architecture and Recipe

    EmbeddingGemma, built from Gemma 3, transforms text into numerical embeddings for tasks like search and retrieval. It learns through Noise-Contrastive Estimation, Global Orthogonal Regularizer, and Geometric Embedding Distillation. Matryoshka Representation Learning allows flexible embedding dimensions. The development recipe includes encoder-decoder training, pre-fine-tuning, fine-tuning, model souping, and quantization-aware training.

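    As a rough illustration of the Matryoshka property mentioned in this summary (a sketch, not code from the post; the 768- and 256-dimension sizes and the random vector are assumptions), a Matryoshka-trained embedding can be truncated to a shorter prefix and re-normalized while remaining usable for cosine-similarity search:

    ```python
    import numpy as np

    def truncate_embedding(full_embedding: np.ndarray, target_dim: int) -> np.ndarray:
        """Keep the first `target_dim` values of a Matryoshka-style embedding
        and re-normalize so cosine similarity still behaves sensibly."""
        truncated = full_embedding[:target_dim]
        return truncated / np.linalg.norm(truncated)

    # Hypothetical 768-dimensional embedding standing in for real model output.
    full = np.random.default_rng(0).normal(size=768)
    full = full / np.linalg.norm(full)

    small = truncate_embedding(full, 256)  # cheaper to store and search
    print(small.shape)                     # (256,)
    ```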
  • SEPT. 26, 2025 / AI

    Apigee Operator for Kubernetes and GKE Inference Gateway integration for Auth and AI/LLM policies

    The GKE Inference Gateway now integrates with Apigee, allowing enterprises to unify AI serving and API governance. This enables GKE users to leverage Apigee's API management, security, and monetization features for their AI workloads, including API keys, quotas, rate limiting, and Model Armor security.

  • SEPT. 26, 2025 / AI

    Delight users by combining ADK Agents with Fancy Frontends using AG-UI

    The ADK and AG-UI integration enables developers to build interactive AI applications by combining a powerful backend (ADK) with a flexible frontend protocol (AG-UI). This unlocks features like Generative UI, Shared State, Human-in-the-Loop, and Frontend Tools, allowing for seamless collaboration between AI and human users.

  • SEPT. 26, 2025 / AI

    Your AI is now a local expert: Grounding with Google Maps is now GA

    Grounding with Google Maps in Vertex AI is now generally available, helping developers build factual and reliable generative AI applications connected to real-world, up-to-date information from Google Maps. This unlocks better, more personal results and is useful across industries like travel, real estate, devices, and social media.

  • SEPT. 25, 2025 / AI

    Building the Next Generation of Physical Agents with Gemini Robotics-ER 1.5

    Gemini Robotics-ER 1.5, now available to developers, is a state-of-the-art embodied reasoning model for robots. It excels at visual and spatial understanding, task planning, and progress estimation, allowing robots to perform complex, multi-step tasks.

  • SEPT. 25, 2025 / AI

    Continuing to bring you our latest models, with an improved Gemini 2.5 Flash and Flash-Lite release

    Google is releasing updated Gemini 2.5 Flash and Flash-Lite preview models with improved quality, speed, and efficiency. These releases introduce a "-latest" alias for easy access to the newest versions, allowing developers to test and provide feedback to shape future stable releases.

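    To illustrate what a "-latest" alias enables (a sketch under assumptions, not code from the announcement; the exact alias string shown is assumed), a request can target the rolling alias instead of a pinned, dated model version using the google-genai Python SDK:

    ```python
    from google import genai

    # Assumes an API key is available in the environment (e.g. GEMINI_API_KEY).
    client = genai.Client()

    # Hypothetical "-latest" alias: resolves to the newest preview of the
    # model family rather than a pinned version string.
    response = client.models.generate_content(
        model="gemini-flash-latest",
        contents="Summarize retrieval-augmented generation in one sentence.",
    )
    print(response.text)
    ```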
  • SEPT. 24, 2025 / Cloud

    Google Colab Adds More Back to School Improvements!

    The new Colab features simplify and enhance notebook-based class materials. Instructors can now freeze runtime versions and seamlessly present and copy notebooks. This improves reproducibility, stability, and shareability, making Colab ideal for researchers, educators, and developers. Slideshow mode and URL linking are also enhanced.

  • SEPT. 24, 2025 / Mobile

    On-device GenAI in Chrome, Chromebook Plus, and Pixel Watch with LiteRT-LM

    Google AI Edge provides the tools to run AI features on-device, and its new LiteRT-LM runtime is a significant leap forward for generative AI. LiteRT-LM combines an open-source C++ API, cross-platform compatibility, and hardware acceleration to efficiently run large language models like Gemma and Gemini Nano across a vast range of hardware. Its key innovation is a flexible, modular architecture that can scale to power complex, multi-task features in Chrome and Chromebook Plus, while also being lean enough for resource-constrained devices like the Pixel Watch. This versatility is already enabling a new wave of on-device generative AI, bringing capabilities like WebAI and smart replies to users.
