  • OCT. 2, 2025 / AI

    Gemini 2.5 Flash Image now ready for production with new aspect ratios

Our state-of-the-art image generation and editing model, which has captured the imagination of the wo...

  • OCT. 1, 2025 / AI

    Unlocking Multi-Spectral Data with Gemini

    Multi-spectral imagery, which captures wavelengths beyond human vision, offers a "superhuman" way to understand the world, and Google's Gemini models make this accessible without specialized training. By mapping invisible bands to RGB channels and providing context in the prompt, developers can leverage Gemini's power for tasks like environmental monitoring and agriculture.

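The band-to-RGB mapping the teaser describes can be sketched in a few lines. This is an illustrative NumPy example, not code from the post; the particular band assignment (NIR, SWIR, and red into the R, G, and B channels) is one common false-color convention, chosen here only as an assumption.

```python
import numpy as np

def bands_to_rgb(nir, swir, red):
    """Map invisible spectral bands into a false-color RGB image.

    Each band is min-max normalized to [0, 1] independently, then
    stacked into R, G, B channels so a vision model (or a human) can
    "see" the non-visible wavelengths.
    """
    def normalize(band):
        band = band.astype(np.float64)
        lo, hi = band.min(), band.max()
        return (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)

    return np.stack([normalize(nir), normalize(swir), normalize(red)], axis=-1)

# Tiny synthetic 2x2 "scene": vegetation reflects strongly in NIR.
nir  = np.array([[0.8, 0.1], [0.7, 0.2]])
swir = np.array([[0.3, 0.4], [0.2, 0.5]])
red  = np.array([[0.1, 0.6], [0.1, 0.7]])
rgb = bands_to_rgb(nir, swir, red)
print(rgb.shape)  # (2, 2, 3)
```

The resulting array can be saved as an ordinary image and passed to the model along with prompt context explaining which band each channel holds.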
  • OCT. 1, 2025 / AI

    Gemini for Home: Expanding the Platform for a New Era of Smart Home AI

    Google Home is enabling new Gemini-powered features for our partners’ devices and launching a new program to help them build the next generation of AI cameras.

  • SEPT. 30, 2025 / AI

    Introducing Tunix: A JAX-Native Library for LLM Post-Training

    Tunix is a new JAX-native, open-source library for LLM post-training. It offers comprehensive tools for aligning models at scale, including SFT, preference tuning (DPO), advanced RL methods (PPO, GRPO, GSPO), and knowledge distillation. Designed for TPUs and seamless JAX integration, Tunix emphasizes developer control and shows a 12% relative improvement in pass@1 accuracy on GSM8K.

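Of the alignment methods listed, DPO has the simplest objective. The following is a minimal NumPy sketch of that standard loss for intuition only; it is not Tunix's API, and the log-probability inputs are illustrative scalars rather than real model outputs.

```python
import numpy as np

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization loss.

    Pushes the policy to widen the log-probability margin of the chosen
    response over the rejected one, measured relative to a frozen
    reference model and scaled by beta.
    """
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    # -log(sigmoid(beta * margin)), written stably as log1p(exp(-x))
    return np.log1p(np.exp(-beta * margin))

# Policy already prefers the chosen response more than the reference does:
low = dpo_loss(logp_chosen=-2.0, logp_rejected=-5.0,
               ref_logp_chosen=-3.0, ref_logp_rejected=-4.0)
# Policy prefers the rejected response: loss is higher.
high = dpo_loss(logp_chosen=-5.0, logp_rejected=-2.0,
                ref_logp_chosen=-4.0, ref_logp_rejected=-3.0)
print(low < high)  # True
```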
  • SEPT. 29, 2025 / AI

    Gemma explained: EmbeddingGemma Architecture and Recipe

    EmbeddingGemma, built from Gemma 3, transforms text into numerical embeddings for tasks like search and retrieval. It learns through Noise-Contrastive Estimation, Global Orthogonal Regularizer, and Geometric Embedding Distillation. Matryoshka Representation Learning allows flexible embedding dimensions. The development recipe includes encoder-decoder training, pre-fine-tuning, fine-tuning, model souping, and quantization-aware training.

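The "flexible embedding dimensions" that Matryoshka Representation Learning enables amount to truncating a full embedding to a prefix and re-normalizing. A minimal sketch, assuming unit-normalized vectors; the 768 and 128 dimensions here are illustrative, not taken from the post:

```python
import numpy as np

def truncate_embedding(vec, dim):
    """Keep the first `dim` coordinates of a Matryoshka-style embedding
    and re-normalize to unit length so cosine similarity still works."""
    head = vec[:dim]
    norm = np.linalg.norm(head)
    return head / norm if norm > 0 else head

rng = np.random.default_rng(0)
full = rng.normal(size=768)
full /= np.linalg.norm(full)  # pretend this is a model's embedding

small = truncate_embedding(full, 128)
print(small.shape)  # (128,)
```

Because MRL trains the prefixes to be useful on their own, the truncated vector trades some retrieval quality for a proportionally smaller index and faster search.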
  • SEPT. 26, 2025 / AI

    Delight users by combining ADK Agents with Fancy Frontends using AG-UI

    The ADK and AG-UI integration enables developers to build interactive AI applications by combining a powerful backend (ADK) with a flexible frontend protocol (AG-UI). This unlocks features like Generative UI, Shared State, Human-in-the-Loop, and Frontend Tools, allowing for seamless collaboration between AI and human users.

  • SEPT. 26, 2025 / AI

    Apigee Operator for Kubernetes and GKE Inference Gateway integration for Auth and AI/LLM policies

    The GKE Inference Gateway now integrates with Apigee, allowing enterprises to unify AI serving and API governance. This enables GKE users to leverage Apigee's API management, security, and monetization features for their AI workloads, including API keys, quotas, rate limiting, and Model Armor security.

  • SEPT. 26, 2025 / AI

    Your AI is now a local expert: Grounding with Google Maps is now GA

    Grounding with Google Maps in Vertex AI is now generally available, helping developers build factual and reliable generative AI applications connected to real-world, up-to-date information from Google Maps. This unlocks better, more personal results and is useful across industries like travel, real estate, devices, and social media.

  • SEPT. 25, 2025 / AI

    Continuing to bring you our latest models, with an improved Gemini 2.5 Flash and Flash-Lite release

    Google is releasing updated Gemini 2.5 Flash and Flash-Lite preview models with improved quality, speed, and efficiency. These releases introduce a "-latest" alias for easy access to the newest versions, allowing developers to test and provide feedback to shape future stable releases.

  • SEPT. 25, 2025 / AI

    Building the Next Generation of Physical Agents with Gemini Robotics-ER 1.5

Gemini Robotics-ER 1.5, now available to developers, is a state-of-the-art embodied reasoning model for robots. It excels at visual and spatial understanding, task planning, and progress estimation, allowing robots to perform complex, multi-step tasks.
