Search

105 results

  • SEPT. 29, 2025 / AI

    Gemma explained: EmbeddingGemma Architecture and Recipe

    EmbeddingGemma, built from Gemma 3, transforms text into numerical embeddings for tasks like search and retrieval. It is trained with a Noise-Contrastive Estimation loss, a Global Orthogonal Regularizer, and Geometric Embedding Distillation, while Matryoshka Representation Learning allows flexible embedding dimensions (a minimal truncation sketch follows below). The development recipe includes encoder-decoder training, pre-fine-tuning, fine-tuning, model souping, and quantization-aware training.

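    A minimal sketch of the Matryoshka idea mentioned above, assuming a 768-dimensional output vector; NumPy random vectors stand in for real model output, and smaller embeddings are obtained by keeping a prefix of the full vector and re-normalizing it.

      import numpy as np

      def truncate_embedding(vec: np.ndarray, dim: int) -> np.ndarray:
          # Matryoshka-style truncation: keep the first `dim` values, then L2-normalize.
          prefix = vec[:dim]
          return prefix / np.linalg.norm(prefix)

      # Placeholder 768-d vectors standing in for real embedding output.
      query_emb = np.random.rand(768)
      doc_emb = np.random.rand(768)

      # Cosine similarity at full size vs. a truncated 256-d representation.
      full = float(truncate_embedding(query_emb, 768) @ truncate_embedding(doc_emb, 768))
      small = float(truncate_embedding(query_emb, 256) @ truncate_embedding(doc_emb, 256))
      print(full, small)
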
  • SEPT. 25, 2025 / AI

    Building the Next Generation of Physical Agents with Gemini Robotics-ER 1.5

    Gemini Robotics-ER 1.5, now available to developers, is a state-of-the-art embodied reasoning model for robots. It excels at visual and spatial understanding, task planning, and progress estimation, allowing robots to perform complex, multi-step tasks.

  • SEPT. 24, 2025 / Mobile

    On-device GenAI in Chrome, Chromebook Plus, and Pixel Watch with LiteRT-LM

    Google AI Edge provides the tools to run AI features on-device, and its new LiteRT-LM runtime is a significant leap forward for generative AI. LiteRT-LM is an open-source runtime with a C++ API, cross-platform compatibility, and hardware acceleration, designed to efficiently run large language models like Gemma and Gemini Nano across a vast range of hardware. Its key innovation is a flexible, modular architecture that scales to power complex, multi-task features in Chrome and Chromebook Plus while remaining lean enough for resource-constrained devices like the Pixel Watch. This versatility is already enabling a new wave of on-device generative AI, bringing capabilities like WebAI and smart replies to users.

  • SEPT. 24, 2025 / AI

    Introducing the Data Commons Model Context Protocol (MCP) Server: Streamlining Public Data Access for AI Developers

    Data Commons announces the availability of its MCP Server, a major milestone in making its vast public datasets instantly accessible and actionable for AI developers worldwide (a generic request sketch follows below).

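    For a sense of what an MCP server exposes, clients speak JSON-RPC 2.0 and invoke tools through a tools/call request. The sketch below shows only that generic request shape; the tool name and arguments are hypothetical placeholders, not the actual Data Commons tool schema.

      import json

      # Generic MCP "tools/call" request (JSON-RPC 2.0). The tool name and
      # arguments are hypothetical, not the real Data Commons MCP schema.
      request = {
          "jsonrpc": "2.0",
          "id": 1,
          "method": "tools/call",
          "params": {
              "name": "get_statistics",  # hypothetical tool name
              "arguments": {"place": "California", "variable": "Median_Income"},
          },
      }
      print(json.dumps(request, indent=2))
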
  • SEPT. 16, 2025 / AI

    ADK for Java opening up to third-party language models via LangChain4j integration

    The Agent Development Kit (ADK) for Java 0.2.0 now integrates with LangChain4j, expanding LLM support to include third-party and local models like Gemma and Qwen. This release also enhances tooling with instance-based FunctionTools, improved async support, better loop control, and advanced agent logic with chained callbacks and new memory management.

  • SEPT. 8, 2025 / AI

    Veo 3 and Veo 3 Fast – new pricing, new configurations and better resolution

    Today's Veo updates include support for vertical format (9:16) and 1080p HD outputs, along with new, lower pricing for Veo 3 ($0.40/second) and Veo 3 Fast ($0.15/second); a quick cost calculation follows below. These models are now stable for production use in the Gemini API. The MediaSim demo app showcases how Gemini's multimodal capabilities combine with Veo 3 for media simulations.

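    Applying the quoted per-second rates, output cost is simply duration times price; the 8-second clip length below is an illustrative assumption, not part of the announcement.

      # Per-second output pricing quoted above.
      VEO_3_PER_SEC = 0.40       # USD per second of generated video
      VEO_3_FAST_PER_SEC = 0.15  # USD per second of generated video

      clip_seconds = 8  # illustrative clip length (assumption)

      print(f"Veo 3:      ${VEO_3_PER_SEC * clip_seconds:.2f}")       # $3.20
      print(f"Veo 3 Fast: ${VEO_3_FAST_PER_SEC * clip_seconds:.2f}")  # $1.20
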
  • SEPT. 5, 2025 / Mobile

    Google AI Edge Gallery: Now with audio and on Google Play

    Google AI Edge has expanded the Gemma 3n preview to include audio support. Users can try it on their own mobile phones using the Google AI Edge Gallery, which is now available in Open Beta on the Play Store.

  • SEPT. 4, 2025 / AI

    From Fine-Tuning to Production: A Scalable Embedding Pipeline with Dataflow

    Learn how to use Google's EmbeddingGemma, an efficient open model, with Google Cloud's Dataflow and vector databases like AlloyDB to build scalable, real-time knowledge ingestion pipelines (a minimal pipeline sketch follows below).

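    A minimal Apache Beam sketch of the pipeline shape described above, with a hypothetical embed_text() helper standing in for EmbeddingGemma and a print step standing in for an AlloyDB or vector-store sink; the input path is an assumption.

      import apache_beam as beam

      def embed_text(text: str) -> dict:
          # Hypothetical helper: a real pipeline would run EmbeddingGemma here.
          return {"text": text, "embedding": [0.0] * 768}

      with beam.Pipeline() as pipeline:
          (
              pipeline
              | "ReadDocs" >> beam.io.ReadFromText("gs://your-bucket/docs/*.txt")  # assumed path
              | "Embed" >> beam.Map(embed_text)
              | "WriteToVectorStore" >> beam.Map(print)  # stand-in for an AlloyDB sink
          )
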
  • SEPT. 4, 2025 / Gemma

    Introducing EmbeddingGemma: The Best-in-Class Open Model for On-Device Embeddings

    Introducing EmbeddingGemma: a new embedding model from Google designed for efficient on-device AI applications. This open model is the highest-ranking text-only multilingual embedding model under 500M parameters on the MTEB benchmark, enabling powerful features like RAG and semantic search directly on mobile devices without an internet connection (a small retrieval sketch follows below).

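    A small sketch of the semantic-search pattern described above, ranking documents by cosine similarity; the embed() stub returns random unit vectors in place of real EmbeddingGemma output.

      import numpy as np

      def embed(text: str) -> np.ndarray:
          # Stub for an embedding-model call: returns a deterministic random unit vector.
          rng = np.random.default_rng(len(text))
          v = rng.standard_normal(768)
          return v / np.linalg.norm(v)

      docs = ["How to reset the device", "Battery care tips", "Pairing over Bluetooth"]
      doc_matrix = np.stack([embed(d) for d in docs])  # one row per document

      query_vec = embed("my battery drains quickly")
      scores = doc_matrix @ query_vec  # cosine similarities (all vectors are unit-norm)
      print(docs[int(np.argmax(scores))])
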
  • AUG. 27, 2025 / Gemini

    Beyond the terminal: Gemini CLI comes to Zed

    Google and Zed have partnered to integrate Gemini CLI directly into the Zed code editor, bringing AI capabilities to developers right where they code. The integration enables faster, more focused work through in-place code generation, instant answers, and natural chat inside the editor, with a seamless review workflow for AI-generated changes.
