112 results
  • SEPT. 24, 2025 / Mobile

    On-device GenAI in Chrome, Chromebook Plus, and Pixel Watch with LiteRT-LM

    Google AI Edge provides the tools to run AI features on-device, and its new LiteRT-LM runtime is a significant leap forward for generative AI. LiteRT-LM combines an open-source C++ API, cross-platform compatibility, and hardware acceleration to run large language models like Gemma and Gemini Nano efficiently across a vast range of hardware. Its key innovation is a flexible, modular architecture that can scale to power complex, multi-task features in Chrome and Chromebook Plus while remaining lean enough for resource-constrained devices like the Pixel Watch. This versatility is already enabling a new wave of on-device generative AI, bringing capabilities like WebAI and smart replies to users.

  • SEPT. 16, 2025 / AI

    ADK for Java opening up to third-party language models via LangChain4j integration

    The Agent Development Kit (ADK) for Java 0.2.0 now integrates with LangChain4j, expanding LLM support to include third-party and local models like Gemma and Qwen. This release also enhances tooling with instance-based FunctionTools, improved async support, better loop control, and advanced agent logic with chained callbacks and new memory management.

  • SEPT. 9, 2025 / Mobile

    Google AI Edge Gallery: Now with audio and on Google Play

    Google AI Edge has expanded the Gemma 3n preview to include audio support. Users can try it on their own mobile phones using the Google AI Edge Gallery, which is now available in open beta on the Play Store.

  • SEPT. 8, 2025 / AI

    Veo 3 and Veo 3 Fast – new pricing, new configurations and better resolution

    Today's Veo updates include support for vertical format (9:16) and 1080p HD outputs, along with new, lower pricing for Veo 3 ($0.40/second) and Veo 3 Fast ($0.15/second). These models are now stable for production use in the Gemini API. The MediaSim demo app showcases how Gemini's multimodal capabilities combine with Veo 3 for media simulations.

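    For illustration, requesting these new output options through the Gemini API might look roughly like the Python sketch below, using the google-genai SDK. The model ID, the resolution field, and the result-access path are assumptions based on the SDK's video-generation interface, not details from the post.

        # Hedged sketch: a vertical 1080p clip with Veo 3 via the google-genai SDK.
        # The model ID and config fields below are assumptions, not confirmed above.
        import time

        from google import genai
        from google.genai import types

        client = genai.Client()  # expects GEMINI_API_KEY in the environment

        operation = client.models.generate_videos(
            model="veo-3.0-generate-001",  # assumed model ID
            prompt="A timelapse of clouds rolling over a mountain range",
            config=types.GenerateVideosConfig(
                aspect_ratio="9:16",   # new vertical format
                resolution="1080p",    # assumed field for the new HD output
            ),
        )

        # Video generation is a long-running operation; poll until it finishes.
        while not operation.done:
            time.sleep(10)
            operation = client.operations.get(operation)

        # Assumed result shape: the completed operation exposes the generated clip.
        print(operation.response.generated_videos[0].video.uri)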
  • SEPT. 4, 2025 / Gemma

    Introducing EmbeddingGemma: The Best-in-Class Open Model for On-Device Embeddings

    Introducing EmbeddingGemma: a new open embedding model from Google designed for efficient on-device AI applications. It is the highest-ranking text-only multilingual embedding model under 500M parameters on the MTEB benchmark, enabling powerful features like RAG and semantic search directly on mobile devices without an internet connection.

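    As a rough illustration of the kind of on-device semantic search this enables, the sketch below uses the sentence-transformers library; the Hugging Face model ID is an assumption rather than something stated in the summary above.

        # Hedged sketch: semantic search with a small embedding model via
        # sentence-transformers. The model ID below is an assumption.
        from sentence_transformers import SentenceTransformer, util

        model = SentenceTransformer("google/embeddinggemma-300m")  # assumed model ID

        documents = [
            "How to pair the watch with a new phone",
            "Battery care tips for long trips",
            "Resetting the device to factory settings",
        ]
        query = "my watch will not connect to my phone"

        doc_embeddings = model.encode(documents)
        query_embedding = model.encode(query)

        # Rank documents by cosine similarity to the query.
        scores = util.cos_sim(query_embedding, doc_embeddings)[0]
        best = int(scores.argmax())
        print(f"Best match: {documents[best]} (score={scores[best].item():.3f})")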
  • SEPT. 4, 2025 / AI

    From Fine-Tuning to Production: A Scalable Embedding Pipeline with Dataflow

    Learn how to use Google's EmbeddingGemma, an efficient open model, with Google Cloud's Dataflow and vector databases like AlloyDB to build scalable, real-time knowledge ingestion pipelines.

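    A minimal sketch of such an ingestion pipeline, assuming Apache Beam's MLTransform with a Hugging Face sentence-transformers embedder; the model ID, artifact location, and stand-in sink are placeholders rather than details from the post, which pairs the pipeline with an AlloyDB vector sink.

        # Hedged sketch: computing embeddings inside a Beam/Dataflow pipeline with
        # MLTransform. Model ID, artifact path, and the print() sink are placeholders.
        import apache_beam as beam
        from apache_beam.ml.transforms.base import MLTransform
        from apache_beam.ml.transforms.embeddings.huggingface import SentenceTransformerEmbeddings

        documents = [
            {"id": "doc-1", "text": "Dataflow runs Apache Beam pipelines at scale."},
            {"id": "doc-2", "text": "Vector databases store embeddings for retrieval."},
        ]

        with beam.Pipeline() as pipeline:
            _ = (
                pipeline
                | "CreateDocs" >> beam.Create(documents)
                | "Embed" >> MLTransform(
                    write_artifact_location="gs://my-bucket/ml-artifacts"  # placeholder path
                ).with_transform(
                    SentenceTransformerEmbeddings(
                        model_name="google/embeddinggemma-300m",  # assumed model ID
                        columns=["text"],
                    )
                )
                | "WriteToVectorStore" >> beam.Map(print)  # stand-in for an AlloyDB sink
            )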
  • AUG. 27, 2025 / Google Labs

    Stop “vibe testing” your LLMs. It's time for real evals.

    Stax, an experimental developer tool, replaces insufficient "vibe testing" of LLMs with a streamlined evaluation lifecycle, letting users rigorously test their AI stack and make data-driven decisions through human labeling and scalable LLM-as-a-judge auto-raters.

  • AUG. 27, 2025 / Gemini

    Beyond the terminal: Gemini CLI comes to Zed

    Google and Zed have partnered to integrate Gemini CLI directly into the Zed code editor, bringing AI capabilities straight into the editor for developers. The integration enables faster, more focused coding with in-place code generation, instant answers, and natural chat, plus a seamless review workflow for AI-generated changes.

  • AUG. 18, 2025 / Gemini

    URL context tool for Gemini API now generally available

    The Gemini API's URL Context tool is now generally available, allowing developers to ground prompts using web content instead of manual uploads. This release expands support to PDFs and images.

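    As a quick illustration, calling the URL context tool from the Python google-genai SDK looks roughly like the sketch below; the model name and URL are placeholders.

        # Hedged sketch: grounding a prompt on a web page with the Gemini API's
        # URL context tool via the google-genai SDK. Model name and URL are placeholders.
        from google import genai
        from google.genai import types

        client = genai.Client()  # reads the API key from the environment

        response = client.models.generate_content(
            model="gemini-2.5-flash",  # placeholder model name
            contents="Summarize the key changes described at https://example.com/release-notes",
            config=types.GenerateContentConfig(
                tools=[types.Tool(url_context=types.UrlContext())],
            ),
        )

        print(response.text)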
  • JULY 24, 2025 / Google Labs

    Introducing Opal: describe, create, and share your AI mini-apps

    Opal is a new experimental tool from Google Labs that helps you compose prompts into dynamic, multi-step mini-apps using natural language, with no code required. Users can build and deploy shareable AI apps with powerful features and seamless integration with existing Google tools.
