  • SEPT. 4, 2025 / Gemma

    Introducing EmbeddingGemma: The Best-in-Class Open Model for On-Device Embeddings

    Introducing EmbeddingGemma: a new embedding model from Google designed for efficient on-device AI applications. This open model is the highest-ranking text-only multilingual embedding model under 500M parameters on the MTEB benchmark, enabling powerful features like RAG and semantic search directly on mobile devices without an internet connection.

  • AUG. 27, 2025 / Gemini

    Beyond the terminal: Gemini CLI comes to Zed

    Google and Zed have partnered to integrate Gemini CLI directly into the Zed code editor, bringing AI capabilities to developers where they work. The integration enables faster, more focused coding, with in-place code generation, instant answers, natural-language chat, and a seamless review workflow for AI-generated changes.

  • AUG. 27, 2025 / Google Labs

    Stop “vibe testing” your LLMs. It's time for real evals.

    Stax, an experimental developer tool, moves LLM evaluation beyond ad-hoc "vibe testing" by streamlining the evaluation lifecycle, letting users rigorously test their AI stack and make data-driven decisions with human labeling and scalable LLM-as-a-judge auto-raters.

  • AUG. 18, 2025 / Gemini

    URL context tool for Gemini API now generally available

    The Gemini API's URL Context tool is now generally available, allowing developers to ground prompts using web content instead of manual uploads. This release expands support to PDFs and images.

  • JULY 24, 2025 / Google Labs

    Introducing Opal: describe, create, and share your AI mini-apps

    Opal is a new experimental tool from Google Labs that helps you compose prompts into dynamic, multi-step mini-apps using natural language instead of code. Users can build and deploy shareable AI apps with powerful features and seamless integration with existing Google tools.

  • JULY 23, 2025 / Firebase

    Unleashing new AI capabilities for popular frameworks in Firebase Studio

    New AI capabilities for popular frameworks in Firebase Studio include AI-optimized templates, streamlined integration with Firebase backend services, and the ability to fork workspaces for experimentation and collaboration, making AI-assisted app development faster and more intuitive for developers worldwide.

  • JULY 22, 2025 / Gemini

    Gemini 2.5 Flash-Lite is now stable and generally available

    Gemini 2.5 Flash-Lite, previously in preview, is now stable and generally available. This cost-efficient model is ~1.5x faster than 2.0 Flash-Lite and 2.0 Flash, offers high quality, and includes 2.5 family features like a 1 million-token context window and multimodality.

  • JULY 21, 2025 / Gemini

    Conversational image segmentation with Gemini 2.5

    Gemini 2.5's conversational image segmentation enables intuitive interaction with visual data by understanding complex phrases, conditional logic, and abstract concepts. This streamlines the developer experience and opens the door to new applications in media editing, safety monitoring, and damage assessment.

  • JULY 17, 2025 / Gemini

    Build with Veo 3, now available in the Gemini API

    Veo 3, Google’s latest AI video generation model, is now available in paid preview via the Gemini API and Google AI Studio. Unveiled at Google I/O 2025, Veo 3 can generate both video and synchronized audio, including dialogue, background sounds, and even animal noises. This model delivers realistic visuals, natural lighting, and physics, with accurate lip syncing and sound that matches on-screen action.

  • JULY 9, 2025 / Gemma

    T5Gemma: A new collection of encoder-decoder Gemma models

    T5Gemma is a new family of encoder-decoder LLMs developed by converting and adapting pretrained decoder-only models based on the Gemma 2 framework, offering superior performance and efficiency compared to its decoder-only counterparts, particularly for tasks requiring deep input understanding, like summarization and translation.
