Introducing EmbeddingGemma: The Best-in-Class Open Model for On-Device Embeddings

SEPT. 4, 2025
Min Choi, Product Manager, Google DeepMind
Sahil Dua, Lead Research Engineer, Google DeepMind

We're excited to introduce EmbeddingGemma, a new open embedding model that delivers best-in-class performance for its size. Designed specifically for on-device AI, its highly efficient 308 million parameter design enables you to build applications using techniques such as Retrieval Augmented Generation (RAG) and semantic search that run directly on your hardware. It delivers private, high-quality embeddings that work anywhere, even without an internet connection.

MTEB Score: EmbeddingGemma is comparable to popular models nearly twice its size.

EmbeddingGemma is:

  • Best in class: The highest-ranking open multilingual text embedding model under 500M parameters on the Massive Text Embedding Benchmark (MTEB). Based on the Gemma 3 architecture, EmbeddingGemma is trained on 100+ languages and is small enough to run in less than 200MB of RAM with quantization.

  • Built for flexible, offline use: Small, fast, and efficient, it offers customizable output dimensions (from 768 down to 128 via Matryoshka representation) and a 2K token context window, so it can run on everyday devices like mobile phones, laptops, and desktops. Designed to work with Gemma 3n, it unlocks new use cases for mobile RAG pipelines, semantic search, and more.


How EmbeddingGemma enables mobile-first RAG pipelines

EmbeddingGemma generates embeddings: numerical representations of text (such as sentences and documents) that capture meaning as a vector of numbers in a high-dimensional space. The better the embeddings, the better the representation of language, with all its nuances and complexities.

When building a RAG pipeline, there are two key stages: retrieving relevant context based on a user’s input and generating answers grounded in that context. To perform the retrieval, you generate the embedding of the user’s prompt and calculate its similarity with the embeddings of all the documents on your system. This gives you the passages most relevant to the user’s query. These passages can then be passed to a generative model, such as Gemma 3, alongside the original query to generate a contextually relevant answer, such as understanding that you need your carpenter's number for help with damaged floorboards.
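
To make this concrete, here is a rough sketch of that retrieval step using the sentence-transformers library; the model ID, documents, and query below are illustrative placeholders rather than an official recipe.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed model ID for illustration; verify against the official model card.
model = SentenceTransformer("google/embeddinggemma-300m")

documents = [
    "Invoice from Joe the carpenter, phone 555-0123, for floorboard repairs.",
    "Reminder: dentist appointment on Tuesday at 10am.",
    "Grocery list: milk, eggs, sourdough bread.",
]
query = "Who can I call to fix my damaged floorboards?"

# Embed the documents and the query, then rank documents by cosine similarity.
doc_embeddings = model.encode(documents)
query_embedding = model.encode(query)
scores = util.cos_sim(query_embedding, doc_embeddings)[0]

best = int(scores.argmax())
print(f"Most relevant passage: {documents[best]!r} (score={float(scores[best]):.3f})")

# The top passages would then be passed to a generative model such as Gemma 3,
# together with the original query, to produce a grounded answer.
```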

For this RAG pipeline to be effective, the quality of the initial retrieval step is critical. Poor embeddings will retrieve irrelevant documents, leading to inaccurate or nonsensical answers. This is where EmbeddingGemma's performance shines, providing the high-quality representations needed to power accurate and reliable on-device applications.


State-of-the-art quality for its size

EmbeddingGemma delivers state-of-the-art text understanding for its size, with particularly strong performance on multilingual embedding generation.

See how EmbeddingGemma compares to other popular embedding models:

MTEB Multilingual v2: At a compact 308M parameters, EmbeddingGemma is strong at tasks like retrieval, classification, and clustering when compared to similarly sized, popular embedding models.

Small, fast, and efficient

The 308M parameter model is composed of roughly 100M model parameters and 200M embedding parameters. It’s engineered for performance and minimal resource consumption.

  • For ultimate flexibility, EmbeddingGemma leverages Matryoshka Representation Learning (MRL) to provide multiple embedding sizes from one model. Developers can use the full 768-dimension vector for maximum quality or truncate it to smaller dimensions (128, 256, or 512) for increased speed and lower storage costs, as shown in the sketch after this list.

  • We've pushed the boundaries of speed with <15ms embedding inference time (256 input tokens) on EdgeTPU, meaning your AI features can deliver real-time responses for fluid and immediate interactions.

  • Leveraging Quantization-Aware Training (QAT), we significantly reduce RAM usage to under 200MB while preserving the model’s quality.
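
Here is a minimal sketch of how MRL truncation might look in practice, assuming a sentence-transformers setup; the model ID is an assumption, so check the official model card for the exact name. The full vector is simply cut to its leading dimensions and re-normalized before computing similarities.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed model ID for illustration; verify against the official model card.
model = SentenceTransformer("google/embeddinggemma-300m")

full = model.encode("Where is the nearest coffee shop?")
print(full.shape)  # (768,): the full-quality embedding

# Matryoshka truncation: keep the leading 256 dimensions, then re-normalize
# so cosine similarity still behaves as expected.
truncated = full[:256]
truncated = truncated / np.linalg.norm(truncated)
print(truncated.shape)  # (256,): smaller, faster to search and cheaper to store
```

Recent versions of sentence-transformers also expose a truncate_dim option that applies this kind of truncation automatically at load time.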


Offline by design

EmbeddingGemma empowers developers to build on-device, flexible, and privacy-centric applications. It generates embeddings of documents directly on the device's hardware, helping ensure sensitive user data is secure. It uses the same tokenizer as Gemma 3n for text processing, reducing the memory footprint in RAG applications. Unlock new capabilities with EmbeddingGemma, such as:

  • Searching across your personal files, texts, emails, and notifications simultaneously, without an internet connection.

  • Building personalized, industry-specific, offline-enabled chatbots through RAG with Gemma 3n.

  • Classifying user queries into relevant function calls to improve mobile agent understanding (see the sketch below).
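
As a sketch of that last idea, one simple approach is to embed a short description of each available function once, then route each incoming query to the function whose description it most resembles; the function names and descriptions below are purely illustrative.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed model ID for illustration; verify against the official model card.
model = SentenceTransformer("google/embeddinggemma-300m")

# Illustrative function catalog for an on-device agent.
functions = {
    "set_alarm": "Set or change an alarm for a given time.",
    "send_message": "Send a text message to a contact.",
    "play_music": "Play a song, album, or playlist.",
}

names = list(functions)
description_embeddings = model.encode(list(functions.values()))

query = "Wake me up at 7 tomorrow"
query_embedding = model.encode(query)

# Pick the function whose description is most similar to the query.
scores = util.cos_sim(query_embedding, description_embeddings)[0]
print("Routing to:", names[int(scores.argmax())])  # expected: set_alarm
```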


And if these examples don’t cover your use case, fine-tune EmbeddingGemma for a specific domain, task, or language with our quickstart notebook.
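
For a sense of what that involves, a domain-specific fine-tune might look roughly like this with sentence-transformers; the model ID and the two training pairs are placeholders, and the quickstart notebook remains the authoritative recipe.

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Assumed model ID for illustration; verify against the official model card.
model = SentenceTransformer("google/embeddinggemma-300m")

# Tiny illustrative dataset of (query, relevant passage) pairs.
train_examples = [
    InputExample(texts=["reset my router",
                        "Unplug the router for 30 seconds, then plug it back in."]),
    InputExample(texts=["update the firmware",
                        "Open the admin page and choose 'Firmware update'."]),
]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=2)

# In-batch negatives: each query should score highest against its own passage.
loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_loader, loss)], epochs=1, warmup_steps=2)
model.save("embeddinggemma-finetuned")
```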

Choosing the right embedding model for your needs

Our goal is to provide the best tools for your needs. With this launch, you now have an embedding model for any application.

  • For on-device, offline use cases: EmbeddingGemma is your best choice, optimized for privacy, speed, and efficiency.

  • For most large-scale, server-side applications: Explore our state-of-the-art Gemini Embedding model via the Gemini API for the highest quality and maximum performance (see the sketch below).
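
For the server-side path, a call through the google-genai Python SDK might look roughly like this; the model name shown is an assumption, so consult the current Gemini API documentation.

```python
from google import genai

# Assumes an API key is available (e.g. via the GEMINI_API_KEY environment variable).
client = genai.Client()

result = client.models.embed_content(
    model="gemini-embedding-001",  # assumed model name; check the Gemini API docs
    contents="What is the meaning of life?",
)
print(result.embeddings)
```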


Get started with EmbeddingGemma today

We’ve prioritized making EmbeddingGemma accessible from day one and have partnered with developers to enable support across popular platforms and frameworks. Start building today with the tools you already know, using the same technology that will power experiences in Google's first-party platforms like Android.