Supercharge Your AI Applications: Deploying EmbeddingGemma at Scale with Dataflow

AUGUST 28, 2025
Danny McCormick, Staff Software Engineer, Google Cloud Dataflow
Ian Ballantyne, Senior Developer Relations Engineer, Google DeepMind

The world of AI is moving at a breakneck pace, and at the core of many modern applications like semantic search and Retrieval Augmented Generation (RAG) are embeddings. Today, we're excited to discuss how you can leverage Google's new highly efficient, 300M parameter open model, EmbeddingGemma, to generate powerful text embeddings with Google Cloud's Dataflow and vector databases like AlloyDB to build a scalable, real-time knowledge ingestion pipeline.


The power of embeddings and Dataflow

Embeddings are numerical vector representations of data that capture the underlying relationships between words and concepts. They are the cornerstone of applications that need to understand information on a deeper, conceptual level, from searching for documents that are semantically similar to a query to providing relevant context for Large Language Models (LLMs) in RAG systems.
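As a toy illustration of what "semantically similar" means numerically, the sketch below compares hand-made 3-dimensional vectors with cosine similarity. These vectors are invented for the example; a real model like EmbeddingGemma produces much higher-dimensional vectors learned from text.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: close to 1.0 means similar direction/meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings" (hand-made, for illustration only)
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.15]
banana = [0.1, 0.05, 0.9]

# Related concepts end up closer together than unrelated ones
assert cosine_similarity(king, queen) > cosine_similarity(king, banana)
```

Vector databases run this kind of nearest-neighbor comparison at scale, which is why the embeddings generated by the pipeline can power semantic search directly.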

To power these applications, you need a robust knowledge ingestion pipeline that can process unstructured data, convert it into embeddings, and load it into a specialized vector database. This is where Dataflow can help by encapsulating these steps into a single managed pipeline.

Using a small, highly efficient open model like EmbeddingGemma at the core of your pipeline makes the entire process self-contained, which can simplify management by eliminating the need for external network calls to other services for the embedding step. Because it's an open model, it can be hosted entirely within Dataflow. This provides the confidence to securely process large-scale, private datasets.

Beyond these operational benefits, EmbeddingGemma is also fine-tunable, allowing you to customize it for your specific data embedding needs; you can find a fine-tuning example here. Quality is just as important as scalability, and EmbeddingGemma excels here as well. It is the highest-ranking text-only multilingual embedding model under 500M parameters on the Massive Text Embedding Benchmark (MTEB) Multilingual leaderboard.

Dataflow is a fully managed, autoscaling platform for unified batch and streaming data processing. By including a model like EmbeddingGemma directly into a Dataflow pipeline, you gain several advantages:

  • Efficiency from data locality: Processing happens on the Dataflow workers, eliminating remote procedure calls (RPCs) to a separate inference service and avoiding quota limits and the complexity of autoscaling multiple systems in tandem. Your whole workflow runs on a single set of workers, reducing your resource footprint.

  • Unified system: A single system handles autoscaling, observability, and monitoring, simplifying your operational overhead.

  • Scalability and simplicity: Dataflow automatically scales your pipeline up or down based on demand, and Apache Beam's transforms reduce boilerplate code.


Building the ingestion pipeline with Dataflow ML

A typical knowledge ingestion pipeline consists of four phases: reading from a data source, preprocessing the data, generating embeddings, and writing to a vector database.
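In plain Python, before introducing any Beam constructs, the four phases chain together as sketched below. The `embed` and `write_to_vector_db` functions here are stand-ins invented for illustration; in the real pipeline those roles are played by EmbeddingGemma and a vector database writer.

```python
def read_source():
    # Phase 1: read raw records (in production: files, Pub/Sub, a database, ...)
    return ["Dataflow is a managed data processing service.",
            "EmbeddingGemma generates text embeddings."]

def preprocess(text):
    # Phase 2: clean and normalize the text
    return text.strip().lower()

def embed(text):
    # Phase 3: stand-in for a real embedding model call;
    # returns a small fixed-size numeric vector for demonstration
    return [float(ord(c)) for c in text[:4]]

def write_to_vector_db(records, sink):
    # Phase 4: stand-in for a vector database write
    sink.extend(records)

sink = []
records = [{"text": t, "embedding": embed(preprocess(t))} for t in read_source()]
write_to_vector_db(records, sink)
```

The rest of this section shows how Dataflow collapses phases two through four into a handful of managed transforms.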

With Dataflow's MLTransform, a powerful PTransform for data preparation, this entire workflow can be implemented in just a few lines of code.

Generating Gemma Embeddings with MLTransform

Let's walk through how to use the new Gemma model to generate text embeddings. This example, adapted from the EmbeddingGemma notebook, shows how to configure MLTransform to use a Hugging Face model and then write the results to AlloyDB where the embeddings can be used for semantic search. Databases like AlloyDB allow us to combine this semantic search with an additional structured search to provide high quality and relevant results.

First, we define the name of the model we'll use for embeddings along with a transform specifying the columns we want to embed and the type of model we're using.

```python
import tempfile

import apache_beam as beam
from apache_beam.ml.transforms.base import MLTransform
from apache_beam.ml.transforms.embeddings.huggingface import SentenceTransformerEmbeddings

# The new Gemma model for generating embeddings
text_embedding_model_name = 'google/embeddinggemma-300m'

# Define the embedding transform with our Gemma model
embedding_transform = SentenceTransformerEmbeddings(
    model_name=text_embedding_model_name, columns=['x']
)
```

Once we've generated embeddings, we'll pipe the output directly into our sink, which will usually be a vector database. To write these embeddings, we will define a config-driven VectorDatabaseWriteTransform.

In this case, we will use AlloyDB as our sink by passing in an AlloyDBVectorWriterConfig object. Dataflow supports writing to many vector databases, including AlloyDB, Cloud SQL, and BigQuery, using just configuration objects.

```python
# Imports assumed from Apache Beam's RAG ingestion module
from apache_beam.ml.rag.ingestion.alloydb import AlloyDBVectorWriterConfig
from apache_beam.ml.rag.ingestion.base import VectorDatabaseWriteTransform

# Define the config used to write to AlloyDB.
# `connection_config` and `table_name` are assumed to be defined earlier
# with the connection details and target table of your AlloyDB instance.
alloydb_writer_config = AlloyDBVectorWriterConfig(
    connection_config=connection_config,
    table_name=table_name
)

# Build and run the pipeline
with beam.Pipeline() as pipeline:
  _ = (
      pipeline
      # In production, beam.Create could be replaced by a read from any source
      | "CreateData" >> beam.Create(content)
      # MLTransform generates the embeddings
      | "Generate Embeddings" >> MLTransform(
          write_artifact_location=tempfile.mkdtemp()
      ).with_transform(embedding_transform)
      # The output is written to our vector database
      | 'Write to AlloyDB' >> VectorDatabaseWriteTransform(alloydb_writer_config)
  )
```

This simple yet powerful pattern lets you process massive datasets in parallel, generate state-of-the-art embeddings with the 300M-parameter EmbeddingGemma, and populate your vector database, all within a single, scalable, managed pipeline.


Get Started Today

By combining the latest Gemma models with the scalability of Dataflow and the vector search power of vector databases like AlloyDB, you can build sophisticated, next-generation AI applications with ease.

To learn more, explore the Dataflow ML documentation, especially the guides on preparing data and generating embeddings. You can also try a simple pipeline using Gemma by following this notebook.

To learn more about why EmbeddingGemma is ideal for this type of application, find more details in our launch announcement on the Google Developers blog.