This week, we extended the Gemini Batch API to support the newly launched Gemini Embedding model and gave developers the ability to use the OpenAI SDK to submit and process batches.
This builds on the initial launch of the Gemini Batch API, which enables asynchronous processing at a 50% lower price for high-volume, latency-tolerant use cases.
Our new Gemini Embedding model is already being used in thousands of production deployments. Now you can use the model with the Batch API at much higher rate limits and at half the price ($0.075 per 1M input tokens) to unlock even more cost-sensitive, latency-tolerant, or asynchronous use cases.
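To put the discount in concrete terms, here is a quick back-of-the-envelope comparison. The $0.15 per 1M token interactive price is inferred from the "half the price" figure above, so treat it as an assumption rather than an official quote:

# Batch price from this post; the interactive price is inferred from the
# "half the price" claim and is an assumption, not an official quote.
BATCH_PRICE_PER_1M = 0.075
INTERACTIVE_PRICE_PER_1M = 0.15

tokens = 500_000_000  # e.g. embedding a 500M-token corpus
print(f"Interactive: ${tokens / 1e6 * INTERACTIVE_PRICE_PER_1M:.2f}")  # Interactive: $75.00
print(f"Batch:       ${tokens / 1e6 * BATCH_PRICE_PER_1M:.2f}")        # Batch:       $37.50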
Get started with Batch Embeddings in just a few lines of code:
from google import genai
import time

client = genai.Client()  # reads your GEMINI_API_KEY from the environment

# Create a JSONL file that contains these lines:
# {"key": "request_1", "request": {"output_dimensionality": 3, "content": {"parts": [{"text": "Explain GenAI"}]}}}
# {"key": "request_2", "request": {"output_dimensionality": 4, "content": {"parts": [{"text": "Explain quantum computing"}]}}}
uploaded_batch_requests = client.files.upload(file='embedding_requests.json')

batch_job = client.batches.create_embeddings(
    model="gemini-embedding-001",
    src={"file_name": uploaded_batch_requests.name}
)
print(f"Created embedding batch job: {batch_job.name}")

# Jobs complete within 24 hours; poll until the job reaches a terminal state
while batch_job.state.name not in ('JOB_STATE_SUCCEEDED', 'JOB_STATE_FAILED', 'JOB_STATE_CANCELLED'):
    time.sleep(60)
    batch_job = client.batches.get(name=batch_job.name)

if batch_job.state.name == 'JOB_STATE_SUCCEEDED':
    result_file_name = batch_job.dest.file_name
    file_content_bytes = client.files.download(file=result_file_name)
    file_content = file_content_bytes.decode('utf-8')
    for line in file_content.splitlines():
        print(line)
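If your inputs live in a Python list, you can also generate the request file programmatically instead of writing it by hand. Here is a minimal sketch that reproduces the JSONL request format shown above (the texts and the output_dimensionality value are placeholders):

import json

texts = ["Explain GenAI", "Explain quantum computing"]  # placeholder inputs

with open('embedding_requests.json', 'w') as f:
    for i, text in enumerate(texts, start=1):
        request_line = {
            "key": f"request_{i}",
            "request": {
                "output_dimensionality": 3,  # placeholder; set your target size
                "content": {"parts": [{"text": text}]},
            },
        }
        f.write(json.dumps(request_line) + "\n")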
For more information and examples, head over to our documentation.
Switching to the Gemini Batch API is now as easy as updating a few lines of code if you use the OpenAI SDK compatibility layer:
from openai import OpenAI

openai_client = OpenAI(
    api_key="GEMINI_API_KEY",  # replace with your Gemini API key
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)
# Upload a JSONL file in the OpenAI batch input format
batch_input_file = openai_client.files.create(
    file=open("batch_requests.jsonl", "rb"),  # illustrative file name
    purpose="batch"
)

# Create the batch
batch = openai_client.batches.create(
    input_file_id=batch_input_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h"
)

# Wait for up to 24 hours & poll for status
batch = openai_client.batches.retrieve(batch.id)
if batch.status == "completed":
    # Download the results file
    results = openai_client.files.content(batch.output_file_id)
    print(results.text)
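For reference, each line of the batch input file uses the standard OpenAI batch request format, with the body pointed at a Gemini model. Here is a minimal sketch that writes one such request; the model name, file name, and prompt are illustrative assumptions:

import json

request_line = {
    "custom_id": "request-1",         # your identifier for matching results
    "method": "POST",
    "url": "/v1/chat/completions",
    "body": {
        "model": "gemini-2.5-flash",  # illustrative; any supported Gemini model
        "messages": [{"role": "user", "content": "Explain GenAI"}],
    },
}

with open("batch_requests.jsonl", "w") as f:
    f.write(json.dumps(request_line) + "\n")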
You can read more about the OpenAI compatibility layer and batch support in our documentation.
We are continuously expanding our batch offering to further optimize the cost of using the Gemini API, so keep an eye out for more updates. In the meantime, happy building!