Starting today, developers can access the latest Gemini models via the OpenAI library and REST API, making it easier to get started with Gemini. We will initially support the Chat Completions API and Embeddings API, with plans for additional compatibility in the weeks and months to come. You can read more in the Gemini API docs, and if you aren't already using the OpenAI libraries, we recommend calling the Gemini API directly.
```python
from openai import OpenAI

client = OpenAI(
    api_key="gemini_api_key",
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": "Explain to me how AI works"
        }
    ]
)

print(response.choices[0].message)
```
```javascript
import OpenAI from "openai";

const openai = new OpenAI({
    apiKey: "gemini_api_key",
    baseURL: "https://generativelanguage.googleapis.com/v1beta/"
});

const response = await openai.chat.completions.create({
    model: "gemini-1.5-flash",
    messages: [
        { role: "system", content: "You are a helpful assistant." },
        {
            role: "user",
            content: "Explain to me how AI works",
        },
    ],
});

console.log(response.choices[0].message);
```
```bash
curl "https://generativelanguage.googleapis.com/v1beta/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $gemini_api_key" \
  -d '{
    "model": "gemini-1.5-flash",
    "messages": [
      {"role": "user", "content": "Explain to me how AI works"}
    ]
  }'
```
For a list of supported Gemini API parameters, you can read our API Reference. We are excited for more developers to get a chance to start building with Gemini and will have more updates to share soon. If you are a Vertex AI Enterprise customer, we also support OpenAI compatibility. Happy building!