Search for "context window"

46 results

  • APRIL 29, 2025 / Gemini

    How It’s Made: Little Language Lessons uses Gemini’s multilingual capabilities to personalize language learning

    Little Language Lessons is a project that leverages the Gemini API and Google Cloud services for content generation, translation, and text-to-speech. It includes vocabulary lessons, slang practice, and object recognition for language learning.

  • APRIL 23, 2025 / Gemini

    Achieve real-time interaction: Build with the Live API

    Explore real-world applications of the Live API for Gemini models, now updated with enhanced features for real-time audio, video, and text processing, improved session management, greater control over interactions, and richer output options.

  • APRIL 9, 2025 / Gemini

    Gemini 2.5 Flash and Pro, Live API, and Veo 2 in the Gemini API

    Updates to the Gemini API include the production readiness of Veo 2 for video generation, a preview of the Live API for real-time interactions, and the upcoming Gemini 2.5 Flash model. Alongside the existing Gemini 2.5 Pro, these releases aim to help developers build AI applications with improved thinking models, dynamic interactions, and high-quality video generation.

  • MARCH 12, 2025 / Gemma

    Introducing Gemma 3: The Developer Guide

    Gemma 3 is a new, advanced version of the Gemma open-model family featuring multimodality, longer context windows, and improved language capabilities, with a range of sizes and deployment options for developers to experiment with.

  • FEB. 25, 2025 / Gemini

    Start building with Gemini 2.0 Flash and Flash-Lite

    Gemini 2.0 Flash-Lite is now generally available in the Gemini API for production use in Google AI Studio and for enterprise customers on Vertex AI. 2.0 Flash-Lite offers improved performance over 1.5 Flash across reasoning, multimodal, math, and factuality benchmarks. For projects that require long context windows, 2.0 Flash-Lite is an even more cost-effective option, with simplified pricing for prompts of more than 128K tokens.

  • FEB. 18, 2025 / Gemini

    Build Scalable AI Agents: Langbase and the Gemini API

    Langbase empowers developers to build and deploy powerful, scalable AI agents by leveraging the Google Gemini API, particularly Gemini 1.5 Flash, unlocking a new era of intelligent applications and streamlined workflows.

  • FEB. 5, 2025 / Gemini

    Gemini 2.0: Flash, Flash-Lite and Pro

    The Gemini 2.0 model family is seeing significant updates, including the release of Gemini 2.0 Flash, which is now production-ready and boasts higher rate limits, enhanced performance, and simplified pricing. Developers can also start testing an updated experimental version of Gemini 2.0 Pro today. Additionally, a new variant called Gemini 2.0 Flash-Lite, specifically designed for large-scale workloads, will be made available next week.

  • DEC. 6, 2024 / Gemini

    Looking back at the first year of the Gemini era

    The family of Gemini models has expanded in the past year in response to developer needs, introducing faster and more cost-effective models and enhancing the tools in Google AI Studio.

  • NOV. 14, 2024 / Gemini

    Enhancing AI-Powered Developer Tools with the Gemini API

    The integration of the Gemini 1.5 models with Sublayer's Ruby-based AI agent framework enables developer teams to automate their documentation process, streamline workflows, and build AI-driven applications.

  • NOV. 7, 2024 / AI

    Supercharging AI Coding Assistants with Gemini Models' Long Context

    By integrating Google's Gemini 1.5 Flash into its Cody AI assistant, Sourcegraph can evaluate the advantages of long context windows in AI models for code generation and understanding.
