Search for "Gemini 2.5"

29 results

  • JULY 7, 2025 / Gemini

    Batch Mode in the Gemini API: Process more for less

    The new batch mode in the Gemini API targets high-throughput workloads that are not latency-critical. It handles scheduling and processing on the developer's behalf, making large jobs such as data analysis, bulk content creation, and model evaluation more cost-effective and scalable (a minimal submission sketch follows this entry).

    Scale your AI workloads with batch mode in the Gemini API
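
    As a rough illustration of the workflow described above, here is a minimal sketch using the google-genai Python SDK's batch interface. The method names (batches.create, batches.get), the request shape, and the job-state strings are recalled from that SDK and should be treated as assumptions, not the post's own code.

    ```python
    import time

    from google import genai

    client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment

    # Inline requests; larger jobs can instead upload a JSONL file and pass
    # its name as src.
    requests = [
        {"contents": [{"role": "user", "parts": [{"text": "Summarize report A."}]}]},
        {"contents": [{"role": "user", "parts": [{"text": "Summarize report B."}]}]},
    ]

    job = client.batches.create(
        model="models/gemini-2.5-flash",
        src=requests,
        config={"display_name": "bulk-summaries"},  # hypothetical job name
    )

    # Batch jobs run asynchronously; the service handles scheduling, so the
    # client only polls until the job reaches a terminal state.
    while True:
        job = client.batches.get(name=job.name)
        if job.state.name in ("JOB_STATE_SUCCEEDED", "JOB_STATE_FAILED",
                              "JOB_STATE_CANCELLED"):
            break
        time.sleep(30)
    print(job.state.name)
    ```
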
  • JUNE 25, 2025 / Gemini

    Simulating a neural operating system with Gemini 2.5 Flash-Lite

    A research prototype uses Gemini 2.5 Flash-Lite to simulate a neural operating system, generating UI in real time in response to user interactions. It relies on interaction tracing for contextual awareness, streams the UI for responsiveness, and keeps state in an in-memory UI graph (see the streaming sketch after this entry).

    Behind the prototype: Simulating a neural operating system with Gemini
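
    The streaming pattern the prototype relies on can be sketched with the google-genai SDK's generate_content_stream. The UI graph, interaction trace, and model id below are illustrative assumptions, not the prototype's actual code.

    ```python
    from google import genai

    client = genai.Client()

    # Hypothetical in-memory UI graph and interaction trace standing in for
    # the prototype's state; names are illustrative only.
    ui_graph = {"home": {"children": ["settings"]}}
    trace = ["clicked:settings", "hovered:wifi_toggle"]

    prompt = (
        "You generate HTML for a simulated operating system.\n"
        f"Current UI graph: {ui_graph}\n"
        f"Recent interactions: {trace}\n"
        "Emit the next screen as HTML."
    )

    # Streaming keeps the simulated OS responsive: each chunk can be
    # rendered as soon as it arrives instead of waiting for the full screen.
    for chunk in client.models.generate_content_stream(
        model="gemini-2.5-flash-lite",  # model id is an assumption
        contents=prompt,
    ):
        print(chunk.text or "", end="", flush=True)
    ```
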
  • JUNE 24, 2025 / Gemini

    Gemini 2.5 for robotics and embodied intelligence

    Gemini 2.5 Pro and Flash bring stronger coding, reasoning, and multimodal capabilities, including spatial understanding, to robotics. The models support semantic scene understanding, code generation for robot control, and interactive applications built with the Live API, with particular attention to safety improvements and community applications.

  • JUNE 17, 2025 / Gemini

    Gemini 2.5: Updates to our family of thinking models

    Google is releasing updates to its Gemini 2.5 family of thinking models: Gemini 2.5 Pro and Flash are now stable and generally available, and the new, lower-cost Gemini 2.5 Flash-Lite is in preview, all offering improved performance and accuracy (a thinking-budget sketch follows this entry).

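    As a sketch of how a thinking model is configured through the API, the google-genai SDK exposes a thinking budget that trades reasoning depth against cost and latency; the parameter names below (ThinkingConfig, thinking_budget) are recalled from that SDK and are assumptions here.

    ```python
    from google import genai
    from google.genai import types

    client = genai.Client()

    # A larger thinking budget lets the model spend more internal reasoning
    # tokens before answering; smaller budgets reduce cost and latency.
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents="A bat and a ball cost $1.10 together; the bat costs $1.00 "
                 "more than the ball. What does the ball cost?",
        config=types.GenerateContentConfig(
            thinking_config=types.ThinkingConfig(thinking_budget=1024)
        ),
    )
    print(response.text)
    ```
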
  • MAY 23, 2025 / Gemini

    Gemini API I/O updates

    New features and models for the Gemini API: Gemini 2.5 Flash Preview brings improved reasoning and efficiency, Gemini 2.5 Pro and Flash text-to-speech support multiple languages and speakers, and Gemini 2.5 Flash native audio dialog targets conversational AI (a text-to-speech sketch follows this entry).

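    A minimal sketch of the text-to-speech preview described above, using the google-genai SDK; the model id, voice name, and speech_config field names are assumptions based on the announcement.

    ```python
    from google import genai
    from google.genai import types

    client = genai.Client()

    response = client.models.generate_content(
        model="gemini-2.5-flash-preview-tts",  # preview model id; an assumption
        contents="Say cheerfully: have a wonderful day!",
        config=types.GenerateContentConfig(
            response_modalities=["AUDIO"],
            speech_config=types.SpeechConfig(
                voice_config=types.VoiceConfig(
                    prebuilt_voice_config=types.PrebuiltVoiceConfig(
                        voice_name="Kore"  # hypothetical prebuilt voice
                    )
                )
            ),
        ),
    )

    # Audio is returned as inline PCM bytes on the first candidate part.
    audio = response.candidates[0].content.parts[0].inline_data.data
    with open("greeting.pcm", "wb") as f:
        f.write(audio)
    ```
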
  • MAY 21, 2025 / Google AI Studio

    An upgraded dev experience in Google AI Studio

    Google AI Studio has been upgraded for developers, adding native code generation with Gemini 2.5 Pro, agentic tools, and richer multimodal generation, along with the new Build tab, the Live API, and improved tooling for building sophisticated AI applications.

  • MAY 20, 2025 / Android

    What you should know from the Google I/O 2025 Developer keynote

    Top announcements from Google I/O 2025 focus on building across Google platforms and innovating with AI models from Google DeepMind, highlighting new tools, APIs, and features designed to boost developer productivity and power AI experiences with Gemini, Android, Firebase, and the web.

  • MAY 20, 2025 / Gemini

    Building agents with Google Gemini and open source frameworks

    Google Gemini models offer several advantages for building AI agents, including advanced reasoning, function calling, multimodality, and a large context window. Open-source frameworks such as LangGraph, CrewAI, LlamaIndex, and Composio can be used with Gemini for agent development (a function-calling sketch follows this entry).

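    A minimal sketch of the function-calling primitive these frameworks build on: the google-genai SDK can wrap a plain Python callable as a tool. The toy get_weather function is hypothetical.

    ```python
    from google import genai
    from google.genai import types

    client = genai.Client()

    def get_weather(city: str) -> str:
        """Toy tool: return a canned weather report for a city."""
        return f"It is sunny and 22 degrees C in {city}."

    # The SDK can execute the callable automatically when the model emits a
    # function call; agent frameworks (LangGraph, CrewAI, ...) layer planning,
    # memory, and multi-step control on top of this same primitive.
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents="What's the weather like in Zurich right now?",
        config=types.GenerateContentConfig(tools=[get_weather]),
    )
    print(response.text)
    ```
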
  • MAY 20, 2025 / Gemini

    Fully Reimagined: AI-First Google Colab

    Google Colab is launching a reimagined, AI-first experience at Google I/O, featuring an agentic collaborator powered by Gemini 2.5 Flash with iterative querying, an upgraded Data Science Agent, effortless code transformation, and flexible interaction methods, all aimed at significantly improving coding workflows.

    Google Colab's reimagined AI-first experience
  • MAY 20, 2025 / Gemini

    From idea to app: Introducing Stitch, a new way to design UIs

    Stitch, a new Google Labs experiment, uses AI to generate UI designs and frontend code from text prompts and images, streamlining the design-to-development workflow with UI generation from natural language or images, rapid iteration, and easy handoff via paste to Figma or exported front-end code.
