JULY 7, 2025 / Gemini
The new batch mode in the Gemini API is designed for high-throughput, non-latency-critical AI workloads. It simplifies large jobs by handling scheduling and processing, making tasks like data analysis, bulk content creation, and model evaluation more cost-effective and scalable, so developers can process large volumes of data efficiently.
JUNE 25, 2025 / Gemini
A research prototype simulating a neural operating system generates UI in real time with Gemini 2.5 Flash-Lite, adapting to user interactions. It uses interaction tracing for contextual awareness, streams the UI for responsiveness, and achieves statefulness with an in-memory UI graph.
JUNE 24, 2025 / Gemini
Gemini 2.5 Pro and Flash are transforming robotics by enhancing coding, reasoning, and multimodal capabilities, including spatial understanding. These models are used for semantic scene understanding, code generation for robot control, and building interactive applications with the Live API, with a strong emphasis on safety improvements and community applications.
JUNE 17, 2025 / Gemini
Google is releasing updates to its Gemini 2.5 model family: Gemini 2.5 Pro and Flash are now generally available and stable, and the new Gemini 2.5 Flash-Lite thinking model is in preview, offering enhanced performance and accuracy, with Flash-Lite providing a lower-cost option.
MAY 23, 2025 / Gemini
Announcing new features and models for the Gemini API: Gemini 2.5 Flash Preview with improved reasoning and efficiency, Gemini 2.5 Pro and Flash text-to-speech supporting multiple languages and speakers, and Gemini 2.5 Flash native audio dialog for conversational AI.
MAY 21, 2025 / Google AI Studio
Google AI Studio has been upgraded to improve the developer experience, featuring native code generation with Gemini 2.5 Pro, agentic tools, and enhanced multimodal generation, plus new additions like the Build tab, the Live API, and better tooling for building sophisticated AI applications.
MAY 20, 2025 / Android
Top announcements from Google I/O 2025 focus on building across Google platforms and innovating with AI models from Google DeepMind, highlighting new tools, APIs, and features designed to enhance developer productivity and create AI-powered experiences using Gemini, Android, Firebase, and the web.
MAY 20, 2025 / Gemini
Google Gemini models offer several advantages when building AI agents, such as advanced reasoning, function calling, multimodality, and large context window capabilities. Open-source frameworks like LangGraph, CrewAI, LlamaIndex, and Composio can be used with Gemini for agent development.
MAY 20, 2025 / Gemini
Google Colab is launching a reimagined AI-first version at Google I/O, featuring an agentic collaborator powered by Gemini 2.5 Flash with iterative querying capabilities, an upgraded Data Science Agent, effortless code transformation, and flexible interaction methods, aiming to significantly improve coding workflows.
MAY 20, 2025 / Gemini
Stitch, a new Google Labs experiment, uses AI to generate UI designs and frontend code from text prompts and images, aiming to streamline the design and development workflow. It offers UI generation from natural language or images, rapid iteration, paste-to-Figma support, and frontend code export.