SEPT. 24, 2025 / Cloud
The new Colab features simplify and enhance notebook-based class materials. Instructors can now freeze runtime versions and seamlessly present and copy notebooks. This improves reproducibility, stability, and shareability, making Colab ideal for researchers, educators, and developers. Slideshow mode and URL linking have also been enhanced.
SEPT. 24, 2025 / Mobile
Google AI Edge provides the tools to run AI features on-device, and its new LiteRT-LM runtime is a significant leap forward for generative AI. LiteRT-LM combines an open-source C++ API, cross-platform compatibility, and hardware acceleration, and is designed to efficiently run large language models like Gemma and Gemini Nano across a vast range of hardware. Its key innovation is a flexible, modular architecture that can scale to power complex, multi-task features in Chrome and Chromebook Plus, while also being lean enough for resource-constrained devices like the Pixel Watch. This versatility is already enabling a new wave of on-device generative AI, bringing capabilities like WebAI and smart replies to users.
SEPT. 24, 2025 / AI
Data Commons announces the availability of its MCP Server, a major milestone in making all of Data Commons’ vast public datasets instantly accessible and actionable for AI developers worldwide.
SEPT. 22, 2025 / AI
Gemini CLI now seamlessly integrates with FastMCP, Python's leading library for building MCP servers. We’re thrilled to announce this integration between two open-source projects, which empowers you to effortlessly connect your custom MCP tools and prompts directly to Gemini CLI!
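As a rough sketch of what the FastMCP side can look like (the server name, tool, and logic below are illustrative assumptions, not taken from the announcement), a tool is just a decorated Python function:

```python
# server.py — minimal FastMCP server exposing one illustrative tool.
from fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    # stdio is FastMCP's default transport and suits locally launched MCP servers.
    mcp.run()
```

Gemini CLI can then be pointed at the server through its MCP server configuration (typically an `mcpServers` entry in `settings.json` that launches the script); consult both projects' docs for the exact wiring, as the details here are assumptions.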
SEPT. 16, 2025 / AI
The Agent Development Kit (ADK) for Java 0.2.0 now integrates with LangChain4j, expanding LLM support to include third-party and local models like Gemma and Qwen. This release also enhances tooling with instance-based FunctionTools, improved async support, better loop control, and advanced agent logic with chained callbacks and new memory management.
SEPT. 10, 2025 / AI
Gemini Batch API now supports Embeddings and OpenAI compatibility, allowing asynchronous processing at 50% lower rates. The new Gemini Embedding Model can be leveraged with the Batch API for cost-sensitive use cases. OpenAI SDK compatibility simplifies switching to Gemini Batch API.
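As a hedged sketch of the OpenAI-compatibility path (the base URL and model name follow Google's published OpenAI-compatibility documentation, but treat them as assumptions to verify), an existing OpenAI SDK call can be redirected to Gemini by swapping the base URL:

```python
from openai import OpenAI

# Point the OpenAI SDK at Gemini's OpenAI-compatible endpoint.
client = OpenAI(
    api_key="YOUR_GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

# Synchronous embedding call; the Batch API accepts the same request shape,
# packaged for asynchronous processing at the discounted rate.
response = client.embeddings.create(
    model="gemini-embedding-001",
    input="Batch processing keeps costs down for large workloads.",
)
print(len(response.data[0].embedding))  # embedding dimensionality
```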
SEPT. 10, 2025 / AI
We are launching the 1.0 stable release of Genkit Go, empowering Go developers to build performant, production-ready AI-powered applications with Genkit. Recent enhancements include support for integrating and building MCP tools, expanded third-party model provider support, and production AI monitoring with Firebase. Additionally, we are announcing a new feature in the Genkit CLI that provides AI development tools, like the Gemini CLI and Cursor, with the latest knowledge of Genkit, supercharging the Genkit development experience when using AI assistance.
SEPT. 9, 2025 / AI
JAX, a framework known for large-scale AI model development, is proving to be a powerful tool in scientific computing, particularly for solving complex partial differential equations (PDEs). Researchers are now leveraging it to achieve significant speed-ups and memory reductions when solving high-order PDEs, demonstrating its potential to unlock new frontiers in scientific discovery.
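To make the idea concrete, here is a tiny illustrative JAX sketch (not the researchers' solver): an explicit finite-difference step for the 1-D heat equation, jit-compiled and rolled forward with `lax.scan` so the whole time loop stays on the accelerator.

```python
# Illustrative only: u_t = alpha * u_xx on a periodic 1-D grid.
import jax
import jax.numpy as jnp

alpha, dx, dt = 0.01, 0.01, 0.001  # diffusivity, grid spacing, time step

@jax.jit
def step(u, _):
    # Second-order central difference with periodic boundaries.
    u_xx = (jnp.roll(u, -1) - 2.0 * u + jnp.roll(u, 1)) / dx**2
    return u + dt * alpha * u_xx, None

x = jnp.linspace(0.0, 1.0, 101)
u0 = jnp.exp(-100.0 * (x - 0.5) ** 2)           # Gaussian initial condition
u_final, _ = jax.lax.scan(step, u0, None, length=1000)
```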
SEPT. 9, 2025 / AI
A2A Extensions provide a flexible way to add custom functionalities to agent-to-agent communication, going beyond the core A2A protocol. They enable specialized features and are openly defined and implemented.
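As a hedged illustration (the field names follow the A2A spec's AgentCapabilities and AgentExtension objects as we understand them; the agent, extension URI, and params are invented for this example), an agent advertises an extension in its Agent Card:

```python
# Illustrative Agent Card fragment declaring an A2A extension.
agent_card = {
    "name": "inventory-agent",                      # hypothetical agent
    "url": "https://agents.example.com/inventory",  # hypothetical endpoint
    "capabilities": {
        "streaming": True,
        "extensions": [
            {
                "uri": "https://example.com/ext/structured-forms/v1",  # hypothetical
                "description": "Adds structured form exchange on top of core A2A.",
                "required": False,  # clients that ignore it can still use core A2A
                "params": {"maxFields": 20},
            }
        ],
    },
}
```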
SEPT. 8, 2025 / AI
Today's Veo updates include support for vertical format (9:16) and 1080p HD outputs, along with new, lower pricing for Veo 3 ($0.40/second) and Veo 3 Fast ($0.15/second). These models are now stable for production use in the Gemini API. The MediaSim demo app showcases how Gemini's multimodal capabilities combine with Veo 3 for media simulations.
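A quick back-of-the-envelope cost check using the per-second rates quoted above (the clip length is chosen purely for illustration):

```python
VEO_3_RATE = 0.40       # USD per second of generated video
VEO_3_FAST_RATE = 0.15  # USD per second of generated video

clip_seconds = 8  # illustrative clip length
print(f"Veo 3:      ${clip_seconds * VEO_3_RATE:.2f}")       # $3.20
print(f"Veo 3 Fast: ${clip_seconds * VEO_3_FAST_RATE:.2f}")  # $1.20
```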