  • OCT. 15, 2025 / AI

    Introducing Coral NPU: A full-stack platform for Edge AI

    Coral NPU is a full-stack platform for Edge AI that addresses performance limits, ecosystem fragmentation, and deficits in user trust. Its AI-first architecture prioritizes ML matrix engines and offers a unified developer experience. Designed for ultra-low-power, always-on AI in wearables and IoT devices, it enables contextual awareness, audio and image processing, and user interaction with hardware-enforced privacy. Synaptics is the first partner to implement Coral NPU.

  • OCT. 2, 2025 / AI

    Meet Jules Tools: A Command Line Companion for Google’s Async Coding Agent

    You can now work with Jules directly in your command line. Jules is our asynchronous coding agent th...

  • OCT. 1, 2025 / AI

    Unlocking Multi-Spectral Data with Gemini

    Multi-spectral imagery, which captures wavelengths beyond human vision, offers a "superhuman" way to understand the world, and Google's Gemini models make this accessible without specialized training. By mapping invisible bands to RGB channels and providing context in the prompt, developers can leverage Gemini's power for tasks like environmental monitoring and agriculture (see the band-mapping sketch after this list).

  • OCT. 1, 2025 / AI

    Gemini for Home: Expanding the Platform for a New Era of Smart Home AI

    Google Home is enabling new Gemini-powered features for our partners’ devices and launching a new program to help them build the next generation of AI cameras.

  • SEPT. 29, 2025 / AI

    Gemma explained: EmbeddingGemma Architecture and Recipe

    EmbeddingGemma, built from Gemma 3, transforms text into numerical embeddings for tasks like search and retrieval. It is trained with Noise-Contrastive Estimation, a Global Orthogonal Regularizer, and Geometric Embedding Distillation, while Matryoshka Representation Learning allows flexible embedding dimensions (see the truncation sketch after this list). The development recipe includes encoder-decoder training, pre-fine-tuning, fine-tuning, model souping, and quantization-aware training.

  • SEPT. 25, 2025 / AI

    Building the Next Generation of Physical Agents with Gemini Robotics-ER 1.5

    Gemini Robotics-ER 1.5, now available to developers, is a state-of-the-art embodied reasoning model for robots. It excels at visual and spatial understanding, task planning, and progress estimation, allowing robots to perform complex, multi-step tasks.

  • SEPT. 24, 2025 / Mobile

    On-device GenAI in Chrome, Chromebook Plus, and Pixel Watch with LiteRT-LM

    Google AI Edge provides the tools to run AI features on-device, and its new LiteRT-LM runtime is a significant leap forward for generative AI. LiteRT-LM is an open-source runtime with a C++ API, cross-platform compatibility, and hardware acceleration, designed to efficiently run large language models like Gemma and Gemini Nano across a vast range of hardware. Its key innovation is a flexible, modular architecture that scales to power complex, multi-task features in Chrome and Chromebook Plus while remaining lean enough for resource-constrained devices like the Pixel Watch. This versatility is already enabling a new wave of on-device generative AI, bringing capabilities like WebAI and smart replies to users.

  • SEPT. 24, 2025 / AI

    Introducing the Data Commons Model Context Protocol (MCP) Server: Streamlining Public Data Access for AI Developers

    Data Commons announces the availability of its MCP Server, a major milestone in making the project's vast public datasets instantly accessible and actionable for AI developers worldwide.

  • SEPT. 16, 2025 / AI

    ADK for Java opening up to third-party language models via LangChain4j integration

    The Agent Development Kit (ADK) for Java 0.2.0 now integrates with LangChain4j, expanding LLM support to include third-party and local models like Gemma and Qwen. This release also enhances tooling with instance-based FunctionTools, improved async support, better loop control, and advanced agent logic with chained callbacks and new memory management.

  • SEPT. 8, 2025 / AI

    Veo 3 and Veo 3 Fast – new pricing, new configurations and better resolution

    Today's Veo updates include support for vertical format (9:16) and 1080p HD outputs, along with new, lower pricing for Veo 3 ($0.40/second) and Veo 3 Fast ($0.15/second). These models are now stable for production use in the Gemini API (see the request sketch after this list). The MediaSim demo app showcases how Gemini's multimodal capabilities combine with Veo 3 for media simulations.

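For the multi-spectral post above, here is a minimal Python sketch of the band-mapping idea: stack a near-infrared band with two visible bands into a false-color RGB image and describe that mapping in the prompt. The band files, the false-color convention, and the "gemini-2.5-flash" model name are illustrative assumptions, not code from the post.

```python
# Hedged sketch: map multi-spectral bands to RGB and let Gemini interpret them.
# File names and the model ID are assumptions for illustration.
import numpy as np
from PIL import Image
from google import genai


def to_uint8(band: np.ndarray) -> np.ndarray:
    """Contrast-stretch one band into the 0-255 range."""
    lo, hi = np.percentile(band, (2, 98))
    return np.clip((band - lo) / (hi - lo + 1e-9) * 255, 0, 255).astype(np.uint8)


# Hypothetical band arrays, e.g. exported from a satellite scene.
nir, red, green = (np.load(f"{b}_band.npy") for b in ("nir", "red", "green"))

# Classic false-color composite: NIR -> R, red -> G, green -> B,
# which renders healthy vegetation as bright red.
composite = Image.fromarray(np.dstack([to_uint8(nir), to_uint8(red), to_uint8(green)]))

client = genai.Client()  # reads GEMINI_API_KEY from the environment
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        "This false-color image maps near-infrared to red, visible red to green, "
        "and visible green to blue. Assess vegetation health and point out any "
        "stressed areas.",
        composite,
    ],
)
print(response.text)
```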
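For the EmbeddingGemma post above, a minimal sketch of what Matryoshka Representation Learning buys you in practice: a full embedding can be truncated to a shorter prefix and re-normalized, trading a little quality for memory and speed. The sentence-transformers usage and the "google/embeddinggemma-300m" checkpoint name are assumptions here, not the post's training recipe.

```python
# Hedged sketch of Matryoshka-style truncation with an EmbeddingGemma checkpoint.
# The model ID is an assumption; any MRL-trained embedding model works the same way.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("google/embeddinggemma-300m")

full = model.encode("How do I pair my earbuds with my phone?")  # full-size vector

# MRL orders information so a leading slice is still a usable embedding:
# keep the first 256 dimensions, then re-normalize for cosine similarity.
small = full[:256]
small = small / np.linalg.norm(small)

print(full.shape, small.shape)  # e.g. (768,) (256,)
```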
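For the Veo 3 post above, a minimal sketch of requesting a vertical 1080p clip through the Gemini API. The long-running-operation pattern follows the Gemini API's video generation flow, but the exact model ID and config field names used here are assumptions.

```python
# Hedged sketch: generate a 9:16, 1080p video with Veo 3 Fast via the Gemini API.
# Model ID and config fields are assumptions for illustration.
import time
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

operation = client.models.generate_videos(
    model="veo-3.0-fast-generate-001",
    prompt="A drone shot gliding over a coastal town at sunrise",
    config=types.GenerateVideosConfig(aspect_ratio="9:16", resolution="1080p"),
)

# Video generation is asynchronous: poll the long-running operation until done.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0].video
client.files.download(file=video)      # fetch the bytes from the Files service
video.save("veo3_vertical_1080p.mp4")  # write them to disk
```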