Search for "Accessible"

263 results

  • FEB. 26, 2026 / Mobile

    On-Device Function Calling in Google AI Edge Gallery

    Google has introduced FunctionGemma, a specialized 270M-parameter model designed to bring efficient, action-oriented AI experiences directly to mobile devices through on-device function calling. By leveraging Google AI Edge and LiteRT-LM, the model enables complex tasks—such as managing calendars, controlling device hardware, or executing specific game logic in the "Tiny Garden" demo—to be performed entirely offline with high speed and low latency. Available for testing in the Google AI Edge Gallery app on both Android and iOS, FunctionGemma allows developers to move beyond simple text generation toward building responsive, "agentic" applications that interact seamlessly with the physical and digital world without relying on cloud processing.

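
    On-device function calling generally follows a parse-and-dispatch loop: the model emits a structured call, and the app routes it to a local handler. A minimal sketch in plain Python; all function and field names here are illustrative assumptions, not the FunctionGemma or LiteRT-LM API:

    ```python
    # Hypothetical on-device dispatch: the model (not shown) emits a
    # structured call, and the app executes a matching local handler.

    def set_brightness(level: int) -> str:
        return f"brightness set to {level}%"

    def add_calendar_event(title: str, time: str) -> str:
        return f"scheduled '{title}' at {time}"

    HANDLERS = {
        "set_brightness": set_brightness,
        "add_calendar_event": add_calendar_event,
    }

    def dispatch(call: dict) -> str:
        # call = {"name": ..., "args": {...}} as parsed from model output
        return HANDLERS[call["name"]](**call["args"])

    print(dispatch({"name": "set_brightness", "args": {"level": 40}}))
    # → brightness set to 40%
    ```

    Because the handler table lives entirely on the device, no network round trip is needed between the model's decision and the action.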
  • FEB. 19, 2026 / Gemini

    Turn creative prompts into interactive XR experiences with Gemini

    The Android XR team is using Gemini's Canvas feature to make creating immersive extended reality (XR) experiences more accessible. This allows developers to rapidly prototype interactive 3D environments and models on a Samsung Galaxy XR headset using simple creative prompts.

  • FEB. 9, 2026 / AI

    Access public data insights faster: Data Commons MCP is now hosted on Google Cloud

    Data Commons has launched a free, hosted Model Context Protocol (MCP) service on Google Cloud Platform, eliminating the need for users to manage complex local server installations. This update simplifies connecting AI agents and the Gemini CLI to Data Commons, allowing Google to handle security, updates, and resource management while users query data natively.

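
    Pointing the Gemini CLI at a hosted MCP endpoint is typically a one-entry configuration change. A hedged sketch of an `mcpServers` entry in the CLI's `settings.json`; the endpoint URL is a placeholder, and the exact field names should be verified against the current Gemini CLI documentation:

    ```json
    {
      "mcpServers": {
        "dataCommons": {
          "httpUrl": "https://example.com/datacommons-mcp"
        }
      }
    }
    ```

    With the hosted service, the entry above replaces installing and running a local MCP server process.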
  • FEB. 4, 2026 / AI

    Introducing the Developer Knowledge API and MCP Server

    Google is launching the Developer Knowledge API and MCP Server in public preview. This new toolset provides a canonical, machine-readable way for AI assistants and agentic platforms to search and retrieve up-to-date documentation across Firebase, Google Cloud, Android, and more. By using the official MCP server, developers can connect tools directly to Google’s documentation corpus, ensuring that AI-generated code and guidance are based on authoritative, real-time context.

  • JAN. 28, 2026 / Mobile

    LiteRT: The Universal Framework for On-Device AI

    LiteRT, the evolution of TFLite, is now the universal framework for on-device AI. It delivers up to 1.4x faster GPU performance, new NPU support, and streamlined GenAI deployment for models like Gemma.

  • JAN. 16, 2026 / AI

    A Guide to Fine-Tuning FunctionGemma

    FunctionGemma is a specialized AI model for function calling. This post explains why fine-tuning is key to resolving tool-selection ambiguity (e.g., an internal search tool vs. Google Search) and achieving ultra-specialization, transforming it into a strict, enterprise-compliant agent. A case study demonstrates the improved logic. It also introduces the "FunctionGemma Tuning Lab," a no-code demo on Hugging Face Spaces, which streamlines the entire fine-tuning process for developers.

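
    The tool-selection problem above comes down to training data that pairs ambiguous queries with the one correct tool. A toy sketch of what such records might look like; the schema and the routing rule are hypothetical illustrations, not the actual FunctionGemma Tuning Lab format:

    ```python
    # Illustrative fine-tuning records for tool-selection disambiguation
    # (internal search vs. web search). Schema is hypothetical.
    examples = [
        {"query": "find the Q3 revenue memo",
         "target": {"name": "internal_search", "args": {"q": "Q3 revenue memo"}}},
        {"query": "current EUR to USD exchange rate",
         "target": {"name": "google_search", "args": {"q": "EUR to USD exchange rate"}}},
    ]

    def route(query: str) -> str:
        """Toy stand-in for the behavior fine-tuning should instill:
        company-document queries go to the internal tool, everything
        else to web search."""
        internal_terms = ("memo", "report", "wiki")
        return ("internal_search"
                if any(t in query.lower() for t in internal_terms)
                else "google_search")

    # Sanity-check that the labels are consistent with the intended rule.
    for ex in examples:
        assert route(ex["query"]) == ex["target"]["name"]
    print("labels consistent with routing rule")
    ```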
  • JAN. 5, 2026 / AI

    A Developer's Guide to Debugging JAX on Cloud TPUs: Essential Tools and Techniques

    A practical guide to debugging and profiling JAX on Cloud TPUs. It outlines core components (libtpu, JAX/jaxlib) and essential techniques. Tools covered include: Verbose Logging (via libtpu env vars), TPU Monitoring Library for performance metrics, tpu-info for real-time utilization, XLA HLO Dumps for compiler debugging, and the XProf suite for in-depth performance analysis.

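
    Some of these techniques can be exercised without TPU hardware. For instance, the HLO that the XLA compiler sees can be inspected directly from a jitted function via JAX's ahead-of-time lowering API; a minimal sketch (the `XLA_FLAGS="--xla_dump_to=<dir>"` route mentioned in the post writes similar artifacts to disk instead):

    ```python
    import jax
    import jax.numpy as jnp

    def scaled_sum(x):
        return jnp.sum(x * 2.0)

    # Lower without executing: .as_text() shows the StableHLO module
    # that XLA will compile, on CPU or TPU alike.
    lowered = jax.jit(scaled_sum).lower(jnp.arange(8.0))
    print(lowered.as_text()[:300])

    # Compile and inspect the optimized executable's HLO as well.
    compiled = lowered.compile()
    print(compiled.as_text()[:300])
    ```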
  • DEC. 17, 2025 / AI

    Gemini 3 Flash is now available in Gemini CLI

    Gemini 3 Flash is now available in Gemini CLI. It delivers Pro-grade coding performance with low latency and a lower cost, matching Gemini 3 Pro's SWE-bench Verified score of 76%. It significantly outperforms 2.5 Pro, improving auto-routing and agentic coding. It's ideal for high-frequency development tasks, handling complex code generation, large context windows (like processing a pull request with 1,000 comments), and generating load-testing scripts quickly and reliably.

  • DEC. 15, 2025 / Mobile

    Introducing A2UI: An open project for agent-driven interfaces

    A2UI is an open-source project for agent-driven, cross-platform, and generative UI. It provides a secure, declarative data format for agents to compose bespoke interfaces from a trusted component catalog, allowing for native styling and incremental updates. Designed for the multi-agent mesh (A2A), it offers a framework-agnostic solution to safely render remote agent UIs, with integrations in AG UI, Flutter's GenUI SDK, Opal, and Gemini Enterprise.

  • DEC. 11, 2025 / AI

    Building agents with the ADK and the new Interactions API

    The new Gemini Interactions API enables stateful, multi-turn AI agent workflows, providing a single interface for raw models and the Gemini Deep Research Agent. It can be integrated with existing ADK systems as a superior inference engine with simplified state management, or used as a transparent remote A2A agent via InteractionsApiTransport, allowing seamless expansion of multi-agent systems with minimal refactoring.
