  • JULY 10, 2025 / Gemini

    Announcing GenAI Processors: Build powerful and flexible Gemini applications

    GenAI Processors is a new open-source Python library from Google DeepMind that simplifies the development of AI applications, especially those handling multimodal input and requiring real-time responsiveness. It provides a consistent "Processor" interface for every step, from input handling to model calls and output processing, enabling seamless chaining and concurrent execution.

    Announcing GenAI Processors: Streamline your Gemini application development
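    The chaining idea described above can be sketched in plain Python. This is a conceptual illustration only; the class names below (`Processor`, `Upper`, `Exclaim`) are hypothetical and are not the genai-processors API, which should be consulted directly.

```python
import asyncio
from typing import AsyncIterator


class Processor:
    """One pipeline step: consumes an async stream of parts, yields a new one."""

    async def call(self, parts: AsyncIterator[str]) -> AsyncIterator[str]:
        raise NotImplementedError
        yield  # unreachable; marks this method as an async generator

    def __add__(self, other: "Processor") -> "Processor":
        # `p1 + p2` composes two steps into a single pipeline.
        return _Chain(self, other)


class _Chain(Processor):
    def __init__(self, first: Processor, second: Processor):
        self.first, self.second = first, second

    async def call(self, parts: AsyncIterator[str]) -> AsyncIterator[str]:
        # Stream the first step's output straight into the second step,
        # so parts flow through as they arrive rather than in batches.
        async for out in self.second.call(self.first.call(parts)):
            yield out


class Upper(Processor):
    async def call(self, parts):
        async for part in parts:
            yield part.upper()


class Exclaim(Processor):
    async def call(self, parts):
        async for part in parts:
            yield part + "!"


async def run_pipeline() -> list[str]:
    async def source():
        for part in ["hello", "world"]:
            yield part

    pipeline = Upper() + Exclaim()
    return [part async for part in pipeline.call(source())]


print(asyncio.run(run_pipeline()))  # → ['HELLO!', 'WORLD!']
```

    The real library applies the same composition idea to multimodal content parts and adds concurrent execution across steps.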
  • JULY 10, 2025 / Cloud

    Advancing agentic AI development with Firebase Studio

    Updates in Firebase Studio include new Agent modes, foundational support for the Model Context Protocol (MCP), and Gemini CLI integration, all designed to redefine AI-assisted development by allowing developers to create full-stack applications from a single prompt and integrate powerful AI capabilities directly into their workflow.

  • JULY 9, 2025 / Gemma

    T5Gemma: A new collection of encoder-decoder Gemma models

    T5Gemma is a new family of encoder-decoder LLMs developed by converting and adapting pretrained decoder-only models based on the Gemma 2 framework, offering superior performance and efficiency compared to its decoder-only counterparts, particularly for tasks requiring deep input understanding, like summarization and translation.

  • JULY 7, 2025 / Gemini

    Batch Mode in the Gemini API: Process more for less

    The new batch mode in the Gemini API is designed for high-throughput, non-latency-critical AI workloads. It simplifies large jobs by handling scheduling and processing, making tasks like data analysis, bulk content creation, and model evaluation more cost-effective and scalable, so developers can process large volumes of data efficiently.

    Scale your AI workloads with batch mode in the Gemini API
  • JUNE 26, 2025 / Gemma

    Introducing Gemma 3n: The developer guide

    The Gemma 3n model has been fully released, building on the success of previous Gemma models and bringing advanced on-device multimodal capabilities to edge devices with unprecedented performance. Explore Gemma 3n's innovations, including its mobile-first architecture, MatFormer technology, Per-Layer Embeddings, KV Cache Sharing, and new audio and MobileNet-V5 vision encoders, and how developers can start building with it today.

  • JUNE 26, 2025 / AI

    Unlock deeper insights with the new Python client library for Data Commons

    Google has released a new Python client library for Data Commons, an open-source knowledge graph that unifies public statistical data. Developed with contributions from The ONE Campaign, the library enhances how data developers can leverage Data Commons, offering improved features, support for custom instances, and easier access to a vast array of statistical variables.

  • JUNE 25, 2025 / Gemini

    Simulating a neural operating system with Gemini 2.5 Flash-Lite

    A research prototype uses Gemini 2.5 Flash-Lite to simulate a neural operating system, generating UI in real time that adapts to user interactions. It relies on interaction tracing for contextual awareness, UI streaming for responsiveness, and an in-memory UI graph for statefulness.

    Behind the prototype: Simulating a neural operating system with Gemini
  • JUNE 24, 2025 / Gemini

    Supercharge your notebooks: The new AI-first Google Colab is now available to everyone

    The new AI-first Google Colab enhances productivity with features like iterative querying for conversational coding, a next-generation Data Science Agent for autonomous workflows, and effortless code transformation. Early adopters report a dramatic productivity boost, accelerating ML projects, debugging code faster, and effortlessly creating high-quality visualizations.

  • JUNE 24, 2025 / Kaggle

    Using KerasHub for easy end-to-end machine learning workflows with Hugging Face

    KerasHub enables users to mix and match model architectures and weights across different machine learning frameworks, allowing checkpoints from sources like Hugging Face Hub (including those created with PyTorch) to be loaded into Keras models for use with JAX, PyTorch, or TensorFlow. This flexibility means you can leverage a vast array of community fine-tuned models while maintaining full control over your chosen backend framework.

    How to load model weights from SafeTensors into KerasHub for multi-framework machine learning
  • JUNE 24, 2025 / Gemini

    Gemini 2.5 for robotics and embodied intelligence

    Gemini 2.5 Pro and Flash are transforming robotics by enhancing coding, reasoning, and multimodal capabilities, including spatial understanding. These models are used for semantic scene understanding, code generation for robot control, and building interactive applications with the Live API, with a strong emphasis on safety improvements and community applications.
