  • SEPT. 26, 2025 / AI

    Apigee Operator for Kubernetes and GKE Inference Gateway integration for Auth and AI/LLM policies

    The GKE Inference Gateway now integrates with Apigee, allowing enterprises to unify AI serving and API governance. This enables GKE users to leverage Apigee's API management, security, and monetization features for their AI workloads, including API keys, quotas, rate limiting, and Model Armor security.

  • SEPT. 24, 2025 / AI

    Introducing the Data Commons Model Context Protocol (MCP) Server: Streamlining Public Data Access for AI Developers

    Data Commons announces the availability of its MCP Server, a major milestone that makes Data Commons’ vast public datasets instantly accessible and actionable for AI developers worldwide.

  • SEPT. 22, 2025 / AI

    Gemini CLI 🤝 FastMCP: Simplifying MCP server development

    Gemini CLI now seamlessly integrates with FastMCP, Python's leading library for building MCP servers. We’re thrilled to announce this integration between two open-source projects, which empowers you to effortlessly connect your custom MCP tools and prompts directly to Gemini CLI!
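    As a sketch of how the two projects connect (the server name and script path below are illustrative): once a FastMCP server script exists, Gemini CLI can launch it as an MCP server via an `mcpServers` entry in its `settings.json`:

```json
{
  "mcpServers": {
    "my-fastmcp-server": {
      "command": "python",
      "args": ["server.py"]
    }
  }
}
```

    Gemini CLI starts the configured process and speaks MCP to it over stdio, so the tools and prompts the server defines become available in the CLI session.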

  • SEPT. 10, 2025 / AI

    Gemini Batch API now supports Embeddings and OpenAI Compatibility

    Gemini Batch API now supports Embeddings and OpenAI compatibility, enabling asynchronous processing at 50% lower rates. The new Gemini Embedding model can be used with the Batch API for cost-sensitive use cases, and OpenAI SDK compatibility simplifies switching to the Gemini Batch API.

  • SEPT. 9, 2025 / AI

    A2A Extensions: Empowering Custom Agent Functionality

    A2A Extensions provide a flexible way to add custom functionalities to agent-to-agent communication, going beyond the core A2A protocol. They enable specialized features and are openly defined and implemented.

  • AUG. 12, 2025 / Google Labs

    Meet Jules’ sharpest critic and most valuable ally

    Jules' critic functionality addresses potential issues like subtle bugs and missed edge cases in AI-generated code by acting as a peer reviewer within the generation process. This "critic-augmented generation" means proposed code changes undergo adversarial review, allowing Jules to improve its output and ultimately deliver higher-quality, pre-reviewed code.

  • JULY 30, 2025 / Gemini

    Gemini Embedding: Powering RAG and context engineering

    The Gemini Embedding model enhances AI applications, particularly through context engineering. Organizations across industries are adopting it to power context-aware systems, reporting significant improvements in performance, accuracy, and efficiency.

  • JUNE 26, 2025 / AI

    Unlock deeper insights with the new Python client library for Data Commons

    Google has released a new Python client library for Data Commons, the open-source knowledge graph that unifies public statistical data. Developed with contributions from The ONE Campaign, the library enhances how data developers can leverage Data Commons by offering improved features, support for custom instances, and easier access to a vast array of statistical variables.

  • JUNE 24, 2025 / Gemini

    Supercharge your notebooks: The new AI-first Google Colab is now available to everyone

    The new AI-first Google Colab enhances productivity with iterative querying for conversational coding, a next-generation Data Science Agent for autonomous workflows, and effortless code transformation. Early adopters report a dramatic productivity boost: accelerating ML projects, debugging code faster, and effortlessly creating high-quality visualizations.

  • JUNE 24, 2025 / Kaggle

    Using KerasHub for easy end-to-end machine learning workflows with Hugging Face

    KerasHub enables users to mix and match model architectures and weights across different machine learning frameworks, allowing checkpoints from sources like Hugging Face Hub (including those created with PyTorch) to be loaded into Keras models for use with JAX, PyTorch, or TensorFlow. This flexibility means you can leverage a vast array of community fine-tuned models while maintaining full control over your chosen backend framework.
