Search

103 results

  • SEPT. 9, 2025 / AI

    A2A Extensions: Empowering Custom Agent Functionality

    A2A Extensions provide a flexible way to add custom functionality to agent-to-agent communication beyond the core A2A protocol. They enable specialized features and are openly defined and implemented; an illustrative extension declaration is sketched below.

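    An illustrative Python sketch of what declaring such an extension in an agent card could look like; the field names (uri, description, required, params) and the URIs are assumptions for illustration, not the authoritative A2A schema.

    ```python
    # Hypothetical agent card fragment declaring a custom extension; field
    # names and URIs are illustrative, not the authoritative A2A schema.
    agent_card = {
        "name": "geo-agent",
        "url": "https://example.com/a2a",  # hypothetical A2A endpoint
        "capabilities": {
            "extensions": [
                {
                    # Both agents identify the extension by an agreed-upon URI.
                    "uri": "https://example.com/ext/geo-coordinates/v1",
                    "description": "Accepts and returns GeoJSON coordinates in task messages.",
                    # required=True would tell clients they must understand
                    # this extension to interact with the agent at all.
                    "required": False,
                    "params": {"crs": "WGS84"},
                }
            ]
        },
    }

    # A client can check for the extension before relying on it.
    declared = {e["uri"] for e in agent_card["capabilities"].get("extensions", [])}
    print("https://example.com/ext/geo-coordinates/v1" in declared)
    ```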
  • SEPT. 4, 2025 / AI

    From Fine-Tuning to Production: A Scalable Embedding Pipeline with Dataflow

    Learn how to use Google's EmbeddingGemma, an efficient open model, with Google Cloud's Dataflow and vector databases like AlloyDB to build scalable, real-time knowledge ingestion pipelines; a minimal pipeline sketch follows below.

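    A minimal Apache Beam sketch of the kind of ingestion pipeline the post describes, assuming EmbeddingGemma is loaded through sentence-transformers; the model id, input path, and output sink are placeholders (the post targets AlloyDB), not the post's actual configuration.

    ```python
    import json

    import apache_beam as beam


    class EmbedDoFn(beam.DoFn):
        """Loads the embedding model once per worker and embeds each text."""

        def setup(self):
            from sentence_transformers import SentenceTransformer
            # Placeholder id; substitute the actual EmbeddingGemma checkpoint.
            self._model = SentenceTransformer("google/embeddinggemma-300m")

        def process(self, text: str):
            vector = self._model.encode(text).tolist()
            yield json.dumps({"text": text, "embedding": vector})


    def run():
        # DirectRunner by default; on Dataflow you would pass the usual
        # --runner=DataflowRunner pipeline options instead.
        with beam.Pipeline() as p:
            (
                p
                | "ReadDocs" >> beam.io.ReadFromText("gs://my-bucket/docs/*.txt")
                | "Embed" >> beam.ParDo(EmbedDoFn())
                # A text sink stands in for the vector database (AlloyDB) here.
                | "Write" >> beam.io.WriteToText("gs://my-bucket/embeddings/out")
            )


    if __name__ == "__main__":
        run()
    ```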
  • AUG. 28, 2025 / AI

    How to prompt Gemini 2.5 Flash Image Generation for the best results

    Detailed prompting techniques and best practices for applications including photorealistic scenes, stylized illustrations, product mockups, and more, using Google's newly released Gemini 2.5 Flash Image, a natively multimodal model that generates, edits, and composes images from text. Its capabilities include text-to-image, image editing, style transfer, and multi-image composition; a minimal generation call is sketched below.

    Gemini 2.5 Flash Image
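
    A minimal sketch of calling the model through the google-genai Python SDK; the model id "gemini-2.5-flash-image-preview" is an assumption (check the current docs for the exact name), and an API key is expected in the environment.

    ```python
    from io import BytesIO

    from google import genai
    from PIL import Image

    client = genai.Client()  # reads the API key from the environment

    # Descriptive, scene-level prompts tend to work better than keyword lists.
    prompt = (
        "A photorealistic product mockup of a ceramic coffee mug on a walnut desk, "
        "soft morning window light, shallow depth of field, 85mm lens look."
    )

    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",  # assumed model id
        contents=prompt,
    )

    # The response interleaves text and image parts; save any returned images.
    for i, part in enumerate(response.candidates[0].content.parts):
        if part.inline_data is not None:
            Image.open(BytesIO(part.inline_data.data)).save(f"mockup_{i}.png")
        elif part.text:
            print(part.text)
    ```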
  • AUG. 21, 2025 / Gemini

    What's new in Gemini Code Assist

    Gemini Code Assist's Agent Mode, now available in VS Code (Preview) and IntelliJ (Stable), streamlines complex coding tasks by proposing detailed plans for user review and approval. This intelligent, collaborative approach, enhanced with features like inline diffs and persistent chat history, aims to boost developer productivity and efficiency.

    New in Gemini Code Assist: Agent Mode more widely available, IDE improvements and Gemini CLI updates
  • AUG. 13, 2025 / Gemini

    Gemini CLI + VS Code: Native diffing and context-aware workflows

    The latest Gemini CLI update provides deep IDE integration with VS Code, enabling intelligent, context-aware suggestions and native in-editor diffing, so developers can review and modify proposed changes directly in the diff view for a more efficient workflow.

    Gemini CLI + VS Code integration
  • AUG. 12, 2025 / Kaggle

    Train a GPT2 model with JAX on TPU for free

    Build and train a GPT2 model from scratch using JAX on Google TPUs, with a complete Python notebook that runs on free-tier Colab or Kaggle. Learn how to define a hardware mesh, partition model parameters and input data for data parallelism, and optimize the training loop; a minimal mesh-and-sharding sketch follows below.

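    A simplified illustration of the mesh-and-sharding setup the notebook walks through; the toy weight matrix and batch stand in for the actual GPT2 parameters and data loader.

    ```python
    import jax
    import jax.numpy as jnp
    from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

    # Define a 1-D hardware mesh over all available devices (TPU cores on
    # Colab/Kaggle TPUs, or local CPU devices) with a named "data" axis.
    mesh = Mesh(jax.devices(), axis_names=("data",))

    # Replicate parameters across the mesh; shard the batch along "data".
    replicated = NamedSharding(mesh, P())           # same copy on every device
    batch_sharded = NamedSharding(mesh, P("data"))  # split the leading batch dim

    params = jax.device_put(jnp.ones((8, 4)), replicated)     # toy weight matrix
    batch = jax.device_put(jnp.ones((32, 8)), batch_sharded)  # toy input batch


    def loss_fn(w, x):
        return jnp.mean((x @ w) ** 2)


    # jit compiles a single program; XLA inserts the cross-device collectives
    # needed for data parallelism based on the input shardings.
    grad_step = jax.jit(jax.grad(loss_fn))
    grads = grad_step(params, batch)
    print(grads.shape, grads.sharding)
    ```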
  • JULY 24, 2025 / AI

    The agentic experience: Is MCP the right tool for your AI future?

    Apigee helps enterprises integrate large language models (LLMs) into existing API ecosystems securely and scalably, addressing challenges like authentication and authorization that the evolving Model Context Protocol (MCP) does not yet fully cover. It also offers an open-source MCP server example that demonstrates how to implement enterprise-ready API security for AI agents; a simplified server sketch follows below.

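    A simplified sketch of an MCP server whose single tool calls an API behind an Apigee proxy, assuming the official MCP Python SDK's FastMCP helper; the proxy URL and bearer-token handling are placeholders, not the post's open-source example.

    ```python
    import os

    import httpx
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("orders")

    APIGEE_PROXY = "https://api.example.com/v1"  # hypothetical Apigee proxy base URL


    @mcp.tool()
    def get_order(order_id: str) -> str:
        """Look up an order via the Apigee-managed API rather than the backend."""
        # Apigee enforces auth, quotas, and policies at the proxy; the agent side
        # only supplies a bearer token read from the environment.
        resp = httpx.get(
            f"{APIGEE_PROXY}/orders/{order_id}",
            headers={"Authorization": f"Bearer {os.environ['API_TOKEN']}"},
            timeout=10.0,
        )
        resp.raise_for_status()
        return resp.text


    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio by default
    ```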
  • JULY 24, 2025 / AI

    People of AI podcast Season 5 is here: Meet the builders shaping the future

    Co-hosted by Ashley Oldacre and Christina Warren, Season 5 of the People of AI podcast focuses on the builders shaping the future of AI, highlighting the unique journeys, challenges, and triumphs of these innovators.

    People of AI Podcast – Season 5
  • JULY 16, 2025 / Gemini

    Simplify your Agent "vibe building" flow with ADK and Gemini CLI

    The updated Agent Development Kit (ADK) simplifies and accelerates building AI agents by giving the Gemini CLI a deep, cost-effective understanding of the ADK framework. Developers can quickly ideate, generate, test, and improve functional agents through conversational prompts, eliminating friction and staying in a productive "flow" state; a minimal agent definition is sketched below.

    ADK + Gemini CLI: Supercharge Your Agent Building Vibe
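
    A minimal sketch of the kind of ADK agent the post has Gemini CLI generate, assuming the google-adk package's Agent class; the model id and the tool are placeholders.

    ```python
    from google.adk.agents import Agent


    def get_weather(city: str) -> dict:
        """Toy tool: return a canned weather report for a city."""
        return {"city": city, "forecast": "sunny", "temp_c": 21}


    root_agent = Agent(
        name="weather_agent",
        model="gemini-2.0-flash",  # assumed model id
        description="Answers simple weather questions.",
        instruction="Use the get_weather tool whenever the user asks about the weather.",
        tools=[get_weather],
    )

    # Run locally with the ADK tooling, e.g. `adk run` or `adk web`, from the
    # project directory containing this agent.
    ```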
  • JULY 16, 2025 / Cloud

    Stanford’s Marin foundation model: The first fully open model developed using JAX

    The Marin project aims to expand the definition of 'open' in AI to include the entire scientific process, not just the model itself, by making the complete development journey accessible and reproducible. This effort, powered by the JAX framework and the Levanter training library, allows foundation models to be deeply scrutinized, trusted, and built upon, fostering a more transparent future for AI research.

    Stanford Marin project in JAX