  • FEB. 19, 2026 / Gemini

    Turn creative prompts into interactive XR experiences with Gemini

    The Android XR team is using Gemini's Canvas feature to make creating immersive extended reality (XR) experiences more accessible. This allows developers to rapidly prototype interactive 3D environments and models on a Samsung Galaxy XR headset using simple creative prompts.

  • FEB. 3, 2026 / AI

    Easy FunctionGemma finetuning with Tunix on Google TPUs

    The lightweight, JAX-based Tunix library makes finetuning the FunctionGemma model on Google TPUs fast and easy, a process demonstrated here using LoRA for supervised finetuning. The approach delivers significant accuracy gains with high TPU efficiency, culminating in a model ready for deployment.

  • JAN. 16, 2026 / AI

    A Guide to Fine-Tuning FunctionGemma

    FunctionGemma is a specialized AI model for function calling. This post explains why fine-tuning is key to resolving tool selection ambiguity (e.g., internal vs. Google search) and achieving ultra-specialization, transforming it into a strict, enterprise-compliant agent. A case study demonstrates the improved logic. It also introduces the "FunctionGemma Tuning Lab," a no-code demo on Hugging Face Spaces, which streamlines the entire fine-tuning process for developers.
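
A single supervised fine-tuning record for resolving tool-selection ambiguity might look like the sketch below. The schema, tool names, and prompt are hypothetical illustrations, not the actual format used by FunctionGemma or the Tuning Lab:

```python
# A minimal sketch of one fine-tuning example that teaches the model to
# prefer an internal search tool over public Google Search for
# company-specific queries. All names and the schema are hypothetical.
example = {
    "prompt": "Find last quarter's revenue figures for our EMEA region.",
    "tools": [
        {"name": "internal_search", "description": "Search internal company documents."},
        {"name": "google_search", "description": "Search the public web."},
    ],
    # Target completion: the ambiguous query should resolve to the
    # enterprise-compliant internal tool.
    "target_call": {
        "name": "internal_search",
        "arguments": {"query": "EMEA revenue last quarter"},
    },
}

def expected_tool(record):
    """Return the tool name the fine-tuned model is trained to emit."""
    return record["target_call"]["name"]
```

Curating many such examples, all consistently labeling company-specific queries with the internal tool, is what turns the general-purpose model into the strict, specialized agent the post describes.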

  • OCT. 7, 2025 / AI

    Building High-Performance Data Pipelines with Grain and ArrayRecord

    To avoid data bottlenecks when training large models, this guide introduces Grain and ArrayRecord for building high-performance data pipelines.

  • SEPT. 16, 2025 / AI

    ADK for Java opening up to third-party language models via LangChain4j integration

    The Agent Development Kit (ADK) for Java 0.2.0 now integrates with LangChain4j, expanding LLM support to include third-party and local models like Gemma and Qwen. This release also enhances tooling with instance-based FunctionTools, improved async support, better loop control, and advanced agent logic with chained callbacks and new memory management.

  • SEPT. 4, 2025 / AI

    From Fine-Tuning to Production: A Scalable Embedding Pipeline with Dataflow

    Learn how to use Google's EmbeddingGemma, an efficient open model, with Google Cloud's Dataflow and vector databases like AlloyDB to build scalable, real-time knowledge ingestion pipelines.

  • AUG. 12, 2025 / Kaggle

    Train a GPT2 model with JAX on TPU for free

    Build and train a GPT2 model from scratch using JAX on Google TPUs, with a complete Python notebook for free-tier Colab or Kaggle. Learn how to define a hardware mesh, partition model parameters and input data for data parallelism, and optimize the model training process.
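
The data-parallel partitioning idea can be illustrated without TPU hardware: shard a batch along its leading axis across the devices of a one-dimensional mesh, with each device holding a full replica of the model parameters. This pure-Python sketch only mimics the partitioning step; the notebook itself uses JAX sharding primitives:

```python
def shard_batch(batch, num_devices):
    """Split a batch (a list of examples) evenly across a 1-D device mesh.

    This mirrors data parallelism: each device receives an equal,
    contiguous slice of the batch along the leading axis.
    """
    if len(batch) % num_devices != 0:
        raise ValueError("batch size must be divisible by the mesh size")
    per_device = len(batch) // num_devices
    return [
        batch[i * per_device : (i + 1) * per_device]
        for i in range(num_devices)
    ]

# A batch of 8 examples over a mesh of 4 devices: 2 examples per device.
shards = shard_batch(list(range(8)), num_devices=4)
```

In real JAX code the same slicing happens implicitly once the input array is placed with a sharding that maps the batch axis onto the mesh's device axis.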

  • JULY 16, 2025 / AI

    Unlock Gemini’s reasoning: A step-by-step guide to logprobs on Vertex AI

    The `logprobs` feature is now officially available in the Gemini API on Vertex AI, providing insight into the model's decision-making by showing probability scores for the chosen and alternative tokens. This step-by-step guide walks through enabling and interpreting the feature and applying it to powerful use cases such as confident classification, dynamic autocomplete, and quantitative RAG evaluation.
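
The core interpretation step is simple: a token's natural-log probability converts to a plain probability by exponentiation, which can drive confidence thresholds for classification. This is a stdlib-only sketch of that step; the actual response schema and field names on Vertex AI differ:

```python
import math

def to_probability(logprob):
    """Convert a per-token natural-log probability to a plain probability."""
    return math.exp(logprob)

def is_confident(logprob, threshold=0.9):
    """Accept a classification label only if the chosen token's
    probability clears the threshold."""
    return to_probability(logprob) >= threshold

# A logprob of 0.0 means probability 1.0 (the model was certain);
# a logprob around -0.105 corresponds to a probability of roughly 0.9.
```

Applied to a single-token classification label, this turns raw logprobs into an accept/reject decision, the basis of the "confident classification" use case.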

  • JUNE 24, 2025 / Kaggle

    Using KerasHub for easy end-to-end machine learning workflows with Hugging Face

    KerasHub enables users to mix and match model architectures and weights across different machine learning frameworks, allowing checkpoints from sources like Hugging Face Hub (including those created with PyTorch) to be loaded into Keras models for use with JAX, PyTorch, or TensorFlow. This flexibility means you can leverage a vast array of community fine-tuned models while maintaining full control over your chosen backend framework.

  • JAN. 15, 2025 / AI

    Vertex AI RAG Engine: A developer's tool

    Vertex AI RAG Engine, a managed orchestration service, streamlines retrieving relevant information and feeding it to large language models. This enables developers to build robust generative AI apps whose responses are factually grounded.
