Posts by Wei Wei

3 results
  • APRIL 16, 2026 / AI

    MaxText Expands Post-Training Capabilities: Introducing SFT and RL on Single-Host TPUs

    MaxText has introduced new support for Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) on single-host TPU configurations, leveraging JAX and the Tunix library for high-performance model refinement. These features enable developers to easily adapt pre-trained models for specialized tasks and complex reasoning using efficient algorithms like GRPO and GSPO. This update streamlines the post-training workflow, offering a scalable path from single-host setups to larger multi-host configurations.

  • FEBRUARY 3, 2026 / AI

    Easy FunctionGemma finetuning with Tunix on Google TPUs

    Finetuning the FunctionGemma model is fast and easy with the lightweight JAX-based Tunix library on Google TPUs, demonstrated here using LoRA for supervised finetuning. The approach delivers significant accuracy improvements with high TPU efficiency, yielding a model ready for deployment.

  • AUGUST 12, 2025 / Kaggle

    Train a GPT2 model with JAX on TPU for free

    Build and train a GPT2 model from scratch using JAX on Google TPUs, with a complete Python notebook for free-tier Colab or Kaggle. Learn how to define a hardware mesh, partition model parameters and input data for data parallelism, and optimize the model training process.

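The data-parallel recipe described in the last post (define a hardware mesh, replicate parameters, shard the input batch across devices) can be sketched in a few lines of JAX. This is a minimal illustration, not code from the post: the axis name "batch", the toy linear layer, and the shapes are assumptions for demonstration.

```python
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# 1-D device mesh over all available devices (8 TPU cores on free-tier
# Colab/Kaggle; a single device on CPU). "batch" is the mesh axis name.
mesh = Mesh(np.array(jax.devices()), axis_names=("batch",))

data_sharding = NamedSharding(mesh, PartitionSpec("batch"))  # split batch dim
param_sharding = NamedSharding(mesh, PartitionSpec())        # replicate params

# Toy "model": one linear layer, replicated on every device.
params = jax.device_put(jnp.ones((128, 64)), param_sharding)
# A batch of 8 examples, sharded across the mesh's "batch" axis.
batch = jax.device_put(jnp.ones((8, 128)), data_sharding)

@jax.jit
def forward(params, batch):
    # jit propagates the input shardings and infers the output sharding,
    # so each device computes only its slice of the batch.
    return batch @ params

out = forward(params, batch)
print(out.shape)  # (8, 64)
```

A real GPT2 training step would replace the toy layer with the transformer's parameter pytree and a loss/gradient computation, but the mesh and sharding setup follow the same pattern.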