Search

9 results

  • AUG. 12, 2025 / Kaggle

    Train a GPT2 model with JAX on TPU for free

    Build and train a GPT2 model from scratch using JAX on Google TPUs, with a complete Python notebook for free-tier Colab or Kaggle. Learn how to define a hardware mesh, partition model parameters and input data for data parallelism, and optimize the training process; a minimal sketch of the mesh-and-sharding pattern appears after this list.

  • JULY 30, 2025 / Gemini

    Introducing LangExtract: A Gemini powered information extraction library

    LangExtract is a new open-source Python library, powered by Gemini models, for extracting structured information from unstructured text. It offers precise source grounding, reliable structured outputs via controlled generation, optimized long-context extraction, interactive visualization, and flexible LLM backend support; a short usage sketch appears after this list.

  • JULY 9, 2025 / Gemma

    T5Gemma: A new collection of encoder-decoder Gemma models

    T5Gemma is a new family of encoder-decoder LLMs developed by converting and adapting pretrained decoder-only models based on the Gemma 2 framework, offering superior performance and efficiency compared to its decoder-only counterparts, particularly for tasks requiring deep input understanding, like summarization and translation.

  • JULY 7, 2025 / Gemini

    Batch Mode in the Gemini API: Process more for less

    The new batch mode in the Gemini API is designed for high-throughput, non-latency-critical AI workloads. It simplifies large jobs by handling scheduling and processing for you, making tasks like data analysis, bulk content creation, and model evaluation more cost-effective and scalable.

  • JUNE 23, 2025 / Kaggle

    Multilingual innovation in LLMs: How open models help unlock global communication

    Developers adapt LLMs like Gemma for diverse languages and cultural contexts, demonstrating AI's potential to bridge global communication gaps by addressing challenges like translating ancient texts, localizing mathematical understanding, and enhancing cultural sensitivity in lyric translation.

  • APRIL 29, 2025 / Cloud

    Announcing the general availability of Llama 4 as MaaS on Vertex AI

    Llama 4, Meta's advanced large language model, is now generally available as a fully managed API on Vertex AI, simplifying deployment and management. The Llama 3.3 70B managed API is also generally available, offering users greater flexibility.

  • JAN. 15, 2025 / AI

    Vertex AI RAG Engine: A developer's tool

    Vertex AI RAG Engine, a managed orchestration service, streamlines retrieving relevant information and feeding it to large language models, enabling developers to build robust generative AI apps whose responses are factually grounded.

  • DEC. 20, 2024 / Gemma

    Beyond English: How Gemma open models are bridging the language gap

    AI Singapore and INSAIT have used Gemma, a family of open models, to build LLMs tailored to the unique needs of their communities, demonstrating innovation and inclusivity in AI.

  • NOV. 8, 2024 / Gemini

    Gemini is now accessible from the OpenAI Library

    Developers can now access and build with the latest Gemini models through the OpenAI Library and REST API. Update three lines of code and get started; a minimal example appears after this list.

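For the JAX-on-TPU result above: a minimal sketch, not taken from the linked notebook, of the mesh-and-sharding pattern its summary describes. The axis name "data", the toy shapes, and the toy loss are illustrative assumptions; on a free Kaggle or Colab TPU, jax.devices() returns the available TPU cores.

```python
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# One-dimensional device mesh over all available accelerators
# (e.g. the 8 cores of a free-tier TPU).
mesh = Mesh(np.array(jax.devices()), axis_names=("data",))

# Replicate parameters on every device; split the batch along the "data" axis.
replicated = NamedSharding(mesh, P())
data_parallel = NamedSharding(mesh, P("data"))

params = jax.device_put({"w": jnp.ones((128, 128))}, replicated)  # toy parameters
batch = jax.device_put(jnp.ones((64, 128)), data_parallel)        # batch sharded across devices

@jax.jit
def loss_fn(params, batch):
    # Toy forward pass; jit compiles a data-parallel program from the input shardings.
    return jnp.mean((batch @ params["w"]) ** 2)

print(loss_fn(params, batch))
```

Because the shardings are attached to the inputs, jax.jit runs the same program on every device over its slice of the batch; the notebook applies the same pattern to the GPT2 parameters and training step.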
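For the LangExtract result: a minimal usage sketch following the pattern in the project's README. The medication example, attribute names, and model ID are placeholders, parameter names may differ between releases, and a Gemini API key is assumed to be configured in the environment.

```python
import langextract as lx

# One few-shot example defines the schema; extractions are grounded in exact source spans.
examples = [
    lx.data.ExampleData(
        text="Patient was given 250 mg amoxicillin twice daily.",
        extractions=[
            lx.data.Extraction(
                extraction_class="medication",
                extraction_text="amoxicillin",
                attributes={"dose": "250 mg", "frequency": "twice daily"},
            ),
        ],
    ),
]

result = lx.extract(
    text_or_documents="She took 400 mg ibuprofen for her headache.",
    prompt_description="Extract medications with dose and frequency, using exact source text.",
    examples=examples,
    model_id="gemini-2.5-flash",  # any supported Gemini model; key read from the environment
)

for extraction in result.extractions:
    print(extraction.extraction_class, extraction.extraction_text, extraction.attributes)
```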
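For the last result: the three lines to update in an existing OpenAI-library integration are the API key, the base URL, and the model name. A minimal sketch; the key and model ID below are placeholders.

```python
from openai import OpenAI

# Point the OpenAI Python library at the Gemini OpenAI-compatible endpoint.
client = OpenAI(
    api_key="YOUR_GEMINI_API_KEY",  # placeholder: use a Gemini API key
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain how AI works in one sentence."},
    ],
)
print(response.choices[0].message.content)
```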