Search

260 results

  • AUG. 14, 2025 / Gemma

    Introducing Gemma 3 270M: The compact model for hyper-efficient AI

    Google's new Gemma 3 270M is a compact, 270-million-parameter model offering energy efficiency, production-ready quantization, and strong instruction-following, making it a powerful solution for task-specific fine-tuning in on-device and research settings.
  • AUG. 13, 2025 / Gemini

    Gemini CLI + VS Code: Native diffing and context-aware workflows

    The latest Gemini CLI update provides deep IDE integration with VS Code, enabling intelligent, context-aware suggestions and native in-editor diffing, so developers can review and modify proposed changes directly within the diff view for a more efficient workflow.
  • AUG. 12, 2025 / Kaggle

    Train a GPT2 model with JAX on TPU for free

    Build and train a GPT2 model from scratch using JAX on Google TPUs, with a complete Python notebook for free-tier Colab or Kaggle. Learn how to define a hardware mesh, partition model parameters and input data for data parallelism, and optimize the model training process.
  • JULY 30, 2025 / Gemini

    Introducing LangExtract: A Gemini powered information extraction library

    LangExtract is a new open-source Python library powered by Gemini models for extracting structured information from unstructured text, offering precise source grounding, reliable structured outputs using controlled generation, optimized long-context extraction, interactive visualization, and flexible LLM backend support.
  • JULY 30, 2025 / Gemini

    Gemini Embedding: Powering RAG and context engineering

    The Gemini Embedding model powers retrieval-augmented generation (RAG) and context engineering; organizations across industries are adopting it to build context-aware systems, with significant improvements in performance, accuracy, and efficiency.
  • JULY 23, 2025 / Firebase

    Unleashing new AI capabilities for popular frameworks in Firebase Studio

    New AI capabilities for popular frameworks in Firebase Studio include AI-optimized templates, streamlined integration with Firebase backend services, and the ability to fork workspaces for experimentation and collaboration, making AI-assisted app development more intuitive and faster for developers worldwide.
  • JULY 22, 2025 / Gemini

    Gemini 2.5 Flash-Lite is now stable and generally available

    Gemini 2.5 Flash-Lite, previously in preview, is now stable and generally available. This cost-efficient model is ~1.5x faster than 2.0 Flash-Lite and 2.0 Flash, offers high quality, and includes 2.5 family features like a 1 million-token context window and multimodality.
  • JULY 21, 2025 / Gemini

    Conversational image segmentation with Gemini 2.5

    Gemini's advanced capability for conversational image segmentation allows intuitive interaction with visual data by understanding complex phrases, conditional logic, and abstract concepts, streamlining the developer experience and opening doors for new applications in media editing, safety monitoring, and damage assessment.
  • JULY 17, 2025 / Gemini

    Build with Veo 3, now available in the Gemini API

    Veo 3, Google’s latest AI video generation model, is now available in paid preview via the Gemini API and Google AI Studio. Unveiled at Google I/O 2025, Veo 3 can generate both video and synchronized audio, including dialogue, background sounds, and even animal noises. This model delivers realistic visuals, natural lighting, and physics, with accurate lip syncing and sound that matches on-screen action.
  • JULY 16, 2025 / Cloud

    Stanford’s Marin foundation model: The first fully open model developed using JAX

    The Marin project aims to expand the definition of "open" in AI to include the entire scientific process, not just the model itself, by making the complete development journey accessible and reproducible. This effort, powered by the JAX framework and its Levanter tool, lets researchers deeply scrutinize, trust, and build upon foundation models, fostering a more transparent future for AI research.
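The data-parallel recipe described in the JAX-on-TPU item above (replicate model parameters, shard the input batch across devices, all-reduce the gradients) can be sketched in a few lines of JAX. This is a minimal illustration only, assuming a toy linear model in place of GPT2 and using `jax.pmap` as the simplest data-parallel primitive (the notebook itself defines an explicit hardware mesh); all names here are hypothetical, not taken from the notebook.

```python
from functools import partial

import jax
import jax.numpy as jnp
import numpy as np

# Toy stand-in for the model: a single linear layer. The data-parallel
# mechanics (replicate params, shard data, all-reduce grads) are the
# same as for a full GPT2.
def init_params(key, d_in, d_out):
    return {
        "w": jax.random.normal(key, (d_in, d_out)) * 0.02,
        "b": jnp.zeros((d_out,)),
    }

def loss_fn(params, x, y):
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

# One data-parallel step: each device computes gradients on its own
# shard of the batch, gradients are averaged across the "batch" axis
# with an all-reduce (lax.pmean), then every replica applies the same
# SGD update, keeping parameters in sync.
@partial(jax.pmap, axis_name="batch")
def train_step(params, x, y):
    loss, grads = jax.value_and_grad(loss_fn)(params, x, y)
    grads = jax.lax.pmean(grads, axis_name="batch")
    new_params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)
    return new_params, loss

n_dev = jax.local_device_count()  # e.g. 8 on a free-tier TPU; 1 on CPU
params = init_params(jax.random.PRNGKey(0), 8, 1)
# Replicate parameters onto every device (leading axis = device axis).
params = jax.tree_util.tree_map(lambda p: jnp.stack([p] * n_dev), params)

# Synthetic regression data, sharded so each device sees 32 // n_dev rows.
rng = np.random.default_rng(0)
x = rng.normal(size=(n_dev, 32 // n_dev, 8)).astype(np.float32)
y = x.sum(axis=-1, keepdims=True).astype(np.float32)

losses = []
for _ in range(100):
    params, loss = train_step(params, x, y)
    losses.append(float(loss[0]))  # per-device losses are identical post-pmean
```

Because the gradient all-reduce runs before the update, every replica holds identical parameters after each step; scaling to more devices only changes how the batch is sharded, not the training logic.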