OCT 03, 2024 / Gemma
Google is building AI models, focusing on Gemma, to bridge communication gaps across languages.
OCT 03, 2024 / Gemini
Google is rolling out Gemini 1.5 Flash-8B to general availability, its smallest production model, with state-of-the-art performance for its size.
OCT 02, 2024 / Gemma
This season showcases new applications of Gemma, including a personal AI code assistant, projects for non-English tasks, and business email processing.
SEP 05, 2024 / Gemma
PaliGemma, a lightweight open vision-language model (VLM), takes both image and text inputs and produces a text response, adding vision capabilities to the base Gemma model.
SEP 04, 2024 / Google AI Edge
TensorFlow Lite, now named LiteRT, is still the same high-performance runtime for on-device AI, but with an expanded vision to support models authored in PyTorch, JAX, and Keras.
SEP 03, 2024 / DeepMind
Controlled Generation for Gemini 1.5 Pro and Flash ensures AI-generated responses adhere to a defined schema, improving the handoff from data science teams to developers and simplifying the integration of AI output.
AUG 29, 2024 / Gemma
The RecurrentGemma architecture showcases a hybrid model that mixes gated linear recurrences with local sliding-window attention, a valuable combination when you're concerned about exhausting your LLM's context window.
AUG 22, 2024 / Gemma
Gemma 2 is a new suite of open models that sets a new standard for performance and accessibility, with models that outperform popular alternatives more than twice their size.
AUG 16, 2024 / Gemma
Use the Gemma language model to gauge customer sentiment, summarize conversations, and assist with crafting responses in near real time with minimal latency.
AUG 15, 2024 / Gemma
Learn more about the different variations of Gemma models, how they are designed for different use cases, and the core parameters of their architecture.