AUG 29, 2024 / Gemma
The RecurrentGemma architecture showcases a hybrid model that mixes gated linear recurrences with local sliding-window attention, a valuable combination when you're concerned about exhausting your LLM's context window.
AUG 22, 2024 / Gemma
Gemma 2 is a new suite of open models that sets a new standard for performance and accessibility, outperforming popular models more than twice their size.
AUG 16, 2024 / Gemma
Use the Gemma language model to gauge customer sentiment, summarize conversations, and assist with crafting responses in near real time with minimal latency.
AUG 15, 2024 / Gemma
Learn more about the different variations of Gemma models, how they are designed for different use cases, and the core parameters of their architecture.
AUG 15, 2024 / Play
Google Play's Indie Games Fund is back in Latin America for 2024, offering up to $2 million in funding and hands-on support to small game studios.
AUG 14, 2024 / Gemma
Create a text-based adventure game using Gemma 2. Here are code snippets and tips for designing the game world, enhancing interactivity and replayability, and more.
AUG 13, 2024 / Mobile
XNNPACK, the default TensorFlow Lite CPU inference engine, has been updated to improve performance and memory management, allow cross-process collaboration, and simplify the user-facing API.
AUG 08, 2024 / Flutter
Purrfect Code is a Sokoban-style box-pushing puzzler built on Google tech (Flutter, Project IDX, Flame, Firebase) and designed to be playable in web browsers. Open the game in Project IDX, earn badges for each level, and showcase your badges on your Developer Profile.
AUG 08, 2024 / Gemini
Gemini 1.5 Flash is now available to developers at prices more than 70% lower. Set up billing for the Gemini API in Google AI Studio and access other new features like 1.5 Flash tuning.
JUL 31, 2024 / Gemma
ShieldGemma is a suite of safety content classifier models built on Gemma 2 and designed to keep users safe. Gemma Scope is a new model interpretability tool that offers unparalleled insight into our models' inner workings.