MAY 20, 2025 / AI Edge
LiteRT now boosts AI model performance and efficiency on mobile devices by making better use of GPUs and NPUs, requiring significantly less code and simplifying hardware accelerator selection for optimal on-device performance.
MARCH 31, 2025 / Gemini
The Gemini API and an ESP32 microcontroller simplify custom voice commands for IoT devices: speech recognition lets devices understand and react to custom commands, bridging the gap between the digital and physical worlds.
MARCH 13, 2025 / Gemini
The Google I/O 2025 puzzle used the Gemini API to generate dynamic riddles for bonus worlds, enhancing player engagement and scalability. Here's what our developers learned about using the Gemini API effectively, including creativity, design, and implementation strategies.
MARCH 12, 2025 / Gemma
Gemma 3 1B, a new small language model for mobile and web applications, is now available via Google AI Edge, offering increased efficiency, improved performance, and offline availability.
MARCH 6, 2025 / Gemini
This blog post introduces Gemini's code execution feature, which allows the AI model to generate and run Python code for tasks like solving equations, data analysis, and creating visualizations.
FEB. 13, 2025 / Gemma
A practical guide to constructing a Gemma 2-based Agentic AI system – a type of AI that can make its own decisions and use external tools to achieve goals – that can generate dynamic content for a fictional game world.
DEC. 18, 2024 / Gemini
Explore three Gemini starter apps that provide developers with production-ready tools to build AI-powered projects, with open-source functionality like spatial analysis and video interactions.
DEC. 18, 2024 / Gemini
Learn how to build Go applications using Project IDX, an AI-assisted workspace for full-stack app development.
NOV. 19, 2024 / Firebase
Explore Firebase's new AI-powered app development tools and resources from Firebase Demo Day 2024, including demos, documentation, and best practices.
NOV. 13, 2024 / Gemma
vLLM's continuous batching and Dataflow's model manager optimize LLM serving and simplify deployment, a powerful combination that lets developers build high-performance LLM inference pipelines more efficiently.
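The post above covers the vLLM and Dataflow integration itself; as a rough, self-contained illustration of the continuous-batching idea (a toy simulation, not vLLM code), the sketch below admits a new request into the batch as soon as a slot frees up, instead of waiting for every request in the batch to finish. The `continuous_batching` helper and its inputs are hypothetical names for this sketch.

```python
from collections import deque

def continuous_batching(requests, max_batch=2):
    """Toy simulation of continuous batching: each step generates one
    token per active request, and a finished request's slot is refilled
    from the queue immediately rather than at batch boundaries."""
    queue = deque(requests)   # (request_id, tokens_to_generate)
    active = {}               # request_id -> tokens remaining
    steps = 0
    completed = []
    while queue or active:
        # Admit new requests as soon as a slot is free (the key idea).
        while queue and len(active) < max_batch:
            rid, n = queue.popleft()
            active[rid] = n
        # One decode step: every active request emits one token.
        for rid in list(active):
            active[rid] -= 1
            if active[rid] == 0:
                del active[rid]
                completed.append(rid)
        steps += 1
    return steps, completed

steps, order = continuous_batching([("a", 1), ("b", 4), ("c", 2)])
# With static batching, {a, b} would occupy the batch for 4 steps and
# c would run alone for 2 more (6 total); here c starts as soon as a
# finishes, so all three complete in 4 steps.
```

Real engines like vLLM apply this at the GPU scheduler level (together with paged KV-cache memory management), which is what makes serving many variable-length requests efficient.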