JAN 15, 2025 / AI
Vertex AI RAG Engine, a managed orchestration service, streamlines retrieving relevant information and feeding it to large language models, enabling developers to build robust generative AI apps whose responses are grounded in factual sources.
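The retrieve-then-ground pattern such a service automates can be sketched in plain Python. This is a toy illustration only: the keyword-overlap retriever and function names are assumptions, not the Vertex AI RAG Engine API, which additionally handles indexing, embeddings, and prompt assembly.

```python
def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    """Toy retriever: score documents by keyword overlap with the query,
    return the top k. A real RAG service would use vector embeddings."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]


def build_grounded_prompt(query: str, contexts: list[str]) -> str:
    """Feed the retrieved passages to the model alongside the question,
    so the generated answer stays grounded in the supplied sources."""
    context_block = "\n".join(f"- {c}" for c in contexts)
    return f"Answer using only this context:\n{context_block}\n\nQuestion: {query}"


corpus = {
    "doc1": "A RAG engine retrieves relevant passages before generation.",
    "doc2": "Matter improves smart home device interoperability.",
}
contexts = retrieve("RAG retrieval of passages", corpus)
prompt = build_grounded_prompt("How does RAG retrieval work?", contexts)
```

The prompt now carries the retrieved passage, which is what keeps the model's answer tied to the source documents rather than its parametric memory.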
JAN 07, 2025 / Matter
The public beta launch of Home APIs for Android allows developers to create better smart home experiences. This launch emphasizes investments in Matter to improve device connectivity and interoperability, and increase smart home accessibility.
JAN 07, 2025 / Matter
The Google Home APIs are now in public developer beta for Android, allowing developers to build innovative smart home experiences for over 600M devices using Google's hubs, Matter infrastructure, and automation engine.
DEC 18, 2024 / Gemini
Learn how to build Go applications using Project IDX, an AI-assisted workspace for full-stack app development.
DEC 17, 2024 / Cloud
Apigee API hub is a centralized repository for your entire API ecosystem, providing a single source of truth.
DEC 12, 2024 / Cloud
Google Cloud Next 2025, happening April 9-11 in Las Vegas, will feature expanded developer content, interactive experiences, and opportunities to connect with peers and Google experts.
NOV 21, 2024 / Mobile
The winners of the Gemini API Developer Competition showcased the potential of the Gemini API in creating impactful solutions, from AI-powered personal assistants to tools for accessibility and creativity.
NOV 19, 2024 / Firebase
Explore Firebase's new AI-powered app development tools and resources, including demos, documentation, and best practices at Firebase Demo Day 2024.
NOV 14, 2024 / Gemini
The integration of Gemini 1.5 models with Sublayer's Ruby-based AI agent framework enables developer teams to automate their documentation process, streamline workflows, and build AI-driven applications.
NOV 13, 2024 / Gemma
vLLM's continuous batching and Dataflow's model manager optimize LLM serving and simplify deployment, a powerful combination that lets developers build high-performance LLM inference pipelines more efficiently.
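The scheduling idea behind continuous batching can be shown with a small simulation: rather than waiting for an entire batch to finish, a slot freed by a completed sequence is refilled immediately from the queue. This is a minimal sketch of the concept, not vLLM's implementation; the function name and numbers are illustrative.

```python
from collections import deque


def continuous_batching(request_lengths: list[int], max_batch: int) -> int:
    """Simulate continuous batching; return decode steps needed to serve
    all requests, where each request needs `length` generated tokens."""
    waiting = deque(request_lengths)  # tokens still to generate, per queued request
    active: list[int] = []            # remaining tokens for in-flight requests
    steps = 0
    while waiting or active:
        # Refill any free batch slots from the queue before each decode step.
        while waiting and len(active) < max_batch:
            active.append(waiting.popleft())
        # One decode step emits one token for every active sequence;
        # finished sequences drop out, freeing their slot immediately.
        active = [r - 1 for r in active if r - 1 > 0]
        steps += 1
    return steps
```

For requests of lengths [3, 1, 2] with a batch size of 2, the simulation finishes in 3 steps: the short request's slot is reused as soon as it completes. A static scheduler that only admits work between full batches would idle that slot until the longest sequence in the batch finished.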