JAN 15, 2025 / AI
Vertex AI RAG Engine, a managed orchestration service, streamlines retrieving relevant information and feeding it to Large Language Models, enabling developers to build robust generative AI apps whose responses are factually grounded.
DEC 23, 2024 / AI
The Multimodal Live API for Gemini 2.0 enables real-time multimodal interactions between humans and computers, and can be used to build real-time virtual assistants and adaptive educational tools.
DEC 20, 2024 / Gemma
AI Singapore and INSAIT teams have leveraged Gemma, a family of open models, to create LLMs tailored to the unique needs of their communities, demonstrating innovation and inclusivity in AI.
DEC 18, 2024 / Gemini
Explore three open-source Gemini starter apps that give developers production-ready tools for building AI-powered projects, with capabilities like spatial analysis and video interaction.
DEC 18, 2024 / Gemini
Learn how to build Go applications using Project IDX, an AI-assisted workspace for full-stack app development.
DEC 17, 2024 / Cloud
Apigee API hub is a centralized repository for your entire API ecosystem, providing a single source of truth.
DEC 09, 2024 / Web
A free Coursera course on quantum error correction, developed by Google Quantum AI, explains the importance of error correction in quantum computing and provides an overview of quantum errors.
NOV 21, 2024 / Mobile
The winners of the Gemini API Developer Competition showcased the potential of the Gemini API in creating impactful solutions, from AI-powered personal assistants to tools for accessibility and creativity.
NOV 19, 2024 / Firebase
Explore Firebase's new AI-powered app development tools and resources from Firebase Demo Day 2024, including demos, documentation, and best practices.
NOV 13, 2024 / Gemma
vLLM's continuous batching and Dataflow's model manager optimize LLM serving and simplify deployment, giving developers a powerful combination for building high-performance LLM inference pipelines more efficiently.