Search

280 results

  • May 20, 2025 / AI Edge

    On-device small language models with multimodality, RAG, and Function Calling

    Google AI Edge advancements include new Gemma 3 models, broader model support, and features like on-device RAG and Function Calling to enhance on-device generative AI capabilities.

  • May 20, 2025 / Cloud

    What's new with Agents: ADK, Agent Engine, and A2A Enhancements

    Updates to Google's agent technologies include the Agent Development Kit (ADK) with new Python and Java versions, an improved Agent Engine UI for management, and enhancements to the Agent2Agent (A2A) protocol for better agent communication and security.

  • May 20, 2025 / Gemma

    Announcing Gemma 3n preview: powerful, efficient, mobile-first AI

    Gemma 3n is a cutting-edge open model designed for fast, multimodal AI on devices, featuring optimized performance, unique 2-in-1 model flexibility, and expanded multimodal understanding with audio. It empowers developers to build live, interactive applications and sophisticated audio-centric experiences.

  • May 20, 2025 / Gemini

    Building agents with Google Gemini and open source frameworks

    Google Gemini models offer several advantages when building AI agents, such as advanced reasoning, function calling, multimodality, and large context window capabilities. Open-source frameworks like LangGraph, CrewAI, LlamaIndex, and Composio can be used with Gemini for agent development.

  • May 20, 2025 / Android

    What you should know from the Google I/O 2025 Developer keynote

    Top announcements from Google I/O 2025 focus on building across Google platforms and innovating with AI models from Google DeepMind, with an emphasis on new tools, APIs, and features designed to enhance developer productivity and create AI-powered experiences with Gemini, Android, Firebase, and the web.

  • May 20, 2025 / Gemini

    From idea to app: Introducing Stitch, a new way to design UIs

    Stitch, a new Google Labs experiment, uses AI to generate UI designs and frontend code from text prompts and images, aiming to streamline the design and development workflow. It offers UI generation from natural language or images, rapid iteration, and seamless pasting into Figma alongside front-end code output.

  • May 13, 2025 / TensorFlow

    Build and train a Recommender System in 10 minutes using Keras and JAX

    Keras Recommenders (KerasRS) is a newly released library designed to help developers build recommender systems using APIs with ranking and retrieval building blocks. The library can be installed via pip and supports JAX, TensorFlow, or PyTorch backends.

  • May 12, 2025 / Cloud

    Google Cloud announces the general availability of the APIM Operator for Apigee

    The Apigee APIM Operator is now generally available, bringing API management and gateway capabilities to GKE environments through Kubernetes-style YAML configuration. It gives developers local tooling, reduces friction, and provides policy management comparable to Apigee Hybrid.

  • May 9, 2025 / Cloud

    Google AI for Game Developers

    Revisit this year's Game Developers Conference (GDC) announcements and explore how Gemma and Gemini models, through the launch of Gemma 3, a Unity plugin, and their use in sample games, help build AI experiences in games and scale games with generative AI on Google Cloud.

  • May 9, 2025 / DeepMind

    Advancing the frontier of video understanding with Gemini 2.5

    Gemini 2.5 marks a major leap in video understanding: it not only performs strongly on several key benchmarks but also seamlessly combines audio-visual information with code and other data formats.
