Search for "What can I do with Google Gemini"

117 results

  • MARCH 14, 2024 / Android

    Tune in for Google I/O on May 14

    Register to stay informed about I/O and related events coming soon. The live-streamed keynotes start May 14 at 10 AM. Mark your calendar.

  • MARCH 10, 2025 / Cloud

    Developer-focused sessions and talks at Google Cloud Next '25

    Google Cloud Next 2025, happening April 9-11 in Las Vegas, will feature expanded developer content, interactive experiences, and opportunities to connect with peers and Google experts.

  • APRIL 23, 2025 / Mobile

    Get ready for Google I/O: Program lineup revealed

    Google I/O's agenda is live, with keynotes and sessions scheduled for May 20-21 and a focus on AI advancements, Android development, and web technologies. Register now to explore the full program, and join us during the event for livestreams, on-demand sessions, and codelabs.

  • DEC. 17, 2024 / Cloud

    Apigee API hub is now generally available

    Apigee API hub is a centralized repository for your entire API ecosystem, providing a single source of truth.

  • JULY 7, 2025 / Gemini

    Batch Mode in the Gemini API: Process more for less

    The new batch mode in the Gemini API is designed for high-throughput, non-latency-critical AI workloads. It simplifies large jobs by handling scheduling and processing for you, making tasks like data analysis, bulk content creation, and model evaluation more cost-effective and scalable, so developers can process large volumes of data efficiently (a brief usage sketch appears after this list).

  • MAY 20, 2025 / Gemini

    From idea to app: Introducing Stitch, a new way to design UIs

    Stitch, a new Google Labs experiment, uses AI to generate UI designs and frontend code from text prompts and images. It aims to streamline the design and development workflow with features like UI generation from natural language or images, rapid iteration, and seamless paste to Figma or export of front-end code.

  • MAY 20, 2025 / Gemini

    Fully Reimagined: AI-First Google Colab

    Google Colab is launching a reimagined, AI-first experience at Google I/O, featuring an agentic collaborator powered by Gemini 2.5 Flash with iterative querying capabilities, an upgraded Data Science Agent, effortless code transformation, and flexible interaction methods, all aimed at significantly improving coding workflows.

  • JULY 10, 2025 / Gemini

    Announcing GenAI Processors: Build powerful and flexible Gemini applications

    GenAI Processors is a new open-source Python library from Google DeepMind designed to simplify the development of AI applications, especially those that handle multimodal input and require real-time responsiveness. It provides a consistent "Processor" interface for every step, from input handling to model calls and output processing, so pipeline stages can be chained seamlessly and executed concurrently (a conceptual sketch appears after this list).

  • JUNE 17, 2024 / Gemini

    How It’s Made: AI Roadtrip, a Pixel Campaign Powered by Generative AI and Fans

    Best Phones Forever: AI Roadtrip is our first experiment in using generative AI to put fans in the driver's seat and bring characters to life.

  • JULY 23, 2025 / Firebase

    Unleashing new AI capabilities for popular frameworks in Firebase Studio

    New AI capabilities for popular frameworks in Firebase Studio include AI-optimized templates, streamlined integration with Firebase backend services, and the ability to fork workspaces for experimentation and collaboration, making AI-assisted app development more intuitive and faster for developers worldwide.

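For the batch mode item above, here is a minimal sketch of what submitting a batch job can look like. It assumes the google-genai Python SDK (`pip install google-genai`) and a `GEMINI_API_KEY` in the environment; the `batches.create` parameters and the inline-request format are recalled from the Batch Mode announcement and should be checked against the current Gemini API reference.

```python
# Minimal sketch, not an official example: assumes the google-genai SDK and
# GEMINI_API_KEY set in the environment. Field names (`src`, `config`, the
# model string) may differ from the current Gemini API docs.
from google import genai

client = genai.Client()

# A few inline requests; large jobs can instead point `src` at an uploaded
# JSONL file of requests.
inline_requests = [
    {"contents": [{"role": "user",
                   "parts": [{"text": "Summarize the benefits of batch processing."}]}]},
    {"contents": [{"role": "user",
                   "parts": [{"text": "Draft alt text for a product photo of a desk lamp."}]}]},
]

batch_job = client.batches.create(
    model="models/gemini-2.5-flash",
    src=inline_requests,
    config={"display_name": "inline-batch-demo"},
)

# Batch jobs run asynchronously; poll for completion rather than waiting inline.
job = client.batches.get(name=batch_job.name)
print(job.name, job.state)
```

The trade-off batch mode makes is latency for cost: jobs are queued and processed asynchronously, which is why the sketch polls the job state instead of waiting on an immediate response.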
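
For the GenAI Processors item above, the sketch below illustrates only the idea the summary describes: every pipeline step shares one stream-in, stream-out "Processor" interface, so steps can be chained. The class and method names here are hypothetical and are not the genai-processors API.

```python
# Hypothetical illustration of the "consistent Processor interface" idea;
# these classes are NOT the genai-processors API.
import asyncio
from typing import AsyncIterator


class Processor:
    """One pipeline step: consumes an async stream of parts and yields parts."""

    async def __call__(self, parts: AsyncIterator[str]) -> AsyncIterator[str]:
        async for part in parts:  # identity processor by default
            yield part

    def __add__(self, other: "Processor") -> "Processor":
        first, second = self, other

        class Chained(Processor):
            async def __call__(self, parts):
                # Chaining: the output stream of `first` feeds `second`.
                async for part in second(first(parts)):
                    yield part

        return Chained()


class Uppercase(Processor):
    async def __call__(self, parts):
        async for part in parts:
            yield part.upper()


class Prefix(Processor):
    async def __call__(self, parts):
        async for part in parts:
            yield f"model> {part}"


async def main() -> None:
    async def source():
        for text in ("hello", "gemini"):
            yield text

    pipeline = Uppercase() + Prefix()  # every step shares the same interface
    async for out in pipeline(source()):
        print(out)


asyncio.run(main())
```

Because each stage consumes and produces the same kind of async stream, composition stays uniform from input handling to model calls to output processing; the real library applies the same idea to streams of multimodal parts.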