Generative UI allows AI agents to generate tailored UI widgets in real time, matching the interface to the user’s specific interaction. But to move from demos to production, we need a clean separation of concerns. A2UI v0.9 is our answer: a framework-agnostic standard for declaring UI intent. It lets local or remote agents communicate with any client application in a common language, so your agent can generate your UI from your existing component catalog on any device.
A2UI is designed to work on web, mobile, and anywhere else your users are.
This release focuses on making it easier than ever to build agents and integrate with your existing frontends: it hardens our internal abstractions, simplifies streaming, and improves the developer experience.
Adding A2UI to any Python agent is now a simple pip install or uv add away (Go and Kotlin coming soon).
pip install a2ui-agent-sdk   # or: uv add a2ui-agent-sdk
Integrating A2UI into your existing agent is a straightforward 5-step process. Here’s the "Hello World" of A2UI integration:
# Step 1: Define your catalog (basic or bring your own) with optional examples
my_catalog = CatalogConfig.from_path(
    name="<MY_CATALOG_NAME>",
    catalog_path="file:///path/to/catalog.json",
    # Optional: help the LLM with "few-shot" learning
    examples_path="path/to/examples/folder/*.json",
)

# Step 2: Initialize the Schema Manager to manage A2UI spec versions
schema_manager = A2uiSchemaManager(
    version="0.9",
    catalogs=[my_catalog],
)

# Step 3: Generate the system prompt, which carries the A2UI instructions
system_instruction = schema_manager.generate_system_prompt(
    role_description="You are a helpful assistant great at generating UI...",
)

# Step 4: Initialize your LLM agent with the generated instructions
my_agent = AnyAgentFrameworkLLMAgent(instruction=system_instruction, ...)

# Step 5: Execute and stream the UI
def handle_turn(user_query):
    llm_response = my_agent.respond(user_query)
    # In your executor, the SDK helps parse, fix, and validate the LLM's JSON on the fly
    selected_catalog = schema_manager.get_selected_catalog()
    final_parts = parse_response_to_parts(llm_response, selected_catalog.validator)
    yield {
        "is_task_complete": True,
        "parts": final_parts,
    }
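The SDK's `parse_response_to_parts` does the real parsing and repair work against your catalog's validator. To make the idea concrete, here is a minimal, standard-library-only sketch of that validate-on-the-fly pattern. Everything in it (`CATALOG`, `parse_to_parts`, the part shape) is illustrative, not the a2ui-agent-sdk API.

```python
import json

# Illustrative stand-in, NOT the a2ui-agent-sdk API: a toy catalog of
# allowed component names, playing the role of the catalog validator.
CATALOG = {"Card", "Slider", "BarChart"}

def parse_to_parts(llm_response: str) -> list[dict]:
    """Parse the model's JSON output and keep only the parts that declare
    a component present in the catalog, mirroring the idea of validating
    the LLM's JSON on the fly before it reaches the renderer."""
    parts = json.loads(llm_response)
    return [p for p in parts if p.get("component") in CATALOG]

# A hypothetical model response: one valid part, one hallucinated component.
raw = '[{"component": "Card", "title": "Lab results"}, {"component": "Bogus"}]'
valid = parse_to_parts(raw)
```

In the real SDK this step also repairs malformed JSON and checks each part against the full A2UI schema; the sketch only shows where that check sits in the turn loop.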
Go Beyond the Basics
While the example above shows a simple static integration, the A2UI agent SDK is built for production-grade complexity and supports a range of advanced features out of the box.
Explore our Agent Samples to see these advanced features in action.
We are working on some neat things like better MCP Apps integrations, progressive disclosure “skills” for A2UI, human intent abstractions, PII support, and a lot more. Take a look at our updated roadmap and be sure to show us what you are working on.
A standard is only as good as the ecosystem around it, and the landscape is evolving rapidly.
We're seeing incredible implementations of A2UI across the industry. Here are a few recent sightings:
The GenUI Personal Health Companion is an open-source app designed to eliminate "data silos" and "navigation fatigue" by replacing static dashboards with a modular, AI-driven interface. Developed by Rebel App Studio, Codemate’s specialist Flutter team, this solution leverages real-time data orchestration to bridge the gap between fragmented medical records and wearable telemetry. Rather than forcing users to dig through sub-menus, the app utilizes a central LLM-powered chat that can dynamically generate UI widgets on the fly, surfacing critical lab results, vaccine expirations, or clinic locations based on immediate context. By grounding AI insights directly in the user’s unique health data, the app transforms passive health tracking into a proactive, intent-driven assistant built for the modern digital patient.
Dive deeper into how the Health Companion was built on Codemate’s blog—and explore the open-source demo on GitHub.
The Life Goal Simulator is an interactive demonstration of how Generative UI bridges the gap between consumer expectations and the static experiences currently offered by the financial services industry. Built by Very Good Ventures (VGV)—a Flutter and GenUI consultancy trusted by brands like Toyota and GEICO—the app moves beyond traditional, one-size-fits-all interfaces by putting the user’s life at the center of the experience. By selecting a persona and a goal, such as saving for retirement or a first home, users hand the wheel to Gemini, which utilizes the Flutter GenUI SDK to generate a native-feeling, real-time UI from a curated catalog of interactive widgets like sliders, bar charts, and multi-selects.
Check out the open-source code for this demo, and you can also see a live interactive demo of this experience.
Any agent that already speaks AG-UI can drive A2UI v0.9 on day zero. No custom integration is required. This works through AG-UI's middleware system: a small piece of code that plugs into your existing agent pipeline. It teaches your agent how to speak A2UI, wires the responses correctly, and handles streaming, converting the agent's output into components your UI can render immediately, using A2UI's built-in renderers or your own custom components.
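Conceptually, that middleware sits between the agent's event stream and the renderer: text passes through untouched, while UI payloads are converted into components the client can draw. The sketch below shows only that shape; the names and event format are hypothetical, not the actual AG-UI or A2UI middleware APIs.

```python
# Toy middleware shape, with a hypothetical event format: intercept an
# agent's output stream and translate UI payloads into render-ready
# components, passing everything else through unchanged.
def a2ui_middleware(events):
    for event in events:
        if event["type"] == "ui":
            # Hand the declared component to the client-side renderer.
            yield {"type": "render", "component": event["payload"]}
        else:
            yield event

# A hypothetical agent turn: one text event, one generated-UI event.
stream = [
    {"type": "text", "content": "Here is your chart:"},
    {"type": "ui", "payload": {"component": "BarChart", "data": [1, 2, 3]}},
]
out = list(a2ui_middleware(stream))
```

The real AG-UI middleware additionally handles streaming and wiring responses back to the agent; the point here is simply that it is a pass-through transform, which is why existing AG-UI agents need no custom integration.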
Get this starter template running on your machine:
npx copilotkit@latest create my-app --framework a2ui
Ready to unshackle your agents and let them drive your front end with whatever components you have?
Check out our new A2UI Theater for a replay, or dive into A2UI.org for docs, samples, and dev guides to start building flexible, portable generative UIs today.