New for I/O: Assistant tools and features for Android apps and Smart Displays

May 18, 2021


Posted by Rebecca Nathenson, Director of Product for the Google Assistant Developer Platform

Today at I/O, we shared some exciting new product announcements to help you more easily bring Google Assistant to your Android apps and create more engaging content on smart displays.

Assistant development made easy with new Android APIs

App Actions helps you easily bring Google Assistant to your Android app so users can complete queries of all kinds, from booking a ride to posting a message on social media. Companies such as MyFitnessPal and Twitter are already using App Actions to help their users get things done, just by using their voice. You can enable App Actions in Android Studio by mapping built-in intents to specific features and experiences within your app. Here are new ways you can help users easily navigate your content through voice queries and proactive suggestions.

Better support for Assistant built-in intents with Capabilities

Capabilities is a new framework API available in beta today that lets you declare support for common tasks defined by built-in intents. By leveraging pre-built requests from our catalog of intents, you can offer users ways to jump to specific activities within your app.

For example, the Yahoo Finance app uses Capabilities to let users jump directly to the Verizon stock page just by saying “Hey Google, show me Verizon’s stock on Yahoo Finance.” Similarly, Snapchat users can use their voice to add filters and send them to friends: “Hey Google, send a snap with my Curry sneakers.”
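To give a rough sense of the fulfillment side, here is a minimal Kotlin sketch of an activity that a capability could point at; Assistant typically launches it with the matched parameters encoded in a deep link. The activity name, the example.com URL, and the ticker query parameter are illustrative assumptions, and the actual parameter mapping depends on the built-in intent you declare in shortcuts.xml.

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

// Hypothetical activity that a capability declaration points at.
// Assistant could fulfill "show me Verizon's stock" by launching it with a
// deep link such as https://example.com/stock?ticker=VZ (made-up URL).
class StockQuoteActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // Read the parameter Assistant extracted from the user's query.
        val ticker = intent?.data?.getQueryParameter("ticker")
        if (ticker != null) {
            showQuote(ticker)   // jump straight to the requested stock page
        } else {
            showDefaultScreen() // regular launch without a voice query
        }
    }

    private fun showQuote(ticker: String) { /* render the stock page */ }
    private fun showDefaultScreen() { /* default launch flow */ }
}
```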

Improved user discoverability with Shortcuts in Android 12

App shortcuts are already a popular way to automate the most common tasks on Android. Thanks to the new Shortcuts APIs in Android 12, it’s now easier for users to discover all the Assistant queries your app supports. If you build an Android shortcut, it will automatically show up in the Assistant Shortcuts gallery when users say “Hey Google, shortcuts,” so they can choose to set up a personal voice command for your app.

3 phones showing shortcuts from Assistant

Google Assistant can also suggest relevant shortcuts to help drive traffic to your app. For example, when using the eBay app, people will see a suggested Google Assistant Shortcut appear on the screen and have the option to create a shortcut for "show my bids."

We also introduced the Google Shortcuts Integration library, which makes shortcuts pushed through the Shortcuts Jetpack module available to Assistant for related voice queries. With it, Google Assistant can suggest relevant shortcuts to users and help drive traffic to your app.
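As a concrete illustration, here is a minimal Kotlin sketch of pushing a dynamic shortcut with the Shortcuts Jetpack library, assuming the Google Shortcuts Integration library (androidx.core:core-google-shortcuts) is added as a dependency so pushed shortcuts become visible to Assistant. The shortcut id, label, deep link, and the GET_THING capability binding are illustrative assumptions, not eBay's actual implementation.

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri
import androidx.core.content.pm.ShortcutInfoCompat
import androidx.core.content.pm.ShortcutManagerCompat

// Push a dynamic shortcut after the user views their bids, so Assistant
// can surface it as a suggestion and use it for related voice queries.
// The id, label, deep link, and capability binding are illustrative.
fun pushBidsShortcut(context: Context) {
    val shortcut = ShortcutInfoCompat.Builder(context, "show_my_bids")
        .setShortLabel("Show my bids")
        .setIntent(
            Intent(Intent.ACTION_VIEW, Uri.parse("https://example.com/bids"))
        )
        // Tie the shortcut to a built-in intent (assumed GET_THING binding)
        // so Assistant can match it to queries like "show my bids".
        .addCapabilityBinding(
            "actions.intent.GET_THING", "thing.name", listOf("my bids")
        )
        .build()

    ShortcutManagerCompat.pushDynamicShortcut(context, shortcut)
}
```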

Get immediate answers and updates right from Assistant using Widgets, coming soon

Improvements in Android 12 also make it easier to surface glanceable content in widgets by mapping them to specific built-in intents using the Capabilities API. We're also looking at how to bring driving-optimized widgets to Android Auto in the future. The Assistant integration will enable one-shot answers, quick updates, and multi-step interactions with the same widget.

For example, with Dunkin’s widget implementation, you can say “Hey Google, reorder from Dunkin’” to select from previous drinks and place the order. Strava’s widget helps users track how many miles they ran in a week: say “Hey Google, check my miles on Strava,” and the summary will show up right on the lock screen.

Strava widget showing how many miles ran in a week
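Below is a minimal Kotlin sketch of the widget side of such an experience: a standard AppWidgetProvider rendering a weekly-miles figure into RemoteViews. The layout, view ids, and the value shown are placeholders, and the Assistant mapping itself is declared through the Capabilities API as described above rather than in this code.

```kotlin
import android.appwidget.AppWidgetManager
import android.appwidget.AppWidgetProvider
import android.content.Context
import android.widget.RemoteViews

// Standard app widget that shows a glanceable weekly-miles summary.
// Layout and view ids (R.layout.widget_weekly_miles, R.id.miles_text)
// are placeholders for illustration.
class WeeklyMilesWidget : AppWidgetProvider() {

    override fun onUpdate(
        context: Context,
        appWidgetManager: AppWidgetManager,
        appWidgetIds: IntArray
    ) {
        appWidgetIds.forEach { widgetId ->
            val views = RemoteViews(context.packageName, R.layout.widget_weekly_miles)
            // In a real app this value would come from your data layer.
            views.setTextViewText(R.id.miles_text, "12.4 miles this week")
            appWidgetManager.updateAppWidget(widgetId, views)
        }
    }
}
```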

Build high quality Conversational Actions for smart displays

Last year, we introduced a number of improvements to the Assistant platform for smart displays, such as Actions Builder, Actions SDK and new built-in intents, to improve the experience for both developers and users. Here are more improvements rolling out soon to make building Conversational Actions for smart displays even better.

New features to improve the developer experience

Interactive Canvas helps you build touch- and voice-controlled games and storytelling experiences for the Assistant using web technologies like HTML, CSS, and JavaScript. Companies such as CoolGames, Zynga, and GC Turbo have already used Canvas to build games for smart displays.

Since launch, we've heard consistent feedback from developers that implementing core logic directly in web code would be simpler and faster. To enable this, the Interactive Canvas API will soon provide access to text-to-speech (TTS), natural language understanding (NLU), and storage APIs that developers can trigger from client-side code. These APIs give experienced web developers a familiar development flow and enable more responsive Canvas actions.

We’re also giving you a wider set of options for how to release your actions. Coming soon, in the Actions Console, you will be able to manage your releases by launching in stages. For example, you can launch to one country first and then expand to more later, or you can launch to a small percentage of users and gradually roll out over time.

Improving the user experience on smart displays

You'll also see improvements that will enhance visual experiences on the smart display. For example, you can now remove the persistent header, which lets you use the full real estate of the device and provide users with fully immersive experiences.

Before Interactive Canvas brought customized touch interfaces to smart displays, we provided a simple way to stop TTS from playing: tapping anywhere on the screen of the device. However, with more multi-modal experiences coming to smart displays, there are use cases where it is important to keep playing TTS while the user touches the display. Developers will soon have the option to enable persistent TTS for their actions.

We’ve also added support for long-form media sessions with updates to the Media API so you can start playback from a specific moment, resume where a previous session stopped, and adapt conversational responses based on media playback context.

Easier transactions for your voice experiences

We know how important it is to have the tools you need to build a successful business on our platform. In October of last year, we made a commitment to make it easier for you to add seamless voice-based and display-based monetization capabilities to your experiences. On-device CVC entry and credit card entry will soon be available on smart displays. Both of these features make on-device transactions much easier, reducing the need to redirect users to their mobile devices.

We hope you are able to leverage all these new features to build engaging experiences and reach your users easily, both on mobile and at home. Check out our technical sessions, workshops and more from Google I/O on YouTube and get started with App Actions and Conversational Actions today!