Posted by Payam Shodjai, Director of Product Management, Google Assistant
Today at VOICE Global, we shared our vision for Google Assistant to truly be the best way to get things done - and the important role that developers play in that vision, especially as voice experiences continue to evolve.
Every month, Google Assistant helps more than 500 million people in over 90 countries, across more than 30 languages, get things done at home and on the go. At the heart of this growth is a simple insight: people want a more natural way to get what they need. That’s why we’ve invested heavily in making sure Google Assistant works seamlessly across devices and services and offers quick, accurate help.
Over the last few months, we’ve seen people’s needs shifting, and this is reflected in how Google Assistant is being used and the role that it can play to help navigate these changes. For example, to help people get accurate information on Search and Maps - like modified store hours or pick-up and delivery options - we have been using Duplex conversational technology to contact businesses and update over half a million business listings.
We’ve also been working with our partners to bring great educational experiences into the home, so that families can continue learning in a communal setting. Bamboo Learning is bringing its voice-forward education platform to Google Assistant, with fun new ways to learn history, math, and reading. Our hand-washing songs also continue to be popular; they leverage WaveNet’s natural expressiveness, which let us train Google Assistant to sing in a number of generated voices that users can pick from.
Great experiences are at the core of what makes Google Assistant truly helpful. To help existing and aspiring developers build new experiences with ease, we are making some major improvements to our core platform and development tools. Rather than needing to hop back and forth between Actions Console and Dialogflow to build an Action, wouldn’t it be great if there were one integrated platform for building on Google Assistant?
Starting today, we’re releasing Actions Builder, a new web-based IDE that provides a graphical view of the entire conversation flow. It lets you manage Natural Language Understanding (NLU) training data and offers advanced debugging tools. And it’s fully integrated into the Actions Console, so you can now build, debug, test, release, and analyze your Actions - all in one place.
If you prefer to work in your own tools, you can use the updated Actions SDK. For the first time, you’ll have a file-based representation of your Action and the ability to use a local IDE. The SDK not only enables local authoring of NLU and conversation schemas, but also allows bulk import and export of training data to improve conversation quality. The Actions SDK is accompanied by a command-line interface, so you can build and manage an Action fully in code using your favorite source control and continuous integration tools.
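To make that concrete, here’s a minimal fulfillment sketch using the @assistant/conversation Node.js library that pairs with the new tooling. The "greeting" handler name, the welcome prompt, and the Express hosting are illustrative assumptions rather than code from a shipped sample:

```typescript
import { conversation } from '@assistant/conversation';
import express from 'express';

// Create the fulfillment app; each handler name must match a handler
// referenced from your Action's scene definitions.
const app = conversation();

app.handle('greeting', (conv) => {
  conv.add('Welcome! What would you like to do today?');
});

// The app object is a standard Node request handler, so it can be
// served from Express (shown here), Cloud Functions, or other hosts.
express().use(express.json(), app).listen(3000);
```

Point your Action’s webhook configuration at wherever this is deployed, and the runtime will route matched intents to the named handlers.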
With these two releases, we are also introducing a new conversation model and improvements to the runtime engine. It’s now easier to design and build conversations, and users will get faster, more accurate responses. We’re very excited about this suite of products, which replaces Dialogflow as the preferred way to develop conversational Actions on Google Assistant.
Based on feedback from developers, we’re also adding new functionality for building more interactive experiences on Google Assistant: Home Storage, an updated Media API, and Continuous Match Mode.
One of the exciting things about speakers and smart displays is that they’re communal. Home Storage is a new feature that provides communal storage for devices connected to the same home graph, letting developers save context shared by everyone in the household, such as the last saved point from a puzzle game.
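As a rough sketch of how that could look in fulfillment, the @assistant/conversation library exposes home storage as conv.home.params; the handler names and the lastSavedLevel key below are hypothetical:

```typescript
import { conversation } from '@assistant/conversation';

const app = conversation();

app.handle('save_progress', (conv) => {
  // Values written to home storage are shared by every user whose
  // devices are on the same home graph.
  conv.home.params.lastSavedLevel = 7;
  conv.add('Checkpoint saved for everyone in your home.');
});

app.handle('resume_progress', (conv) => {
  // Fall back to level 1 if no one in the household has played yet.
  const level = conv.home.params.lastSavedLevel ?? 1;
  conv.add(`Picking up the puzzle from level ${level}.`);
});
```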
Our updated Media API now supports longer-form media sessions and lets users resume playback of content across surfaces. For example, playback can start from a specific moment or pick up where the user dropped out of a previous session.
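Here’s an illustrative media response built with the Media helper from @assistant/conversation; the handler name, URL, and metadata are made up, and treat the exact field values as a sketch rather than a reference:

```typescript
import { conversation, Media } from '@assistant/conversation';

const app = conversation();

app.handle('play_episode', (conv) => {
  conv.add('Resuming where you left off.');
  conv.add(new Media({
    mediaType: 'AUDIO',
    // Start two minutes in; a progress value saved from a previous
    // session could be substituted here to resume across surfaces.
    startOffset: '120s',
    optionalMediaControls: ['PAUSED', 'STOPPED'],
    mediaObjects: [{
      name: 'Episode 12',
      description: 'A longer-form listening session',
      url: 'https://example.com/audio/episode12.mp3',
    }],
  }));
});
```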
Sometimes you want to build experiences that let users speak to your Action more naturally, without waiting for a change in mic state. Rolling out in the next few months, Continuous Match Mode allows Assistant to respond immediately to a user’s speech by recognizing defined words and phrases that you set, making for more fluid experiences. This is done transparently: before the mic opens, Assistant announces that it will stay open temporarily, so users know they can speak freely without waiting for additional prompts. For example, CoolGames is launching a game in a few weeks called “Guess The Drawing” that uses Continuous Match Mode to let users keep guessing what the drawing is until they get it right. The game is also built with Interactive Canvas for a more visual, immersive experience on smart displays.
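Since Continuous Match Mode itself is still rolling out, we won’t show its configuration here, but the Interactive Canvas side of such a game looks roughly like the sketch below, using the Canvas response from @assistant/conversation; the handler name and web app URL are hypothetical:

```typescript
import { conversation, Canvas } from '@assistant/conversation';

const app = conversation();

app.handle('start_game', (conv) => {
  conv.add('Start guessing as soon as the drawing appears!');
  // The URL points at a web app you host; it renders the game and
  // talks to Assistant through the Interactive Canvas JavaScript API.
  conv.add(new Canvas({
    url: 'https://example.com/guess-the-drawing',
  }));
});
```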
In addition to making it easy for you to build new experiences for Google Assistant, we also want to bring the depth of great web content to smart displays through the simple, robust AMP framework. AMP lets you create compelling, smooth websites with a great user experience. AMP-compliant articles are coming to smart displays later this summer, starting with News. Stay tuned for more updates in the coming months as we expand to enable more web content categories for smart displays.
With these tools, we want to empower developers to build helpful experiences of the future with Google Assistant, enabling people to get what they need more simply, while giving them time back to focus on what matters most.