Posted by Zachary Senzer, Product Manager
A couple of months ago at Google I/O, we announced a redesigned Actions console that makes developing your Actions easier than ever before. The new Actions console offers a more seamless development experience that streamlines your workflow from onboarding through deployment, along with tailored analytics to help you manage your Actions post-launch. Simply select your use case during onboarding and the Actions console will guide you through the different stages of development.
Here are 5 tips to help you create the best Actions for your content using our new console.
Part of what makes the Actions on Google ecosystem so special is the vast array of devices that people can use to interact with your Actions. Some of these devices, including phones and our new Smart Displays, allow users to have rich visual interactions with your content. To help your Actions stand out, you can customize how these visual experiences appear to users. Simply go to theme customization under the "Build" tab in the Actions console, where you can specify background images, typography, colors, and more for your Actions.
Conversational experiences introduce complexity in how people ask to complete a task related to your Action: a user could ask for a game in thousands of different ways ("play a game for me", "find a maps quiz", "I want some trivia"). Figuring out all of the ways a user might ask for your Action is difficult, so we're beginning to map those phrasings into a taxonomy of built-in intents that abstracts this difficulty away.
We'll use the built-in intents you associate with your Actions to help users more easily discover your content as we begin testing them against users' queries. We'll continue to add many more built-in intents over the coming months to cover a variety of use cases. To get started in the Actions console, go to the "Build" tab, click "Actions", then "Add Action", and select a built-in intent.
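If you build with the Actions SDK rather than the console UI, associating a built-in intent is a small change to your action package. A minimal sketch is shown below; the `actions.intent.PLAY_GAME` name matches the games example above, while the conversation name and fulfillment URL are placeholders, not real endpoints:

```json
{
  "actions": [
    {
      "name": "PLAY_GAME",
      "intent": {
        "name": "actions.intent.PLAY_GAME"
      },
      "fulfillment": {
        "conversationName": "trivia_game"
      }
    }
  ],
  "conversations": {
    "trivia_game": {
      "name": "trivia_game",
      "url": "https://example.com/fulfillment"
    }
  }
}
```

With this in place, the Assistant can route matching user queries (however they're phrased) to your fulfillment without you enumerating every phrasing yourself.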
While we'll continue to improve the ways users find your Actions within the Assistant, we've also made it easier for users to find your Actions outside the Assistant. Driving new traffic to your Actions is as easy as a click with Action Links. You can now define a hyperlink for each of your Actions to use on your website, social media, email newsletters, and more. These links launch users directly into your Action. On desktop, a link takes users to your Action's directory page, where they can choose the device on which to try your Action. To configure Action Links in the console, visit the "Build" tab, choose "Actions", and select the Action for which you would like to create a link. That's it!
The best way to make sure that your Actions are working as intended is to test them using our updated web simulator. In the simulator, you can run through conversational user flows on phone, speaker, and even smart display device types. After you issue a request, you can see the visual response, the request and response JSON, and any errors. To help debug those errors, you can also view logs for your Actions.
Another great way to test your Actions is to deploy them to limited audiences in alpha and beta environments. Actions deployed to the alpha environment don't need to go through the review process, so you can quickly test with your users. After deploying to the beta environment, you can launch your Actions to production whenever you like without additional review. To use the alpha and beta environments, go to the "Deploy" tab and click "Release" in the Actions console.
After you deploy your Actions, it's equally important to measure their performance. By visiting the "Measure" tab and clicking "Analytics" in the Actions console, you can view rich analytics on usage, health, and discovery. You can easily see how many people are using and returning to your Actions, how many errors users are encountering, the phrases users are saying to discover your Actions, and much, much more. These insights can help you improve your Actions.
If you're new to the Actions console and looking for a quick way to get started, watch this video for an overview of the development process.
We're so excited to see how you will use the new Actions console to create even more Actions for more use cases, with additional tools to improve and iterate. Happy building!
Posted by Saba Zaidi, Senior Interaction Designer, Google Assistant
Earlier this year we announced Smart Displays, a new category of devices with the Google Assistant built in, that augment voice experiences with immersive visuals. These new, highly visual devices can make it easier to convey complex information, suggest Actions, support transactions, and express your brand. Starting today, Smart Displays are available for purchase in major US retailers, both in-store and online.
Interacting through voice is fast and easy because speaking comes naturally to people, and language doesn't constrain them to predefined paths the way traditional visual interfaces do. However, in audio-only interfaces it can be difficult to communicate detailed information like lists or tables, and nearly impossible to present rich content like images, charts, or a visual brand identity. Smart Displays let you create Actions for the Assistant that respond to natural conversation while also displaying information and representing your brand in an immersive, visual way.
Today we're announcing consumer availability of rich responses optimized for Smart Displays. With rich responses, you can use basic cards, lists, tables, carousels, and suggestion chips, which give you an array of visual interactions for your Action, with more visual components coming soon. You can also create custom themes to more deeply customize your Action's look and feel.
If you've already built a voice-centric Action for the Google Assistant, not to worry, it'll work automatically on Smart Displays. But we highly recommend adding rich responses and custom themes to make your Action even more visually engaging and useful to your users on Smart Displays. Here are a few tips to get you started:
Smart Displays offer several visual formats for displaying information and facilitating user input. A carousel of images, a list or a table can help users scan information efficiently and then interact with a quick tap or swipe.
For example, consider a long, spoken prompt like: "Welcome to National Anthems! You can play the national anthems from 20 different countries, including the United States, Canada and the United Kingdom. Which would you like to hear?"
Instead of merely showing the transcript of that whole spoken prompt on the screen, a carousel of country flags makes it easy for users to scroll and tap the anthem they want to hear.
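As a rough sketch of what that carousel could look like in a Dialogflow webhook response, the payload below pairs a short spoken prompt with a carousel of selectable options. The option keys, titles, and image URLs here are illustrative placeholders, not real assets:

```json
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Which anthem would you like to hear?"
            }
          }
        ]
      },
      "systemIntent": {
        "intent": "actions.intent.OPTION",
        "data": {
          "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
          "carouselSelect": {
            "items": [
              {
                "optionInfo": { "key": "usa", "synonyms": ["United States", "US"] },
                "title": "United States",
                "image": {
                  "url": "https://example.com/flags/usa.png",
                  "accessibilityText": "Flag of the United States"
                }
              },
              {
                "optionInfo": { "key": "canada", "synonyms": ["Canada"] },
                "title": "Canada",
                "image": {
                  "url": "https://example.com/flags/canada.png",
                  "accessibilityText": "Flag of Canada"
                }
              }
            ]
          }
        }
      }
    }
  }
}
```

When the user taps an item (or says its name), your fulfillment receives the selected option's key, so the conversation continues whether the user spoke or touched the screen.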
Suggestion chips are a great way to surface recommendations, aid feature discovery and keep the conversation moving on Smart Displays.
In this example, suggestion chips can help users find the "surprise me" feature, find the most popular anthems, or filter anthems by region.
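As a sketch, those chips can be attached to any rich response by adding a `suggestions` array alongside the spoken prompt. The chip labels below are taken from the example above; the payload shape follows the Dialogflow webhook format for the Assistant:

```json
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Which anthem would you like to hear?"
            }
          }
        ],
        "suggestions": [
          { "title": "Surprise me" },
          { "title": "Most popular" },
          { "title": "Europe" }
        ]
      }
    }
  }
}
```

Tapping a chip sends its title back as the user's next utterance, so each chip should match a phrase your Action already understands.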
You can take advantage of new custom themes to differentiate your experience and represent your brand's persona, choosing a custom voice, background image or color, font style, or the shape of your cards to match your branding.
For example, an Action like California Surf Report could be themed in a more immersive and customized way.
We offer more tips on designing and building for Smart Displays and other visual devices in our conversation design site and in our talk from I/O about how to design Actions across devices.
Then head to our documentation to learn how to customize the visual appearance of your Actions with rich responses. You can also test and tinker with customizations for Smart Displays in the Actions Console simulator.
Don't forget that once you publish your first Action you can join our community program* and receive your exclusive Google Assistant t-shirt and up to $200 of monthly Google Cloud credit.
We can't wait to see—quite literally—what you build next! Thanks for being a part of our community, and as always, if you have ideas or requests that you'd like to share with our team, don't hesitate to join the conversation.
*Some countries are not eligible to participate in the developer community program; please review the terms and conditions.
Posted by Billy Rutledge, Director of AIY Projects
Over the past year and a half, we've seen more than 200K people build, modify, and create with our Voice Kit and Vision Kit products. Today at Cloud Next we announced two new devices to help professional engineers build new products with on-device machine learning (ML) at their core: the AIY Edge TPU Dev Board and the AIY Edge TPU Accelerator. Both are powered by Google's Edge TPU and represent our first steps towards expanding AIY into a platform for experimentation with on-device ML.
The Edge TPU is Google's purpose-built ASIC chip designed to run TensorFlow Lite ML models on your device. We've learned that performance-per-watt and performance-per-dollar are critical benchmarks when processing neural networks within a small footprint. The Edge TPU delivers both in a package that's smaller than the head of a penny. It can accelerate ML inferencing on device, or can pair with Google Cloud to create a full cloud-to-edge ML stack. In either configuration, by processing data directly on-device, a local ML accelerator increases privacy, removes the need for persistent connections, reduces latency, and allows for high performance using less power.
The AIY Edge TPU Dev Board is an all-in-one development board that allows you to prototype embedded systems that demand fast ML inferencing. The baseboard provides all the peripheral connections you need to effectively prototype your device, including a 40-pin GPIO header to integrate with various electrical components. The board also features a removable system-on-module (SOM) daughterboard that can be directly integrated into your own hardware once you're ready to scale.
The AIY Edge TPU Accelerator is a neural network coprocessor for your existing system. This small USB-C stick can connect to any Linux-based system to perform accelerated ML inferencing. The casing includes mounting holes for attachment to host boards such as a Raspberry Pi Zero or your custom device.
On-device ML is still in its early days, and we're excited to see how these two products can be applied to solve real world problems — such as increasing manufacturing equipment reliability, detecting quality control issues in products, tracking retail foot-traffic, building adaptive automotive sensing systems, and more applications that haven't been imagined yet.
Both devices will be available online this fall in the US, with other countries to follow shortly.
For more product information visit g.co/aiy and sign up to be notified as products become available.
Posted by Mary Chen, Product Marketing Manager, and Ralfi Nahmias, Product Manager, Dialogflow
Today at Google Cloud Next '18, Dialogflow is introducing several new beta features to expand conversational capabilities for customer support and contact centers. Let's take a look at how three of these features can be used with the Google Assistant to improve the customer care experience for your Actions.
Building conversational Actions for content-heavy use cases, such as FAQ or knowledge base answers, is difficult. Such content is often dense and unstructured, making accurate intent modeling time-consuming and prone to error. Dialogflow's Knowledge Connectors feature simplifies the development process by understanding and automatically curating questions and responses from the content you provide. It can add thousands of extracted responses directly to your conversational Action built with Dialogflow, giving you more time for the fun parts – building rich and engaging user experiences.
Try out Knowledge Connectors in this bike shop sample
When users interact with the Google Assistant through text, it's common and natural for them to make spelling and grammar mistakes. When these mistypes occur, Actions may not understand the user's intent, resulting in a poor follow-up experience. With Dialogflow's Automatic Spelling Correction, Actions built with Dialogflow can automatically correct spelling mistakes, significantly improving intent and entity matching. Automatic Spelling Correction uses technology similar to what's used in Google Search and other Google products.
Enable Automatic Spelling Correction to improve intent and entity matching
Your Action can now be used as a virtual phone agent with Dialogflow's new Phone Gateway integration. Assign a working phone number to your Action built with Dialogflow, and it can start taking calls immediately. Phone Gateway allows you to easily implement virtual agents without needing to stitch together multiple services required for building phone applications.
Set up Phone Gateway in 3 easy steps
Dialogflow's Knowledge Connectors, Automatic Spelling Correction, and Phone Gateway are free for Standard Edition agents up to certain limits; for enterprise needs, see here for more options.
We look forward to the Actions you'll build with these new Dialogflow features. Give the features a try with the Cloud Next FAQ Action we made:
And if you're new to developing for the Google Assistant, join our Cloud Next talk this Thursday at 9am – see you on the livestream or in person!
Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud
Google Cloud Next '18 is only a few days away, and this year, there are over 500 sessions covering all aspects of cloud computing, from G Suite to the Google Cloud Platform. This is your chance to learn first-hand how to build custom solutions in G Suite alongside other developers from Independent Software Vendors (ISVs), systems integrators (SIs), and industry enterprises.
G Suite's intelligent productivity apps are secure, smart, and simple to use, so why not integrate your apps with them? If you're planning to attend the event and are wondering which sessions you should check out, here are some sessions to consider:
I look forward to meeting you in person at Next '18. In the meantime, check out the entire session schedule to find out everything it has to offer. Don't forget to swing by our "Meet the Experts" office hours (Tue-Thu), G Suite "Collaboration & Productivity" showcase demos (Tue-Thu), the G Suite Birds-of-a-Feather meetup (Wed), and the Google Apps Script & G Suite Add-ons meetup (just after the BoF on Wed). I'm excited at how we can use "all the tech" to change the world. See you soon!
Google Developers is proud to announce DevFest 2018, the largest annual community event series for the Google Developer Groups (GDG) program. Hundreds of GDG chapters around the world will host their biggest and most exciting developer event of the year. These are often all-day or multi-day events with many speakers and workshops, highlighting a wide range of Google developer products. DevFest season runs from August to November 2018.
Our GDG organizers and communities are getting ready for the season, and are excited to host an event near you!
Whether you are an established developer, new to tech, or just curious about the community, come and check out #DevFest18. Everyone is invited!
For more information on DevFest 2018 and to find an event near you, visit the site.