Posted by the Google Fonts team
The Google Fonts catalog now includes Japanese web fonts. Since shipping Korean in February, we have been working to optimize the font slicing system and extend it to support Japanese. The optimization efforts proved fruitful: Korean users now transfer, on average, over 30% fewer bytes than with our previous best solution. This type of ongoing optimization is a major goal of Google Fonts.
Japanese presents many of the same core challenges as Korean: a very large character set, complex glyph contours, and a complex writing system with intricate character interactions. The impact of a large character set made up of complex glyph contours is multiplicative, resulting in very large font files. Meanwhile, the complex writing system and character interactions forced us to refine our analysis process.
To begin supporting Japanese, we gathered character frequency data from millions of Japanese webpages and analyzed them to inform how to slice the fonts. Users download only the slices they need for a page, typically avoiding the majority of the font. Over time, as they visit more pages and cache more slices, their experience becomes ever faster. This approach is compatible with many scripts because it is based on observations of real-world usage.
Frequency of the popular Japanese and Korean characters on the web
As shown above, Korean and Japanese have a relatively small set of characters that are used extremely frequently, and a very long tail of rarely used characters. On any given page most of the characters will be from the high frequency part, often with a few rarer characters mixed in.
We tried fancier segmentation strategies, but a simple slicing scheme turned out to be the most performant for Korean: a user of Google Fonts viewing a webpage downloads only the slices needed for the characters on that page. This yielded great results, with clients downloading 88% fewer bytes than under a naive strategy of sending the whole font. While brainstorming how to make things even faster, we had a bit of a eureka moment: HTTP/2 lets the browser fetch many small slices in parallel over a single connection, and slices that are already cached never need to be downloaded again.
In combination, these features mean we no longer have to worry about queuing delays as we would have under HTTP/1.1, and therefore we can do much more fine-grained slicing.
Our analyses of the Japanese and Korean web show that most pages use mostly common characters, plus a few rarer ones. To optimize for this, we tested a variety of finer-grained slicing strategies on the common characters for both languages.
We concluded that a finer segmentation of those common characters is the best strategy for Korean, with clients downloading 38% fewer bytes than with our previous best strategy.
For Japanese, we found that segmenting the first 3,000 characters into 20 slices was best, resulting in clients downloading 80% fewer bytes than they would if we just sent the whole font. Having sufficiently reduced transfer sizes, we now feel confident in offering Japanese web fonts for the first time!
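To make the slicing idea concrete, here is a small illustrative sketch (written in Java; it is not Google Fonts' production tooling): it takes code points sorted by how often they appear on the web and splits the most frequent ones into a fixed number of equally sized slices, leaving the long tail to be sliced more coarsely. Each slice is then served as its own small font file, so a typical page triggers downloads for only the few slices its characters fall into.

// Illustrative only: split the N most frequent characters into equally sized slices.
// The real pipeline is driven by the frequency data described in this post.
import java.util.ArrayList;
import java.util.List;

public class FrequencySlicer {

  /**
   * @param codePointsByFrequency code points sorted from most to least frequently used
   * @param headCount             how many of the most frequent characters to slice finely
   * @param sliceCount            how many slices to split those characters into
   * @return one list of code points per slice; the remaining tail is sliced separately
   */
  public static List<List<Integer>> sliceHead(
      List<Integer> codePointsByFrequency, int headCount, int sliceCount) {
    List<List<Integer>> slices = new ArrayList<>();
    int perSlice = (int) Math.ceil(headCount / (double) sliceCount);
    for (int start = 0; start < headCount; start += perSlice) {
      int end = Math.min(start + perSlice, headCount);
      slices.add(new ArrayList<>(codePointsByFrequency.subList(start, end)));
    }
    return slices;
  }
}

For the Japanese strategy described above, this would be called with headCount = 3000 and sliceCount = 20.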
Now that both Japanese and Korean are live on Google Fonts, we have even more ideas for further optimization—and we will continue to ship updates to make things faster for our users. We are also looking forward to future collaborations with the W3C to develop new web standards and go beyond what is possible with today's technologies (learn more here).
PS - Google Fonts is hiring :)
Posted by Clayton Wilkinson, Developer Platforms Engineer
Today, we're releasing updates to ARCore, Google's platform for building augmented reality experiences, and to Sceneform, the 3D rendering library for building AR applications on Android. These updates include algorithm improvements that will let your apps use less memory and CPU during longer sessions. They also include new functionality that gives you more flexibility over content management.
Here's what we added:
Sceneform will now include an API to enable apps to load glTF models at runtime. You'll no longer need to convert glTF files to SFB format before rendering. This will be particularly useful for apps that have a large number of glTF models (like shopping experiences).
To take advantage of this new function -- and load models from the cloud or local storage at runtime -- use RenderableSource as the source when building a ModelRenderable.
private static final String GLTF_ASSET =
    "https://github.com/KhronosGroup/glTF-Sample-Models/raw/master/2.0/Duck/glTF/Duck.gltf";

// When you build a Renderable, Sceneform loads its resources in the background while returning
// a CompletableFuture. Call thenAccept(), handle(), or check isDone() before calling get().
ModelRenderable.builder()
    .setSource(this, RenderableSource.builder().setSource(
        this, Uri.parse(GLTF_ASSET), RenderableSource.SourceType.GLTF2).build())
    .setRegistryId(GLTF_ASSET)
    .build()
    .thenAccept(renderable -> duckRenderable = renderable)
    .exceptionally(
        throwable -> {
          Toast toast =
              Toast.makeText(this, "Unable to load renderable", Toast.LENGTH_LONG);
          toast.setGravity(Gravity.CENTER, 0, 0);
          toast.show();
          return null;
        });
Sceneform has a UX library of common elements like plane detection and object transformation. Instead of recreating these elements from scratch every time you build an app, you can save precious development time by taking them from the library. But what if you need to tailor these elements to your specific app needs? Today we're publishing the source code of the UX library so you can customize whichever elements you need.
An example of interactive object transformation, powered by an element in the Sceneform UX Library.
Several developers have told us that when it comes to point clouds, they'd like to be able to associate points between frames. Why? Because when a point is present in multiple frames, it is more likely to be part of a solid, stable structure rather than an object in motion.
To make this possible, we're adding an API to ARCore that assigns an ID to each individual point in the point cloud. These IDs are stable from frame to frame, so when the same point shows up in several frames your app can recognize it and treat it as part of a solid, static structure.
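As a rough sketch of how an app might use these IDs, the snippet below counts how many frames each point has been observed in and only acts on points that keep reappearing. The calls to Frame.acquirePointCloud(), PointCloud.getIds(), and PointCloud.getPoints() come from the ARCore Android SDK; the stability threshold and the bookkeeping around it are illustrative assumptions rather than a prescribed pattern.

// Sketch: promote points to "stable" once they've been seen in several frames.
// STABLE_FRAME_COUNT and the HashMap bookkeeping are assumptions used to illustrate the idea.
import com.google.ar.core.Frame;
import com.google.ar.core.PointCloud;
import java.nio.FloatBuffer;
import java.nio.IntBuffer;
import java.util.HashMap;
import java.util.Map;

public class StablePointTracker {
  private static final int STABLE_FRAME_COUNT = 5; // tune for your app
  private final Map<Integer, Integer> framesSeen = new HashMap<>();

  public void onUpdate(Frame frame) {
    try (PointCloud pointCloud = frame.acquirePointCloud()) {
      IntBuffer ids = pointCloud.getIds();         // one ID per point, stable across frames
      FloatBuffer points = pointCloud.getPoints(); // x, y, z, confidence for each point
      for (int i = 0; i < ids.limit(); i++) {
        int count = framesSeen.merge(ids.get(i), 1, Integer::sum);
        if (count == STABLE_FRAME_COUNT) {
          // Seen repeatedly: likely part of a solid, static structure rather than noise or motion.
          onStablePoint(ids.get(i),
              points.get(i * 4), points.get(i * 4 + 1), points.get(i * 4 + 2));
        }
      }
    }
  }

  private void onStablePoint(int id, float x, float y, float z) {
    // App-specific handling (for example, feed into your own mapping or anchoring logic).
  }
}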
Last but not least, we continue to add ARCore support to more devices so your AR experiences can reach more users across more surfaces. These include smartphones as well as -- for the first time -- a Chrome OS device, the Acer Chromebook Tab 10.
You can get the latest information about ARCore and Sceneform at https://developers.google.com/ar/develop. Ready to try out the samples, or have an issue to report? Visit our projects hosted on GitHub.
Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud
Google Cloud Platform (GCP) provides infrastructure, serverless products, and APIs that help you build, innovate, and scale. G Suite provides a collection of productivity tools, developer APIs, extensibility frameworks and low-code platforms that let you integrate with G Suite applications, data, and users. While each solution is compelling on its own, users can get more power and flexibility by leveraging both together.
In the latest episode of the G Suite Dev Show, I'll show you one example of how you can take advantage of powerful GCP tools right from G Suite applications. BigQuery, for example, can help you surface valuable insight from massive amounts of data. However, regardless of "the tech" you use, you still have to justify and present your findings to management, right? You've already completed the big data analysis part, so why not go that final mile and tap into G Suite for its strengths? In the sample app covered in the video, we show you how to go from big data analysis all the way to an "exec-ready" presentation.
The sample application is meant to give you an idea of what's possible. While the video walks through the code in more detail, let's go over it at a high level here. Google Apps Script is a G Suite serverless development platform that provides straightforward access to G Suite APIs as well as some GCP tools such as BigQuery. The first part of our app, the runQuery() function, issues a query to BigQuery from Apps Script, then connects to Google Sheets to store the results in a new Sheet (note that we left out the CONSTANT variable definitions for brevity):
function runQuery() {
  // make BigQuery request
  var request = {query: BQ_QUERY};
  var queryResults = BigQuery.Jobs.query(request, PROJECT_ID);
  var jobId = queryResults.jobReference.jobId;
  queryResults = BigQuery.Jobs.getQueryResults(PROJECT_ID, jobId);
  var rows = queryResults.rows;

  // put results into a 2D array
  var data = new Array(rows.length);
  for (var i = 0; i < rows.length; i++) {
    var cols = rows[i].f;
    data[i] = new Array(cols.length);
    for (var j = 0; j < cols.length; j++) {
      data[i][j] = cols[j].v;
    }
  }

  // put array data into new Sheet
  var spreadsheet = SpreadsheetApp.create(QUERY_NAME);
  var sheet = spreadsheet.getActiveSheet();
  var headers = queryResults.schema.fields.map(function(field) {
    return field.name; // use the column names for the header row
  });
  sheet.appendRow(headers); // header row
  sheet.getRange(START_ROW, START_COL, rows.length, headers.length).setValues(data);

  // return Sheet object for later use
  return spreadsheet;
}
runQuery() returns a handle to the new Google Sheet, which we then pass on to the next component: using Google Sheets to generate a chart from the BigQuery data. Again leaving out the CONSTANTs, here is the second part of our app, the createColumnChart() function:
function createColumnChart(spreadsheet) {
  // create & put chart on 1st Sheet
  var sheet = spreadsheet.getSheets()[0];
  var chart = sheet.newChart()
      .setChartType(Charts.ChartType.COLUMN)
      .addRange(sheet.getRange(START_CELL + ':' + END_CELL))
      .setPosition(START_ROW, START_COL, OFFSET, OFFSET)
      .build();
  sheet.insertChart(chart);

  // return Chart object for later use
  return chart;
}
The chart is returned by createColumnChart(), so we can use it, together with the Spreadsheet object, to build the desired slide presentation with Google Slides in the third part of our app, the createSlidePresentation() function:
function createSlidePresentation(spreadsheet, chart) {
  // create new deck & add title+subtitle
  var deck = SlidesApp.create(QUERY_NAME);
  var [title, subtitle] = deck.getSlides()[0].getPageElements();
  title.asShape().getText().setText(QUERY_NAME);
  subtitle.asShape().getText().setText('via GCP and G Suite APIs:\n' +
      'Google Apps Script, BigQuery, Sheets, Slides');

  // add new slide and insert empty table
  var tableSlide = deck.appendSlide(SlidesApp.PredefinedLayout.BLANK);
  var sheetValues = spreadsheet.getSheets()[0].getRange(
      START_CELL + ':' + END_CELL).getValues();
  var table = tableSlide.insertTable(sheetValues.length, sheetValues[0].length);

  // populate table with data in Sheets
  for (var i = 0; i < sheetValues.length; i++) {
    for (var j = 0; j < sheetValues[0].length; j++) {
      table.getCell(i, j).getText().setText(String(sheetValues[i][j]));
    }
  }

  // add new slide and add Sheets chart to it
  var chartSlide = deck.appendSlide(SlidesApp.PredefinedLayout.BLANK);
  chartSlide.insertSheetsChart(chart);

  // return Presentation object for later use
  return deck;
}
Finally, we need a driver application that calls all three functions one after another, the createBigQueryPresentation() function:
function createBigQueryPresentation() {
  var spreadsheet = runQuery();
  var chart = createColumnChart(spreadsheet);
  var deck = createSlidePresentation(spreadsheet, chart);
}
We left out some details in the code above, but we hope this overview helps kickstart your own project. Seeking a guided tutorial to building this app one step at a time? Do our codelab at g.co/codelabs/bigquery-sheets-slides. Alternatively, you can see all the code in our GitHub repo at github.com/googlecodelabs/bigquery-sheets-slides. After executing the app successfully, you'll see the fruits of your big data analysis captured in a presentable way in a Google Slides deck:
This isn't the end of the story as this is just one example of how you can leverage both platforms from Google Cloud. In fact, this was one of two sample apps featured in our Cloud NEXT '18 session this summer exploring interoperability between GCP & G Suite which you can watch here:
Stay tuned as more examples are coming. We hope these videos plus the codelab inspire you to build on your own ideas.
Posted by Jonathan Huang, Senior Product Manager, Google AR/VR
Since we first launched Daydream, developers have responded by creating virtual reality (VR) experiences that are entertaining, educational and useful. Today, we're announcing a new set of experimental features for developers to use on the Lenovo Mirage Solo—our standalone Daydream headset—to continue to push the platform forward. Here's what's coming:
First, we're adding APIs to support positional controller tracking with six degrees of freedom—or 6DoF—to the Mirage Solo. With 6DoF tracking, you can move your hands more naturally in VR, just like you would in the physical world. To date, this type of experience has been limited to expensive PC-based VR with external tracking.
We've also created experimental 6DoF controllers that use a unique optical tracking system to help developers start building with 6DoF features on the Mirage Solo. Instead of using expensive external cameras and sensors that have to be carefully calibrated, our system uses machine learning and off-the-shelf parts to accurately estimate the 3D position and orientation of the controllers. We're excited about this approach because it can reduce the need for expensive hardware and make 6DoF experiences more accessible to more people.
We've already put these experimental controllers in the hands of a few developers and we're excited for more developers to start testing them soon.
Experimental 6DoF controllers
We're also introducing what we call see-through mode, which gives you the ability to see what's right in front of you in the physical world while you're wearing your VR headset.
See-through mode takes advantage of our WorldSense technology, which was built to provide accurate, low latency tracking. And, because the tracking cameras on the Mirage Solo are positioned at approximately eye-distance apart, you also get precise depth perception. The result is a see-through mode good enough to let you play ping pong with your headset on.
Playing ping pong with see-through mode on the Mirage Solo.
The combination of see-through mode and the Mirage Solo's tracking technology also opens up the door for developers to blend the digital and physical worlds in new ways by building Augmented Reality (AR) prototypes. Imagine, for example, an interior designer being able to plan a new layout for a room by adding virtual chairs, tables and decorations on top of the actual space.
Experimental app using objects from Poly, see-through mode and 6DoF Controllers to design a space in our office.
Finally, we're introducing the capability to open any smartphone Android app on your Daydream device, so you can use your favorite games, tools and more in VR. For example, you can play the popular indie game Mini Metro on a virtual big screen, so you have more space to view and plan your own intricate public transit system.
Playing Mini Metro on a virtual big screen in VR.
With support for Android Apps in VR, developers will be able to add Daydream VR support to their existing 2D applications without having to start from scratch. The Chrome team re-used the existing 2D interfaces for Chrome Browser Sync, settings and more to provide a feature-rich browsing experience in Daydream.
The Chrome app on Daydream uses the 2D settings within VR.
We've really loved building with these tools and can't wait to see what you do with them. See-through mode and Android Apps in VR will be available for all developers to try soon.
If you're a developer in the U.S., click here to learn more and apply now for an experimental 6DoF controller developer kit.
Flutter is Google's new mobile app toolkit for crafting beautiful native interfaces on iOS and Android in record time. Today, during the keynote of Google Developer Days in Shanghai, we are announcing Flutter Release Preview 2: our last major milestone before Flutter 1.0.
This release continues the work of completing core scenarios and improving quality that began with our initial beta release in February and continued through the availability of our first Release Preview earlier this summer. The team is now fully focused on completing our 1.0 release.
The theme for this release is pixel-perfect iOS apps. While we designed Flutter with highly brand-driven, tailored experiences in mind, we heard feedback from some of you who wanted to build applications that closely follow the Apple interface guidelines. So in this release we've greatly expanded our support for the "Cupertino" themed controls in Flutter, with an extensive library of widgets and classes that make it easier than ever to build with iOS in mind.
A reproduction of the iOS Settings home page, built with Flutter
Here are a few of the new iOS-themed widgets added in Flutter Release Preview 2:
CupertinoApp
CupertinoTimerPicker
CupertinoSegmentedControl
CupertinoActionSheet
And more have been updated, too:
CupertinoNavigationBar
CupertinoSliverNavigationBar
CupertinoPageRoute.title
CupertinoPageScaffold
CupertinoScrollbar
CupertinoPicker
As ever, the Flutter documentation is the place to go for detailed information on the Cupertino* classes. (Note that at the time of writing, we were still working to add some of these new Cupertino widgets to the visual widget catalog).
We've made progress to complete other scenarios also. Taking a look under the hood, support has been added for executing Dart code in the background, even while the application is suspended. Plugin authors can take advantage of this to create new plugins that execute code upon an event being triggered, such as the firing of a timer, or the receipt of a location update. For a more detailed introduction, read this Medium article, which demonstrates how to use background execution to create a geofencing plugin.
Another improvement is a reduction of up to 30% in our application package size on both Android and iOS. Our minimal Flutter app on Android now weighs in at just 4.7MB when built in release mode, a savings of 2MB since we started the effort — and we're continuing to identify further potential optimizations. (Note that while the improvements affect both iOS and Android, you may see different results on iOS because of how iOS packages are built).
As many new developers continue to discover Flutter, we're humbled to note that Flutter is now one of the top 50 active software repositories on GitHub:
We declared Flutter "production ready" at Google I/O this year; with Flutter getting ever closer to the stable 1.0 release, many new Flutter applications are being released, with thousands of Flutter-based apps already appearing in the Apple and Google Play stores. These include some of the largest applications on the planet by usage, such as Alibaba (Android, iOS), Tencent Now (Android, iOS), and Google Ads (Android, iOS). Here's a video on how Alibaba used Flutter to build their Xianyu app (Android, iOS), currently used by over 50 million customers in China:
We take customer satisfaction seriously and regularly survey our users. We promised to share the results back with the community, and our most recent survey shows that 92% of developers are satisfied or very satisfied with Flutter and would recommend Flutter to others. When it comes to fast development and beautiful UIs, 79% found Flutter extremely helpful or very helpful in both reaching their maximum engineering velocity and implementing an ideal UI. And 82% of Flutter developers are satisfied or very satisfied with the Dart programming language, which recently celebrated hitting the release milestone for Dart 2.
Flutter's strong community growth can be felt in other ways, too. On StackOverflow, we see fast growing interest in Flutter, with lots of new questions being posted, answered and viewed, as this chart shows:
Number of StackOverflow question views tagged with each of four popular UI frameworks over time
Flutter has been open source from day one. That's by design. Our goal is to be transparent about our progress and encourage contributions from individuals and other companies who share our desire to see beautiful user experiences on all platforms.
How do you upgrade to Flutter Release Preview 2? If you're on the beta channel already, it just takes one command:
$ flutter upgrade
You can check that you have Release Preview 2 installed by running flutter --version from the command line. If you have version 0.8.2 or later, you have everything described in this post.
If you haven't tried Flutter yet, now is the perfect time, and flutter.io has all the details to download Flutter and get started with your first app.
When you're ready, there's a whole ecosystem of example apps and code snippets to help you get going. You can find samples from the Flutter team in the flutter/samples repo on GitHub, covering things like how to use Material and Cupertino, approaches for deserializing data encoded in JSON, and more. There's also a curated list of samples that links out to some of the best examples created by the Flutter community.
You can also learn and stay up to date with Flutter through our hands-on videos, newsletters, community articles and developer shows. There are discussion groups, chat rooms, community support, and a weekly online hangout available to you to help you along the way as you build your application. Release Preview 2 is our last release preview. Next stop: 1.0!
As we shared in May, people create and consume photos and videos in many different ways, and we think it should be easier to do more with the photos people take, across more of the apps and devices we all use. That's why we created the Google Photos Library API: to give you the ability to build photo and video experiences in your products that are smarter, faster, and more helpful.
After a successful developer preview over the past few months, the Google Photos Library API is now generally available. If you want to build and test your own experience, you can visit our developer documentation to get started. You can also express your interest in joining the Google Photos partner program if you are planning a larger integration.
Here's a quick overview of the Google Photos Library API and what you can do:
Whether you're a mobile, web, or backend developer, you can use this REST API to utilize the best of Google Photos and help people connect, upload, and share from inside your app. We are also launching client libraries in multiple languages that will help you get started quicker.
Users have to authorize requests through the API, so they are always in the driver's seat. Here are a few things you can help your users do:
Putting machine learning to work in your app is simple too. You can use smart filters, like content categories, to narrow down or exclude certain types of photos and videos and make it easier for your users to find the ones they're looking for.
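To give a feel for what a request with a smart filter looks like, here is a minimal sketch (not an official client-library sample) that calls the REST endpoint directly with Java 11's built-in HTTP client. It assumes you already have a user-authorized OAuth 2.0 access token in an ACCESS_TOKEN environment variable, and the LANDSCAPES category is just one example:

// Minimal, unofficial sketch of calling the Library API's search endpoint with a content filter.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PhotosSearchSketch {
  public static void main(String[] args) throws Exception {
    String accessToken = System.getenv("ACCESS_TOKEN"); // user-authorized OAuth 2.0 token (assumed)

    // Ask for up to 25 media items that Google Photos has categorized as landscapes.
    String body = "{\"pageSize\": 25, \"filters\": {"
        + "\"contentFilter\": {\"includedContentCategories\": [\"LANDSCAPES\"]}}}";

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://photoslibrary.googleapis.com/v1/mediaItems:search"))
        .header("Authorization", "Bearer " + accessToken)
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();

    // The JSON response lists the matching media items (plus a nextPageToken for paging).
    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body());
  }
}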
Thanks to everyone who provided feedback throughout our developer preview; your contributions helped make the API better. You can read our release notes to follow along with new releases of the API. And, if you've been using the Picasa Web Albums API, here's a migration guide that will help you move to the Google Photos Library API.
Posted by Cathy Pearl, Head of Conversation Design Outreach. Illustrations by Kimberly Harvey.
Hi all! I'm Cathy Pearl, head of conversation design outreach at Google. I've been building conversational systems for a while now, starting with IVRs (phone systems) and moving on to multi-modal experiences. I'm also the author of the O'Reilly book Designing Voice User Interfaces. These days, I'm keen to introduce designers and developers to our conversation design best practices so that Actions will provide the best possible user experience. Today, I'll be talking about a fundamental first step when thinking about creating an Action: writing sample dialogs.
So, you've got a cool idea for Actions on Google you want to build. You've brushed up on Dialogflow, done some codelabs, and figured out which APIs you want to use. You're ready to start coding, right?
Not so fast!
Creating an Action always needs to start with designing an Action. Don't panic; it's not going to slow you down. Planning out the design first will save you time and headaches later, and ultimately produces a better, more usable experience.
In this post, I'll talk about the first and most important component for designing a good conversational system: sample dialogs. Sample dialogs are potential conversational paths a user might take while conversing with your Action. They look a lot like film scripts, with dialog exchanges between your Action and the user. (And, like film scripts, they should be read aloud!) Writing sample dialogs comes before writing code, and even before creating flows.
When I talk to people about the importance of sample dialogs, I get a lot of nods and agreement. But when I go back later and say, "Hey, show me your sample dialogs," I often get a sheepish smile and an excuse as to why they weren't written. Those excuses usually trace back to a couple of misconceptions.
First off, there is a misconception that "conversation design" (or voice user interface design) is just the top layer of the experience: the words, and perhaps the order of words, that the user will see/hear.
But conversation design goes much deeper. It drives the underlying structure of the experience: what the user can say and do at each step, how the Action guides them through the flow, and how errors are handled.
In the end, these things manifest as words, to be sure. But thinking of them as "stuff you worry about later" will set you up for failure when it comes time for your user to interact with your Action. For example, without a sample dialog, you might not realize that your prompts all start with the word "Next", making them sound robotic and stilted. Sample dialogs will also show you where you need "glue" words such as "first" and "by the way".
Google has put together design guidelines for building conversational systems. They include an introduction to sample dialogs and why they're important:
Sample dialogs will give you a quick, low-fidelity sense of the "sound-and-feel" of the interaction you're designing. They convey the flow that the user will actually experience, without the technical distractions of code notation, complex flow diagrams, recognition-grammar issues, etc.
By writing sample dialogs, you can informally experiment with and evaluate different design strategies, such as how to promote the discoverability of new features or how to confirm a user's request (for example: should you use an implicit confirmation, an explicit confirmation, or no confirmation at all?).
Check out the Google I/O 2018 Action sample dialogs to see an example. (You can also take a look at the Google I/O 2018 Action code.)
Still not sure if you really need them? Let's hear from a developer who works on Actions, Jessica Dene Earley-Cha, who said in her recent Medium post:
Let's cover how this was built. Before any coding can happen, we need to build a Conversational Design. I originally had skipped this step because I thought that I could build the structure first and then fill in the content (like building a website). However, the structure is tied in with the content. I realized this when I was hitting walls that I thought were technical, but they were there because I didn't have a design.
She makes the great point that designing for conversational systems is different than designing for the web. With a conversational interface, the content itself is part of the structure, so design becomes even more important.
So now that you're (hopefully) convinced, let's discuss four of the common pitfalls developers can avoid by using sample dialogs:
Suppose you're writing an Action for a bike shop: it can make repair and service appointments, give store hours, and list the latest deals. It's tempting to just start listing out options so the user will know everything they can do. Let's see what a sample dialog looks like using that strategy:
USER: Hey Google, talk to Alyssa's Bike Shop.
ACTION: Hi! Welcome to Alyssa's Bike Shop. Would you like to make an appointment for a repair, cancel an appointment, hear store hours, hear this week's deals, or hear next month's deals?
If you read this prompt out loud to someone else, it will quickly become apparent that too much information is being presented. Humans have a limited capacity for taking in audio, especially if it's the first time they're hearing it.
Here is a better way:
ACTION: Hi! Welcome to Alyssa's Bike Shop. I can help you make or cancel an appointment, get store hours, or tell you the latest deals. Which would you like?
Pro tip: an even better experience would be to leave out the "cancel" option if the user doesn't have any appointments to cancel.
Here's a sample dialog representing a common mistake in many conversational experiences. In this example, the user is returning to an Action they've tried previously:
USER: Hey Google, talk to Cathy's Astronomy Trivia Game.
ACTION: This is Cathy's Astronomy Trivia Game. Welcome back!
USER: Uhhh… what can I do??
When we become immersed in our design, we often forget that the user will be coming to the Action without the background we have. What they can do will not always be obvious.
Here's an improvement:
ACTION: This is Cathy's Astronomy Trivia Game. You can play solo, or join a team. What would you like to do?
USER: Uh, I think I'll join a team.
Remember, you don't have to list all your options up front. Instead, try to guide the user through the process in a clear manner, and end with a call to action.
Another common issue is not making it obvious when it's the user's turn to speak. As humans, we have many ways to let someone know it's their turn: words, intonation, eye gaze, and body language. For things like the Assistant, it's important to end the turn with a clear question or instruction. Here's an example of an Action that leaves the user hanging:
USER: Okay Google, talk to Cool Animal Facts.
ACTION: Welcome to Cool Animal Facts! What animal would you like to know about?
USER: I love penguins!
ACTION: Cool, penguins. Would you like to know about their habitat, or what they like to eat?
USER: Oh, what do they like to eat!
ACTION: Penguins can eat up to 13 pounds of fish, krill and squid a day! And did you know the fathers sit on the eggs?
USER: Yes, I saw…
ACTION: Meanwhile, the mothers go out to get fish.
USER: (thinking) Wait... am I supposed to say something?
In this example we see two problems. First, the Action asked a rhetorical question. When the user tried to answer, the Action ignored them and kept on talking.
The second problem is that after the fact was given, nothing happened. There was no indication of what the user could do next.
Here's a better way:
ACTION: Penguins can eat up to 13 pounds of fish, krill and squid a day! By the way, male penguins spend time sitting on the eggs, while the females go out to get fish. Now, would you like to hear more about penguins, or ask about another animal?
USER: I'm done, thanks.
ACTION: Sure, thanks for talking to Cool Animal Facts. Goodbye!
Pro tip: Users often want to have an explicit way out, such as saying "goodbye".
Pro tip 2: Make your grammar robust! For the question "Would you like to hear more about penguins, or ask about another animal?" be prepared to handle responses such as "yes", "no", and names of animals.
Gracefully handling the conversation when things go wrong is critical for any conversational system. Writing multiple error prompts for every turn in the conversation can seem daunting, but a default "I'm sorry, I didn't understand" won't help the user. Instead, each error prompt should be based on what's been happening up until that point and where the user currently is. To make it easier, write sample dialogs for the error conditions too; they will make it much clearer what should go in the prompts (there's also a small code sketch after the corrected dialog below).
USER: Okay Google, talk to the Zombie Quiz.
ACTION: Time for the Zombie Quiz! First question: which one of these should you do if you're being chased by a zombie: lock the door, run for the hills, or plant a garden?
USER: (thinking)
ACTION: Sorry, I didn't get that. Which one do you choose?
USER: What? I don't remember them...
ACTION: Sorry, I'm still having trouble. Which one do you choose?
USER: Argh!
ACTION: Sorry, that's incorrect. Question two…
Context-sensitive error handling would have gotten the user back on track:
ACTION: Which one of these should you do if you're being chased by a zombie: lock the door, run for the hills, or plant a garden?
USER: Uhh... I'm not sure... let me see... maybe the...
ACTION: You can also say the number of the answer, like one, two, or three. Which one do you choose?
USER: Oh, number three.
ACTION: Correct!
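To show one way this can be structured in code, here is a small, SDK-agnostic sketch; the class name, question ID, and prompt wording are all hypothetical. The idea is simply that every reprompt is keyed to the question the user is stuck on and escalates with each failed attempt:

// Hypothetical sketch: context-sensitive reprompts keyed by the current question,
// escalating from a gentle nudge to more explicit help. Not tied to any particular SDK.
import java.util.List;
import java.util.Map;

public class RepromptPicker {
  private static final Map<String, List<String>> REPROMPTS = Map.of(
      "zombie_question_1", List.of(
          "Which one do you choose: lock the door, run for the hills, or plant a garden?",
          "You can also say the number of the answer, like one, two, or three. "
              + "Which one do you choose?"));

  /** Returns the reprompt for this question, giving more help with each failed attempt. */
  public static String pick(String questionId, int failedAttempts) {
    List<String> prompts = REPROMPTS.getOrDefault(
        questionId, List.of("Sorry, I didn't get that.")); // generic fallback, used only as a last resort
    return prompts.get(Math.min(failedAttempts, prompts.size() - 1));
  }
}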
I know you're itching to take off and start drawing flows and writing code, but take time to write sample dialogs first. In the long run, it will make your coding easier, and you'll have fewer bugs to fix.
Here's a quick list of "Dos" to keep in mind when writing sample dialogs: read them aloud, guide the user with a clear call to action at every turn, and cover the error paths as well as the happy path.
Happy writing!
Posted by Rich Hyndman, Global Tech Lead, Google Launchpad
Launchpad Studio is an acceleration program for the world's top startups. Founders work closely with Google and Alphabet product teams and experts to solve specific technical challenges and optimize their businesses for growth with machine learning. Last year we introduced our first applied-ML cohort focused on healthcare.
Today, we are excited to welcome the new cohort of Finance startups selected to participate in Launchpad Studio, including Celo, GO-JEK, Starling Bank, and GuiaBolso.
These Studio startups have been invited from across nine countries and four continents to discuss how machine learning can be utilized for financial inclusion, stable currencies, and identification services. They are defining how ML and blockchain can supercharge efforts to include everyone and ensure greater prosperity for all. Together, data and user behavior are enabling a truly global economy with inclusive and differentiated products for banking, insurance, and credit.
Each startup is paired with a Google product manager to accelerate their product development, working alongside Google's ML research and development teams. Studio provides 1:1 mentoring and access to Google's people, network, thought leadership, and technology.
"Two of the biggest barriers to the large-scale adoption of cryptocurrencies as a means of payment are ease-of-use and purchasing-power volatility. When we heard about Studio and the opportunity to work with Google's AI teams, we were immediately excited as we believe the resulting work can be beneficial not just to Celo but for the industry as a whole." - Rene Reinsberg, Co-Founder and CEO of Celo
"Our technology has accelerated economic growth across Indonesia by raising the standard of living for millions of micro-entrepreneurs including ojek drivers, restaurant owners, small businesses and other professionals. We are very excited to work with Google, and explore more on how artificial intelligence and machine learning can help us strengthen our capabilities to drive even more positive social change not only to Indonesia, but also for the region." - Kevin Aluwi, Co-Founder and CIO of GO-JEK
"At Starling, we believe that data is the key to a healthy financial life. We are excited about the opportunity to work with Google to turn data into insights that will help consumers make better and more-informed financial decisions." - Anne Boden, Founder and CEO of Starling Bank
"At GuiaBolso, we use machine learning in different workstreams, but now we are doubling down on the technology to make our users' experience even more delightful. We see Studio as a way to speed that up." - Marcio Reis, CDO of GuiaBolso
Since launching in 2015, Google Developers Launchpad has become a global network of accelerators and partners with the shared mission of accelerating innovation that solves for the world's biggest challenges. Join us at one of our Regional Accelerators and follow Launchpad's applied ML best practices by subscribing to The Lever.