You might have seen that we announced new features in G Suite to help teams transform how they work, including Hangouts Chat, a new messaging platform for enterprise collaboration on web and mobile. Perhaps more interesting is that starting today you'll be able to craft your own bot integrations using the Hangouts Chat developer platform and API.
Now you can create bots to streamline work—automate manual tasks or give your users new ways to connect with your application, all with commands issued from chat rooms or direct messages (DMs).
For example, a bot can take a location from a user, look it up using the Google Maps API, and display the resulting map right within the same message thread in Hangouts Chat. The card shown in the documentation is generated by the Apps Script bot integration, and the JSON payload it returns appears just below that image in the docs.
When messages are sent to an Apps Script bot, the onMessage() function is called and passed an event object. The code below extracts the bot name as well as the location requested by the user. The location is then passed to Google Maps to create the static map as well as an openLink URL that takes the user directly to Google Maps if either the map or "Open in Google Maps" link is clicked.
function onMessage(e) {
  var bot = e.message.annotations[0].userMention.user.displayName;
  var loc = encodeURI(e.message.text.substring(bot.length + 2));
  var mapClick = {
    "openLink": {
      "url": "https://google.com/maps/search/?api=1&query=" + loc
    }
  };
  return {
    // see JSON payload in the documentation link above
  };
}
Finally, this function returns everything Hangouts Chat needs to render a UI card, assuming the appropriate links, data, and Google Maps API key were added to the response JSON payload. It may be surprising, but that's the entire bot, and it follows a common formula: get the user request, collate the results, and respond back to the user.
When results are returned immediately like this, it's known as a synchronous bot, and using the API isn't necessary because you're simply responding to the incoming HTTP request. If your bot requires additional processing time or must execute a workflow out-of-band, it can return immediately and then post an asynchronous response once the background jobs have completed and there is data to return. Learn more in the documentation about bot implementation and workflow, as well as synchronous vs. asynchronous responses.
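To make the asynchronous case concrete, here's a minimal sketch of what a follow-up from Apps Script might look like, posting a message back into a space via the Hangouts Chat REST API once a background job finishes. The spaceName value comes from the original event object (e.space.name), and getServiceAccountToken() is a hypothetical helper standing in for the service-account authorization the API requires; see the documentation for the exact setup.

// Sketch: post an asynchronous bot response after background work completes.
function postAsyncResponse(spaceName, text) {
  // spaceName looks like "spaces/AAAA1234", captured earlier from e.space.name.
  var url = 'https://chat.googleapis.com/v1/' + spaceName + '/messages';
  var options = {
    method: 'post',
    contentType: 'application/json',
    // Hypothetical helper: asynchronous messages must be authorized with
    // service-account credentials rather than the synchronous HTTP reply.
    headers: {Authorization: 'Bearer ' + getServiceAccountToken()},
    payload: JSON.stringify({text: text})
  };
  UrlFetchApp.fetch(url, options);
}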
Developers are not constrained to using Apps Script, although it is perhaps one of the easiest ways to create and deploy bots; you can write and host bots on a variety of platforms.
No longer are chat rooms just for conversations. With feature-rich, intelligent bots, users can automate tasks, get critical information, or do other heavy lifting with a simple message. We're excited about the possibilities that await both developers and G Suite users on the new Hangouts Chat platform and API.
Today, as part of Mobile World Congress 2018, we are excited to announce the first beta release of Flutter. Flutter is Google's new mobile UI framework that helps developers craft high-quality native interfaces for both iOS and Android. Get started today at flutter.io to build beautiful native apps in record time.
Flutter targets the sweet spot of mobile development: the performance and platform integrations of native mobile, combined with the high-velocity development and multi-platform reach of portable UI toolkits.
Designed for both new and experienced mobile developers, Flutter can help you build beautiful and successful apps in record time.
Since our alpha release last year, we've delivered, with help from our community, features such as screen reader support and other accessibility features, right-to-left text, localization and internationalization, iPhone X and iOS 11 support, inline video, additional image format support, running Flutter code in the background, and much more.
Our tools have also improved significantly, with support for Android Studio and Visual Studio Code, new refactorings to help you manage your widget code, platform interop to expose the power of mobile platforms to Flutter code, improved stateful hot reload, and a new widget inspector to help you browse the widget tree.
Thanks to the many new features across the framework and tools, teams across Google (such as AdWords) and around the world have been shipping with Flutter. It has been used in production apps with millions of installs, apps built with Flutter have been featured in the App Store and Play Store (for example, Hamilton: The Musical), and startups and agencies alike have found success with it.
For example, Codemate, a development agency in Finland, credits Flutter's high-velocity development cycle and customizable UI toolkit for its ability to quickly build a beautiful app for Hookle. "We now confidently recommend Flutter to help our clients perform better and deliver more value to their users across mobile," said Toni Piirainen, CEO of Codemate.
Apps built with Flutter deliver quality, performance, and customized designs across platforms.
Flutter's beta also works with a pre-release of Dart 2, with improved support for declaring UI in code with minimal language ceremony. For example, Dart 2 infers new and const to remove boilerplate when building UI. Here is an example:
// Before Dart 2
Widget build(BuildContext context) {
  return new Container(
    height: 56.0,
    padding: const EdgeInsets.symmetric(horizontal: 8.0),
    decoration: new BoxDecoration(color: Colors.blue[500]),
    child: new Row(
      ...
    ),
  );
}

// After Dart 2
Widget build(BuildContext context) => Container(
  height: 56.0,
  padding: EdgeInsets.symmetric(horizontal: 8.0),
  decoration: BoxDecoration(color: Colors.blue[500]),
  child: Row(
    ...
  ),
);
(Source: widget.dart on GitHub)
We're thrilled to see Flutter's ecosystem thriving. There are now over 1000 packages that work with Flutter (for example: SQLite, Firebase, Facebook Connect, shared preferences, GraphQL, and lots more), over 1700 people in our chat, and we're delighted to see our community launch new sites such as Flutter Institute, Start Flutter, and Flutter Rocks. Plus, you can now subscribe to the new Flutter Weekly newsletter, edited and published by our community.
As we look forward to our 1.0 release, we are focused on stabilization and scenario completion. Our roadmap, largely influenced by our community, currently tracks features such as making it easier to embed Flutter into an existing app, inline WebView, improved routing and navigation APIs, additional Firebase support, inline maps, a smaller core engine, and more. We expect to release new betas approximately every four weeks, and we highly encourage you to vote (👍) on issues important to you and your app via our issue tracker.
Now is the perfect time to try Flutter. You can go from zero to your first running Flutter app quickly with our Getting Started guide. If you already have Flutter installed, you can switch to the beta channel using these instructions.
We want to extend our sincere thanks for your support, feedback, and many contributions. We look forward to continuing this journey with everyone, and we can't wait to see what you build!
While Actions on the Google Assistant are available to users on more than 400 million devices, we're focused on expanding the availability of the developer platform even further. At Mobile World Congress, we're sharing some good news for our international developer community.
Starting today, you can build Actions for the Google Assistant in seven new languages.
These new additions join English, French, German, Japanese, Korean, Spanish, Brazilian Portuguese, Italian, and Russian, bringing our total count of supported languages to 16! You can develop for all of them using Dialogflow and its natural language processing capabilities, or directly with the Actions SDK. And we're not stopping here; expect more languages to be added later this year.
If you localize your apps in these new languages, you won't just be among the first Actions available in the new locales; you'll also earn rewards while you do it! And if you're new to Actions on Google, check out our community program* to learn how you can snag an exclusive Google Assistant t-shirt and up to $200 in monthly Google Cloud credit by publishing your first Action. We've already seen partners take advantage of previously launched languages: Bring!, for example, is now available in both English and German.
Besides supporting new languages, we're also making it easier to build your Action for global audiences. First, we recently added support for building with templates—creating an Action by filling in a Google Sheet, without a single line of code—in French, German, and Japanese. For example, TF1 used templates in French to build Téléfoot, an engaging World Cup-themed trivia game with famous commentators included as sound effects.
Additionally, we've made it a little easier for you to localize your Actions into different languages by enabling you to export your directory listing information as a file. With the file in hand, you can translate offline and upload the translations to your console, making localization quicker and more organized.
But before you run off and start building Actions in new languages, take a quick tour of some of the useful developer features rolling out this week…
By the end of the year the Assistant will reach 95 percent of all eligible Android phones worldwide, and Actions are a great way for you to reach those users to help them get things done easily over voice. Sometimes, however, users may benefit from the versatility of your Android app for particularly complex or highly interactive tasks.
So today, we're introducing a new feature that lets you deep link from your Actions in the Google Assistant to a specific intent in your Android app. Here's an example of SpotHero linking from their Action to their Android app after a user purchased a parking reservation. The Android app allows the user to see more details about the reservation or redeem their spot.
As you integrate these links in your Action, you'll make it easier for your users to find what they're looking for and to move seamlessly to your Android app to complete their user journey. This new feature will roll out over the coming weeks, but you can check out our developer documentation for more information on how to get started.
We're also introducing askForPlace, a new conversation helper that integrates the Google Places API so the Assistant can understand location-based user queries mid-conversation. With the new helper, the Assistant leverages Google Maps' location and points of interest (POI) expertise to provide fast, accurate places for all your users' location queries. Once the location details have been sorted out with the user, the Assistant returns the conversation to your Action so the user can finish the interaction.
So whether your business specializes in delivering a beautiful bouquet of flowers or a piping hot pepperoni pizza, you no longer need to spend time designing models for gathering users' location requests; instead, you can focus on your Action's core experience.
Let's take a look at an example of how Uber uses the askForPlace helper to help their users book a ride.
In the example interaction, once the Uber Action asked the user "Where would you like to go?", the developer triggered the askForPlace helper to handle location disambiguation. The user is still speaking with Uber, but the Assistant handles all user input behind the scenes until a drop-off location is resolved. From there, Uber wraps up the interaction and dispatches a driver.
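If you're building with the actions-on-google Node.js client library, triggering the helper might look roughly like the sketch below. The intent names ('request_ride' and 'ride_destination') are hypothetical, and the exact helper surface may differ from this sketch; consult the askForPlace docs for the definitive usage.

// Rough sketch using the actions-on-google Node.js client library.
// Intent names here are hypothetical examples.
const { dialogflow, Place } = require('actions-on-google');
const app = dialogflow();

// Hand the conversation to the Assistant to resolve a location.
app.intent('request_ride', (conv) => {
  conv.ask(new Place({
    prompt: 'Where would you like to go?',
    context: 'To book your ride',  // tells the user why the bot is asking
  }));
});

// The Assistant returns control along with the resolved place.
// In Dialogflow, map this intent to the actions_intent_PLACE event.
app.intent('ride_destination', (conv) => {
  const place = conv.arguments.get('PLACE');
  if (place) {
    conv.close(`Great, booking a ride to ${place.formattedAddress}.`);
  } else {
    conv.close("Sorry, I couldn't find that destination.");
  }
});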
Head over to the askForPlace docs to learn how to create a better user experience for your customers.
And to wrap up our new feature announcements, today we're introducing an improved experience for users who come back to your Action regularly—without any work required on your end. Specifically, if users consistently return, we'll cut back on the introductory lead-in to get them into your Action as quickly as possible.
Today's updates are part of our commitment to improving the platform for developers, and making the Google Assistant and Actions on Google more widely available around the globe. If you have ideas or requests that you'd like to share with our team, don't hesitate to join the conversation.
*Some countries are not eligible to participate in the developer community program; please review the terms and conditions.
With ARCore and Google Lens, we're working to make smartphone cameras smarter. ARCore enables developers to build apps that can understand your environment and place objects and information in it. Google Lens uses your camera to help make sense of what you see, whether that's automatically creating contact information from a business card before you lose it, or soon being able to identify the breed of a cute dog you saw in the park. At Mobile World Congress, we're launching ARCore 1.0 along with new support for developers, and we're releasing updates for Lens and rolling it out to more people.
ARCore, Google's augmented reality SDK for Android, is out of preview and launching as version 1.0. Developers can now publish AR apps to the Play Store, and it's a great time to start building. ARCore works on 100 million Android smartphones, and advanced AR capabilities are available on all of these devices. It works on 13 different models right now (Google's Pixel, Pixel XL, Pixel 2 and Pixel 2 XL; Samsung's Galaxy S8, S8+, Note8, S7 and S7 edge; LGE's V30 and V30+ (Android O only); ASUS's Zenfone AR; and OnePlus's OnePlus 5). And beyond those available today, we're partnering with many manufacturers to enable their upcoming devices this year, including Samsung, Huawei, LGE, Motorola, ASUS, Xiaomi, HMD/Nokia, ZTE, Sony Mobile, and Vivo.
Making ARCore work on more devices is only part of the equation. We're bringing developers additional improvements and support to make their AR development process faster and easier. ARCore 1.0 features improved environmental understanding that enables users to place virtual assets on textured surfaces like posters, furniture, toy boxes, books, cans and more. Android Studio Beta now supports ARCore in the Emulator, so you can quickly test your app in a virtual environment right from your desktop.
Everyone should get to experience augmented reality, so we're working to bring it to people everywhere, including China. We'll be supporting ARCore in China on partner devices sold there—starting with Huawei, Xiaomi and Samsung—to enable them to distribute AR apps through their app stores.
We've partnered with a few great developers to showcase how they're planning to use AR in their apps. Snapchat has created an immersive experience that invites you into a "portal"—in this case, FC Barcelona's legendary Camp Nou stadium. Visualize different room interiors inside your home with Sotheby's International Realty. See Porsche's Mission E Concept vehicle right in your driveway, and explore how it works. With OTTO AR, choose pieces from an exclusive set of furniture and place them, true to scale, in a room. Ghostbusters World, based on the film franchise, is coming soon. In China, place furniture and over 100,000 other pieces with Easyhome Homestyler, see items and place them in your home when you shop on JD.com, or play games from NetEase, Wargaming and Game Insight.
With Google Lens, your phone's camera can help you understand the world around you, and we're expanding availability of the Google Lens preview. With Lens in Google Photos, when you take a picture, you can get more information about what's in your photo. In the coming weeks, Lens will be available to all Google Photos English-language users who have the latest version of the app on Android and iOS. Also over the coming weeks, English-language users on compatible flagship devices will get the camera-based Lens experience within the Google Assistant. We'll add support for more devices over time.
And while it's still a preview, we've continued to make improvements to Google Lens. Since launch, we've added text selection features, the ability to create contacts and events from a photo in one tap, and—in the coming weeks—improved support for recognizing common animals and plants, like different dog breeds and flowers.
Smarter cameras will enable our smartphones to do more. With ARCore 1.0, developers can start building delightful and helpful AR experiences right now. And Lens, powered by AI and computer vision, makes it easier to search and take action on what you see. As these technologies continue to grow, we'll see more ways they can help people have fun and get more done on their phones.
Tech entrepreneurs are changing the world through their own creativity and passion. To celebrate Europe's thriving developer and entrepreneurial scene and honor its most promising tech companies, we founded the Digital Top 50 Awards in 2016, in association with McKinsey and Rocket Internet.
The 2018 edition of the awards is now open for applications; companies with a digital product or service from the EU and EFTA countries can apply on the Digital Top 50 website until April 1, 2018.
All top 50 companies will receive free tickets and showcase space at Tech Open Berlin on June 20–21, 2018, where the final winners in each category will be announced. The winner in the Tech for Social Impact category will be granted a cash prize of 50,000 euros, and all five winners will be provided with support from the founding partners to scale their businesses further—through leading professional advice, structured consulting and coaching programs, as well as access to a huge network of relevant industry contacts.
Helping people embrace new digital opportunities is at the heart of our Grow with Google initiative in Europe. With the DT50 awards, we hope to recognize a new generation of startups and scale-ups, and help them grow further and realize their dreams.
We're excited to announce the three new startups joining Launchpad Studio, our six-month mentorship program tailored to help applied-machine-learning startups build great products using the most advanced tools and technologies available. We support these startups by leveraging platforms like Google Cloud Platform, TensorFlow, and Android, while also providing one-on-one support from product and research experts across several Google teams, including Google Cloud, Verily, X, Brain, and ML Research. Launchpad Studio has also enlisted a number of top industry practitioners and thought leaders to help ensure Studio startups are successful over the long term. These three startups were selected for the novel ways they've applied ML to important challenges in the healthcare industry:
Treating heart failure in the US is currently estimated to cost roughly $40 billion annually, and with the continued aging of the US population, the impact of congestive heart failure is expected to increase substantially.
Through lightweight, low-cost, cloth-based form factors, Nanowear can capture and transmit medical-grade data directly from the skin, enabling deep analytics and prescriptive recommendations. As a first product application, Nanowear's SimpleSense aims to transform congestive heart failure management.
Nanowear intends to develop predictive models that provide both physicians and patients with leading indicators and data to anticipate events that could lead to hospitalization. Combining these datasets with deep machine learning capabilities will position Nanowear at the epicenter of the next generation of telemedicine and connected-self healthcare.
With the big data revolution, the medical and scientific communities have more information to work with than in all of history combined. However, with such a wealth of information, it is increasingly difficult to differentiate productive leads from dead ends.
Artificial intelligence and machine learning powered by systems biology can organize, validate, predict and compare the overabundance of information. Owkin builds mathematical models and algorithms that can interpret omics, visual data, biostatistics and patient profiles.
Owkin is focused on federated learning in healthcare to overcome the data sharing problem, building collective intelligence from distributed data.
A low ratio of healthcare specialists to patients and a lack of interoperability between medical devices cause exam results in Brazil to take an average of 60 days to be ready, cost hundreds of dollars, and leave millions of people with no access to quality healthcare.
The standard solution to this problem is telemedicine, but the lack of direct, automatic communication with medical devices and of pre-processing AI behind it hurts scalability, resulting in very low adoption worldwide.
Portal Telemedicina is a digital healthcare platform that provides reliable, fast, low-cost online diagnostics to hundreds of cities in Brazil. Thanks to its communication protocols and AI automation, the solution enables interoperability across systems and patients, handling exams seamlessly from medical devices to diagnostics. The company draws on a large proprietary dataset and uses Google's TensorFlow to train machine learning algorithms on millions of images and correlated health records to predict pathologies with human-level accuracy.
By leveraging artificial intelligence to empower doctors, the startup already helps millions of people in Brazil and aims to expand to provide universal access to healthcare.
Each startup will get tailored, equity-free support, with the goal of successfully completing an ML-focused project during the term of the program. To support this process, we provide resources including deep engagement with engineers in Google Cloud, Google X, and other product teams, as well as Google Cloud credits. We also include both Google Cloud Platform and G Suite training in our engagement with all Studio startups.
Based in San Francisco, Launchpad Studio is a fully tailored product development acceleration program that matches top ML startups and experts from Silicon Valley with the best of Google - its people, network, and advanced technologies - to help accelerate applied ML and AI innovation. The program's mandate is to support the growth of the ML ecosystem, and to develop product methodologies for ML.
Launchpad Studio is looking to work with the best and most game-changing ML startups from around the world. While we're currently focused on startups in the healthcare and biotech space, we'll soon be announcing other industry verticals, and any startup applying AI/ML technology to a specific industry vertical can apply on a rolling basis.
The AMP story format is a recently launched addition to the AMP Project that provides content publishers with a mobile-focused format for delivering news and information as visually rich, tap-through stories.
Some stories are best told with text, while others are best expressed through images and videos. On mobile devices, users browse lots of articles but engage in depth with few. Images, videos, and graphics help publishers grab their readers' attention as quickly as possible and keep them engaged through immersive and easily consumable visual information.
Recently, as with many new or experimental features within AMP, contributors from multiple companies — in this case, Google and a group of publishers — came together to work toward building a story-focused format in AMP. The collective desire was that this format offer new, creative and visually rich ways of storytelling specifically designed for mobile.
The mobile web is great for distributing and sharing content, but mastering performance can be tricky. Creating visual stories on the web with the fast and smooth performance that users have grown accustomed to in native apps can be challenging. Getting these key details right often poses prohibitively high startup costs, particularly for small publishers.
AMP stories are built on the technical infrastructure of AMP to provide a fast, beautiful experience on the mobile web. Just like any web page, a publisher hosts an AMP story HTML page on their site and can link to it from any other part of their site to drive discovery. And, as with all content in the AMP ecosystem, discovery platforms can employ techniques like pre-renderable pages, optimized video loading and caching to optimize delivery to the end user.
AMP stories aim to make the production of stories as easy as possible from a technical perspective. The format comes with preset but flexible layout templates, standardized UI controls, and components for sharing and adding follow-on content.
Yet, the design gives great editorial freedom to content creators to tell stories true to their brand. Publishers involved in the early development of the AMP stories format — CNN, Conde Nast, Hearst, Mashable, Meredith, Mic, Vox Media, and The Washington Post — have brought together their reporters, illustrators, designers, producers, and video editors to creatively use this format and experiment with novel ways to tell immersive stories for a diverse set of content categories.
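To give a flavor of what creators work with, here's a minimal, illustrative skeleton of an AMP story: a single cover page composed of stacked grid layers. The asset path and title are placeholders, and the required AMP boilerplate styles are elided; see the tutorial and documentation for a complete, current example.

<!doctype html>
<html amp lang="en">
  <head>
    <meta charset="utf-8">
    <script async src="https://cdn.ampproject.org/v0.js"></script>
    <script async custom-element="amp-story"
        src="https://cdn.ampproject.org/v0/amp-story-0.1.js"></script>
    <title>A Story Title</title>
    <link rel="canonical" href="https://example.com/my-story.html">
    <!-- Required AMP boilerplate styles elided for brevity. -->
  </head>
  <body>
    <amp-story standalone>
      <!-- Each amp-story-page is a full-screen slide the user taps through. -->
      <amp-story-page id="cover">
        <!-- Layers stack back to front; "fill" stretches media edge to edge. -->
        <amp-story-grid-layer template="fill">
          <amp-img src="cover.jpg" width="720" height="1280"
              layout="responsive"></amp-img>
        </amp-story-grid-layer>
        <amp-story-grid-layer template="vertical">
          <h1>A Story Title</h1>
        </amp-story-grid-layer>
      </amp-story-page>
    </amp-story>
  </body>
</html>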
Today AMP stories are available for everyone to try on their websites. As part of the AMP Project, the AMP story format is free and open for anyone to use. To get started, check out the tutorial and documentation. We are looking forward to feedback from content creators and technical contributors alike.
Also, starting today, you can see AMP stories on Google Search. To try it out, search for publisher names (like the ones mentioned above) within g.co/ampstories using your mobile browser. At a later point, Google plans to bring AMP stories to more products across Google, and expand the ways they appear in Google Search.
"Write Once, Run Anywhere." That was the promise of Java back in the 1990s. You could write your Java code on one platform, and it would run on any CPU implementing a Java Virtual Machine.
But for developers who need to squeeze every bit of performance out of their applications, that's not enough. Since the dawn of computing, performance-minded programmers have used insights about hardware to fine tune their code.
Let's say you're working on code for which speed is paramount, perhaps a new video codec or a library to process tensors. There are individual instructions, like fused multiply-add, and entire instruction sets, like SSE2 and AVX, that can give the critical portions of your code a dramatic speed boost.
Here's the problem: there's no way to know a priori which instructions your CPU supports. Identifying the CPU manufacturer isn't sufficient. For instance, Intel's Haswell architecture supports the AVX2 instruction set, while Sandy Bridge doesn't. Some developers resort to desperate measures like reading /proc/cpuinfo to identify the CPU and then consulting hardcoded mappings of CPU IDs to instructions.
Enter cpu_features, a small, fast, and simple open source library to report CPU features at runtime. Written in C99 for maximum portability, it allocates no memory and is suitable for implementing fundamental functions and running in sandboxed environments.
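For example, here's a sketch of runtime dispatch using the library's x86 API, along the lines of the project's README; the optimized kernels named in the comments are hypothetical stand-ins for your own code.

#include <stdio.h>
#include "cpuinfo_x86.h"  /* cpu_features' x86 header */

int main(void) {
  /* Query the CPU once at startup; the library allocates no memory. */
  const X86Info info = GetX86Info();

  if (info.features.avx2) {
    printf("AVX2 available: dispatching to the vectorized kernel.\n");
    /* run_avx2_kernel();  -- hypothetical optimized path */
  } else if (info.features.sse2) {
    printf("Falling back to the SSE2 code path.\n");
  } else {
    printf("Using the portable scalar implementation.\n");
  }
  return 0;
}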
The library currently supports x86, ARM/AArch64, and MIPS processors, and we'll be adding to it as the need arises. We also welcome contributions from others interested in making programs "write once, run fast everywhere."
Today, we're launching a new interactive education program for Universal App campaigns (UAC). UAC makes it easy for you to reach users and grow your app business at scale. It uses Google's machine learning technology to help find the customers that matter most to you, based on your business goals — across Google Play, Google.com, YouTube and the millions of sites and apps in the Display Network.
UAC is a shift in the way you market your mobile apps, so we designed the program's first course to help you learn how to get the best results from UAC.
So, take the course today and let us know what you think. You can also read more about UAC best practices here and here.
Happy New Year and hope to see you in class!