Posted by Erica Hanson, Developer Student Clubs Program Manager, Google
Google Developer Student Clubs (DSC) are university-based community groups for students who are interested in Google's developer technologies. Each year, Google puts out a call to the entire global DSC community, asking students to answer one simple question: Can you solve a local problem in your community by building with Google's technologies?
This event is known as the DSC Solution Challenge, and this year's winners went above and beyond to answer the call - so much so that we couldn't pick just one winner, we chose 10.
While we initially thought we were the ones sending out the challenge, these young developers instead flipped the script back on us. Through their innovative designs and uncompromised creative spirit, they’ve pushed our team here at Google to stretch our thinking about how developers can build a more hopeful future.
With this, we’re welcoming these passionate students and anyone interested to the virtual Solution Challenge Demo Day on August 26th, where the students will present their winning ideas in detail.
Ahead of the event, learn more about this incredible group of thinkers and their solutions below.
Maria Pospelova, Wei Wen Qing, and Almo Gunadya Sutedjo developed FreeSpeak, software that analyzes presentations using machine learning and video/audio analysis tools, leveraging TensorFlow and Google Cloud's Natural Language to give individual feedback and tips as a "virtual coach."
“We’ve loved connecting with talented people from around the world and exchanging ideas with them. We see that we can provide impact not only to our local neighborhood, but also around the world and help people. This motivates us to work a lot harder.”
Anushka Purohit, Anupam Tiwari, and Neel Desai created CoronaAI, a TensorFlow-based technology that helps examine COVID-19 data. Specifically, the device is a band worn around a patient's chest that uses electrodes to extract real-time images of the lungs. The band connects to a monitor that lets doctors examine patients in real time without being near them.
“We're honestly huge fans of the Google Cloud Platform because of its simplicity, familiarity, and the large number of resources available. Developing this project was the best learning experience.”
Syed Moazzam Maqsood, Krinza Momin, Muhammad Ahmed Gul, and Hussain Zuhair built Worthy Walk: an Android and iOS app that gives users a platform to achieve health goals by walking, running, or cycling. To encourage users, Worthy Walk provides a built-in currency called Knubs that can be redeemed for discounts at local businesses, shops, and startups.
“Being a part of DSC means friendship - sharing knowledge and resources - all while developing a social infrastructure that gives people the power to build a global community that works for all of us.”
Yuna Kim, Young hoon Jo, Jeong yoon Joo, and Sangbeom Hwang created Simhae, a platform built with Flutter and Google Cloud that gives users access to basic information and activities that encourage them to attend self-help gatherings run by suicide prevention centers. They believe this experience is an important step toward building solidarity among suicide survivors.
“It's so nice to have a chance to meet more diverse people. Through these communities, I can make up for my shortcomings and share more information with those who have different experiences than me - all while developing my own potential.”
Elvis Antwi Sarfo, Yaw Barnieh Anane, Ampomah Ata Acheampong Prince, and Perditha Abena Acheampong built Emergency Response Assistance, an Android application that helps health authorities post the latest first aid steps to educate the public and lets victims report emergencies with the click of a button. Emergency response teams can also track the exact location of victims on a map.
“DSC is not just a community, it’s an inspiration. It’s outstanding how the platform has brought all of these students, lecturers, and teaching assistants, who are all so passionate about using tech to solve problems, together.”
Muhammad Alan Nur, Pravasta Caraka Bramastagiri, Eva Rahmadanti, and Namira Rizqi Annisa created Tulibot: an integrated assistive technology, built with the Google Speech API, that bridges communication between deaf people and society. The group made two main devices: Smart Glasses and Smart Gloves. The Smart Glasses help hearing-impaired users communicate by showing real-time responses from their interlocutors directly on the glasses. The Smart Gloves transcribe gesture input into audio output using gesture-to-text technology.
“This has been an amazing opportunity for us because with this challenge, we can learn many things like coding, management, business, and more. The special materials we can get directly from Google are so helpful.”
Sze Yuk Yin, Kwok Ue Nam, Ng Chi Ting, Chong Cheuk Hei, and Silver Ng developed Picare, a healthcare matching platform built with Flutter and Google Machine Learning to help elderly people in Hong Kong. Users can research, schedule, and pay caregivers directly through the app.
“Our community hosted several workshops ranging from design thinking to coding techniques. This boosted our development by introducing us to various state-of-the-art technologies, such as Machine Learning and Cloud computing, which helped us reach our development goals.”
Vo Ngoc Khanh Linh, Tran Lam Bao Khang, Nguyen Dang Huy, and Nguyen Thanh Nhan built Shareapy: a digital support group app for Android that brings together people who share similar problems, regardless of age, gender, religion, or financial status. After an extremely rigorous user testing phase, the team got to see all that TensorFlow and Firebase can do.
“My team loves Firebase so much. One of our team members now uses it to help do some of his homework problems.”
Victor Chinyavada, Marvellous Humphery Chirunga, and Lavender Zandile Tshuma started Capstone, a service hosted on the Google Cloud Platform that aims to combat plagiarism among students, authors, and researchers. In particular, the technology aims to develop more effective algorithms that will incorporate the latest in big data, artificial intelligence, and data mining. As a team, the group bonded over applying technologies from Google to their project, but their real takeaway was working together to solve problems.
“To submit our project on time, we started all night hackathons, which helped us finish all of our work while having fun and getting to know each other better.”
Praveen Agrawal built MiCamp, an Android app that holds all the info students from his campus need. Features include a calendar with upcoming campus events, student profiles, a used book marketplace, hostel management, online food ordering, and more. As a team of one, Praveen needed to speed up his development, so he applied his new knowledge of Flutter to finish.
“I’d heard of technologies like Flutter, but never used them until joining DSC; they inspired us to use those technologies, which really improved my solution.”
Posted by the Coral Team
Summer has arrived along with a number of Coral updates. We're happy to announce a new partnership with balena that helps customers build, manage, and deploy IoT applications at scale on Coral devices. In addition, we've released a series of updates to expand platform compatibility, make development easier, and improve the ML capabilities of our devices.
First up, our Edge TPU runtime is now open source and available on GitHub, including scripts and instructions for building the library for Linux and Windows. Customers running a platform that is not officially supported by Coral, including ARMv7 and RISC-V, can now compile the Edge TPU runtime themselves and start experimenting. An open source runtime is easier to integrate into your customized build pipeline, enabling support for creating Yocto-based images as well as other distributions.
Coral customers can now also use the Mini PCIe and M.2 accelerators on the Microsoft Windows platform. New Windows drivers for these products complement the previously released Windows drivers for the USB accelerator and make it possible to start prototyping with the Coral USB Accelerator on Windows and then to move into production with our Mini PCIe and M.2 products.
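For developers prototyping in Python, loading the Edge TPU delegate looks the same on Windows and Linux apart from the delegate library name. Here is a minimal sketch, assuming the tflite_runtime package is installed and an Edge TPU-compiled model is on disk (the model filename is a hypothetical placeholder):

```python
# Minimal sketch: run inference on the Edge TPU from Python.
# Assumes tflite_runtime is installed and an Edge TPU-compiled model
# exists; "model_edgetpu.tflite" is a hypothetical filename.
import platform

import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# The delegate library name differs per platform.
EDGETPU_LIB = {
    'Windows': 'edgetpu.dll',
    'Darwin': 'libedgetpu.1.dylib',
    'Linux': 'libedgetpu.so.1',
}[platform.system()]

interpreter = Interpreter(
    model_path='model_edgetpu.tflite',
    experimental_delegates=[load_delegate(EDGETPU_LIB)])
interpreter.allocate_tensors()

# Feed a dummy input matching the model's expected shape and dtype.
input_details = interpreter.get_input_details()[0]
interpreter.set_tensor(
    input_details['index'],
    np.zeros(input_details['shape'], dtype=input_details['dtype']))
interpreter.invoke()
output = interpreter.get_tensor(
    interpreter.get_output_details()[0]['index'])
```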
We’ve also made a number of new updates to our ML tools:
sudo apt-get update && sudo apt-get install edgetpu
We are excited to share that the balena fleet management platform now supports Coral products!
Companies running a fleet of ML-enabled devices on the edge need to keep their systems up to date with the latest security patches in order to protect data, model IP, and hardware from being compromised. Additionally, ML applications benefit from being consistently retrained to recognize new use cases with maximum accuracy. Together, Coral and balena bring simplicity and ease to the provisioning, deployment, updating, and monitoring of your ML project at the edge, moving early prototyping seamlessly toward production environments with many thousands of devices.
Read more about all the benefits of Coral devices combined with balena container technology or get started deploying container images to your Coral fleet with this demo project.
Mendel Linux (5.0 release Eagle) is now available for the Coral Dev Board and SoM and includes a more stable package repository that provides a smoother updating experience. It also brings compatibility improvements and a new version of the GPU driver.
Last but not least, we've recently released BodyPix, a Google person-segmentation model that was previously only available for TensorFlow.js, as a Coral model. This enables real-time, privacy-preserving understanding of where people (and body parts) are in a camera frame. We first demoed this at CES 2020, and it was one of our most popular demos. Using BodyPix, we can remove people from the frame, display only their outlines, and aggregate over time to see heat maps of population flow.
Here are two possible applications of BodyPix: Body-part segmentation and anonymous population flow. Both are running on the Dev Board.
We're excited to add BodyPix to the portfolio of projects the community is using to extend our models far beyond our demos - including tackling today's biggest challenges. For example, Neuralet has taken our MobileNet V2 SSD Detection model and used it to implement Smart Social Distancing. Using the bounding boxes from person detection, they can compute a region for safe distancing and let users know if social distance isn't being maintained. The best part is that this is done without any sort of facial recognition or tracking; with Coral, we can accomplish this in real time in a privacy-preserving manner.
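As a rough illustration of the idea (not Neuralet's actual code), the core check reduces to comparing pairwise distances between detection centroids against a calibrated threshold; the calibration constant below is a hypothetical placeholder:

```python
# Sketch of a social-distancing check from person-detection boxes.
# Assumes boxes are (xmin, ymin, xmax, ymax) in pixels and that
# PIXELS_PER_METER has been calibrated for the camera; Neuralet's
# actual implementation may differ.
from itertools import combinations
import math

PIXELS_PER_METER = 100.0   # hypothetical calibration value
MIN_DISTANCE_M = 2.0       # safe-distance threshold in meters

def centroid(box):
    xmin, ymin, xmax, ymax = box
    return ((xmin + xmax) / 2.0, (ymin + ymax) / 2.0)

def too_close(boxes):
    """Return index pairs of detections closer than the safe distance."""
    violations = []
    for (i, a), (j, b) in combinations(enumerate(map(centroid, boxes)), 2):
        dist_m = math.dist(a, b) / PIXELS_PER_METER
        if dist_m < MIN_DISTANCE_M:
            violations.append((i, j))
    return violations
```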
We can't wait to see more projects that the community makes with BodyPix. Beyond anonymous population flow, there are endless possibilities with background and body-part manipulation. Let us know what you come up with on our community channels, including GitHub and StackOverflow.
We are excited to share all that Coral has to offer as we continue to evolve our platform. For a list of worldwide distributors, system integrators and partners, including balena, visit the Coral partnerships page. Please visit Coral.ai to discover more about our edge ML platform and share your feedback at coral-support@google.com.
Updates:
August 19, 2020: Revised guidance for Google Cloud client libraries and Apache Beam/Cloud Dataflow
August 12, 2020: Added guidance for Dataproc hadoop connector
July 13, 2020: Enumerate endpoints for JSON-RPC and Global HTTP Batch. Include examples of Non-Global HTTP Batch endpoints for contrast.
July 8, 2020: Limit usage of JSON-RPC and Global HTTP batch endpoints to existing projects only. Starting July 15 (JSON-RPC) and July 16 (Global HTTP Batch) we will no longer allow new projects to call these two endpoints. Projects with calls in the last 4 weeks will continue to work until the deadline of Aug 12, 2020.
Apr 23, 2020: gcloud min version has been updated
Apr 22, 2020: error injection planned for Apr 28 is CANCELLED, JSON-RPC and Global HTTP batch endpoint will perform normally. Next error injection window will be on May 26 as scheduled.
We have invested heavily in our API and service infrastructure to improve performance and security and to add features developers need to build world-class APIs. As we make changes, we must address features that are no longer compatible with the latest architecture and business requirements.
The JSON-RPC protocol (http://www.jsonrpc.org/specification) and Global HTTP Batch (example) are two such features. Our support for these features was based on an architecture that used a single shared proxy to receive requests for all APIs. As we move toward a more distributed, high-performance architecture where requests go directly to the appropriate API server, we can no longer support these global endpoints.
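In practice, migrating usually means moving from the global /batch endpoint to an API-specific batch endpoint or to individual requests. As a hedged sketch using the Python Google API client library (the Drive API and the file IDs here are only illustrative placeholders), building the batch from the service object targets the API-specific endpoint:

```python
# Sketch: use an API-specific batch endpoint instead of the deprecated
# global one. Building the batch from the service object makes the
# client target the per-API batch path rather than the global /batch
# endpoint. Drive is just an illustrative choice of API.
from googleapiclient.discovery import build

service = build('drive', 'v3')  # assumes default credentials are set up

def on_response(request_id, response, exception):
    if exception is not None:
        print(f'request {request_id} failed: {exception}')
    else:
        print(f'request {request_id}: {response.get("name")}')

# new_batch_http_request() is scoped to this one API, which is what
# the non-global batch endpoints require.
batch = service.new_batch_http_request(callback=on_response)
batch.add(service.files().get(fileId='FILE_ID_1'))  # placeholder IDs
batch.add(service.files().get(fileId='FILE_ID_2'))
batch.execute()
```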
Posted by Soc Sieng, Developer Advocate
We are pleased to announce the launch of the official Google Pay plugin for Magento 2. The Google Pay plugin can help increase conversions by enabling a simpler and more secure checkout experience in your Magento website. When you integrate with Google Pay, your customers can complete their purchases quickly using the payment methods they’ve securely saved to their Google Accounts.
Google Pay in action.
The Google Pay plugin was built in collaboration with Unbound Commerce, is free to use, and integrates with popular payment service providers including Adyen, BlueSnap, Braintree, FirstData - Payeezy & Ucom, Moneris, Stripe, and Vantiv.
The Google Pay plugin can be installed from the Magento Marketplace by searching for "Google Pay".
Refer to the Magento Marketplace User Guide for more installation instructions.
To get started with the Google Pay plugin, you will need your Google Pay merchant identifier which can be found in the Google Pay Business Console.
Your Merchant ID can be found in the Google Pay Business Console.
Once installed, you can configure the plugin in your site’s Magento administration console by navigating to Stores > Configuration > Sales > Payment Methods and selecting the Configure button next to Google Pay.
Click on the Configure button to start the setup process.
Testing out Google Pay can be achieved in three easy steps:
You can optionally try out some of the advanced settings, which let you customize the color and type of the Google Pay button and enable Minicart integration (recommended).
Check out the Advanced Settings to further customize how and where the Google Pay button is presented in your store.
If your payment provider isn’t listed as an option in the payment gateway list, check to see if your payment provider’s plugin has built-in support for Google Pay.
When you’ve completed your testing, submit your website integration in the Google Pay Business Console. You will need to provide your website’s URL and screenshots to complete the submission.
Integrating Google Pay into your website is a great way to increase conversions and to improve the purchasing experience for your customers.
Find out more about Google Pay and the Google Pay plugin for Magento.
Do you have any questions? Let us know in the comments below or tweet using #AskGooglePayDev.
Posted by Kylie Poppen, Senior Interaction Designer, G Suite and Akshay Potnis, Interaction Designer, G Suite
You’ve just scoped out an awesome new way to solve for your customer’s next challenge, but wait, what about the design? Building an integration between your software platform and another comes with a laundry list of things to think about: your vision, your users, their experience, your partners, APIs, developer docs, and so on. Caught between two different platforms, many constraints, and limited time, you're probably wondering: how might we build the most intuitive and powerful user experience?
Imagine making a presentation, with Google Slides you have all sorts of templates to get you started, and you can build a great deck easily. But, to build a seamless integration between two software platforms, those pre-built templates don’t exist and you basically have to start from scratch. In the best case scenario, you’d create your own components and layer them on top of each other with the goal of making the UI seem just about right. But this takes time. Hours longer than you want it to. Without design guidelines, you're stuck guessing what is or is not possible, looking to other apps and emulating what they've already done. Which leads us to the reality that some add-ons have a suboptimal experience, because time is limited, and you're left to build only for what you know you can do, rather than what's actually possible.
To simplify all of this, we’re introducing the G Suite Add-ons UI Design Kit, now live on Figma. With it you can browse all of the components of G Suite Add-ons’ card-based interface, learn best practices, and simply drag-and-drop to create your own unique designs. Save the time spent recreating what an add-on will look like, so that you can spend that time thinking about how your add-on will work.
While the UI Design Kit has only been live for a little over a month, we’ve already been hearing feedback from our partners about its impact.
“Zapier connects more than 2,000 apps, allowing businesses to automate repetitive, time-consuming tasks. When building these integrations, we want to ensure a seamless experience for our customers,” said Ryan Powell, Product Manager at Zapier. “However, a partner’s UI can be difficult to navigate when starting from scratch. G Suite’s UI Design Kit allows us to build, test, and optimize integrations because we know from the start what is and is not possible inside of G Suite’s UI.”
Find and duplicate design kit
Choose a template to begin
Copy the template and detach from symbols to start editing
Helpful Hints: Features to help you iterate quickly
Build with auto layout so you don’t need to worry about the details.
Visualize your design against G Suite surfaces easily.
Documentation built right into the template.
With G Suite Add-ons, users and admins can seamlessly get their work done across their favorite workplace applications without needing to leave G Suite. With this UI Design Kit, you too can focus your time on building a great user experience inside of G Suite, while simplifying and accelerating the design process. Follow these steps to get started today:
Download the UI Design Kit
Get started with G Suite Add-ons
Hopefully this will inspire you to build more add-ons using the Cards Framework! To learn more about building for G Suite, check out the developer page, and please register for Next OnAir, which kicks off July 14th.
Posted by James Scott, Technical writer
Technical writing is simple - you merely have to explain brutally complex technologies to relentlessly unforgiving audiences. It's unsurprising that so many engineers find writing documentation is the most painful part of their job. If you would like to teach your colleagues to become writers, the good news is Google's fun and interactive technical writing course materials are free and available for everyone to use! Alternatively, if you're a developer who would like to learn how to write more clearly, you can read through the course work for yourself or convince a colleague to teach the course at your organisation!
We researched documentation extensively, and it turns out that the best sentences in the world consist primarily of words. Our self-paced and facilitator-led courses will not only help software engineers choose the right words but also help to make the whole writing process a lot less scary. Perhaps software engineers won't become William Shakespeare or even William Shatner overnight, but hopefully they will gain the confidence to write something worth publishing. As working from home becomes more common, good documentation has never been more important in enabling software engineers to work independently.
Google introduced the technical writing courses, Technical Writing One and Technical Writing Two, in 2015. Since then, thousands of Google software engineers and product managers have taken and enjoyed the courses. In February 2020, we released the courses to the world.
The classes have the following structure: students complete self-paced pre-class lessons on their own, then attend a live, facilitator-led class of interactive exercises and discussion.
Organizations can choose to host the live classes virtually or in-person.
The first course, Technical Writing One, covers the basics of technical writing. Students learn to start thinking about their audience before even putting pen to paper. For example, in one exercise, students are challenged to write instructions for putting toothpaste on a toothbrush. That might sound relatively simple, but here's the catch - your audience has never brushed their teeth before. That's not to say they have bad oral hygiene, but they don't even know what a toothbrush is. The exercise aims to get students to think about documenting a completely new technology.
Another important lesson that Technical Writing One teaches you is how to shorten the sentence length in your documentation and how to edit unnecessarily long sentences. Hopefully, once you have taken the course, you might edit the preceding sentence down to something like the following: Technical Writing One also teaches you to shorten sentences.
The course also advocates using lists instead of walls of text, so here, in list form, are some other topics it covers:
Technical Writing Two builds on the techniques from the first course and is for those who already know verbs from adverbs. The course encourages students to express their creative side. For example, in one exercise, students find the best way to illustrate technical concepts. Spoiler alert: can you spot any issues with the following diagram?
Figure 1: Finding a website through DNS
Other intermediate techniques the course covers include:
Students take part in interactive exercises and peer review with a lab partner. Technical Writing Two also includes class discussions on documentation types and how to write the dreaded first draft.
If you would like to teach the courses at your own organization, see the facilitator guides. To review the pre-work and read through the training materials, see the course overviews.
Posted by Michele Turner, Director of Product and Smart Home Ecosystem for Google Nest
To create a helpful home experience, we have focused on foundational features necessary to make it easier for people to manage their smart devices. But as people spend more and more time at home during these challenging times, it’s important that we invest in additional ways to work with developers to build a more useful connected home.
Today, at the "Hey Google" Smart Home Virtual Summit, we gave updates on our latest smart home initiatives, talked more in-depth about the new smart home controls in Android 11, and previewed some platform tools that we're investing in to make devices easier to set up and work with Google Assistant.
As many of us continue to stay home, smart devices are being used a lot more. With the biggest growth coming from entertainment devices, we’re increasing our support in this area with our Smart Home API.
Last year, we launched Google Assistant support for Smart Home Entertainment Device (SHED) types and traits, including TVs, remotes, set-top boxes, speakers, soundbars, and even game consoles from top brands like Xbox, Roku, Dish, and LG. And now, we are making these APIs public for any smart TV, set-top box, or game developer to use. SHED gives users the ability to control their favorite entertainment devices from any Assistant-enabled smart display, smart speaker, or mobile device.
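For developers new to SHED, such a device appears in your Action's SYNC response like any other smart home device, just with media-oriented traits. Here is a hedged sketch of what that payload might look like for a TV (the IDs and the exact trait selection are illustrative, not prescriptive):

```python
# Sketch of a SYNC response payload for a SHED device (a TV).
# IDs and the trait list are illustrative, not prescriptive.
sync_response = {
    "requestId": "ff36a3cc-ec34-11e6-b1a0-64510650abcf",
    "payload": {
        "agentUserId": "user-123",
        "devices": [{
            "id": "tv-1",
            "type": "action.devices.types.TV",
            "traits": [
                "action.devices.traits.OnOff",
                "action.devices.traits.Volume",
                "action.devices.traits.InputSelector",
                "action.devices.traits.MediaState",
                "action.devices.traits.TransportControl",
                "action.devices.traits.AppSelector",
            ],
            "name": {"name": "Living Room TV"},
            "willReportState": True,
        }],
    },
}
```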
With the release of Android 11, coming out later this year, we are introducing a dedicated space for smart home controls that users can find quickly and access any time. We’ve redesigned the power menu to make devices linked to Google Assistant just a button-press away.
Users with the Google Home app can choose to put all of their controls in the space, or just their favorites. Partners get this for free - there’s no new development work required. Sliders will allow users to adjust specific settings, like the temperature of a thermostat in the morning or how far to open the blinds. Users can also customize which devices are visible in the control space and whether those devices are accessible from the lock screen.
With Android 11, we want to give users a quick and easy way to check or adjust the devices in their home. And as we continue to add new surfaces for device control, it becomes more critical to ensure we have accurate state. In the coming months, we’ll be introducing tools to measure your reliability and latency to help improve and debug state reporting. Once you hit key targets for reliability and latency, we will shift from a default of querying your state to using report state to render stateful controls. This will reduce query volume on your servers and improve the user experience with accurate device state across multiple surfaces.
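If you haven't implemented Report State yet, it is a call to the Home Graph API. The following Python sketch assumes a service account authorized for the Home Graph scope; the agent user ID, device ID, and states are illustrative:

```python
# Sketch: proactively report device state to the Home Graph API.
# Assumes a service account authorized for Home Graph; the key file
# name, agentUserId, device IDs, and states are all illustrative.
import uuid

from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    'service-account.json',  # hypothetical key file
    scopes=['https://www.googleapis.com/auth/homegraph'])
homegraph = build('homegraph', 'v1', credentials=creds)

homegraph.devices().reportStateAndNotification(body={
    'requestId': str(uuid.uuid4()),
    'agentUserId': 'user-123',
    'payload': {
        'devices': {
            'states': {
                'tv-1': {'on': True, 'currentVolume': 11},
            },
        },
    },
}).execute()
```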
In addition to state accuracy, the best user experience comes with strong reliability and low latency. To help achieve both, we launched local execution with the Local Home SDK back in April. As part of the Smart Home platform, local fulfillment extends your Smart Home Action and routes commands to devices through the local network, benefitting users with reduced latency and higher reliability by removing an additional cloud hop.
To ease the development process, the Local Home platform supports both Chrome and Node.js runtime environments, as well as building and testing apps on local development machines or personal servers. Once you've deployed your local fulfillment app, users will benefit immediately without having to upgrade hardware or manually update firmware. Nanoleaf and Yeelight have already enabled local execution for their devices. It’s available to all developers through the Actions on Google Console.
Implementing a high-quality integration is important - it reduces churn and delights users. Yet it’s still challenging to get users to discover these features, so we’re doing a couple of things on our end to increase the funnel of users linked to your Action. We are excited to launch OAuth-based App Flip on the developer console today. With App Flip, we streamline the standard account linking flow by flipping users from the Google Home app to the partner app to gather consent, without requiring users to re-enter their credentials.
To increase awareness of your Action, you will soon be able to initiate the account linking flow within your app. There will also be more opportunities to increase awareness through feature promotion and in-app notification using your app, and we will have more details on discovery and linking opportunities later this year.
We know that visibility into the behavior of your smart home integrations is critical, from debugging in early development to detailed analytics in production. To enhance developer productivity, we've integrated with the powerful monitoring and troubleshooting tools available in Google Cloud Platform to provide detailed event logs and usage metrics for smart home projects.
We’ve also recently launched new tools to help developers improve the reliability of their integrations and aid in debugging and resolving issues quickly. You can view aggregate metrics directly in the developer console, or build logs-based metrics to find trends and gain deeper insights into common problems. Google Cloud Platform also enables developers to build custom alerts to quickly identify production issues.
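As one example of the kind of logs-based metric you might define, the sketch below uses the google-cloud-logging Python library to count error-level entries; the filter string is a hypothetical placeholder you would adapt to the log entries your smart home project actually emits:

```python
# Sketch: create a logs-based metric counting smart home errors.
# The filter below is hypothetical; adapt it to your project's logs.
from google.cloud import logging

client = logging.Client()
metric = client.metric(
    'smarthome_execution_errors',
    filter_='resource.type="assistant_action_project" AND severity>=ERROR',
    description='Count of smart home intent handling errors')
if not metric.exists():
    metric.create()
```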
You can also find a new Smart Home Analytics Dashboard accessible from the developer console and pre-populated with charts for common metrics such as Daily Active Users and Request Breakdown — giving you an overall picture of your integration's behavior. This dashboard is powered by new usage and performance metrics in Google Cloud Monitoring, giving you the power to set alerts and be notified if your integration has an issue. Get started today by going to the “Analytics” tab in the Actions console or the Google Cloud console to check out these new logs, metrics, and alerting capabilities for your projects!
Last year, we announced that we’re moving from the Works with Nest program to Works with Google Assistant, building on a foundation of privacy and data security to ensure users have confidence in how Google and our partners protect the consumer’s home data.
As part of that effort, we created the Device Access program to provide a way for partners to integrate directly with Nest devices. To support the Device Access program, we will soon launch the Device Access Console, a self-serve console that guides commercial developers through the different project phases: development, certification and pilot testing, and finally production.
For commercial developers, the console provides a way to manage their various projects and integrations. It also provides development guides and trait documentation for all supported Nest devices. Individuals who want to create their own automations with their Nest devices will be able to do so with this console, but only for the homes they are a member of.
One of the most popular features with Nest users is the ability to automatically trigger routines based on whether users are Home or Away. Later this year, similar functionality will be available with Google Assistant through occupancy detection.
Sleep is also a critical part of maintaining our overall well-being as we stay more at home. Last year we launched the Gentle Sleep & Wake feature with Philips Hue, which slowly brightens or dims the lights at a specific time or can be tied to your morning alarms. Just say, “Turn on Gentle Wake up” to your bedroom speaker to ‘set it and forget it.’ The Light Effects trait is now public, so all developers can integrate their native Sleep or Wake experiences - in fact, LIFX has recently launched! We encourage you to build and integrate your own unique experiences. We’ll have a larger launch moment later this year when we release emulated Sleep and Wake effects, so that they work out of the box with any smart light!
Another way partners will be able to innovate on our platform and provide more helpful experiences to users is by extending personal routines with custom routines designed by partners, available in the coming months. Developers will be able to create and suggest routines, not just for their devices, but that can work with other devices in a customer’s home. You’ll be able to create solutions based on your core business that bring value to your customers - whether it’s wellness, cleaning, or entertainment. Users will be able to browse and opt in to approved routines and choose to have Nest and other devices react and participate in that routine.
Our Smart Home efforts have grown significantly over the past several years. We now have integrations with thousands of partners covering all the major connected product categories and devices, and will continue our ambitious goal to build deeper in-home integrations. Be sure to review our docs/samples/videos to learn about all the cool new stuff, and connect with us on our dev communities.
Posted by Toni Klopfenstein, Developer Advocate
Easy account linking helps create more helpful user experiences with Google Assistant and your products and services. Today, we are launching OAuth-based App Flip, a new feature to help you build a better mobile account linking experience. App Flip allows your users to seamlessly link their accounts to Google without having to re-enter their credentials if they have already signed in to your app on their device. This streamlined authorization flow helps minimize users dropping out of the account linking process, and makes it easier for users to integrate your smart devices into their smart homes. This feature is now available for all Smart Home Actions and is available in beta for other Conversational Actions.
When App Flip is implemented within your application, the Google Assistant or Google Home app automatically flips to your app when users initiate the account linking process. Once users consent to link their accounts, your app then requests an authorization code from your server, and Google handles the remainder of the account linking flow.
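Behind the scenes, the remainder of the flow is standard OAuth: Google exchanges that authorization code at your token endpoint, exactly as in the browser-based linking flow. Here is a minimal Flask sketch of such an endpoint; the in-memory code store and the validation details are hypothetical stand-ins for your own logic:

```python
# Minimal sketch of the OAuth token endpoint Google calls to exchange
# the authorization code returned via App Flip. The in-memory code
# store and validation are hypothetical stand-ins; a real endpoint
# must also verify client_id/client_secret and handle
# grant_type=refresh_token.
import secrets

from flask import Flask, jsonify, request

app = Flask(__name__)
code_store = {}  # hypothetical: maps one-time auth codes to user IDs


@app.route('/oauth/token', methods=['POST'])
def token():
    if request.form.get('grant_type') != 'authorization_code':
        return jsonify(error='unsupported_grant_type'), 400
    user_id = code_store.pop(request.form.get('code'), None)
    if user_id is None:
        return jsonify(error='invalid_grant'), 400
    # Issue tokens tied to user_id (token persistence omitted here).
    return jsonify(
        token_type='Bearer',
        access_token=secrets.token_urlsafe(32),
        refresh_token=secrets.token_urlsafe(32),
        expires_in=3600)
```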
Account Linking Flows
By flipping directly to your local app, users can skip logging in again and can simply choose to link their accounts.
You can implement App Flip by enabling your app to accept a deep link from Google. We also have test tools available for both Android and iOS to help you verify your app integration with App Flip.
For more details on how to use App Flip with your integration, check out the docs, or the sample Android and iOS apps. For Conversational Actions, please sign up for the Beta program.
We want to hear from you, so continue sharing your feedback with us, and engage with other Action developers in the /r/GoogleAssistantDev community. Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. We can’t wait to see what you build!
Back in April, we released the first set of Smart Home Entertainment Device (SHED) types, including TV, set-top box, and remote, as well as the traits AppSelector, InputSelector, MediaState, TransportControl, and Volume. We are excited to announce the release of new SHED types and traits. These new device types and traits complement the original set we released earlier this year and help build out a more complete solution for smart home media and gaming devices. By implementing these types and traits on your entertainment devices, you can enable users to fully access device and media controls from any Assistant surface.
SHED Types and Traits
To expand the SHED options, we've released the following new device types for Smart Home:
We've also released the following new trait:
To ensure a consistent, high-quality experience for all end users, each of these device types requires your service to report activityState and playbackState to Google using the Report State API. This requirement improves portability between media devices and helps the Assistant better understand user intents for these devices. By implementing the complete set of recommended device traits, you can further improve the quality of your smart home Action and improve device targeting for media playback command fulfillment.
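As a rough sketch, the states a media device reports would include both fields; the device ID below is illustrative, and the enum values shown are examples of those the MediaState trait defines:

```python
# Sketch of the state object a media device might report for the
# MediaState trait; the device ID is illustrative.
media_states = {
    'devices': {
        'states': {
            'streaming-box-1': {
                'activityState': 'ACTIVE',    # e.g. INACTIVE, STANDBY, ACTIVE
                'playbackState': 'PLAYING',   # e.g. PAUSED, STOPPED, PLAYING
            },
        },
    },
}
```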
For more information on how to implement these new device features, check out the docs and samples. You can also join us at our "Hey Google" Smart Home Virtual Summit to learn more about these new features.
We want to hear from you, so continue sharing your feedback with us through the issue tracker, and engage with other smart home developers in the /r/GoogleAssistantDev community. Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. We can’t wait to see what you build!
Posted by Jose Ugia and Checkout.com
We sat down with Riaz Bordie, the CTO of Checkout.com, a leading international provider of online payment solutions, to get his advice to merchants and the developer community on how to think about future-proofing payments in the uncertain world we live in today.
Jose Ugia: What advice do you have for merchants and developers as it relates to payments in these difficult times?
Riaz: Merchants are seeing a polarizing impact of COVID-19 on their businesses. Those with an online presence are seeing either a lull in traffic or a spike.
If you’re a merchant whose traffic is dwindling, it’s more important than ever to make sure every transaction counts. If you used to see 50 transactions a day and now you see 10, you want to make sure all 10 deliver. Work with your Payment Service Provider (PSP) to make sure your approval ratios are as high as possible - a legitimate customer who gets declined incorrectly may not return to purchase as they have in the past. It’s ideal if your PSP supports alternative payment methods like Google Pay, which decrease friction at checkout, and local payment methods if you’re selling internationally. Keep an eye on your PSP’s stack and uptimes to make sure you’re not missing out on sales due to outages or technical issues.
If you’re a merchant seeing a spike in traffic, that’s great news! But it’s important to note that a sudden traffic increase without proper operational and infrastructure planning can lead to fraud spikes, decreases in approval ratios, and downtime. With higher sales velocity, risk-related issues will multiply. You’ll see more attempted fraud as fraudsters take advantage of unsuspecting consumers, more payment declines resulting from outdated issuer risk modeling, and excessive chargebacks, subscription cancellations, and buyer’s remorse, among other issues. How are your payments infrastructure and operations equipped to handle all of this?
Make sure your infrastructure is capable of scaling up. If you don’t have autoscaling, you’ll need a team and processes in place to scale infrastructure for traffic spikes, and keep in mind this may get harder with people working remotely. Work with your PSP and other providers to optimize your payments, risk models, and chargeback handling during this challenging time.
For both types of merchants, it’s important to pay close attention to the performance of your payments system. This includes ensuring that processes are working optimally - especially given remote working situations - and that you are seeing efficiencies at scale.
Jose Ugia: How did you think about building a payments infrastructure that was scalable and future-proof at Checkout.com?
Riaz: We knew in the beginning we wanted a unified API, which through a single integration gives a merchant access to any market via a range of payment methods and other facilities. We’ve worked hard to get acquiring licenses in as many markets as possible so we can bring acquiring in-house, which in turn gives us greater visibility on the entire payment flow. We have also invested in a gateway that can be consistently deployed in local geographies so that whether the merchant is in Dubai or Singapore, they are getting the most optimal traffic flow.
Any engineer knows that tech breaks. Those who win have a better plan for dealing with breakage efficiently, to consistently maintain high levels of service. We spend a lot of time and resources on making sure our stack is resilient and we have the right operational processes in place to both proactively monitor for potential issues and respond correctly when they come up.
Jose Ugia: Speaking of where things are headed, where do you see the future of payments going from a payment service provider perspective?
Riaz: A few key trends I see:
Risk & Fraud Detection. AI/ML is improving every aspect of tech. Fraudsters will get smarter but so will fraud prevention - it’s a cat and mouse game. In payments, sophisticated risk engines offering ML-based transaction scoring and highly customizable rules builders, among other features, will get better at detecting fraud without compromising sales.
Global acceptance will continue to be complex but paramount. Offering a variety of payment methods is table stakes these days. More and more, we’ll see that local payment methods aren’t the alternative but instead the primary way consumers pay. For example, you need to have Giropay if you’re selling in Germany and Alipay if you’re selling in China if you want a high conversion rate. Ensure that you and your local entities have an optimized setup with your acquirer (ideally domestic where possible) focused on achieving the lowest costs and highest approval rates.
Embedded infrastructure. Merchants - especially enterprise players - will want increased visibility and more control on optimizing their payment systems. We offer this level of insight and flexibility to our merchants today via our APIs around risk, reconciliation, disputes, etc. But we’re headed toward a world where dedicated infrastructure will become part of the package and allow for complete data separation and zero contention.
Jose Ugia: How do you think these changes of payments infrastructure will impact consumers downstream?
Riaz: Convenience is king among consumers. I believe that COVID-19 will accelerate the move toward a contactless payments society, with consumers relying more on digital wallets and opportunities to pay through their devices. I personally no longer take my wallet out with me when I leave the house. A couple of years ago that felt like a conscious decision - now it’s just part of everyday life to rely solely on my smartphone to pay.
In some regions like MENA, which has typically been a cash-on-delivery society, we’re seeing more merchants close off cash and impose digital payments, opening up more adoption of upfront e-commerce payments. As mandated payment methods begin to change consumer behavior (studies say it takes 2 months to change a habit), new ways of paying will be here to stay, even beyond COVID-19.