Example depth map, with red indicating areas that are close by, and blue representing areas that are farther away.
A virtual cat with occlusion off and with occlusion on.
Physics, path planning, and surface interaction examples.
Summary: Flutter Interact, Google's conference focused on beautiful design and apps, streams worldwide on December 11th. Sign up for the global livestream and watch it at g.co/FlutterInteract. Flutter Interact is a day dedicated to creation and collaboration. Whether you are a web developer, mobile developer, front-end engineer, or UX designer, this is a good opportunity to hear the latest from Google. The one-day event features talks on a range of development and design topics. Speakers include Matias Duarte, VP of Google Design; Tim Sneath, Group PM for Flutter and Dart; and Grant Skinner, CEO of GSkinner, Inc.
It will include content and announcements from the Material Design and Flutter teams, partners, and other companies.
We look forward to experiencing Flutter Interact with you on December 11th. In the meantime, follow us on Twitter at @FlutterDev and get started with Flutter at flutter.dev.
Last month, we announced that Coral graduated out of beta, into a wider, global release. Today, we're announcing the next version of Mendel Linux (4.0 release Day) for the Coral Dev Board and SoM, as well as a number of other exciting updates.
We have made significant updates to improve performance and stability. Mendel Linux 4.0 release Day is based on Debian 10 Buster and includes upgraded GStreamer pipelines and support for Python 3.7, OpenCV, and OpenCL. The Linux kernel has also been updated to version 4.14 and U-Boot to version 2017.03.3.
We’ve also made it possible to use the Dev Board's GPU to convert YUV to RGB pixel data at up to 130 frames per second at 1080p resolution, one to two orders of magnitude faster than on Mendel Linux 3.0 release Chef. These changes make it possible to run inferences with YUV-producing sources such as cameras and hardware video decoders.
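For reference, the conversion itself is the standard BT.601 color-space transform; the Dev Board simply offloads it to the GPU. Below is a minimal CPU sketch in NumPy, assuming full-range 8-bit YUV with one value per pixel (real camera pipelines typically use planar or semi-planar layouts), shown purely to illustrate what the conversion computes:

```python
import numpy as np

def yuv_to_rgb(y, u, v):
    """Reference full-range BT.601 YUV -> RGB conversion (CPU, illustrative only)."""
    y = y.astype(np.float32)
    u = u.astype(np.float32) - 128.0
    v = v.astype(np.float32) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```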
To upgrade your Dev Board or SoM, follow our guide to flash a new system image.
MediaPipe is an open-source, cross-platform framework for building multi-modal machine learning perception pipelines that can process streaming data like video and audio. For example, you can use MediaPipe to run on-device machine learning models and process video from a camera to detect, track and visualize hand landmarks in real-time.
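As a taste of what that looks like in practice, here is a minimal sketch of real-time hand landmark tracking, assuming the `mediapipe` Python package (a convenience wrapper over MediaPipe graphs) and an OpenCV-accessible webcam; it is illustrative only and is not the Coral pipeline described next:

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for landmarks in results.multi_hand_landmarks:
            drawing.draw_landmarks(frame, landmarks, mp.solutions.hands.HAND_CONNECTIONS)
    cv2.imshow("hands", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
```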
Developers and researchers can prototype their real-time perception use cases starting with the creation of the MediaPipe graph on desktop. Then they can quickly convert and deploy that same graph to the Coral Dev Board, where the quantized TensorFlow Lite model will be accelerated by the Edge TPU.
As part of this first release, MediaPipe is making available new experimental samples for both object and face detection, with support for the Coral Dev Board and SoM. The source code and instructions for compiling and running each sample are available on GitHub and on the MediaPipe documentation site.
A new Teachable Sorter tutorial is now available. The Teachable Sorter is a physical sorting machine that combines the Coral USB Accelerator's ability to perform very low latency inference with an ML model that can be trained to rapidly recognize and sort different objects as they fall through the air. It leverages Google’s new Teachable Machine 2.0, a web application that makes it easy for anyone to quickly train a model in a fun, hands-on way.
The tutorial walks through how to build the free-fall sorter, which separates marshmallows from cereal and can be trained using Teachable Machine.
Earlier this month, the TensorFlow team announced a new version of TensorFlow Hub, a central repository of pre-trained models. With this update, the interface has been improved with a fresh landing page and search experience. Pre-trained Coral models compiled for the Edge TPU continue to be available on our Coral site, but a select few are also now available from the TensorFlow Hub. On the site, you can find models featuring an Overlay interface, allowing you to test the model's performance against a custom set of images right from the browser. Check out the experience for MobileNet v1 and MobileNet v2.
We are excited to share all that Coral has to offer as we continue to evolve our platform. For a list of worldwide distributors, system integrators and partners, visit the new Coral partnerships page. We hope you’ll use the new features offered on Coral.ai as a resource and encourage you to keep sending us feedback at coral-support@google.com.
High Level Details
Date: All entries must be submitted by January 20, 2020 11:59 PM PST (GMT-8).
How to Submit: Entries will be collected on the form linked at flutter.dev/clock, but see the Official Rules for full details.
Winners: Submissions will be rated by Google and Flutter expert judges against the following rubric: visual beauty, code quality, novelty of idea, and overall execution.
Prizes: Potential prizes include a fully loaded iMac Pro, Lenovo Smart Display, and Lenovo Smart Clock. Also, all complete and valid submissions will receive a digital certificate of completion. In addition, some of the clock contest submissions might be integrated into the Lenovo Smart Clock's lineup of clock faces, or used as inspiration for future clock faces!
Results will be announced at our Mobile World Congress 2020 Keynote.
Good luck and have fun! Time is ticking…
Japan is well known as an epicenter of innovation and technology, and its startup ecosystem is no different. We’ve seen this first hand from our work with startups such as Cinnamon, which uses artificial intelligence to remove repetitive tasks from office workers' daily workflows, allowing more work to get done by fewer people, faster.
This is why we are pleased to announce our second accelerator program, housed at the new Google for Startups Campus in the heart of Tokyo. The Google for Startups Accelerator (previously Launchpad Accelerator) is an intensive three-month program for high-potential, AI-focused startups, utilizing the proven Launchpad foundational components and content.
Founders who successfully apply for the accelerator will have the opportunity to work on the technical problems facing their startup alongside relevant experts from Google and the industry. They will receive mentorship on these challenges, support on machine learning best practices, as well as connections to relevant teams from across Google to help grow their business.
In addition to mentorship and technical project support, the accelerator also includes deep dives and workshops focused on product design, customer acquisition, and leadership development for founders.
“We hope that by providing these founders with the tools, mentorship, and connections to prepare for the next step in their journey, it will, in turn, contribute to a stronger Japanese economy,” says Takuo Suzuki, Google Developers Regional Lead for Japan. “We are excited to work with such passionate startups in a new Google for Startups Campus, an environment built to foster startup growth, and to meet our next cohort in 2020.”
The program will run from February to May 2020, and applications are open until December 13, 2019.
Research shows that the potential impact of fall armyworm (FAW) on continent-wide maize yield lies between 8.3 and 20.6 million tonnes per year (out of a total expected production of 39 million tonnes per year), with losses estimated between US$2.48 billion and US$6.19 billion per year (of a US$11.59 billion annual expected value). The impact of FAW is far-reaching, and it is now reported in many countries around the world.
Agriculture is the backbone of Uganda’s economy, employing 70% of the population. It contributes half of Uganda’s export earnings and a quarter of the country’s gross domestic product (GDP). Fall armyworm poses a great threat to our livelihoods. We are a small group of like-minded developers living and working in Uganda. Most of our relatives grow maize, so the impact of the worm was very close to home, and we really felt we needed to do something about it. The vast damage and yield losses in maize production due to FAW have drawn the attention of global organizations, who are calling for innovators to help. It is the perfect time to apply machine learning: our goal is to build an intelligent agent to help local farmers fight this pest in order to increase our food security.
In May 2018, our Google Developer Group (GDG) in Mbale hosted study jams based on the Machine Learning Crash Course, alongside several other codelabs. This is where we first got hands-on experience with TensorFlow, which laid the foundations for the Farmers Companion app. Finally, we felt that an intelligent solution to help farmers had been conceived.
Equipped with this knowledge and belief, the team embarked on collecting training data from nearby fields, using a smartphone to take images with the help of some GDG Mbale members. With farmers miles from town, and many fields inaccessible by road (not to mention the floods), this was not as simple as we had first hoped. To hinder us further, our smartphones were (and still are) the only storage we had, limiting the number of images we could capture in a day.
But we persisted! Once gathered, the images were sorted, one at a time, and categorized. With TensorFlow we re-trained a MobileNet, a technique known as transfer learning. We then used the TensorFlow Lite Converter to generate a TensorFlow Lite FlatBuffer file, which we deployed in an Android app. We started with about 3,956 images, and our dataset is growing rapidly as we actively collect more data to improve our model’s accuracy. The improvements in TensorFlow, with its high-level Keras APIs, have made our approach to deep learning easy and enjoyable, and we are now experimenting with TensorFlow 2.0.
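For readers who want to try the same flow, here is a condensed sketch of retraining a MobileNet-family model and converting it to a TensorFlow Lite FlatBuffer, assuming a recent TensorFlow 2.x and a directory of labeled images ("damaged"/"healthy" folders); the paths, class names, and choice of MobileNetV2 are illustrative rather than the team's exact setup:

```python
import tensorflow as tf

# Labeled images organized as faw_images/damaged/*.jpg and faw_images/healthy/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "faw_images/", image_size=(224, 224), batch_size=32)

# Transfer learning: reuse MobileNetV2 features, train only a small classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet-style preprocessing
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),      # damaged vs. healthy
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Convert to a TensorFlow Lite FlatBuffer for the Android app.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("faw_classifier.tflite", "wb") as f:
    f.write(converter.convert())
```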
The app is simple for the user. Once installed, the user points the camera at a maize crop through the app. An image frame is then captured and analysed with TensorFlow Lite to look for fall armyworm damage. Depending on the results, the app suggests a possible solution.
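The Android app performs the equivalent of the following invoke cycle with the TensorFlow Lite runtime; this Python sketch is illustrative only, with the model path, input size, and label set assumed to match the training sketch above:

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="faw_classifier.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

def classify(frame_rgb):
    """frame_rgb: HxWx3 uint8 image already resized to the model's input size."""
    # Pixel rescaling is baked into the model above, so raw pixels go in as float32.
    x = frame_rgb.astype(np.float32)[np.newaxis, ...]
    interpreter.set_tensor(input_details["index"], x)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details["index"])[0]
    labels = ["damaged", "healthy"]  # alphabetical folder order from the training sketch
    return labels[int(np.argmax(scores))], float(np.max(scores))
```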
The app is available for download and is constantly being updated as we push for local farmers to adopt and use it. We strive for a world with #ZeroHunger and believe technology can do a lot to help us achieve this.
We have so far been featured on a national TV station in Uganda, and participated in #hackAgainstHunger and ‘The International Symposium on Agricultural Innovations’ for family farmers, organized by the Food and Agriculture Organization of the United Nations, where our solution was highlighted.
We have embarked on scaling the solution to coffee and cassava diseases and will gradually move on to more. We have also introduced virtual reality to showcase good farming practices and to support farmer training.
Our plan is to collect more data and to scale the solution to handle more pests and diseases. We are also shifting to cloud services and Firebase to improve and serve our model better despite our limited resources. With improved hardware and greater localised understanding, there's huge scope for machine learning to make a difference in the fight against hunger.
The Go gopher was created by renowned illustrator Renee French. This image is adapted from a drawing by Egon Elbre.
November 10 marked Go’s 10th anniversary—a milestone that we are lucky enough to celebrate with our global developer community.
The Gopher community will be celebrating Go’s 10th anniversary at conferences such as Gopherpalooza in Mountain View and KubeCon in San Diego, and dozens of meetups around the world.
In recognition of this milestone, we’re taking a moment to reflect on the tremendous growth and progress Go (also known as golang) has made: from its creation at Google and open sourcing, to many early adopters and enthusiasts, to the global enterprises that now rely on Go every day for critical workloads.
Go is an open-source programming language designed to help developers build fast, reliable, and efficient software at scale. It was created at Google and is now supported by over 2100 contributors, primarily from the open-source community. Go is syntactically similar to C, but with the added benefits of memory safety, garbage collection, structural typing, and CSP-style concurrency.
Most importantly, Go was purposefully designed to improve productivity for multicore, networked machines and large codebases—allowing programmers to rapidly scale both software development and deployment.
Today, Go has more than a million users worldwide, ranging across industries, experience levels, and engineering disciplines. Go’s simple and expressive syntax, ease of use, formatting, and speed have helped it become one of the fastest-growing languages, with a thriving open source community.
As Go’s use has grown, more and more foundational services have been built with it. Popular open source applications built with Go include Docker, Hugo, and Kubernetes. Google’s hybrid cloud platform, Anthos, is also built with Go.
Go was first adopted to support large amounts of Google’s services and infrastructure. Today, Go is used by companies including American Express, Dropbox, The New York Times, Salesforce, Target, Capital One, Monzo, Twitch, IBM, Uber, and Mercado Libre. For many enterprises, Go has become their language of choice for building on the cloud.
One exciting example of Go in action is at MercadoLibre, which uses Go to scale and modernize its ecommerce ecosystem, improve cost-efficiency, and speed up system response times.
MercadoLibre’s core API team builds and maintains the largest APIs at the center of the company’s microservices solutions. Historically, much of the company’s stack was based on Grails and Groovy backed by relational databases. However, this large framework with multiple layers soon ran into scalability issues.
Converting that legacy architecture to Go as a new, very thin framework for building APIs streamlined those intermediate layers and yielded great performance benefits. For example, one large Go service is now able to run 70,000 requests per machine with just 20 MB of RAM.
“Go was just marvelous for us,” explains Eric Kohan, Software Engineering Manager at MercadoLibre. “It’s very powerful and very easy to learn, and with backend infrastructure has been great for us in terms of scalability.”
Using Go allowed MercadoLibre to cut the number of servers it uses for this service to one-eighth the original number (from 32 servers down to four), and each server now runs with less power (from four CPU cores down to two). With Go, the company eliminated 88 percent of its servers and cut CPU usage on the remaining ones in half, producing tremendous cost savings.
With Go, MercadoLibre’s build times are three times (3x) faster and their test suite runs an amazing 24 times faster. This means the company’s developers can make a change, then build and test that change much faster than they could before.
Today, roughly half of MercadoLibre's traffic is handled by Go applications.
"We really see eye-to-eye with the larger philosophy of the language," Kohan explains. "We love Go's simplicity, and we find that having its very explicit error handling has been a gain for developers because it results in safer, more stable code in production."
We’re thrilled by how the Go community continues to grow, through developer usage, enterprise adoption, package contribution, and in many other ways.
Building off of that growth, we’re excited to announce go.dev, a new hub for Go developers.
There you’ll find centralized information for Go packages and modules, a wealth of learning resources to get started with the language, and examples of critical use cases and case studies of companies using Go.
MercadoLibre’s recent experience is just one example of how Go is being used to build fast, reliable, and efficient software at scale.
You can read more about MercadoLibre’s success with Go in the full case study.
Google Pay is now available on Stripe Checkout. Businesses with Stripe Checkout on their websites can now provide an optimized checkout experience to Google Pay users.
Google Pay is available directly from Stripe Checkout
Refer to Stripe’s Checkout documentation for more information.
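For orientation, the server side of a Checkout integration is simply creating a Checkout Session and redirecting the customer to it; Checkout then decides when to offer Google Pay to an eligible customer, with no wallet-specific code needed. A minimal sketch, assuming the current Stripe Python library and placeholder keys, amounts, and URLs:

```python
import stripe

stripe.api_key = "sk_test_..."  # your secret key

session = stripe.checkout.Session.create(
    mode="payment",
    line_items=[{
        "price_data": {
            "currency": "usd",
            "unit_amount": 1500,                       # $15.00
            "product_data": {"name": "Example item"},  # illustrative product
        },
        "quantity": 1,
    }],
    success_url="https://example.com/success",
    cancel_url="https://example.com/cancel",
)
print(session.url)  # redirect the customer here to complete payment
```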
Stripe merchants that aren’t using Stripe Checkout can integrate directly with Google Pay using the Google Pay Setup Guide.
Google Pay is the fast, simple and secure way to pay on sites, in apps, and in stores using the payment options saved to your Google Account.
See Google Pay Developer documentation for information on additional integration options.
With Cardboard and the Google VR software development kit (SDK), developers have created and distributed VR experiences across both Android and iOS devices, giving them the ability to reach millions of users. While we’ve seen overall usage of Cardboard decline over time and we’re no longer actively developing the Google VR SDK, we still see consistent usage around entertainment and education experiences, like YouTube and Expeditions, and want to ensure that Cardboard’s no-frills, accessible-to-everyone approach to VR remains available.
Today, we’re releasing the Cardboard open source project to let the developer community continue to build Cardboard experiences and add support to their apps for an ever-increasing diversity of smartphone screen resolutions and configurations. We think that an open source model—with additional contributions from us—is the best way for developers to continue to build experiences for Cardboard. We’ve already seen success with this approach with our Cardboard Manufacturer Kit—an open source project to enable third-party manufacturers to design and build their own unique compatible VR viewers—and we’re excited to see where the developer community takes Cardboard in the future.
What's included in the open source project
We're releasing libraries for developers to build their Cardboard apps for iOS and Android and render VR experiences on Cardboard viewers. The open source project provides APIs for head tracking, lens distortion rendering and input handling. We’ve also included an Android QR code library, so that apps can pair any Cardboard viewer without depending on the Cardboard app.
An open source model will enable the community to continue to improve Cardboard support and expand its capabilities, for example adding support for new smartphone display configurations and Cardboard viewers as they become available. We’ll continue to contribute to the Cardboard open source project by releasing new features, including an SDK package for Unity.
If you’re interested in learning how to develop with the Cardboard open source project, please see our developer documentation, or visit the Cardboard GitHub repo to access source code, build the project and download the latest release.
Who doesn’t love finding a good shortcut? A year ago, G Suite created a handful of shortcuts: docs.new, sheets.new, and slides.new. You can easily pull up a new document, spreadsheet or presentation by typing those shortcuts into your address bar.
This inspired Google Registry to release the .new domain extension as a way for people to perform online actions in one quick step. And now any company or organization can register its own .new domain to help people get things done faster, too. Here are some of our favorite shortcuts that you can use:
OpenTable’s reservation.new, eBay’s sell.new, and GitHub’s repo.new are also handy time-savers. Similar to .app, .page, and .dev, .new will be secure because all domains will be served over HTTPS connections. Through January 14, 2020, trademark owners can register their trademarked .new domains. Starting December 2, 2019, anyone can apply for a .new domain during the Limited Registration Period. If you’ve got an idea for a .new domain, you can learn more about our policies and how to register at whats.new.
With .new, you can help people take action faster. We hope to see .new shortcuts for all the things people frequently do online.
Have you built an Action for the Google Assistant and wondered how many people are using it? Or how many of your users are returning users? In this blog post, we will dive into 5 improvements that the Actions on Google Console team has made to give you more insight into how your Action is being used.
We've updated three areas of the Actions Console for readability: Active Users Chart, Date Range Selection, and Filter Options. With these new updates, you can now better customize the data to analyze the usage of your Actions.
The labels at the top of the Active Users chart now read Daily, Weekly, and Monthly, instead of the previous 1 Day, 7 Days, and 28 Days labels. We also improved the readability of the individual date labels at the bottom of the chart. You’ll also notice a quick insight at the bottom of the chart that shows the number of unique users during this time period.
Previously, the date range selectors applied globally to all the charts. These selectors are now local to each chart, allowing you more control over how you view your data.
The date selector provides the following ranges:
Previously, when you added a filter, it was applied to all the charts on the page. Now, the filters apply only to the chart you're viewing. We’ve also enhanced the filtering options available for the ‘Surface’ filter, such as mobile devices, smart speakers, and smart displays.
The filter feature also lets you show data breakdowns over different dimensions. By default, the chart shows a single consolidated line, a result of all the filters applied. You can now select the ‘Show breakdown by’ option to see how the components of that data contribute to the totals based on the dimension you selected.
A brand new addition to analytics is a retention metrics chart that helps you understand how well your Action retains users. This chart shows how many users you had in a given week and how many of them returned in each of the following weeks, for up to 5 weeks. The higher the percentage week after week, the better your retention.
When you hover over a cell in the chart, you can see the exact number of users who returned in that week.
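To make the metric concrete, the sketch below shows the cohort arithmetic behind such a chart: take the users active in a starting week and count how many of them were active again in each following week. The data and field names are invented for illustration; the Actions Console computes this for you.

```python
from collections import defaultdict
from datetime import date

# user_id -> dates on which that user used the Action (example data only)
activity = {
    "u1": [date(2019, 11, 4), date(2019, 11, 12), date(2019, 11, 25)],
    "u2": [date(2019, 11, 5)],
    "u3": [date(2019, 11, 6), date(2019, 11, 13), date(2019, 11, 20)],
}

start = date(2019, 11, 4)  # Monday of the cohort's starting week

# Bucket each user's activity into week offsets from the starting week.
weeks = defaultdict(set)
for user, dates in activity.items():
    for d in dates:
        weeks[(d - start).days // 7].add(user)

cohort = weeks[0]  # users active in the starting week
for w in range(5):
    returned = len(cohort & weeks[w])
    print(f"week {w}: {returned}/{len(cohort)} = {returned / len(cohort):.0%}")
```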
To learn more about what each metric means, you can check out our documentation.
Try out these new improvements to see how your Actions are performing with your users. You can also check out our documentation to learn more. Let us know if you have any feedback or suggestions in terms of metrics that you need to improve your Action. Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.
Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!