Google is announcing Project Connected Home over IP, a new Working Group alongside industry partners such as Amazon, Apple and the Zigbee Alliance (separate from the existing Zigbee 3.0/PRO protocol). The project aims to build a new standard that enables IP-based communication across smart home devices, mobile apps, and cloud services. Device manufacturers, silicon providers, and other developers are invited to join the working group and contribute existing open-source technologies into the initiative to accelerate its development so customers and device makers can benefit sooner.
Google Developers Launchpad is an accelerator program that excels in helping startups solve the world’s biggest problems through the best of Google, with a focus on advanced technology. However, our impact doesn’t stop there. A distinguishing aspect of our program is the network that we build with, and for, our founders. Over the past five years, Launchpad has created a global community of founders based on deep, genuine connections that we foster during the program, and that community supports one another in remarkable ways.
Miami, December 9, 2019. Once again, the status of Miami as an international tech hub is elevated with TheVentureCity playing host to the final week of Google’s startup acceleration program, Launchpad. The event marks the end of a 10-week immersion that began in Mexico, continued in Argentina, and concludes in South Florida, connecting startups with dedicated support to take their businesses to the next level.
We’re writing to you from Flutter Interact, our biggest Flutter event to date, where we’re making a number of announcements about Flutter, including a new updated release and a series of partnerships that demonstrate our commitment to supporting the ever-growing ecosystem around Flutter.
Our original release for Flutter was focused on helping you build apps that run on iOS and Android from a single codebase. But we want to go further.
MediaPipe is a framework for building cross-platform, multimodal applied ML pipelines that consist of fast ML inference, classic computer vision, and media processing (e.g., video decoding). MediaPipe was open sourced at CVPR in June 2019 as v0.5.0. Since our first open source version, we have released a variety of ML pipeline examples.
In this blog, we will introduce another MediaPipe example: Object Detection and Tracking. We first describe our newly released box tracking solution, then we explain how it can be connected with Object Detection to provide an Object Detection and Tracking system.
Summary: Flutter Interact is happening on December 11th. Sign up here for our global livestream and watch it at g.co/FlutterInteract.
Last month, we announced that Coral graduated out of beta into a wider, global release. Today, we're announcing the next version of Mendel Linux (4.0, release "Day") for the Coral Dev Board and SoM, as well as a number of other exciting updates.
Japan is well known as an epicenter of innovation and technology, and its startup ecosystem is no different. We’ve seen this firsthand from our work with startups such as Cinnamon, which uses artificial intelligence to remove repetitive tasks from office workers’ daily routines, allowing more work to get done by fewer people, faster.
This is why we are pleased to announce our second accelerator program, housed at the new Google for Startups Campus in the heart of Tokyo. The Google for Startups Accelerator (previously Launchpad Accelerator) is an intensive three-month program for high-potential, AI-focused startups, utilizing the proven Launchpad foundational components and content.
Research shows that the potential impact of FAW on continent-wide maize yield lies between 8.3 and 20.6 million tonnes per year (of a total expected production of 39 million tonnes per year), with losses of between US$2.48 billion and US$6.19 billion per year (of a US$11.59 billion annual expected value). The impact of FAW is far reaching, and it is now reported in many countries around the world.
Agriculture is the backbone of Uganda’s economy, employing 70% of the population. It contributes half of Uganda’s export earnings and a quarter of the country’s gross domestic product (GDP). Fall armyworm poses a great threat to our livelihoods. We are a small group of like-minded developers living and working in Uganda. Most of our relatives grow maize, so the impact of the worm was very close to home, and we really felt we needed to do something about it. The vast damage and yield losses in maize production due to FAW have drawn the attention of global organizations, who are calling for innovators to help. It is the perfect time to apply machine learning. Our goal is to build an intelligent agent to help local farmers fight this pest and so increase our food security.
Building on the Machine Learning Crash Course, our Google Developer Group (GDG) in Mbale hosted study jams in May 2018, alongside several other codelabs. This is where we first got hands-on experience using TensorFlow, and where the foundations were laid for the Farmers Companion app. Finally, we felt that an intelligent solution to help farmers had been conceived.
Equipped with this knowledge and belief, the team embarked on collecting training data from nearby fields, using a smartphone to take images with the help of some GDG Mbale members. With farmers miles from town, and many fields inaccessible by road (not to mention the floods), this was not as simple as we had first hoped. To hinder us further, our smartphones were (and still are) the only storage we had, limiting the number of images we could capture in a day.
But we persisted! Once gathered, the images were sorted one at a time and categorized. With TensorFlow we re-trained a MobileNet, a technique known as transfer learning. We then used the TensorFlow Lite converter to generate a TensorFlow Lite FlatBuffer file, which we deployed in an Android app. We started with about 3,956 images, and our dataset is growing quickly; we are actively collecting more data to improve our model’s accuracy. The improvements in TensorFlow, with the Keras high-level APIs, have made our approach to deep learning easy and enjoyable, and we are now experimenting with TensorFlow 2.0.
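For readers who want to try a similar approach, here is a minimal sketch of transfer learning on a frozen MobileNet base followed by conversion to a TensorFlow Lite FlatBuffer, using the TensorFlow 2.0 Keras APIs mentioned above. It is not our exact pipeline; the class count, dataset object, and file names are illustrative.

import tensorflow as tf

# Start from a MobileNetV2 base pre-trained on ImageNet and freeze it, so
# only the small classification head is trained on the field images.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. healthy vs. FAW damage
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds would be a tf.data.Dataset of (image, label) pairs built from the
# categorized photos, with images passed through
# tf.keras.applications.mobilenet_v2.preprocess_input; omitted here.
# model.fit(train_ds, epochs=10)

# Convert the trained Keras model to a TensorFlow Lite FlatBuffer file that
# can be bundled with an Android app.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("farmers_companion.tflite", "wb") as f:
    f.write(converter.convert())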
The app is simple for the user. Once it is installed, the user points the camera through the app at a maize crop. An image frame is then captured and analysed with TensorFlow Lite to look for Fall armyworm damage. Depending on the results of this step, the app suggests a possible course of action.
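The production app runs this step with the TensorFlow Lite runtime on Android; the sketch below shows the equivalent flow in Python for illustration, with the model path and label names assumed rather than taken from the real app.

import numpy as np
import tensorflow as tf

LABELS = ["healthy", "faw_damage"]  # hypothetical label set

interpreter = tf.lite.Interpreter(model_path="farmers_companion.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def classify(frame):
    # frame: a 224x224x3 uint8 image frame captured from the camera preview.
    image = tf.keras.applications.mobilenet_v2.preprocess_input(
        frame.astype(np.float32))
    interpreter.set_tensor(input_details[0]["index"],
                           np.expand_dims(image, axis=0))
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    return LABELS[int(np.argmax(scores))], float(np.max(scores))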
The app is available for download and is constantly being updated as we push for local farmers to adopt and use it. We strive to ensure a world with #ZeroHunger and believe technology can do a lot to help us achieve this.
So far we have been featured on a national TV station in Uganda, and we participated in #hackAgainstHunger and ‘The International Symposium on Agricultural Innovation for Family Farmers’, organized by the Food and Agriculture Organization of the United Nations, where our solution was highlighted.
We have embarked on scaling the solution to coffee and cassava diseases, and will gradually move on to more. We have also introduced virtual reality to showcase good farming practices and training to farmers.
Our plan is to collect more data and to scale the solution to handle more pests and diseases. We are also shifting to cloud services and Firebase to improve and serve our model better despite the lack of resources. With improved hardware and greater localised understanding, there's huge scope for Machine Learning to make a difference in the fight against hunger.
The Go gopher was created by renowned illustrator Renee French. This image is adapted from a drawing by Egon Elbre.
November 10 marked Go’s 10th anniversary—a milestone that we are lucky enough to celebrate with our global developer community.
The Gopher community will be celebrating Go’s 10th anniversary at conferences such as Gopherpalooza in Mountain View and KubeCon in San Diego, and dozens of meetups around the world.
In recognition of this milestone, we’re taking a moment to reflect on the tremendous growth and progress Go (also known as golang) has made: from its creation at Google and open sourcing, to many early adopters and enthusiasts, to the global enterprises that now rely on Go every day for critical workloads.
Go is an open-source programming language designed to help developers build fast, reliable, and efficient software at scale. It was created at Google and is now supported by over 2100 contributors, primarily from the open-source community. Go is syntactically similar to C, but with the added benefits of memory safety, garbage collection, structural typing, and CSP-style concurrency.
Most importantly, Go was purposefully designed to improve productivity for multicore, networked machines and large codebases—allowing programmers to rapidly scale both software development and deployment.
Today, Go has more than a million users worldwide, ranging across industries, experience, and engineering disciplines. Go’s simple and expressive syntax, ease-of-use, formatting, and speed have helped it become one of the fastest growing languages—with a thriving open source community.
As Go’s use has grown, more and more foundational services have been built with it. Popular open source applications built with Go include Docker, Hugo, and Kubernetes. Google’s hybrid cloud platform, Anthos, is also built with Go.
Go was first adopted to support large amounts of Google’s services and infrastructure. Today, Go is used by companies including American Express, Dropbox, The New York Times, Salesforce, Target, Capital One, Monzo, Twitch, IBM, Uber, and MercadoLibre. For many enterprises, Go has become their language of choice for building on the cloud.
One exciting example of Go in action is at MercadoLibre, which uses Go to scale and modernize its ecommerce ecosystem and to improve cost efficiency and system response times.
MercadoLibre’s core API team builds and maintains the largest APIs at the center of the company’s microservices solutions. Historically, much of the company’s stack was based on Grails and Groovy backed by relational databases. However, this large framework with multiple layers soon ran into scalability issues.
Converting that legacy architecture to Go as a new, very thin framework for building APIs streamlined those intermediate layers and yielded great performance benefits. For example, one large Go service is now able to run 70,000 requests per machine with just 20 MB of RAM.
“Go was just marvelous for us,” explains Eric Kohan, Software Engineering Manager at MercadoLibre. “It’s very powerful and very easy to learn, and with backend infrastructure has been great for us in terms of scalability.”
Using Go allowed MercadoLibre to cut the number of servers it uses for this service to one-eighth the original number (from 32 servers down to four), and each server can now operate with less power (originally four CPU cores, now down to two). With Go, the company eliminated 88 percent of its servers and cut CPU on the remaining ones in half, producing tremendous cost savings.
With Go, MercadoLibre’s build times are three times (3x) faster and their test suite runs an amazing 24 times faster. This means the company’s developers can make a change, then build and test that change much faster than they could before.
Today, roughly half of MercadoLibre's traffic is handled by Go applications.
"We really see eye-to-eye with the larger philosophy of the language," Kohan explains. "We love Go's simplicity, and we find that having its very explicit error handling has been a gain for developers because it results in safer, more stable code in production."
We’re thrilled by how the Go community continues to grow, through developer usage, enterprise adoption, package contribution, and in many other ways.
Building off of that growth, we’re excited to announce go.dev, a new hub for Go developers.
There you’ll find centralized information for Go packages and modules, a wealth of learning resources to get started with the language, and examples of critical use cases and case studies of companies using Go.
MercadoLibre’s recent experience is just one example of how Go is being used to build fast, reliable, and efficient software at scale.
You can read more about MercadoLibre’s success with Go in the full case study.
Google Pay is now available on Stripe Checkout. Businesses with Stripe Checkout on their websites can now provide an optimized checkout experience to Google Pay users.
Google Pay is available directly from Stripe Checkout
Refer to Stripe’s Checkout documentation for more information.
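For illustration only, a server-side Checkout Session created with Stripe’s Python library might look like the sketch below; because Stripe Checkout surfaces Google Pay automatically to eligible users paying by card, no Google Pay-specific code is needed here. The product, amounts, keys, and URLs are placeholders.

import stripe

stripe.api_key = "sk_test_..."  # placeholder API key

# Create a Checkout Session; Stripe Checkout handles presenting
# Google Pay to eligible users paying by card.
session = stripe.checkout.Session.create(
    payment_method_types=["card"],
    line_items=[{
        "name": "T-shirt",   # placeholder product
        "amount": 2000,      # in the smallest currency unit (cents)
        "currency": "usd",
        "quantity": 1,
    }],
    success_url="https://example.com/success",
    cancel_url="https://example.com/cancel",
)

# Pass session.id to Stripe.js on the client and call redirectToCheckout.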
Stripe merchants that aren’t using Stripe Checkout can integrate directly with Google Pay using the Google Pay Setup Guide.
Google Pay is the fast, simple and secure way to pay on sites, in apps, and in stores using the payment options saved to your Google Account.
See Google Pay Developer documentation for information on additional integration options.
With Cardboard and the Google VR software development kit (SDK), developers have created and distributed VR experiences across both Android and iOS devices, giving them the ability to reach millions of users. While we’ve seen overall usage of Cardboard decline over time and we’re no longer actively developing the Google VR SDK, we still see consistent usage around entertainment and education experiences, like YouTube and Expeditions, and want to ensure that Cardboard’s no-frills, accessible-to-everyone approach to VR remains available.
Today, we’re releasing the Cardboard open source project to let the developer community continue to build Cardboard experiences and add support to their apps for an ever increasing diversity of smartphone screen resolutions and configurations. We think that an open source model—with additional contributions from us—is the best way for developers to continue to build experiences for Cardboard. We’ve already seen success with this approach with our Cardboard Manufacturer Kit—an open source project to enable third-party manufacturers to design and build their own unique compatible VR viewers—and we’re excited to see where the developer community takes Cardboard in the future.
What's included in the open source project
We're releasing libraries for developers to build their Cardboard apps for iOS and Android and render VR experiences on Cardboard viewers. The open source project provides APIs for head tracking, lens distortion rendering and input handling. We’ve also included an Android QR code library, so that apps can pair any Cardboard viewer without depending on the Cardboard app.
An open source model will enable the community to continue to improve Cardboard support and expand its capabilities, for example adding support for new smartphone display configurations and Cardboard viewers as they become available. We’ll continue to contribute to the Cardboard open source project by releasing new features, including an SDK package for Unity.
If you’re interested in learning how to develop with the Cardboard open source project, please see our developer documentation, or visit the Cardboard GitHub repo to access source code, build the project and download the latest release.
Who doesn’t love finding a good shortcut? A year ago, G Suite created a handful of shortcuts: docs.new, sheets.new, and slides.new. You can easily pull up a new document, spreadsheet or presentation by typing those shortcuts into your address bar.
This inspired Google Registry to release the .new domain extension as a way for people to perform online actions in one quick step. And now any company or organization can register its own .new domain to help people get things done faster, too. Some of our favorite shortcuts you can use include OpenTable’s reservation.new, eBay’s sell.new, and GitHub’s repo.new, all handy time-savers. Similar to .app, .page, and .dev, .new will be secure because all domains will be served over HTTPS connections. Through January 14, 2020, trademark owners can register their trademarked .new domains. Starting December 2, 2019, anyone can apply for a .new domain during the Limited Registration Period. If you’ve got an idea for a .new domain, you can learn more about our policies and how to register at whats.new.
With .new, you can help people take action faster. We hope to see .new shortcuts for all the things people frequently do online.
Have you built an Action for the Google Assistant and wondered how many people are using it? Or how many of your users are returning users? In this blog post, we will dive into 5 improvements that the Actions on Google Console team has made to give you more insight into how your Action is being used.
We've updated three areas of the Actions Console for readability: Active Users Chart, Date Range Selection, and Filter Options. With these new updates, you can now better customize the data to analyze the usage of your Actions.
The labels at the top of the Active Users chart now read Daily, Weekly, and Monthly, instead of the previous 1 Day, 7 Days, and 28 Days labels. We also improved the readability of the individual date labels at the bottom of the chart. You’ll also notice a quick insight at the bottom of the chart that shows the number of unique users during this time period.
Previously, the date range selectors applied globally to all the charts. These selectors are now local to each chart, allowing you more control over how you view your data.
The date selector lets you choose from several predefined ranges for each chart.
Previously, when you added a filter, it was applied to all the charts on the page. Now, filters apply only to the chart you're viewing. We’ve also enhanced the filtering options available for the ‘Surface’ filter, with options such as mobile devices, smart speakers, and smart displays.
The filter feature also lets you show data breakdowns over different dimensions. By default, the chart shows a single consolidated line, a result of all the filters applied. You can now select the ‘Show breakdown by’ option to see how the components of that data contribute to the totals based on the dimension you selected.
A brand new addition to analytics is a retention metrics chart that helps you understand how well your Action is retaining users. This chart shows how many users you had in a given week and how many of them returned in each of the following weeks, for up to 5 weeks. The higher the percentage week after week, the better your retention.
When you hover over each cell in the chart, you can see the exact number of users who have returned for that week from the previous week.
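As a toy illustration of what the chart is reporting (not how the console computes it), weekly retention for a cohort can be derived from per-user activity as sketched below; the user IDs and week numbers are made up.

# Hypothetical data: user -> set of ISO week numbers in which they used the Action.
activity = {
    "user_a": {40, 41, 42, 44},
    "user_b": {40, 42},
    "user_c": {40},
}

cohort_week = 40
cohort = [u for u, weeks in activity.items() if cohort_week in weeks]

for offset in range(1, 5):
    returned = [u for u in cohort if cohort_week + offset in activity[u]]
    pct = 100 * len(returned) / len(cohort)
    print(f"Week +{offset}: {pct:.0f}% of the week-{cohort_week} cohort returned")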
To learn more about what each metric means, you can check out our documentation.
Try out these new improvements to see how your Actions are performing with your users. You can also check out our documentation to learn more. Let us know if you have any feedback or suggestions in terms of metrics that you need to improve your Action. Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.
Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!
We’re excited to announce the official launch of the Google Maps Platform YouTube channel, a place for developers to learn and immerse themselves in the possibilities with maps.
We already have some great content on the channel about how to get started, along with incredible user stories, and, coming soon, the return of Geocasts, a series dedicated to providing walkthroughs and tips to help you learn how to implement Google Maps Platform features in your web and mobile apps.
Modes and toggles let you define the configurable attributes of your device that may exist outside the standard grammar for device control traits (such as On/Off or Start/Stop). This feature is often used to express device-specific settings, such as the "load size" for a clothes washer or the "cooking mode" for an oven.
When we initially introduced modes and toggles, we supported a whitelisted set of names and synonyms to ensure the most accurate responses and best user experience. Over time, we continued to add support based on the community's requests, but getting these requests approved has been a common pain point for many of you.
Starting today, you no longer have to get the names and synonyms provided in your SYNC response approved. The Google Assistant dynamically determines the necessary grammar for users to invoke these traits. If you're not already familiar with modes and toggles, here is an example using these traits to add support for custom cooking modes to an oven.
{
  availableModes: [{
    name: 'cook',
    name_values: [{
      name_synonym: ['cook setting'],
      lang: 'en'
    }],
    settings: [{
      setting_name: 'pizza',
      setting_values: [{
        setting_synonym: ['pizza'],
        lang: 'en'
      }]
    }, {
      setting_name: 'pasta',
      setting_values: [{
        setting_synonym: ['pasta'],
        lang: 'en'
      }]
    }]
  }]
}
Example modes in SYNC response
Controlling a device using modes and toggles
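The Toggles trait follows the same pattern as the modes example shown above. As a hedged sketch (the toggle name and synonyms here are invented for illustration), the attributes a webhook might return in its SYNC response could be built like this in Python:

# Attributes for a hypothetical oven toggle, expressed as the Python dict a
# SYNC webhook handler might return alongside the modes shown above.
oven_toggle_attributes = {
    "availableToggles": [{
        "name": "sterilization",  # invented example toggle
        "name_values": [{
            "name_synonym": ["sterilization", "sterilize"],
            "lang": "en",
        }],
    }],
}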
We're excited to see what you build with these improved modes and toggles! For more details on using these features, see the updated guides for the Modes Trait and Toggles Trait. To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.
Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on.
Coral is already delivering impact across industries, and several of our partners are including Coral in products that require fast ML inferencing at the edge.
In healthcare, Care.ai is using Coral to build a device that enables hospitals and care centers to respond quickly to falls, prevent bed sores, improve patient care, and reduce costs. Virgo SVS is also using Coral as the basis of a polyp detection system that helps doctors improve the accuracy of endoscopies.
In a very different use case, Olea Edge employs Coral to help municipal water utilities accurately measure the amount of water used by their commercial customers. Their Meter Health Analytics solution uses local AI to reduce waste and predict equipment failure in industrial water meters.
Nexcom is using Coral to build gateways with local AI and provide a platform for next-gen, AI-enabled IoT applications. By moving AI processing to the gateway, existing sensor networks can stay in service without the need to add AI processing to each node.
Coral’s Dev Board is designed as an integrated prototyping solution for new product development. Under the heatsink is the detachable Coral SoM, which combines Google’s Edge TPU with the NXP IMX8M SoC, Wi-Fi and Bluetooth connectivity, memory, and storage. We’re happy to announce that you can now purchase the Coral SoM standalone. We’ve also created a baseboard developer guide to help integrate it into your own production design.
Our Coral USB Accelerator allows users with existing system designs to add local AI inferencing via USB 2/3. For production workloads, we now offer three new Accelerators that feature the Edge TPU and connect via PCIe interfaces: Mini PCIe, M.2 A+E key, and M.2 B+M key. You can easily integrate these Accelerators into new products or upgrade existing devices that have an available PCIe slot.
The new Coral products are available globally and for sale at Mouser; for large-volume sales, contact our sales team. Through the end of 2019, we'll continue to expand distribution of the Coral Dev Board and SoM into new markets, including Taiwan, Australia, New Zealand, India, Thailand, Singapore, Oman, Ghana, and the Philippines.
We’ve also revamped the Coral site with better organization for our docs and tools, a set of success stories, and industry-focused pages. All of it can be found at a new, easier-to-remember URL: Coral.ai.
To help you get the most out of the hardware, we’re also publishing a new set of examples. The included models and code can provide solutions to the most common on-device ML problems, such as image classification, object detection, pose estimation, and keyword spotting.
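As a quick sketch of what running one of these models on a Coral device looks like (the model and file names below are placeholders), image classification with the TensorFlow Lite runtime and the Edge TPU delegate follows the usual interpreter flow:

import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

# Load a model compiled for the Edge TPU and attach the Edge TPU delegate.
interpreter = tflite.Interpreter(
    model_path="mobilenet_v2_edgetpu.tflite",  # placeholder model file
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, height, width, _ = input_details[0]["shape"]

# Prepare an input image and run inference on the accelerator.
image = Image.open("bird.jpg").resize((width, height))
interpreter.set_tensor(
    input_details[0]["index"],
    np.expand_dims(np.asarray(image, dtype=np.uint8), 0))
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])[0]
print("Top class index:", int(np.argmax(scores)))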
For those looking for a more in-depth application—and a way to solve the eternal problem of squirrels plundering your bird feeder—the Smart Bird Feeder project shows you how to perform classification with a custom dataset on the Coral Dev board.
Finally, we’ll soon release a new version of the Mendel OS that updates the system to Debian Buster, and we're hard at work on more improvements to the Edge TPU compiler and runtime that will improve the model development workflow.
The official launch of Coral is, of course, just the beginning, and we’ll continue to evolve the platform. Please keep sending us feedback at coral-support@google.com.
DevFest season is always full of lively surprises, with enchanting adventures right around the corner. Sometimes these adventures are big: attending a DevFest in the Caribbean, in the heart of the Amazon jungle, or traveling more than 3,000 meters above sea level to discover the beautiful South American highlands. Other times they are small but precious: unlocking a new way of thinking that completely shifts how you code.
October marks the beginning of our DevFest 2019 season in Latin America, where all of these experiences become a reality thanks to the efforts of our communities.
What makes DevFests in LATAM different? Our community is free spirited, eager to explore the natural landscapes we call home, proud of our deep cultural diversity, and energized by our big cities. At the same time, we are connected to the tranquil spirit of our small towns. This year, we hope to reflect this way of life through our 55 official Latin America DevFests.
During the season, Latin America will open its doors to Google Developer Experts, Women Techmakers, Googlers, and other renowned speakers to exchange ideas on Google products such as Android, TensorFlow, Flutter, and Google Cloud Platform. Activities include hackathons, codelabs, and training sessions. This season, we will be joined by Googlers Grant Timmerman and Mete Atamel.
Grant is a Developer Programs Engineer at Google where he works on Cloud Functions, Cloud Run, and other serverless technologies on Google Cloud Platform. He loves open source, Node, and plays the alto saxophone in his spare time. During his time in Latin America, he'll be discussing all things serverless at DevFests and Cloud Summits in Chile, Argentina, Peru, Colombia, and Mexico.
Mete is a Developer Advocate based in London. He focuses on helping developers with Google Cloud. At DevFest Sul in Floripa and other conferences and meetups throughout Brazil in October, he’ll be talking about serverless containers using Knative and Cloud Run. He first visited the region back in 2017, starting in São Paulo. Afterwards, he went to Rio de Janeiro and immediately fell in love with the city, its friendly people, and its positive vibe. Since then, he has spoken at a number of conferences and meetups in Mexico, Colombia, Peru, Argentina, Uruguay, and Brazil, and he has always been impressed by people's eagerness to learn.
This year we will be visiting new countries such as Jamaica, Haiti, Guyana, Honduras, Venezuela and Ecuador that have created their first GDG (Google Developer Group) communities. Most of these new communities are celebrating their first DevFest! We'll also be hosting diversity and inclusion events, so keep an eye out for more details!
We thank everyone for being a part of DevFest and our community.
We hope you join us!
#DevFest
#DevFestLATAM
Find a DevFest near you at g.co/dev/fest/sa