Six months ago, we introduced the standalone version of the ML Kit SDK, making it even easier to integrate on-device machine learning into mobile apps. Since then, we’ve launched the Digital Ink Recognition and Pose Detection APIs, and also introduced the ML Kit early access program. Today we are excited to add Entity Extraction to the official ML Kit lineup and also debut a new API for our early access program, Selfie Segmentation!
With ML Kit’s Entity Extraction API, you can now improve the user experience inside your app by understanding text and performing specific actions on it.
The Entity Extraction API allows you to detect and locate entities in raw text and take action based on those entities. The API works on static text and also in real time while a user is typing. It supports 11 entity types across 15 languages (with more coming in the future), allowing developers to make any text interaction a richer experience for the user.
Supported Entities
(Images courtesy of TamTam)
Our early access partner, TamTam, has been using the Entity Extraction API to provide helpful suggestions to their users during their chat conversations. This feature allows users to quickly perform actions based on the context of their conversations.
While integrating this API, Iurii Dorofeev, Head of TamTam Android Development, mentioned, “We appreciated the ease of integration of the ML Kit ... and it works offline. Clustering the content of messages right on the device allowed us to save resources. ML Kit capabilities will help us develop other features for TamTam messenger in the future.”
Check out their messaging app on Google Play and the App Store today.
(Diagram of underlying Text Classifier API)
ML Kit’s Entity Extraction API builds upon the technology powering the Smart Linkify feature in Android 10+ to deliver an easy-to-use and streamlined experience for developers. For an in-depth review of the Text Classifier API, please see our blog post here.
The neural network annotators/models in the Entity Extraction API work as follows: a given input text is first split into words (based on space separation), then all possible word subsequences up to a certain maximum length (15 words in the example above) are generated, and for each candidate the scoring neural net assigns a value (between 0 and 1) indicating whether it represents a valid entity.
Next, the generated entities that overlap are removed, favoring the ones with a higher score over the conflicting ones with a lower score. Then a second neural network is used to classify the type of the entity as a phone number, an address, or in some cases, a non-entity.
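The two stages described above, generating bounded-length candidate spans and then greedily resolving overlaps by score, can be sketched in plain JavaScript. This is an illustrative reimplementation, not ML Kit's actual code; the scoring and classification networks are replaced here by pre-assigned scores on the candidates.

```javascript
// Maximum candidate length, in words (15 in the example above).
const MAX_SPAN = 15;

// Stage 1: split the input on spaces and enumerate every word
// subsequence of at most MAX_SPAN words as a candidate entity span.
function generateCandidates(text) {
  const words = text.split(" ");
  const candidates = [];
  for (let start = 0; start < words.length; start++) {
    const maxEnd = Math.min(start + MAX_SPAN, words.length);
    for (let end = start + 1; end <= maxEnd; end++) {
      candidates.push({ start, end, text: words.slice(start, end).join(" ") });
    }
  }
  return candidates;
}

// Stage 2: given scored candidates, keep higher-scoring spans and drop
// any lower-scoring span that conflicts (overlaps) with one already kept.
function resolveOverlaps(scored) {
  const sorted = [...scored].sort((a, b) => b.score - a.score);
  const kept = [];
  for (const cand of sorted) {
    const overlaps = kept.some(k => cand.start < k.end && k.start < cand.end);
    if (!overlaps) kept.push(cand);
  }
  return kept;
}
```

In the real API a second network then classifies each surviving span (phone number, address, or non-entity); here that step is omitted for brevity.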
The neural network models in the Entity Extraction API are used in combination with other types of models (e.g. rule-based) to identify additional entities in text, such as: flight numbers, currencies and other examples listed above. Therefore, if multiple entities are detected for one text input, the Entity Extraction API can return several overlapping results.
Lastly, ML Kit will automatically download the required language-specific models to the device dynamically. You can also explicitly manage models you want available on the device by using ML Kit’s model management API. This can be useful if you want to download models ahead of time for your users. The API also allows you to delete models that are no longer required.
Selfie Segmentation
With the increased usage of selfie cameras and webcams in today's world, being able to quickly and easily add effects to camera experiences has become a necessity for many app developers.
ML Kit's Selfie Segmentation API allows developers to easily separate the background from a scene and focus on what matters. Adding cool effects to selfies or inserting your users into interesting background environments has never been easier. This API produces great results with low latency on both Android and iOS devices.
(Example of ML Kit Selfie Segmentation)
Key capabilities:
To join our early access program and request access to ML Kit's Selfie Segmentation API, please fill out this form.
Posted by Charles Maxson, Developer Advocate, Google Cloud
It’s been a little over a decade since Apps Script was introduced as the development platform to automate and extend Google Workspace. Since its inception, tens of millions of solution builders, ranging from professional developers and business users to hobbyists, have adopted Apps Script because its tight integration with Google Workspace, coupled with its relative ease of use, makes building solutions fast and accessible.
Over the course of its history, Apps Script has constantly evolved to keep up with the ever-changing Google Workspace applications themselves, as new features are introduced and existing ones enhanced. Changes to the platform and the development environment itself have been more deliberate, allowing the wide-ranging Apps Script developer audience to rely on a predictable and proven development experience.
Recently, there have been some notable updates. Earlier this year the Apps Script runtime engine went through a major update from the original Rhino runtime to the new V8 version, allowing you to leverage modern JavaScript features in your Apps Script projects. Another milestone launch was the introduction of the Apps Script Dashboard, the ‘homepage’ of Apps Script, where you have access to all your projects and Apps Script platform settings by simply navigating to script.google.com.
But the core of the overall developer experience, the Apps Script IDE (Integrated Development Environment) where developers spend most of their time writing and debugging code, managing versions and exceptions, deploying projects, and more, has remained relatively unchanged over Apps Script’s long, storied history. That is, until now: as an Apps Script developer, you are about to get more productive!
The new Apps Script IDE features the same rich integration with Google Workspace as before, allowing you to get started building solutions without having to install or configure anything. If you are working on a standalone script project, you can use the Apps Script Dashboard to launch your project directly, or if you are working on a container-bound project in Sheets, Slides, or Docs, you can do so by selecting Tools > Script editor from their top menus.
Apps Script Project Dashboard
If you launch your project using the Apps Script Dashboard, you will still start off in the Project Details Overview page. The contents of the Project Details page are relatively unchanged, with just a few cosmetic updates; you can still get project info on the number of executions and users for your projects, errors, and OAuth scopes in use. On closer inspection, however, the seemingly subtle change to the left-hand navigation is actually the first big enhancement of the new Apps Script IDE. Previously, when you launched into a project, you still had the Apps Script Dashboard menus, which let you navigate your projects and view all your executions, triggers, and other Apps Script features.
Apps Script Project Details Overview
With the new IDE experience, the prior Apps Script Dashboard menu gives way to a new project-specific menu that lets you focus on your active project. This offers developers a more unified experience of moving between project settings and the code editor without having to navigate menus or bounce back to the applications dashboard. So while it's a subtle change at first glance, it's actually a significant boost for productivity.
If you launch the new IDE from a container-bound project, you will immediately enter the new Apps Script code editor, but the new project menu and developer flow are identical.
One of the more striking updates of the new Apps Script IDE was the work done on the code editor to modernize its look and feel, while also unifying the design with the overall developer experience. More than just aesthetic changes, the new code editor was designed to help developers focus on the most common essential tasks. This includes moving away from the traditional menu elements across the top of the original code editor, to a streamlined set of commands that focus on developer productivity. For example, the new code editor offers a simplified menu system optimized for running and debugging code, while all the other ‘project-related’ functions have been reorganized outside the code editor to the left-hand project navigation bar. This will simplify and set the focus on the core task of writing code, which will assist both new and seasoned Apps Script developers.
Apps Script Code Editor
Behind the fresh new look of the Apps Script code editor, there is a long list of new productivity enhancements that will make any Apps Script developer happy. Some are subtle, some are major. Many are simply delightful. Any combination of them will make you more productive in writing Apps Script code. Here are just some of the highlights:
Code Editor Enhancements
Context Menu Image
Command Palette Menu Image
Debugger Image
Logging Image
The best way to explore all that awaits you in the new Apps Script IDE and code editor is to simply dive in and write some code by visiting script.new. Then you will truly see how the new code editor helps you be more productive, enabling you to write code faster, with fewer errors and greater readability. The previous version of the Apps Script IDE and code editor served us well, but a jump in productivity and an overall better developer experience awaits you with the new Apps Script IDE.
Go write your best code starting at: script.google.com
To learn more: developers.google.com/apps-script
Posted by Google Developer Studio
Computer Science Education Week kicks off on Monday, December 7th and runs through the 13th. This annual call-to-action was started in 2009 by the Computer Science Teachers Association (CSTA) to raise awareness about the need to encourage CS education at all levels and to highlight the importance of computing across industries.
Google Developers strives to make learning Google technology accessible to all and works to empower developers to build good things together.
Whether you’re a student or a teacher, check out our collection of introductory resources and demos below. Learn how you can get started in your developer journey or empower others to get started.
Note: Some resources may require additional background and extra knowledge
The Google Assistant developer platform lets you create software to extend the functionality of the Google Assistant with “Actions”. Actions let users get things done through a conversational interface, which can range from a simple command, such as turning on the lights, to a longer conversation, such as playing a trivia game or exploring a recipe for dinner.
As a developer, you can use the platform to easily create and manage unique and effective conversational experiences for your users.
Actions auto-generated from web content.
Codelab: Build Actions for Google Assistant using Actions Builder (Level 1)
This codelab covers beginner-level concepts for developing with Google Assistant; you do not need any prior experience with the platform to complete it. You’ll learn how to build a simple Action for the Google Assistant that tells users their fortune as they begin their adventure in the mythical land of Gryffinberg. Continue on to level 2 if you’re ready!
Codelab: Build Actions for Google Assistant using Actions SDK (Level 1)
This codelab covers beginner-level concepts for developing with the Actions SDK for Google Assistant; you do not need any prior experience with the platform to complete it.
Tip: If you prefer to work with more visual tools, do the Level 1 Actions Builder codelab instead, which creates the same Action using the in-console Actions Builder. View additional codelabs here.
Android is the world's most powerful mobile platform, with more than 2.5 billion active devices.
Build your first Android app by taking the free online Android Basics in Kotlin course created by Google. No programming experience is needed. You'll learn important concepts about how to build an app as well as the fundamentals of programming in Kotlin, the recommended programming language for developers who are new to Android. Start with the first unit: Kotlin Basics for Android!
Once you’re ready to take your app development to the next level, check out Android Basics: Unit 2, where you'll build a tip calculator app and an app with a scrollable list of images. You can customize these apps or start building your own Android apps!
You can find more resources such as courses and documentation on developer.android.com. Stay up-to-date on the latest educational resources from the Android engineering team by following our YouTube channel, Twitter account, and subscribing to our newsletter.
Developer Student Clubs are university based community groups for students interested in Google developer technologies.
DSC Solution Challenge
For two years, DSC has challenged students to solve problems in their local communities using technology. Learn the steps to get started on a real-life project with tips from Arman Hezarkhani. Get inspired by the 2020 Solution Challenge winners and see what they built here.
If you’re a university student interested in joining or leading a DSC near you, click here to learn more.
Firebase is a mobile and web application development platform that helps you manage and solve key challenges across the app lifecycle, with a full suite of tools for building apps, improving quality, and growing your business.
Codelab: Get to know Firebase for web
In this introductory codelab, you'll learn some of the basics of Firebase to create interactive web applications. Learn how to build and deploy an event RSVP and guestbook chat app using several Firebase products.
Codelab: Firebase web codelab
Following “Get to Know Firebase for web”, take this next codelab and you'll learn how to use Firebase to easily create web applications by implementing and deploying a chat client using Firebase products and services.
Get all the latest educational resources from the Firebase engineering team by following our YouTube channel, Twitter account, and visiting the website.
Do you want to learn how to build natively compiled apps for mobile, web, and desktop from a single codebase? If the answer is yes, we have some great resources for you.
This is a guide to creating your first Flutter app. If you are familiar with object-oriented code and basic programming concepts such as variables, loops, and conditionals, you can complete this tutorial. You don’t need previous experience with Dart, mobile, or web programming.
Check out this free course from Google and Udacity, which is the perfect course if you’re brand new to Flutter.
Google Cloud Platform helps you build what's next with secure infrastructure, developer tools, APIs, data analytics and machine learning.
Cloud OnBoard: Core Infrastructure
In this training, learn the fundamentals of Google Cloud and how it can boost your flexibility and efficiency. During sessions and demos, you'll see the ins and outs of some of Google Cloud's most impactful tools, discover how to maximize your VM instances, and explore the best ways to approach your container strategy.
Google Cloud Codelabs and Challenges
Complete a codelab and coding challenge on Google Cloud topics such as Google Cloud Basics, Compute, Data, Mobile, Monitoring, Machine Learning and Networking.
For in-depth Google Cloud tutorials and the latest Google Cloud news, tune into our Google Cloud Platform YouTube channel!
Google Pay lets your customers pay with the press of a button — using payment methods saved to their Google Account. Learn how to integrate the Google Pay APIs for web and Android.
Google Workspace, formerly known as G Suite, includes all of the productivity apps you know and love—Gmail, Calendar, Drive, Docs, Sheets, Slides, Meet, and many more.
The Google Workspace Developer Platform is a collection of tools and resources that let you customize, extend, and integrate with Google Workspace. Low-code tools such as Apps Script enable you to build customizations that automate routine tasks, and professional resources such as Add-ons and APIs enable software vendors to build applications that extend and integrate with Google Workspace.
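As a small taste of that low-code side, here is a hypothetical Apps Script custom function (the name and logic are invented for illustration, not part of any Google API). Because custom functions are plain JavaScript, this one could be called from a Sheets cell as =DISCOUNTED_PRICE(A1, B1) and also runs anywhere JavaScript runs:

```javascript
/**
 * Returns a price after applying a percentage discount.
 * Hypothetical example: usable from a Sheets cell as
 * =DISCOUNTED_PRICE(200, 25), which yields 150.
 */
function DISCOUNTED_PRICE(price, discountPercent) {
  if (discountPercent < 0 || discountPercent > 100) {
    throw new Error("discountPercent must be between 0 and 100");
  }
  return price * (1 - discountPercent / 100);
}
```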
Learn Apps Script fundamentals with codelabs
If you're new to Apps Script, you can learn the basics using our Fundamentals of Apps Script with this Google Sheets codelab playlist.
Stay updated on the newest Google Workspace developer tools and tutorials by following us on our YouTube channel and Twitter!
Material Design is a design system, created by Google and backed by open-source code, that helps teams build high-quality digital experiences. Whether you’re building for Android, Flutter, or the web we have guidelines, code, and resources to help you build beautiful products, faster. We’ve compiled the best beginner resources here:
If you’re interested in learning more about Material Design, subscribe to the brand new YouTube channel for updates and Q&A-format videos.
TensorFlow is an open source platform for machine learning to help you solve challenging, real-world problems with an entire ecosystem of tools, libraries and community resources.
Teachable Machine
Teachable Machine is a web tool that makes creating machine learning models fast, easy, and accessible to everyone. See how you can create a machine learning model without writing any code, save models, and use them in your own future projects. The models you make with it are real TensorFlow.js models that work anywhere JavaScript runs.
Machine Learning Foundations:
Machine Learning Foundations is a free training course where you’ll learn the fundamentals of building machine learned models using TensorFlow. Heads up! You will need to know a little bit of Python.
Subscribe to our YouTube channel and Twitter account for all the latest in machine learning.
Here are other ways our friends within the ecosystem are supporting #CSEdWeek.
Google for Education
Experiments with Google
A collection of innovative projects using Chrome, Android, AI, AR, Web, and more, along with helpful tools and resources to inspire others to create new experiments. New projects are added weekly, like this machine learning collaboration between Billie Eilish, YouTube Music and Google Creative Lab.
Posted by Murat Yener, Developer Advocate
Today marks the release of the first Canary version of Android Studio Arctic Fox (2020.3.1), together with Android Gradle plugin (AGP) version 7.0.0-alpha01. With this release we are adjusting the version numbering for our Gradle plugin and decoupling it from the Android Studio versioning scheme. In this blog post we'll explain the reasons for the change, as well as give a preview of some important changes we're making to our new, incubating Android Gradle plugin APIs and DSL.
With AGP 7.0.0 we are adopting the principles of semantic versioning. What this means is that only major version changes will break API compatibility. We intend to release one major version each year, right after Gradle introduces its own yearly major release.
Moreover, in the case of a breaking change, we will ensure that the removed API is marked with @Deprecated about a year in advance and that its replacement is available at the same time. This will give developers roughly a year to migrate and test their plugins with the new API before the old API is removed.
Alignment with Gradle's version is also why we're skipping versions 5 and 6, and moving directly to AGP 7.0.0. This alignment indicates that AGP 7.x is meant to work with Gradle 7.x APIs. While it may also run on Gradle 8.x, this is not guaranteed and will depend on whether 8.x removes APIs that AGP relies on.
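The core rule of semantic versioning described above, that only major version changes break API compatibility, boils down to a one-line check. The helper below is an illustrative sketch, not part of any Gradle or AGP API:

```javascript
// Under semantic versioning, two versions are API-compatible
// when their major components match: 7.0.0 and 7.1.2 are
// compatible, while 7.x and 8.x are not guaranteed to be.
function isApiCompatible(versionA, versionB) {
  const major = v => Number(v.split(".")[0]);
  return major(versionA) === major(versionB);
}
```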
With this change, the AGP version number will be decoupled from the Android Studio version number. However, we will keep releasing Android Studio and the Android Gradle plugin together for the foreseeable future.
Compatibility between Android Studio and Android Gradle plugin remains unchanged. As a general rule, projects that use stable versions of AGP can be opened with newer versions of Android Studio.
You can still use Java programming language version 8 with AGP 7.0.0-alpha01, but we are changing the minimum required Java programming language version to Java 11, starting with AGP 7.0.0-alpha02. We are announcing this early in the Canary schedule and many months ahead of the stable release to allow developers time to get ready.
This release of AGP also introduces some API changes. As a reminder, a number of APIs that were introduced in AGP 4.1 were marked as incubating and were subject to change. In fact, in AGP 4.2 some of these APIs have changed. The APIs that are currently incubating do not follow the deprecation cycle explained above.
Here is a summary of some of the important APIs involved in these changes: onVariants, onProperties, onVariantProperties, beforeVariants, androidComponents, VariantSelector, withBuildType, withName, withFlavor, afterEvaluate, beforeUnitTest, unitTest, beforeAndroidTest, androidTest, Variant, VariantBuilder, and VariantProperties.
Let’s take a look at some of these changes. Here is a sample onVariants block which targets the release build. In the following example, the onVariants block is changed to beforeVariants and uses a variant selector.
```
android {
    ...
    //onVariants.withName("release") {
    //    ...
    //}
    ...
}
androidComponents {
    val release = selector().withBuildType("release")
    beforeVariants(release) { variant ->
        ...
    }
}
```
Similarly, the onVariantProperties block is changed to onVariants.
```
android {
    ...
    //onVariantProperties {
    //    ...
    //}
    ...
}
androidComponents.onVariants { variant ->
    ...
}
```
Note, this customization is typically done in a plugin and should not be located in build.gradle. We are moving away from using functions with receivers which suited the DSL syntax but are not necessary in the plugin code.
We are planning to make these APIs stable with AGP 7.0.0 and all plugin authors must migrate to the new androidComponents. If you want to avoid dealing with such changes, make sure your plugins only use stable APIs and do not depend on APIs marked as incubating.
If you want to learn more about other changes coming with this release, make sure to take a look at the release notes.
Java is a registered trademark of Oracle and/or its affiliates.
Posted by Jamal Eason, Product Manager
Today marks the release of the first version of Android Studio Arctic Fox (2020.3.1) on the canary channel, together with Android Gradle plugin (AGP) version 7.0.0-alpha01. With this release, we are adjusting the version numbering of Android Studio and our Gradle plugin. This change decouples the Gradle plugin from the Android Studio versioning scheme and brings more clarity to which year and IntelliJ version Android Studio aligns with for each release.
With Android Studio Arctic Fox (2020.3.1) we are moving to a year-based system that is more closely aligned with IntelliJ IDEA, the IDE upon which Android Studio is built. We are changing the version numbering scheme to encode a number of important attributes: the year, the version of IntelliJ it is based on, plus feature and patch level. With this name change you can quickly figure out which version of the IntelliJ platform you are using in Android Studio. In addition, each major version will have a canonical codename, starting with Arctic Fox and then proceeding alphabetically, to help make it easy to see which version is newer.
We recommend that you use the latest version of Android Studio so that you have access to the latest features and quality improvements. To make it easier to stay up to date, we made the version change to clearly decouple Android Studio from your Android Gradle plugin version. An important detail to keep in mind is that updating the IDE has no impact on the way the build system compiles and packages your app; changes to the app build process and to your APKs/bundles are dictated by your project's AGP version instead. Therefore, it is safe to update your Android Studio version, even late in your development cycle, because your project's AGP version can be updated on a different cadence than your Android Studio version. Lastly, the new version system makes it even easier for you or your team to run both the stable and preview versions of Android Studio at the same time on your app project, as long as you keep the AGP version on a stable release.
In the previous numbering system, this release would have been Android Studio 4.3. With the new numbering system, it is now Android Studio Arctic Fox (2020.3.1) Canary 1, or just Arctic Fox.
Going forward, here is how the Android Studio version number scheme will work:
<Year of IntelliJ Version>.<IntelliJ major version>.<Studio major version>
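As an illustrative sketch (not an official API), a helper that unpacks this scheme might look like the following, where the field names are invented for the example:

```javascript
// Splits a version string following the new scheme,
// <Year of IntelliJ Version>.<IntelliJ major version>.<Studio major version>,
// into its three components. "2020.3.1" is the Arctic Fox release.
function parseStudioVersion(version) {
  const [year, intellijMajor, studioMajor] = version.split(".").map(Number);
  return { year, intellijMajor, studioMajor };
}
```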
With AGP 7.0.0 we are adopting the principles of semantic versioning, and aligning with the Gradle version that AGP requires. Compatibility between Android Studio and Android Gradle plugin remains unchanged. Projects that use stable versions of AGP can be opened with newer versions of Android Studio.
We will publish another post soon with more details about our AGP versioning philosophy and what is new in AGP 7.0.
We are in early days in the feature development phase for Arctic Fox, but we have invested much of our time in addressing over 200 quality improvements and bugs across a wide range of areas in the IDE from the code editor, app inspection tools, layout editor to the embedded emulator. Check out the release notes for the specific bug fixes.
For those trying out Jetpack Compose, we have a host of new updates, like deploying @Preview composables to a device or emulator:
Deploy preview composable
Also try out the new Layout Validation Tool in Arctic Fox to see how your layout responds to various screen sizes, font sizes, and Android Color Correction/Color Blind Modes. You can access this via the Layout Validation tool window when you are using the Layout Editor.
Layout Validation
Lastly, for those running macOS (other platforms are coming soon) with the latest Android platform tools and an Android 11 device, you can try out the IDE integration for the Wireless ADB feature by going to the Run device selection dialog → Pair Devices Using Wi-Fi.
Menu to access Wireless ADB feature
Wireless ADB Setup Window
If you want to learn more about other detailed changes coming with this release for both Android Studio and the Android Gradle plugin, make sure to take a look at the release notes.
Posted by Google Creative Lab
“Bad Guy” by Billie Eilish is one of the most-covered songs on YouTube, inspiring thousands of fans to upload their own versions. To celebrate all these covers, YouTube and Google Creative Lab built an AI experiment to combine all of them seamlessly in the world’s first infinite music video: Infinite Bad Guy. The experience aligns every cover to the same beat, no matter its genre, language, or instrumentation.
How do you find “Bad Guy” covers amid the billions of videos on YouTube? Just searching for “Bad Guy” would return false positives, like videos of Billie being interviewed about the song, or miss covers that didn’t use the song name in their titles. YouTube’s Content ID system allows us to find videos that match the musical composition “Bad Guy” and also allows us to narrow our search to videos that appear to be performances or creative interpretations of the song. That way, we can also avoid videos where “Bad Guy” was just background music. We continue to run this search daily, collecting an ever-expanding list of potential covers to use in the experience.
A key part of the experience is being able to jump from cover to cover seamlessly. But fan covers of “Bad Guy” vary widely. Some might be similar to the original, like a dance video set to Billie’s track. Some might vary more in tempo and instrumentation, like a heavy metal cover. And others might diverge greatly from the original, like a clarinet version with no lyrics. How can you get all these covers on the same beat? After trying several approaches like dynamic time warping and chord recognition, we’ve found the most success with a recurrent neural network trained to recognize sections and beats of “Bad Guy.” We collaborated with our friends at IYOYO on cover alignment and they have a great writeup about the process.
Finding and aligning the covers is a fascinating research problem, but the crucial final step is making them explorable to everyone. We’ve tried to make it intuitive and fun to navigate all the infinite combinations, while keeping latency low so the song never drops a beat.
The experience centers around three YouTube players, a number we settled on after a lot of experimentation. Initially we thought more players would be more interesting, but the experience got chaotic and slow. Around the players we’ve added discoverable features like the hashtag drawer and stats page. Video game interfaces have been a big inspiration for us, as they combine multiple interactions in a single dashboard. We’ve also added an autoplay mode for users who want to just sit back and be taken through an ever-changing mix of covers.
We’re excited about how Infinite Bad Guy showcases the incredibly diverse talent of YouTube and the potential machine learning can have for music and creativity. Give it a try and see what beautiful, strange, and brilliant covers you can find.
Posted by Jennifer Kohl, Global Program Manager, Google Developer Groups
Irem presenting at a Google Developer Group event
We recently caught up with Irem Komurcu, a TensorFlow developer and researcher at Istanbul Technical University in Turkey. Irem has been a long-serving member of Google Developer Groups (GDG) Düzce and also serves as a Women Techmakers (WTM) ambassador. Her work with TensorFlow has received several accolades, including being named a Hamdi Ulukaya Girişimi fellow. As one of twenty-four young entrepreneurs selected, she was flown to New York City last year to learn more about business and receive professional development.
With all this experience to share, we wanted you to hear how she approaches pursuing a career in tech, hones her TensorFlow skills with the GDG community, and thinks about how upcoming programmers can best position themselves for success. Check out the full interview below for more.
I first became interested in tech when I was in high school and went on to study computer engineering. At university, I had an eye-opening experience when I traveled from Turkey to the Google Developer Day event in India. It was here where I observed various code languages, products, and projects that were new to me.
In particular, I saw TensorFlow in action for the first time. Watching the powerful machine learning tool truly sparked my interest in deep learning and project development.
I have studied many different aspects of TensorFlow and ML. My first work was on voice recognition and deep learning. However, I am now working as a computer vision researcher conducting various segmentation, object detection, and classification processes with TensorFlow. In my free time, I write various articles about best practices and strategies to leverage TensorFlow in ML.
I kicked off my studies on deep learning on tensorflow.org. It’s a basic first step, but a powerful one. There were so many blogs, codes, examples, and tutorials for me to dive into. Both the Google Developer Group and TensorFlow communities also offered chances to bounce questions and ideas off other developers as I learned.
Between these technical resources and the person-to-person support, I was lucky to start working with the GDG community while also taking the first steps of my career. There were so many opportunities to meet people and grow all around.
I love being in a large community with technology-oriented people. GDG is a network of professionals who support each other, and that enables people to develop. I am continuously sharing my knowledge with other programmers as they simultaneously mentor me. The chance for us to collaborate together is truly fulfilling.
The number of women supported in science, technology, engineering, and mathematics (STEM) is low in Turkey. To address this, I partner with Women Techmakers (WTM) to give educational talks on TensorFlow and machine learning to women who want to learn how to code in my country. So many women are interested in ML, but just need a friendly, familiar face to help them get started. With WTM, I’ve already given over 30 talks to women in STEM.
Keep researching new things. Read everything you can get your hands on. Technology has been developing rapidly, and it is necessary to make sure your mind can keep up with the pace. That’s why I recommend communities like GDG that help make sure you’re up to date on the newest trends and learnings.
Want to work with other developers like Irem? Then find the right Google Developer Group for you, here.
Posted by Erica Hanson, Global Program Manager, Google Developer Student Clubs
(Irene (left) and her DSC team from the Polytechnic University of Cartagena; photo taken prior to COVID-19)
Irene Ruiz Pozo is a former Google Developer Student Club (DSC) Lead at the Polytechnic University of Cartagena in Murcia, Spain. As one of the founding members, Irene has seen the club grow from just a few student developers at her university to hosting multiple learning events across Spain. Recently, we spoke with Irene to understand more about the unique ways in which her team helped local university students learn more about Google technologies.
Irene mentioned two fascinating projects that she had the chance to work on through her DSC at the Polytechnic University of Cartagena. The first was a learning lab that helped students understand how to use 360º cameras and 3D scanners for machine learning.
(A DSC member giving a demo of a 360º camera to students at the National Museum of Underwater Archeology in Cartagena)
The second was a partnership with the National Museum of Underwater Archeology, where Irene and her team created an augmented reality game that let students explore a digital rendition of the museum’s exhibitions.
(An image from the augmented reality game created for the National Museum of Underwater Archeology)
In the AR experience created by Irene’s team, users can create their own character and move throughout the museum, exploring different virtual renditions of exhibits in a video game-like setting.
One particularly memorable experience for Irene and her DSC was participating in Google’s annual programming competition, Hash Code. As Irene explained, the event allowed developers to share their skills and connect in small teams of two to four programmers. They would then come together to tackle engineering problems like how to best design the layout of a Google data center, create the perfect video streaming experience on YouTube, or establish the best practices for compiling code at Google scale.
(Students working on the Hash Code competition; photo taken prior to COVID-19)
To Irene, the experience felt like a live look at being a software engineer at Google. The event taught her and her DSC team that while programming skills are important, communication and collaboration skills are what really help solve problems. For Irene, the experience truly bridged the gap between theory and practice.
(Irene’s team working with other student developers; photo taken before COVID-19)
After the event, Irene felt that if a true mentorship network were established among other DSCs in Europe, students would feel more comfortable partnering with one another to talk about common problems they faced. Inspired, she began to build out her mentorship program, which included a podcast where student developers could collaborate on projects together.
The podcast, which just released its second episode, also highlights upcoming opportunities for students. In the most recent episode, Irene and friends dive into how to apply for Google Summer of Code Scholarships and talk about other upcoming open source project opportunities. Organizing these types of learning experiences for the community was one of the most fulfilling parts of working as a DSC Lead, according to Irene. She explained that the podcast has been an exciting space that allows her and other students to get more experience presenting ideas to an audience. Through this podcast, Irene has already seen many new DSC members eager to join the conversation and collaborate on new ideas.
As Irene now looks out on her future, she is excited for all the learning and career development that awaits her from the entire Google Developer community. Having graduated from university, Irene is now a Google Developer Groups (GDG) Lead - a program similar to DSC, but created for the professional developer community. In this role, she is excited to learn new skills and make professional connections that will help her start her career.
Are you also a student with a passion for code? Then join a local Google Developer Student Club near you, here.
Posted by Patricia Correa - Director, Global Developer Marketing
Editor's note: Last night, on the eve of Black Consciousness Day in Brazil, a Black man, João Alberto Silveira Freitas, died after being beaten at a supermarket in Porto Alegre, in the south of the country. We would like to express our sentiments to the Black community in Brazil.
Today is Black Consciousness Day in Brazil, a country where over 55% of the population identifies as Black. To commemorate, we are showcasing local developers who create apps, games and websites. Watch this video to hear about their journeys, tips and passions.
Meet the founders & developers
Vitor Eleotério, Software Engineer at iFood, a popular food delivery app in Brazil. As much as he liked technology, his colleagues used to mock and discourage him. Vitor heard many times that he would make a great security guard, as he is tall and strong, and people kept saying that IT was only for rich people. With his passion and hard work, he proved them all wrong. Now, he wants to motivate others to also follow their dreams.
Priscila Aparecida Ferreira Theodoro, Software Engineer at Centauro, a large sports goods retailer in Brazil. Her first contact with technology happened while working at an organization that teaches programming. At 38 years old, Priscila decided to completely change careers and learn how to code. She now teaches programming to women, mentors youths, and is involved in a podcast project for women developers.
Marcos Pablo, Co-founder & CTO at G4IT Solutions, a platform that helps companies to manage and automate the work schedules of off-site teams. It was his mother who encouraged him to enter the tech world when he was in high school. By the time he was 19 years old, he was already managing a small tech company.
Iago Silva Dos Santos, Co-founder & CEO of Trazfavela Delivery, a platform for deliveries to and from favelas. He wanted to help his community, including drivers, retailers and people who wanted easier access to goods. TrazFavela is one of the first companies to receive investment from the Google for Startups Black Founders Fund in Brazil.
Tiago Santos, Founder & CEO of Husky, an app for Brazilian professionals to receive international payments. As a software developer working with international clients, Tiago had experienced firsthand how difficult it was to get payments from abroad. With his friend Mauricio Carvalho, he created the app so professionals can focus on their careers instead of wasting time on bureaucratic tasks.
Ronaldo Valentino da Cruz, Co-founder & CEO of Oktagon, a games studio that produces indie titles and games for clients. He learned how to program when he was 14 and started working with game development in 2002 at the Universidade Federal Tecnológica do Paraná. So far, the company has launched well-received mid-core titles and worked with publishers and clients all over the world.
Nohoa Arcanjo Allgayer, Co-founder & CMO of Creators.LLC, a network that connects creative talent with potential clients. For Nohoa, it was not an easy decision to quit her previous comfortable corporate job to set up this startup. Now she is proud of the risk she took, as it opened up a world of opportunity and endless learning. She took part in the Google for Startups Residency Program. Creators.LLC was one of the first startups to receive capital from the Google for Startups Black Founders Fund in Brazil.
Samuel Matias, Software Engineer at iFood. He became a developer in 2015 and is very active in the Flutter community. He frequently shares his learnings through online articles and talks.
Aline Bezzoco, Founder & Developer of Ta tudo bem?, a suicide prevention app. She feels that the best thing about technology is being able to create solutions that help people. Her app helps those struggling with mental health problems feel calmer and less anxious, and encourages them to ask for help.
Egio Arruda Junior, Co-founder & CEO of EasyCrédito, a platform that facilitates loans. The main focus is to help those who don’t even have bank accounts. Egio is passionate about innovation and is always looking to create something new. He took part in two Google for Startups programs - Residency and Accelerator.
Márcio Dos Santos, Co-founder & CTO at Facio, a platform that provides loans and financial education to employees in Brazil. No one among his family and friends had completed a higher education degree. He decided to study Computer Science because he was a video game fan, and at university, a professor selected him for an internship in the United States. Currently based in Seattle, USA, Márcio welcomes requests for advice from those at the beginning of their careers.
Danielle Monteiro, Data Engineer & Founder of Dani.Academy, an educational platform with free and paid courses about data, architecture, NoSQL and infrastructure. She was the first member of her family to start and finish college. She has now won many awards in and outside Brazil, and is a Google for Startups Mentor. Dani is passionate about giving back to society by sharing her knowledge through her blog, lectures, courses and articles.
---
These are just some of the stories that show that the tech world is not for a few but for everyone. Together we can create change and see more Black people finding opportunities in tech. Celebrate these stories by sharing the video on Twitter, Instagram, Facebook & LinkedIn.
Since we launched Coral back in March 2019, we’ve added a number of new product form factors to accommodate the many ways users are adding on-device ML to their products. We've also streamlined the ML workflow and added capabilities like model pipelining with multiple Edge TPUs for an easier and more robust developer experience. From this, we’ve helped enable amazing use cases, from Olea Edge’s smart water meters that prevent water loss, to Farmwave’s systems for improving harvest yield, to noise cancellation in Google’s own Series One meeting kits.
This week, we’ll begin shipping the Coral Accelerator Module, a multi-chip module that combines the Edge TPU and its power circuitry into a solderable package. The module exposes PCIe and USB2 interfaces, which make it even easier to integrate Coral into custom designs. Several companies are already taking advantage of the module’s compact size and capabilities in new products coming to market. Read more about how Gumstix, STD, Siana Systems and IEI are using our module.
And in December, we’ll begin shipping the Dev Board Mini, a smaller, more power-efficient, and lower-cost board with a more traditional, flattened single-board computer design. The Dev Board Mini pairs a MediaTek 8167 SoC with the Coral Accelerator Module over USB 2 and is a great way to evaluate the module as the center of a project or deployment.
You can see the new Dev Board Mini and Accelerator Module in action in the latest episode of Level Up, where Markku Lepisto controls his studio lights with speech commands.
To get updates on when the board will be available for purchase and other Coral news, sign up for our newsletter.
We recently announced a new version of the Coral ML APIs and tools. This release brings the C++ API into parity with Python and makes it more modular, reusable, and performant, while eliminating unnecessary abstractions and surfaces in favor of native TensorFlow Lite APIs. This release also graduates the Model Pipelining API out of beta and introduces a new model partitioner that automatically partitions models based on profiling data, delivering up to 10x better performance.
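To give a feel for what profiling-based partitioning means, here is a rough, standalone sketch of the underlying idea (this is not Coral's actual implementation; the `partition` function name and the per-layer latencies are hypothetical): given profiled per-layer costs, split the model into contiguous segments, one per Edge TPU in the pipeline, so that the slowest segment (the pipeline bottleneck) is as fast as possible. A binary search over the bottleneck cost finds the optimal split:

```python
# Illustrative sketch only: partition a model's layers into at most k
# contiguous segments so the most expensive segment is as cheap as possible.
# In a pipelined deployment, that segment bounds the overall throughput.

def partition(layer_costs, k):
    """Return lists of layer indices minimizing the maximum segment cost."""
    def feasible(bound):
        # Can the layers be split into <= k segments, each costing <= bound?
        segments, current = 1, 0
        for c in layer_costs:
            if current + c > bound:
                segments += 1
                current = 0
            current += c
        return segments <= k

    # Binary search on the bottleneck cost: the answer lies between the
    # single most expensive layer and the cost of running everything serially.
    lo, hi = max(layer_costs), sum(layer_costs)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1

    # Rebuild the actual split for the optimal bound.
    splits, segment, current = [], [], 0
    for i, c in enumerate(layer_costs):
        if current + c > lo:
            splits.append(segment)
            segment, current = [], 0
        segment.append(i)
        current += c
    splits.append(segment)
    return splits

# Hypothetical profiled per-layer latencies (ms), split across 3 Edge TPUs.
costs = [4, 2, 7, 3, 3, 6, 2, 1]
print(partition(costs, 3))  # → [[0, 1], [2, 3], [4, 5, 6, 7]]
```

Here the heaviest segment costs 12 ms versus 28 ms for the unpartitioned model, which is the kind of gain a profiling-driven partitioner is after; the real tool additionally accounts for on-chip memory limits and inter-TPU transfer overhead.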
We’ve added a pre-trained version of MobileDet — a state-of-the-art object detection model for mobile systems — into our model portfolio. We’re migrating our model-development workflow to TensorFlow 2, and we’re including a handful of updated or new models based on the TF2 Keras framework. For details, check out the full announcement on the TensorFlow blog.
We’re also excited to see great developer tools coming from our ecosystem partners. For example, PerceptiLabs offers a visual API for building TensorFlow models and recently published a new demo that trains a machine learning model, optimized for the edge with Coral, to identify sign language.
The MRQ design from SigFox enables prototyping at the edge for low-bandwidth IoT solutions with Coral
And SigFox released a radio transceiver board that stacks on either the Coral Dev Board or Dev Board Mini. This allows small data payloads to be transmitted across low power, long range radio networks for use cases like smart cities, fleet management, asset tracking, agriculture and energy. The PCB design will be offered as a free download on SigFox’s website. Google Cloud Solutions Architect Markku Lepisto will present the new design today, in the opening keynote at SigFox Connect.
The tool, from Farmwave, includes custom-developed ML models, a harvester-mounted box with cameras, an in-cab display, and on-device AI acceleration from Coral.
Just in time for harvest, we wanted to share a story about how Farmwave is using Coral to improve the efficiency of farm equipment and reduce food waste. Traditional yield loss analysis involves hand-counting grains of corn left on the ground mid-harvest. It’s a time- and labor-intensive task, and not feasible for farmers who measure the value of their half-million-dollar combines in minutes spent running them.
By leveraging Coral’s on-device AI capabilities, Farmwave was able to build a system that automates the count while the machine is running, allowing farmers to make real-time adjustments to harvesting machines in response to field conditions, which can make a big difference in yield.
Kura Sushi designed their intelligent QA system using a Raspberry Pi paired with the Coral USB Accelerator
Kura Revolving Sushi Bar in Japan has always been committed to the highest standards of health and safety for its customers. Known for their tech-forward approach, Kura has dabbled in sushi-making robots, an automated prize machine called Bikkura-pon, and a patented dome-shaped dish cover, aptly dubbed Mr. Fresh. But most recently, Kura has used Coral to develop an AI-powered system that not only facilitates efficiency for better customer experiences, but also enables better tracking to prevent foodborne illnesses.
While this year has presented the world with many obstacles, we’ve been impressed by the new ideas and innovations coming forward through technology. By providing the necessary tools and technology for edge AI, we strive to empower society to create affordable, adaptable, and intelligent systems.
We are excited to share all that Coral has to offer as we evolve our platform. For a list of worldwide distributors, system integrators and partners, visit the Coral partnerships page.
Please visit Coral.ai to discover more about our edge ML platform and share your feedback at coral-support@google.com. To receive future Coral updates directly in your inbox, sign up for our newsletter.