Posted by Jason Scott, Head of Startup Developer Ecosystem, U.S., Google
At Google, we have long understood that voice user interfaces can help millions of people accomplish their goals more effectively. Our journey in voice began in 2008 with Voice Search -- with notable milestones since, such as building our first deep neural network in 2012, our first sequence-to-sequence network in 2015, launching Google Assistant in 2016, and processing speech fully on device in 2019. These building blocks have enabled the unique voice experiences across Google products that our users rely on every day.
Voice AI startups play a key role in building and delivering innovative voice-enabled experiences to users, and Google is committed to helping tech startups deliver high-impact solutions in the voice space. This month, we are excited to announce the Google for Startups Accelerator: Voice AI program, which will bring together the best of Google’s programs, products, people and technology with a joint mission to advance and support the most promising voice-enabled AI startups across North America.
As part of this Google for Startups Accelerator, selected startups will be paired with experts to help tackle the top technical challenges facing their startup. With an emphasis on product development and machine learning, founders will connect with voice technology and AI/ML experts from across Google to take their innovative solutions to the next level.
We are proud to launch our first ever Google for Startups Accelerator: Voice AI -- building upon Google’s longstanding efforts to advance the future of voice-based computing. The accelerator will kick off in March 2021, bringing together a cohort of 10 to 12 innovative voice technology startups. If this sounds like your startup, we'd love to hear from you. Applications are open until January 28, 2021.
Posted by Louis Wasserman, Software Engineer and James Ward, Developer Advocate
Kotlin is now the fourth "most loved" programming language with millions of developers using it for Android, server-side / cloud backends, and various other target runtimes. At Google, we've been building more of our apps and backends with Kotlin to take advantage of its expressiveness, safety, and excellent support for writing asynchronous code with coroutines.
Since everything in Google runs on top of gRPC, we needed an idiomatic way to do gRPC with Kotlin. Back in April 2020 we announced the open sourcing of gRPC Kotlin, something we'd originally built for ourselves. Since then we've seen over 30,000 downloads and usage in Android and Cloud. The community and our engineers have been working hard polishing docs, squashing bugs, and making improvements to the project; culminating in the shiny new 1.0 release! Dive right in with the gRPC Kotlin Quickstart!
For those new to gRPC & Kotlin, let's do a quick run-through of some of the awesomeness. gRPC builds on Protocol Buffers, aka "protos" (a language-agnostic, high-performance data interchange format), and adds the network protocol for efficiently communicating with protos. From a proto definition, the servers, clients, and data transfer objects can all be generated. Here is a simple gRPC proto:
```
message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}
```
In a Kotlin project you can then define the implementation of the Greeter's SayHello service with something like:
```
object : GreeterGrpcKt.GreeterCoroutineImplBase() {
    override suspend fun sayHello(request: HelloRequest) = HelloReply
        .newBuilder()
        .setMessage("hello, ${request.name}")
        .build()
}
```
You'll notice that the function has `suspend` on it because it uses Kotlin's coroutines, a built-in way to handle async / reactive IO. Check out the server example project.
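For context, here is a minimal sketch of how an implementation like this can be hosted using the standard `io.grpc.ServerBuilder` API; the `HelloWorldService` class name and the port are illustrative rather than taken from the example project:

```kotlin
import io.grpc.ServerBuilder

// Illustrative named wrapper for the Greeter implementation shown above.
class HelloWorldService : GreeterGrpcKt.GreeterCoroutineImplBase() {
    override suspend fun sayHello(request: HelloRequest): HelloReply = HelloReply
        .newBuilder()
        .setMessage("hello, ${request.name}")
        .build()
}

fun main() {
    // Bind the service to a port and block until the server terminates.
    val server = ServerBuilder
        .forPort(50051)
        .addService(HelloWorldService())
        .build()
        .start()
    server.awaitTermination()
}
```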
With gRPC, the client "stubs" are generated, making it easy to connect to gRPC services. For the proto above, the client stub can be used in Kotlin with:
```
val stub = GreeterCoroutineStub(channel)
val request = HelloRequest.newBuilder().setName("world").build()
val response = stub.sayHello(request)
println("Received: ${response.message}")
```
In this example the `sayHello` method is also a `suspend` function utilizing Kotlin coroutines to make the reactive IO easier. Check out the client example project.
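If you're wondering where the `channel` comes from, it is an ordinary gRPC managed channel. A minimal sketch, assuming a plaintext connection to a local server on port 50051:

```kotlin
import io.grpc.ManagedChannelBuilder

// Plaintext is fine for local experiments; use TLS for real deployments.
val channel = ManagedChannelBuilder
    .forAddress("localhost", 50051)
    .usePlaintext()
    .build()
```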
Kotlin also has an API for doing reactive IO on streams (as opposed to single requests), called Flow. gRPC Kotlin generates client and server stubs using the Flow API for stream inputs and outputs. The proto can define a service with client streaming, server streaming, or bidirectional streaming, like:
```
service Greeter {
  rpc SayHello (stream HelloRequest) returns (stream HelloReply) {}
}
```
In this example, the server's `sayHello` can be implemented with Flows:
```
object : GreeterGrpcKt.GreeterCoroutineImplBase() {
    override fun sayHello(requests: Flow<HelloRequest>): Flow<HelloReply> {
        return requests.map { request ->
            println(request)
            HelloReply.newBuilder().setMessage("hello, ${request.name}").build()
        }
    }
}
```
This example just transforms each `HelloRequest` item on the flow to an item in the output / `HelloReply` Flow.
The bidirectional stream client is similar to the unary client, but instead it passes a Flow to the `sayHello` stub method and then operates on the returned Flow:
```
val stub = GreeterCoroutineStub(channel)

val helloFlow = flow {
    while (true) {
        delay(1000)
        emit(HelloRequest.newBuilder().setName("world").build())
    }
}

stub.sayHello(helloFlow).collect { helloResponse ->
    println(helloResponse.message)
}
```
In this example the client sends a `HelloRequest` to the server via Flow, once per second. When the client gets items on the output Flow, it just prints them. Check out the bidi-streaming example project.
As you've seen, creating data transfer objects and services around them is made elegant and easy with gRPC Kotlin. But there are a few other exciting things we can do with this...
Android Clients
Protobuf compilers can have a "lite" mode which generates smaller, higher performance classes which are more suitable for Android. Since gRPC Kotlin uses gRPC Java it inherits the benefits of gRPC Java's lite mode. The generated code works great on Android and there is a `grpc-kotlin-stub-lite` artifact which depends on the associated `grpc-protobuf-lite`. Using the generated Kotlin stub client is just like on the JVM. Check out the stub-android example and android example.
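As a sketch, wiring the lite artifacts into an Android module's Gradle Kotlin DSL build might look like the following; the version numbers (and the choice of the OkHttp transport) are illustrative assumptions, so check the project README for the current coordinates:

```kotlin
dependencies {
    // gRPC Kotlin stubs built against protobuf lite (version illustrative)
    implementation("io.grpc:grpc-kotlin-stub-lite:1.0.0")
    // A transport commonly used on Android (version illustrative)
    implementation("io.grpc:grpc-okhttp:1.33.1")
}
```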
GraalVM Native Image Clients
The gRPC lite mode is also a great fit for GraalVM Native Image, which turns JVM-based applications into ahead-of-time compiled native images, i.e. they run without a JVM. These applications can be smaller, use less memory, and start much faster, so they are a good fit for auto-scaling and Command Line Interface environments. Check out the native-client example project, which produces a nice & small 14MB executable client app (no JVM needed) and starts, connects to the server, makes a request, handles the response, and exits in under 1/100th of a second using only 18MB of memory.
Google Cloud Ready
Backend services created with gRPC Kotlin can easily be packaged for deployment in Kubernetes, Cloud Run, or really anywhere you can run Docker containers or JVM apps. Cloud Run is a cloud service that runs Docker containers and scales automatically based on demand, so you only pay when your service is handling requests. If you'd like to give a gRPC Kotlin service a try on Cloud Run:
```
export PROJECT_ID=PUT_YOUR_PROJECT_ID_HERE

docker run -it gcr.io/$PROJECT_ID/grpc-hello-world-mvn \
  "java -cp target/classes:target/dependency/* io.grpc.examples.helloworld.HelloWorldClientKt YOUR_CLOUD_RUN_DOMAIN_NAME"
```
Check out more Cloud Run gRPC Kotlin examples.
Thank You!
We are super excited to have reached 1.0 for gRPC Kotlin and are incredibly grateful to everyone who filed bugs, sent pull requests, and gave the pre-releases a try! There is still more to do, so if you want to help or follow along, check out the project on GitHub.
Also huge shoutouts to Brent Shaffer, Patrice Chalin, David Winer, Ray Tsang, Tyson Henning, and Kevin Bierhoff for all their contributions to this release!
Posted by Payam Shodjai, Director, Product Management Google Assistant
With 2020 coming to a close, we wanted to reflect on everything we have launched this year to help you, our developers and partners, create powerful voice experiences with Google Assistant.
Today, many top brands and developers turn to Google Assistant to help users get things done on their phones and on Smart Displays. Over the last year, the number of Actions built by third-party developers has more than doubled. Below is a snapshot of some of our partners who’ve integrated with Google Assistant:
Below are a few highlights of what we have launched in 2020:
1. Integrate your Android mobile Apps with Google Assistant
App Actions allow your users to jump right into existing functionality in your Android app with the help of Google Assistant, making it easier for users to find what they're looking for in your app in a natural way by using their voice. We take care of all the Natural Language Understanding (NLU) processing, so an integration can be developed in only a few days. In 2020, we announced that App Actions are now available for all Android developers to voicify their apps and integrate with Google Assistant.
For common tasks such as opening your apps, opening specific pages in your apps, or searching within apps, we introduced Common Intents. For deeper integration, we’ve expanded our vertical-specific built-in intents (BIIs) to cover more than 60 intents across 10 verticals, adding new categories like Social, Games, Travel & Local, Productivity, Shopping and Communications.
For cases where there isn't a built-in intent for your app functionality, you can instead create custom intents that are unique to your Android app. Like BIIs, custom intents follow the actions.xml schema and act as connection points between Assistant and your defined fulfillments.
Learn more about how to integrate your app with Google Assistant here.
2. Create new experiences for Smart Displays
We also announced new developer tools to help you build high quality, engaging experiences to reach users at home by building for Smart Displays.
Actions Builder is a new web-based IDE that provides a graphical interface to show the entire conversation flow. It allows you to manage Natural Language Understanding (NLU) training data and provides advanced debugging tools. And, it is fully integrated into the Actions Console so you can now build, debug, test, release, and analyze your Actions - all in one place.
The Actions SDK provides a file-based representation of your Action and the ability to use a local IDE. The SDK not only enables local authoring of NLU and conversation schemas, but it also allows bulk import and export of training data to improve conversation quality. The Actions SDK is accompanied by a command line interface, so you can build and manage an Action fully in code using your favorite source control and continuous integration tools.
Interactive Canvas allows you to add visual, immersive experiences to Conversational Actions. We announced the expansion of Interactive Canvas to support Storytelling and Education verticals earlier this year.
Continuous Match Mode allows the Assistant to respond immediately to a user’s speech for more fluid experiences by recognizing defined words and phrases set by you.
We also created a central hub for you to find resources to build games on Smart Displays. This site is filled with a game design playbook, interviews with game creators, code samples, tools access, and everything you need to create awesome games for smart displays.
Actions API provides a new programmatic way to test your critical user journeys more thoroughly and effectively, to help you ensure your Action's conversations run smoothly.
The Dialogflow migration tool inside the Actions Console automates much of the work to move projects to the new and improved Actions Builder tool.
We also worked with partners such as Voiceflow and Jovo, to launch integrations to support voice application development on the Assistant. This effort is part of our commitment to enable you to leverage your favorite development tools, while building for Google Assistant.
We launched several other new features that help you build high quality experiences for the home, such as the Media APIs, new and improved voices (available in the Actions Console), and the home storage API.
Get started building for Smart Displays here.
3. Discovery features
Once you build high quality Actions, you are ready for your users to discover them. We have designed new touch points to help your users easily learn about your Actions.
For example, on Android mobile, we’ll recommend relevant App Actions by showing suggestions even when the user doesn't mention the app’s name explicitly. Google Assistant will also suggest apps proactively, depending on individual app usage patterns. Android mobile users will also be able to customize their experience with app shortcuts, creating their own way to automate their most common tasks by setting up quick phrases for app functions they frequently use. By simply saying "Hey Google, shortcuts", they can set up and explore suggested shortcuts in the settings screen. We’ll also make proactive suggestions for shortcuts throughout Google Assistant’s mobile experience, tailored to how you use your phone.
Assistant Links deep link to your conversational Action to deliver rich Google Assistant experiences to your websites, so you can send your users directly to your conversational Actions from anywhere on the web.
We also recently opened two new built-in intents (BIIs) for public registration: Education and Storytelling. Registering your Actions for these intents allows your users to discover them in a simple, natural way through general requests to Google Assistant on Smart Displays. People will then be able to say "Hey Google, teach me something new" and they will be presented with a browsable selection of different education experiences. For stories, users can simply say "Hey Google, tell me a story".
We know you build personalized and premium experiences for your users, and need to make it easy for them to connect their accounts to your Actions. To help streamline this process, we opened two betas for improved account linking flows that allow simple, streamlined authentication via apps.
Looking ahead, we will double down on enabling you, our developers and partners, to build great experiences for Google Assistant and help you reach your users on the go and at home. You can expect to hear more from us on how we are improving the Google Assistant experience to make it easy for Android developers to integrate their Android apps with Google Assistant, and also on how we are helping developers achieve success through discovery and monetization.
We are excited to see what you will build with these new features and tools. Thank you for being a part of the Google Assistant ecosystem. We can’t wait to launch even more features and tools for Android developers and Smart Display experiences in 2021.
Want to stay in the know with announcements from the Google Assistant team? Sign up for our monthly developer newsletter here.
Superheroes are well known for wearing capes, fighting villains and looking to save the world from evil. There are also superheroes who quietly choose to use their superpowers to explain technology to new users, maintain community forums, write blog posts, speak at events, host video series, create demos, share sample code and more. All in the name of helping other developers become more successful by learning new skills, delivering better apps, and ultimately enhancing their careers. At Google, we refer to the latter category of superheroes as Google Developer Experts or “GDEs” for short.
The Google Developer Experts program is a global network of deeply experienced technology experts, thought leaders and influencers who actively support developer communities around the world, sharing their knowledge and enthusiasm for a wide range of topic areas from Android to Angular to Google Assistant to Google Cloud – and of course, Google Workspace. All GDEs are volunteers who not only give their time freely to support others; they also help improve our products by offering insightful feedback and heavily testing new features, often before they are released, helping expand both use cases and audiences along the way.
With the Google Workspace GDE community including members from more than a dozen countries around the world, we wanted to ask Google Workspace Developer experts what excites them about building on Google Workspace, and why they do what they do as our superheroes helping others become better Google Workspace developers. Here’s what a few of these experts had to say:
Six months ago, we introduced the standalone version of the ML Kit SDK, making it even easier to integrate on-device machine learning into mobile apps. Since then, we’ve launched the Digital Ink Recognition and Pose Detection APIs, and also introduced the ML Kit early access program. Today we are excited to add Entity Extraction to the official ML Kit lineup and also debut a new API for our early access program, Selfie Segmentation!
With ML Kit’s Entity Extraction API, you can now improve the user experience inside your app by understanding text and performing specific actions on it.
The Entity Extraction API allows you to detect and locate entities from raw text, and take action based on those entities. The API works on static text and also in real-time while a user is typing. It supports 11 different entities and 15 different languages (with more coming in the future) to allow developers to make any text interaction a richer experience for the user.
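As a rough sketch, calling the API from Kotlin might look like the following; the class and method names reflect our reading of the ML Kit documentation, so treat them as assumptions and check the official reference for the current API:

```kotlin
import com.google.mlkit.nl.entityextraction.EntityExtraction
import com.google.mlkit.nl.entityextraction.EntityExtractionParams
import com.google.mlkit.nl.entityextraction.EntityExtractorOptions

val extractor = EntityExtraction.getClient(
    EntityExtractorOptions.Builder(EntityExtractorOptions.ENGLISH).build()
)

// Make sure the English model is on the device, then annotate some text.
extractor.downloadModelIfNeeded()
    .onSuccessTask {
        extractor.annotate(
            EntityExtractionParams.Builder("Meet me at 123 Main Street at 6pm tomorrow").build()
        )
    }
    .addOnSuccessListener { annotations ->
        for (annotation in annotations) {
            for (entity in annotation.entities) {
                println("Entity type ${entity.type} in \"${annotation.annotatedText}\"")
            }
        }
    }
```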
Supported Entities
(Images courtesy of TamTam)
Our early access partner, TamTam, has been using the Entity Extraction API to provide helpful suggestions to their users during their chat conversations. This feature allows users to quickly perform actions based on the context of their conversations.
While integrating this API, Iurii Dorofeev, Head of TamTam Android Development, mentioned, “We appreciated the ease of integration of the ML Kit ... and it works offline. Clustering the content of messages right on the device allowed us to save resources. ML Kit capabilities will help us develop other features for TamTam messenger in the future.”
Check out their messaging app on Google Play and the App Store today.
(Diagram of underlying Text Classifier API)
ML Kit’s Entity Extraction API builds upon the technology powering the Smart Linkify feature in Android 10+ to deliver an easy-to-use and streamlined experience for developers. For an in-depth review of the Text Classifier API, please see our blog post here.
The neural network annotators/models in the Entity Extraction API work as follows: A given input text is first split into words (based on space separation), then all possible word subsequences of certain maximum length (15 words in the example above) are generated, and for each candidate the scoring neural net assigns a value (between 0 and 1) based on whether it represents a valid entity.
Next, the generated entities that overlap are removed, favoring the ones with a higher score over the conflicting ones with a lower score. Then a second neural network is used to classify the type of the entity as a phone number, an address, or in some cases, a non-entity.
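To make these two steps concrete, here is a small illustrative Kotlin sketch of candidate generation followed by greedy overlap suppression. This is a toy rendering of the described behavior, not ML Kit's actual implementation; the `score` function stands in for the scoring neural net:

```kotlin
data class Candidate(val start: Int, val end: Int, val score: Double)

fun pickEntities(
    words: List<String>,
    maxLen: Int,
    score: (List<String>) -> Double  // stand-in for the scoring neural net
): List<Candidate> {
    // Generate every word subsequence of up to maxLen words and score it.
    val candidates = mutableListOf<Candidate>()
    for (start in words.indices) {
        for (end in start + 1..minOf(start + maxLen, words.size)) {
            candidates += Candidate(start, end, score(words.subList(start, end)))
        }
    }
    // Keep the highest-scoring spans, dropping any that overlap one already kept.
    val kept = mutableListOf<Candidate>()
    for (c in candidates.sortedByDescending { it.score }) {
        if (kept.none { it.start < c.end && c.start < it.end }) kept += c
    }
    return kept
}
```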
The neural network models in the Entity Extraction API are used in combination with other types of models (e.g. rule-based) to identify additional entities in text, such as: flight numbers, currencies and other examples listed above. Therefore, if multiple entities are detected for one text input, the Entity Extraction API can return several overlapping results.
Lastly, ML Kit will automatically download the required language-specific models to the device dynamically. You can also explicitly manage models you want available on the device by using ML Kit’s model management API. This can be useful if you want to download models ahead of time for your users. The API also allows you to delete models that are no longer required.
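As a hedged sketch of what explicit model management can look like in Kotlin (the class and method names are taken from the ML Kit model management docs as we understand them; verify against the current reference):

```kotlin
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.common.model.RemoteModelManager
import com.google.mlkit.nl.entityextraction.EntityExtractionRemoteModel
import com.google.mlkit.nl.entityextraction.EntityExtractorOptions

val modelManager = RemoteModelManager.getInstance()
val spanishModel =
    EntityExtractionRemoteModel.Builder(EntityExtractorOptions.SPANISH).build()

// Download the Spanish model ahead of time, only over Wi-Fi.
modelManager.download(spanishModel, DownloadConditions.Builder().requireWifi().build())
    .addOnSuccessListener { println("Model downloaded") }

// Later, delete the model when it is no longer required.
modelManager.deleteDownloadedModel(spanishModel)
    .addOnSuccessListener { println("Model deleted") }
```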
Selfie Segmentation
With the increased usage of selfie cameras and webcams in today's world, being able to quickly and easily add effects to camera experiences has become a necessity for many app developers.
ML Kit's Selfie Segmentation API allows developers to easily separate the background from a scene and focus on what matters. Adding cool effects to selfies or inserting your users into interesting background environments has never been easier. This API produces great results with low latency on both Android and iOS devices.
(Example of ML Kit Selfie Segmentation)
Key capabilities:
To join our early access program and request access to ML Kit's Selfie Segmentation API, please fill out this form.
Posted by Charles Maxson, Developer Advocate, Google Cloud
It’s been a little over a decade since Apps Script was introduced as the development platform to automate and extend Google Workspace. Since its inception, tens of millions of solution builders ranging from professional developers to business users and hobbyists have adopted Apps Script because its tight integration with Google Workspace, coupled with its relative ease of use, makes building solutions fast and accessible.
Over the course of its history, Apps Script has constantly evolved to keep up with the ever-changing Google Workspace applications themselves, as new features are introduced and existing ones enhanced. Changes to the platform and the development environment itself have been more deliberate, allowing the wide-ranging Apps Script developer audience to rely on a predictable and proven development experience.
Recently, there have been some notable updates. Earlier this year the Apps Script runtime engine went through a major update from the original Rhino runtime to the new V8 version, allowing you to leverage modern JavaScript features in your Apps Script projects. Another milestone launch was the introduction of the Apps Script Dashboard, the ‘homepage’ of Apps Script, where you have access to all your projects and Apps Script platform settings by simply navigating to script.google.com.
But the core of the overall developer experience, the Apps Script IDE (Integrated Development Environment) where developers spend most of their time writing and debugging code, managing versions and exceptions, deploying projects, and so on, has remained relatively unchanged over Apps Script’s long, storied history—that is, until now. As an Apps Script developer, you are about to get more productive!
The new Apps Script IDE features the same rich integration with Google Workspace as before, allowing you to get started building solutions without having to install or configure anything. If you are working on a standalone script project, you can use the Apps Script Dashboard to launch your project directly; if you are working on a container-bound project in Sheets, Slides or Docs, you can do so by selecting Tools > Script editor from their top menus.
Apps Script Project Dashboard
If you launch your project using the Apps Script Dashboard, you will still start off in the Project Details Overview page. The contents of the Project Details page are relatively unchanged, with just a few cosmetic updates: you can still get project info on the number of executions and users for your projects, errors, and OAuth scopes in use. On closer inspection, however, the seemingly subtle change to the left-hand navigation is actually the first big enhancement of the new Apps Script IDE. Previously, when you launched into a project, you still had the Apps Script Dashboard menus, which let you navigate your projects and view all your executions, triggers, and other Apps Script features.
Apps Script Project Details Overview
With the new IDE experience, the prior Apps Script Dashboard menu gives way to a new project-specific menu that lets you focus on your active project. This offers developers a more unified experience of moving between project settings and the code editor without having to navigate menus or bounce back to the applications dashboard. So while it's a subtle change at first glance, it's actually a significant boost for productivity.
If you launch the new IDE as a container bound project, you will immediately enter into the new Apps Script code editor, but the new project menu and developer flow is identical.
One of the more striking updates of the new Apps Script IDE was the work done on the code editor to modernize its look and feel, while also unifying the design with the overall developer experience. More than just aesthetic changes, the new code editor was designed to help developers focus on the most common essential tasks. This includes moving away from the traditional menu elements across the top of the original code editor, to a streamlined set of commands that focus on developer productivity. For example, the new code editor offers a simplified menu system optimized for running and debugging code, while all the other ‘project-related’ functions have been reorganized outside the code editor to the left-hand project navigation bar. This will simplify and set the focus on the core task of writing code, which will assist both new and seasoned Apps Script developers.
Apps Script Code Editor
Behind the fresh new look of the Apps Script code editor, there is a long list of new productivity enhancements that will make any Apps Script developer happy. Some are subtle, some are major. Many are simply delightful. Any combination of them will make you more productive in writing Apps Script code. Here are just some of the highlights:
Code Editor Enhancements
Context Menu Image
Command Palette Menu Image
Debugger Image
Logging Image
The best way to explore all that awaits you in the new Apps Script IDE and code editor is by simply diving in and writing some code at script.new. Then you will truly see how the new code editor helps you be more productive, enabling you to write code faster, with fewer errors and greater readability. The previous version of the Apps Script IDE and code editor served us well, but a jump in productivity and an overall better developer experience awaits you with the new Apps Script IDE.
Go write your best code starting at: script.google.com
To learn more: developers.google.com/apps-script
Posted by Google Developer Studio
Computer Science Education Week kicks off on Monday, December 7th and runs through the 13th. This annual call-to-action was started in 2009 by the Computer Science Teachers Association (CSTA) to raise awareness about the need to encourage CS education at all levels and to highlight the importance of computing across industries.
Google Developers strives to make learning Google technology accessible to all and works to empower developers to build good things together.
Whether you’re a student or a teacher, check out our collection of introductory resources and demos below. Learn how you can get started in your developer journey or empower others to get started.
Note: Some resources may require additional background knowledge.
The Google Assistant developer platform lets you create software to extend the functionality of the Google Assistant with “Actions”. Actions let users get things done through a conversational interface that can range from a simple command, like turning on the lights, to a longer conversation, such as playing a trivia game or exploring a recipe for dinner.
As a developer, you can use the platform to easily create and manage unique and effective conversational experiences for your users.
Actions auto-generated from web content.
Codelab: Build Actions for Google Assistant using Actions Builder (Level 1)
This codelab covers beginner-level concepts for developing with Google Assistant; you do not need any prior experience with the platform to complete it. You’ll learn how to build a simple Action for the Google Assistant that tells users their fortune as they begin their adventure in the mythical land of Gryffinberg. Continue on to level 2 if you’re ready!
Codelab: Build Actions for Google Assistant using Actions SDK (Level 1)
This codelab covers beginner-level concepts for developing with the Actions SDK for Google Assistant; you do not need any prior experience with the platform to complete it.
Tip: If you prefer to work with more visual tools, do the Level 1 Actions Builder codelab instead, which creates the same Action using the in-console Actions Builder. View additional codelabs here.
Android is the world's most powerful mobile platform, with more than 2.5 billion active devices.
Build your first Android app by taking the free online Android Basics in Kotlin course created by Google. No programming experience is needed. You'll learn important concepts on how to build an app as well as the fundamentals of programming in Kotlin, the recommended programming language for developers who are new to Android. Start with the first unit: Kotlin Basics for Android!
Once you’re ready to take your app development to the next level, check out Android Basics: Unit 2, where you'll build a tip calculator app and an app with a scrollable list of images. You can customize these apps or start building your own Android apps!
You can find more resources such as courses and documentation on developer.android.com. Stay up-to-date on the latest educational resources from the Android engineering team by following our YouTube channel, Twitter account, and subscribing to our newsletter.
Developer Student Clubs are university-based community groups for students interested in Google developer technologies.
DSC Solution Challenge
For two years, DSC has challenged students to solve problems in their local communities using technology. Learn the steps to get started on a real life project with tips from Arman Hezarkhani. Get inspired by the 2020 Solution Challenge winners and see what they built here.
If you’re a university student interested in joining or leading a DSC near you, click here to learn more.
Firebase is a mobile and web application development platform that helps you manage and solve key challenges across the app lifecycle, with a full suite of tools for building apps, improving app quality, and growing your business.
Codelab: Get to know Firebase for web
In this introductory codelab, you'll learn some of the basics of Firebase to create interactive web applications. Learn how to build and deploy an event RSVP and guestbook chat app using several Firebase products.
Codelab: Firebase web codelab
Following “Get to know Firebase for web”, take this next codelab to learn how to use Firebase to easily create web applications by implementing and deploying a chat client using Firebase products and services.
Get all the latest educational resources from the Firebase engineering team by following our YouTube channel, Twitter account, and visiting the website.
Do you want to learn how to build natively compiled apps for mobile, web, and desktop from a single codebase? If the answer is yes, we have some great resources for you.
This is a guide to creating your first Flutter app. If you are familiar with object-oriented code and basic programming concepts such as variables, loops, and conditionals, you can complete this tutorial. You don’t need previous experience with Dart, mobile, or web programming.
Check out this free course from Google and Udacity, which is the perfect course if you’re brand new to Flutter.
Google Cloud Platform helps you build what's next with secure infrastructure, developer tools, APIs, data analytics and machine learning.
Cloud OnBoard: Core Infrastructure
In this training, learn the fundamentals of Google Cloud and how it can boost your flexibility and efficiency. During sessions and demos, you'll see the ins and outs of some of Google Cloud's most impactful tools, discover how to maximize your VM instances, and explore the best ways to approach your container strategy.
Google Cloud Codelabs and Challenges
Complete a codelab and coding challenge on Google Cloud topics such as Google Cloud Basics, Compute, Data, Mobile, Monitoring, Machine Learning and Networking.
For in-depth Google Cloud tutorials and the latest Google Cloud news, tune into our Google Cloud Platform YouTube channel!
Google Pay lets your customers pay with the press of a button — using payment methods saved to their Google Account. Learn how to integrate the Google Pay APIs for web and Android.
Google Workspace, formerly known as G Suite, includes all of the productivity apps you know and love—Gmail, Calendar, Drive, Docs, Sheets, Slides, Meet, and many more.
The Google Workspace Developer Platform is a collection of tools and resources that let you customize, extend, and integrate with Google Workspace. Low-code tools such as Apps Script enable you to build customizations that automate routine tasks, and professional resources such as Add-ons and APIs enable software vendors to build applications that extend and integrate with Google Workspace.
Learn Apps Script fundamentals with codelabs
If you're new to Apps Script, you can learn the basics using our Fundamentals of Apps Script with Google Sheets codelab playlist.
Stay updated on the newest Google Workspace developer tools and tutorials by following us on our YouTube channel and Twitter!
Material Design is a design system, created by Google and backed by open-source code, that helps teams build high-quality digital experiences. Whether you’re building for Android, Flutter, or the web, we have guidelines, code, and resources to help you build beautiful products, faster. We’ve compiled the best beginner resources here:
If you’re interested in learning more about Material Design, subscribe to the brand new YouTube channel for updates and Q&A-format videos.
TensorFlow is an open source platform for machine learning to help you solve challenging, real-world problems with an entire ecosystem of tools, libraries and community resources.
Teachable Machine
Teachable Machine is a web tool that makes creating machine learning models fast, easy, and accessible to everyone. See how you can create a machine learning model without writing any code, save models, and use them in your own future projects. The models you make with it are real TensorFlow.js models that work anywhere JavaScript runs.
Machine Learning Foundations
Machine Learning Foundations is a free training course where you’ll learn the fundamentals of building machine-learned models using TensorFlow. Heads up! You will need to know a little bit of Python.
Subscribe to our YouTube channel and Twitter account for all the latest in machine learning.
Here are other ways our friends within the ecosystem are supporting #CSEdWeek.
Google for Education
Experiments with Google
A collection of innovative projects using Chrome, Android, AI, AR, Web, and more, along with helpful tools and resources to inspire others to create new experiments. New projects are added weekly, like this machine learning collaboration between Billie Eilish, YouTube Music and Google Creative Lab.
Posted by Murat Yener, Developer Advocate
Today marks the release of the first Canary version of Android Studio Arctic Fox (2020.3.1), together with Android Gradle plugin (AGP) version 7.0.0-alpha01. With this release we are adjusting the version numbering for our Gradle plugin and decoupling it from the Android Studio versioning scheme. In this blog post we'll explain the reasons for the change, as well as give a preview of some important changes we're making to our new, incubating Android Gradle plugin APIs and DSL.
With AGP 7.0.0 we are adopting the principles of semantic versioning. What this means is that only major version changes will break API compatibility. We intend to release one major version each year, right after Gradle introduces its own yearly major release.
Moreover, in the case of a breaking change, we will ensure that the removed API is marked with @Deprecated about a year in advance and that its replacement is available at the same time. This will give developers roughly a year to migrate and test their plugins with the new API before the old API is removed.
Alignment with Gradle's version is also why we're skipping versions 5 and 6, and moving directly to AGP 7.0.0. This alignment indicates that AGP 7.x is meant to work with Gradle 7.x APIs. While it may also run on Gradle 8.x, this is not guaranteed and will depend on whether 8.x removes APIs that AGP relies on.
With this change, the AGP version number will be decoupled from the Android Studio version number. However, we will keep releasing Android Studio and the Android Gradle plugin together for the foreseeable future.
Compatibility between Android Studio and Android Gradle plugin remains unchanged. As a general rule, projects that use stable versions of AGP can be opened with newer versions of Android Studio.
You can still use Java programming language version 8 with AGP 7.0.0-alpha01 but we are changing the minimum required Java programming language version to Java 11, starting with AGP 7.0.0-alpha02. We are announcing this early in the Canary schedule and many months ahead of the stable release to allow developers time to get ready.
This release of AGP also introduces some API changes. As a reminder, a number of APIs that were introduced in AGP 4.1 were marked as incubating and were subject to change. In fact, in AGP 4.2 some of these APIs have changed. The APIs that are currently incubating do not follow the deprecation cycle that we explain above.
Here is a summary of some important API changes.
The APIs affected include `onVariants`, `onProperties`, `onVariantProperties`, `beforeVariants`, `androidComponents`, `VariantSelector`, `withBuildType`, `withName`, `withFlavor`, `afterEvaluate`, `beforeUnitTest`, `unitTest`, `beforeAndroidTest`, `androidTest`, `Variant`, `VariantBuilder`, and `VariantProperties`.
Let’s take a look at some of these changes. Here is a sample onVariants block which targets the release build; in the following example, the onVariants block is changed to beforeVariants and uses a variant selector.
```
android {
    // ...
    // onVariants.withName("release") {
    //     ...
    // }
    // ...
}

androidComponents {
    val release = selector().withBuildType("release")
    beforeVariants(release) { variant ->
        // ...
    }
}
```
Similarly, the onVariantProperties block is changed to onVariants.
```
android {
    // ...
    // onVariantProperties {
    //     ...
    // }
    // ...
}

androidComponents.onVariants { variant ->
    // ...
}
```
Note, this customization is typically done in a plugin and should not be located in build.gradle. We are moving away from using functions with receivers which suited the DSL syntax but are not necessary in the plugin code.
We are planning to make these APIs stable with AGP 7.0.0, and all plugin authors will need to migrate to the new androidComponents APIs. If you want to avoid dealing with such changes, make sure your plugins only use stable APIs and do not depend on APIs marked as incubating.
If you want to learn more about other changes coming with this release, make sure to take a look at the release notes.
Java is a registered trademark of Oracle and/or its affiliates.
Posted by Jamal Eason, Product Manager
Today marks the release of the first version of Android Studio Arctic Fox (2020.3.1) on the canary channel, together with Android Gradle plugin (AGP) version 7.0.0-alpha01. With this release, we are adjusting the version numbering of Android Studio and our Gradle plugin. This change decouples the Gradle plugin from the Android Studio versioning scheme and brings more clarity to which year and IntelliJ version Android Studio aligns with for each release.
With Android Studio Arctic Fox (2020.3.1) we are moving to a year-based system that is more closely aligned with IntelliJ IDEA, the IDE upon which Android Studio is built. We are changing the version numbering scheme to encode a number of important attributes: the year, the version of IntelliJ it is based on, plus feature and patch level. With this name change you can quickly figure out which version of the IntelliJ platform you are using in Android Studio. In addition, each major version will have a canonical codename, starting with Arctic Fox and then proceeding alphabetically, to help make it easy to see which version is newer.
We recommend that you use the latest version of Android Studio so that you have access to the latest features and quality improvements. To make it easier to stay up to date, we made the version change to clearly decouple Android Studio from your Android Gradle plugin version. An important detail to keep in mind is that updating the IDE has no impact on the way the build system compiles and packages your app. In contrast, changes to the app build process and the resulting APKs/Bundles are dictated by your project's AGP version. Therefore, it is safe to update your Android Studio version even late in your development cycle, because your project's AGP version can be updated on a different cadence. Lastly, with the new version system it is even easier than before for you or your team to run the stable and preview versions of Android Studio side by side on your app project, as long as you keep the AGP version on a stable release.
In the previous numbering system, this release would have been Android Studio 4.3. With the new numbering system, it is now Android Studio Arctic Fox (2020.3.1) Canary 1 or just, Arctic Fox.
Going forward, here is how the Android Studio version number scheme will work:
<Year of IntelliJ Version>.<IntelliJ major version>.<Studio major version>
With AGP 7.0.0 we are adopting the principles of semantic versioning, and aligning with the Gradle version that AGP requires. Compatibility between Android Studio and Android Gradle plugin remains unchanged. Projects that use stable versions of AGP can be opened with newer versions of Android Studio.
We will publish another post soon with more details about our AGP versioning philosophy and what is new in AGP 7.0.
We are in the early days of the feature development phase for Arctic Fox, but we have invested much of our time in addressing over 200 quality improvements and bugs across a wide range of areas in the IDE, from the code editor, app inspection tools, and layout editor to the embedded emulator. Check out the release notes for the specific bug fixes.
For those trying out Jetpack Compose, we have a host of new updates, like deploy @Preview composables to device/emulator:
Deploy preview composable
Also try out the new Layout Validation Tool in Arctic Fox to see how your layout responds to various screen sizes, font sizes, and Android Color Correction/Color Blind Modes. You can access this via the Layout Validation tool window when you are using the Layout Editor.
Layout Validation
Lastly, for those running macOS (other platforms are coming soon) with the latest Android platform tools and an Android 11 device, you can try out the IDE integration for the Wireless ADB feature by going to the Run device selection dialog → Pair Devices Using Wi-Fi.
Menu to access Wireless ADB feature
Wireless ADB Setup Window
If you want to learn more about other detailed changes coming with this release for both Android Studio and the Android Gradle plugin, make sure to take a look at the release notes.
Posted by Google Creative Lab
“Bad Guy” by Billie Eilish is one of the most-covered songs on YouTube, inspiring thousands of fans to upload their own versions. To celebrate all these covers, YouTube and Google Creative Lab built an AI experiment to combine all of them seamlessly in the world’s first infinite music video: Infinite Bad Guy. The experience aligns every cover to the same beat, no matter its genre, language, or instrumentation.
How do you find “Bad Guy” covers amidst the billions of videos on YouTube? Just searching for “Bad Guy” would return false positives, like videos of Billie being interviewed about the song, and miss covers that didn’t use the song name in their titles. YouTube’s Content ID system allows us to find videos that match the musical composition “Bad Guy” and also allows us to narrow our search to videos that appear to be performances or creative interpretations of the song. That way, we can also avoid videos where “Bad Guy” was just background music. We continue to run this search daily, collecting an ever-expanding list of potential covers to use in the experience.
A key part of the experience is being able to jump from cover to cover seamlessly. But fan covers of “Bad Guy” vary widely. Some might be similar to the original, like a dance video set to Billie’s track. Some might vary more in tempo and instrumentation, like a heavy metal cover. And others might diverge greatly from the original, like a clarinet version with no lyrics. How can you get all these covers on the same beat? After trying several approaches like dynamic time warping and chord recognition, we’ve found the most success with a recurrent neural network trained to recognize sections and beats of “Bad Guy.” We collaborated with our friends at IYOYO on cover alignment and they have a great writeup about the process.
Finding and aligning the covers is a fascinating research problem, but the crucial final step is making them explorable to everyone. We’ve tried to make it intuitive and fun to navigate all the infinite combinations, while keeping latency low so the song never drops a beat.
The experience centers around three YouTube players, a number we settled on after a lot of experimentation. Initially we thought more players would be more interesting, but the experience got chaotic and slow. Around the players we’ve added discoverable features like the hashtag drawer and stats page. Video game interfaces have been a big inspiration for us, as they combine multiple interactions in a single dashboard. We’ve also added an autoplay mode for users who want to just sit back and be taken through an ever-changing mix of covers.
We’re excited about how Infinite Bad Guy showcases the incredibly diverse talent of YouTube and the potential machine learning can have for music and creativity. Give it a try and see what beautiful, strange, and brilliant covers you can find.
Posted by Jennifer Kohl, Global Program Manager, Google Developer Groups
Irem presenting at a Google Developer Group event
We recently caught up with Irem Komurcu, a TensorFlow developer and researcher at Istanbul Technical University in Turkey. Irem has been a long-serving member of Google Developer Groups (GDG) Düzce and also serves as a Women Techmakers (WTM) ambassador. Her work with TensorFlow has received several accolades, including being named a Hamdi Ulukaya Girişimi fellow. As one of twenty-four young entrepreneurs selected, she was flown to New York City last year to learn more about business and receive professional development.
With all this experience to share, we wanted you to hear how she approaches pursuing a career in tech, hones her TensorFlow skills with the GDG community, and thinks about how upcoming programmers can best position themselves for success. Check out the full interview below for more.
I first became interested in tech when I was in high school and went on to study computer engineering. At university, I had an eye-opening experience when I traveled from Turkey to the Google Developer Day event in India. It was there that I observed various code languages, products, and projects that were new to me.
In particular, I saw TensorFlow in action for the first time. Watching the powerful machine learning tool truly sparked my interest in deep learning and project development.
I have studied many different aspects of TensorFlow and ML. My first work was on voice recognition and deep learning. However, I am now working as a computer vision researcher conducting various segmentation, object detection, and classification processes with TensorFlow. In my free time, I write various articles about best practices and strategies to leverage TensorFlow in ML.
I kicked off my studies on deep learning on tensorflow.org. It’s a basic first step, but a powerful one. There were so many blogs, code samples, examples, and tutorials for me to dive into. Both the Google Developer Group and TensorFlow communities also offered chances to bounce questions and ideas off other developers as I learned.
Between these technical resources and the person-to-person support, I was lucky to start working with the GDG community while also taking the first steps of my career. There were so many opportunities to meet people and grow all around.
I love being in a large community with technology-oriented people. GDG is a network of professionals who support each other, and that enables people to develop. I am continuously sharing my knowledge with other programmers as they simultaneously mentor me. The chance for us to collaborate together is truly fulfilling.
The number of women supported in science, technology, engineering, and mathematics (STEM) is low in Turkey. To address this, I partner with Women Techmakers (WTM) to give educational talks on TensorFlow and machine learning to women who want to learn how to code in my country. So many women are interested in ML, but just need a friendly, familiar face to help them get started. With WTM, I’ve already given over 30 talks to women in STEM.
Keep researching new things. Read everything you can get your eyes on. Technology has been developing rapidly, and it is necessary to make sure your mind can keep up with the pace. That’s why I recommend communities like GDG that help make sure you’re up to date on the newest trends and learnings.
Want to work with other developers like Irem? Then find the right Google Developer Group for you, here.
Posted by Erica Hanson, Global Program Manager, Google Developer Student Clubs
(Irene (left) and her DSC team from the Polytechnic University of Cartagena; photo taken prior to COVID-19)
Irene Ruiz Pozo is a former Google Developer Student Club (DSC) Lead at the Polytechnic University of Cartagena in Murcia, Spain. As one of the founding members, Irene has seen the club grow from just a few student developers at her university to hosting multiple learning events across Spain. Recently, we spoke with Irene to understand more about the unique ways in which her team helped local university students learn more about Google technologies.
Irene mentioned two fascinating projects that she had the chance to work on through her DSC at the Polytechnic University of Cartagena. The first was a learning lab that helped students understand how to use 360º cameras and 3D scanners for machine learning.
(A DSC member giving a demo of a 360º camera to students at the National Museum of Underwater Archeology in Cartagena)
The second was a partnership with the National Museum of Underwater Archeology, where Irene and her team created an augmented reality game that let students explore a digital rendition of the museum’s exhibitions.
(An image from the augmented reality game created for the National Museum of Underwater Archeology)
In the above AR experience created by Irene’s team, users can create their own character and move throughout the museum and explore different virtual renditions of exhibits in a video game-like setting.
One particularly memorable experience for Irene and her DSC was participating in Google’s annual programming competition, Hash Code. As Irene explained, the event allowed developers to share their skills and connect in small teams of two to four programmers. They would then come together to tackle engineering problems like how to best design the layout of a Google data center, create the perfect video streaming experience on YouTube, or establish the best practices for compiling code at Google scale.
(Students working on the Hash Code competition; photo taken prior to COVID-19)
To Irene, the experience felt like a live look at being a software engineer at Google. The event taught her and her DSC team that while programming skills are important, communication and collaboration skills are what really help solve problems. For Irene, the experience truly bridged the gap between theory and practice.
(Irene’s team working with other student developers; photo taken before COVID-19)
After the event, Irene felt that if a true mentorship network was established among other DSCs in Europe, students would feel more comfortable partnering with one another to talk about common problems they faced. Inspired, she began to build out her mentorship program which included a podcast where student developers could collaborate on projects together.
The podcast, which just released its second episode, also highlights upcoming opportunities for students. In the most recent episode, Irene and friends dive into how to apply for Google Summer of Code Scholarships and talk about other upcoming open source project opportunities. Organizing these types of learning experiences for the community was one of the most fulfilling parts of working as a DSC Lead, according to Irene. She explained that the podcast has been an exciting space that allows her and other students to get more experience presenting ideas to an audience. Through this podcast, Irene has already seen many new DSC members eager to join the conversation and collaborate on new ideas.
As Irene now looks out on her future, she is excited for all the learning and career development that awaits her from the entire Google Developer community. Having graduated from university, Irene is now a Google Developer Groups (GDG) Lead - a program similar to DSC, but created for the professional developer community. In this role, she is excited to learn new skills and make professional connections that will help her start her career.
Are you also a student with a passion for code? Then join a local Google Developer Student Club near you, here.