Posted by Kübra Zengin, GDG North America Regional Lead
Image of participants in a recent Elevate workshop.
The North America Developer Ecosystem team recently hosted Elevate for Google Developer Groups organizers and Women Techmakers Ambassadors in the US and Canada. The three-month professional development program met every Wednesday via Google Meet to help tech professionals upskill through workshops on leadership, communication, thinking, and teamwork.
The first cohort of the seminar-style program recently came to a close, with more than 40 Google Developer Groups organizers and Women Techmakers Ambassadors participating. Additionally, 18 guest speakers - 89% of whom were of underrepresented genders - hosted specialized learning sessions over the program's three months of events.
Elevate is just one example of the specialized applied skills training available to the Google Developer Groups community. As we look ahead to offering Elevate again in 2021, we wanted to share with you some of the key takeaways from the first installment of the program.
What the graduates had to say
From landing new roles at companies like Twitter and Accenture, to negotiating salary raises, the 40 graduates of Elevate have seen many successes. Here’s what a few of them had to say:
Whether it’s finding new jobs or moving to new countries, Elevate’s graduates have used their new skills to guide their careers towards their passions. Check out a few of the program’s key lessons below:
Bringing your best self to the table
One major focus of the program was to help community leaders develop their own professional identity and confidence by learning communication techniques that would help them stand out and define themselves in the workplace.
Entire learning sessions were dedicated to specific value-adding topics, including:
Along with other sessions on growth mindsets, problem solving, and more, attendees gained a deeper understanding of the best ways to present themselves, their ideas, and their worth in a professional setting - an essential ability that many feel has already helped them navigate job markets with more precision.
A team that feels valued brings value
The advice above, offered by a guest speaker during a teambuilding session, was one of the quotes that resonated with participants the most during the program. The emphasis on how coworkers think of each other and the best ways to build a culture of ownership over a team’s wins and losses embodies the key learnings central to Elevate’s mission.
The program further emphasized this message with learning sessions on:
With these trainings, paired with others on growth mindsets and decision making, Elevate’s participants were able to start analyzing the effectiveness of different work environments on productivity. Through breakout sessions, they quickly realized that the more secure and supported an employee feels, the more willing they are to go the extra mile for their team. Equipped with this new knowledge base, many participants have already started bringing these key takeaways to their own workplaces in an effort to build more inclusive and productive cultures.
Whether it’s finding a new role or improving your applied skills, we can’t wait to see how Google Developer programs can help members achieve their professional goals.
For similar opportunities, find out how to join a Google Developer Group near you, here. And register for upcoming applied skills trainings on the Elevate website, here.
Posted by Erica Hanson, Global Program Manager, Google Developer Student Clubs
Created by the United Nations in 2015 to be achieved by 2030, the 17 Sustainable Development Goals (SDGs) agreed upon by all 193 United Nations Member States aim to end poverty, ensure prosperity, and protect the planet.
Last year brought many challenges, but it also brought a greater spirit around helping each other and giving back to our communities. With that in mind, we invite students around the world to join the Google Developer Student Clubs 2021 Solution Challenge!
If you’re new to the Solution Challenge, it is an annual competition that invites university students to develop solutions for real-world problems using one or more Google products or platforms.
This year, see how you can use Android, TensorFlow, Google Cloud, Flutter, or any of your favorite Google technologies to promote employment for all, economic growth, and climate action by building a solution for one or more of the UN Sustainable Development Goals.
Participants will receive specialized prizes at different stages:
There are four main steps to joining the Solution Challenge and getting started on your project:
Google will provide Solution Challenge participants with various resources to help students build strong projects for their contest submission.
Once all the projects are submitted after the March 31st deadline, judges will evaluate and score each submission from around the world using the criteria listed on the website. From there, winning solutions will be announced in three rounds.
Round 1 (May): The Top 50 teams will be announced.
Round 2 (July): After the top 50 teams submit their new and improved solutions, 10 finalists will be announced.
Round 3 (August): In the finale, the top 3 grand prize winners will be announced live on YouTube during the 2021 Solution Challenge Demo Day.
With a passion for building a better world, savvy coding skills, and a little help from Google technology, we can’t wait to see the solutions students create.
Learn more and sign up for the 2021 Solution Challenge, here.
Posted by Toni Klopfenstein, Developer Advocate
When a user connects a smart device to the Google Assistant via the Home app, the user must select the appropriate Action from the list of all available Actions, then click through multiple screens to complete device setup. Today, we're releasing two new features to improve this device discovery process and drive customer adoption of your Smart Home Action through the Google Home app. App Discovery and Deep Linking are two convenience features that help users find your Google Assistant-compatible smart devices quickly and onboard faster.
App Discovery enables users to quickly find your smart home Action thanks to suggestion chips within the Google Home app. You can implement this new feature through the Actions Console by creating a verified brand link between your Action, your website, and your mobile app. App Discovery doesn't require any coding to implement, making it a development-light feature that greatly improves the user experience of device linking.
In addition to helping users discover your Action directly through suggestion chips, Deep Linking enables you to guide users to your account linking flow within the Google Home app in one step. These deep links are easily added to your mobile app or web content, guiding users to your smart home integration with a single tap.
Deep Linking and App Discovery can help you create a more streamlined onboarding experience for your users, driving increased engagement and user satisfaction, and can be implemented with minimal engineering work.
To implement App Discovery and Deep Linking for your Smart Home Action, check out the developer documents, or watch the video covering these new features.
You can also check out the smart home codelabs if you are just starting to build out your Action.
We want to hear from you, so continue sharing your feedback with us through the issue tracker, and engage with other smart home developers in the /r/GoogleAssistantDev community. Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. We can’t wait to see what you build!
Posted by Jason Scott, Head of Startup Developer Ecosystem, U.S., Google
At Google, we have long understood that voice user interfaces can help millions of people accomplish their goals more effectively. Our journey in voice began in 2008 with Voice Search, with notable milestones since, such as building our first deep neural network in 2012 and our first sequence-to-sequence network in 2015, launching Google Assistant in 2016, and processing speech fully on device in 2019. These building blocks have enabled the unique voice experiences across Google products that our users rely on every day.
Voice AI startups play a key role in helping build and deliver innovative voice-enabled experiences to users. And, Google is committed to helping tech startups deliver high impact solutions in the voice space. This month, we are excited to announce the Google for Startups Accelerator: Voice AI program, which will bring together the best of Google’s programs, products, people and technology with a joint mission to advance and support the most promising voice-enabled AI startups across North America.
As part of this Google for Startups Accelerator, selected startups will be paired with experts to help tackle the top technical challenges facing their startup. With an emphasis on product development and machine learning, founders will connect with voice technology and AI/ML experts from across Google to take their innovative solutions to the next level.
We are proud to launch our first-ever Google for Startups Accelerator: Voice AI, building upon Google’s longstanding efforts to advance the future of voice-based computing. The accelerator will kick off in March 2021, bringing together a cohort of 10 to 12 innovative voice technology startups. If this sounds like your startup, we'd love to hear from you. Applications are open until January 28, 2021.
Posted by Louis Wasserman, Software Engineer and James Ward, Developer Advocate
Kotlin is now the fourth "most loved" programming language with millions of developers using it for Android, server-side / cloud backends, and various other target runtimes. At Google, we've been building more of our apps and backends with Kotlin to take advantage of its expressiveness, safety, and excellent support for writing asynchronous code with coroutines.
Since everything in Google runs on top of gRPC, we needed an idiomatic way to do gRPC with Kotlin. Back in April 2020 we announced the open sourcing of gRPC Kotlin, something we'd originally built for ourselves. Since then we've seen over 30,000 downloads and usage in Android and Cloud. The community and our engineers have been working hard polishing docs, squashing bugs, and making improvements to the project; culminating in the shiny new 1.0 release! Dive right in with the gRPC Kotlin Quickstart!
For those new to gRPC and Kotlin, let's do a quick run-through of some of the awesomeness. gRPC builds on Protocol Buffers, aka "protos" (a language-agnostic, high-performance data interchange format), and adds the network protocol for efficiently communicating with protos. From a proto definition, the servers, clients, and data transfer objects can all be generated. Here is a simple gRPC proto:
message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}
In a Kotlin project you can then define the implementation of the Greeter's SayHello service with something like:
object : GreeterGrpcKt.GreeterCoroutineImplBase() {
  override suspend fun sayHello(request: HelloRequest) =
    HelloReply
      .newBuilder()
      .setMessage("hello, ${request.name}")
      .build()
}
You'll notice that the function has `suspend` on it because it uses Kotlin's coroutines, a built-in way to handle async / reactive IO. Check out the server example project.
With gRPC, the client "stubs" are generated, making it easy to connect to gRPC services. For the proto above, the client stub can be used in Kotlin with:
val stub = GreeterCoroutineStub(channel)
val request = HelloRequest.newBuilder().setName("world").build()
val response = stub.sayHello(request)
println("Received: ${response.message}")
In this example the `sayHello` method is also a `suspend` function utilizing Kotlin coroutines to make the reactive IO easier. Check out the client example project.
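One detail the snippets above assume is the `channel`. As a minimal sketch, assuming a Greeter server listening locally on port 50051 (the address is just an example), it could be created with gRPC Java's ManagedChannelBuilder:

import io.grpc.ManagedChannelBuilder

// Plaintext channel to a locally running server; production code would configure TLS.
val channel = ManagedChannelBuilder
    .forAddress("localhost", 50051)
    .usePlaintext()
    .build()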
Kotlin also has an API for doing reactive IO on streams (as opposed to single requests), called Flow. gRPC Kotlin generates client and server stubs using the Flow API for stream inputs and outputs. The proto can define a service with client, server, or bidirectional streaming, like:
service Greeter {
  rpc SayHello (stream HelloRequest) returns (stream HelloReply) {}
}
In this example, the server's `sayHello` can be implemented with Flows:
object : GreeterGrpcKt.GreeterCoroutineImplBase() {
  override fun sayHello(requests: Flow<HelloRequest>): Flow<HelloReply> {
    return requests.map { request ->
      println(request)
      HelloReply.newBuilder().setMessage("hello, ${request.name}").build()
    }
  }
}
This example just transforms each `HelloRequest` item on the flow to an item in the output / `HelloReply` Flow.
The bidirectional streaming client is similar to the unary one, but instead it passes a Flow to the `sayHello` stub method and then operates on the returned Flow:
val stub = GreeterCoroutineStub(channel)
val helloFlow = flow {
  while (true) {
    delay(1000)
    emit(HelloRequest.newBuilder().setName("world").build())
  }
}
stub.sayHello(helloFlow).collect { helloResponse ->
  println(helloResponse.message)
}
In this example the client sends a `HelloRequest` to the server via Flow, once per second. When the client gets items on the output Flow, it just prints them. Check out the bidi-streaming example project.
As you've seen, creating data transfer objects and services around them is made elegant and easy with gRPC Kotlin. But there are a few other exciting things we can do with this...
Android Clients
Protobuf compilers can have a "lite" mode which generates smaller, higher-performance classes that are more suitable for Android. Since gRPC Kotlin uses gRPC Java, it inherits the benefits of gRPC Java's lite mode. The generated code works great on Android, and there is a `grpc-kotlin-stub-lite` artifact which depends on the associated `grpc-protobuf-lite`. Using the generated Kotlin stub client is just like on the JVM. Check out the stub-android example and android example.
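For example, a Gradle Kotlin DSL module might declare the lite dependencies like this (a minimal sketch; the version numbers are assumptions pinned to the 1.0-era releases):

dependencies {
    // Kotlin stubs generated for lite mode, plus the matching protobuf-lite runtime.
    implementation("io.grpc:grpc-kotlin-stub-lite:1.0.0")
    implementation("io.grpc:grpc-protobuf-lite:1.33.1")
}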
GraalVM Native Image Clients
The gRPC lite mode is also a great fit for GraalVM Native Image, which turns JVM-based applications into ahead-of-time compiled native images, i.e. they run without a JVM. These applications can be smaller, use less memory, and start much faster, making them a good fit for auto-scaling and command line interface environments. Check out the native-client example project, which produces a nice and small 14MB executable client app (no JVM needed) that starts, connects to the server, makes a request, handles the response, and exits in under 1/100th of a second, using only 18MB of memory.
Google Cloud Ready
Backend services created with gRPC Kotlin can easily be packaged for deployment in Kubernetes, Cloud Run, or really anywhere you can run Docker containers or JVM apps. Cloud Run is a cloud service that runs Docker containers and scales automatically based on demand, so you only pay while your service is handling requests. If you'd like to give a gRPC Kotlin service a try on Cloud Run:
export PROJECT_ID=PUT_YOUR_PROJECT_ID_HERE

docker run -it gcr.io/$PROJECT_ID/grpc-hello-world-mvn \
  "java -cp target/classes:target/dependency/* io.grpc.examples.helloworld.HelloWorldClientKt YOUR_CLOUD_RUN_DOMAIN_NAME"
Here is a video of what that looks like:
Check out more Cloud Run gRPC Kotlin examples
Thank You!
We are super excited to have reached 1.0 for gRPC Kotlin and are incredibly grateful to everyone who filed bugs, sent pull requests, and gave the pre-releases a try! There is still more to do, so if you want to help or follow along, check out the project on GitHub.
Also huge shoutouts to Brent Shaffer, Patrice Chalin, David Winer, Ray Tsang, Tyson Henning, and Kevin Bierhoff for all their contributions to this release!
Posted by Payam Shodjai, Director, Product Management Google Assistant
With 2020 coming to a close, we wanted to reflect on everything we have launched this year to help you, our developers and partners, create powerful voice experiences with Google Assistant.
Today, many top brands and developers turn to Google Assistant to help users get things done on their phones and on Smart Displays. Over the last year, the number of Actions built by third-party developers has more than doubled. Below is a snapshot of some of our partners who’ve integrated with Google Assistant:
Below are a few highlights of what we have launched in 2020:
1. Integrate your Android mobile Apps with Google Assistant
App Actions allow your users to jump right into existing functionality in your Android app with the help of Google Assistant. They make it easier for users to find what they're looking for in your app in a natural way by using their voice. We take care of all the Natural Language Understanding (NLU) processing, making it easy to develop an integration in only a few days. In 2020, we announced that App Actions are now available for all Android developers to voicify their apps and integrate with Google Assistant.
For common tasks such as opening your app, opening specific pages in your app, or searching within it, we introduced Common Intents. For deeper integrations, we’ve expanded our vertical-specific built-in intents (BIIs) to cover more than 60 intents across 10 verticals, adding new categories like Social, Games, Travel & Local, Productivity, Shopping, and Communications.
For cases where there isn't a built-in intent for your app functionality, you can instead create custom intents that are unique to your Android app. Like BIIs, custom intents follow the actions.xml schema and act as connection points between Assistant and your defined fulfillments.
Learn more about how to integrate your app with Google Assistant here.
2. Create new experiences for Smart Displays
We also announced new developer tools to help you build high quality, engaging experiences to reach users at home by building for Smart Displays.
Actions Builder is a new web-based IDE that provides a graphical interface to show the entire conversation flow. It allows you to manage Natural Language Understanding (NLU) training data and provides advanced debugging tools. And, it is fully integrated into the Actions Console so you can now build, debug, test, release, and analyze your Actions - all in one place.
Actions SDK provides a file-based representation of your Action and the ability to use a local IDE. The SDK not only enables local authoring of NLU and conversation schemas, but also allows bulk import and export of training data to improve conversation quality. The Actions SDK is accompanied by a command line interface, so you can build and manage an Action fully in code using your favorite source control and continuous integration tools.
Interactive Canvas allows you to add visual, immersive experiences to Conversational Actions. We announced the expansion of Interactive Canvas to support Storytelling and Education verticals earlier this year.
Continuous Match Mode allows the Assistant to respond immediately to a user’s speech for more fluid experiences by recognizing defined words and phrases set by you.
We also created a central hub for you to find resources to build games on Smart Displays. This site is filled with a game design playbook, interviews with game creators, code samples, tools access, and everything you need to create awesome games for smart displays.
Actions API provides a new programmatic way to test your critical user journeys more thoroughly and effectively, to help you ensure your Action's conversations run smoothly.
The Dialogflow migration tool inside the Actions Console automates much of the work to move projects to the new and improved Actions Builder tool.
We also worked with partners such as Voiceflow and Jovo, to launch integrations to support voice application development on the Assistant. This effort is part of our commitment to enable you to leverage your favorite development tools, while building for Google Assistant.
We launched several other new features that help you build high quality experiences for the home, such as the Media APIs, new and improved voices (available in the Actions Console), and the home storage API.
Get started building for Smart Displays here.
3. Discovery features
Once you build high quality Actions, you are ready for your users to discover them. We have designed new touch points to help your users easily learn about your Actions.
For example, on Android mobile, we’ll recommend relevant App Actions by showing suggestions even when the user doesn't mention the app’s name explicitly. Google Assistant will also suggest apps proactively, based on individual app usage patterns. Android mobile users will be able to customize their experience as well, automating their most common tasks with app shortcuts and setting up quick phrases for the app functions they use frequently. By simply saying "Hey Google, shortcuts", they can set up and explore suggested shortcuts in the settings screen. We’ll also make proactive suggestions for shortcuts throughout Google Assistant’s mobile experience, tailored to how you use your phone.
Assistant Links deep link to your conversational Action to deliver rich Google Assistant experiences to your websites, so you can send your users directly to your conversational Actions from anywhere on the web.
We also recently opened two new built-in intents (BIIs) for public registration: Education and Storytelling. Registering your Actions for these intents allows your users to discover them in a simple, natural way through general requests to Google Assistant on Smart Displays. People will then be able to say "Hey Google, teach me something new" and they will be presented with a browsable selection of different education experiences. For stories, users can simply say "Hey Google, tell me a story".
We know you build personalized and premium experiences for your users, and you need to make it easy for them to connect their accounts to your Actions. To help streamline this process, we opened two betas for improved account linking flows that allow simple, streamlined authentication via apps.
Looking ahead, we will double down on enabling you, our developers and partners, to build great experiences for Google Assistant and to help you reach your users on the go and at home. You can expect to hear more from us on how we are improving the Google Assistant experience, making it easy for Android developers to integrate their apps with Google Assistant and helping developers achieve success through discovery and monetization.
We are excited to see what you will build with these new features and tools. Thank you for being a part of the Google Assistant ecosystem. We can’t wait to launch even more features and tools for Android developers and Smart Display experiences in 2021.
Want to stay in the know with announcements from the Google Assistant team? Sign up for our monthly developer newsletter here.
Superheroes are well known for wearing capes, fighting villains, and looking to save the world from evil. There are also superheroes that quietly choose to use their superpowers to explain technology to new users, maintain community forums, write blog posts, speak at events, host video series, create demos, share sample code, and more - all in the name of helping other developers become more successful by learning new skills, delivering better apps, and ultimately enhancing their careers. At Google, we refer to this latter category of superheroes as Google Developer Experts, or “GDEs” for short.
The Google Developer Experts program is a global network of deeply experienced technology experts, thought leaders, and influencers who actively support developer communities around the world, sharing their knowledge and enthusiasm for a wide range of topic areas, from Android to Angular to Google Assistant to Google Cloud - and of course, Google Workspace. All GDEs are volunteers who not only give their time freely to support others but also help improve our products, offering insightful feedback and heavily testing new features, often before they are released, while helping expand both use cases and audiences along the way.
With the Google Workspace GDE community including members from more than a dozen countries around the world, we asked these Google Workspace Developer Experts what excites them about building on Google Workspace, and why they do what they do as our superheroes helping others become better Google Workspace developers. Here’s what a few of these experts had to say:
Six months ago, we introduced the standalone version of the ML Kit SDK, making it even easier to integrate on-device machine learning into mobile apps. Since then, we’ve launched the Digital Ink Recognition and Pose Detection APIs, and also introduced the ML Kit early access program. Today we are excited to add Entity Extraction to the official ML Kit lineup and also debut a new API for our early access program, Selfie Segmentation!
With ML Kit’s Entity Extraction API, you can now improve the user experience inside your app by understanding text and performing specific actions on it.
The Entity Extraction API allows you to detect and locate entities from raw text, and take action based on those entities. The API works on static text and also in real-time while a user is typing. It supports 11 different entities and 15 different languages (with more coming in the future) to allow developers to make any text interaction a richer experience for the user.
Supported Entities
(Images courtesy of TamTam)
Our early access partner, TamTam, has been using the Entity Extraction API to provide helpful suggestions to their users during their chat conversations. This feature allows users to quickly perform actions based on the context of their conversations.
While integrating this API, Iurii Dorofeev, Head of TamTam Android Development, mentioned, “We appreciated the ease of integration of the ML Kit ... and it works offline. Clustering the content of messages right on the device allowed us to save resources. ML Kit capabilities will help us develop other features for TamTam messenger in the future.”
Check out their messaging app on Google Play and the App Store today.
(Diagram of underlying Text Classifier API)
ML Kit’s Entity Extraction API builds upon the technology powering the Smart Linkify feature in Android 10+ to deliver an easy-to-use and streamlined experience for developers. For an in-depth review of the Text Classifier API, please see our blog post here.
The neural network annotators/models in the Entity Extraction API work as follows: A given input text is first split into words (based on space separation), then all possible word subsequences of certain maximum length (15 words in the example above) are generated, and for each candidate the scoring neural net assigns a value (between 0 and 1) based on whether it represents a valid entity.
Next, the generated entities that overlap are removed, favoring the ones with a higher score over the conflicting ones with a lower score. Then a second neural network is used to classify the type of the entity as a phone number, an address, or in some cases, a non-entity.
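To make that pipeline concrete, here is an illustrative Kotlin sketch of the candidate-generation and overlap-suppression steps (this is not ML Kit's actual implementation; the `score` function stands in for the scoring neural net):

// Illustrative only: enumerate word subsequences up to maxLen words, score
// each candidate, then greedily keep non-overlapping spans by descending score.
data class Span(val start: Int, val end: Int, val text: String, val score: Double)

fun extractSpans(text: String, score: (String) -> Double, maxLen: Int = 15): List<Span> {
    val words = text.split(" ")
    val candidates = mutableListOf<Span>()
    for (i in words.indices) {
        for (j in i + 1..minOf(i + maxLen, words.size)) {
            val candidate = words.subList(i, j).joinToString(" ")
            candidates += Span(i, j, candidate, score(candidate))
        }
    }
    // Suppress overlaps, favoring higher-scoring spans.
    val kept = mutableListOf<Span>()
    for (c in candidates.sortedByDescending { it.score }) {
        if (kept.none { it.start < c.end && c.start < it.end }) kept += c
    }
    return kept
}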
The neural network models in the Entity Extraction API are used in combination with other types of models (e.g. rule-based) to identify additional entities in text, such as: flight numbers, currencies and other examples listed above. Therefore, if multiple entities are detected for one text input, the Entity Extraction API can return several overlapping results.
Lastly, ML Kit will automatically download the required language-specific models to the device dynamically. You can also explicitly manage models you want available on the device by using ML Kit’s model management API. This can be useful if you want to download models ahead of time for your users. The API also allows you to delete models that are no longer required.
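Putting the pieces together, a minimal Kotlin sketch of the on-device flow, downloading the English model if needed and then annotating a string, could look like the following (the input text is made up; consult the ML Kit documentation for the exact API surface):

import com.google.mlkit.nl.entityextraction.EntityExtraction
import com.google.mlkit.nl.entityextraction.EntityExtractorOptions

// Sketch: get an extractor for English, ensure its model is on device,
// then annotate some text and print each detected entity's type.
val extractor = EntityExtraction.getClient(
    EntityExtractorOptions.Builder(EntityExtractorOptions.ENGLISH).build()
)
extractor.downloadModelIfNeeded()
    .onSuccessTask { extractor.annotate("Meet me at 1600 Amphitheatre Pkwy tomorrow at 6pm") }
    .addOnSuccessListener { annotations ->
        for (annotation in annotations) {
            for (entity in annotation.entities) {
                println("${annotation.annotatedText} -> ${entity.type}")
            }
        }
    }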
Selfie Segmentation
With the increased usage of selfie cameras and webcams in today's world, being able to quickly and easily add effects to camera experiences has become a necessity for many app developers.
ML Kit's Selfie Segmentation API allows developers to easily separate the background from a scene and focus on what matters. Adding cool effects to selfies or inserting your users into interesting background environments has never been easier. This API produces great results with low latency on both Android and iOS devices.
(Example of ML Kit Selfie Segmentation)
Key capabilities:
To join our early access program and request access to ML Kit's Selfie Segmentation API, please fill out this form.
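While the Selfie Segmentation API is still in early access, a hypothetical Kotlin sketch, modeled on ML Kit's other vision APIs, gives a sense of what integration could look like (all class and method names here are assumptions rather than the confirmed early access surface, and `bitmap` and `applyBackgroundEffect` are placeholders):

import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.segmentation.Segmentation
import com.google.mlkit.vision.segmentation.selfie.SelfieSegmenterOptions

// Hypothetical sketch: configure a single-image selfie segmenter and
// process a bitmap; the returned mask scores each pixel as foreground.
val segmenter = Segmentation.getClient(
    SelfieSegmenterOptions.Builder()
        .setDetectorMode(SelfieSegmenterOptions.SINGLE_IMAGE_MODE)
        .build()
)
segmenter.process(InputImage.fromBitmap(bitmap, 0))
    .addOnSuccessListener { mask ->
        // mask.buffer holds per-pixel foreground confidences (mask.width x mask.height).
        applyBackgroundEffect(bitmap, mask)
    }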
Posted by Charles Maxson, Developer Advocate, Google Cloud
It’s been a little over a decade since Apps Script was introduced as the development platform to automate and extend Google Workspace. Since its inception, tens of millions of solution builders, ranging from professional developers to business users and hobbyists, have adopted Apps Script because its tight integration with Google Workspace, coupled with its relative ease of use, makes building solutions fast and accessible.
Over the course of its history, Apps Script has constantly evolved to keep up with the ever-changing Google Workspace applications themselves, as new features are introduced and existing ones enhanced. Changes to the platform and the development environment itself have been more deliberate, allowing the wide-ranging Apps Script developer audience to rely on a predictable and proven development experience.
Recently, there have been some notable updates. Earlier this year the Apps Script runtime engine went through a major update from the original Rhino runtime to the new V8 version, allowing you to leverage modern JavaScript features in your Apps Script projects. Another milestone launch was the introduction of the Apps Script Dashboard, the ‘homepage’ of Apps Script, where you have access to all your projects and Apps Script platform settings by simply navigating to script.google.com.
But the core of the overall developer experience, the Apps Script IDE (Integrated Development Environment) where developers spend most of their time writing and debugging code, managing versions and exceptions, and deploying projects, has remained relatively unchanged over Apps Script's long, storied history. That is, until now: as an Apps Script developer, you are about to get more productive!
The new Apps Script IDE features the same rich integration with Google Workspace as before, allowing you to get started building solutions without having to install or configure anything. If you are working on a standalone script project, you can launch it directly from the Apps Script Dashboard; if you are working on a container-bound project in Sheets, Slides, or Docs, you can do so by selecting Tools > Script editor from the top menu.
Apps Script Project Dashboard
If you launch your project from the Apps Script Dashboard, you will still start off in the Project Details Overview page. The contents of the Project Details page are relatively unchanged, with just a few cosmetic updates; you can still get project info on the number of executions and users for your project, errors, and OAuth scopes in use. On closer inspection, however, the seemingly subtle change to the left-hand navigation is actually the first big enhancement of the new Apps Script IDE. Previously, when you launched into a project, you still had the Apps Script Dashboard menus, which let you navigate your projects and view all your executions, triggers, and Apps Script features.
Apps Script Project Details Overview
With the new IDE experience, the prior Apps Script Dashboard menu gives way to a new project-specific menu that lets you focus on your active project. This offers developers a more unified experience of moving between project settings and the code editor without having to navigate menus or bounce back to the applications dashboard. So while it's a subtle change at first glance, it's actually a significant boost for productivity.
If you launch the new IDE from a container-bound project, you will immediately enter the new Apps Script code editor, but the new project menu and developer flow are identical.
One of the more striking updates of the new Apps Script IDE was the work done on the code editor to modernize its look and feel, while also unifying the design with the overall developer experience. More than just aesthetic changes, the new code editor was designed to help developers focus on the most common essential tasks. This includes moving away from the traditional menu elements across the top of the original code editor, to a streamlined set of commands that focus on developer productivity. For example, the new code editor offers a simplified menu system optimized for running and debugging code, while all the other ‘project-related’ functions have been reorganized outside the code editor to the left-hand project navigation bar. This will simplify and set the focus on the core task of writing code, which will assist both new and seasoned Apps Script developers.
Apps Script Code Editor
Behind the fresh new look of the Apps Script code editor, there is a long list of new productivity enhancements that will make any Apps Script developer happy. Some are subtle, some are major. Many are simply delightful. Any combination of them will make you more productive in writing Apps Script code. Here are just some of the highlights:
Code Editor Enhancements
Context Menu Image
Command Palette Menu Image
Debugger Image
Logging Image
The best way to explore all that awaits you in the new Apps Script IDE and code editor is by simply diving in and writing some code at script.new. Then you will truly see how the new code editor helps you be more productive, enabling you to write code faster, with fewer errors and greater readability. The previous version of the Apps Script IDE and code editor served us well, but a jump in productivity and an overall better developer experience awaits you in the new Apps Script IDE.
Go write your best code starting at: script.google.com
To learn more: developers.google.com/apps-script
Posted by Google Developer Studio
Computer Science Education Week kicks off on Monday, December 7th and runs through the 13th. This annual call-to-action was started in 2009 by the Computer Science Teachers Association (CSTA) to raise awareness about the need to encourage CS education at all levels and to highlight the importance of computing across industries.
Google Developers strives to make learning Google technology accessible to all and works to empower developers to build good things together.
Whether you’re a student or a teacher, check out our collection of introductory resources and demos below. Learn how you can get started in your developer journey or empower others to get started.
Note: Some resources may require additional background knowledge.
The Google Assistant developer platform lets you create software to extend the functionality of the Google Assistant with “Actions”. Actions let users get things done through a conversational interface that can range from a simple command, such as turning on the lights, to a longer conversation, such as playing a trivia game or exploring a recipe for dinner.
As a developer, you can use the platform to easily create and manage unique and effective conversational experiences for your users.
Actions auto-generated from web content.
Codelab: Build Actions for Google Assistant using Actions Builder (Level 1)
This codelab covers beginner-level concepts for developing with Google Assistant; you do not need any prior experience with the platform to complete it. You’ll learn how to build a simple Action for the Google Assistant that tells users their fortune as they begin their adventure in the mythical land of Gryffinberg. Continue on to level 2 if you’re ready!
Codelab: Build Actions for Google Assistant using Actions SDK (Level 1)
This codelab covers beginner-level concepts for developing with the Actions SDK for Google Assistant; you do not need any prior experience with the platform to complete it.
Tip: If you prefer to work with more visual tools, do the Level 1 Actions Builder codelab instead, which creates the same Action using the in-console Actions Builder. View additional codelabs here.
Android is the world's most powerful mobile platform, with more than 2.5 billion active devices.
Build your first Android app by taking the free online Android Basics in Kotlin course created by Google. No programming experience is needed. You'll learn important concepts on how to build an app as well as the fundamentals of programming in Kotlin, the recommended programming language for developers who are new to Android. Start with the first unit: Kotlin Basics for Android!
Once you’re ready to take your app development to the next level, check out Android Basics: Unit 2, where you'll build a tip calculator app and an app with a scrollable list of images. You can customize these apps or start building your own Android apps!
You can find more resources such as courses and documentation on developer.android.com. Stay up-to-date on the latest educational resources from the Android engineering team by following our YouTube channel, Twitter account, and subscribing to our newsletter.
Developer Student Clubs are university based community groups for students interested in Google developer technologies.
DSC Solution Challenge
For two years, DSC has challenged students to solve problems in their local communities using technology. Learn the steps to get started on a real-life project with tips from Arman Hezarkhani. Get inspired by the 2020 Solution Challenge winners and see what they built here.
If you’re a university student interested in joining or leading a DSC near you, click here to learn more.
Firebase is a mobile and web application development platform that allows you to manage and solve key challenges across the app lifecycle with its full suite of tools for building apps, improving quality, and growing your business.
Codelab: Get to know Firebase for web
In this introductory codelab, you'll learn some of the basics of Firebase to create interactive web applications. Learn how to build and deploy an event RSVP and guestbook chat app using several Firebase products.
Codelab: Firebase web codelab
Following “Get to Know Firebase for web”, take this next codelab and you'll learn how to use Firebase to easily create web applications by implementing and deploying a chat client using Firebase products and services.
Get all the latest educational resources from the Firebase engineering team by following our YouTube channel, Twitter account, and visiting the website.
Do you want to learn how to build natively compiled apps for mobile, web, and desktop from a single codebase? If the answer is yes, we have some great resources for you.
This is a guide to creating your first Flutter app. If you are familiar with object-oriented code and basic programming concepts such as variables, loops, and conditionals, you can complete this tutorial. You don’t need previous experience with Dart, mobile, or web programming.
Check out this free course from Google and Udacity, which is the perfect course if you’re brand new to Flutter.
Google Cloud Platform helps you build what's next with secure infrastructure, developer tools, APIs, data analytics and machine learning.
Cloud OnBoard: Core Infrastructure
In this training, learn the fundamentals of Google Cloud and how it can boost your flexibility and efficiency. During sessions and demos, you'll see the ins and outs of some of Google Cloud's most impactful tools, discover how to maximize your VM instances, and explore the best ways to approach your container strategy.
Google Cloud Codelabs and Challenges
Complete a codelab and coding challenge on Google Cloud topics such as Google Cloud Basics, Compute, Data, Mobile, Monitoring, Machine Learning and Networking.
For in-depth Google Cloud tutorials and the latest Google Cloud news, tune into our Google Cloud Platform YouTube channel!
Google Pay lets your customers pay with the press of a button — using payment methods saved to their Google Account. Learn how to integrate the Google Pay APIs for web and Android.
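As a quick taste of the Android side, a minimal sketch of the standard availability check with the Play Services Wallet API might look like this (`activity` and `isReadyToPayJson` are placeholders; the request JSON format is documented on the Google Pay developer site):

import com.google.android.gms.wallet.IsReadyToPayRequest
import com.google.android.gms.wallet.Wallet
import com.google.android.gms.wallet.WalletConstants

// Sketch: ask Play Services whether Google Pay is available on this device
// before showing the Google Pay button.
val paymentsClient = Wallet.getPaymentsClient(
    activity,
    Wallet.WalletOptions.Builder()
        .setEnvironment(WalletConstants.ENVIRONMENT_TEST)
        .build()
)
paymentsClient.isReadyToPay(IsReadyToPayRequest.fromJson(isReadyToPayJson))
    .addOnSuccessListener { ready -> if (ready) { /* show the Google Pay button */ } }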
Google Workspace, formerly known as G Suite, includes all of the productivity apps you know and love—Gmail, Calendar, Drive, Docs, Sheets, Slides, Meet, and many more.
The Google Workspace Developer Platform is a collection of tools and resources that let you customize, extend, and integrate with Google Workspace. Low-code tools such as Apps Script enable you to build customizations that automate routine tasks, while professional resources such as Add-ons and APIs enable software vendors to build applications that extend and integrate with Google Workspace.
Learn Apps Script fundamentals with codelabs
If you're new to Apps Script, you can learn the basics using our Fundamentals of Apps Script with this Google Sheets codelab playlist.
Stay updated on the newest Google Workspace developer tools and tutorials by following us on our YouTube channel and Twitter!
Material Design is a design system, created by Google and backed by open-source code, that helps teams build high-quality digital experiences. Whether you’re building for Android, Flutter, or the web, we have guidelines, code, and resources to help you build beautiful products, faster. We’ve compiled the best beginner resources here:
If you’re interested in learning more about Material Design, subscribe to the brand new YouTube channel for updates and Q&A format videos.
TensorFlow is an open source platform for machine learning to help you solve challenging, real-world problems with an entire ecosystem of tools, libraries and community resources.
Teachable Machine
Teachable Machine is a web tool that makes creating machine learning models fast, easy, and accessible to everyone. See how you can create a machine learning model without writing any code, save models, and use them in your own future projects. The models you make are real TensorFlow.js models that work anywhere JavaScript runs.
Machine Learning Foundations
Machine Learning Foundations is a free training course where you’ll learn the fundamentals of building machine-learned models using TensorFlow. Heads up! You will need to know a little bit of Python.
Subscribe to our YouTube channel and Twitter account for all the latest in machine learning.
Here are other ways our friends within the ecosystem are supporting #CSEdWeek.
Google for Education
Experiments with Google
A collection of innovative projects using Chrome, Android, AI, AR, Web, and more, along with helpful tools and resources to inspire others to create new experiments. New projects are added weekly, like this machine learning collaboration between Billie Eilish, YouTube Music and Google Creative Lab.