Posted by Allan Livingston, Product Management Director, Chrome OS App Ecosystem
When Google launched Chrome OS nine years ago, we designed every aspect around three core principles: speed, simplicity, and security. Last year at I/O, Google put those principles at developers’ fingertips by implementing Linux support on Chrome OS. This gave developers the increased flexibility of building and running Linux apps combined with the speed and security of working within Chrome OS.
In just the last year, the Chrome OS ecosystem has grown at an incredible rate. Linux support has been rolled out to over half of all Chromebooks. Plus, all devices launched this year will be Linux-ready right out of the box. The combination of Linux and Chrome OS makes for a great web development environment — and we’re making the process even easier for Android development.
At I/O this year, we showed web and Android developers a few of the most exciting improvements that have made Chrome OS an even faster, simpler, and more secure environment than ever. Let’s get into a few of the highlights:
File sharing
Today we announced that it’s much easier to share files between Linux, Android, and Chrome OS. Now you can use the file manager to move your files safely across Chrome OS, Google Drive, Android, and Linux.
Port forwarding
We’ve also made improvements to port forwarding on Chrome OS, making it easier to connect networking services between Linux and Chrome OS. That way, you can run a web server within the Linux container while debugging on the same machine.
Android Studio one-click installation and integrated debugging
Installing Android Studio on Chrome OS used to be a fairly lengthy process. Now, it takes a simple double-click. There’s no need to use a terminal to download, move, and unzip the file — just download it, click, and install.
Now in the beta channel with Chrome OS 75, we’ve also enabled secure USB support for Android phones. You can develop, debug, and push your APK to Android phones on any of the Android developer-recommended Chromebooks.
Chrome OS also automatically handles common installation pain-points, like hardware compatibility and power management set-up.
App developers have to consider a huge range of factors to deliver amazing experiences on every screen size and form factor. In just the last few years, the app experience has evolved far beyond mobile screens. People are using apps across different devices that blur the lines between mobile and desktop — from attaching keyboards to their tablets to using their smartphones to project onto a desktop screen. And no matter what device they’re using, they expect apps to deliver a seamless experience every time.
When you’re building on and for Chrome OS, you’re on a streamlined path to reaching a massive and fast-growing audience of engaged users. In just the last year, the number of monthly active users who enabled Android apps on Chrome OS has grown by 250%.1 And in Q4 2018, 21% of notebooks sold in the U.S. were Chromebooks — a 23% YoY unit sales growth.2
Because millions of Android apps already run on Chrome OS, you can take the same APK and extend your app’s reach to even more consumers with just a few tweaks. Whether they’re building apps with larger screens in mind from the start or optimizing old apps to reach new users, developers behind some of the most popular mobile apps and games have already seen incredible results from Chromebook users.
As people use apps in more unpredictable and inspiring ways, devs are seeing even higher engagement after optimizing for larger screens. Watch the video below to see how Concepts created a larger, more responsive canvas for aspiring digital designers and how BandLab gave musicians a more immersive platform for exploring and composing new music.
It’s never been easier or more secure to develop for the Web and Android on Chrome OS. Between a fast-growing user base, Progressive Web Apps, millions of Android apps, and now, Linux, the potential for developing on and for Chrome OS is only going to keep growing.
Chrome OS delivers the speed and performance app users expect, and it’s now even faster, simpler, and more secure than ever for all developers.
We can’t wait to see the amazing stuff you create with your Chromebooks!
Posted by the Flutter Team
Today marks an important milestone for the Flutter framework, as we expand our focus from mobile to incorporate a broader set of devices and form factors. At I/O, we’re releasing our first technical preview of Flutter for web, announcing that Flutter is powering Google’s smart display platform including the Google Home Hub, and delivering our first steps towards supporting desktop-class apps with Chrome OS.
For a long time, the Flutter team’s mission has been to build the best framework for developing mobile apps for iOS and Android. We believe that mobile development is ripe for improvement, with developers today forced to choose between building the same app twice for two platforms, or making compromises to use cross-platform frameworks. Flutter hits the sweet spot of enabling a single codebase to deliver beautiful, fast, tailored experiences with high developer productivity for both platforms, and we’ve been excited to see how our early efforts have flourished into one of the most popular open source projects.
As we started to home in on our 1.0 release last year, we began experimenting with broadening the scope of Flutter to other platforms. This was triggered both by internal teams within Google who are increasingly relying on Flutter, and by the latent potential of the Dart platform for delivering portable experiences. In particular, a small team that was already building a web framework for Dart for internal use started an exploratory project (codenamed “Hummingbird”) to evaluate the technical merits of porting the Flutter engine to support the standards-based web.
The results of this project were startling, thanks in large part to the rapid progress in web browsers like Chrome, Firefox, and Safari, which have pervasively delivered hardware-accelerated graphics, animation, and text as well as fast JavaScript execution. Within a few months of beginning the project, we had the core Flutter framework primitives working, and soon after we had demos running on mobile and desktop browsers. Along with Dart’s long pedigree of compiling for the web, this proved that we could also bring the Flutter framework and apps to run on the web.
In parallel, the core Flutter project has been making progress to enable desktop-class apps, with input paradigms such as keyboard and mouse, window resizing, and tooling for Chrome OS app development. The exploratory work that we did for embedding Flutter into desktop-class apps running on Windows, Mac and Linux has also graduated into the core Flutter engine.
It’s worth pausing for a moment to acknowledge the business potential of a high-performance, portable UI framework that can deliver beautiful, tailored experiences to such a broad variety of form factors from a single codebase.
For startups, the ability to reach users on mobile, web, or desktop through the same app lets them reach their full audience from day one, rather than being limited by technical considerations. Especially for larger organizations, the ability to deliver the same experience to all users with one codebase reduces complexity and development cost, and lets them focus on improving the quality of that experience.
With support for mobile, desktop, and web apps, our mission expands: we want to build the best framework for developing beautiful experiences for any screen.
This week, we are releasing the first technical preview of Flutter for the web. While this technology is still in development, we are ready for early adopters to try it out and give us feedback. Our initial vision for Flutter on the web is not as a general purpose replacement for the document experiences that HTML is optimized for; instead we intend it as a great way to build highly interactive, graphically rich content, where the benefits of a sophisticated UI framework are keenly felt.
To showcase Flutter for the web, we worked with the New York Times to build a demo. In addition to world-class news coverage, the New York Times is famous for its crossword and other puzzle games. Since avid puzzlers want to play on whatever device they’re using at the time, their development team was attracted to Flutter as a potential solution for their needs. Discovering that they could reach the web with the same code was a huge boon. At Google I/O this week, you can get a sneak peek of their newly refreshed KENKEN puzzle game, which runs with the same code on Android, iOS, web, Mac, and Chrome OS.
Here’s what Eric von Coelln, Executive Director of Puzzles at the New York Times has to say about their experiences with Flutter:
“The New York Times Crossword has more than 400,000 stand-alone subscriptions and is a daily ritual for puzzle solvers. Along with the Crossword, we’ve grown our portfolio of digital puzzles that reaches more than two million solvers each month. We were already beginning to explore Flutter as a potential solution to the challenge of quickly developing engaging, high-quality mobile experiences. Now the addition of being able to publish to web makes Flutter an even more appealing option to quickly deploy across all of our user platforms. This update of our old Flash-based KenKen game into a multi-platform playable experience is something we’re excited to bring to our solvers this year.”
There’s lots more to say about Flutter for web than we have space for here, so check out the dedicated article about Flutter for web on the Flutter blog.
At this early stage, we’re eager to get your feedback on how you’d like to use Flutter for web. We expect to rapidly evolve the code, with a particular focus on performance, and harmonizing the codebase with the rest of the Flutter project.
The core Flutter framework also receives an upgrade this week, with the immediate availability of Flutter 1.5 in our stable channel. Flutter 1.5 includes hundreds of changes in response to developer feedback, including updates for new App Store iOS SDK requirements, updates to the iOS and Material widgets, engine support for new device types, and Dart 2.3 featuring new UI-as-code language features.
As the framework itself matures, we’re investing in building out the supporting ecosystem. The architectural model of Flutter has always prioritized a small core framework, supplemented by a rich package community. In the last few months, Google has contributed production-quality packages for web views, Google Maps, and Firebase ML Vision, and this week, we’re adding initial support for in-app payments. And with over 2,000 open source packages available for Flutter, there are options available for most scenarios.
One particularly exciting project that we’re announcing this week at I/O is the ML Kit Custom Image Classifier. Built using Flutter and Firebase, it offers an easy-to-use app-based workflow for creating custom image classification models. You can collect training data using the phone's camera, invite others to contribute to your datasets, trigger model training, and use trained models, all from the same app.
Flutter continues to grow in popularity and adoption. A growing roster of demanding customers including eBay, Sonos, Square, Capital One, Alibaba and Tencent are developing apps with Flutter. And they’re having fun! Here’s what Larry McKenzie, a senior developer at eBay had to say about Flutter:
“Flutter is fast! Features that once took us multiple days to implement can be finished in a single day. Many problems we used to spend a lot of time on, simply no longer occur. Our team can now focus on creating more polished user experiences and delivering functionality. Flutter is enabling us to exceed expectations!”
More broadly, LinkedIn recently conducted a study that showed Flutter is the single fastest-growing skill among software engineers, based on site members claiming it on their profile over the last 12 months. And in the recent 2019 StackOverflow developer survey, Flutter was listed as one of the most-loved developer frameworks.
Flutter is also being used on the desktop. For some months, we’ve been working on desktop support as an experimental project. Now we’re graduating this work into the core Flutter engine, integrating it directly into the mainline repo. While these targets are not yet production-ready, we have published early instructions for developing Flutter apps to run on Mac, Windows, and Linux.
Another quickly growing Flutter platform is Chrome OS, with millions of Chromebooks being sold every year, particularly in education. Chrome OS is a perfect environment for Flutter, both for running Flutter apps, and as a developer platform, since it supports execution of both Android and Linux apps. With Chrome OS, you can use Visual Studio Code or Android Studio to develop a Flutter app that you can test and run locally on the same device without an emulator. You can also publish Flutter apps for Chrome OS to the Play Store, where millions of others can benefit from your creation.
As the final example of Flutter’s portability, we offer Flutter embedded on other devices. We recently published samples that demonstrate Flutter running directly on smaller-scale devices like Raspberry Pi, and we offer an embedding API for Flutter that allows it to be used in scenarios including home, automotive and beyond.
Perhaps one of the most pervasive embedded platforms where Flutter is already running is on the smart display operating system that powers the likes of Google Home Hub.
Within Google, some Google-built features for the Smart Display platform are powered by Flutter today. And the Assistant team is excited to continue to expand the portfolio of features built with Flutter for the Smart Display in the coming months; the goal this year is to use Flutter to drive the overall system UI.
We often get asked by developers how they can get started with Flutter. We are pleased today to announce a comprehensive new training course for Flutter, built by The App Brewery, authors of the highest-rated iOS training course on Udemy. Their new course has over thirty hours of content for Flutter, including videos, demos, and labs, and with Google’s sponsorship, they are announcing today a time-limited discount on this course, from the retail price of $199 to just $10.
Many developers are creating inspiring apps with Flutter. In the run-up to Google I/O, we ran a contest called Flutter Create to encourage developers to see what they could build with Flutter in 5KB or less of Dart code. We had over 750 unique entries from around the world, with some amazing examples that pushed the limits of what we imagined would be possible in such a small size.
Today, we’re announcing the winners, which can be found on flutter.dev/create. Congratulations to the overall winner, Zebiao Hu, who wins a fully-loaded iMac Pro worth over $10,000!
Flutter is no longer a mobile framework, but a multi-platform framework that can help you reach your users wherever they are. We can’t wait to see what you’ll build with Flutter on the web, desktop, mobile, and beyond!
Posted by Chris Turkstra, Director, Actions on Google
People are using the Assistant every day to get things done more easily, creating lots of opportunities for developers on this quickly growing platform. And we’ve heard from many of you who want easier ways to connect your content across the Assistant.
At I/O, we’re announcing new solutions for Actions on Google that were built specifically with you in mind. Whether you build for web, mobile, or smart home, these new tools will help make your content and services available to people who want to use their voice to get things done.
Help people with their “how to” questions
Every day, people turn to the internet to ask “how to” questions, like how to tie a tie, how to fix a faucet, or how to install a dog door. At I/O, we’re introducing support for How-to markup that lets you power richer and more helpful results in Search and the Assistant.
Adding How-to markup to your pages enables them to appear as rich results on mobile Search and on Google Assistant Smart Displays. This is an incredibly lightweight way for web developers and creators to connect with millions of people, giving them helpful step-by-step instructions with video, images, and text. You can start seeing How-to markup results on Search today, and your content will become available on Smart Displays in the coming months.
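Concretely, How-to markup is schema.org structured data embedded in your page. A minimal sketch of what it might look like as JSON-LD (the title, step text, and image URLs here are placeholders, not content from a real page):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "How to tie a tie",
  "step": [
    {
      "@type": "HowToStep",
      "name": "Drape the tie",
      "text": "Drape the tie around your neck with the wide end on your right.",
      "image": "https://example.com/images/step1.jpg"
    },
    {
      "@type": "HowToStep",
      "name": "Cross the wide end",
      "text": "Cross the wide end over the narrow end.",
      "image": "https://example.com/images/step2.jpg"
    }
  ]
}
</script>
```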
Here’s an example where DIY Network added markup to their existing content on the web to provide a more helpful, interactive result on both Google Search and the Assistant:
For content creators who don’t maintain a website, we created a How-to Video Template: video creators can upload a simple spreadsheet with titles, text, and timestamps for their YouTube video, and we’ll handle the rest. This is a simple way to transform your existing how-to videos into interactive, step-by-step tutorials across Google Assistant Smart Displays and Android phones.
Check out how REI is getting extra mileage out of their YouTube video:
How-to Video Templates are in developer preview so you can start building today, and your content will become available on Android phones and Smart Displays in the coming months.
Help people quickly get things done with App Actions
If you’re an app developer, people are turning to your apps every day to get things done. And we see people turn to the Assistant every day for a natural way to ask for help via voice. This offers an opportunity to use intents to create voice-based entry points from the Assistant to the right spot in your app.
Last year, we previewed App Actions, a simple mechanism for Android developers that uses intents from the Assistant to deep link to exactly the right spot in your app. At I/O, we are announcing the release of built-in intents for four new App Action categories: Health & Fitness, Finance and Banking, Ridesharing, and Food Ordering. Using these intents, you can integrate with the Assistant in no time.
If I wanted to track my run with Nike Run Club, I could just say “Hey Google, start my run in Nike Run Club” and the app will automatically start tracking my run. Or, let’s say I just finished dinner with my friend Chad and we’re splitting the check. I can say “Hey Google, send $15 to Chad on PayPal” and the Assistant takes me right into PayPal, I log in, and all of my information is filled in – all I need to do is hit send.
Each of these integrations was completed in less than a day with the addition of an Actions.xml file that handles the mapping of intents between your app and the Actions platform. You can start building with these new intents today and deploy to Assistant users on Android in the coming months. This is a huge opportunity to offer your fans an effortless way to engage more frequently with your apps.
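As a rough illustration of that mapping, an Actions.xml entry for a fitness app might look something like this (the URL template and parameter names are hypothetical placeholders; consult the App Actions documentation for the exact schema):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/xml/actions.xml, referenced from the app's AndroidManifest.xml -->
<actions>
  <!-- Built-in intent from the new Health & Fitness category -->
  <action intentName="actions.intent.START_EXERCISE">
    <!-- Deep link the Assistant opens when the intent matches -->
    <fulfillment urlTemplate="https://example.com/workout{?exerciseType}">
      <parameter-mapping
          intentParameter="exercise.name"
          urlParameter="exerciseType" />
    </fulfillment>
  </action>
</actions>
```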
Take advantage of Smart Displays’ interactive screens
Last year, we saw the introduction of the Smart Display as a new device category. The interactive visual surface opens up many new possibilities for developers.
Today, we’re introducing a developer preview of Interactive Canvas, which lets you create full-screen experiences that combine the power of voice, visuals, and touch. Canvas works across Smart Displays and Android phones, and it uses open web technologies you’re likely already familiar with, like HTML, CSS, and JavaScript.
Here’s an example of what you can build when you leverage the full screen of a Smart Display:
Interactive Canvas is available for building games starting today, and we’ll be adding more categories soon. Visit the Actions Console to be one of the first to try it out.
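At its core, the web side of an Interactive Canvas experience registers callbacks with the `interactiveCanvas` global that the Assistant injects into the page. A minimal sketch in JavaScript (the state shape and field names are illustrative assumptions, not the exact API contract):

```javascript
// Local game state mirrored from the conversational Action.
const state = { score: 0 };

const callbacks = {
  // Called whenever the Action sends new data to the canvas page.
  onUpdate(data) {
    if (typeof data.score === 'number') {
      state.score = data.score;
    }
  },
};

// `interactiveCanvas` only exists inside the Assistant runtime,
// so guard for it when the page runs elsewhere (e.g. local testing).
if (typeof interactiveCanvas !== 'undefined') {
  interactiveCanvas.ready(callbacks); // register with the Assistant runtime
}
```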
Enable smart home devices to communicate locally
There are now more than 30,000 connected devices that work with the Assistant across 3,500 brands, and today, we’re excited to announce a new suite of local technologies that are specifically designed to create an even better smart home.
We’re introducing a preview of the Local Home SDK, which enables you to run your smart home code locally on Google Home speakers and Nest displays and use their radios to communicate locally with your smart devices. This reduces cloud hops and brings a new level of speed and reliability to the smart home. We’ve been working with some amazing partners including Philips, Wemo, TP-Link, and LIFX on testing this SDK, and we’re excited to open it up to all developers next month.
Make setup more seamless
Through the Local Home SDK, we’re also making device setup more seamless for users, something we launched in partnership with GE smart lights this past October. So far, people have loved the ability to set up their lights in less than a minute in the Google Home app. We’re now scaling this to more partners, so go here if you’re interested.
Make your devices smart with Assistant Connect
Also, at CES earlier this year we previewed Google Assistant Connect which leverages the Local Home SDK. Assistant Connect enables smart home and appliance developers to easily add Assistant functionality into their devices at low cost. It does this by offloading a lot of work onto the Assistant to complete Actions, display content and respond to commands. We've been hard at work developing the platform along with the first products built on it by Anker, Leviton and Tile. We can't wait to show you more about Assistant Connect later this year.
New device types and traits
For those of you creating Actions for the smart home, we’re also releasing 16 new device types and three new device traits including LockUnlock, ArmDisarm, and Timer. Head over to our developer documentation for the full list of 38 device types and 18 device traits, and check out our sample project on GitHub to start building.
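To make the new types and traits concrete, here is a sketch of what a smart home Action’s SYNC response might look like for a lock supporting the new LockUnlock trait (the request id, device id, and name are made-up placeholders):

```json
{
  "requestId": "ff36a3cc-ec34-11e6-b1a0-64510650abcf",
  "payload": {
    "agentUserId": "user-123",
    "devices": [
      {
        "id": "front-door-lock",
        "type": "action.devices.types.LOCK",
        "traits": ["action.devices.traits.LockUnlock"],
        "name": { "name": "Front door lock" },
        "willReportState": true
      }
    ]
  }
}
```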
Whether you’re looking to extend the reach of your content, drive more usage in your apps, or build custom Assistant-powered experiences, you now have more tools to do so.
If you want to learn more about how you can start building with these tools, check out our website to get started and our schedule so you can tune in to all of our developer talks that we’ll be hosting throughout the week.
We can’t wait to build together with you!
Posted by Anuj Gosalia
A little over a year ago, we introduced ARCore: a platform for building augmented reality (AR) experiences. Developers have been using it to create thousands of ARCore apps that help people with everything from fixing their dishwashers, to shopping for sunglasses, to mapping the night sky. Since last I/O, we’ve quadrupled the number of ARCore-enabled devices to an estimated 400 million.
Today at I/O, we introduced updates to Augmented Images and Light Estimation — features that let you build more interactive and realistic experiences. And to make it easier for people to experience AR, we introduced Scene Viewer, a new tool that lets users view 3D objects in AR right from your website.
To make experiences appear realistic, we need to account for the fact that things in the real world don’t always stay still. That’s why we’re updating Augmented Images — our API that lets people point their camera at 2D images, like posters or packaging, to bring them to life. The updates enable you to track moving images and multiple images simultaneously. This unlocks the ability to create dynamic and interactive experiences like animated playing cards where multiple images move at the same time.
An example of how the Augmented Images API can be used with moving targets by JD.com
Last year, we introduced the concept of light estimation, which provides a single ambient light intensity to extend real world lighting into a digital scene. In order to provide even more realistic lighting, we’ve added a new mode, Environmental HDR, to our Light Estimation API.
Before and after Environmental HDR is applied to the digital mannequin on the left, featuring 3D printed designs from Julia Koerner
Environmental HDR uses machine learning with a single camera frame to understand high dynamic range illumination in 360°. It takes in available light data, and extends the light into a scene with accurate shadows, highlights, reflections and more. When Environmental HDR is activated, digital objects are lit just like physical objects, so the two blend seamlessly, even when light sources are moving.
Digital mannequin on left and physical mannequin on right
Environmental HDR provides developers with three APIs to replicate real-world lighting: the main directional light, which helps cast shadows in the right direction; ambient spherical harmonics, which model the remaining ambient light energy in the scene; and an HDR cubemap, which powers realistic reflections on shiny surfaces.
We want to make it easier for people to jump into AR, so today we’re introducing Scene Viewer, a new tool that lets AR experiences launch right from your website without requiring users to download a separate app.
To make your assets accessible via Scene Viewer, first add a glTF 3D asset to your website with the <model-viewer> web component, and then add the “ar” attribute to the <model-viewer> markup. Later this year, experiences in Scene Viewer will begin to surface in your Search results.
```html
<script type="module" src="https://unpkg.com/@google/model-viewer/dist/model-viewer.js"></script>
<script nomodule src="https://unpkg.com/@google/model-viewer/dist/model-viewer-legacy.js"></script>

<model-viewer ar
    src="examples/assets/YOURMODEL.gltf"
    auto-rotate
    camera-controls
    alt="TEXT ABOUT YOUR MODEL"
    background-color="#455A64"></model-viewer>
```
NASA.gov enables users to view the Curiosity Rover in their space
These are a few ways that improving real world understanding in ARCore can make AR experiences more interactive, realistic, and easier to access. Look for these features to roll out over the next two releases. To learn more and get started, check out the ARCore developer website.
Google Pay is designed to make transactions simple, from contactless payments to online purchases and even peer-to-peer payments. It also allows users to store tickets and passes, manage loyalty cards, and keep track of transactions. With Google Pay, users can pay with all credit and debit cards saved to their Google Account, enabling hundreds of millions of cards for faster checkout in your apps or websites. This includes payments for goods and services on e-commerce merchants, online marketplaces, and Android apps.
When you integrate the Google Pay API into your app or site, your customers can then transact using any of those cards in as few as two clicks.
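A minimal sketch of that flow using the Google Pay JavaScript library (the gateway, merchant values, and price below are placeholders you would replace with your own configuration):

```javascript
// Describe which cards you accept and how they should be tokenized.
// Gateway and merchant values are placeholders for illustration only.
const paymentDataRequest = {
  apiVersion: 2,
  apiVersionMinor: 0,
  allowedPaymentMethods: [{
    type: 'CARD',
    parameters: {
      allowedAuthMethods: ['PAN_ONLY', 'CRYPTOGRAM_3DS'],
      allowedCardNetworks: ['MASTERCARD', 'VISA'],
    },
    tokenizationSpecification: {
      type: 'PAYMENT_GATEWAY',
      parameters: { gateway: 'example', gatewayMerchantId: 'exampleMerchantId' },
    },
  }],
  merchantInfo: { merchantName: 'Example Merchant' },
  transactionInfo: {
    totalPriceStatus: 'FINAL',
    totalPrice: '12.00',
    currencyCode: 'USD',
  },
};

// The google.payments.api global only exists once pay.js has loaded
// in the browser, so guard for it here.
if (typeof google !== 'undefined' && google.payments) {
  const client = new google.payments.api.PaymentsClient({ environment: 'TEST' });
  client.isReadyToPay(paymentDataRequest)
    .then((res) => (res.result ? client.loadPaymentData(paymentDataRequest) : null))
    .then((paymentData) => {
      if (paymentData) {
        // Forward paymentData.paymentMethodData to your payment gateway.
        console.log(paymentData.paymentMethodData.type);
      }
    })
    .catch(console.error);
}
```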
When users use their NFC-enabled mobile device or smart watch to pay in places such as supermarkets, restaurants or shops, the card selected is emulated from the device using a secure number that changes on every transaction. Only the bank or card issuer can decrypt this number to process the transaction. The process of securing your card details is called tokenization. Only cards from supported banks can be tokenized, and this is a necessary step to pay contactless using Google Pay.
Users can pay in-stores using NFC-enabled devices with forms of payment that support tokenization.
In contrast, when users pay in your app or on your site through Google Pay, they can select any card saved to their Google Account, including tokenized cards. This enables users to pay on any device in your sites and apps globally.
Users paying online can use any card saved under their Google account(s).
All forms of payment are stored in the user’s Google Account and protected by multiple layers of security. This includes payment methods that users have already saved to pay for services like YouTube or Google Play, or to speed up checkout forms using Chrome Autofill.
You can integrate Google Pay’s online APIs to increase conversions by providing your users with a more convenient, more secure, and faster way to pay. Some of the benefits include:
Adding Google Pay to your site or application is just a few lines of code away. There are tutorials on how to integrate Google Pay in your website or Android app and step-by-step guided codelabs for Web and Android. Here is a more visual tutorial:
To get started, use this integration checklist (Android | Web) to make sure you have everything you need to complete the integration. When you’re ready to go live with your integration, request production access and follow the final steps to deploy your app (Android | Web) in a production environment.
The Payment Request API is a W3C Web Payments standard that provides a native browser experience for collecting payment information from the user. You can accept Google Pay via PaymentRequest directly; however, this may not be available across all browsers.
To enable Google Pay for your users across all major browsers with a single implementation, we recommend using the Google Pay JavaScript library as described above. This enables a native Payment Request experience on Chrome, while giving you the flexibility of supporting Google users on other browsers.
The payments sheet is presented natively when triggered from a browser with support for Payment Handler API (on the right), while it falls back to showing a pop-up on browsers that don’t.
As users’ needs evolve, we continue to add features and forms of payment to the Google Pay API, like the recent addition of PayPal, so you can get access to these new payment methods in your app or site without any additional development work.
Don’t miss the Google Pay sessions at Google I/O this year to learn about the latest features we are bringing to Google Pay. Bookmark our sessions and check back for livestream details. We look forward to seeing you this week.
We’re looking forward to seeing you at I/O! To help you prepare, the official Google I/O app is now live on Android, with the iOS version arriving later this week.
Schedule on Android (left) and home page on iOS (right)
With this year’s app you can browse through the incredible content we have planned for I/O’19. Customize your schedule by favoriting sessions, which will be synced between all of your devices and the I/O website so you can check it anywhere, anytime. Attendees are also able to reserve spots in sessions, office hours, app reviews, and game reviews directly in the app.
New this year: Add individual events to your personal calendar to receive notifications before events are about to begin! Also new: Search through content by sessions, topics, and speakers.
Shoreline Amphitheatre will be your home May 7-9. Use the app to keep track of key moments with the agenda and find your way through I/O with the conference map.
New this year: Use the home page to view key conference moments, upcoming events, and receive important announcements. Also new: Explore I/O is a new feature that uses your camera to help you see where to go in augmented reality. To discover events, food, bathrooms, and more around you, scan the I/O maps at Shoreline.
This year, we’ve created Q&A forms to collect your pre-I/O questions to help direct session content at I/O. Simply sign in to the I/O website or app, click on any session, then click the ‘Q&A’ link and use the ‘+’ icon to submit your questions. Visit Q&A Help to learn more.
Get the #io19 app here!
For 2019, we've partnered with Aira to help I/O attendees who are blind or low vision navigate the event. Aira provides free assistance to I/O attendees from trained professional agents. Download the Aira app on Android or iOS to get assistance while onsite.
We look forward to seeing you very soon!
Posted by Justin Juul, Social Media Manager
We’re excited to announce the official launch of @googledevs, a new hub for developer culture where we’ll shine a spotlight on communities around the world and make new friends at events like Google I/O, The Android Dev Summit, Flutter Live, and more.
Follow us now to stay in tune with developers, designers, thought leaders, and other amazing people like yourself.
And don’t forget to say hi if you see us out in the wild. You might just wind up on our Instagram story.
Follow us here → www.instagram.com/googledevs
See you soon!
Posted by Anuj Gulati, Developer Marketing Manager and Sami Kizilbash, Developer Relations Program Manager
Last year we announced the Indie Games Accelerator, a special edition of Launchpad Accelerator, to help top indie game developers from emerging markets achieve their full potential on Google Play. Our team of program mentors had an amazing time coaching some of the best gaming talent from India, Pakistan, and Southeast Asia. We’re very encouraged by the positive feedback we received for the program and are excited to bring it back in 2019.
Applications for the class of 2019 are now open, and we’re happy to announce that we are expanding the program to developers from select countries* in Asia, the Middle East, Africa, and Latin America.
Successful participants will be invited to attend two all-expenses-paid gaming bootcamps at the Google Asia-Pacific office in Singapore, where they will receive personalized mentorship from Google teams and industry experts. Additional benefits include Google hardware, invites to exclusive Google and industry events, and more.
Find out more about the program and apply to be a part of it.
* The competition is open to developers from the following countries: Bangladesh, Brunei, Cambodia, India, Indonesia, Laos, Malaysia, Myanmar, Nepal, Pakistan, Philippines, Singapore, Sri Lanka, Thailand, Vietnam, Egypt, Jordan, Kenya, Lebanon, Nigeria, South Africa, Tunisia, Turkey, Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Ecuador, Guatemala, Mexico, Panama, Paraguay, Peru, Uruguay and Venezuela.
Posted by Mary Chen, Strategy Lead, Actions on Google
This year at Google I/O, the Actions on Google team is sharing new ways developers of all types can use the Assistant to help users get things done. Whether you’re making Android apps, websites, web content, Actions, or IoT devices, you’ll see how the Assistant can help you engage with users in natural and conversational ways.
Tune in to our announcements during the developer keynote, and then dive deeper with our technical talks. We’ve listed the talks below by area of interest. Make sure to bookmark them and reserve your seat if you’re attending live, or check back for livestream details if you’re joining us online.
In addition to these sessions, stay tuned for interactive demos and codelabs that you can try at I/O and at home. Follow @ActionsOnGoogle for updates and highlights before, during, and after the festivities.
Posted by William Florance, Global Head, Developer Training Programs
Building on our pledge to provide mobile developer training to 100,000 Africans so they can develop world-class apps, today we are pleased to announce the next round of Google Africa Certification Scholarships, aimed at helping developers become certified on Google’s Android, Web, and Cloud technologies.
This year, we are offering 30,000 additional scholarship opportunities and 1,000 grants for the Google Associate Android Developer, Mobile Web Specialist, and Associate Cloud Engineer certifications. The scholarship program will be delivered by our partners, Pluralsight and Andela, through an intensive learning curriculum designed to prepare motivated learners for entry-level and intermediate roles as software developers. Interested students in Africa can learn more about the Google Africa Certification Scholarships and apply here.
According to the World Bank, Africa is on track to have the largest working-age population (1.1 billion) by 2034. Today’s announcement marks a transition from inspiring new developers to preparing them for the jobs of tomorrow. Google’s developer certifications are performance-based: they are developed around a job-task analysis that tests learners for the skills employers expect developers to have.
As announced during Google CEO Sundar Pichai’s visit to Nigeria in 2017, our continued initiatives focused on digital skills training, education and economic opportunity, and support for African developers and startups demonstrate our commitment to helping advance a healthy and vibrant ecosystem. By providing support for training and certifications, we will help bridge the unemployment gap on the continent by increasing the number of employable software developers.
Although Google’s developer certifications are relatively new, we have already seen evidence that becoming certified can make a meaningful difference to developers and employers. Adaobi Frank, a graduate of the Associate Android Developer certification, got a better job that paid ten times more than her previous salary after completing her certification. Her interview was expedited because her employer was convinced she was a great fit for the role once she mentioned that she was certified. Now she has a job that helps provide for her family; see her video here. Through our efforts this year, we want to help many more developers like Ada and support the growth of startups and technology companies throughout Africa.
Follow this link to learn more about the scholarships and apply.