We launched the Google URL Shortener back in 2009 as a way to help people more easily share links and measure traffic online. Since then, many popular URL shortening services have emerged and the ways people find content on the Internet have also changed dramatically, from primarily desktop webpages to apps, mobile devices, home assistants, and more.
To refocus our efforts, we're turning down support for goo.gl over the coming weeks and replacing it with Firebase Dynamic Links (FDL). FDLs are smart URLs that allow you to send existing and potential users to any location within an iOS, Android or web app. We're excited to grow and improve the product going forward. While most features of goo.gl will eventually sunset, all existing links will continue to redirect to the intended destination.
Starting April 13, 2018, anonymous users and users who have never created short links before today will not be able to create new short links via the goo.gl console. If you are looking to create new short links, we recommend you check out popular services like Bitly and Ow.ly as an alternative.
If you have existing goo.gl short links, you can continue to use all features of the goo.gl console for a period of one year, until March 30, 2019, when we will discontinue the console. You can manage all your short links and their analytics through the goo.gl console during this period.
After March 30, 2019, all links will continue to redirect to the intended destination. Your existing short links will not be migrated to the Firebase console; however, you will be able to export your link information from the goo.gl console.
Starting May 30, 2018, only projects that have accessed URL Shortener APIs before today can create short links. To create new short links, we recommend FDL APIs. FDL short links will automatically detect the user's platform and send the user to either the web or your app, as appropriate.
If you are already calling URL Shortener APIs to manage goo.gl short links, you can continue to use them for a period of one year, until March 30, 2019, when we will discontinue the APIs. For developers looking to migrate to FDL, see our migration guide.
As it is for consumers, all links will continue to redirect to the intended destination after March 30, 2019. However, existing short links will not be migrated to the Firebase console/API.
URL Shortener has been a great tool that we're proud to have built. As we look towards the future, we're excited about the possibilities of Firebase Dynamic Links, particularly when it comes to dynamic platform detection and links that survive the app installation process. We hope you are too!
Today we are announcing integration of NVIDIA® TensorRT™ and TensorFlow. TensorRT is a library that optimizes deep learning models for inference and creates a runtime for deployment on GPUs in production environments. It brings a number of FP16 and INT8 optimizations to TensorFlow and automatically selects platform-specific kernels to maximize throughput and minimize latency during inference on GPUs. We are excited about the new integrated workflow as it simplifies the path to using TensorRT from within TensorFlow with world-class performance. In our tests, we found that ResNet-50 ran 8x faster at under 7 ms latency with the TensorFlow-TensorRT integration using NVIDIA Volta Tensor Cores, as compared with running TensorFlow only.
Now in TensorFlow 1.7, TensorRT optimizes compatible sub-graphs and lets TensorFlow execute the rest. This approach makes it possible to rapidly develop models with the extensive TensorFlow feature set while getting powerful optimizations from TensorRT when performing inference. If you were already using TensorRT with TensorFlow models, you know that certain unsupported TensorFlow layers had to be imported manually, which in some cases could be time consuming.
From a workflow perspective, you ask TensorRT to optimize TensorFlow's sub-graphs and replace each sub-graph with a TensorRT-optimized node. The output of this step is a frozen graph that can then be used in TensorFlow as before.
During inference, TensorFlow executes the graph for all supported areas and calls TensorRT to execute the TensorRT-optimized nodes. As an example, suppose your graph has three segments, A, B, and C. Segment B is optimized by TensorRT and replaced by a single node. During inference, TensorFlow executes A, then calls TensorRT to execute B, and then TensorFlow executes C.
The newly added TensorFlow API for TensorRT optimization takes the frozen TensorFlow graph, applies optimizations to sub-graphs, and sends back to TensorFlow a TensorRT inference graph with the optimizations applied. See the code below as an example.
# Reserve memory for TensorRT inference engine
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=number_between_0_and_1)
...
# Get optimized graph
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,
    outputs=output_node_name,
    max_batch_size=batch_size,
    max_workspace_size_bytes=workspace_size,
    precision_mode=precision)
The per_process_gpu_memory_fraction parameter defines the fraction of GPU memory that TensorFlow is allowed to use, with the remainder available for TensorRT. This parameter should be set the first time the TensorFlow-TensorRT process is started. As an example, a value of 0.67 would allocate 67% of GPU memory for TensorFlow and the remaining 33% for TensorRT engines.
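How that fraction takes effect is through the session configuration. Below is a minimal sketch, assuming TensorFlow 1.7 and a single TensorFlow-TensorRT process; the 0.67 value is just the example split from above.

import tensorflow as tf

# Let TensorFlow claim 67% of GPU memory; the rest stays free for TensorRT engines.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.67)
config = tf.ConfigProto(gpu_options=gpu_options)

# The first session created in the process fixes the memory split.
with tf.Session(config=config) as sess:
    pass  # load the TensorRT-optimized frozen graph and run inference here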
The create_inference_graph function takes a frozen TensorFlow graph and returns an optimized graph with TensorRT nodes. Let's look at the function's parameters:
input_graph_def: the frozen TensorFlow graph (GraphDef) to optimize.
outputs: a list of the names of the output nodes, e.g. ["resnet_v1_50/predictions/Reshape_1"].
max_batch_size: the maximum batch size the optimized engines should handle at inference time.
max_workspace_size_bytes: the maximum amount of GPU memory, in bytes, available to TensorRT for its engines.
precision_mode: the precision to optimize for; one of "FP32", "FP16", or "INT8".
As an example, if the GPU has 12GB of memory, in order to allocate ~4GB for TensorRT engines, set the per_process_gpu_memory_fraction parameter to (12 - 4) / 12 ≈ 0.67 and the max_workspace_size_bytes parameter to 4000000000.
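As a concrete sketch of that split, the relevant settings from the earlier snippet would look like the following. The frozen_graph_def variable and the ResNet-50 output node name are carried over from the example above, and the batch size is just an illustrative value.

import tensorflow as tf
from tensorflow.contrib import tensorrt as trt

# Give TensorFlow (12 - 4) / 12 ≈ 0.67 of the GPU, leaving ~4GB for TensorRT engines.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.67)

# frozen_graph_def is assumed to hold your frozen ResNet-50 GraphDef.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,
    outputs=["resnet_v1_50/predictions/Reshape_1"],
    max_batch_size=16,                        # illustrative batch size
    max_workspace_size_bytes=4000000000,      # ~4GB workspace for TensorRT engines
    precision_mode="FP32")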
Let's apply the new API to ResNet-50 and see what the optimized model looks like in TensorBoard. The complete code to run the example is available here. The image on the left is ResNet-50 without TensorRT optimizations and the image on the right is after the optimizations. In this case, most of the graph gets optimized by TensorRT and replaced by a single node (highlighted).
TensorRT provides capabilities to take models trained in single (FP32) and half (FP16) precision and convert them for deployment with INT8 quantization at reduced precision with minimal accuracy loss. INT8 models compute faster and place lower demands on memory bandwidth, but present a challenge in representing the weights and activations of neural networks because of the reduced dynamic range available.
To address this, TensorRT uses a calibration process that minimizes the information loss when approximating the FP32 network with a limited 8-bit integer representation. With the new integration, after optimizing the TensorFlow graph with TensorRT, you can pass the graph to TensorRT for calibration as below.
trt_graph = trt.calib_graph_to_infer_graph(calibGraph)
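For context, a minimal sketch of the full INT8 flow under the TensorFlow 1.7 contrib API might look like the following; the frozen graph, output names, batch size, workspace size, and calibration data pipeline are all assumed to be your own.

from tensorflow.contrib import tensorrt as trt

# Step 1: request an INT8 calibration graph rather than a final inference graph.
calib_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,        # assumed: your frozen model
    outputs=output_node_names,               # assumed: your output node names
    max_batch_size=batch_size,
    max_workspace_size_bytes=workspace_size,
    precision_mode="INT8")

# Step 2: run inference with calib_graph on representative calibration data
# (using your own input pipeline) so TensorRT can collect activation statistics.

# Step 3: convert the calibrated graph into the final INT8 inference graph.
trt_graph = trt.calib_graph_to_infer_graph(calib_graph)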
The rest of the inference workflow remains unchanged from above. The output of this step is a frozen graph that is executed by TensorFlow as described earlier.
TensorRT runs half-precision TensorFlow models on Tensor Cores in Volta GPUs for inference. Tensor Cores provide 8x more throughput than single-precision math pipelines. Compared with higher-precision FP32 or FP64, half-precision (also known as FP16) data reduces the memory usage of the neural network. This allows training and deployment of larger networks, and FP16 data transfers take less time than FP32 or FP64 transfers.
Each Tensor Core performs D = A x B + C, where A, B, C and D are matrices. A and B are half-precision 4x4 matrices, whereas D and C can be either half or single precision 4x4 matrices. The peak performance of Tensor Cores on the V100 is about an order of magnitude (10x) faster than double precision (FP64) and about 4 times faster than single precision (FP32).
We are excited about this release and will continue to work closely with NVIDIA to enhance this integration. We expect the new solution to deliver the highest performance possible while maintaining the ease and flexibility of TensorFlow. And as TensorRT supports more networks, you will automatically benefit from the updates without any changes to your code.
To get the new solution, you can use the standard pip install process once TensorFlow 1.7 is released:
pip install tensorflow-gpu==1.7.0
Until then, find detailed installation instructions here: https://github.com/tensorflow/tensorflow/tree/r1.7/tensorflow/contrib/tensorrt
Try it out and let us know what you think!
The Hamilton team and Posse, a design and development agency in New York, had three short months to develop and launch mobile apps for the hit Broadway show. How did they accomplish that? Using Flutter, Google's new mobile UI framework.
Reaching millions of users, with an outstanding half a million monthly active users and featured placement on both the App Store and Google Play, the apps let fans enter the ticket lottery, buy merchandise, play trivia, take selfies with the #HamCam, read frequently updated news and interviews, and more.
Watch this video case study to see how Flutter continues to help apps like Hamilton succeed on iOS and Android. You can read more details about the development of this app on Posse's blog post.
Flutter is free and open source. Get started today at flutter.io. We can't wait to see what you build!
Africa's digital journey is rapidly gaining speed. According to recent data, over 73 million people in Africa came online for the first time in 2017, more than the population of the UK! This means there are now about 435 million people on the continent using the Web to engage, connect, and access information online. That's a good thing! But with this growth comes an increased need to scale efforts to make the Web more relevant and useful to African users. This will require more skilled hands working with individuals and local businesses to develop content and platforms that will support Africa's digital growth.
In July 2017, Google's CEO, Sundar Pichai, announced a pledge to provide digital skills training to ten million people in Africa, and also to provide mobile developer training to 100,000 Africans. Today, in line with that commitment, we're excited to announce the launch of our new Africa Web and Android Scholarship program aimed at providing 15,000 scholarships to developers resident in African countries.
Working in partnership with Udacity and Andela, we will be offering 15,000 2-month 'single course' scholarships and 500 6-month nanodegree scholarships to aspiring and professional developers across Africa. The training will be available online via the Udacity training website, and the Andela Learning Community will support the students (in Nigeria and Kenya) through mentorship, in-person meet-ups, and online communities.
In order to access the full Nanodegree scholarships, learners will have to complete the lessons and quizzes in the courses offered under the Udacity single course scholarships (also known as challenge courses), in addition to actively participating in and supporting classmates in the student community. We will be offering 10,000 scholarships to beginners (with little or no programming experience) and 5,000 to professional developers (with 1+ years of experience), spread across Android and mobile web development tracks. The 10,000 beginner scholarships will include an Android beginner course and a basic introduction to HTML & CSS, while the 5,000 intermediate scholarships will include courses on Android fundamentals and building offline web applications, respectively. Both courses are taught in English through an online program on Udacity open to African residents. The top 500 students at the end of the challenge will earn a full Nanodegree scholarship to one of four Nanodegree programs in Android or web development.
The application period closes on April 24th. Interested or want to learn more? Visit https://www.udacity.com/google-africa-scholarships?utm_source=devblog
Back in October, we were thrilled to launch a beta version of Firebase Crashlytics. As the top ranked mobile app crash reporter for over 3 years running, Crashlytics helps you track, prioritize, and fix stability issues in realtime. It's been exciting to see all the positive reactions, as thousands of you have upgraded to Crashlytics in Firebase!
Today, we're graduating Firebase Crashlytics out of beta. As the default crash reporter for Firebase going forward, Crashlytics is the next evolution of the crash reporting capabilities of our platform. It empowers you to achieve everything you want to with Firebase Crash Reporting, plus much more.
This release includes several major new features, in addition to our stamp of approval when it comes to service reliability. Here's what's new.
We heard from many of you that you love Firebase Crash Reporting's "breadcrumbs" feature. (Breadcrumbs are the automatically created Analytics events that help you retrace user actions preceding a crash.) Starting today, you can see these breadcrumbs within the Crashlytics section of the Firebase console, helping you to triage issues more easily.
To use breadcrumbs on Crashlytics, install the latest SDK and enable Google Analytics for Firebase. If you already have Analytics enabled, the feature will automatically start working.
By broadly analyzing aggregated crash data for common trends, Crashlytics automatically highlights potential root causes and gives you additional context on the underlying problems. For example, it can reveal how widespread incorrect UIKit rendering was in your app so you would know to address that issue first. Crash insights allows you to make more informed decisions on what actions to take, save time on triaging issues, and maximize the impact of your debugging efforts.
From our community:
"In the few weeks that we've been working with Crashlytics' crash insights, it's been quite helpful on a few particularly pesky issues. The description and quality of the linked resources makes it easy to immediately start debugging." - Marc Bernstein, Software Development Team Lead, Hudl
Generally, you have a few builds you care most about, while others aren't as important at the moment. With this new release of Crashlytics, you can now "pin" your most important builds which will appear at the top of the console. Your pinned builds will also appear on your teammates' consoles so it's easier to collaborate with them. This can be especially helpful when you have a large team with hundreds of builds and millions of users.
To show you stability issues, Crashlytics automatically uploads your dSYM files in the background to symbolicate your crashes. However, some complex situations can arise (e.g., Bitcode-compiled apps) that prevent your dSYMs from being uploaded properly. That's why today we're also releasing a new dSYM uploader tool within your Crashlytics console. Now, you can manually upload your dSYM for cases where it cannot be uploaded automatically.
With today's GA release of Firebase Crashlytics, we've decided to sunset Firebase Crash Reporting, so we can best serve you by focusing our efforts on one crash reporter. Starting today, you'll notice the console has changed to only list Crashlytics in the navigation. If you need to access your existing crash data in Firebase Crash Reporting, you can use the app picker to switch from Crashlytics to Crash Reporting.
Firebase Crash Reporting will continue to be functional until September 8th, 2018, at which point it will be fully retired.
Upgrading to Crashlytics is easy: just visit your project's console, choose Crashlytics in the left navigation and click "Set up Crashlytics":
If you're currently using both Firebase and Fabric, you can now link the two to see your existing crash data within the Firebase console. To get started, click "Link app in Fabric" within the console and go through the flow on fabric.io:
If you are only using Fabric right now, you don't need to take any action. We'll be building out a new flow in the coming months to help you seamlessly link your existing app(s) from Fabric to Firebase. In the meantime, we encourage you to try other Firebase products.
We are excited to bring you the best-in-class crash reporter in the Firebase console. As always, let us know your thoughts, and we look forward to continuing to improve Crashlytics. Happy debugging!
It's often said that open source is free like speech, not free like beer. But every so often, the developers behind an open source project can take advantage of free services to make their project better.
We believe in supporting the good work of open source projects to help the maintainers, who do an often thankless job, to be more productive.
Last year, we collaborated with Google to announce the availability of Artifactory Pro hosted on Google Cloud Platform free of charge for qualifying open source projects. The idea was to make sure that open source maintainers could reliably share their build outputs between team members for development, testing and deployment. This will help ensure that the open source projects which developers around the world rely on are easy to consume.
Since the announcement, over 30 projects have qualified for and joined, including OpenMRS, Psono, and Grails.
If you run an open source project and are interested, we encourage you to apply.
Ever wanted to learn about developing for the Google Assistant and meet other developers that are passionate about conversational UI? Well, we've got some good news!
Today, we are launching a global series of events about Actions on Google, run by Google Developer Groups (GDG) and other community groups. At these events, you'll be able to meet other developers and work through educational content together, uniquely crafted for these events by Google engineers. This includes tutorials on how to build your first Action and advanced sessions on how to use the more complex features of the platform. By the end of the event you attend, you'll be able to build an Action for your community, be it your hometown, your professional network, or an interest group.
And if you don't see an event near you, don't worry - you can always organize your own. We'll help!
It's going to be a great year for Actions developers. Please join us and check out the dedicated event website with all the event details and more information: developers.google.com/events/buildactions!
Spatial audio adds to your sense of presence when you're in VR or AR, making it feel and sound like you're surrounded by a virtual or augmented world. And regardless of the display hardware you're using, spatial audio makes it possible to hear sounds coming from all around you.
Resonance Audio, our spatial audio SDK launched last year, enables developers to create more realistic VR and AR experiences on mobile and desktop. We've seen a number of exciting experiences emerge across a variety of platforms using our SDK. Recent examples include apps like Pixar's Coco VR for Gear VR, Disney's Star Wars™: Jedi Challenges AR app for Android and iOS, and Runaway's Flutter VR for Daydream, all of which used Resonance Audio technology.
To accelerate adoption of immersive audio technology and strengthen the developer community around it, we’re opening Resonance Audio to a community-driven development model. By creating an open source spatial audio project optimized for mobile and desktop computing, any platform or software development tool provider can easily integrate with Resonance Audio. More cross-platform and tooling support means more distribution opportunities for content creators, without the worry of investing in costly porting projects.
As part of our open source project, we're providing a reference implementation of YouTube's Ambisonic-based spatial audio decoder, compatible with the same Ambisonics format (Ambix ACN/SN3D) used by others in the industry. Using our reference implementation, developers can easily render Ambisonic content in their VR media and other applications, while benefiting from Ambisonics' open source, royalty-free model. The project also includes encoding, sound field manipulation, and decoding techniques, as well as head-related transfer functions (HRTFs) that we've used to achieve rich spatial audio that scales across a wide spectrum of device types and platforms. Lastly, we're making our entire library of highly optimized DSP classes and functions open to all. This includes resamplers, convolvers, filters, delay lines, and other DSP capabilities. Additionally, developers can now use Resonance Audio's brand new Spectral Reverb, an efficient, high-quality, constant-complexity reverb effect, in their own projects.
We've open sourced Resonance Audio as a standalone library and associated engine plugins, VST plugin, tutorials, and examples with the Apache 2.0 license. Most importantly, this means Resonance Audio is yours, so you're free to use Resonance Audio in your projects, no matter where you work. And if you see something you'd like to improve, submit a GitHub pull request to be reviewed by the Resonance Audio project committers. While the engine plugins for Unity, Unreal, FMOD, and Wwise will remain open source, going forward they will be maintained by project committers from our partners, Unity, Epic, Firelight Technologies, and Audiokinetic, respectively.
If you're interested in learning more about Resonance Audio, check out the documentation on our developer site. If you want to get more involved, visit our GitHub to access the source code, build the project, download the latest release, or even start contributing. We're looking forward to building the future of immersive audio with all of you.
Ready, set, go! Today we begin accepting applications from university students who want to participate in Google Summer of Code (GSoC) 2018. Are you a university student? Want to use your software development skills for good? Read on.
Now entering its 14th year, GSoC gives students from around the globe an opportunity to learn the ins and outs of open source software development while working from home. Students receive a stipend for successful contribution to allow them to focus on their project for the duration of the program. A passionate community of mentors help students navigate technical challenges and monitor their progress along the way.
Past participants say the real-world experience that GSoC provides sharpened their technical skills, boosted their confidence, expanded their professional network and enhanced their resume.
Interested students can submit proposals on the program site between now and Tuesday, March 27, 2018 at 16:00 UTC.
While many students began preparing in February when we announced the 212 participating open source organizations, it's not too late to start! The first step is to browse the list of organizations and look for project ideas that appeal to you. Next, reach out to the organization to introduce yourself and determine if your skills and interests are a good fit. Since spots are limited, we recommend writing a strong proposal and submitting a draft early so you can get feedback from the organization and increase the odds of being selected.
You can learn more about how to prepare in the video below and in the Student Guide.
You can find more information on our website, including a full timeline of important dates. We also highly recommend perusing the FAQ and Program Rules, as well as joining the discussion mailing list.
Remember to submit your proposals early as you only have until Tuesday, March 27 at 16:00 UTC. Good luck to all who apply!
Though it's been just a few short weeks since we released a new set of features for Actions on Google, we're kicking off our presence at South by Southwest (SXSW) with a few more updates for you.
SXSW brings together creatives interested in fusing marketing and technology, and what better way to start the festival than with new features that enable you to be more creative, and to build new types of Actions that help your users get more things done.
This past year, we've heard from many developers who want to offer great media experiences as part of their Actions. While you can already make your podcasts discoverable to Assistant users, our new media response API allows you to develop deeper, more-engaging audio-focused conversational Actions that include, for example, clips from TV shows, interactive stories, meditation, relaxing sounds, and news briefings.
Your users can control this audio playback on voice-activated speakers like Google Home, Android phones, and more devices coming soon. On Android phones, they can even use the controls on their phone's notification area and lock screen.
Some developers who are already using our new media response API include The Daily Show, Calm, and CNBC.
To get started using our media response API, head over to our documentation to learn more.
And if your content is more visual than audio-based, we're also introducing a browse carousel for your Actions that allows you to show browsable content (e.g., products, recipes, places) in a visual experience that users can simply scroll through, left to right. See an example of how this would look to your users below, then learn more about our browse carousel here.
While having a great user experience is important, we also want to ensure you have the right tools to re-engage your users so they keep coming back to the experience you've built. To that end, a few months ago, we introduced daily updates and push notifications as a developer preview.
Starting today, your users will have access to this feature. Esquire is already using it to send daily "wisdom tips", Forbes sends a quote of the day, and SpeedyBit sends daily updates of cryptocurrency prices to keep them in the know on market fluctuations.
As soon as you submit your Action for review with daily updates or push notifications enabled, and it's approved, your users will be able to opt into this re-engagement channel. Learn more in our docs.
Actions on Google now allows you to access digital purchases (including paid app purchases, in-app purchases, and in-app subscriptions) that your users make from your Android app. By doing so, you can recognize when you're interacting with a user who's paid for a premium experience in your Android app, and similarly serve that experience in your Action, across devices.
And the best part? This is all done behind the scenes, so the user doesn't need to take any additional steps, like signing in, for you to provide this experience. Economist Espresso, for example, now knows when a user has already paid for a subscription with Google Play, and then offers an upgraded experience to the same user through their Action.
In December of last year, we announced the addition of Built-in Device Actions to the Google Assistant SDK for devices. This feature allows developers to extend the Google Assistant embedded in their device using traits and grammars that are maintained by Google and are largely focused on home automation, for example "turn on", "turn off", and "turn the temperature down".
Today we're announcing the addition of Custom Device Actions which are more flexible Device Actions, allowing developers to specify any grammar and command to be executed by their device. Once you build these Custom Device Actions, users will be able to activate specific capabilities through the Google Assistant. This leads to more natural ways in which users interact with their Assistant-enabled devices, including the ability to utilize more specific device capabilities.
Before:
"Ok Google, turn on the oven"
"Ok, turning on the oven"
After:
"Ok Google, set the oven to convection and preheat to 350 degrees"
"Ok, setting the oven to convection and preheating to 350 degrees"
To give you a sense of how this might work in the real world, check out a prototype, Talk to the Light from the talented Red Paper Heart team, that shows a zany use of this functionality. Then, check out our documentation to learn more about how you can start building these for your devices. We've provided a technical case study from Red Paper Heart and their code repository in case you want to see how they built it.
In addition to Custom Device Actions, we've also integrated device registration into the Actions on Google console, allowing developers to get up and running more quickly. To get started, check out the latest documentation and console.
Similarly, we teamed up with a few cutting-edge teams to explore the creative potential of the Actions on Google platform. Following the Voice experiments the Google Creative Lab released a few months ago, these teams released four new experiments:
The code for all of these Actions is open source and is accompanied by in-depth technical case studies from each team that share their learnings from developing Actions.
Ready to build? Take a look at our three new case studies with KLM Royal Dutch Airlines, Domino's, and Ticketmaster. Learn about their development journey with Dialogflow and how the Actions they built help them stay ahead of the conversational technology curve, be where their customers are, and assist throughout the entire user journey:
We hope these updates get your creative juices flowing and inspire you to build even more Actions and embed the Google Assistant on more devices. Don't forget that once you publish your first Action you can join our community program* and receive your exclusive Google Assistant t-shirt and up to $200 of monthly Google Cloud credit. Thanks for being a part of our community, and as always, if you have ideas or requests that you'd like to share with our team, don't hesitate to join the conversation.
*Some countries are not eligible to participate in the developer community program; please review the terms and conditions.
To kick off the new year, we're pleased to announce the first round of Open Source Peer Bonus winners. First started by the Google Open Source team seven years ago, this program encourages Google employees to express their gratitude to open source contributors.
As part of the program, Googlers nominate open source contributors outside of the company for their contributions to open source projects, including those used by Google. Nominees are reviewed by a team of volunteers and the winners receive our heartfelt thanks with a token of our appreciation.
So far more than 600 contributors from dozens of countries have received Open Source Peer Bonuses for volunteering their time and talent to over 400 open source projects. You can find some of the previous winners in these blog posts.
We'd like to recognize the latest round of winners and the projects they worked on. Listed below are the individuals who gave us permission to thank them publicly:
To each and every one of you: thank you for your contributions to the open source community and congratulations!
The Grow with Google Developer Scholarship—a US-focused program offering learning opportunities to tens of thousands of aspiring developers—has given rise to a wealth of powerful stories from amazing individuals who are using their scholarships to pursue their goals and achieve their dreams. Some are creating new beginnings in new places. Others are reinventing their paths and transforming their futures. Still others are advancing their careers and growing their businesses.
Rei Blanco, Paul Koutroulakis, and Mary Weidner exemplify what the scholarship program is all about.
Rei Blanco immigrated to Lansing, Michigan from Cuba seven years ago. He began learning English, and found opportunities to practice his skills in jobs ranging from housekeeping to customer support. Today, as a Grow with Google Developer Scholarship recipient, he is learning a whole new language, JavaScript, as well as HTML and CSS. Rei earned himself a spot as a student in the Front-End Web Developer challenge course, and is now fully immersed, and loving every part of his journey to becoming a developer.
"When I get home, I immediately go to the basement and start coding!"
Rei studies several hours every night. He credits his partner for the non-stop encouragement she gives him. He embraces a daily workout routine that keeps him focused and energized. He also praises the student community for helping him to advance successfully through the program.
"The live help channel in our Slack workspace is great. Once you get stuck, you get immediate help or you can help out others."
As his skills grow, so does his confidence. A year ago, when he first began taking online coding courses, he felt out of place attending a local developer meetup. These days, he's a busy member of a student group working on outside projects, and has plans to attend many more in-person events. Rei is taking his developer career step-by-step—he's bolstering his chances of earning freelance work by steadily adding new projects to his portfolio, and has his sights set on a full-time job in front-end web development.
Paul Koutroulakis was a 20-year restaurant industry success story. For 10 of those years, he even owned his own establishment. But like it was for so many others, 2008 was a terrible year. Sales dropped, and the burden became too great. Paul lost his restaurant, and ultimately, his home.
Despite the hardships, Paul retained the spirit that had made him a success in the first place, and he was determined to persevere. But he also saw the writing on the wall, and knew he needed to make a change.
"This was a wakeup call with my resume. I didn't want to be an old man managing a restaurant."
From his research, he learned that demand for web developers was growing rapidly, and he recognized the opportunity he was looking for. From that moment forward, Paul focused his energy on becoming a developer.
He worked daytime hours at a logistics company, and started taking computer programming classes at night at a local technical college. Paul earned his associate's degree, but he wasn't done. He felt the pressure to go the extra mile, and made the commitment to do so by competing for, and ultimately earning, a Grow with Google Developer Scholarship.
"I need to make myself more marketable. I would like to show that age doesn't matter and that anyone can make a great contribution to a company or field if they are passionate about learning."
Today, Paul is focused on building a project portfolio, and wants to land a job as an entry-level web developer. His long-term goal is to enter the field of cybersecurity. Despite the hard work and long hours, he's excited by the skills he's learning, and by the transformation he's undergone. Best of all, he knows it's all worth it.
"Even if it's a late night of studying, it's better than coming home at one or two in the morning after a long shift at the restaurant."
Mary Weidner's degree was in finance, and after graduating, she went right into the field, spending several years in a series of finance-related roles. Simultaneously, she was nurturing an interest in coding, even going so far as to take a few free online courses. Everything changed for her when a friend asked her to join him as co-founder for a fitness app he was developing. She was intrigued, and agreed to take the leap. As one-half of a two-person team, she found herself immediately supporting all aspects of the fledgling operation, from launching the database, to filming videos.
Mary's hobbyist-level interest in coding transformed into a primary focus, as she realized early on that building her tech skills would significantly enhance her ability to grow the business. But there was more than just operational necessity at work—Mary recognized she was facing an additional set of challenges.
"Not only do I want to learn how to code in order to help my company, I also want to be more respected in the industry. Being a woman and a non-technical co-founder is not the easiest place to be in tech."
As a Grow with Google Developer Scholarship recipient, Mary is now engaged in an intensive learning program, and her skills are accelerating accordingly.
The app, Strongr Fastr, officially launched in January 2018, and has already been downloaded by thousands of users, boasting a user rating of 4.7 stars. It's an impressive start, but neither Mary nor her partner are resting on their laurels. They're motivated to grow and improve, and are focused on "finding traction channels that work, and trying to find that scalable groove."
Despite her head-down determination and focus, Mary's approach to learning is a spirited one, and she's enjoying every minute of her big leap forward.
"I'm loving it. It's really cool to have apps on my phone that I've made, even if they're the most simple thing. It's very empowering and just ... cool!"
Grow with Google is a new initiative to help people get the skills they need to find a job. Udacity is excited to partner with Google on this powerful effort, and to offer the Developer Scholarship program.
Grow with Google Developer scholars come from different backgrounds, live in different cities, and are pursuing different goals in the midst of different circumstances, but they are united by their efforts to advance their lives and careers through hard work, and a commitment to self-empowerment through learning. We're honored to support their efforts, and to share the stories of scholars like Rei, Paul, and Mary.
We recently introduced Google Slides Add-ons so developers can add functionality from their apps to ours. Here are examples of Slides Add-ons that some of our partners have already built—remember, you can also add functionality to other apps outside of Slides, like Docs, Sheets, Gmail and more.
When it comes to Slides, if your users are delivering a presentation or watching one, sometimes it's good to know how far along you are in the deck. Wouldn't it be great if Slides featured progress bars?
In the latest episode of the G Suite Dev Show, G Suite engineer Grant Timmerman and I show you how to do exactly that—implement simple progress bars using a Slides Add-on.
Using Google Apps Script, we craft this add-on which lets users turn on or hide progress bars in their presentations. The progress bars are represented as appropriately-sized rectangles at the bottom of slide pages. Here's a snippet of code for createBars(), which adds the rectangle for each slide.
var BAR_ID = 'PROGRESS_BAR_ID';
var BAR_HEIGHT = 10; // px
var presentation = SlidesApp.getActivePresentation();

function createBars() {
  var slides = presentation.getSlides();
  // Remove any previously inserted progress bars before redrawing them.
  deleteBars();
  for (var i = 0; i < slides.length; ++i) {
    var ratioComplete = (i / (slides.length - 1));
    var x = 0;
    var y = presentation.getPageHeight() - BAR_HEIGHT;
    var barWidth = presentation.getPageWidth() * ratioComplete;
    if (barWidth > 0) {
      var bar = slides[i].insertShape(SlidesApp.ShapeType.RECTANGLE, x, y, barWidth, BAR_HEIGHT);
      bar.getBorder().setTransparent();
      // Tag the shape with BAR_ID so deleteBars() can find and remove it later.
      bar.setLinkUrl(BAR_ID);
    }
  }
}
To learn more about this sample and see all of the code, check out the Google Slides Add-on Quickstart. This is just one example of what you can build using Apps Script and add-ons; here's another example where you can create a slide presentation from a collection of images using a Slides Add-on.
If you want to learn more about Apps Script, check out the video library or view more examples of programmatically accessing Google Slides here. To learn about using Apps Script to create other add-ons, check out this page in the docs.
Today, we're happy to share our Machine Learning Crash Course (MLCC) with the world. MLCC is one of the most popular courses created for Google engineers. Our engineering education team has delivered this course to more than 18,000 Googlers, and now you can take it too! The course develops intuition around fundamental machine learning concepts.
MLCC covers many machine learning fundamentals, starting with loss and gradient descent, then building through classification models and neural nets. The programming exercises introduce TensorFlow. You'll watch brief videos from Google machine learning experts, read short text lessons, and play with educational gadgets devised by instructional designers and engineers.
MLCC is free.
We believe that the potential of machine learning is so vast that every technical person should learn machine learning fundamentals. We're offering the course in English, Spanish, Korean, Mandarin, and French.
MLCC ends with short lessons on designing real-world machine learning systems. MLCC also contains sections enabling you to learn from the mistakes that our experts have made.
Understanding a little algebra and a little elementary statistics (mean and standard deviation) is helpful. If you understand calculus, you'll get a bit more out of the course, but calculus is not a requirement. MLCC contains a helpful section to refresh your memory on the background math.
MLCC contains some Python programming exercises. However, those exercises comprise only a small percentage of the course, and non-programmers may safely skip them.
Many of the Google engineers who took MLCC didn't know any Python but still completed the exercises. That's because you'll write only a few lines of code during the programming exercises. Instead of writing code from scratch, you'll primarily manipulate the values of existing variables. That said, the code will be easier to understand if you can program in Python.
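To give a flavor of that exercise style, here is a small, hypothetical sketch (not taken from the course itself): the gradient descent loop is already written, and the "exercise" is simply to adjust the learning_rate and steps values at the top and re-run.

# Hypothetical exercise cell: tweak these two values and re-run.
learning_rate = 0.1
steps = 50

# Fit y = w * x to the data below with plain gradient descent on squared loss.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0
for _ in range(steps):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad

print("learned w:", w)  # approaches 2.0 when learning_rate is small enough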
MLCC relies on a variety of media and hands-on interactive tools to build intuition in fundamental machine learning concepts. You need a technical mind, but you don't need programming skills.
As your knowledge about Machine Learning grows, you can test your skill by helping others. We're also kicking off a Kaggle competition to help DonorsChoose.org. DonorsChoose.org is an organization that empowers public school teachers from across the country to request materials and experiences they need to help their students grow. Teachers submit hundreds of thousands of project proposals each year; 500,000 proposals are expected in 2018.
Currently, DonorsChoose.org relies on a large number of volunteers to screen the proposals. The Kaggle competition hopes to help DonorsChoose.org use ML to accelerate the screening process, which will enable volunteers to make better use of their time. In addition, this work should help increase the consistency of decisions about projects.
MLCC is merely one of many ways to learn about machine learning. To explore the universe of machine learning educational opportunities from Google, see our new Learn with Google AI program at g.co/learnwithgoogleai. To start on MLCC, see g.co/machinelearningcrashcourse.