Posted by Aimin Zhu, University Relations Manager, Google China
Following the announcement of the 2018 China-U.S. Young Maker Competition, we are very excited that there are already over 1000 participants with over a month left before the final submission deadline! Project submissions are open to all makers, developers, and students age 18-40 in the United States. Check out the projects others are developing on the project submissions page.
Participants may choose to develop their projects using any platform. Makers and students in the US are encouraged to consider the many Google technologies and platforms available to build innovative solutions:
The project submission deadline is June 22, so there is still plenty of time to join the competition! If you have additional questions about the competition or the project submission process, please visit the contest FAQ.
The top 10 projects selected by the judges will win an all-expenses-paid trip to Beijing, China, to join the finals with Chinese makers on August 13-17. We look forward to meeting you at the final event!
For more details, please see the US divisional contest landing page hosted by Hackster.io.
Posted by Karin Levi, Product Marketing, ARCore
A few weeks ago at Google I/O we released a major update to ARCore, Google's AR development platform. We added new APIs like Cloud Anchors, which enable multi-user, collaborative AR experiences, and Augmented Images, which lets apps bring 2D images to life as 3D objects. All of these updates are going to change the way we use AR today and enable developers to create richer, more immersive AR apps.
With these new capabilities, we decided to put our platform to the test. So we built real experiences to showcase how these all come to life. All demos were presented at the I/O AR & VR sandbox area. We open sourced them to make sure you can see how simple it is to build these experiences. We're pretty happy with how they turned out and would love to share some of the learnings and insights from behind the scenes.
Light Board is an AR multiplayer tabletop game where two players on floating game boards launch colored projectiles at each other.
While building Light Board it was important for us to keep in mind who the end users are. We wanted it to be a simple/fun game for developers to try out while visiting the I/O sandbox. The developers would only have a couple minutes to play while passing through, so it needed to allow players (even non-gamers) to pick it up and play with very little setup.
The artwork for Light Board was a major focus. Our mission for the look of the game was to align with the design and decor of I/O 2018. This way, our app would feel like an extension of everything the attendees saw around them. As a result, our design philosophy had three goals: bright accent colors, simple graphic shapes, and natural physical materials.
Left: Design for AR/VR Sandbox at I/O 2018. Right: Key art for Light Board game boards
The artwork was created in Maya and Cinema 4D. We created physically based materials for our models using Substance Painter. Just as continuous iteration is crucial for engineering, it is also important when creating art assets. With that in mind, we kept careful track of our content pipeline, even for this relatively simple project. This allowed us to quickly try out different looks and board styles before settling on our final design.
On the engineering front we selected the Unity game engine as our dev environment. Unity gives us a couple of important advantages. First, it is easy to get great looking 3D graphics up and running right away. Second, the engine component is already complete, so we could immediately start iterating on gameplay code. As with the artwork, this allowed us to test gameplay options before we made a final decision. Additionally, Unity gave us support for both Android and iOS with only a little extra work.
To handle the multiplayer aspect we used Firebase Realtime Database. We were concerned with network performance at the event, and felt that the persistent nature of a database would make it more tolerant of poor networks. As it turned out, it worked very well and we got the ability to quit and rejoin games for free!
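The demo itself is a Unity app, but the core idea is easy to see with the Firebase JavaScript SDK. Here's a minimal sketch, assuming a hypothetical games/{gameId} database layout and made-up field names, of how persisting the whole game state in the database makes quitting and rejoining free:

// Minimal sketch (not the actual Light Board code, which is written in C# for Unity):
// persist the whole game state in the Realtime Database so either player can
// quit and rejoin at any time. Assumes the Firebase JS SDK is already loaded.
firebase.initializeApp({ databaseURL: 'https://<your-project>.firebaseio.com' });
const db = firebase.database();

function renderGameState(state) {
  // Placeholder: a real client would update the board visuals here.
  console.log('Game state:', state);
}

function joinGame(gameId, playerId) {
  const gameRef = db.ref('games/' + gameId);

  // Mark presence; onDisconnect() clears it automatically if the connection drops.
  gameRef.child('players/' + playerId + '/online').set(true);
  gameRef.child('players/' + playerId + '/online').onDisconnect().set(false);

  // Because the full game state lives in the database, rejoining is just
  // re-attaching this listener -- no extra "resume" protocol is needed.
  gameRef.on('value', function(snapshot) { renderGameState(snapshot.val()); });
}

function launchProjectile(gameId, playerId, projectile) {
  // Each shot is appended under the game; the other client's listener picks it up.
  db.ref('games/' + gameId + '/shots').push({
    playerId: playerId,
    angle: projectile.angle,      // hypothetical projectile fields
    power: projectile.power,
    firedAt: firebase.database.ServerValue.TIMESTAMP
  });
}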
We had a lot of fun building Light Board and we hope people can use it as an example of how easy it can be to not only build AR apps, but to use really cool features like Cloud Anchors. Please check out our open source repo and give Light Board a try!
In March, we released Just a Line, an Android app that lets you draw in the air with your phone. It's a simple experiment meant to showcase the power of ARCore. At Google I/O, we added Cloud Anchors to the app so that two people can draw at once in the same space, even if one of them is using Android and the other iOS.
Both apps were built natively: The Android version was written in Android Studio, and the iOS version was built in xCode. ARCore's Cloud Anchors enable Just a Line to pair two phones, allowing users to draw simultaneously in a shared space. Pairing works across Android and iOS devices, and drawings are synchronized live through a Firebase Realtime Database. You can find the open-source code for iOS here and for Android here.
"Illusive Images" demo is an augmented gallery consisting of 3 artworks, each exploring a different augmented image use case and user experience. As one walks from side to side, around the object, or gazes in a specific direction, 2D artworks are married with 3D, inviting the viewer to enter into the space of the artwork spanning well beyond the physical frame.
Due to the visual design nature of our augmented images, we experimented a lot with creating image databases with varying amounts of feature detail. To get the best results, we iterated quickly by resizing the canvas for the artwork and by shifting and stretching the brightness and contrast levels. These variations helped us achieve the best possible image without compromising design intent.
The app was built in Unity with ARCore, with the majority of assets created in Cinema 4D. Mograph animations were imported into Unity as FBX files and driven entirely by the position of the user in relation to the artwork. An example project can be found here.
To make your development experience easier, we open sourced all the demos our team built. We hope you find this useful! You can also visit our website to learn more and start building AR experiences today.
Data Studio is Google's free next gen business intelligence and data visualization platform. Community Connectors for Data Studio let you build connectors to any internet-accessible data source using Google Apps Script. You can build Community Connectors for commercial, enterprise, and personal use. Learn how to build Community Connectors using the Data Studio Community Connector Codelab.
The Community Connector Codelab explains how Community Connectors work and provides a step-by-step tutorial for creating your first Community Connector. You can get started if you have a basic understanding of JavaScript and web APIs. You should be able to build your first connector in 30 minutes using the Codelab.
If you have previously imported data into Google Sheets using Apps Script, you can use this Codelab to get familiar with the Community Connectors and quickly port your code to fetch your data directly into Data Studio.
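For a sense of the shape of a connector, here's a minimal Apps Script sketch of the four functions the Codelab walks you through. The endpoint URL, config parameter, and field names below are illustrative placeholders, not a real service:

// Minimal Community Connector sketch -- the data source and its fields are made up.
function getAuthType() {
  return { type: 'NONE' };
}

function getConfig(request) {
  return {
    configParams: [
      { type: 'TEXTINPUT', name: 'packageName', displayName: 'Package name' }
    ]
  };
}

function getSchema(request) {
  return {
    schema: [
      { name: 'day', label: 'Date', dataType: 'STRING', semantics: { conceptType: 'DIMENSION' } },
      { name: 'downloads', label: 'Downloads', dataType: 'NUMBER', semantics: { conceptType: 'METRIC' } }
    ]
  };
}

function getData(request) {
  // Fetch from any internet-accessible source and shape rows to match the schema.
  // A production connector would only return the fields listed in request.fields;
  // this sketch returns everything.
  var url = 'https://api.example.com/stats?package=' + request.configParams.packageName;
  var parsed = JSON.parse(UrlFetchApp.fetch(url).getContentText());
  var rows = parsed.map(function(item) {
    return { values: [item.day, item.downloads] };
  });
  return { schema: getSchema(request).schema, rows: rows };
}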
Community Connectors can help you quickly deliver an end-to-end visualization solution that is user-friendly and delivers high user value with low development effort. Community Connectors can help you build a reporting solution for personal, public, enterprise, or commercial data, and also create explanatory visualizations.
By building a Community Connector, you can go from scratch to a push-button, customized dashboard solution for your service in a matter of hours.
The following dashboard uses Community Connectors to fetch data from Stack Overflow, GitHub, and Twitter. Try using the date filter to view changes across all sources:
This dashboard uses the following Community Connectors:
You can build your own connector to any preferred service and publish it in the Community Connector gallery. The Community Connector gallery now has over 90 Partner Connectors connecting to more than 450 data sources.
Once you have completed the Codelab, view the Community Connector documentation and sample code on the Data Studio open source repository to build your own connector.
Posted by Przemek Pardel, Developer Relations Program Manager, Regional Lead
This summer, the Google Developers team is touring 10 countries and 14 cities in Europe in a colorful community bus. We'll be visiting university campuses and technology parks to meet you locally and talk about our programs for developers and start-ups.
Join us to find out how Google supports developer communities. Learn about Google Developer Groups, Women Techmakers program and the various ways we engage with the broader developer community in Europe and around the world.
Our bus will stop at the following locations between 12:00 and 4:00 PM:
Want to meet us on the way? Sign up for the event in your city here.
Are you interested in starting a new developer community or are you an organizer who would like to join the global Google Community Program? Let us know and receive an invitation-only pass to our private events.
Posted by Mertcan Mermerkaya, Software Engineer
We have great news for web developers that use Firebase Cloud Messaging to send notifications to clients! The FCM v1 REST API has integrated fully with the Web Notifications API. This integration allows you to set icons, images, actions and more for your Web notifications from your server! Better yet, as the Web Notifications API continues to grow and change, these options will be immediately available to you. You won't have to wait for an update to FCM to support them!
Below is a sample payload you can send to your web clients on Push API supported browsers. This notification would be useful for a web app that supports image posting. It can encourage users to engage with the app.
{ "message": { "webpush": { "notification": { "title": "Fish Photos 🐟", "body": "Thanks for signing up for Fish Photos! You now will receive fun daily photos of fish!", "icon": "firebase-logo.png", "image": "guppies.jpg", "data": { "notificationType": "fishPhoto", "photoId": "123456" }, "click_action": "https://example.com/fish_photos", "actions": [ { "title": "Like", "action": "like", "icon": "icons/heart.png" }, { "title": "Unsubscribe", "action": "unsubscribe", "icon": "icons/cross.png" } ] } }, "token": "<APP_INSTANCE_REGISTRATION_TOKEN>" } }
Notice that you are able to set new parameters, such as actions, which give the user different ways to interact with the notification. In the example above, users have the option to like the photo or to unsubscribe.
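On the server side, one way to send this payload is through the Firebase Admin SDK for Node.js, which wraps the FCM v1 Send API. A sketch, assuming application default credentials and the placeholder registration token from above:

// Sketch: send the web notification above via the FCM v1 API using firebase-admin.
const admin = require('firebase-admin');

admin.initializeApp({
  credential: admin.credential.applicationDefault()
});

const message = {
  webpush: {
    notification: {
      // ...the same fields as the payload above (icon, image, data, click_action, etc.)
      title: 'Fish Photos 🐟',
      body: 'Thanks for signing up for Fish Photos! You now will receive fun daily photos of fish!',
      actions: [
        { title: 'Like', action: 'like', icon: 'icons/heart.png' },
        { title: 'Unsubscribe', action: 'unsubscribe', icon: 'icons/cross.png' }
      ]
    }
  },
  token: '<APP_INSTANCE_REGISTRATION_TOKEN>'
};

admin.messaging().send(message)
  .then(function(messageId) { console.log('Notification sent:', messageId); })
  .catch(function(error) { console.error('Error sending notification:', error); });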
To handle action clicks in your app, you need to add an event listener in the default firebase-messaging-sw.js file (or your custom service worker). If an action button was clicked, event.action will contain the string that identifies the clicked action. Here's how to handle the "like" and "unsubscribe" events on the client:
// Retrieve an instance of Firebase Messaging so that it can handle background messages.
const messaging = firebase.messaging();

// Add an event listener to handle notification clicks
self.addEventListener('notificationclick', function(event) {
  if (event.action === 'like') {
    // Like button was clicked
    const photoId = event.notification.data.photoId;
    like(photoId);
  } else if (event.action === 'unsubscribe') {
    // Unsubscribe button was clicked
    const notificationType = event.notification.data.notificationType;
    unsubscribe(notificationType);
  }
  event.notification.close();
});
The SDK will still handle regular notification clicks and redirect the user to your click_action link if provided. To see more on how to handle click actions on the client, check out the guide.
Since different browsers support different parameters on different platforms, it's important to check out the browser compatibility documentation to ensure your notifications work as intended. Want to learn more about what the Send API can do? Check out the FCM Send API documentation and the Web Notifications API documentation. If you're using the FCM Send API and you incorporate the Web Notifications API in a cool way, then let us know! Find Firebase on Twitter at @Firebase, and Facebook and Google+ by searching "Firebase".
Over one billion people in the world have some form of disability.
That's why we make accessibility a core consideration when we develop new products—from concept to launch and beyond. It's good for users and good for business: Building products that don't consider a diverse range of needs could mean missing a substantial group of potential users and customers.
But impairments and disabilities are as varied as people themselves. For designers, developers, marketers or small business owners, making your products and designs more accessible might seem like a daunting task. How can you make sure you're being more inclusive? Where do you start?
Today, Global Accessibility Awareness Day, we're launching a new suite of resources to help creators, marketers, and designers answer those questions and build more inclusive products and designs.
The first step is learning about accessibility. Simply start by downloading the Google Primer app and search "accessibility." You'll find five-minute lessons that help you better understand accessibility, and learn practical tips to start making your own business, products and designs more accessible, like key design principles for building a more accessible website. You may even discover that addressing accessibility issues can improve the user experience for everyone. For instance, closed captions can make your videos accessible to more people whether they have a hearing impairment or are sitting in a crowded room.
Next, visit the Google Accessibility page and discover free tools that can help you make your site or app more accessible for more people. The Android Developers site also contains a wide range of suggestions to help you improve the accessibility of your app.
We hope these resources will help you join us in designing and building for a more inclusive future. After all, an accessible web and world is a better one—both for people and for business.
"Excited to see the new lessons on accessibility that Primer launched today. They help us learn how to start making websites and products more accessible. With over 1 billion people in the world with some form of disability, building a more inclusive web is the right thing to do both for people and for business." - Ari Balogh, VP Engineering
Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite
We recently introduced Hangouts Chat to general availability. This next-generation messaging platform gives G Suite users a new place to communicate and to collaborate in teams. It features archive & search, tighter G Suite integration, and the ability to create separate, threaded chat rooms. The key new feature for developers is a bot framework and API. Whether it's to automate common tasks, query for information, or perform other heavy-lifting, bots can really transform the way we work.
In addition to plain text replies, Hangouts Chat can also display bot responses with richer user interfaces (UIs) called cards which can render header information, structured data, images, links, buttons, etc. Furthermore, users can interact with these components, potentially updating the displayed information. In this latest episode of the G Suite Dev Show, developers learn how to create a bot that features an updating interactive card.
As you can see in the video, the most important thing when bots receive a message is to determine the event type and take the appropriate action. For example, a bot will perform any desired "paperwork" when it is added to or removed from a room or direct message (DM), generically referred to as a "space" in the vernacular.
Receiving an ordinary message sent by users is the most likely scenario; most bots do "their thing" here in serving the request. The last event type occurs when a user clicks on an interactive card. Similar to receiving a standard message, a bot performs its requisite work, including possibly updating the card itself. Below is some pseudocode summarizing these four event types and what a bot would likely do in response to each:
function processEvent(req, rsp) {
  var event = req.body;  // event type received
  var message;           // JSON response message
  if (event.type == 'REMOVED_FROM_SPACE') {
    // no response as bot removed from room
    return;
  } else if (event.type == 'ADDED_TO_SPACE') {
    // bot added to room; send welcome message
    message = {text: 'Thanks for adding me!'};
  } else if (event.type == 'MESSAGE') {
    // message received during normal operation
    message = responseForMsg(event.message.text);
  } else if (event.type == 'CARD_CLICKED') {
    // user-click on card UI
    var action = event.action;
    message = responseForClick(
        action.actionMethodName, action.parameters);
  }
  rsp.send(message);
}
The bot pseudocode as well as the bot featured in the video respond synchronously. Bots performing more time-consuming operations or those issuing out-of-band notifications, can send messages to spaces in an asynchronous way. This includes messages such as job-completed notifications, alerts if a server goes down, and pings to the Sales team when a new lead is added to the CRM (Customer Relationship Management) system.
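As a rough sketch of that asynchronous path in Node.js, assuming the googleapis client library, a service-account key file, and a placeholder space ID, a bot could post to a space like this:

// Sketch: post an asynchronous (bot-initiated) message to a Hangouts Chat space.
const { google } = require('googleapis');

async function notifySpace(spaceId, text) {
  // The bot authorizes as itself using a service account and the Chat bot scope.
  const auth = new google.auth.GoogleAuth({
    keyFile: 'service-account.json',               // placeholder path
    scopes: ['https://www.googleapis.com/auth/chat.bot']
  });
  const authClient = await auth.getClient();
  const chat = google.chat({ version: 'v1', auth: authClient });

  // No user request triggered this message; the bot simply pushes it to the space.
  await chat.spaces.messages.create({
    parent: 'spaces/' + spaceId,
    requestBody: { text: text }
  });
}

notifySpace('AAAA_example', 'Nightly build finished: all tests passed.')
  .catch(console.error);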
Hangouts Chat bots aren't limited to JavaScript or Python, nor to Google Apps Script or Google App Engine. While using JavaScript running on Apps Script is one of the quickest and simplest ways to get a bot online within your organization, it can easily be ported to Node.js for a wider variety of hosting options. Similarly, App Engine allows for more scalability and supports additional languages (Java, PHP, Go, and more) beyond Python. The bot can also be ported to Flask for more hosting options. One key takeaway is the flexibility of the platform: developers can use any language, any stack, or any cloud to create and host their bot implementations. Bots only need to be able to accept HTTP POST requests coming from the Hangouts Chat service to function.
At Google I/O 2018 last week, the Hangouts Chat team leads and I delivered a longer, higher-level overview of the bot framework. This comprehensive tour of the framework includes numerous live demos of sample bots built in a variety of languages and running on a variety of platforms. Check out our ~40-minute session below.
To help you get started, check out the bot framework launch post. Also take a look at this post for a deeper dive into the Python App Engine version of the vote bot featured in the video. To learn more about developing bots for Hangouts Chat, review the concepts guides as well as the "how to" for creating bots. You can build bots for your organization, your customers, or for the world. We look forward to all the exciting bots you're going to build!
On May 1 we announced .app, the newest top-level domain (TLD) from Google Registry. It's now open for general registration so you can register your desired .app name right now. Check out what some of our early adopters are already doing on .app around the globe.
We begin our journey with sitata.app, which provides real-time travel information about events like protests or transit strikes. Looks all clear, so our first stop is the Caribbean, where we use thelocal.app and start exploring. After getting some sun, we fly to the Netherlands, where we're feeling hungry. Luckily, picnic.app delivers groceries, right to our hotel. With our bellies full, it's time to head to India, where we use myra.app to order the medicine, hygiene, and baby products that we forgot to pack. Did we mention this was a business trip? Good thing lola.app helped make such a complex trip stress free. Time to head home now, so we slip on a hoodie we bought on ov.app and enjoy the ride.
We hope these apps inspire you to also find your home on .app! Visit get.app to choose a registrar partner to register your domain.
Posted by Brahim Elbouchikhi, Product Manager
In today's fast-moving world, people have come to expect mobile apps to be intelligent - adapting to users' activity or delighting them with surprising smarts. As a result, we think machine learning will become an essential tool in mobile development. That's why on Tuesday at Google I/O, we introduced ML Kit in beta: a new SDK that brings Google's machine learning expertise to mobile developers in a powerful, yet easy-to-use package on Firebase. We couldn't be more excited!
Getting started with machine learning can be difficult for many developers. Typically, new ML developers spend countless hours learning the intricacies of implementing low-level models, using frameworks, and more. Even for the seasoned expert, adapting and optimizing models to run on mobile devices can be a huge undertaking. Beyond the machine learning complexities, sourcing training data can be an expensive and time consuming process, especially when considering a global audience.
With ML Kit, you can use machine learning to build compelling features, on Android and iOS, regardless of your machine learning expertise. More details below!
If you're a beginner who just wants to get the ball rolling, ML Kit gives you five ready-to-use ("base") APIs that address common mobile use cases:
With these base APIs, you simply pass in data to ML Kit and get back an intuitive response. For example: Lose It!, one of our early users, used ML Kit to build several features in the latest version of their calorie tracker app. Using our text recognition API and a custom-built model, their app can quickly capture nutrition information from product labels to input a food's content from an image.
ML Kit gives you both on-device and Cloud APIs, all in a common and simple interface, allowing you to choose the ones that fit your requirements best. The on-device APIs process data quickly and will work even when there's no network connection, while the cloud-based APIs leverage the power of Google Cloud Platform's machine learning technology to give a higher level of accuracy.
See these APIs in action on your Firebase console:
Heads up: We're planning to release two more APIs in the coming months. First is a smart reply API allowing you to support contextual messaging replies in your app, and the second is a high density face contour addition to the face detection API. Sign up here to give them a try!
If you're seasoned in machine learning and you don't find a base API that covers your use case, ML Kit lets you deploy your own TensorFlow Lite models. You simply upload them via the Firebase console, and we'll take care of hosting and serving them to your app's users. This way you can keep your models out of your APK/bundles which reduces your app install size. Also, because ML Kit serves your model dynamically, you can always update your model without having to re-publish your apps.
But there is more. As apps have grown to do more, their size has increased, hurting app store install rates and potentially costing users more in data overages. Machine learning can further exacerbate this trend since models can reach tens of megabytes in size. So we decided to invest in model compression. Specifically, we are experimenting with a feature that allows you to upload a full TensorFlow model, along with training data, and receive in return a compressed TensorFlow Lite model. The technology behind this is evolving rapidly and so we are looking for a few developers to try it and give us feedback. If you are interested, please sign up here.
Since ML Kit is available through Firebase, it's easy for you to take advantage of the broader Firebase platform. For example, Remote Config and A/B testing lets you experiment with multiple custom models. You can dynamically switch values in your app, making it a great fit to swap the custom models you want your users to use on the fly. You can even create population segments and experiment with several models in parallel.
Other examples include:
We can't wait to see what you'll build with ML Kit. We hope you'll love the product like many of our early customers:
Get started with the ML Kit beta by visiting your Firebase console today. If you have any thoughts or feedback, feel free to let us know - we're always listening!
The Google Assistant is becoming even more conversational and visual – helping people get things done, save time and be more present. And developers like you have been a big part of this story, making the Assistant more useful across more than 500 million devices. Starbucks, Disney, Zyrtec, Singapore Airlines and many others are engaging with users through the Actions they've built. In total, the Google Assistant is ready to help with over 1 million Actions, built by Google and all of you.
Ever since we launched Actions on Google, our mission has been to give you the tools you need to create engaging Actions, making them a part of people's everyday lives. Just over the past six months we've made significant upgrades to our platform to bring us closer to that vision. We made improvements to help your Actions get discovered, opened Actions on Google to more languages, took a few steps toward making your Actions more creative and visually appealing, launched a new conversation design site, and last week announced a new program to invest in startups that push the Assistant ecosystem forward.
Today, I want to share how we're making it even easier for app and web developers to get started with the Google Assistant.
We've seen a lot of great Android developers build Actions that complement their mobile apps. You can already create a personal, connected experience across your Android app and the Actions you build for the Assistant. Now we're making it possible to extend your Android app experiences to the Assistant in even more ways.
Think of your Actions for the Google Assistant as a companion experience to your app that users can access at home or on the go, across phones, smart speakers, TVs, cars, watches, headphones, and, soon, Smart Displays. If you want to personalize some of the experiences from your Android app, account linking lets your users have a consistent experience whether they're in your app or interacting with your Action.
We added support for seamless digital subscriptions so your users can enjoy the content and digital goods they bought in the Google Play Store right in your Assistant Action. For example, since I'm a premium subscriber in the Economist's app, I can now enjoy their premium content on any Assistant-enabled device.
And while you can already help users complete transactions for physical goods, soon you will be able to offer digital goods and subscriptions directly from your Actions.
The Assistant blends conversation with rich visual interactions for phones, Smart Displays and TVs. We've made it so your Actions already work on these visual surfaces with no extra work. Starting today, you can take this a step further and better customize the appearance of your Actions for visual surfaces by, among other things, controlling the background image, defining the typeface, and setting color themes used in your Action. Just head to the Actions console, make your changes and test them in the simulator today. These changes will be available on phones, TVs and Smart Displays, when they launch.
Here's an example screenshot from a demo Action:
And below, you can see how Volley was able to create a full screen immersive experience for their game "King for a Day." The ability to create customizable edge-to-edge visuals will launch for developers in the next few months.
In the Android keynote today, we announced a new feature called App Actions. App Actions are a new way to raise the visibility of your Android app to users as they start their tasks. We look forward to creating another channel to reach more users that can engage with your App Actions in the Google Assistant.
App Actions will be available for all developers to try soon; please sign up here if you'd like to be notified.
After you've built an Action for the Assistant, you want to get lots of people engaged with your experience. You can already prompt your users to sign up for Action Notifications on their phones, and soon, we'll be expanding support so users can get notifications on smart speakers and Smart Displays. Today we're also announcing three updates aimed at helping more users discover your Actions and keeping them engaged on a daily basis.
Map your Actions to users' queries with built-in intents
Over the past 20 years, Google has helped connect people with the information, services and content they're looking for by organizing, ranking, and showing the most relevant experience for users. With built-in intents, we're bringing this expertise to use in the Google Assistant. When someone says "Hey Google, let's play a maps quiz" they expect the Assistant to suggest relevant games that might pertain to geography. For that to happen, we need to understand the user's fundamental intent. This can be pretty difficult; just think of the thousands of ways a user could ask for a game.
To handle this complexity, we're beginning to map all the ways that people can ask for things into a taxonomy of built-in intents. Today, we're making the first set of these intents available to you so you can give the Assistant a deeper understanding of what your Action can do. As a result, the Assistant will be able to better understand and recommend Actions to meet a user's intent. We'll be rolling out hundreds of built-in intents in the coming months.
Today you can implement built-in intents in your Action and test them in the simulator. You'll be able to use these in production soon.
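For illustration, registering a built-in intent in an Actions SDK action package might look roughly like the sketch below. The conversation name and fulfillment URL are placeholders, and actions.intent.PLAY_GAME is used here as an example of a built-in intent that would fit the maps quiz scenario above:

{
  "actions": [
    {
      "name": "PLAY_GAME",
      "description": "Starts the maps quiz",
      "fulfillment": { "conversationName": "mapsQuiz" },
      "intent": { "name": "actions.intent.PLAY_GAME" }
    }
  ],
  "conversations": {
    "mapsQuiz": {
      "name": "mapsQuiz",
      "url": "https://example.com/fulfillment"
    }
  },
  "locale": "en"
}

With a declaration like this, the Assistant can match the many ways users ask to play a game to your Action without you enumerating every phrasing yourself.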
Promote your Actions from anywhere a link works
We're now making it easier to drive traffic to your Actions with Action Links. These are hyperlinks you can use anywhere—your website, emails, blog, even social media channels like Facebook and Twitter—that deep link directly into your Action. Now, when a developer like Headspace has something new to share, they can spread the word and drive engagement directly into their Action from across the web. Users can click on the link and jump into their Action's experience on phones and Smart Displays, and if they click the Action Link while on desktop, they can choose which Assistant-enabled device they'd like to use – from smart speakers to TVs. Go see an example on Headspace's website, or give their Action Link a try here.
If you've already built an Action and want to spread the word, starting today you can visit the Actions console to find your Action Links and get going.
Become a part of your users' daily routines
To consistently re-engage with users, you need to become a part of their daily habits. Google Assistant users can already use routines to execute multiple Actions with a single command, perfect for those times when users wake up in the morning, head out of the house, get ready for bed or many of the other tasks we perform throughout the day. Now, with Routine Suggestions, after someone engages with your Action, you can prompt them to add your Action to their routines with just a couple of taps.
So when I leave the house for work each morning, I can have my Assistant order my Americano from Starbucks and play that premium content from the Economist.
You can enable your Action for Routine Suggestions in the console today, and it will be working in production soon.
And more...
Before you run off and start sharing Actions links to all of your followers on social media, check out some of the other announcements we're making here at I/O:
Extend your experiences to the Google Assistant
We're delighted to see that many of you are starting to test the waters in this emerging era of conversational computing. If you're already building mobile or web apps but haven't tried building conversational Actions for the Google Assistant just yet, now is the perfect time to get started. Start thinking of the companion experiences that could be a fit for the Google Assistant. We have easy-to-follow guides and a community program with rewards and Google Cloud credits to get you up and running in no time. We can't wait to try out your Actions soon!
Posted by Jan-Felix Schmakeit, Google Photos Developer Lead
People create and consume photos and videos in many different ways, and we think it should be easier to do more with the photos you've taken, across all the apps and devices you use.
That's why we're introducing a new Google Photos partner program that gives you the tools and APIs to build photo and video experiences in your products that are smarter, faster and more helpful.
With the Google Photos Library API, your users can seamlessly access their photos whenever they need them.
Whether you're a mobile, web, or backend developer, you can use this REST API to utilize the best of Google Photos and help people connect, upload, and share from inside your app.
Your user is always in the driver's seat. Here are a few things you can help them to do:
With the Library API, you don't have to worry about maintaining your own storage and infrastructure, as photos and videos remain safely backed up in Google Photos.
Putting machine intelligence to work in your app is simple too. You can use smart filters, like content categories, to narrow down or exclude certain types of photos and videos and make it easier for your users to find the ones they're looking for.
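For example, a web client could call the REST API's mediaItems:search endpoint with a content-category filter. Here's a sketch; the access token is a placeholder, and LANDSCAPES is just one of the available categories:

// Sketch: find landscape photos in the user's library via the Library API (REST).
async function searchLandscapes(accessToken) {
  const response = await fetch('https://photoslibrary.googleapis.com/v1/mediaItems:search', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer ' + accessToken,   // OAuth 2.0 token with a photoslibrary scope
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      pageSize: 25,
      filters: {
        contentFilter: { includedContentCategories: ['LANDSCAPES'] }
      }
    })
  });
  const { mediaItems = [] } = await response.json();
  // baseUrl accepts size suffixes, so thumbnails can be generated server-side.
  return mediaItems.map(item => item.baseUrl + '=w512-h512');
}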
We've also aimed to take the hassle out of building a smooth user experience. Features like thumbnailing and cross-platform deep-links mean you can offload common tasks and focus on what makes your product unique.
Today, we're launching a developer preview of the Google Photos Library API. You can start building and testing it in your own projects right now.
Get started by visiting our developer documentation where you can also express your interest in joining the Google Photos partner program. Some of our early partners, including HP, Legacy Republic, NixPlay, Xero and TimeHop are already building better experiences using the API.
If you are following Google I/O, you can also join us for our session to learn more.
We're excited for the road ahead and look forward to working with you to develop new apps that work with Google Photos.
Posted by the Flutter Team at Google
This week at Google I/O, we're announcing the third beta release of Flutter, our mobile app SDK for creating high-quality, native user experiences on iOS and Android. We're also showcasing new tooling partners, highlighting usage of Flutter by several high-profile customers, and announcing official support from the Material team.
We believe mobile development needs an upgrade. All too often, developers are forced to compromise between quality and productivity: either building the same application twice on both iOS and Android, or settling for a cross-platform solution that makes it hard to deliver the native experience that customers demand. This is why we built Flutter: to offer a new path for mobile development, focused foremost on native performance, advanced visuals, and dramatically improving developer velocity and productivity.
Just twelve months ago at Google I/O 2017, we announced Flutter and delivered an early alpha of the toolkit. Over the last year, we've invested tens of thousands of engineering hours preparing Flutter for production use. We've rewritten major parts of the engine for performance, added support for developing on Windows, published tooling for Android Studio and Visual Studio Code, integrated Dart 2 and added support for more Firebase APIs, added support for inline video, ads and charts, internationalization and accessibility, addressed thousands of bugs and published hundreds of pages of documentation. It's been a busy year and we're thrilled to share the latest beta release with you!
Flutter offers:
As evidence of the power that Flutter can offer applications, 2Dimensions are this week releasing a preview of a new tool for creating powerful interactive animations with Flutter. Here's an example of the output of their software:
What you are seeing here is Flutter rendering 2D skeletal mesh animations on the phone in real-time. Achieving this level of graphical horsepower is thanks to Flutter's use of the hardware-accelerated Skia engine that draws every pixel to the screen, paired with the blazingly fast ahead-of-time compiled Dart language. But it gets better: note how the demo slider widget is translucently overlaid on the animation. Flutter seamlessly combines user interface widgets with 60fps animated graphics generated in real time, with the same code running on iOS and Android.
Here's what Luigi Rosso, co-founder of 2Dimensions, says about Flutter:
"I love the friction-free iteration with Flutter. Hot Reload sets me in a feedback loop that keeps me focused and in tune with my work. One of my biggest productivity inhibitors are tools that run slower than the developer. Flutter finally resets that bar."
One common challenge for mobile application creators is the transition from early design sketches to an interactive prototype that can be piloted or tested with customers. This week at Google I/O, Infragistics, one of the largest providers of developer tooling and components, are announcing their commitment to Flutter and demonstrating how they've set out to close the designer/developer gap even further with supportive tooling. Indigo Design to Code Studio enables designers to add interactivity to a Sketch design, and generate a pixel-perfect Flutter application.
We launched Flutter Beta 1 just ten weeks ago at Mobile World Congress, and it is exciting to see the momentum since then, both on GitHub and in the number of published Flutter applications. Even though we're still building out Flutter, we're pleasantly surprised to see strong early adoption of the SDK, with some high-profile customer examples already published. One of the most popular is the companion app to the award-winning Hamilton Broadway musical, built by Posse Digital, with millions of monthly users, and an average rating of 4.6 on the Play Store.
This week, Alibaba is announcing their adoption of Flutter for Xianyu, one of their flagship applications with over twenty million monthly active users. Alibaba praises Flutter for its consistency across platforms, the ease of generating UI code from designer redlines, and the ease with which their native developers have learned Flutter. They are currently rolling out this updated version to their customers.
Another company now using Flutter is Groupon, who is prototyping and building new code for their merchant application. Here's what they say about using it:
"I love the fact that Flutter integrates with our existing app and our team has to write code just once to provide a native experience for both our apps. This significantly reduces our time to market and helps us deliver more features to our customers." Varun Menghani, Head of Merchant Product Management, Groupon
In the short time since the Beta 1 launch, we've seen hundreds of Flutter apps published to the app stores, across a wide variety of application categories. Here are a few examples of the diversity of apps being created with Flutter:
Closer to home, Google continues to use Flutter extensively. One new example announced at I/O comes from Google Ads, who are previewing their new Flutter-based AdWords app that allows businesses to track and optimize their online advertising campaigns. Sridhar Ramaswamy, SVP for Ads and Commerce, says:
"Flutter provides a modern reactive framework that enabled us to unify the codebase and teams for our Android and iOS applications. It's allowed the team to be much more productive, while still delivering a native application experience to both platforms. Stateful hot reload has been a game changer for productivity."
Flutter Beta 3, shipping today at I/O, continues us on the glidepath towards our eventual 1.0 release with new features that complete core scenarios. Dart 2, our reboot of the Dart language with a focus on client development, is now fully enabled with a terser syntax for building Flutter UIs. Beta 3 is world-ready with localization support including right-to-left languages, and also provides significantly improved support for building highly-accessible applications. New tooling provides a powerful widget inspector that makes it easier to see the visual tree for your UI and preview how widgets will look during development. We have emerging support for integrating ads through Firebase. And Visual Studio Code is now fully supported as a first-class development tool, with a dedicated Flutter extension.
The Material Design team has worked with us extensively since the start. We're happy to announce that as of today, Flutter is a first-class toolkit for Material, which means the Material and Flutter teams will partner to deliver even more support for Material Design. Of course, you can continue to use Flutter to build apps with a wide range of design aesthetics to express your brand.
More information about the new features in Flutter Beta 3 can be found at the Flutter blog on Medium. If you already have Flutter installed, just one command -- flutter upgrade -- gets you on the latest build. Otherwise, you can follow our getting started guide to install Flutter on macOS, Windows or Linux.
Flutter has long been used in production at Google and by the public, even though we haven't yet released "1.0." We're approaching our 1.0 quality bar, and in the coming months you'll see us focus on some specific areas:
As with every software project, the trade-offs are between time, quality, and features. We are targeting a 1.0 release within the next year, but we will continue to adjust the schedule as necessary. As we're an open source project, our open issues are public and work scheduled for upcoming milestones can be viewed on our GitHub repo at any time. We welcome your help along this journey to make mobile development amazing.
Whether you're at Google I/O in person or watching remotely, we have plenty of technical content to help you get up and running. In particular, we have numerous sessions on Flutter and Material Design, as well as a new series of Flutter codelabs and a Udacity course that is now open for registration.
Since last year, we've been on a great journey together with a community of early adopters. We get an electric feeling when we see the range of apps, experiments, plug-ins, and supporting tools that developers are starting to produce using Flutter, and we're only just getting started. Now is a great time to join us. Connect with us through the website at https://flutter.io, via Twitter at @flutterio, and in our Google group and Gitter chat room. We're excited to see what you build!
Posted by Manfred Ernst, Software Engineer
Great VR experiences make you feel like you're really somewhere else. To create deeply immersive experiences, there are a lot of factors that need to come together: amazing graphics, spatialized audio, and the ability to move around and feel like the world is responding to you.
Last year at I/O, we announced Seurat as a powerful tool to help developers and creators bring high-fidelity graphics to standalone VR headsets with full positional tracking, like the Lenovo Mirage Solo with Daydream. Seurat is a scene simplification technology designed to process very complex 3D scenes into a representation that renders efficiently on mobile hardware. Here's how ILMxLAB was able to use Seurat to bring an incredibly detailed 'Rogue One: A Star Wars Story' scene to a standalone VR experience.
Today, we're open sourcing Seurat to the developer community. You can now use Seurat to bring visually stunning scenes to your own VR applications and have the flexibility to customize the tool for your own workflows.
Behind the scenes - How Seurat works
Seurat works by taking advantage of the fact that VR scenes are typically viewed from within a limited viewing region, and leverages this to optimize the geometry and textures in your scene. It takes RGBD images (color and depth) as input and generates a textured mesh, targeting a configurable number of triangles, texture size, and fill rate, to simplify scenes beyond what traditional methods can achieve.
To demonstrate what Seurat can do, here's a snippet from Blade Runner: Revelations, which launched today with the Lenovo Mirage Solo.
Blade Runner: Revelations by Alcon Interactive and Seismic Games
The Blade Runner universe is known for its stunning worlds, and in Revelations, you get to unravel a mystery around fugitive Replicants in the futuristic but gritty streets. To create the look and feel for Revelations, Seismic used Seurat to bring a scene of 46.6 million triangles down to only 307,000, improving performance by more than 100x with almost no loss in visual quality:
Original scene:
Seurat-processed scene:
If you're interested in learning more about Seurat or trying it out yourself, visit the Seurat GitHub page to access the documentation and source code. We're looking forward to seeing what you build!
Posted by Kerry Murrill, Google Developers Marketing
I/O is just a couple of days away! As we get closer, we hope you've had the chance to explore the schedule to make the most of the three festival days. In addition to customizing your schedule on google.com/io/schedule, you can now browse through our 150+ Sessions, and dozens of Office Hours, App Reviews, and Codelabs via the Google I/O 2018 mobile app or Action for the Assistant.
Apps: Android, iOS, Web (add to your mobile homescreen), Action for the Assistant
Here is a breakdown of all the things you can do with the mobile app this year:
Screenshots: Schedule on iOS, Session details on Android, Map on Android, and the Action on the Assistant.
Browse, filter, and find Sessions, Office Hours, Codelabs, App Reviews and the recently added Meetups across 18 product areas.
Be sure to reserve seats for your favorite Sessions either in the app or at google.com/io/schedule. You can reserve as many Sessions as you'd like per day, but only one reservation per time slot is allowed. Reservations will be open until 1 hour before the start time for each Session. If a Session is full, you can join the waitlist and we'll automatically change your reservation status if any spots open up (you can now check your waitlist position on the I/O website). A portion of seats will still be available first-come, first-served for those who aren't able to reserve a seat in advance.
Most Sessions will be livestreamed and recordings will be available soon after. Want to celebrate I/O with your community? Find an I/O Extended viewing party near you.
In addition to attending Sessions, and participating in Office Hours and App Reviews, you'll have the opportunity to talk directly with Google engineers throughout the Sandbox space, which will feature multiple product demos and activations, and during Codelabs where you can complete self-paced tutorials.
Remember to save some energy for the evening! On Day 1, attendees are invited to the After Hours Block Party from 7-10PM. It will include dinner, drinks, and lots of fun, interactive experiences throughout the Sandbox space: a magic show, a diner, throwback treats, an Android themed Bouncy World, MoDA 2.0, the I/O Totem stage and lots of music throughout! On Day 2, don't miss out on the After Hours Concert from 8-10PM, with food and drinks available throughout. The concert will be livestreamed so you can join from afar, too. Stay tuned to find out who's performing this year!
To make things easy for you, your starred and reserved events will always be synced from your account across mobile, desktop, and the Assistant, so you can switch back and forth as needed. You can also filter for your starred and reserved events to see just the ones you want.
Guide yourself throughout Shoreline with the interactive map. Find your way to your next Session or see what's inside the Sandbox domes.
Find more information about onsite WiFi, content formats, plus travel tips to get to Shoreline, including the shuttle schedule.
Keeping up with the tradition, the mobile app and Action for the Assistant will be open sourced after I/O. Until then, we hope the mobile app and Action will help you navigate the schedule and grounds for a great experience.
T-4 days… See you soon!