We're delighted to announce that TensorFlow 1.5 is now public! Install it now to get a bunch of new features that we hope you'll enjoy!
First off, Eager Execution for TensorFlow is now available as a preview. We've heard lots of feedback about TensorFlow's programming style, and that developers really want an imperative, define-by-run style. With Eager Execution for TensorFlow enabled, you can execute TensorFlow operations immediately, as they are called from Python. This makes it easier to get started with TensorFlow, and can make research and development more intuitive.
For example, think of a simple computation like a matrix multiplication. Today, in TensorFlow it would look something like this:
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[1, 1])
m = tf.matmul(x, x)

with tf.Session() as sess:
    print(sess.run(m, feed_dict={x: [[2.]]}))
If you enable Eager Execution for TensorFlow, it will look more like this:
# with Eager Execution for TensorFlow enabled:
x = [[2.]]
m = tf.matmul(x, x)
print(m)
You can learn more about Eager Execution for TensorFlow here (check out the user guide linked at the bottom of the page, and also this presentation) and the API docs here.
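Eager Execution also makes it easy to compute gradients on the fly. Here is a minimal sketch, assuming the tf.contrib.eager module bundled with this preview (the function name square is just an illustration):

import tensorflow as tf
import tensorflow.contrib.eager as tfe  # the eager preview lives in contrib in this release

tfe.enable_eager_execution()

def square(x):
    return tf.multiply(x, x)

# gradients_function returns a new function that computes d(square)/dx.
grad = tfe.gradients_function(square)

print(square(3.0))  # => 9.0
print(grad(3.0))    # => [6.0]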
The developer preview of TensorFlow Lite is built into version 1.5. TensorFlow Lite, TensorFlow's lightweight solution for mobile and embedded devices, lets you take a trained TensorFlow model and convert it into a .tflite file, which can then be executed on a mobile device with low latency. Training doesn't have to happen on the device, and the device doesn't need to upload data to the cloud for processing. So, for example, if you want to classify an image, a trained model can be deployed to the device and the classification runs on-device directly.
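As a rough sketch of that conversion step (assuming the TOCO converter exposed under tf.contrib.lite in this release; the tiny graph below is just a stand-in for a trained model), turning a graph into a .tflite file looks something like this:

import tensorflow as tf

# A trivial graph standing in for a trained model: scale an image-shaped input.
img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 3))
out = tf.identity(img * 2.0, name="out")

with tf.Session() as sess:
    # toco_convert takes a GraphDef plus the input and output tensors and
    # returns a serialized TensorFlow Lite flatbuffer.
    tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [img], [out])

with open("model.tflite", "wb") as f:
    f.write(tflite_model)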
TensorFlow Lite includes a sample app to get you started. This app uses a MobileNet model trained on 1,001 image categories: it classifies an image against those categories and lists the top three matches. The app is available for both Android and iOS.
You can learn more about TensorFlow Lite, and how to convert your models to run on mobile, here.
If you are using GPU Acceleration on Windows or Linux, TensorFlow 1.5 now has CUDA 9 and cuDNN 7 support built-in.
To learn more about NVIDIA's Compute Unified Device Architecture (CUDA) 9, check out NVIDIA's site here. CUDA is complemented by the CUDA Deep Neural Network library (cuDNN), whose latest release is version 7; support for both is now included in TensorFlow 1.5.
Here are some Medium articles on GPU support for Windows and Linux, and how to set it up on your workstation (if you have the requisite hardware).
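Once the GPU build and the matching drivers are installed, a quick way to confirm that TensorFlow can see your GPU is to list the local devices. A minimal check, assuming a working CUDA 9/cuDNN 7 setup:

import tensorflow as tf
from tensorflow.python.client import device_lib

# A GPU build with a working CUDA/cuDNN setup should list a /device:GPU:0 entry here.
print(device_lib.list_local_devices())

# Alternatively, run a small op with device placement logging turned on.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    print(sess.run(tf.matmul(a, a)))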
In line with this release we've also overhauled the documentation site, including an improved Getting Started flow that will get you from no knowledge to building a neural network to classify different types of iris in a very short time. Check it out!
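In the spirit of that guide, getting from raw features to a small classifier with a premade Estimator takes only a few lines. A sketch, with the feature column names assumed for illustration:

import tensorflow as tf

# The four numeric features of the iris dataset (column names assumed for illustration).
feature_columns = [
    tf.feature_column.numeric_column(name)
    for name in ["SepalLength", "SepalWidth", "PetalLength", "PetalWidth"]
]

# A premade Estimator: a small fully connected network with one output per iris species.
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[10, 10],
    n_classes=3,
)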
Beyond these features, there are lots of other enhancements to Accelerated Linear Algebra (XLA), updates to RunConfig, and much more. Check the release notes here.
To get TensorFlow 1.5, you can use the standard pip installation (or pip3 if you use Python 3):
$ pip install --ignore-installed --upgrade tensorflow
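If you want the GPU-accelerated build described above, install the separate GPU package instead (this assumes the CUDA 9 and cuDNN 7 libraries are already set up on your machine):

$ pip install --ignore-installed --upgrade tensorflow-gpu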
We're pleased to announce the availability of the Google Play Games Services C++ SDK version 3.0. The highlights of this release are:
More details can be found in the release notes on the downloads page.
The SDK can be downloaded from: https://developers.google.com/games/services/downloads/sdks
Samples using this SDK can be downloaded from GitHub: https://github.com/playgameservices/cpp-android-basic-samples
Thanks and happy coding!
PageSpeed Insights provides information about how well a page adheres to a set of best practices. In the past, these recommendations were presented without the context of how fast the page performed in the real world, which made it hard to understand when it was appropriate to apply these optimizations. Today, we're announcing that PageSpeed Insights will use data from the Chrome User Experience Report to make better recommendations for developers, and that the optimization score has been tuned to better align with real-world data.
The PSI report now has several different elements:
For more details on these changes, see About PageSpeed Insights. As always, if you have any questions or feedback, please visit our forums and please remember to include the URL that is being evaluated.
With the Google Assistant and Actions on Google, we're excited for 2018 and look forward to continuing the developer momentum you've helped us build. To start the year off right, we're at the Consumer Electronics Show in Las Vegas showcasing the Assistant at home, on the go and in the car—and all the ways it can help in each of those places. You can learn more here. For developers like you, we're building upon those same areas to extend the ways you can reach users in those places, too.
Today we're introducing a new web directory and an updated directory experience with the Assistant on phones. These directories give users even more visibility into everything your app can help them do. They also make it even easier for users to share links to your apps. And together with your help, we're adding Actions all the time including those that are coming soon from SpotHero and Starbucks.
Even better, when you publish your first app, you'll become eligible for our developer community program, which supports you with up to $200 in monthly Google Cloud credit and an Assistant t-shirt, with perks and opportunities that grow the more you do, including earning a Google Home.
With the Assistant, your apps are available across many devices, and this year we're making them available in even more places, with new integrations at home, on the go and in the car.
For the home, we announced that smart displays with the Assistant built in are coming later this year. Smart displays come with the added benefit of a touch screen, so they can provide a visual experience for users.
Beyond smart displays, the Assistant is also coming to new speakers and TVs, as well as to new headphones that are optimized for the Assistant.
Finally, starting later this week, we're bringing the Assistant to Android Auto, allowing users to project Android Auto, and with it the Assistant, onto the screen in their compatible car.
The best part is that compatible apps will be available to users on all these devices without additional work. With that said, to ensure the best user experience, here are a few tips:
In addition to the enhanced home experience with built-in devices, we're also updating our home control experience, making it easier than ever to build for the smart home. The Google Assistant already works with more than 1,500 smart devices from 200+ brands, and that is still just the start of the number of devices we anticipate will be built for the smart home.
We first launched smart home Actions at I/O this year, starting with support for things like lights, plugs and thermostats. Now, we're excited to announce direct support for a number of new device types, including cameras, dishwashers, dryers, vacuums and washers. This means users can control all kinds of appliances in their home just by asking the Google Assistant. To support these new integrations, we're also expanding the supported device traits to include CameraStream, Dock, Modes, RunCycle, Scene, StartStop and Toggles. With all these new devices, we've also made it even easier to build smart home Actions, with a streamlined development flow and insightful analytics to help you improve your smart home Action. Ready to begin? Start here!
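As a rough illustration of how one of these new devices is described to the Assistant, here is a sketch of the SYNC response a smart home fulfillment might return for a vacuum that supports the StartStop and Dock traits. The request ID, agentUserId, device ID and device name below are placeholder assumptions; the field layout follows the Actions on Google smart home documentation:

# Hypothetical SYNC response; the request ID, agentUserId, device ID and name are placeholders.
sync_response = {
    "requestId": "ff36a3cc-ec34-11e6-b1a0-64510650abcf",  # echoed back from the Assistant's request
    "payload": {
        "agentUserId": "user-123",
        "devices": [
            {
                "id": "vacuum-1",
                "type": "action.devices.types.VACUUM",
                "traits": [
                    "action.devices.traits.StartStop",
                    "action.devices.traits.Dock",
                ],
                "name": {"name": "Living room vacuum"},
                "willReportState": False,
            }
        ],
    },
}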
And that's our news for now. Thanks for everything you do to make the Assistant more helpful, fun and interactive! It's been an exciting year to see the platform expand to new languages and devices and to see what you've all created. We can't wait to see what you build and the new ways users are able to get things done as a result. Here's to a great year!
Google Data Studio lets users build live, interactive dashboards with beautiful data visualizations, for free. Users can fetch their data from a variety of sources and create unlimited reports in Data Studio, with full editing and sharing capabilities.
Community Connectors, a new feature for Data Studio, let you use Apps Script to build connectors to any internet-accessible data source. You can share Community Connectors with other people so they can access their own data from within Data Studio.
For example, if you are providing a web-based service to your customers, you can create a Community Connector with a template dashboard to fetch data from your API. In just 3 to 4 clicks, your customers can log into your web app, authenticate with Data Studio, and see their individualized data displayed in a beautiful interactive dashboard.
Here's an example Data Studio dashboard that uses a Community Connector to show live data using the Stack Overflow API:
Try out this Stack Overflow Community Connector yourself or view the code.
Leverage Data Studio as a reporting platform for your customers. Deliver significant value to your customers with a ready-made reporting platform. With a minimal development investment, you can rely on Data Studio as a free and powerful dashboarding and analysis solution for your customers.
Reach a larger audience and monetize your connector. Publish and promote your Community Connector in the Data Studio Community Connector gallery, which is visible to all Data Studio users; published connectors are also directly accessible from the public Community Connector gallery. There are also multiple approaches if you want to monetize your connector.
Develop customized enterprise solutions for your business. Fetch your business data from a variety of sources (e.g. BigQuery, Cloud SQL, a web API) and create a customized solution specifically for your business. By providing templates with your connectors, you can significantly cut down dashboard-building time.
Benefit from Apps Script features and use your existing code. Since Community Connectors are developed using Google Apps Script, you can benefit from features such as caching, storage, translation, authentication etc. If you already have a Google Sheets connector, it is easy to reuse that same code for a Community Connector.
Did we mention it's free? Data Studio is completely free to use. And there is no cost for developing or publishing Community Connectors.
The Get Started Guide can help you build your first Community Connector. Since Apps Script is based on JavaScript, you can easily build a connector even if you have not worked with Apps Script before.
You can also jump ahead and view specific steps of the typical development life cycle of a Community Connector:
You can keep your connector private or share it with other users. You also have the option to publish your connector; publishing will feature it both in Data Studio and in the public Community Connector gallery. This enables you to reach all Data Studio users and showcase your service. Furthermore, we encourage you to submit your connector to our open source repo so that the community can benefit from it.
If you have any interesting connector stories, ideas, or if you'd like to share some amazing reports you've created using Community Connectors, give us a shout or send us your story at community-connector-feedback@google.com.