The Google Assistant SDK lets developers like you embed the Google Assistant into any device with a microphone and speaker. Since we first introduced the SDK, you've created innovative projects and delightful applications with Voice Kits. Your fun side projects and practical applications have captured our imagination, and we'll continue working with companies—big and small—to develop and launch new products to extend the availability of the Google Assistant.
To help you take your products to the next level, today we're happy to introduce several new features to the Google Assistant SDK.
Supporting users globally is important for the Google Assistant, and as of the latest release you can now programmatically configure the API, or configure your device within the Assistant app, to use any of the following languages/locales: English (Australia, Canada, UK, US), French (Canada, France), German, and Japanese.
Many aspects of the Google Assistant can be customized by end-users in the Settings screen within the Assistant on their phone. SDK-based devices are not only discoverable within this experience, but they also support the same level of customization, including changing the device's language, location, nickname, and enabling personalized results -- for example, "Ok Google, what's on my calendar?"
In terms of location, SDK-based devices can now be configured as a street address in the Google Assistant on your phone, or as a latitude and longitude via the API. With this ability, SDK-based devices can return more location-specific answers to queries such as "Ok Google, where's the nearest coffee shop?" or "Ok Google, what's today's weather?"
Voice-in and voice-out was a natural first step for the Google Assistant SDK, but we have heard from many developers that other input and output mechanisms are needed. Today we're happy to announce that the Google Assistant SDK now supports text-based queries and responses. Both of these updates build upon the already-supported voice query and voice response API.
When we first launched the Google Assistant SDK one of the most prominent questions we received was "how can I ask the Assistant to control my device?" With the latest SDK, you can utilize the new Device Action functionality to build Actions directly into your Assistant-enabled SDK devices.
When you register a device you can now specify what traits the device itself supports – on/off or temperature setting, for example. When users then ask the device, "Ok Google, set the temperature to 78 degrees," the Google Assistant will turn such queries into structured intents via cloud-based automated speech recognition (ASR) and natural language understanding (NLU). All you need to provide is the client-side code for actually fulfilling the Device Action itself – no other code is needed. The SDK supports a set of device traits that are supported by Smart Home.
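To make this concrete, here is a minimal, hypothetical Python sketch of client-side fulfillment. The JSON envelope (inputs/payload/commands/execution) mirrors the Smart Home EXECUTE format and should be treated as an assumption, and set_power and set_temperature are placeholders for your own device code; consult the SDK reference for the exact field your client library exposes.

import json

def handle_device_action(device_request_json):
    # Parse the structured intent the Assistant returns for a Device Action.
    # Envelope structure is an assumption based on the Smart Home EXECUTE format.
    request = json.loads(device_request_json)
    for command in request["inputs"][0]["payload"]["commands"]:
        for execution in command["execution"]:
            if execution["command"] == "action.devices.commands.OnOff":
                set_power(execution["params"]["on"])  # placeholder: your device code
            elif execution["command"] == "action.devices.commands.ThermostatTemperatureSetpoint":
                # Placeholder: drive your own thermostat/heating hardware here.
                set_temperature(execution["params"]["thermostatTemperatureSetpoint"])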
To help get you up and running with Device Actions, we are launching a new management API to help you register and manage your SDK devices. With this API you are able to easily register, unregister, and see all devices that you have registered. We're also introducing a device model which represents a set of devices with the same type and traits.
Get started with all this new functionality by checking out the documentation and samples.
If you're interested in building a commercial product with the Google Assistant, we encourage you to reach out and contact us.
As always, there are great conversations happening within StackOverflow, as well as the Assistant SDK and hackster.io communities. We encourage everyone to take part!
Welcome to Part 3 of a blog series that introduces TensorFlow Datasets and Estimators. Part 1 focused on pre-made Estimators, while Part 2 discussed feature columns. Here in Part 3, you'll learn how to create your own custom Estimators. In particular, we're going to demonstrate how to create a custom Estimator that mimics DNNClassifier's behavior when solving the Iris problem.
If you are feeling impatient, feel free to compare and contrast the following full programs:
As Figure 1 shows, pre-made Estimators are subclasses of the tf.estimator.Estimator base class, while custom Estimators are instances of tf.estimator.Estimator:
Pre-made Estimators are fully-baked. Sometimes though, you need more control over an Estimator's behavior. That's where custom Estimators come in.
You can create a custom Estimator to do just about anything. If you want hidden layers connected in some unusual fashion, write a custom Estimator. If you want to calculate a unique metric for your model, write a custom Estimator. Basically, if you want an Estimator optimized for your specific problem, write a custom Estimator.
A model function (model_fn) implements your model. The only difference between working with pre-made Estimators and custom Estimators is that with pre-made Estimators the model function has already been written for you, while with custom Estimators you must write the model function yourself.
Your model function could implement a wide range of algorithms, defining all sorts of hidden layers and metrics. Like input functions, all model functions must accept a standard group of input parameters and return a standard group of output values. Just as input functions can leverage the Dataset API, model functions can leverage the Layers API and the Metrics API.
Before demonstrating how to implement Iris as a custom Estimator, we wanted to remind you how we implemented Iris as a pre-made Estimator in Part 1 of this series. In that Part, we created a fully connected, deep neural network for the Iris dataset simply by instantiating a pre-made Estimator as follows:
# Instantiate a deep neural network classifier.
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,  # The input features to our model.
    hidden_units=[10, 10],            # Two layers, each with 10 neurons.
    n_classes=3,                      # The number of output classes (three Iris species).
    model_dir=PATH)                   # Pathname of directory where checkpoints, etc. are stored.
The preceding code creates a deep neural network with the following characteristics:
- a feature_columns argument describing the four numerical input features
- two hidden layers, each containing ten neurons
- three output classes, one per Iris species
- a model_dir (PATH) where checkpoints and other artifacts are stored
Figure 2 illustrates the input layer, hidden layers, and output layer of the Iris model. For clarity, we've drawn only four of the nodes in each hidden layer.
Let's see how to solve the same Iris problem with a custom Estimator.
One of the biggest advantages of the Estimator framework is that you can experiment with different algorithms without changing your data pipeline. We will therefore reuse much of the input function from Part 1:
def my_input_fn(file_path, repeat_count=1, shuffle_count=1):
    def decode_csv(line):
        parsed_line = tf.decode_csv(line, [[0.], [0.], [0.], [0.], [0]])
        label = parsed_line[-1]   # Last element is the label
        del parsed_line[-1]       # Delete last element
        features = parsed_line    # Everything but the last element are the features
        d = dict(zip(feature_names, features)), label
        return d
    dataset = (tf.data.TextLineDataset(file_path)       # Read text file
               .skip(1)                                  # Skip header row
               .map(decode_csv, num_parallel_calls=4)    # Decode each line
               .cache()                  # Warning: caches entire dataset, can cause out of memory
               .shuffle(shuffle_count)   # Randomize elems (1 == no operation)
               .repeat(repeat_count)     # Repeats dataset this # times
               .batch(32)
               .prefetch(1))             # Make sure you always have 1 batch ready to serve
    iterator = dataset.make_one_shot_iterator()
    batch_features, batch_labels = iterator.get_next()
    return batch_features, batch_labels
Notice that the input function returns the following two values:
- batch_features: a dictionary mapping feature names to the corresponding feature values for a batch
- batch_labels: the label values for a batch
Refer to Part 1 for full details on input functions.
As detailed in Part 2 of our series, you must define your model's feature columns to specify the representation of each feature. Whether working with pre-made Estimators or custom Estimators, you define feature columns in the same fashion. For example, the following code creates feature columns representing the four features (all numerical) in the Iris dataset:
feature_columns = [
    tf.feature_column.numeric_column(feature_names[0]),
    tf.feature_column.numeric_column(feature_names[1]),
    tf.feature_column.numeric_column(feature_names[2]),
    tf.feature_column.numeric_column(feature_names[3])
]
We are now ready to write the model_fn for our custom Estimator. Let's start with the function declaration:
def my_model_fn(
    features,  # This is batch_features from input_fn
    labels,    # This is batch_labels from input_fn
    mode):     # Instance of tf.estimator.ModeKeys, see below
The first two arguments are the features and labels returned from the input function; that is, features and labels are the handles to the data your model will use. The mode argument indicates whether the caller is requesting training, predicting, or evaluating.
To implement a typical model function, you must do the following:
- Define the model (its input layer, hidden layers, and output layer).
- Write branching code that handles each of the three mode values: predict, evaluate, and train.
If your custom Estimator generates a deep neural network, you must define the following three layers:
- an input layer
- one or more hidden layers
- an output layer
Use the Layers API (tf.layers) to define hidden and output layers.
If your custom Estimator generates a linear model, then you only have to generate a single layer, which we'll describe in the next section.
Call tf.feature_column.input_layer to define the input layer for a deep neural network. For example:
# Create the layer of input
input_layer = tf.feature_column.input_layer(features, feature_columns)
The preceding line creates our input layer, reading our features through the input function and filtering them through the feature_columns defined earlier. See Part 2 for details on various ways to represent data through feature columns.
To create the input layer for a linear model, call tf.feature_column.linear_model instead of tf.feature_column.input_layer. Since a linear model has no hidden layers, the returned value from tf.feature_column.linear_model serves as both the input layer and output layer. In other words, the returned value from this function is the prediction.
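For comparison, here is a minimal sketch of that linear case, assuming the same features and feature_columns objects used elsewhere in this post; units=3 gives one score per Iris class.

# There are no hidden layers to define; the returned value already
# serves as the prediction (logits).
logits = tf.feature_column.linear_model(
    features=features,
    feature_columns=feature_columns,
    units=3)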
If you are creating a deep neural network, you must define one or more hidden layers. The Layers API provides a rich set of functions to define all types of hidden layers, including convolutional, pooling, and dropout layers. For Iris, we're simply going to call tf.layers.Dense twice to create two dense hidden layers, each with 10 neurons. By "dense," we mean that every neuron in a layer is connected to every neuron in the preceding layer. Here's the relevant code:
# Definition of hidden layer: h1
# (Dense returns a Callable so we can provide input_layer as argument to it)
h1 = tf.layers.Dense(10, activation=tf.nn.relu)(input_layer)

# Definition of hidden layer: h2
# (Dense returns a Callable so we can provide h1 as argument to it)
h2 = tf.layers.Dense(10, activation=tf.nn.relu)(h1)
The inputs parameter to tf.layers.Dense identifies the preceding layer. The layer preceding h1 is the input layer.
Figure 3. The input layer feeds into hidden layer 1.
The preceding layer to h2 is h1. So, the string of layers now looks like this:
Figure 4. Hidden layer 1 feeds into hidden layer 2.
The first argument to tf.layers.Dense defines the number of its output neurons—10 in this case.
The activation parameter defines the activation function—ReLU in this case.
Note that tf.layers.Dense provides many additional capabilities, including the ability to set a multitude of regularization parameters. For the sake of simplicity, though, we're going to simply accept the default values of the other parameters. Also, when looking at tf.layers you may encounter lower-case versions (e.g. tf.layers.dense). As a general rule, you should use the class versions which start with a capital letter (tf.layers.Dense).
We'll define the output layer by calling tf.layers.Dense yet again:
# Output 'logits' layer: three numbers, one score per Iris class
# (Dense returns a Callable so we can provide h2 as argument to it)
logits = tf.layers.Dense(3)(h2)
Notice that the output layer receives its input from h2. Therefore, the full set of layers is now connected as follows:
Figure 5. Hidden layer 2 feeds into the output layer.
When defining an output layer, the units parameter specifies the number of possible output values. So, by setting units to 3, the tf.layers.Dense function establishes a three-element logits vector. Each cell of the logits vector contains a score for the Iris being Setosa, Versicolor, or Virginica, respectively; applying softmax to these scores would yield the class probabilities.
Since the output layer is a final layer, the call to tf.layers.Dense omits the optional activation parameter.
The final step in creating a model function is to write branching code that implements prediction, evaluation, and training.
The model function gets invoked whenever someone calls the Estimator's train, evaluate, or predict methods. Recall that the signature for the model function looks like this:
Focus on that third argument, mode. As the following table shows, when someone calls train, evaluate, or predict, the Estimator framework invokes your model function with the mode parameter set as follows:
- train() invokes the model function with mode set to ModeKeys.TRAIN
- evaluate() invokes the model function with mode set to ModeKeys.EVAL
- predict() invokes the model function with mode set to ModeKeys.PREDICT
For example, suppose you instantiate a custom Estimator to generate an object named classifier. Then, you might make the following call (never mind the parameters to my_input_fn at this time):
classifier.train(
    input_fn=lambda: my_input_fn(FILE_TRAIN, repeat_count=500, shuffle_count=256))
The Estimator framework then calls your model function with mode set to ModeKeys.TRAIN.
Your model function must provide code to handle all three of the mode values. For each mode value, your code must return an instance of tf.estimator.EstimatorSpec, which contains the information the caller requires. Let's examine each mode.
When model_fn is called with mode == ModeKeys.PREDICT, the model function must return a tf.estimator.EstimatorSpec containing the following information:
- the mode, which is tf.estimator.ModeKeys.PREDICT
- the prediction
The model must have been trained prior to making a prediction. The trained model is stored on disk in the directory established when you instantiated the Estimator.
For our case, the code to generate the prediction looks as follows:
# class_ids will be the model prediction for the class (Iris flower type)
# The output node with the highest value is our prediction
predictions = {
    'class_ids': tf.argmax(input=logits, axis=1)
}

# Return our prediction
if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode, predictions=predictions)
The block is surprisingly brief--the lines of code are simply the bucket at the end of a long hose that catches the falling predictions. After all, the Estimator has already done all the heavy lifting to make a prediction:
The output layer is a logits vector whose three values score how likely the input flower is to be each of the three Iris species. The tf.argmax method simply selects the species with the highest score in that logits vector.
Notice that the index of the highest-scoring class is assigned to a dictionary key named class_ids. We return that dictionary through the predictions parameter of tf.estimator.EstimatorSpec. The caller can then retrieve the prediction by examining the dictionary returned from the Estimator's predict method.
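For example, a caller might read back the predictions like this (a sketch; FILE_TEST and my_input_fn are assumed to be defined as in Part 1 of this series):

predict_results = classifier.predict(
    input_fn=lambda: my_input_fn(FILE_TEST, repeat_count=1, shuffle_count=1))
for prediction in predict_results:
    # 'class_ids' holds the index (0, 1, or 2) of the predicted Iris species.
    print(prediction['class_ids'])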
When model_fn is called with mode == ModeKeys.EVAL, the model function must evaluate the model, returning loss and possibly one or more metrics.
We can calculate loss by calling tf.losses.sparse_softmax_cross_entropy. Here's the complete code:
# To calculate the loss, we need to convert our labels
# Our input labels have shape: [batch_size, 1]
labels = tf.squeeze(labels, 1)  # Convert to shape [batch_size]
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
Now let's turn our attention to metrics. Although returning metrics is optional, most custom Estimators return at least one metric. TensorFlow provides a Metrics API (tf.metrics) to calculate different kinds of metrics. For brevity's sake, we'll only return accuracy. The tf.metrics.accuracy function compares our predictions against the "true labels," that is, against the labels provided by the input function. It requires the labels and predictions to have the same shape, which is why we squeezed the labels earlier. Here's the call to tf.metrics.accuracy:
# Calculate the accuracy between the true labels and our predictions
accuracy = tf.metrics.accuracy(labels, predictions['class_ids'])
When the model is called with mode == ModeKeys.EVAL, the model function returns a tf.estimator.EstimatorSpec containing the following information:
- the mode, which is tf.estimator.ModeKeys.EVAL
- the loss
- optionally, one or more metrics
So, we'll create a dictionary containing our sole metric (my_accuracy). If we had calculated other metrics, we would have added them as additional key/value pairs to that same dictionary. Then, we'll pass that dictionary in the eval_metric_ops argument of tf.estimator.EstimatorSpec. Here's the block:
# Return our loss (which is used to evaluate our model)
# Set the TensorBoard scalar my_accuracy to the accuracy
# Note: this only sets the value during mode == ModeKeys.EVAL
# To set values during training, see tf.summary.scalar
if mode == tf.estimator.ModeKeys.EVAL:
    return tf.estimator.EstimatorSpec(
        mode,
        loss=loss,
        eval_metric_ops={'my_accuracy': accuracy})
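The caller then reads the loss and our metric back from the dictionary returned by evaluate(). A sketch, again assuming FILE_TEST from Part 1:

evaluate_result = classifier.evaluate(
    input_fn=lambda: my_input_fn(FILE_TEST, repeat_count=1, shuffle_count=1))
print("Evaluation loss: {}".format(evaluate_result["loss"]))
print("Evaluation my_accuracy: {}".format(evaluate_result["my_accuracy"]))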
When model_fn is called with mode == ModeKeys.TRAIN, the model function must train the model.
We must first instantiate an optimizer object. We picked Adagrad (tf.train.AdagradOptimizer) in the following code block only because we're mimicking the DNNClassifier, which also uses Adagrad. The tf.train package provides many other optimizers—feel free to experiment with them.
Next, we train the model by establishing an objective on the optimizer, which is simply to minimize its loss. To establish that objective, we call the minimize method.
In the code below, the optional global_step argument specifies the variable that TensorFlow uses to count the number of batches that have been processed. Setting global_step to tf.train.get_global_step will work beautifully. Also, we are calling tf.summary.scalar to report my_accuracy to TensorBoard during training. For both of these notes, please see the section on TensorBoard below for further explanation.
optimizer = tf.train.AdagradOptimizer(0.05)
train_op = optimizer.minimize(
    loss,
    global_step=tf.train.get_global_step())

# Set the TensorBoard scalar my_accuracy to the accuracy
tf.summary.scalar('my_accuracy', accuracy[1])
When the model is called with mode == ModeKeys.TRAIN, the model function must return a tf.estimator.EstimatorSpec containing the following information:
- the mode, which is tf.estimator.ModeKeys.TRAIN
- the loss
- the result of the training op (train_op)
Here's the code:
# Return training operations: loss and train_op
return tf.estimator.EstimatorSpec(
    mode,
    loss=loss,
    train_op=train_op)
Our model function is now complete!
After creating your new custom Estimator, you'll want to take it for a ride. Start by instantiating the custom Estimator through the Estimator base class as follows:
classifier = tf.estimator.Estimator(
    model_fn=my_model_fn,
    model_dir=PATH)  # Path to where checkpoints etc. are stored
The rest of the code to train, evaluate, and predict using our estimator is the same as for the pre-made DNNClassifier described in Part 1. For example, the following line triggers training the model:
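For reference, that call is the same one shown earlier:

classifier.train(
    input_fn=lambda: my_input_fn(FILE_TRAIN, repeat_count=500, shuffle_count=256))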
As in Part 1, we can view some training results in TensorBoard. To see this reporting, start TensorBoard from your command-line as follows:
# Replace PATH with the actual path passed as model_dir
tensorboard --logdir=PATH
Then browse to the following URL:
localhost:6006
All the pre-made Estimators automatically log a lot of information to TensorBoard. With custom Estimators, however, TensorBoard only provides one default log (a graph of loss) plus the information we explicitly tell TensorBoard to log. Therefore, TensorBoard generates the following from our custom Estimator:
Figure 6. TensorBoard displays three graphs.
In brief, here's what the three graphs tell you:
- global_step/sec: a performance indicator showing how many batches (gradient updates) we process per second as the model trains. It appears because we passed tf.train.get_global_step() to optimizer.minimize.
- loss: the loss reported.
- my_accuracy: our custom metric, recorded both when we returned eval_metric_ops={'my_accuracy': accuracy} in the EstimatorSpec during EVAL and when we called tf.summary.scalar('my_accuracy', accuracy[1]) during TRAIN.
Note the following in the my_accuracy and loss graphs:
During TRAIN, orange values are recorded continuously as batches are processed, which is why the graph spans the x-axis range. By contrast, EVAL produces only a single value from processing all the evaluation steps.
As suggested in Figure 7, you can selectively enable or disable the reporting for training and evaluation on the left side. (Figure 7 shows that we kept reporting on for both.)
Figure 7. Enable or disable reporting.
In order to see the orange graph, you must specify a global step. This, in combination with getting global_steps/sec reported, makes it a best practice to always register a global step by passing tf.train.get_global_step() as an argument to the optimizer.minimize call.
Although pre-made Estimators can be an effective way to quickly create new models, you will often need the additional flexibility that custom Estimators provide. Fortunately, pre-made and custom Estimators follow the same programming model. The only practical difference is that you must write a model function for custom Estimators. Everything else is the same!
For more details, be sure to check out:
Until next time - Happy TensorFlow coding!
This past year we worked hard to make the Google Assistant better for users and developers like you, but we also wanted to find new ways to reward you for doing what you love – building great apps for the Google Assistant.
So at I/O 2017, we announced our first Actions on Google Developer Challenge encouraging you to build helpful, entertaining apps for the Assistant. Today, we're announcing the competition's winners, chosen from thousands of entries.
In addition to the top three prize winners, we also selected winners among various categories including "best app by students," "best parenting app," "best life hack" and more. You can read up on all of the winners' apps here. Congratulations to our winners and to all those who submitted an app as part of the contest – we can't wait for users to check them out!
Happy holidays and happy New Year. We can't wait to see what the next year has in store.
Be sure to follow us on Twitter and check out the Google Assistant developer community program to stay in the know for 2018!
Correction: [January 4, 2018] Two previously announced winners were found ineligible according to the competition's terms. Updated winners available here.
On November 14th, we announced the developer preview of TensorFlow Lite, TensorFlow's lightweight solution for mobile and embedded devices.
Today, in collaboration with Apple, we are happy to announce support for Core ML! With this announcement, iOS developers can leverage the strengths of Core ML for deploying TensorFlow models. In addition, TensorFlow Lite will continue to support cross-platform deployment, including iOS, through the TensorFlow Lite format (.tflite) as described in the original announcement.
Support for Core ML is provided through a tool that takes a TensorFlow model and converts it to the Core ML Model Format (.mlmodel).
For more information, check out the TensorFlow Lite documentation pages, and the Core ML converter. The pypi pip installable package is available here: https://pypi.python.org/pypi/tfcoreml/0.1.0.
Stay tuned for more updates.
Happy TensorFlow Lite coding!
As developers, we all know that having the right assets is crucial to the success of a 3D application, especially with AR and VR apps. Since we launched Poly a few weeks ago, many developers have been downloading and using Poly models in their apps and games. To make this process easier and more powerful, today we launched the Poly API, which allows applications to dynamically search and download 3D assets at both edit and run time.
The API is REST-based, so it's inherently cross-platform. To help you make the API calls and convert the results into objects that you can display in your app, we provide several toolkits and samples for some common game engines and platforms. Even if your engine or platform isn't included in this list, remember that the API is based on HTTP, which means you can call it from virtually any device that's connected to the Internet.
Here are some of the things the API allows you to do:
- List assets, or search for assets by keyword, category, or format.
- Download an asset's files and resources at edit time or at run time.
- With the user's authorization, access the user's own uploaded assets and the assets they have liked on Poly.
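For example, here is a minimal Python sketch of calling the REST endpoint directly. The endpoint and parameter names (assets, keywords, format, key) are taken from the public reference of the time and should be treated as assumptions, YOUR_API_KEY is a placeholder for your own key, and the requests library is assumed to be installed.

import requests

API_KEY = "YOUR_API_KEY"  # placeholder for your own API key

response = requests.get(
    "https://poly.googleapis.com/v1/assets",
    params={"keywords": "piano", "format": "OBJ", "key": API_KEY})
response.raise_for_status()

for asset in response.json().get("assets", []):
    # Each asset includes a display name and a resource name ("assets/<id>")
    # that you can pass to the toolkits or use to fetch the asset's files.
    print(asset["displayName"], asset["name"])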
If you are using Unity, we offer Poly Toolkit for Unity, a plugin that includes all the necessary functionality to automatically wrap the API calls and download and convert assets, exposing it through a simple C# API. For example, you can fetch and import an asset into your scene at runtime with a single line of code:
PolyApi.GetAsset(ASSET_ID, result => {
    PolyApi.Import(result.Value, PolyImportOptions.Default());
});
Poly Toolkit optionally also handles authentication for you, so that you can list the signed in user's own private assets, or the assets that the user has liked on the Poly website.
In addition, Poly Toolkit for Unity also comes with an editor window, where you can search for and import assets from Poly into your Unity scene directly from the editor.
If you are using Unreal, we also offer Poly Toolkit for Unreal, which wraps the API and performs automatic download and conversion of OBJs and Blocks models from Poly. It allows you to query for assets and filter results, download assets, and import them as ready-to-use Unreal actors in your game.
Not using a game engine? No problem! If you are developing for Android, check out our Android sample code, which includes a basic sample with no external dependencies, and also a sample that shows how to use the Poly API in conjunction with ARCore. The samples include:
If you are an iOS developer, we have two samples for you as well: one using SceneKit and one using ARKit, showing how to build an iOS app that downloads and imports models from Poly. This includes all the logic necessary to open an HTTP connection, make the API requests, parse the results, build the 3D objects from the data and place them on the scene.
For web developers, we also offer a complete WebGL sample using Three.js, showing how to get and display a particular asset, or perform searches. There is also a sample showing how to import and display Tilt Brush sketches.
No matter what engine or platform you are using, we hope that the Poly API will help bring high quality assets to your app and help you increase engagement with your users! You can find more information about the Poly API and our toolkits and samples on our developers site.
Since we released AIY Voice Kit, we've been inspired by the thousands of amazing builds coming in from the maker community. Today, the AIY Team is excited to announce our next project: the AIY Vision Kit — an affordable, hackable, intelligent camera.
Much like the Voice Kit, our Vision Kit is easy to assemble and connects to a Raspberry Pi computer. Based on user feedback, this new kit is designed to work with the smaller Raspberry Pi Zero W computer and runs its vision algorithms on-device so there's no cloud connection required.
The kit materials list includes a VisionBonnet, a cardboard outer shell, an RGB arcade-style button, a piezo speaker, a macro/wide lens kit, flex cables, standoffs, a tripod mounting nut and connecting components.
The VisionBonnet is an accessory board for Raspberry Pi Zero W that features the Intel® Movidius™ MA2450, a low-power vision processing unit capable of running neural networks. This will give makers visual perception instead of image sensing. It can run at speeds of up to 30 frames per second, providing near real-time performance.
Bundled with the software image are three neural network models:
For those of you who have your own models in mind, we've included the original TensorFlow code and a compiler. Take a model you already have (or train a new one) and run it on the Intel® Movidius™ MA2450.
The AIY Vision Kit is completely hackable:
We hope you'll use it to solve interesting challenges, such as:
AIY Vision Kits will be available in December, with online pre-sales at Micro Center starting today.
*** Please note that AIY Vision Kit requires Raspberry Pi Zero W, Raspberry Pi Camera V2 and a micro SD card, which must be purchased separately.
We're listening — let us know how we can improve our kits and share what you're making using the #AIYProjects hashtag on social media. We hope AIY Vision Kit inspires you to build all kinds of creative devices.
To cap off another amazing year for Launchpad Accelerator, we're excited to announce the 5th class of our hands-on mentorship program. This includes a diverse group of startups from all over the world looking to tackle everything from streamlining medical records in Africa to improving breast cancer screenings.
Launchpad Accelerator is Google's six month program that includes an intensive two week bootcamp in San Francisco and mentoring from 30+ teams across Google and expert mentors from top technology companies and VCs in Silicon Valley and globally. Participants receive equity-free support, credits for Google products and media training, and continue to work closely with Google back in their home country.
Class 5 kicks off January 29th, 2018 at the Google Developers Launchpad Space in San Francisco and will include 2 weeks of all-expense-paid training, as part of the full 6-month program.
Here's the full list of participating startups (by region):
We recently launched a new YouTube video series focused on teaching developers best practices for the Actions on Google platform.
Apps for the Google Assistant are the gateway for users to engage with your services through Google Home, Android phones, iPhones, and in the future, through every experience where the Google Assistant is available.
The goal of the video series is to show you how to use the Google Assistant platform in the best way. You will learn more from Ido Green, Developer Advocate at Google, who will touch on topics like:
Tune in to learn how to build, or improve your apps for the Google Assistant so your users can benefit from more meaningful, interactive experiences.
And if you'd like to keep the conversation going, please join our developer community at: https://g.co/actionsdev or @actionsongoogle
See you!
The Google Container Tools team originally built container-diff, a new project to help uncover differences between container images, to aid our own development with containers. We think it can be useful for anyone building containerized software, so we're excited to release it as open source to the development community.
Containers and the Dockerfile format help make customization of an application's runtime environment more approachable and easier to understand. While this is a great advantage of using containers in software development, a major drawback is that it can be hard to visualize what changes in a container image will result from a change in the respective Dockerfile. This can lead to bloated images and make tracking down issues difficult.
Imagine a scenario where a developer is working on an application, built on a runtime image maintained by a third-party. During development someone releases a new version of that base image with updated system packages. The developer rebuilds their application and picks up the latest version of the base image, and suddenly their application stops working; it depended on a previous version of one of the installed system packages, but which one? What version was it on before? With no currently existing tool to easily determine what changed between the two base image versions, this totally stalls development until the developer can track down the package version incompatibility.
container-diff helps users investigate image changes by computing semantic diffs between images. What this means is that container-diff figures out on a low-level what data changed, and then combines this with an understanding of package manager information to output this information in a format that's actually readable to users. The tool can find differences in system packages, language-level packages, and files in a container image.
Users can specify images in several formats: from the local Docker daemon (using the prefix `daemon://` on the image path), from a remote registry (using the prefix `remote://`), or as a .tar file in the format exported by the "docker save" command. You can also combine these formats to compute the diff between a local version of an image and a remote version. This can be useful when experimenting with new builds of an image that you might not be quite ready to push yet. container-diff supports image tarballs and the registry protocol natively, enabling it to run in environments without a Docker daemon.
Here is a basic Dockerfile that installs Python inside our Debian base image. Running container-diff on the base image and the new one with Python, users can see all the apt packages that were installed as dependencies of Python.
➜  debian_with_python cat Dockerfile
FROM gcr.io/google-appengine/debian8
RUN apt-get update && apt-get install -qq --force-yes python

➜  debian_with_python docker build -q -t debian_with_python .
sha256:be2cd1ae6695635c7041be252589b73d1539a858c33b2814a66fe8fa4b048655

➜  debian_with_python container-diff diff gcr.io/google-appengine/debian8:latest daemon://debian_with_python:latest

-----Apt-----
Packages found only in gcr.io/google-appengine/debian8:latest: None

Packages found only in debian_with_python:latest:
NAME                    VERSION              SIZE
-file                   1:5.22 15-2+deb8u3   76K
-libexpat1              2.1.0-6 deb8u4       386K
-libffi6                3.1-2 deb8u1         43K
-libmagic1              1:5.22 15-2+deb8u3   3.1M
-libpython-stdlib       2.7.9-1              54K
-libpython2.7-minimal   2.7.9-2 deb8u1       2.6M
-libpython2.7-stdlib    2.7.9-2 deb8u1       8.2M
-libsqlite3-0           3.8.7.1-1 deb8u2     877K
-mime-support           3.58                 146K
-python                 2.7.9-1              680K
-python-minimal         2.7.9-1              163K
-python2.7              2.7.9-2 deb8u1       360K
-python2.7-minimal      2.7.9-2 deb8u1       3.7M

Version differences: None
And below is a Dockerfile that inherits from our Python base runtime image, and then installs the mock and six packages inside of it. Running container-diff with the pip differ, users can see all the Python packages that have either been installed or changed as a result of this:
➜  python_upgrade cat Dockerfile
FROM gcr.io/google-appengine/python
RUN pip install -U six

➜  python_upgrade docker build -q -t python_upgrade .
sha256:7631573c1bf43727d7505709493151d3df8f98c843542ed7b299f159aec6f91f

➜  python_upgrade container-diff diff gcr.io/google-appengine/python:latest daemon://python_upgrade:latest --types=pip

-----Pip-----
Packages found only in gcr.io/google-appengine/python:latest: None

Packages found only in python_upgrade:latest:
NAME       VERSION   SIZE
-funcsigs  1.0.2     51.4K
-mock      2.0.0     531.2K
-pbr       3.1.1     471.1K

Version differences:
PACKAGE   IMAGE1 (gcr.io/google-appengine/python:latest)   IMAGE2 (python_upgrade:latest)
-six      1.8.0, 26.7K
This can be especially useful when it's unclear which packages might have been installed or changed incidentally as a result of dependency management of Python modules.
These are just a few examples. The tool currently has support for Python and Node.js packages installed via pip and npm, respectively, as well as comparison of image filesystems and Docker history. In the future, we'd like to see support added for additional runtime and language differs, including Java, Go, and Ruby. External contributions are welcome! For more information on contributing to container-diff, see this how-to guide.
Now that we've seen container-diff compare two images in action, it's easy to imagine how the tool may be integrated into larger workflows to aid in development:
container-diff's default output mode is "human-readable," but also supports output to JSON, allowing for easy automated parsing and processing by users.
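As a sketch of that kind of automation, you could drive the tool from Python and parse its output; the --json flag and the DiffType/Diff keys below are assumptions, so check the container-diff documentation for the exact schema.

import json
import subprocess

# Run container-diff (assumed to be on PATH) and request machine-readable output.
result = subprocess.run(
    ["container-diff", "diff",
     "gcr.io/google-appengine/python:latest",
     "daemon://python_upgrade:latest",
     "--types=pip", "--json"],
    check=True, capture_output=True, text=True)

for diff in json.loads(result.stdout):
    # Each entry describes one differ (e.g. Pip) and the packages added,
    # removed, or changed between the two images.
    print(diff.get("DiffType"), diff.get("Diff"))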
In addition to comparing two images, container-diff has the ability to analyze a single image on its own. This can enable users to get a quick glance at information about an image, such as its system and language-level package installations and filesystem contents.
Let's take a look at our Debian base image again. We can use the tool to easily view a list of all packages installed in the image, along with each one's installed version and size:
➜  Development container-diff analyze gcr.io/google-appengine/debian8:latest

-----Apt-----
Packages found in gcr.io/google-appengine/debian8:latest:
NAME                      VERSION           SIZE
-acl                      2.2.52-2          258K
-adduser                  3.113 nmu3        1M
-apt                      1.0.9.8.4         3.1M
-base-files               8 deb8u9          413K
-base-passwd              3.5.37            185K
-bash                     4.3-11 deb8u1     4.9M
-bsdutils                 1:2.25.2-6        181K
-ca-certificates          20141019 deb8u3   367K
-coreutils                8.23-4            13.9M
-dash                     0.5.7-4 b1        191K
-debconf                  1.5.56 deb8u1     614K
-debconf-i18n             1.5.56 deb8u1     1.1M
-debian-archive-keyring   2017.5~deb8u1     137K
We could use this to verify compatibility with an application we're building, or maybe sort the packages by size in another one of our images and see which ones are taking up the most space.
For more information about this tool as well as a breakdown with examples, uses, and inner workings of the tool, please take a look at documentation on our GitHub page. Happy diffing!
Special thanks to Colette Torres and Abby Tisdale, our software engineering interns who helped build the tool from the ground up.
Welcome to Part 2 of a blog series that introduces TensorFlow Datasets and Estimators. We're devoting this article to feature columns—a data structure describing the features that an Estimator requires for training and inference. As you'll see, feature columns are very rich, enabling you to represent a diverse range of data.
In Part 1, we used the pre-made Estimator DNNClassifier to train a model to predict different types of Iris flowers from four input features. That example created only numerical feature columns (of type tf.feature_column.numeric_column). Although those feature columns were sufficient to model the lengths of petals and sepals, real world data sets contain all kinds of non-numerical features. For example:
How can we represent non-numerical feature types? That's exactly what this blogpost is all about.
Let's start by asking what kind of data can we actually feed into a deep neural network? The answer is, of course, numbers (for example, tf.float32). After all, every neuron in a neural network performs multiplication and addition operations on weights and input data. Real-life input data, however, often contains non-numerical (categorical) data. For example, consider a product_class feature that can contain the following three non-numerical values:
kitchenware
electronics
sports
ML models generally represent categorical values as simple vectors in which a 1 represents the presence of a value and a 0 represents the absence of a value. For example, when product_class is set to sports, an ML model would usually represent product_class as [0, 0, 1], meaning: 0 for kitchenware (absent), 0 for electronics (absent), and 1 for sports (present).
So, although raw data can be numerical or categorical, an ML model represents all features as either a number or a vector of numbers.
As Figure 2 suggests, you specify the input to a model through the feature_columns argument of an Estimator (DNNClassifier for Iris). Feature Columns bridge input data (as returned by input_fn) with your model.
To represent features as a feature column, call functions of the tf.feature_column package. This blogpost explains nine of the functions in this package. As Figure 3 shows, all nine functions return either a Categorical-Column or a Dense-Column object, except bucketized_column which inherits from both classes:
Let's look at these functions in more detail.
The Iris classifier called tf.feature_column.numeric_column() for all input features: SepalLength, SepalWidth, PetalLength, PetalWidth. Although numeric_column() provides optional arguments, calling the function without any arguments is a perfectly easy way to specify a numerical value with the default data type (tf.float32) as input to your model. For example:
# Defaults to a tf.float32 scalar.
numeric_feature_column = tf.feature_column.numeric_column(key="SepalLength")
Use the dtype argument to specify a non-default numerical data type. For example:
# Represent a tf.float64 scalar.
numeric_feature_column = tf.feature_column.numeric_column(key="SepalLength",
                                                          dtype=tf.float64)
By default, a numeric column creates a single value (scalar). Use the shape argument to specify another shape. For example:
# Represent a 10-element vector in which each cell contains a tf.float32.
vector_feature_column = tf.feature_column.numeric_column(key="Bowling", shape=10)

# Represent a 10x5 matrix in which each cell contains a tf.float32.
matrix_feature_column = tf.feature_column.numeric_column(key="MyMatrix", shape=[10, 5])
Often, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. To do so, create a bucketized column. For example, consider raw data that represents the year a house was built. Instead of representing that year as a scalar numeric column, we could split year into the following four buckets: before 1960, 1960 through 1979, 1980 through 1999, and 2000 or later. The model will represent each bucket as a one-hot vector; for example, a house built in 1985 falls into the third bucket and is represented as [0, 0, 1, 0].
Why would you want to split a number—a perfectly valid input to our model—into a categorical value like this? Well, notice that the categorization splits a single input number into a four-element vector. Therefore, the model now can learn four individual weights rather than just one. Four weights creates a richer model than one. More importantly, bucketizing enables the model to clearly distinguish between different year categories since only one of the elements is set (1) and the other three elements are cleared (0). When we just use a single number (a year) as input, the model can't distinguish categories. So, bucketing provides the model with additional important information that it can use to learn.
The following code demonstrates how to create a bucketized feature:
# A numeric column for the raw input.
numeric_feature_column = tf.feature_column.numeric_column("Year")

# Bucketize the numeric column on the years 1960, 1980, and 2000
bucketized_feature_column = tf.feature_column.bucketized_column(
    source_column=numeric_feature_column,
    boundaries=[1960, 1980, 2000])
Note the following:
- Before creating the bucketized column, we first created a numeric column to represent the raw year.
- We passed that numeric column as the source_column argument to tf.feature_column.bucketized_column().
- Specifying three boundaries (1960, 1980, 2000) creates a four-bucket categorization.
Categorical identity columns are a special case of bucketized columns. In traditional bucketized columns, each bucket represents a range of values (for example, from 1960 to 1979). In a categorical identity column, each bucket represents a single, unique integer. For example, let's say you want to represent the integer range [0, 4). (That is, you want to represent the integers 0, 1, 2, or 3.) In this case, the categorical identity mapping looks like this:
So, why would you want to represent values as categorical identity columns? As with bucketized columns, a model can learn a separate weight for each class in a categorical identity column. For example, instead of using a string to represent the product_class, let's represent each class with a unique integer value. That is:
0="kitchenware"
1="electronics"
2="sport"
Call tf.feature_column.categorical_column_with_identity() to implement a categorical identity column. For example:
# Create a categorical output for input "feature_name_from_input_fn",
# which must be of integer type. Value is expected to be >= 0 and < num_buckets
identity_feature_column = tf.feature_column.categorical_column_with_identity(
    key='feature_name_from_input_fn',
    num_buckets=4)  # Values [0, 4)

# The 'feature_name_from_input_fn' above needs to match an integer key that is
# returned from input_fn (see below). So for this case, 'Integer_1' or
# 'Integer_2' would be valid strings instead of 'feature_name_from_input_fn'.
# For more information, please check out Part 1 of this blog series.
def input_fn():
    ...<code>...
    return ({'Integer_1': [values], ..<etc>.., 'Integer_2': [values]},
            [Label_values])
We cannot input strings directly to a model. Instead, we must first map strings to numeric or categorical values. Categorical vocabulary columns provide a good way to represent strings as a one-hot vector. For example:
As you can see, categorical vocabulary columns are kind of an enum version of categorical identity columns. TensorFlow provides two different functions to create categorical vocabulary columns:
tf.feature_column.categorical_column_with_vocabulary_list()
tf.feature_column.categorical_column_with_vocabulary_file()
The tf.feature_column.categorical_column_with_vocabulary_list() function maps each string to an integer based on an explicit vocabulary list. For example:
# Given input "feature_name_from_input_fn" which is a string, # create a categorical feature to our model by mapping the input to one of # the elements in the vocabulary list. vocabulary_feature_column = tf.feature_column.categorical_column_with_vocabulary_list( key="feature_name_from_input_fn", vocabulary_list=["kitchenware", "electronics", "sports"])
The preceding function has a significant drawback; namely, there's way too much typing when the vocabulary list is long. For these cases, call tf.feature_column.categorical_column_with_vocabulary_file() instead, which lets you place the vocabulary words in a separate file. For example:
# Given input "feature_name_from_input_fn" which is a string, # create a categorical feature to our model by mapping the input to one of # the elements in the vocabulary file vocabulary_feature_column = tf.feature_column.categorical_column_with_vocabulary_file( key="feature_name_from_input_fn", vocabulary_file="product_class.txt", vocabulary_size=3) # product_class.txt should have one line for vocabulary element, in our case: kitchenware electronics sports
So far, we've worked with a naively small number of categories. For example, our product_class example has only 3 categories. Often though, the number of categories can be so big that it's not possible to have individual categories for each vocabulary word or integer because that would consume too much memory. For these cases, we can instead turn the question around and ask, "How many categories am I willing to have for my input?" In fact, the tf.feature_column.categorical_column_with_hash_bucket() function enables you to specify the number of categories. For example, the following code shows how this function calculates a hash value of the input, then puts it into one of the hash_bucket_size categories using the modulo operator:
# Create categorical output for input "feature_name_from_input_fn".
# Category becomes: hash_value("feature_name_from_input_fn") % hash_bucket_size
hashed_feature_column = tf.feature_column.categorical_column_with_hash_bucket(
    key="feature_name_from_input_fn",
    hash_bucket_size=100)  # The number of categories
At this point, you might rightfully think: "This is crazy!" After all, we are forcing the different input values to a smaller set of categories. This means that two, probably completely unrelated inputs, will be mapped to the same category, and consequently mean the same thing to the neural network. Figure 7 illustrates this dilemma, showing that kitchenware and sports both get assigned to category (hash bucket) 12:
As with many counterintuitive phenomena in machine learning, it turns out that hashing often works well in practice. That's because hash categories provide the model with some separation. The model can use additional features to further separate kitchenware from sports.
The last categorical column we'll cover allows us to combine multiple input features into a single one. Combining features, better known as feature crosses, enables the model to learn separate weights specifically for whatever that feature combination means.
More concretely, suppose we want our model to calculate real estate prices in Atlanta, GA. Real-estate prices within this city vary greatly depending on location. Representing latitude and longitude as separate features isn't very useful in identifying real-estate location dependencies; however, crossing latitude and longitude into a single feature can pinpoint locations. Suppose we represent Atlanta as a grid of 100x100 rectangular sections, identifying each of the 10,000 sections by a cross of its latitude and longitude. This cross enables the model to pick up on pricing conditions related to each individual section, which is a much stronger signal than latitude and longitude alone.
Figure 8 shows our plan, with the latitude & longitude values for the corners of the city:
For the solution, we used a combination of some feature columns we've looked at before, as well as the tf.feature_columns.crossed_column() function.
# In our input_fn, we convert input longitude and latitude to integer values
# in the range [0, 100)
def input_fn():
    # Using Datasets, read the input values for longitude and latitude
    latitude = ...   # A tf.float32 value
    longitude = ...  # A tf.float32 value

    # In our example we just return our lat_int, long_int features.
    # The dictionary of a complete program would probably have more keys.
    return {"latitude": latitude, "longitude": longitude, ...}, labels

# As can be seen from the map, we want to split the latitude range
# [33.641336, 33.887157] into 100 buckets. To do this we use np.linspace
# to get a list of 99 numbers between min and max of this range.
# Using this list we can bucketize latitude into 100 buckets.
latitude_buckets = list(np.linspace(33.641336, 33.887157, 99))
latitude_fc = tf.feature_column.bucketized_column(
    tf.feature_column.numeric_column('latitude'),
    latitude_buckets)

# Do the same bucketization for longitude as done for latitude.
longitude_buckets = list(np.linspace(-84.558798, -84.287259, 99))
longitude_fc = tf.feature_column.bucketized_column(
    tf.feature_column.numeric_column('longitude'),
    longitude_buckets)

# Create a feature cross of fc_longitude x fc_latitude.
fc_san_francisco_boxed = tf.feature_column.crossed_column(
    keys=[latitude_fc, longitude_fc],
    hash_bucket_size=1000)  # No precise rule, maybe 1000 buckets will be good?
You may create a feature cross from either of the following:
- Feature names; that is, names from the dict returned by input_fn.
- Any categorical column (see Figure 3), except categorical_column_with_hash_bucket.
When feature columns latitude_fc and longitude_fc are crossed, TensorFlow will create 10,000 combinations of (latitude_fc, longitude_fc) organized as follows:
(0,0),  (0,1)...  (0,99)
(1,0),  (1,1)...  (1,99)
...     ...       ...
(99,0), (99,1)... (99,99)
The function tf.feature_column.crossed_column performs a hash calculation on these combinations and then slots the result into a category by performing a modulo operation with hash_bucket_size. As discussed before, performing the hash and modulo function will probably result in category collisions; that is, multiple (latitude, longitude) feature crosses will end up in the same hash bucket. In practice though, performing feature crosses still provides significant value to the learning capability of your models.
Somewhat counterintuitively, when creating feature crosses, you typically still should include the original (uncrossed) features in your model. For example, provide not only the (latitude, longitude) feature cross but also latitude and longitude as separate features. The separate latitude and longitude features help the model separate the contents of hash buckets containing different feature crosses.
See this link for a full code example. Also, see the reference section at the end of this post for many more examples of feature crossing.
Indicator columns and embedding columns never work on features directly, but instead take categorical columns as input.
When using an indicator column, we're telling TensorFlow to do exactly what we've seen in our categorical product_class example. That is, an indicator column treats each category as an element in a one-hot vector, where the matching category has value 1 and the rest have 0s:
Here's how you create an indicator column:
categorical_column = ...  # Create any type of categorical column, see Figure 3

# Represent the categorical column as an indicator column.
# This means creating a one-hot vector with one element for each category.
indicator_column = tf.feature_column.indicator_column(categorical_column)
Now, suppose instead of having just three possible classes, we have a million. Or maybe a billion. For a number of reasons (too technical to cover here), as the number of categories grow large, it becomes infeasible to train a neural network using indicator columns.
We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an embedding column represents that data as a lower-dimensional, ordinary vector in which each cell can contain any number, not just 0 or 1. By permitting a richer palette of numbers for every cell, an embedding column contains far fewer cells than an indicator column.
Let's look at an example comparing indicator and embedding columns. Suppose our input examples consists of different words from a limited palette of only 81 words. Further suppose that the data set provides the following input words in 4 separate examples:
In that case, Figure 10 illustrates the processing path for embedding columns or Indicator columns.
When an example is processed, one of the categorical_column_with... functions maps the example string to a numerical categorical value. For example, a function maps "spoon" to [32]. (The 32 comes from our imagination—the actual values depend on the mapping function.) You may then represent these numerical categorical values in either of the following two ways:
How do the values in the embeddings vectors magically get assigned? Actually, the assignments happen during training. That is, the model learns the best way to map your input numeric categorical values to the embeddings vector value in order to solve your problem. Embedding columns increase your model's capabilities, since an embeddings vector learns new relationships between categories from the training data.
Why is the embedding vector size 3 in our example? Well, the following "formula" provides a general rule of thumb about the number of embedding dimensions:
embedding_dimensions = number_of_categories**0.25
That is, the embedding vector dimension should be the 4th root of the number of categories. Since our vocabulary size in this example is 81, the recommended number of dimensions is 3:
3 = 81**0.25
Note that this is just a general guideline; you can set the number of embedding dimensions as you please.
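Expressed as code, the rule of thumb is just:

number_of_categories = 81
embedding_dimensions = round(number_of_categories ** 0.25)  # 81 ** 0.25 == 3.0, so 3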
Call tf.feature_column.embedding_column to create an embedding_column. The dimension of the embedding vector depends on the problem at hand as described above, but common values go as low as 3 all the way to 300 or even beyond:
categorical_column = ...  # Create any categorical column shown in Figure 3.

# Represent the categorical column as an embedding column.
# This creates an embedding vector with dimension_of_embedding_vector
# elements for each category.
embedding_column = tf.feature_column.embedding_column(
    categorical_column=categorical_column,
    dimension=dimension_of_embedding_vector)
Embeddings is a big topic within machine learning. This information was just to get you started using them as feature columns. Please see the end of this post for more information.
Still there? I hope so, because we only have a tiny bit left before you've graduated from the basics of feature columns.
As we saw in Figure 1, feature columns map your input data (described by the feature dictionary returned from input_fn) to values fed to your model. You specify feature columns as a list to a feature_columns argument of an estimator. Note that the feature_columns argument(s) vary depending on the Estimator:
- LinearClassifier and LinearRegressor: accept all types of feature columns.
- DNNClassifier and DNNRegressor: accept only dense columns; other column types must be wrapped in an indicator_column or an embedding_column.
- DNNLinearCombinedClassifier and DNNLinearCombinedRegressor: the linear_feature_columns argument accepts any feature column type, while the dnn_feature_columns argument accepts only dense columns.
The reasons for the above rules are beyond the scope of this introductory post, but we will make sure to cover them in a future blogpost.
Use feature columns to map your input data to the representations you feed your model. We only used numeric_column in Part 1 of this series, but with the other functions described in this post, you can easily create other feature columns.
For more details on feature columns, be sure to check out:
If you want to learn more about embeddings: