Learn the steps to build an app that detects crop diseases

October 15, 2020



Posted by Laurence Moroney, TensorFlow Developer Advocate at Google

On October 16-18, thousands of developers from all over the world are coming together for DevFest 2020, the largest virtual weekend of community-led learning on Google technologies.

For DevFest this year, a few familiar faces from Google and the community came together to show you how to build an app using multiple Google Developer tools to detect crop diseases, from scratch, in just a few minutes. This is one example of how developers can leverage a number of Google tools to solve a real-world problem. Watch the full demo video here or learn more below.

Creating the Android app

Image of Chet Haase

Chet Haase, Android Developer Advocate, begins by creating an Android app that recognizes information about plants. To do that, he needs camera functionality as well as machine learning inference.

The app is written in Kotlin, uses CameraX to take the pictures, and uses ML Kit for on-device machine learning analysis. The core functionality revolves around taking a picture, analyzing it, and displaying the results.
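The demo's actual code is in the open source repo linked at the end of this post, but wiring up CameraX for this kind of flow generally looks something like the sketch below. LeafAnalyzer here is a hypothetical analyzer (sketched a little further down) that does the ML Kit work.

```kotlin
import android.content.Context
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat
import androidx.lifecycle.LifecycleOwner

// Bind a live preview plus an analysis stream that feeds camera frames to our
// analyzer. LeafAnalyzer is a hypothetical ImageAnalysis.Analyzer, not the demo's exact code.
fun startCamera(context: Context, lifecycleOwner: LifecycleOwner, previewView: PreviewView) {
    val providerFuture = ProcessCameraProvider.getInstance(context)
    providerFuture.addListener({
        val cameraProvider = providerFuture.get()

        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(previewView.surfaceProvider)
        }

        // Keep only the latest frame so analysis never falls behind the camera.
        val analysis = ImageAnalysis.Builder()
            .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
            .build()
            .also { it.setAnalyzer(ContextCompat.getMainExecutor(context), LeafAnalyzer(context)) }

        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(
            lifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, preview, analysis
        )
    }, ContextCompat.getMainExecutor(context))
}
```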

[Code showing how the app takes a picture, analyzes it, and displays the results.]

ML Kit makes it easy to recognize the contents of an image using its ImageLabeler object, so Chet just grabs a frame from CameraX and uses that. When this succeeds, we receive a collection of ImageLabels, which we turn into text strings and display in a toast.
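As a rough sketch of that flow (again, not the demo's exact code), a hypothetical LeafAnalyzer could wrap each CameraX frame in an InputImage, run ML Kit's on-device labeler, and toast the results:

```kotlin
import android.content.Context
import android.widget.Toast
import androidx.camera.core.ExperimentalGetImage
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

// Hypothetical analyzer: wrap each CameraX frame in an InputImage, run the
// on-device labeler, and show the resulting labels in a toast.
@ExperimentalGetImage
class LeafAnalyzer(private val context: Context) : ImageAnalysis.Analyzer {

    private val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)

    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage == null) {
            imageProxy.close()
            return
        }
        val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)

        labeler.process(image)
            .addOnSuccessListener { labels ->
                val message = labels.joinToString { "${it.text} (%.2f)".format(it.confidence) }
                Toast.makeText(context, message, Toast.LENGTH_SHORT).show()
            }
            // Always close the frame so CameraX can deliver the next one.
            .addOnCompleteListener { imageProxy.close() }
    }
}
```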

[Demo of the app detecting that the image is a plant.]

Setting up the Machine Learning model

To dig a little deeper, Gus Martins, Google Developer Advocate for TensorFlow, shows us how to set up a Machine Learning model to detect diseases in bean plants.

Gus uses Google Colab, a cloud-hosted development tool, to do transfer learning from an existing ML model hosted on TensorFlow Hub.

He then puts it all together and uses a tool called TensorFlow Lite Model Maker to train the model on our custom dataset.

Setting up the Android app to recognize and build classes

The model Gus created includes all the metadata Android Studio needs to recognize it and generate classes that can run inference on the model using TensorFlow Lite. To do so, Annyce Davis, Google Developer Expert for Android, updates the app to use TensorFlow Lite.

Image of Annyce Davis

She runs the model on an image from the camera to infer whether a bean leaf is diseased or not.
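What that looks like in code depends on the model file and its metadata, but the generated wrapper is used roughly like the sketch below; the BeanDiseaseModel class name and the probabilityAsCategoryList property are illustrative stand-ins for whatever Android Studio generates from the real model.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import org.tensorflow.lite.support.image.TensorImage

// Hypothetical: BeanDiseaseModel is the wrapper class Android Studio generates
// from the imported .tflite file; its name and the probabilityAsCategoryList
// property depend on the model file and its metadata.
fun classifyLeaf(context: Context, bitmap: Bitmap): String {
    val model = BeanDiseaseModel.newInstance(context)

    // The metadata bundles the label map, so results come back as named categories.
    val outputs = model.process(TensorImage.fromBitmap(bitmap))
    val best = outputs.probabilityAsCategoryList.maxByOrNull { it.score }

    model.close()
    return best?.let { "${it.label} (%.2f)".format(it.score) } ?: "No result"
}
```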

Now, when we run our app, instead of just telling us it’s looking at a leaf, it can tell us whether our bean plant is healthy or, if not, give us a diagnosis.

(Demo of the app detecting whether or not the plant is healthy)

Transforming the demo into a successful app using Firebase, Design, and Responsible AI principles

This is just a raw demo. But to transform it into a successful app, Todd Kerpelman, Google Developer Advocate for Firebase, suggests using the Firebase plugin for Android Studio to add some Analytics, so we can find out exactly how our users are interacting with our app.

Image of Todd Kerpelman

There are a lot of ways to get at this data: it will start showing up in the Firebase dashboard, but one really fun way to view it is StreamView, which gives you a real-time sample of the kinds of analytics results we're seeing.

[Firebase StreamView allows you to view real-time analytics.]
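Of course, the dashboard and StreamView can only show what the app actually logs. A minimal sketch of logging a custom event from the Android app might look like this; the leaf_classified event name and its parameters are made up for illustration.

```kotlin
import com.google.firebase.analytics.ktx.analytics
import com.google.firebase.analytics.ktx.logEvent
import com.google.firebase.ktx.Firebase

// Hypothetical event: record each classification so the Firebase dashboard (and
// StreamView) can show which diagnoses users are actually seeing.
fun logDiagnosis(label: String, score: Float) {
    Firebase.analytics.logEvent("leaf_classified") {
        param("diagnosis", label)
        param("confidence", score.toDouble())
    }
}
```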

Using Firebase, you could also, for example, add A/B testing to your app to choose the best model for your users, use Remote Config to keep your app up to date, add easy sign-in if you want users to log in, and a whole lot more!

Di Dang, UX Designer & Design Advocate, reminds us that if we were to productize this app, it’s important to keep in mind how our AI design decisions impact users.

Image of Di Dang

For instance, we need to consider if and how it makes sense to display confidence levels. We also need to consider how to design the onboarding experience to set user expectations for the capabilities and limitations of an ML-based app, which is vital to adoption and engagement. For more guidance on AI design decisions, check out the People + AI Guidebook.

[You can learn more about AI design decisions in the People + AI Guidebook.]

This use case focuses on plant diseases, but for this case and others, where our ML-based predictions intersect with people or communities, we absolutely need to think about responsible AI themes like privacy and fairness. Learn more here.

Building a Progressive Web App

Paul Kinlan, Developer Advocate for Web, reminds us to not forget about the web!

Image of Paul Kinlan

Paul shows us how to build a PWA that users can install across all platforms, combining the camera with TensorFlow.js to bring machine learning to an amazing experience that runs in the browser, with no additional download required.

After setting up the project with a standard layout (an HTML file, a manifest, and a Service Worker to make it a PWA) and a data folder containing our TensorFlow configuration, we wait until all of the JS and CSS has loaded before initializing the app. We then set up the camera with our helper object and load the TensorFlow model. Once it's active, we can set up the UI.

The PWA is now ready and waiting for us to use.

(The PWA tells us whether or not the plant is healthy - no app download necessary!)

The importance of Open Source

And finally, Pujaa Rajan, Google Developer Expert for TensorFlow and Women Techmakers lead, reminds us that we might want to open source this project too, so that developers can suggest improvements, optimizations, and even additional features by filing an issue or sending a pull request. It’s a great way to get your hard work in front of even more people. You can learn more about starting an Open Source project here.

Image of Pujaa Rajan

In fact, we’ve already open sourced this project, which you can find here.

So now you have the platform for building a real app. With tooling from Android Studio, CameraX, Jetpack, ML Kit, Colab, TensorFlow, Firebase, Chrome, and Google Cloud, you have a lot of things that just work better together. This isn’t a finished project by any means, just a proof of concept for how a minimum viable product with a roadmap to completion can be put together using Google’s Developer Tools.


Join us online this weekend at a DevFest near you. Sign up here.