Build a driver distraction application using the Watson Visual Recognition service

September 9, 2019 devadvin

Each day in the United States, approximately 9 people are killed and more than 1,000 are injured in crashes that involve a distracted driver. Those are alarming numbers for any country, and they call for effective prevention.

Artificial intelligence can help with this problem. A city government could install IoT cameras to monitor drivers, detect distracted behavior, and enforce safer driving rules. This blog explains how easy it is to implement such a project by using IBM Watson Studio on IBM Cloud.

What is Watson Studio?

IBM Watson™ Studio is a full suite of data science tools that any data scientist can use to implement an AI project with minimal coding. The suite includes Data Refinery for data preparation and SPSS® Modeler for building machine learning models without writing code. In this blog, I use Watson Visual Recognition to create an image classifier in under 30 minutes, without any code.

How to create the application

We start by creating the Visual Recognition service in Watson Studio and building the model. We then integrate the model into our end application to see it in action. Let’s get started.

Visual recognition using Watson Studio

  1. Create a Watson Studio Visual Recognition project.

    Creating a Studio project

  2. Get the image data set from Kaggle.

  3. Drag the ZIP files into the model to add the training data. You will see all 10 classes: nine types of distraction plus one negative class (c0: safe driving).

    Distraction classes

  4. Send the data for training and wait for the results. If you prefer to train the model programmatically, the equivalent API call is sketched after this list.
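The Watson Studio tool is the no-code path, but the same training can be kicked off through the Visual Recognition API. The following is a minimal sketch that assumes the Watson Visual Recognition v3 Java SDK (the com.ibm.watson:ibm-watson artifact); the API key, ZIP file names, and class names are placeholders, and builder method names can differ slightly between SDK versions, so check the SDK release you use.

    import com.ibm.cloud.sdk.core.security.IamAuthenticator;
    import com.ibm.watson.visual_recognition.v3.VisualRecognition;
    import com.ibm.watson.visual_recognition.v3.model.Classifier;
    import com.ibm.watson.visual_recognition.v3.model.CreateClassifierOptions;

    import java.io.FileInputStream;
    import java.io.FileNotFoundException;

    public class TrainDistractionClassifier {
        public static void main(String[] args) throws FileNotFoundException {
            // Authenticate with the Visual Recognition service; the API key comes from
            // the service credentials in IBM Cloud (placeholder value here).
            IamAuthenticator authenticator = new IamAuthenticator("<your-api-key>");
            VisualRecognition service = new VisualRecognition("2018-03-19", authenticator);

            // One ZIP of positive examples per distraction class (c1..c9) and one ZIP of
            // negative examples (c0, safe driving). File and class names are illustrative.
            CreateClassifierOptions options = new CreateClassifierOptions.Builder()
                    .name("driver-distraction")
                    .addPositiveExamples("c1_texting_right", new FileInputStream("c1.zip"))
                    .addPositiveExamples("c2_talking_on_phone_right", new FileInputStream("c2.zip"))
                    // ...repeat for the remaining distraction classes c3-c9...
                    .negativeExamples(new FileInputStream("c0.zip"))
                    .build();

            // Start training; the classifier status moves from "training" to "ready" when done.
            Classifier classifier = service.createClassifier(options).execute().getResult();
            System.out.println(classifier);
        }
    }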

Dashboard to check class imbalance (optional)

One common problem in data science is class imbalance, which occurs when the classes in the training data do not have roughly equal numbers of examples. An imbalanced dataset can lower model accuracy and skew the results toward the over-represented classes. To see how balanced the image dataset is, I created a simple dashboard using Cognos Dashboards.

Dashboard
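If you want a quick check before building a dashboard, you can also count the images per class directly from the extracted Kaggle archive. This is a small plain-Java sketch; it assumes the training images are extracted to imgs/train with one folder per class (c0 through c9), so adjust the path to match your download.

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class ClassBalanceCheck {
        public static void main(String[] args) throws IOException {
            // Root of the extracted training set; assumed layout is one folder per class (c0..c9).
            Path trainDir = Paths.get("imgs/train");

            try (DirectoryStream<Path> classDirs = Files.newDirectoryStream(trainDir)) {
                for (Path classDir : classDirs) {
                    if (!Files.isDirectory(classDir)) {
                        continue;
                    }
                    // Count the image files in each class folder to spot under-represented classes.
                    try (Stream<Path> files = Files.list(classDir)) {
                        long count = files.filter(Files::isRegularFile).count();
                        System.out.printf("%s: %d images%n", classDir.getFileName(), count);
                    }
                }
            }
        }
    }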

Model integration – Android application

To use the visual recognition model we created, we need a front end. The following sample Android application demonstrates how a Watson Studio visual recognition model can be integrated into an app.

Steps

  1. Download the Android project.

  2. Edit the visual recognition credentials in the values/strings.xml file.

    Editing credentials

  3. Edit the model ID in the MainActivity.java file. A sketch of how the app can use these values to call the model follows this list.

    Model ID
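The sketch below shows how an app might wire those two values together and call the custom model. It assumes the Watson Java SDK (com.ibm.watson:ibm-watson) is on the app's classpath; the class, method, and parameter names are illustrative rather than taken from the sample project, and the network call must run off the Android UI thread.

    import com.ibm.cloud.sdk.core.security.IamAuthenticator;
    import com.ibm.watson.visual_recognition.v3.VisualRecognition;
    import com.ibm.watson.visual_recognition.v3.model.ClassifiedImages;
    import com.ibm.watson.visual_recognition.v3.model.ClassifyOptions;

    import java.io.InputStream;
    import java.util.Collections;

    public class DistractionClassifier {

        private final VisualRecognition service;
        private final String modelId;

        // apiKey comes from values/strings.xml (step 2); modelId is the custom classifier ID
        // that step 3 puts into MainActivity.java.
        public DistractionClassifier(String apiKey, String modelId) {
            this.service = new VisualRecognition("2018-03-19", new IamAuthenticator(apiKey));
            this.modelId = modelId;
        }

        // Sends one captured image to the custom classifier and returns the scored classes
        // (c0 safe driving, c1-c9 distraction types). Call this from a background thread.
        public ClassifiedImages classify(InputStream imageStream, String fileName) {
            ClassifyOptions options = new ClassifyOptions.Builder()
                    .imagesFile(imageStream)
                    .imagesFilename(fileName)
                    .classifierIds(Collections.singletonList(modelId))
                    .threshold(0.5f)
                    .build();
            return service.classify(options).execute().getResult();
        }
    }

In the app, the highest-scoring class can then be displayed to the user or flagged whenever it is anything other than c0.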

Demo of the application in action

Demo

Full demo video

Watch the video to understand how Watson Studio on IBM Cloud is used to build this project.
