Explainable AI: How do I trust model predictions?

August 7, 2019 | devadvin

We are used to providing explanations to different groups of people: our friends, our kids, our colleagues. We use different words and expressions to convey our thoughts to each of these groups.

Today, something similar is happening in the machine learning world as more models are deployed to make predictions in domains such as finance, telecommunications, and healthcare. In many cases, businesses don’t fully understand how machine learning models make their predictions, and this lack of understanding can be problematic.

Often, businesses and policy makers, especially in banking, insurance, and healthcare, need to be able to explain how their models make predictions. This becomes even more complicated in the context of ensemble models and deep neural networks.

A recent Gartner study identifying the top 10 trends in data and analytics notes that ‘Explainable AI’ is gaining in importance. It says:

“To build trust with users and stakeholders, application leaders must make these models more interpretable and explainable. Unfortunately, most of these advanced AI models are complex black boxes that are not able to explain why they reached a specific recommendation or a decision.”

Announcing AI Explainability 360

To address this gap between machine learning models and business users, we are launching AI Explainability 360, a collection of algorithms that can help explain machine learning models and their predictions.

The algorithms fall into five classes:

  • Data explanation: understand the data
  • Global direct explanation: the model itself is understandable
  • Local direct explanation: the individual prediction is meaningful
  • Global post hoc explanation: an understandable model explains the black box model
  • Local post hoc explanation: an explanation is created for the individual prediction

Currently, there are eight algorithms spread across these classes available in the toolkit.

Image showing the eight algorithms
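To give a flavor of the API, here is a minimal sketch of a local post hoc explanation using the toolkit’s LIME wrapper. The dataset and model are illustrative choices, and the sketch assumes the aix360 wrapper passes its arguments through to the underlying lime package, as the toolkit’s examples do.

```python
# A minimal sketch of a local post hoc explanation, assuming the
# aix360 LIME wrapper mirrors the underlying `lime` package's API.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from aix360.algorithms.lime import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# A "black box" model whose individual predictions we want to explain.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Which features pushed the model toward (or away from) its
# prediction for this one instance?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())
```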

Take a look at the algorithms, use them in your AI applications, and contribute your own explainability algorithms.

Additional offerings in AI Explainability 360 include:

  • The AI Explainability 360 Python package includes proxy explainability metrics, such as faithfulness, for judging the quality of an explanation (see the sketch after this list).
  • An interactive demo provides an introduction to the concepts and capabilities by walking through an example use case from the perspective of different consumer personas.
  • The tutorials and other notebooks offer a deeper, data scientist-oriented introduction.
  • The complete API is also available.
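As an example of the proxy metrics, the sketch below scores an explanation’s faithfulness: how well its per-feature importance weights track the change in the model’s prediction when each feature is removed. The faithfulness_metric signature shown here follows the toolkit’s published examples and may differ across versions; the model and importance weights are illustrative stand-ins.

```python
# A sketch of scoring an explanation with a proxy metric, assuming
# aix360.metrics exposes faithfulness_metric(model, x, coefs, base),
# where `coefs` are per-feature importance weights for instance `x`
# and `base` holds baseline ("feature removed") values.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from aix360.metrics import faithfulness_metric

data = load_iris()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]
coefs = model.feature_importances_  # stand-in per-feature importance weights
base = np.zeros_like(x)             # "feature removed" baseline values

# Correlation between each feature's claimed importance and the drop in
# predicted probability when that feature is replaced by its baseline;
# values closer to 1 mean the explanation is more faithful to the model.
print(faithfulness_metric(model, x, coefs, base))
```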

Get started building the bridge toward trusted AI

The AI Explainability 360 toolkit is an open source project. To get started, clone the repository or install the pip package. You can use the following flowchart to choose the path you want to take.

Flowchart: choosing an explanation approach, starting from data explanation
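For reference, here is a minimal setup sketch. It assumes the package is published on PyPI as aix360 and uses module paths from the toolkit’s examples; check the repository README for the authoritative instructions.

```python
# Assuming the toolkit was installed with `pip install aix360`, or from a
# clone of the GitHub repository followed by `pip install -e .`.
# A quick smoke test: import explainers from two ends of the taxonomy.
from aix360.algorithms.protodash import ProtodashExplainer  # data explanation
from aix360.algorithms.lime import LimeTabularExplainer     # local post hoc

print("AI Explainability 360 is ready to use")
```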

IBM has a long history of supporting open source technologies that enable enterprise developers to be more productive and build reliable, innovative systems.

In partnership with the IBM Center for Open-Source Data and Artificial Intelligence Technologies (CODAIT), IBM Research previously released two other projects in the trusted AI space: AI Fairness 360 and the Adversarial Robustness Toolbox. If you are a business user, you can use Watson OpenScale for trusted AI capabilities such as bias detection, explainability, and drift detection.

Get started, provide feedback, extend the technology, and contribute back to the community! As with any open source project, the toolkit’s quality is only as good as the contributions it receives from the community.
