AI Fairness 360: Attacking bias from all angles!

September 19, 2018 Neil MacKinnon

With the power of machine learning, AI is now making decisions for us, from recommendation systems that personalize our experiences based on our habits to credit scores that rank us based on our behavior. As AI becomes more common, more powerful, and able to make critical decisions in areas such as criminal justice and hiring, there’s a growing demand for AI that is fair, transparent, and accountable to everyone.

Under-representation in data sets and misinterpretation of data can lead to major flaws and biases in critical decision-making across many industries. These flaws and biases may not be easy to detect without the right tools. We at IBM are deeply committed to delivering services that are unbiased, explainable, value-aligned, and transparent. To back up that commitment, we are pleased to announce the launch of AI Fairness 360 (AIF360), an open source library that helps detect and remove bias in machine learning models and data sets.

The AI Fairness 360 Python package includes a comprehensive set of metrics for data sets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in data sets and models. Containing over 30 fairness metrics and 9 state-of-the-art bias mitigation algorithms developed by the research community, it’s designed to translate algorithmic research from the lab into actual practice in domains as wide-ranging as finance, human capital management, healthcare, and education.

To understand the motivation and the research efforts behind this project’s launch, please refer to AI Fairness 360: Raise AI right! by Aleksandra Mojsilovic, Angel Diaz, and Ruchir Puri. In this post, we’re going to walk through different ways you can use this capability.

1. Using the AIF360 open source toolkit

The easiest way to get started with the AIF360 library itself is to do a pip install:

pip install aif360

Or clone the repository and run pip install from within the folder:

git clone https://github.com/IBM/AIF360
cd AIF360
pip install -r requirements.txt
pip install .


Now you can get started with the open source tutorials. The AIF360 repository contains a diverse collection of Jupyter notebooks that you can use in various ways; for example, the Gender classification of face images tutorial provides a comprehensive use case for detecting and mitigating bias in the automatic gender classification of facial images.

The tutorial demonstrates the use of AIF360 to study the differential performance of a custom classifier. It uses several fairness metrics (statistical parity difference, disparate impact, equal opportunity difference, average odds difference, and Theil index) and the reweighing mitigation algorithm. It works with the UTK data set.
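
If you’d like to see what computing these metrics looks like in code, here’s a minimal sketch using AIF360’s ClassificationMetric. The tiny pandas DataFrame and the 'gender' attribute are illustrative stand-ins, not the tutorial’s actual image data:

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import ClassificationMetric

# Toy ground-truth labels and predictions, for illustration only;
# 'gender' (1 = privileged group) is the protected attribute.
df_true = pd.DataFrame({'gender': [0, 0, 1, 1, 0, 1],
                        'label':  [1, 0, 1, 1, 0, 1]})
df_pred = df_true.copy()
df_pred['label'] = [0, 0, 1, 1, 0, 1]  # the classifier's outputs

kwargs = dict(label_names=['label'],
              protected_attribute_names=['gender'],
              favorable_label=1, unfavorable_label=0)
dataset_true = BinaryLabelDataset(df=df_true, **kwargs)
dataset_pred = BinaryLabelDataset(df=df_pred, **kwargs)

metric = ClassificationMetric(dataset_true, dataset_pred,
                              unprivileged_groups=[{'gender': 0}],
                              privileged_groups=[{'gender': 1}])
print('Statistical parity difference:', metric.statistical_parity_difference())
print('Disparate impact:', metric.disparate_impact())
print('Equal opportunity difference:', metric.equal_opportunity_difference())
print('Average odds difference:', metric.average_odds_difference())
print('Theil index:', metric.theil_index())

Roughly speaking, values near 0 for the difference-based metrics and the Theil index, and near 1 for disparate impact, indicate that the two groups are treated similarly.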

In a nutshell, we follow these steps in the tutorial:

  • Process images from the UTK data set and load them as an AIF360 data set
  • Learn a baseline classifier and obtain its fairness metrics
  • Call the Reweighing algorithm to obtain instance weights (see the sketch below)
  • Learn a new classifier with the instance weights and obtain updated fairness metrics
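
To make the mitigation step concrete, here’s how Reweighing is typically invoked. This sketch continues the toy example above, so dataset_true and the 'gender' groups are again illustrative stand-ins for the tutorial’s UTK-based training data:

from aif360.algorithms.preprocessing import Reweighing

# 'dataset_true' is the toy AIF360 BinaryLabelDataset built in the
# earlier sketch, standing in for the tutorial's training split.
rw = Reweighing(unprivileged_groups=[{'gender': 0}],
                privileged_groups=[{'gender': 1}])
dataset_transf = rw.fit_transform(dataset_true)

# Each instance now carries a weight that compensates for group
# imbalance; pass these to a classifier that supports sample
# weights (e.g. scikit-learn's sample_weight) and then recompute
# the fairness metrics.
print(dataset_transf.instance_weights)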

2. Using the hosted AIF360 web application

By using our hosted web application, you can choose a sample data set and explore its associated demos. Bias can creep in through the data used to train a model, so we have provided three sample data sets that you can use to explore bias checking and mitigation. Each data set contains attributes that should be protected to avoid bias. For example, running the toolkit on the “Adult census income” data set with default thresholds detects bias against unprivileged groups (non-white or female) on several metrics.
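
If you’d rather run that same check locally, a sketch along these lines works with the toolkit’s bundled AdultDataset loader (note that AIF360 expects the raw census files to be downloaded into aif360/data/raw/adult/ first; the loader prints instructions if they are missing):

from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Load the Adult census income data set. By default AIF360 treats
# 'race' and 'sex' as protected attributes, encoded as 1 for the
# privileged group (White, Male) and 0 otherwise.
dataset = AdultDataset()

metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=[{'sex': 0}],
                                  privileged_groups=[{'sex': 1}])

# A negative difference, or a ratio below 1, means the unprivileged
# group receives favorable outcomes less often.
print('Statistical parity difference:', metric.statistical_parity_difference())
print('Disparate impact:', metric.disparate_impact())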

3. Using AIF360 code patterns

To simplify your development process and streamline your search for open source code, IBM has released over a hundred code patterns. These patterns do the dirty work for the developer: they include curated packages of code, one-click GitHub repos, documentation, and resources that address some of the most popular areas of development, including AI, blockchain, containers, and IoT. For example, let’s say you want to create a chatbot for any industry, with a Slack front end and a transactional back end, which is a common design pattern today. By leveraging an IBM code pattern, you can start from that Slack front end and transactional back end and focus on your application, not on what it takes to stand it up and make it work.

As part of our collection of Artificial Intelligence and Data Analytics code patterns, we’ve created a new pattern, Ensuring fairness when processing loan applications, to help you get started with AIF360. This pattern shows you how to launch a Jupyter Notebook locally or in the IBM Cloud and use it to run AIF360. In short:

  • The user launches a Jupyter Notebook (either locally or on Watson Studio)
  • The notebook imports the AIF360 toolkit
  • Data is loaded into the notebook
  • The user runs the notebook, which uses the AIF360 toolkit to assess the fairness of a machine learning model
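
To give a feel for what such a notebook does, here’s a minimal sketch in the same spirit, using AIF360’s bundled German credit data set as a stand-in for loan applications (as with the Adult data, the raw files must first be placed under aif360/data/raw/german/):

from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Load the German credit data with age as the protected attribute;
# applicants aged 25 or older form the privileged group.
dataset = GermanDataset(protected_attribute_names=['age'],
                        privileged_classes=[lambda x: x >= 25],
                        features_to_drop=['personal_status', 'sex'])

groups = dict(unprivileged_groups=[{'age': 0}],
              privileged_groups=[{'age': 1}])

# Gap in favorable outcomes before and after reweighing the data.
print('Before:', BinaryLabelDatasetMetric(dataset, **groups).mean_difference())
dataset_transf = Reweighing(**groups).fit_transform(dataset)
print('After:', BinaryLabelDatasetMetric(dataset_transf, **groups).mean_difference())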

We’re planning to have many other code patterns on AI Fairness 360 available soon!

Flow diagram from AIF360 code pattern

Start freeing your AI systems from bias today!

The AI Fairness 360 toolkit, along with the Adversarial Robustness Toolbox (ART), Fabric for Deep Learning (FfDL), and the Model Asset Exchange (MAX), is available on GitHub to deploy, use, and extend.

There are additional code patterns available around each of these open source projects, so get started today!

We look forward to your feedback! Join us in freeing the next generation of AI systems from inherent biases and creating trusted, transparent AI pipelines!
