
Mitigating Bias in Models with SHAP and Fairlearn

2022-06-22

Author: Sean Owen


Abstract

Issues of "fairness" in machine learning are rightfully at the forefront today. It's not enough to have an accurate model; practitioners increasingly need to assess when and why a predictive model's results are unfair, often to groups of people. While much has already been said about detecting unfairness or bias, relatively little attention has been given to what to do about it. My model output is "unfair"; now what? This session will examine how open source tools like SHAP and Microsoft's Fairlearn can be used to actually correct for model bias. It will also discuss what "fair" even means and the tradeoffs that different answers imply. In the accompanying technical demo, these tools will be used, along with xgboost and MLflow, to show how two different mitigation strategies can be retrofitted to existing modeling pipelines.
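As a rough illustration of what such a retrofit might look like (this is not the session's actual demo code), the sketch below wraps an xgboost classifier in Fairlearn's ThresholdOptimizer post-processing mitigator and compares a demographic-parity metric before and after mitigation. The synthetic dataset, the randomly assigned sensitive attribute, and all variable names are placeholders chosen for illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from fairlearn.postprocessing import ThresholdOptimizer
from fairlearn.metrics import demographic_parity_difference

# Synthetic data; a random binary attribute stands in for a real sensitive feature
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
sensitive = np.random.RandomState(0).randint(0, 2, size=len(y))
X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, random_state=0
)

# Train the unconstrained model, then retrofit a post-processing mitigator around it
model = XGBClassifier().fit(X_tr, y_tr)
mitigator = ThresholdOptimizer(
    estimator=model,
    constraints="demographic_parity",
    predict_method="predict_proba",
    prefit=True,
)
mitigator.fit(X_tr, y_tr, sensitive_features=s_tr)

# Compare group disparity in predictions before and after mitigation
raw_pred = model.predict(X_te)
fair_pred = mitigator.predict(X_te, sensitive_features=s_te)
print("before:", demographic_parity_difference(y_te, raw_pred, sensitive_features=s_te))
print("after: ", demographic_parity_difference(y_te, fair_pred, sensitive_features=s_te))

ThresholdOptimizer is only one post-processing option in Fairlearn; its reduction-based mitigators, such as ExponentiatedGradient, instead retrain the underlying estimator under a fairness constraint, which involves a different cost/benefit tradeoff.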

Materials: