Flagging *and* Fixing Bias in ML

2022-06-23

Author: Bhakti Radharapu


Abstract

How do I measure fairness? Is my ML model biased? How do I remediate bias in my model? This talk presents an overview of the main concepts of identifying, measuring, and remediating bias in ML systems at scale. We begin by discussing how to measure fairness in production models and the common causes of algorithmic bias. We then dive into bias remediation at each step of the ML lifecycle: data collection, pre-processing, in-training, and post-processing. We focus on open-source tools and techniques that can be combined into comprehensive fairness workflows; these have been vetted by the academic ML community and have also scaled well to industry workloads. We hope that by the end of this talk, ML developers will not only be able to "flag" fairness issues in ML but also "fix" them by incorporating these tools and techniques into their ML workflows.
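
To make the measurement step concrete, here is a minimal sketch using the open-source Fairlearn library. The abstract does not name specific libraries, so this tool choice, and the toy data, are assumptions for illustration only. The sketch computes per-group accuracy and the demographic parity difference, two common ways to "flag" a potential fairness issue.

```python
# A minimal fairness-measurement sketch, assuming the open-source
# Fairlearn library (one of several tools in this space) and toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

# Toy data: features X, labels y, and a binary sensitive attribute.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
sensitive = rng.integers(0, 2, size=1000)  # e.g. a demographic group
y = (X[:, 0] + 0.5 * sensitive + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)
y_pred = model.predict(X)

# Per-group accuracy: large gaps between groups can flag bias.
frame = MetricFrame(metrics=accuracy_score,
                    y_true=y, y_pred=y_pred,
                    sensitive_features=sensitive)
print(frame.by_group)

# Demographic parity difference: the gap in positive-prediction rates
# between groups (0 means parity).
print(demographic_parity_difference(y, y_pred, sensitive_features=sensitive))
```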
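And as one example of the "fix" side, here is a sketch of post-processing remediation, again assuming Fairlearn as an illustrative tool: its ThresholdOptimizer picks per-group decision thresholds that satisfy a fairness constraint without retraining the underlying model.

```python
# A minimal post-processing remediation sketch, assuming the open-source
# Fairlearn library and the same toy setup as above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.postprocessing import ThresholdOptimizer
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
sensitive = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * sensitive + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Wrap the trained model and equalize positive-prediction rates across groups.
postprocessor = ThresholdOptimizer(
    estimator=model,
    constraints="demographic_parity",
    predict_method="predict_proba",
    prefit=True,
)
postprocessor.fit(X, y, sensitive_features=sensitive)
y_fair = postprocessor.predict(X, sensitive_features=sensitive)

# The demographic parity gap should shrink toward 0 after remediation.
print(demographic_parity_difference(y, y_fair, sensitive_features=sensitive))
```

Post-processing is only one of the lifecycle stages the talk covers; the same library also offers in-training reductions (e.g. ExponentiatedGradient), and other open-source tools address the data-collection and pre-processing stages.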

Materials: