Authors: Bhakti Radharapu
2022-06-23

How do I measure fairness? Is my ML model biased? How do I remediate bias in my model? This talk presents an overview of the main concepts of identifying, measuring, and remediating bias in ML systems at scale. We begin by discussing how to measure fairness in production models and the causes of algorithmic bias in these systems. We then deep-dive into performing bias remediation at every step of the ML life-cycle: data collection, pre-processing, in-training, and post-processing. We will focus on a gamut of open-source tools and techniques in the ecosystem that can be used to create comprehensive fairness workflows; these have not only been vetted by the academic ML community but have also scaled well to industry-level challenges. We hope that by the end of this talk, ML developers will be able not only to "flag" fairness issues in ML but also to "fix" them by incorporating these tools and techniques into their ML workflows.
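The abstract names no single library, but as a concrete taste of the "measure" step, here is a minimal sketch using Fairlearn's MetricFrame, one open-source tool of the kind the talk surveys; the CSV file and the column names (label, prediction, gender) are hypothetical placeholders.

```python
# Minimal sketch: measuring group fairness with Fairlearn's MetricFrame.
# The data file and column names below are hypothetical placeholders.
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

df = pd.read_csv("predictions.csv")          # hypothetical model outputs
y_true, y_pred = df["label"], df["prediction"]
sensitive = df["gender"]                     # hypothetical sensitive attribute

# Slice standard metrics by group to spot disparities.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "tpr": recall_score},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print(mf.by_group)                             # per-group metric table
print(mf.difference(method="between_groups"))  # largest gap per metric

# Selection-rate gap (demographic parity difference); 0 means parity.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=sensitive))
```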
Authors: Sean Owen
2022-06-22

Issues of "fairness" in machine learning are rightfully at the forefront today. It's not enough to have an accurate model; practitioners increasingly need to assess when and why a predictive model's results are unfair, often to groups of people. While much has already been said about detecting unfairness or bias, relatively little attention has been given to what to do about it. My model output is "unfair"; now what? This session will examine how open-source tools like SHAP and Microsoft's Fairlearn can be used to actually correct for model bias. It will also discuss what "fair" even means and the trade-offs that different answers imply. In the accompanying technical demo, these tools will be used, along with xgboost and MLflow, to show how two different mitigation strategies can be retrofitted to existing modeling pipelines.
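The abstract does not spell out which two mitigation strategies the demo retrofits, but a post-processing approach of the kind Fairlearn provides looks roughly like the sketch below; the data file and column names are hypothetical, and the demographic-parity constraint is an illustrative assumption, not necessarily the one used in the session.

```python
# Sketch of one retrofit mitigation: Fairlearn's ThresholdOptimizer
# post-processes an already-trained xgboost model so that selection rates
# are equalized across groups. Data and column names are hypothetical.
import pandas as pd
import xgboost as xgb
from fairlearn.postprocessing import ThresholdOptimizer
from fairlearn.metrics import demographic_parity_difference

df = pd.read_csv("train.csv")                # hypothetical training data
X = df.drop(columns=["label", "gender"])
y, sensitive = df["label"], df["gender"]

model = xgb.XGBClassifier(n_estimators=200).fit(X, y)

# Post-processing: choose per-group decision thresholds that satisfy the
# demographic-parity constraint without retraining the underlying model.
mitigator = ThresholdOptimizer(
    estimator=model,
    constraints="demographic_parity",
    predict_method="predict_proba",
    prefit=True,
)
mitigator.fit(X, y, sensitive_features=sensitive)
y_fair = mitigator.predict(X, sensitive_features=sensitive)

print(demographic_parity_difference(y, y_fair, sensitive_features=sensitive))
```

A second, in-training strategy could swap the ThresholdOptimizer for Fairlearn's ExponentiatedGradient reduction, retraining the xgboost model under the same constraint; either way, the before-and-after disparity metrics could be logged to MLflow for comparison.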
Conference: Transform X 2021
Authors: Ya Xu
2021-10-07

tl;dr

LinkedIn is using responsible AI to create economic opportunities for its members across 200 countries worldwide. The company focuses on best practices in fairness and privacy to build a better, more equitable user experience.
  • LinkedIn follows Microsoft's responsible AI principles, the six values it strives to build into its products
  • Of these, the company has made the most progress on fairness and privacy
  • Fairness is a hard problem to solve, but LinkedIn has developed a framework to quantify it
  • LinkedIn uses experimentation to verify that product changes benefit every member and do not introduce unintended consequences, as illustrated in the sketch below
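As a toy illustration of that last point (emphatically not LinkedIn's internal tooling), the sketch below checks whether an experiment's lift holds within every member group, so a change that helps on average is not quietly hurting a subgroup; the file and column names are invented.

```python
# Toy illustration: per-group analysis of an A/B test, flagging groups where
# the treatment underperforms control. Data and column names are invented.
import pandas as pd

df = pd.read_csv("experiment.csv")  # columns: variant, group, engaged (0/1)

# Engagement rate per experiment arm within each member group.
rates = df.groupby(["group", "variant"])["engaged"].mean().unstack("variant")
rates["lift"] = rates["treatment"] - rates["control"]
print(rates)

# Groups where the change introduces an unintended regression.
print(rates[rates["lift"] < 0])
```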