Authors: Anne Gentle
2022-10-27

tldr - powered by Generative AI

The presentation discusses the importance of removing problematic language from code and documentation to promote inclusivity and avoid potential legal issues. It provides examples of how to categorize and prioritize the removal work, as well as tools and strategies for carrying it out.
  • Problematic language in code and documentation can create legal and ethical issues and exclude certain groups of people
  • Categorizing and prioritizing the work of removing problematic language can help teams plan and execute the work more effectively
  • Tools such as VS Code extensions and automated checks can help identify and eliminate problematic language (a minimal sketch follows this list)
  • Product documentation can sometimes adopt replacement terminology ahead of the product itself
  • Compliance with language policies should be defined clearly and consistently across products and teams
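
Below is a minimal sketch of the kind of automated check the summary mentions. The term list, the priority categories, and the file pattern are illustrative assumptions, not the speaker's actual configuration.

```python
# Sketch of an automated terminology check suitable for a CI pipeline.
# TERMS is a hypothetical denylist grouped by priority, so teams can
# categorize and stage the cleanup work as the talk suggests.
import pathlib
import re
import sys

TERMS = {
    "high": {"whitelist": "allowlist", "blacklist": "denylist"},
    "medium": {"master": "primary", "slave": "replica"},
}

def scan(root: str) -> int:
    findings = 0
    for path in pathlib.Path(root).rglob("*.md"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for priority, mapping in TERMS.items():
            for term, replacement in mapping.items():
                for match in re.finditer(rf"\b{term}\b", text, re.IGNORECASE):
                    line = text.count("\n", 0, match.start()) + 1
                    print(f"{path}:{line}: [{priority}] '{term}' -> consider '{replacement}'")
                    findings += 1
    return findings

if __name__ == "__main__":
    # A non-zero exit code lets the script act as a merge gate.
    sys.exit(1 if scan(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)
```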
Authors: Bhakti Radharapu
2022-06-23

How do I measure fairness? Is my ML model biased? How do I remediate bias in my model? This talk presents an overview of the main concepts of identifying, measuring, and remediating bias in ML systems at scale. We begin by discussing how to measure fairness in production models and the causes of algorithmic bias in such systems. We then deep-dive into performing bias remediation at every step of the ML lifecycle: data collection, pre-processing, in-training, and post-processing. We focus on a range of open source tools and techniques in the ecosystem that can be used to create comprehensive fairness workflows; these have not only been vetted by the academic ML community but have also scaled well to industry-level challenges. We hope that by the end of this talk, ML developers will be able not only to "flag" fairness issues in ML but also to "fix" them by incorporating these tools and techniques into their ML workflows.
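
As a concrete illustration of the measure-then-remediate loop the abstract describes, here is a minimal sketch using plain NumPy on synthetic data. Real workflows would lean on the vetted open source tooling the talk surveys; the per-group threshold adjustment below is a deliberately simplified post-processing remediation, and all data and numbers are assumptions.

```python
# Measure a demographic parity gap, then close it with a simple
# post-processing remediation (per-group decision thresholds).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                # synthetic protected attribute
scores = rng.uniform(0, 1, n) + 0.1 * group  # model scores, skewed by group

def selection_rates(scores, group, thresholds):
    """Positive-prediction rate per group under per-group thresholds."""
    preds = scores >= np.where(group == 1, thresholds[1], thresholds[0])
    return [preds[group == g].mean() for g in (0, 1)]

# Measure: demographic parity difference at a shared threshold of 0.5.
r0, r1 = selection_rates(scores, group, (0.5, 0.5))
print(f"before: rates {r0:.3f} / {r1:.3f}, gap {abs(r0 - r1):.3f}")

# Remediate: raise the advantaged group's threshold until the gap closes.
t1 = 0.5
while True:
    r0, r1 = selection_rates(scores, group, (0.5, t1))
    if r1 <= r0 or t1 >= 1.0:
        break
    t1 += 0.005
print(f"after:  rates {r0:.3f} / {r1:.3f}, gap {abs(r0 - r1):.3f} (t1={t1:.3f})")
```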
Conference:  Transform X 2021
Authors: Nicol Turner Lee
2021-10-07

Join Nicol Turner Lee, Senior Fellow at The Brookings Institution, as she explores the societal need for fair AI that is equitable to all. She describes the individual and enterprise consequences of bias in AI and the need for multi-disciplinary and multi-methodological approaches to preventing it. How does unintentional bias in AI occur? How do you self-regulate your AI algorithms so that they benefit everyone in the demographics you are trying to serve? How can the fields of data science, law, and sociology join forces to ensure that AI is unbiased? Join this session to hear a thoughtful, research-driven exploration of the risks, impacts, and approaches to mitigating unintended bias in AI.
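
One concrete form of the self-regulation the session asks about is disaggregated evaluation: checking a model's quality separately for each demographic group it serves. The sketch below does this on synthetic data; the groups, the tolerance, and the error rates are all illustrative assumptions.

```python
# Disaggregated accuracy audit: flag groups that fall noticeably
# below the overall accuracy. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
groups = np.array(["A", "B", "C"])[rng.integers(0, 3, n)]
y_true = rng.integers(0, 2, n)
noise = np.where(groups == "C", 0.30, 0.10)   # model is noisier for group C
y_pred = np.where(rng.uniform(size=n) < noise, 1 - y_true, y_true)

overall = (y_pred == y_true).mean()
print(f"overall accuracy: {overall:.3f}")
for g in np.unique(groups):
    acc = (y_pred == y_true)[groups == g].mean()
    flag = "  <-- investigate" if overall - acc > 0.05 else ""
    print(f"group {g}: accuracy {acc:.3f}{flag}")
```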
Conference:  Transform X 2021
Authors: Safiya U. Noble, Mark MacCarthy, Aylin Caliskan
2021-10-07

tldr - powered by Generative AI

The panel discusses the importance of developers understanding the broader impact of their work and having the agency to object, complain, and focus attention on ethical and lawful practices in technology development. The panelists also emphasize the need for interdisciplinary and demographically diverse perspectives, and for conscientious objectors within the development community.
  • Developers need to understand the broader impact of their work and ask whether some of it should be done at all
  • Technology workers have agency to object, complain, and focus attention on ethical and lawful practices in technology development
  • Interdisciplinary and demographically diverse perspectives are important in technology development
  • Conscientious objectors in the development community can lead the way in thinking about where the limits should be and what the alternatives could be
  • Developers should practice algorithmic and machine-learning hygiene to avoid missteps that foreclose opportunities for various populations