In this workshop, you'll see from the inside how critical AI incidents take shape. You'll learn how vast groups of people (including yourselves) are outliers in some datasets and can face negative consequences because of algorithmic bias. The workshop teaches practical methods for measuring fairness, and you'll receive a structured project-scoping framework you can use when building future ML/AI systems.
- Understand how fairness-related harms arise in AI systems
- Identify outliers and determine whether they represent marginalized groups
- Discover how hypothesis testing can inform government policy
- Learn how to combine social and technical talent to build more responsible AI systems
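As a taste of the "measuring fairness" methods covered in the workshop, here is a minimal sketch of one widely used metric, the demographic parity difference: the gap in positive-prediction rates between demographic groups. The function names and the example data are hypothetical, chosen for illustration only.

```python
# Sketch of a simple fairness metric: demographic parity difference.
# All group names and predictions below are illustrative, not real data.

def positive_rate(predictions):
    """Fraction of positive (1) predictions within one group."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_by_group):
    """Largest gap in positive-prediction rate across groups.

    0.0 means all groups receive positive predictions at the same rate;
    larger values indicate larger disparity.
    """
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs for two demographic groups
preds = {
    "group_a": [1, 1, 1, 1, 0, 1, 1, 0],  # 6/8 = 0.75 positive rate
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 = 0.25 positive rate
}

print(demographic_parity_difference(preds))  # 0.5
```

A gap this large would prompt further investigation in practice, e.g. checking whether the groups are represented as outliers in the training data.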