Publication | Open Access
FAIRVIS: Visual Analytics for Discovering Intersectional Bias in Machine Learning
Citations: 15 | References: 29 | Year: 2019
Venue: IEEE Conference on Visual Analytics Science and Technology (VAST)
Artificial Intelligence · Engineering · Machine Learning · Computational Social Science · Data Science · Data Mining · Bias · Statistics · Visual Analytics · Algorithmic Bias · Knowledge Discovery · Visual Data Mining · Computer Science · Bias Detection · Dataset Bias · Similar Subgroups · Algorithmic Fairness · Specific Subgroups · Certain Demographic Subgroups
TL;DR: Machine learning’s widespread use in real-world domains brings benefits but also risks of embedding implicit and explicit biases that disadvantage demographic subgroups, and identifying these biases is difficult because of the many definitions of fairness and the numerous potential subgroups. FAIRVIS is a mixed-initiative visual analytics system that enables users to audit machine learning model fairness by discovering and investigating subgroups. It combines a novel subgroup discovery technique with coordinated visual views that let users apply domain knowledge to generate known subgroups, explore suggested and similar subgroups, and drill down from a high-level overview to detailed subgroup performance. Experiments on income and recidivism prediction datasets show that FAIRVIS reveals previously hidden biases, illustrating how interactive visualization can aid data scientists and the public in building more equitable algorithms.
Abstract: The growing capability and accessibility of machine learning have led to its application to many real-world domains and data about people. Despite the benefits algorithmic systems may bring, models can reflect, inject, or exacerbate implicit and explicit societal biases in their outputs, disadvantaging certain demographic subgroups. Discovering which biases a machine learning model has introduced is a major challenge, given the numerous definitions of fairness and the large number of potentially affected subgroups. We present FAIRVIS, a mixed-initiative visual analytics system that integrates a novel subgroup discovery technique so that users can audit the fairness of machine learning models. Through FAIRVIS, users can apply domain knowledge to generate and investigate known subgroups, and explore suggested and similar subgroups. FAIRVIS's coordinated views let users explore a high-level overview of subgroup performance and then drill down into detailed investigation of specific subgroups. We show how FAIRVIS helps discover biases in two real datasets used to predict income and recidivism. As a visual analytics system devoted to discovering bias in machine learning, FAIRVIS demonstrates how interactive visualization can help data scientists and the general public understand and create more equitable algorithmic systems.
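FAIRVIS's actual subgroup discovery technique and coordinated views are described in the full paper. As a rough, hypothetical sketch of the underlying audit it supports, the Python below enumerates intersectional subgroups (e.g., sex, race, and sex × race) over synthetic data and scores each one against the model's overall performance. All column names and data here are invented for illustration; this is not FAIRVIS's API or implementation.

```python
from itertools import combinations

import numpy as np
import pandas as pd

# Hypothetical synthetic data standing in for a tabular dataset with
# demographic attributes, ground-truth labels, and model predictions.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "sex": rng.choice(["female", "male"], size=n),
    "race": rng.choice(["group_a", "group_b", "group_c"], size=n),
    "label": rng.integers(0, 2, size=n),  # ground-truth outcome
    "pred": rng.integers(0, 2, size=n),   # model prediction
})

def subgroup_metrics(g: pd.DataFrame) -> pd.Series:
    """Accuracy and false positive rate for one subgroup."""
    negatives = g[g["label"] == 0]
    return pd.Series({
        "size": len(g),
        "accuracy": (g["pred"] == g["label"]).mean(),
        "fpr": (negatives["pred"] == 1).mean() if len(negatives) else np.nan,
    })

# Enumerate every intersection of the chosen demographic attributes
# and score each resulting subgroup.
attrs = ["sex", "race"]
frames = []
for k in range(1, len(attrs) + 1):
    for cols in combinations(attrs, k):
        scored = df.groupby(list(cols))[["pred", "label"]].apply(subgroup_metrics)
        scored.index = [
            ", ".join(f"{c}={v}" for c, v in
                      zip(cols, key if isinstance(key, tuple) else (key,)))
            for key in scored.index
        ]
        frames.append(scored)

audit = pd.concat(frames)
audit["acc_gap"] = audit["accuracy"] - (df["pred"] == df["label"]).mean()
# Subgroups with the largest accuracy deficit relative to the full
# dataset are candidates for closer inspection.
print(audit.sort_values("acc_gap").head(10))
```

A real audit would substitute the dataset's actual demographic columns and whichever fairness metrics (e.g., false negative rate, precision) matter in the domain; FAIRVIS additionally suggests subgroups automatically and lets users explore them interactively rather than via a fixed report.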