Publication | Open Access
The Case for Process Fairness in Learning: Feature Selection for Fair Decision Making
Citations: 131 | References: 6 | Year: 2016
Machine learning methods are increasingly being used to inform, or sometimes even directly make, important decisions about humans. A number of recent works have focused on the fairness of the outcomes of such decisions, particularly on avoiding decisions that affect users of different sensitive groups (e.g., race, gender) disparately. In this paper, we propose to consider the fairness of the process of decision making. Process fairness can be measured by estimating the degree to which people consider various features fair to use when making an important legal decision. We examine the task of predicting whether a prisoner is likely to commit a crime again once released, analyzing the dataset considered by ProPublica relating to the COMPAS system. We introduce new measures of people's discomfort with using various features, show how these measures can be estimated, and consider the effect of removing the uncomfortable features on prediction accuracy and on outcome fairness. Our empirical analysis suggests that process fairness may be achieved with little cost to outcome fairness, but that some loss of accuracy is unavoidable.
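The core experiment the abstract describes — dropping features people are uncomfortable with and measuring the resulting accuracy cost — can be sketched as follows. This is an illustration on synthetic data, not the paper's code, data, or classifier; which columns count as "contested" is a hypothetical stand-in for the survey-based discomfort measures the paper introduces.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Synthetic stand-in data: columns 0-1 play the role of contested
# ("uncomfortable") features, columns 2-3 of uncontroversial ones.
X = rng.normal(size=(n, 4))
# The label depends on all four features, so removing two loses signal.
y = np.where(X.sum(axis=1) + rng.normal(scale=0.5, size=n) > 0, 1.0, -1.0)

X_tr, X_te = X[:1500], X[1500:]
y_tr, y_te = y[:1500], y[1500:]

def accuracy(cols):
    # Least-squares linear classifier on the selected feature columns.
    w, *_ = np.linalg.lstsq(X_tr[:, cols], y_tr, rcond=None)
    return float(np.mean(np.sign(X_te[:, cols] @ w) == y_te))

acc_full = accuracy([0, 1, 2, 3])   # all features
acc_fair = accuracy([2, 3])         # contested features removed
print(f"all features: {acc_full:.3f}, contested removed: {acc_fair:.3f}")
```

On data constructed this way, the process-fair model stays well above chance but gives up some accuracy, mirroring the abstract's finding that some loss of accuracy is unavoidable.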