Concepedia

Publication | Open Access

Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment

Citations: 585
References: 15
Year: 2016

Abstract

Automated data-driven decision making systems are increasingly being used to assist, or even replace humans in many settings. These systems function by learning from historical decisions, often taken by humans. In order to maximize the utility of these systems (or, classifiers), their training involves minimizing the errors (or, misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., misclassification rates for females are higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy.
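The abstract defines disparate mistreatment in terms of group-wise misclassification rates. The sketch below is a hypothetical helper (not the paper's exact formulation, which uses convex-concave constraints during training) that merely measures the gaps in false positive and false negative rates between two groups, assuming binary labels and a binary sensitive attribute:

```python
import numpy as np

def disparate_mistreatment(y_true, y_pred, group):
    """Illustrative measure of disparate mistreatment: the absolute gaps
    in false-positive and false-negative rates between two groups.
    A post-hoc diagnostic sketch, not the paper's training-time method."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in np.unique(group):
        mask = group == g
        fp = np.sum((y_pred[mask] == 1) & (y_true[mask] == 0))
        fn = np.sum((y_pred[mask] == 0) & (y_true[mask] == 1))
        neg = np.sum(y_true[mask] == 0)  # actual negatives in group g
        pos = np.sum(y_true[mask] == 1)  # actual positives in group g
        rates[g] = (fp / neg if neg else 0.0,   # group FPR
                    fn / pos if pos else 0.0)   # group FNR
    (fpr_a, fnr_a), (fpr_b, fnr_b) = rates.values()
    return abs(fpr_a - fpr_b), abs(fnr_a - fnr_b)

# Example: group 0 is misclassified half the time, group 1 never.
y_true = [0, 0, 1, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
d_fpr, d_fnr = disparate_mistreatment(y_true, y_pred, group)
```

A classifier free of disparate mistreatment would drive both gaps toward zero; the paper's contribution is enforcing this during training rather than measuring it after the fact.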
