Publication | Open Access
Rawlsian Fairness for Machine Learning
49 Citations | 15 References | Year: 2016
Topics: Artificial Intelligence, Engineering, Machine Learning, Technical Definition, Game Theory, Discrimination Index, Data Science, Bias, Experimental Economics, Fairness (Computer Systems), Decision Theory, Mechanism Design, Algorithmic Bias, Online Algorithm, Fair Resource Allocation, Computer Science, Exploration v. Exploitation, Online Decision Making, Contextual Bandit, Algorithmic Fairness, Business, Rawlsian Fairness, Decision Science
Motivated by concerns that automated decision-making procedures can unintentionally lead to discriminatory behavior, we study a technical definition of fairness modeled after John Rawls' notion of equality of opportunity. In the context of a simple model of online decision making, we give an algorithm that satisfies this fairness constraint, while still being able to learn at a rate that is comparable to (but necessarily worse than) that of the best algorithms absent a fairness constraint. We prove a regret bound for algorithms in the linear contextual bandit framework that is a significant improvement over our technical companion paper [16], which gives black-box reductions in a more general setting. We analyze our algorithms both theoretically and experimentally. Finally, we introduce the notion of a discrimination index, and show that standard algorithms for our problem exhibit structured discriminatory behavior, whereas the fair algorithms we develop do not.
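The abstract's fairness constraint, informally, is that one individual (bandit arm) may be preferred over another only when the algorithm has statistical evidence that its expected reward is genuinely higher. A common way to realize this is to play uniformly among all arms whose confidence intervals are transitively linked ("chained") to the arm with the highest upper confidence bound. The sketch below is my own minimal illustration of that idea, not the paper's algorithm: the function name, inputs, and the assumption that per-arm means and confidence half-widths are already available (the paper derives them from a linear contextual model) are simplifications for exposition.

```python
import numpy as np

def fair_selection(means, widths, rng=None):
    """One round of an interval-chaining fair selection rule (illustrative).

    means[i], widths[i]: estimated reward and confidence half-width for arm i.
    An arm may be favored over another only when their confidence intervals
    do not overlap, so we play uniformly at random among every arm whose
    interval is chained (transitively overlapping) to the arm with the
    highest upper confidence bound.
    """
    rng = rng or np.random.default_rng()
    lower = np.asarray(means) - widths
    upper = np.asarray(means) + widths
    # Seed the chain with the arm whose upper confidence bound is highest.
    chained = {int(np.argmax(upper))}
    changed = True
    while changed:
        changed = False
        for i in range(len(means)):
            # Add arm i if its interval overlaps any interval already chained.
            if i not in chained and any(
                lower[i] <= upper[j] and lower[j] <= upper[i] for j in chained
            ):
                chained.add(i)
                changed = True
    # Uniform play over the chained set: no arm in it is provably worse.
    return int(rng.choice(sorted(chained)))
```

With tight intervals the rule collapses to greedy play of the best arm; with wide, overlapping intervals it spreads play uniformly, which is the source of the slower (but still comparable) learning rate the abstract mentions.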
| Year | Citations |
|---|---|
| 2012 | 3.3K |
| 2010 | 2.4K |
| 1952 | 2.2K |
| 2016 | 1K |
| 2011 | 911 |
| 2011 | 383 |
| 2016 | 369 |
| 2014 | 313 |
| 2011 | 270 |
| 2016 | 192 |