
Abstract

Automated data-driven decision systems are ubiquitous across a wide variety of online services, from online social networking and e-commerce to e-government. These systems rely on complex learning methods and vast amounts of data to optimize service functionality, end-user satisfaction, and profitability. However, there is a growing concern that these automated decisions can discriminate against users, even in the absence of intent, producing unfair outcomes: outcomes with a disproportionately large adverse impact on particular groups of people who share one or more sensitive attributes (e.g., race, sex). In this paper, we introduce a flexible mechanism to design fair classifiers in a principled manner. We then instantiate this mechanism on three well-known classifiers: logistic regression, hinge loss, and linear and nonlinear support vector machines. Experiments on both synthetic and real-world data show that our mechanism allows fine-grained control of the level of fairness, often at a minimal cost in accuracy, and that it provides more flexibility than alternatives.
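The abstract does not spell out the mechanism, so the following is only an illustrative sketch of one common way to impose a fairness constraint on a classifier: penalize the covariance between the sensitive attribute and the signed distance to the decision boundary while minimizing logistic loss. All names, the synthetic data, and the penalty weight below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the sensitive attribute z is correlated with feature X[:, 0],
# so an unconstrained classifier picks up that correlation.
n = 2000
z = rng.integers(0, 2, n).astype(float)            # hypothetical sensitive attribute
X = np.column_stack([rng.normal(z * 1.5, 1.0),     # feature correlated with z
                     rng.normal(0.0, 1.0, n),      # independent feature
                     np.ones(n)])                  # intercept
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 0.75).astype(float)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit_logreg(X, y, z, fairness_weight=0.0, lr=0.1, steps=2000):
    """Gradient descent on logistic loss plus a squared penalty on the
    covariance between z and the signed distance X @ w to the boundary."""
    w = np.zeros(X.shape[1])
    zc = z - z.mean()
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y)              # logistic-loss gradient
        cov = zc @ (X @ w) / len(y)                # boundary covariance
        grad += fairness_weight * 2 * cov * (X.T @ zc) / len(y)
        w -= lr * grad
    return w

def boundary_cov(w):
    zc = z - z.mean()
    return abs(zc @ (X @ w) / len(z))

w_plain = fit_logreg(X, y, z)                      # unconstrained baseline
w_fair = fit_logreg(X, y, z, fairness_weight=10.0) # covariance penalized

print(boundary_cov(w_plain), boundary_cov(w_fair))
```

Raising `fairness_weight` trades accuracy for a smaller boundary covariance, which mirrors the fine-grained fairness/accuracy control the abstract describes.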
