Concepedia

TLDR

Algorithmic decision‑making systems are ubiquitous and rely on complex learning methods and vast amounts of data, yet they can unintentionally produce unfair outcomes that disproportionately hurt (or benefit) groups defined by sensitive attributes such as race or sex. The paper proposes a flexible mechanism that uses an intuitive measure of decision‑boundary (un)fairness to design fair classifiers, instantiated with two well-known classifiers, logistic regression and support vector machines. Experiments on real‑world data show that the mechanism allows fine‑grained control over the degree of fairness, often at only a small cost in accuracy.

Abstract

Algorithmic decision making systems are ubiquitous across a wide variety of online as well as offline services. These systems rely on complex learning methods and vast amounts of data to optimize the service functionality, satisfaction of the end user and profitability. However, there is a growing concern that these automated decisions can lead, even in the absence of intent, to a lack of fairness, i.e., their outcomes can disproportionately hurt (or, benefit) particular groups of people sharing one or more sensitive attributes (e.g., race, sex). In this paper, we introduce a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness. We instantiate this mechanism with two well-known classifiers, logistic regression and support vector machines, and show on real-world data that our mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy.
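To make the idea concrete, below is a minimal Python sketch of the logistic-regression instantiation. It assumes, for illustration only, that the boundary (un)fairness measure is the covariance between the sensitive attribute and the signed distance to the decision boundary, bounded by a user-chosen threshold; the synthetic data, variable names, and threshold value are hypothetical and not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sketch: train a logistic regression whose decision boundary
# is constrained so that the covariance between the sensitive attribute z
# and the signed distance to the boundary stays within a threshold c.
# The covariance proxy and all names here are illustrative assumptions.

rng = np.random.default_rng(0)

# Toy data: 2 features, binary label y, binary sensitive attribute z.
n = 500
X = rng.normal(size=(n, 2))
z = (rng.random(n) < 0.5).astype(float)
# Labels correlated with both a feature and the sensitive attribute, so an
# unconstrained classifier would pick up the z-correlation.
y = ((X[:, 0] + 1.5 * (z - 0.5) + 0.3 * rng.normal(size=n)) > 0).astype(float)

Xb = np.hstack([X, np.ones((n, 1))])  # add intercept column

def log_loss(theta):
    # Numerically stable logistic loss for labels in {0, 1}.
    s = Xb @ theta
    return np.mean(np.logaddexp(0.0, s) - y * s)

def boundary_cov(theta):
    # Empirical covariance between z and the signed distance to the boundary.
    d = Xb @ theta
    return np.mean((z - z.mean()) * d)

c = 0.01  # fairness threshold: smaller values enforce stricter fairness

# Two linear inequality constraints encode |boundary_cov(theta)| <= c.
cons = [
    {"type": "ineq", "fun": lambda th: c - boundary_cov(th)},
    {"type": "ineq", "fun": lambda th: c + boundary_cov(th)},
]

res = minimize(log_loss, x0=np.zeros(3), method="SLSQP", constraints=cons)
theta = res.x

pred = (Xb @ theta > 0).astype(float)
print(f"accuracy: {np.mean(pred == y):.3f}")
print(f"boundary covariance: {boundary_cov(theta):+.4f} (threshold ±{c})")
# Acceptance-rate gap between groups, a simple disparity check:
print(f"P(pred=1|z=1) - P(pred=1|z=0): "
      f"{pred[z == 1].mean() - pred[z == 0].mean():+.3f}")
```

Under these assumptions, sweeping the threshold c from large to near zero trades accuracy against fairness, which mirrors the fine-grained control over the degree of fairness that the abstract describes.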