Publication | Open Access
Data preprocessing techniques for classification without discrimination
Citations: 1.2K
References: 22
Year: 2011
Artificial Intelligence, Engineering, Machine Learning, Biometrics, Discrimination, Classification Method, Data Science, Data Mining, Pattern Recognition, Management, Sensitive Attributes, Biased Decision Process, Algorithmic Bias, Knowledge Discovery, Disparate Impact, Computer Science, Data Classification, Data Preprocessing Techniques, Algorithmic Fairness, Data Treatment, Classification, Binary Sensitive Attribute, Data Modeling
The Discrimination‑Aware Classification Problem was introduced to address training data that exhibit unlawful discrimination toward sensitive attributes such as gender or ethnicity, and it is relevant when data are generated by a biased process or when the sensitive attribute proxies unobserved features. The study aims to learn a classifier that maximizes accuracy while eliminating discrimination in test‑data predictions, focusing on a single binary sensitive attribute and a two‑class classification setting. We analyze the optimal trade‑off between accuracy and non‑discrimination for pure classifiers and propose algorithmic solutions that preprocess data—by suppressing the sensitive attribute, massaging class labels, and reweighing or resampling—to remove discrimination before learning a classifier. The preprocessing techniques were implemented in a modified version of Weka, and experiments on real‑life data demonstrate their effectiveness.
Recently, the following Discrimination-Aware Classification Problem was introduced: Suppose we are given training data that exhibit unlawful discrimination, e.g., toward sensitive attributes such as gender or ethnicity. The task is to learn a classifier that optimizes accuracy, but does not have this discrimination in its predictions on test data. This problem is relevant in many settings, such as when the data are generated by a biased decision process or when the sensitive attribute serves as a proxy for unobserved features. In this paper, we concentrate on the case with only one binary sensitive attribute and a two-class classification problem. We first study the theoretically optimal trade-off between accuracy and non-discrimination for pure classifiers. Then, we look at algorithmic solutions that preprocess the data to remove discrimination before a classifier is learned. We survey and extend our existing data preprocessing techniques, namely suppression of the sensitive attribute, massaging the dataset by changing class labels, and reweighing or resampling the data to remove discrimination without relabeling instances. These preprocessing techniques have been implemented in a modified version of Weka, and we present the results of experiments on real-life data.
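Of the preprocessing techniques listed above, reweighing is the simplest to illustrate: each instance is weighted so that the sensitive attribute and the class label become statistically independent in the weighted data, without relabeling any instance. A minimal sketch of that idea, assuming a binary sensitive attribute and binary labels supplied as plain Python lists (the function name and data layout here are illustrative, not from the paper's Weka implementation):

```python
from collections import Counter

def reweigh(sensitive, labels):
    """Assign each instance the weight P(s) * P(c) / P(s, c),
    i.e. the expected joint probability under independence divided
    by the observed joint probability of its (sensitive, class) group."""
    n = len(labels)
    s_counts = Counter(sensitive)            # marginal counts of sensitive values
    c_counts = Counter(labels)               # marginal counts of class labels
    sc_counts = Counter(zip(sensitive, labels))  # joint counts per group
    return [
        (s_counts[s] / n) * (c_counts[c] / n) / (sc_counts[(s, c)] / n)
        for s, c in zip(sensitive, labels)
    ]

# Toy data: sensitive attribute (0 = protected group), binary class label.
sens = [0, 0, 0, 1, 1, 1, 1, 1]
ys   = [0, 0, 1, 0, 1, 1, 1, 1]
weights = reweigh(sens, ys)
```

After reweighing, the weighted positive rate is the same in both groups, so a classifier trained with these instance weights sees discrimination-free data.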