Concepedia

TLDR

The study investigates the origins of algorithmic bias and evaluates interventions to reduce it. Approximately 400 AI engineers were randomly assigned to develop predictive models of standardized test scores for OECD residents under varying experimental conditions; the resulting algorithms were then evaluated against realized test outcomes and through audit-like manipulations of algorithmic inputs, with particular attention to whether engineer demographics influence bias detection and reduction. The paper describes the experimental design and motivation, with full results posted at https://ssrn.com/abstract=3615404.

Abstract

Why do biased algorithmic predictions arise, and what interventions can prevent them? We examine this topic with a field experiment on using machine learning to predict human capital. We randomly assign approximately 400 AI engineers to develop software under different experimental conditions to predict standardized test scores of OECD residents. We then assess the resulting predictive algorithms using the realized test performances, and through randomized audit-like manipulations of algorithmic inputs. We also use the diversity of our subject population to measure whether demographically non-traditional engineers are more likely to notice and reduce algorithmic bias, and whether algorithmic prediction errors are correlated within programmer demographic groups. This document describes our experimental design and motivation; the full results of our experiment are available at https://ssrn.com/abstract=3615404.
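The "audit-like manipulations of algorithmic inputs" mentioned above can be understood as holding all features of a record fixed while perturbing a single input, then measuring how the model's prediction shifts. The sketch below illustrates that idea in a minimal form; the function and variable names (`audit_input_manipulation`, `predict`, `column`, `values`) are hypothetical and not taken from the paper, and the toy linear model stands in for the engineers' submitted algorithms.

```python
import numpy as np

def audit_input_manipulation(predict, X, column, values, seed=None):
    """Audit-style probe (hypothetical helper, not the paper's code):
    copy the data, randomly reassign one input column, and return the
    per-row change in predictions. Nonzero changes indicate that the
    prediction depends on the manipulated input."""
    rng = np.random.default_rng(seed)
    X_manip = X.copy()
    X_manip[:, column] = rng.choice(values, size=len(X))
    return predict(X_manip) - predict(X)

# Toy stand-in model: a linear scorer that weights column 2 heavily.
def predict(X):
    return X @ np.array([0.5, 1.0, 2.0])

X = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])

# Reassign column 2 at random between 0.0 and 1.0 and observe the shift.
deltas = audit_input_manipulation(predict, X, column=2, values=[0.0, 1.0], seed=0)
```

Because the manipulated column carries weight 2.0 in the toy model, each entry of `deltas` is a multiple of 2.0; a model that ignored the column would yield all-zero deltas, which is the behavior an audit of this kind would hope to see for a protected attribute.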
