Concepedia

TLDR

Interest in algorithmic hiring has surged as a potential bias‑mitigation tool, yet little is known about its real‑world use. The study seeks to understand how algorithmic hiring assessments are constructed, validated, and scrutinized for bias. The authors survey vendors of algorithmic pre‑employment assessments, cataloging their development and validation disclosures, and evaluate their bias‑mitigation practices from technical and legal angles.

Abstract

There has been rapidly growing interest in the use of algorithms in hiring, especially as a means to address or mitigate bias. Yet, to date, little is known about how these methods are used in practice. How are algorithmic assessments built, validated, and examined for bias? In this work, we document and analyze the claims and practices of companies offering algorithms for employment assessment. In particular, we identify vendors of algorithmic pre-employment assessments (i.e., algorithms to screen candidates), document what they have disclosed about their development and validation procedures, and evaluate their practices, focusing particularly on efforts to detect and mitigate bias. Our analysis considers both technical and legal perspectives. Technically, we consider the various choices vendors make regarding data collection and prediction targets, and explore the risks and trade-offs that these choices pose. We also discuss how algorithmic de-biasing techniques interface with, and create challenges for, antidiscrimination law.
