Publication | Open Access
AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias
Citations: 259
Year: 2018
Fairness is an increasingly important concern as machine learning models are used to support decision making in high-stakes applications such as mortgage lending, hiring, and prison sentencing. This paper introduces a new open-source Python toolkit for algorithmic fairness, AI Fairness 360 (AIF360), released under an Apache v2.0 license (https://github.com/ibm/aif360). The main objectives of this toolkit are to facilitate the transition of fairness research algorithms to use in an industrial setting and to provide a common framework in which fairness researchers can share and evaluate algorithms.

The package includes a comprehensive set of fairness metrics for datasets and models, explanations for these metrics, and algorithms to mitigate bias in datasets and models. It also includes an interactive Web experience (https://aif360.mybluemix.net) that provides a gentle introduction to the concepts and capabilities for line-of-business users, as well as extensive documentation, usage guidance, and industry-specific tutorials that enable data scientists and practitioners to incorporate the most appropriate tool for their problem into their work products. The architecture of the package conforms to a standard paradigm used in data science, further improving usability for practitioners. This architectural design and its abstractions enable researchers and developers to extend the toolkit with new algorithms and improvements, and to use it for performance benchmarking. A built-in testing infrastructure maintains code quality.
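To make the notion of a "fairness metric" concrete, the sketch below computes two widely used group fairness measures on toy data: statistical parity difference and disparate impact. This is an illustrative pure-Python sketch of the underlying definitions, not the AIF360 API itself; the function names, the toy labels, and the convention that group value 1 denotes the privileged group are all assumptions made for the example.

```python
# Illustrative sketch (not the AIF360 API): two group fairness metrics
# computed from binary outcomes and a binary protected attribute.

def favorable_rate(labels, groups, group_value):
    """Fraction of favorable outcomes (label == 1) within one group."""
    members = [y for y, g in zip(labels, groups) if g == group_value]
    return sum(members) / len(members)

def statistical_parity_difference(labels, groups):
    """P(y=1 | unprivileged) - P(y=1 | privileged); 0 indicates parity."""
    return favorable_rate(labels, groups, 0) - favorable_rate(labels, groups, 1)

def disparate_impact(labels, groups):
    """Ratio of unprivileged to privileged favorable rates; 1 indicates parity."""
    return favorable_rate(labels, groups, 0) / favorable_rate(labels, groups, 1)

# Toy data: group 1 is the (hypothetical) privileged group.
labels = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]

print(statistical_parity_difference(labels, groups))  # 0.25 - 0.75 = -0.5
print(disparate_impact(labels, groups))               # 0.25 / 0.75 ≈ 0.333
```

Negative parity difference (or disparate impact well below 1) signals that the unprivileged group receives favorable outcomes less often, which is the kind of dataset- and model-level signal the toolkit's metrics are designed to surface before a mitigation algorithm is applied.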