Publication | Open Access
Algorithmic Impact Assessments and Accountability
Citations: 138
References: 60
Year: 2021
Venue: Unknown
Topics: Organizations, Engineering, Algorithmic Accountability, Law, Software Engineering, Software Analysis, Program Evaluation, Responsible AI, Management, Impact Assessment, Algorithmic Governmentality, Algorithmic Impact Assessments, Public Policy, Algorithmic Bias, Algorithmic Impacts, Algorithmic Transparency, Automated Decision-making, Impact Assessments, Accountability, Decision Science, Social Responsibility
Algorithmic impact assessments (AIAs) are an emerging accountability tool modeled after impact assessments in other domains, where impacts are evaluative constructs used to identify and mitigate harms, and each domain sets distinct expectations, norms, and responsibilities for defining harms, conducting assessments, and enforcing changes. The authors aim to examine AIAs relative to other domains and propose that the FAccT community treat impacts as co‑constructed accountability objects, align them closely with real harms, and involve diverse expertise and affected communities. The study concludes with lessons for assembling cross‑expertise consensus to co‑construct impacts and build robust accountability relationships.
Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. They are modeled after impact assessments in other domains. Our study of the history of impact assessments shows that "impacts" are an evaluative construct that enable actors to identify and ameliorate harms experienced because of a policy decision or system. Every domain has different expectations and norms around what constitutes impacts and harms, how potential harms are rendered as impacts of a particular undertaking, who is responsible for conducting such assessments, and who has the authority to act on them to demand changes to that undertaking. By examining proposals for AIAs in relation to other domains, we find that there is a distinct risk of constructing algorithmic impacts as organizationally understandable metrics that are nonetheless inappropriately distant from the harms experienced by people, and which fall short of building the relationships required for effective accountability. As impact assessments become a commonplace process for evaluating harms, the FAccT community, in its efforts to address this challenge, should A) understand impacts as objects that are co-constructed accountability relationships, B) attempt to construct impacts as close as possible to actual harms, and C) recognize that accountability governance requires the input of various types of expertise and affected communities. We conclude with lessons for assembling cross-expertise consensus for the co-construction of impacts and building robust accountability relationships.