Concepedia

Publication | Open Access

Algorithmic Decision-Making and the Control Problem

123 Citations | 37 References | Year: 2019

TLDR

The control problem—human operators becoming complacent or over‑reliant on autonomous systems—has long been recognized by industrial psychologists and engineers, yet its relevance to machine‑learning contexts has been largely overlooked. This paper addresses that gap by proposing three strategies to mitigate the control problem, the most promising of which is a complementary coupling between highly proficient algorithmic tools and human agents. The authors recommend a dynamic, complementary human–machine partnership and outline six key design principles that all such systems should embody. They conclude that algorithmic decision tools should be used in high‑stakes or safety‑critical contexts only when they are demonstrably superior to humans, and that the six principles provide a framework for evaluating and guiding human–machine system design.

Abstract

The danger of human operators devolving responsibility to machines and failing to detect cases where they fail has been recognised for many years by industrial psychologists and engineers studying the human operators of complex machines. We call it “the control problem”, understood as the tendency of the human within a human–machine control loop to become complacent, over-reliant or unduly diffident when faced with the outputs of a reliable autonomous system. While the control problem has been investigated for some time, up to this point its manifestation in machine learning contexts has not received serious attention. This paper aims to fill that gap. We argue that, except in certain special circumstances, algorithmic decision tools should not be used in high-stakes or safety-critical decisions unless the systems concerned are significantly “better than human” in the relevant domain or subdomain of decision-making. More concretely, we recommend three strategies to address the control problem, the most promising of which involves a complementary (and potentially dynamic) coupling between highly proficient algorithmic tools and human agents working alongside one another. We also identify six key principles which all such human–machine systems should reflect in their design. These can serve as a framework both for assessing the viability of any such human–machine system and for guiding the design and implementation of such systems generally.
