Publication | Open Access
Counterfactual Explanations for Machine Learning: A Review.
222 Citations · 62 References · Year: 2020
Topics: Artificial Intelligence, Engineering, Machine Learning, Counterfactual Explainability, Machine Learning Models, Causal Inference, Data Science, Interpretability, Cognitive Science, Predictive Analytics, Knowledge Discovery, Decision Support Systems, Computer Science, Automated Decision-making, Explanation-based Learning, Automated Reasoning, Business, Model Interpretability, Explainable AI
Machine learning is widely used in deployed decision systems, yet its opaque decision processes hinder stakeholder understanding, prompting a growing research effort to define the goals and methods of explainability. Counterfactual explanations, which describe how a model's output would differ under changed inputs, connect to established legal doctrine and are therefore appealing in high-impact domains such as finance and healthcare. The paper reviews and categorizes counterfactual explanation research, designs a rubric capturing desirable algorithmic properties, and evaluates all currently proposed algorithms against that rubric, enabling systematic comparison. The rubric highlights major research themes and identifies gaps and promising future directions.
Machine learning plays a role in many deployed decision systems, often in ways that are difficult or impossible for human stakeholders to understand. Explaining, in a human-understandable way, the relationship between the input and output of machine learning models is essential to the development of trustworthy machine-learning-based systems. A burgeoning body of research seeks to define the goals and methods of explainability in machine learning. In this paper, we review and categorize research on counterfactual explanations, a specific class of explanation that describes what would have happened had the input to a model been changed in a particular way. Modern approaches to counterfactual explainability in machine learning draw connections to established legal doctrine in many countries, making them appealing for fielded systems in high-impact areas such as finance and healthcare. Thus, we design a rubric with desirable properties of counterfactual explanation algorithms and comprehensively evaluate all currently proposed algorithms against that rubric. Our rubric provides easy comparison and comprehension of the advantages and disadvantages of different approaches and serves as an introduction to major research themes in this field. We also identify gaps and discuss promising research directions in the space of counterfactual explainability.
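To make the idea concrete, here is a minimal sketch (not taken from the paper) of what a counterfactual explanation computes: the smallest change to an input that flips a model's decision. The linear credit-scoring model, feature values, and the single-feature perturbation strategy below are all hypothetical, chosen only so the counterfactual has a closed-form solution.

```python
# Hedged illustration: a counterfactual explanation answers
# "what minimal change to the input would have flipped the decision?"
# The linear model and applicant data here are invented for the example.

def score(features, weights, bias):
    """Hypothetical linear decision model; score >= 0 means 'approve'."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def counterfactual_for_feature(features, weights, bias, i, threshold=0.0):
    """Smallest change to feature i that moves the score to the threshold."""
    w = weights[i]
    if w == 0:
        return None  # this feature cannot change the decision
    delta = (threshold - score(features, weights, bias)) / w
    cf = list(features)
    cf[i] += delta
    return cf

# Hypothetical applicant: [income (k$), debt (k$)]; score = -1.5, so denied.
weights, bias = [0.05, -0.2], -1.0
applicant = [30, 10]
cf = counterfactual_for_feature(applicant, weights, bias, i=0)
# cf reads as: "had your income been 60k instead of 30k, you'd be approved"
```

Real algorithms surveyed by the paper generalize this idea to non-linear models, multiple features, and constraints such as plausibility and actionability, typically by optimizing a distance-plus-validity objective rather than solving in closed form.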
| Year | Citations |
|---|---|
| 2001 | 27.3K |
| 1959 | 23.5K |
| 2017 | 20.1K |
| 2016 | 14K |
| 2024 | 13.1K |
| 2016 | 10.6K |
| 2018 | 5.4K |
| 2001 | 4.8K |
| 2004 | 3.4K |
| 1986 | 3K |