Publication | Closed Access
Designing Theory-Driven User-Centric Explainable AI
Citations: 798
References: 78
Year: 2019
Keywords: Artificial Intelligence, Engineering, Intelligent Systems, Interpretability, AI Healthcare, Public Health, Decision Theory, Ethics in Knowledge Representation, Cognitive Science, Behavioral Sciences, Explainable AI, Decision Support Systems, Computer Science, Automated Decision-making, Human Decision Making, Building XAI, Explanation-based Learning, Automated Reasoning, Health Informatics
From healthcare to criminal justice, artificial intelligence (AI) is increasingly supporting high-consequence human decisions. This has spurred the field of explainable AI (XAI). This paper seeks to strengthen empirical application-specific investigations of XAI by exploring the theoretical underpinnings of human decision making, drawing from the fields of philosophy and psychology. In this paper, we propose a conceptual framework for building human-centered, decision-theory-driven XAI based on an extensive review across these fields. Drawing on this framework, we identify pathways along which human cognitive patterns drive needs for building XAI and how XAI can mitigate common cognitive biases. We then put this framework into practice by designing and implementing an explainable clinical diagnostic tool for intensive care phenotyping and conducting a co-design exercise with clinicians. Thereafter, we draw insights into how this framework bridges algorithm-generated explanations and human decision-making theories. Finally, we discuss implications for XAI design and development.