TLDR

Translating machine learning models into clinical practice requires establishing clinicians' trust, for which explainability is considered essential, yet the field lacks concrete definitions of usable explanations across settings. To identify the aspects of explainability that build trust, the authors surveyed clinicians in intensive care units and emergency departments and used their feedback to characterize when explanations improve trust in ML models. They further identified the classes of explanations most relevant to clinicians and defined concrete metrics for rigorously evaluating clinical explainability methods.

Abstract

Translating machine learning (ML) models effectively to clinical practice requires establishing clinicians' trust. Explainability, or the ability of an ML model to justify its outcomes and assist clinicians in rationalizing the model prediction, has been generally understood to be critical to establishing trust. However, the field suffers from a lack of concrete definitions for usable explanations in different settings. To identify the specific aspects of explainability that may catalyze building trust in ML models, we surveyed clinicians from two distinct acute care specialties (the Intensive Care Unit and the Emergency Department). We used their feedback to characterize when explainability helps to improve clinicians' trust in ML models. We further identified the classes of explanations that clinicians consider most relevant and crucial for effective translation to clinical practice. Finally, we discerned concrete metrics for rigorous evaluation of clinical explainability methods. By aligning clinicians' and ML researchers' perceptions of explainability, we hope to facilitate the endorsement, broader adoption, and sustained use of ML systems in healthcare.
