Publication | Open Access
Towards A Rigorous Science of Interpretable Machine Learning
Citations: 3.1K · References: 21 · Year: 2017
Keywords: Artificial Intelligence, Rigorous Evaluation, Interpretable Machine Learning, Rigorous Science, Engineering, Machine Learning, Data Science, Automated Reasoning, Machine Learning Tool, Predictive Analytics, Knowledge Discovery, AI Safety, Interpretability, Computer Science, Explainable AI
Interpretable machine learning has gained popularity as a means to explain model outputs and assess safety and fairness, yet consensus on its definition and measurement remains elusive. This paper defines interpretability and delineates when it is appropriate. It proposes a taxonomy for rigorous evaluation and identifies open questions to advance the science of interpretable machine learning.
As machine learning systems become ubiquitous, there has been a surge of interest in interpretable machine learning: systems that provide explanations for their outputs. These explanations are often used to qualitatively assess other criteria, such as safety or non-discrimination. However, despite this interest, there is little consensus on what interpretable machine learning is or how it should be measured. In this position paper, we first define interpretability and describe when it is needed (and when it is not). Next, we propose a taxonomy for rigorous evaluation and expose open questions towards a more rigorous science of interpretable machine learning.