Publication | Closed Access
Explaining Neural Networks Semantically and Quantitatively
Citations: 53 · References: 45 · Year: 2019 · Venue: Unknown
Keywords: Artificial Intelligence, Convolutional Neural Network, Cognitive Science, Engineering, Machine Learning, Data Science, Explanation-based Learning, Computational Neuroscience, Neural Networks Semantically, AI Foundation, Quantitative Explanation, CNN Prediction, Social Sciences, Interpretability, Computer Science, Deep Learning, Explainable AI
This paper presents a method for deriving a semantic and quantitative explanation of the knowledge encoded in a convolutional neural network (CNN). Estimating the specific rationale behind each prediction a CNN makes is a key problem in understanding neural networks, and it is of significant value in real applications. In this study, we propose to distill knowledge from the CNN into an explainable additive model, which explains CNN predictions quantitatively. We also discuss the problem of biased interpretation of CNN predictions and, to overcome it, develop prior losses that guide the learning of the explainable additive model. Experimental results demonstrate the effectiveness of our method.
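The core idea of the abstract can be illustrated with a minimal sketch: fit an additive model whose weighted sum of per-concept scores approximates the CNN output (a distillation loss), while a prior loss pulls the weights toward prior values to discourage biased attributions. Everything below is hypothetical toy data, not the paper's actual model; the concept activations `phi`, the prior `alpha`, and the hyperparameter `lam` are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical): concept activations phi(x) and CNN outputs y.
n_samples, n_concepts = 200, 5
phi = rng.normal(size=(n_samples, n_concepts))       # semantic concept scores
true_w = np.array([1.5, -0.5, 0.8, 0.0, 2.0])
y = phi @ true_w + 0.1 * rng.normal(size=n_samples)  # CNN output to distill

# Additive explainer: y_hat = phi @ w + b, trained with a distillation
# (regression) loss plus a prior loss lam * ||w - alpha||^2 / 2 that pulls
# each weight toward a prior value, discouraging interpretations that dump
# all the credit on a single concept.
alpha = np.ones(n_concepts)  # assumed prior over concept contributions
lam = 0.1                    # assumed prior-loss strength
w = np.zeros(n_concepts)
b = 0.0
lr = 0.05
for _ in range(2000):
    resid = phi @ w + b - y
    grad_w = phi.T @ resid / n_samples + lam * (w - alpha)
    grad_b = resid.mean()
    w -= lr * grad_w
    b -= lr * grad_b

# After training, w[i] * phi[x, i] is the quantitative contribution of
# semantic concept i to the explained prediction for sample x.
```

The prior loss acts like an L2 penalty centered on `alpha` rather than on zero, so the fitted weights trade off fidelity to the CNN against agreement with the prior.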