Publication | Open Access
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Citations: 732
Year: 2017
The interpretation of deep learning models is a challenge due to their size, complexity, and often opaque internal state. In addition, many systems, such as image classifiers, operate on low-level features rather than high-level concepts. To address these challenges, we introduce Concept Activation Vectors (CAVs), which provide an interpretation of a neural net's internal state in terms of human-friendly concepts. The key idea is to view the high-dimensional internal state of a neural net as an aid, not an obstacle. We show how to use CAVs as part of a technique, Testing with CAVs (TCAV), that uses directional derivatives to quantify the degree to which a user-defined concept is important to a classification result--for example, how sensitive a prediction of "zebra" is to the presence of stripes. Using the domain of image classification as a testing ground, we describe how CAVs may be used to explore hypotheses and generate insights for a standard image classification network as well as a medical application.
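As a rough illustration of the idea described in the abstract (not the authors' released code), the Python sketch below derives a CAV as the normal to a linear classifier that separates concept activations from random activations at some layer, then computes a TCAV-style score as the fraction of class examples whose directional derivative along the CAV is positive. The arrays `concept_acts`, `random_acts`, and `class_grads` are hypothetical stand-ins for layer activations and class-logit gradients that would be extracted from a real network.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """Fit a linear classifier separating concept vs. random activations;
    the CAV is the unit normal to its decision boundary."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)),
                        np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    v = clf.coef_[0]                  # normal to the separating hyperplane
    return v / np.linalg.norm(v)

def tcav_score(cav, class_grads):
    """class_grads: (n_examples, d) gradients of the class logit with
    respect to the layer activations. The directional derivative along
    the CAV is their dot product; the score is the fraction of examples
    where nudging the activation toward the concept raises the logit."""
    return float(np.mean(class_grads @ cav > 0))

# Toy usage with random stand-ins for real activations and gradients.
rng = np.random.default_rng(0)
cav = compute_cav(rng.normal(1.0, 1.0, (50, 16)),   # "concept" activations
                  rng.normal(0.0, 1.0, (50, 16)))   # random counterexamples
print(tcav_score(cav, rng.normal(0.5, 1.0, (40, 16))))
```

A score near 1 would suggest the class prediction is consistently sensitive to the concept direction (as in the "zebra"/stripes example above), while a score near 0.5 suggests no consistent influence; in practice the paper's method also compares against CAVs trained on random data to test statistical significance.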