Publication | Closed Access
Selective Brain Damage: Measuring the Disparate Impact of Model Pruning
Citations: 14 · References: 0 · Year: 2019 · Venue: Unknown
Keywords: Convolutional Neural Network, Neuropsychology, Engineering, Machine Learning, Brain Function, Object Categorization, Neural Network, Selective Brain Damage, Attention, Social Sciences, Image Classification, Image Analysis, Data Science, Pattern Recognition, Sparse Neural Network, Brain Injury, Cognitive Neuroscience, Cognitive Science, Cortical Remodeling, Neuroinformatics, Neuroimaging, Computer Science, Lower Image Quality, Deep Learning, Model Compression, Computer Vision, Pie Images, Neuroscience, Brain Modeling
Neural network pruning techniques have demonstrated that it is possible to remove the majority of weights in a network with surprisingly little degradation to test set accuracy. However, this aggregate measure of performance conceals significant differences in how individual classes and images are impacted by pruning. We find that certain examples, which we term pruning identified exemplars (PIEs), and certain classes are systematically more impacted by the introduction of sparsity. Removing PIE images from the test set greatly improves top-1 accuracy for both pruned and non-pruned models. These hard-to-generalize-to images tend to be mislabelled, of lower image quality, depict multiple objects, or require fine-grained classification. These findings shed light on previously unknown trade-offs, and suggest that a high degree of caution should be exercised before pruning is used in sensitive domains.
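The two ideas in the abstract can be illustrated concretely. Below is a minimal sketch, not the authors' implementation: `magnitude_prune` zeroes out the smallest-magnitude fraction of a weight matrix (a common pruning heuristic), and `find_pies` flags examples where the modal (most frequent) prediction across a set of pruned models differs from the modal prediction across non-pruned models — the disagreement criterion the PIE idea rests on. The function names and array shapes here are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with smallest magnitude.

    A simple unstructured magnitude-pruning heuristic; real pruning
    schedules are typically applied gradually during training.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def modal_prediction(preds: np.ndarray) -> np.ndarray:
    """Most frequent predicted class per example.

    `preds` has shape (n_models, n_examples) with integer class labels.
    """
    return np.array([np.bincount(col).argmax() for col in preds.T])

def find_pies(dense_preds: np.ndarray, pruned_preds: np.ndarray) -> np.ndarray:
    """Boolean mask of pruning identified exemplars: examples where the
    modal prediction of pruned models disagrees with that of dense models."""
    return modal_prediction(dense_preds) != modal_prediction(pruned_preds)

# Toy demonstration: prune a random weight matrix to 50% sparsity.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
w_pruned = magnitude_prune(w, sparsity=0.5)
print("zeroed weights:", int((w_pruned == 0).sum()), "of", w.size)

# Two dense models agree on all three examples; two pruned models
# flip the third example, so it is flagged as a PIE.
dense = np.array([[0, 1, 2], [0, 1, 2]])
pruned = np.array([[0, 1, 1], [0, 2, 1]])
print("PIE mask:", find_pies(dense, pruned))
```

In this toy setup the third example is a PIE because both pruned models predict class 1 while both dense models predict class 2; the paper's point is that such disagreements concentrate on atypical, low-quality, or ambiguous images rather than spreading uniformly over the test set.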