The Unreasonable Effectiveness of Deep Features as a Perceptual Metric
11.5K citations · 47 references · CVPR 2018 · Closed access
Topics: Convolutional Neural Network, Engineering, Machine Learning, Object Categorization, VGG Network, Robust Feature, Deep Features, Image Analysis, Data Science, Video Transformer, Vision Recognition, Synthetic Image Generation, Cognitive Science, Machine Vision, Feature Learning, Vision Language Model, Computer Science, Deep Learning, Computer Vision, Deep Learning Community, Classic Metrics
Human perception of image similarity is intuitive yet complex. Conventional metrics like PSNR and SSIM are shallow functions that miss many perceptual nuances, whereas deep VGG features trained on ImageNet have proven remarkably effective as a perceptual loss. The study asks how perceptually aligned these deep "perceptual losses" really are and which elements drive their success. To investigate, the authors collected a dataset of human similarity judgments and systematically compared deep features from various architectures and levels of supervision against classic metrics. Deep features far surpass the prior metrics, and the result generalizes across architectures and supervision levels, indicating that perceptual similarity emerges broadly in deep visual representations.
While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions that fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on ImageNet classification have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new dataset of human perceptual similarity judgments. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins on our dataset. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
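The abstract's core recipe is to compare images in the feature space of a pretrained network rather than in pixel space. Below is a minimal sketch of that idea, assuming PyTorch with a recent torchvision (0.13+ weights API). The tapped layer indices, the function name `deep_feature_distance`, and the unweighted averaging are illustrative choices, not the authors' released implementation; the paper's actual LPIPS metric additionally learns per-channel linear weights calibrated on its human-judgment dataset.

```python
# Minimal sketch of a deep-feature perceptual distance, in the spirit of the
# paper. NOT the authors' learned LPIPS metric: LPIPS additionally applies
# per-channel linear weights fit to the human-judgment dataset. Here we just
# unit-normalize VGG16 activations at a few layers and average squared
# differences. Inputs are assumed resized and ImageNet-normalized.
import torch
from torchvision import models

# Indices in torchvision's vgg16().features of the ReLUs ending the
# conv1_2, conv2_2, conv3_3, conv4_3, and conv5_3 blocks (illustrative taps).
TAP_INDICES = {3, 8, 15, 22, 29}

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def deep_feature_distance(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Perceptual distance between image batches of shape (N, 3, H, W)."""
    dist = x.new_zeros(x.shape[0])
    fx, fy = x, y
    for i, layer in enumerate(vgg):
        fx, fy = layer(fx), layer(fy)
        if i in TAP_INDICES:
            # Unit-normalize each spatial feature vector along channels, then
            # average the squared difference over channels and positions.
            nx = fx / (fx.norm(dim=1, keepdim=True) + 1e-10)
            ny = fy / (fy.norm(dim=1, keepdim=True) + 1e-10)
            dist = dist + ((nx - ny) ** 2).mean(dim=(1, 2, 3))
    return dist

# Hypothetical usage: smaller values mean "perceptually closer".
# x, y = torch.rand(2, 3, 224, 224), torch.rand(2, 3, 224, 224)
# print(deep_feature_distance(x, y))
```

Even this uncalibrated deep-feature distance is reported in the paper to outperform PSNR and SSIM by large margins; the learned linear calibration improves agreement with human judgments further.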