Publication | Closed Access
Return of the Devil in the Details: Delving Deep into Convolutional Nets
Citations: 3.1K · References: 23 · Year: 2014 · Venue: Unknown
Convolutional Neural Network, Engineering, Machine Learning, Image Classification, Image Analysis, Data Science, Pattern Recognition, Video Transformer, Data Augmentation, Machine Vision, Feature Learning, Object Detection, Computer Science, Deep Learning, Computer Vision, Deep Neural Networks, Scene Interpretation, Convolutional Neural Networks, Scene Understanding, Convolutional Nets, Latest Generation
The latest generation of CNNs has achieved impressive results on image recognition and object detection benchmarks, yet it remains unclear how different CNN methods compare to each other and to earlier shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. The study rigorously evaluates these new techniques, comparing diverse deep architectures on a common benchmark and uncovering key implementation details. The authors find that CNN output dimensionality can be reduced substantially without harming performance, that deep and shallow methods have aspects that can be successfully shared, and that data augmentation benefits both; they also make code and models publicly available.
The latest generation of Convolutional Neural Networks (CNNs) has achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest of the community in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on a common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost. Source code and models to reproduce the experiments in the paper are made publicly available.
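The data augmentation the abstract refers to is typically of the crop-and-flip kind: an image is expanded into the centre and four corner crops plus their horizontal mirrors, each view is encoded, and the resulting descriptors are pooled. The sketch below illustrates that scheme with NumPy; the function names, the 224-pixel crop size, and the plug-in `encoder` callable are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def augment_crops_flips(img, crop=224):
    """Ten standard augmented views of an H x W x C image: the centre and
    four corner crops, plus horizontal flips of each (an illustrative
    version of the crop/flip augmentation discussed in the abstract)."""
    h, w = img.shape[:2]
    ys = [0, 0, h - crop, h - crop, (h - crop) // 2]  # top-left, top-right,
    xs = [0, w - crop, 0, w - crop, (w - crop) // 2]  # bottom-left/right, centre
    crops = [img[y:y + crop, x:x + crop] for y, x in zip(ys, xs)]
    return crops + [c[:, ::-1] for c in crops]  # add horizontal mirrors

def pooled_descriptor(img, encoder, crop=224):
    """Encode every augmented view with `encoder` (any callable mapping a
    crop to a feature vector) and average the descriptors."""
    views = augment_crops_flips(img, crop)
    feats = np.stack([encoder(v) for v in views])
    return feats.mean(axis=0)
```

The same pooling step works whether `encoder` is a CNN forward pass or a shallow encoding such as a Fisher Vector, which is what lets the augmentation transfer between deep and shallow pipelines.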