Publication | Closed Access
Going deeper with convolutions
Citations: 46.2K · References: 18 · Year: 2015 · Venue: unknown
Keywords: Image Classification, Deep Neural Networks, Image Analysis, Machine Learning, Data Science, Machine Vision, Pattern Recognition, Hebbian Principle, Engineering, Feature Learning, Convolutional Neural Network, Computer Science, Layers Deep Network, Video Transformer, Deep Learning, Neural Architecture Search, Computing Resources, Computer Vision
Summary: The Inception architecture improves the utilization of computational resources inside the network: depth and width are increased while the computational budget is held constant, guided by the Hebbian principle and the intuition of multi-scale processing. One incarnation, the 22-layer GoogLeNet, achieves state-of-the-art performance on ImageNet classification and detection.
We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
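The abstract's central claim, increasing depth and width while keeping the computational budget constant, rests on placing inexpensive 1×1 convolutions before the larger 3×3 and 5×5 filters to reduce channel depth. A minimal back-of-the-envelope sketch of that trade-off is below; all channel counts (`in_ch`, `out3`, `red3`, etc.) are illustrative assumptions, not the paper's exact figures.

```python
def conv_params(in_ch, out_ch, k):
    """Weights in a k x k convolution (biases omitted for simplicity)."""
    return in_ch * out_ch * k * k

def naive_block(in_ch, out3, out5):
    """3x3 and 5x5 branches applied directly to the full-depth input."""
    return conv_params(in_ch, out3, 3) + conv_params(in_ch, out5, 5)

def inception_block(in_ch, red3, out3, red5, out5):
    """The same branches, each preceded by a 1x1 'bottleneck' reduction."""
    branch3 = conv_params(in_ch, red3, 1) + conv_params(red3, out3, 3)
    branch5 = conv_params(in_ch, red5, 1) + conv_params(red5, out5, 5)
    return branch3 + branch5

if __name__ == "__main__":
    in_ch = 256  # assumed input depth for illustration
    naive = naive_block(in_ch, out3=128, out5=32)
    reduced = inception_block(in_ch, red3=96, out3=128, red5=16, out5=32)
    print(naive, reduced)  # 499712 152064: the reduced block needs ~3x fewer weights
```

With these assumed dimensions the bottlenecked block uses roughly a third of the weights of the naive multi-scale block, which is the mechanism that lets the network grow deeper and wider at a fixed budget.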
| Year | Citations |
|---|---|
| 2017 | 75.5K |
| 1998 | 56.5K |
| 2014 | 31.2K |
| 1989 | 11.6K |
| 2012 | 6.6K |
| 2013 | 3.5K |
| 2014 | 3.2K |
| 2012 | 2.9K |
| 1992 | 1.8K |
| 2007 | 1.7K |