Publication | Closed Access
Residual Attention Network for Image Classification
3.7K
Citations
33
References
2017
Year
CVPR
Image Classification · Convolutional Neural Network · Machine Vision · Image Analysis · Machine Learning · Engineering · Object Detection · Vision Language Model · Visual Question Answering · Attention Mechanism · Deep Learning · Residual Attention Network · Video Transformer · Attention Residual Learning · Computer Vision
The authors introduce the Residual Attention Network, a convolutional neural network that integrates an attention mechanism with state-of-the-art feed-forward architectures and can be trained end-to-end, including an attention residual learning scheme that enables very deep models. The network is constructed by stacking attention modules that generate adaptive, attention-aware features through a bottom-up, top-down feed-forward structure, allowing the model to scale to hundreds of layers while preserving efficient training. On CIFAR-10, CIFAR-100, and ImageNet, the Residual Attention Network attains state-of-the-art accuracy (3.90% error on CIFAR-10, 20.45% on CIFAR-100, 4.8% top-5 on ImageNet) and improves on ResNet-200 by 0.6% top-1 accuracy while using only 46% of its trunk depth and 69% of its forward FLOPs, and it remains robust to noisy labels.
In this work, we propose the Residual Attention Network, a convolutional neural network using an attention mechanism that can be incorporated into state-of-the-art feed-forward network architectures in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers go deeper. Inside each Attention Module, a bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks which can easily be scaled up to hundreds of layers. Extensive analyses are conducted on the CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets: CIFAR-10 (3.90% error), CIFAR-100 (20.45% error), and ImageNet (4.8% single-model, single-crop top-5 error). Notably, our method achieves a 0.6% top-1 accuracy improvement over ResNet-200 while using only 46% of its trunk depth and 69% of its forward FLOPs. Experiments also demonstrate that our network is robust against noisy labels.
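The attention residual learning mentioned in the abstract combines the trunk-branch features T(x) with a soft mask M(x) in [0, 1] from the bottom-up top-down branch as H(x) = (1 + M(x)) · T(x): the identity term keeps trunk features intact even where the mask is near zero, which is what lets many modules be stacked without degrading the signal. A minimal NumPy sketch of this combination rule (the function name and the use of a sigmoid to form the mask are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def attention_residual(trunk_features, mask_logits):
    """Attention residual learning: H(x) = (1 + M(x)) * T(x).

    trunk_features: output T(x) of the trunk branch (any array shape).
    mask_logits: raw scores from the mask branch, squashed to [0, 1]
        with a sigmoid to form the soft mask M(x) (an assumed choice here).
    """
    mask = 1.0 / (1.0 + np.exp(-mask_logits))  # sigmoid -> M(x) in [0, 1]
    # Identity term: where the mask is ~0 the trunk features pass through
    # unchanged, so stacked modules do not attenuate the signal.
    return (1.0 + mask) * trunk_features

# With a near-zero mask the module reduces to the identity mapping;
# with a near-one mask it doubles (emphasizes) the trunk features.
t = np.array([1.0, -2.0, 3.0])
out_identity = attention_residual(t, np.full_like(t, -50.0))  # M(x) ~ 0
out_emphasis = attention_residual(t, np.full_like(t, 50.0))   # M(x) ~ 1
```

A plain elementwise product M(x) · T(x) would instead shrink features wherever the mask is small, which is why the residual form is needed for very deep stacks.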