Publication | Closed Access
Bilinear CNN Models for Fine-Grained Visual Recognition
2K Citations · 33 References · ICCV 2015
Keywords: Convolutional Neural Network, Engineering, Machine Learning, Image Classification, Image Analysis, Data Science, Pattern Recognition, Bilinear Models, Bilinear CNN Models, Vision Recognition, Machine Vision, Feature Learning, Recognition Architecture, Computer Science, Deep Learning, Computer Vision, Feature Extractors, Object Recognition
The bilinear CNN architecture models local pairwise feature interactions in a translationally invariant manner, making it well suited for fine-grained categorization, and generalizes orderless texture descriptors such as the Fisher vector, VLAD, and O2P. The authors propose bilinear CNN models that combine two feature extractors via outer products and pool the results, and conduct experiments to analyze the effects of fine-tuning and network choice. They implement the bilinear form by multiplying the outputs of two CNN feature extractors at each spatial location, pooling to form a descriptor, and training the entire network end-to-end with only image labels. On the CUB-200-2011 dataset the models achieve 84.1% accuracy, outperform prior state-of-the-art methods, and run at 8 frames/sec on an NVIDIA Tesla K40 GPU; the code is publicly released.
We propose bilinear models, a recognition architecture that consists of two feature extractors whose outputs are multiplied using an outer product at each location of the image and pooled to obtain an image descriptor. This architecture can model local pairwise feature interactions in a translationally invariant manner, which is particularly useful for fine-grained categorization. It also generalizes various orderless texture descriptors such as the Fisher vector, VLAD, and O2P. We present experiments with bilinear models where the feature extractors are based on convolutional neural networks. The bilinear form simplifies gradient computation and allows end-to-end training of both networks using image labels only. Using networks initialized on the ImageNet dataset followed by domain-specific fine-tuning, we obtain 84.1% accuracy on the CUB-200-2011 dataset, requiring only category labels at training time. We present experiments and visualizations that analyze the effects of fine-tuning and the choice of the two networks on the speed and accuracy of the models. Results show that the architecture compares favorably to the existing state of the art on a number of fine-grained datasets while being substantially simpler and easier to train. Moreover, our most accurate model is fairly efficient, running at 8 frames/sec on an NVIDIA Tesla K40 GPU. The source code for the complete system will be made available at http://vis-www.cs.umass.edu/bcnn.
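The pooling step described in the abstract — an outer product of two feature maps at each spatial location, summed over locations, followed by signed square-root and L2 normalization — can be sketched as follows. This is a minimal NumPy illustration, not the authors' released code; the function name and shapes are assumptions for the example.

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Sketch of bilinear pooling (illustrative, not the official release).

    feat_a: (H, W, C1) activations from feature extractor A
    feat_b: (H, W, C2) activations from feature extractor B at the
            same spatial resolution
    Returns a (C1*C2,) image descriptor.
    """
    H, W, C1 = feat_a.shape
    C2 = feat_b.shape[2]
    a = feat_a.reshape(H * W, C1)
    b = feat_b.reshape(H * W, C2)
    # Summing the per-location outer products is equivalent to a^T b.
    phi = (a.T @ b).reshape(-1)            # (C1 * C2,)
    # Signed square-root followed by L2 normalization, as in the paper.
    phi = np.sign(phi) * np.sqrt(np.abs(phi))
    return phi / (np.linalg.norm(phi) + 1e-12)
```

Because the sum of outer products collapses to a single matrix product, the pooled descriptor is orderless: permuting spatial locations leaves it unchanged, which is what connects this architecture to Fisher vectors, VLAD, and O2P.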