Publication | Open Access
Discriminative Learning of Deep Convolutional Feature Point Descriptors
Citations: 843
References: 31
Year: 2015
Venue: Unknown
Keywords: Siamese Network, Geometric Learning, Convolutional Neural Network, Engineering, Feature Detection, Machine Learning, Image Classification, Image Analysis, Data Science, Pattern Recognition, Feature (Computer Vision), Video Transformer, Machine Vision, Feature Learning, Computer Science, Deep Learning, Computer Vision, Discriminative Learning, Convolutional Neural Networks
Deep learning has revolutionized image-level tasks such as classification, but patch-level tasks, such as correspondence, still rely on hand-crafted features, e.g. SIFT. In this paper we use Convolutional Neural Networks (CNNs) to learn discriminant patch representations and in particular train a Siamese network with pairs of (non-)corresponding patches. We deal with the large number of potential pairs by combining stochastic sampling of the training set with an aggressive mining strategy biased towards patches that are hard to classify. By using the L2 distance during both training and testing we develop 128-D descriptors whose Euclidean distances reflect patch similarity, and which can be used as a drop-in replacement for any task involving SIFT. We demonstrate consistent performance gains over the state of the art, and generalize well to scaling and rotation, perspective transformation, non-rigid deformation, and illumination changes. Our descriptors are efficient to compute and amenable to modern GPUs, and are publicly available.
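The training scheme described in the abstract — comparing descriptor pairs by L2 distance and biasing updates towards hard-to-classify pairs — can be sketched as follows. This is a minimal NumPy illustration, not the authors' released code: the hinge-style pairwise loss, the `margin` value, and the `keep_ratio` mining fraction are all assumptions for the sake of the example.

```python
import numpy as np

def l2_distance(a, b):
    # Euclidean distance between corresponding rows of two
    # (N, 128) descriptor arrays.
    return np.sqrt(np.sum((a - b) ** 2, axis=-1))

def pairwise_hinge_loss(d, y, margin=4.0):
    # y == 1: corresponding patches -> penalize large distance.
    # y == 0: non-corresponding     -> penalize distance below margin.
    # The margin value here is an illustrative assumption.
    return np.where(y == 1, d, np.maximum(0.0, margin - d))

def mine_hard_pairs(losses, keep_ratio=0.5):
    # Aggressive mining: keep only the hardest pairs (largest loss)
    # from a sampled batch; only these would be backpropagated.
    k = max(1, int(len(losses) * keep_ratio))
    return np.argsort(losses)[::-1][:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    desc_a = rng.normal(size=(8, 128))   # stand-in descriptors
    desc_b = rng.normal(size=(8, 128))
    labels = rng.integers(0, 2, size=8)  # 1 = matching pair
    d = l2_distance(desc_a, desc_b)
    losses = pairwise_hinge_loss(d, labels)
    hard = mine_hard_pairs(losses, keep_ratio=0.25)
    print("hard pair indices:", hard)
```

In practice the descriptors would come from the two branches of the Siamese CNN (with shared weights), and only the mined hard pairs would contribute gradients for that batch.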
| Year | Citations |
|---|---|
| 2017 | 75.5K |
| 2004 | 54.6K |
| 2015 | 36.2K |
| 2008 | 35.7K |
| 2011 | 10.2K |
| 2009 | 10K |
| 2005 | 6.7K |
| 2006 | 6K |
| 2014 | 3.6K |
| 2010 | 2.7K |