Publication | Open Access
H-Net: Neural Network for Cross-domain Image Patch Matching
28 Citations · 24 References · Year: 2018 · Venue: Unknown
Keywords: Siamese Network, Convolutional Neural Network, Engineering, Machine Learning, Neural Network, Image Classification, Image Analysis, Pattern Recognition, Single-image Super-resolution, Computational Imaging, UAV 3D, Video Transformer, Different Domain Images, Machine Vision, Computer Science, Image Similarity, Deep Learning, Optical Image Recognition, Computer Vision, Scene Understanding
Describing the same scene with different imaging styles, or rendering an image from its 3D model, yields images from different domains. Such images tend to exhibit a domain gap and differing local appearances, which raises the main challenge in cross-domain image patch matching. In this paper, we propose to incorporate an AutoEncoder into a Siamese network, named H-Net, whose structure resembles the letter H. H-Net achieves state-of-the-art performance on cross-domain image patch matching. Furthermore, we improve H-Net to H-Net++. H-Net++ extracts invariant feature descriptors from cross-domain image patches and achieves state-of-the-art performance via feature retrieval in Euclidean space. As no benchmark dataset of cross-domain images exists, we built a cross-domain image dataset consisting of camera images, images rendered from a UAV 3D model, and images generated by the CycleGAN algorithm. Experiments show that the proposed H-Net and H-Net++ outperform existing algorithms. Our code and cross-domain image dataset are available at https://github.com/Xylon-Sean/H-Net.
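The retrieval side of the pipeline can be sketched as follows: weight-shared branches map each patch to an L2-normalised descriptor, and matching reduces to a nearest-neighbour search in Euclidean space. This is a minimal NumPy sketch, not the authors' implementation — the toy linear encoder, the weight matrix `W`, and the patch sizes are illustrative assumptions standing in for the paper's convolutional branches.

```python
import numpy as np

def extract_descriptor(patch, weights):
    # Toy linear "encoder" standing in for one Siamese branch.
    # Both domains use the SAME weights (weight sharing), so
    # descriptors land in a common embedding space.
    feat = np.tanh(weights @ patch.reshape(-1))
    return feat / np.linalg.norm(feat)  # L2-normalise the descriptor

def match_by_euclidean(query_desc, candidate_descs):
    # Nearest-neighbour retrieval by Euclidean distance,
    # as used for H-Net++ descriptor matching.
    dists = np.linalg.norm(candidate_descs - query_desc, axis=1)
    return int(np.argmin(dists)), float(dists.min())

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32 * 32))        # shared branch weights (assumed sizes)
patches_b = rng.standard_normal((5, 32, 32))  # candidate patches from domain B
# Query: patch 3 with a small perturbation, mimicking a domain shift.
query = patches_b[3] + 1e-3 * rng.standard_normal((32, 32))

descs_b = np.stack([extract_descriptor(p, W) for p in patches_b])
idx, dist = match_by_euclidean(extract_descriptor(query, W), descs_b)
print(idx, dist)  # the perturbed query should retrieve patch 3
```

Normalising descriptors keeps matching scale-invariant, so nearest-neighbour search with plain Euclidean distance suffices; in practice the search over a large candidate set would use a KD-tree or approximate-nearest-neighbour index rather than the brute-force loop above.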