Publication | Closed Access
Discriminative coupled dictionary hashing for fast cross-media retrieval
Year: 2014 · Citations: 144 · References: 30
Keywords: Engineering, Machine Learning, Image Retrieval, Dictionary Hashing, Image Analysis, Information Retrieval, Data Science, Data Mining, Pattern Recognition, Text-to-image Retrieval, Cross-media Hashing, Perceptual Hashing, Different Modalities, Coupled Dictionaries, Hash Function, Computer Science, Deep Learning, Computer Vision, Similarity Search, Multimedia Search
Cross-media hashing, which conducts cross-media retrieval by embedding data from different modalities into a common low-dimensional Hamming space, has attracted intensive attention in recent years. Existing cross-media hashing approaches aim only at learning hash functions that preserve the intra-modality and inter-modality correlations, but do not directly capture the underlying semantic information of the multi-modal data. We propose a discriminative coupled dictionary hashing (DCDH) method in this paper. In DCDH, the coupled dictionary for each modality is learned with side information (e.g., category labels). As a result, the coupled dictionaries not only preserve the intra-similarity and inter-correlation among multi-modal data, but also contain dictionary atoms that are semantically discriminative (i.e., data from the same category are reconstructed by similar dictionary atoms). To perform fast cross-media retrieval, we learn hash functions that map data from the dictionary space to a low-dimensional Hamming space. In addition, we conjecture that a balanced representation is crucial in cross-media retrieval. We therefore introduce multi-view features for the relatively "weak" modalities into DCDH and extend it to multi-view DCDH (MV-DCDH) in order to enhance their representation capability. Experiments on two real-world data sets show that DCDH and MV-DCDH significantly outperform state-of-the-art methods on cross-media retrieval.
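The retrieval pipeline the abstract describes (sparse codes over learned coupled dictionaries, hashed into a shared Hamming space, then ranked by Hamming distance) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the projection matrices `W_img` and `W_txt` stand in for the learned hash functions, and the sparse codes are replaced by random toy data.

```python
# Hedged sketch of cross-media hashing retrieval: dictionary-space codes from
# two modalities are mapped by (placeholder) linear hash functions into a
# common k-bit Hamming space; retrieval is then a Hamming-distance ranking.
import numpy as np

rng = np.random.default_rng(0)

d_img, d_txt, k = 128, 64, 32            # dictionary sizes, hash code length
W_img = rng.standard_normal((k, d_img))  # placeholder for learned image hash function
W_txt = rng.standard_normal((k, d_txt))  # placeholder for learned text hash function

def to_hash(codes, W):
    """Map dictionary-space codes (n x d) to k-bit binary codes (n x k)."""
    return (codes @ W.T > 0).astype(np.uint8)

# Toy stand-ins for sparse codes of a text query and an image database.
query_code = rng.standard_normal((1, d_txt))
db_codes = rng.standard_normal((100, d_img))

q = to_hash(query_code, W_txt)   # query hashed from the text modality
db = to_hash(db_codes, W_img)    # database hashed from the image modality

hamming = np.count_nonzero(db != q, axis=1)  # Hamming distance to each item
ranking = np.argsort(hamming)                # nearest items first
```

Because both modalities land in the same Hamming space, a text query can rank image items (and vice versa) with cheap bitwise comparisons, which is what makes the retrieval "fast".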