Publication | Closed Access
MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid
Citations: 50
References: 25
Year: 2023
Venue: Unknown
Keywords: Artificial Intelligence, Engineering, Machine Learning, Natural Language Processing, Multimodal LLM, Data Science, Pattern Recognition, Computational Linguistics, Language Studies, Named-entity Recognition, Machine Translation, Entity Disambiguation, Computer Science, Multimodal Translation, Deep Learning, Modality Preferences, Neural Machine Translation, Multi-modal Entity Alignment, Linguistics, Meta Modality Hybrid
Multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) whose entities are associated with relevant images. However, current MMEA algorithms rely on KG-level modality fusion strategies for multi-modal entity representation, which ignores variations in the modality preferences of different entities and thus compromises robustness against noise in individual modalities, such as blurry images or spurious relations. This paper introduces MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, which dynamically predicts the mutual correlation coefficients among modalities to enable more fine-grained, entity-level modality fusion and alignment. Experimental results demonstrate that our model not only achieves SOTA performance in multiple training scenarios, including supervised, unsupervised, iterative, and low-resource settings, but also offers a limited parameter count, efficient runtime, and interpretability. Our code is available at https://github.com/zjukg/MEAformer.
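The abstract does not include implementation details; the PyTorch sketch below is only a rough illustration of the core idea of entity-level (rather than KG-level) modality fusion. The class name, the single attention layer, and the softmax scorer are illustrative assumptions, not MEAformer's actual architecture; refer to the linked repository for the real implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EntityLevelModalityFusion(nn.Module):
    """Illustrative sketch (not the paper's architecture).

    Each entity has one embedding per modality (e.g. graph structure,
    relations, attributes, image). A transformer attention layer lets the
    modality tokens of each entity attend to one another, and a scorer on
    the refined tokens yields entity-specific fusion weights, instead of
    one fixed KG-level weighting shared by all entities.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Cross-modal self-attention over each entity's modality tokens.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Per-modality confidence score, computed per entity.
        self.scorer = nn.Linear(dim, 1)

    def forward(self, modal_embs: torch.Tensor) -> torch.Tensor:
        # modal_embs: (num_entities, num_modalities, dim)
        # Each refined modality token reflects its correlation with the
        # entity's other modalities.
        refined, _ = self.attn(modal_embs, modal_embs, modal_embs)
        # Entity-specific fusion weights over modalities: (E, M, 1).
        weights = F.softmax(self.scorer(refined), dim=1)
        # Weighted sum of the original modality embeddings: (E, dim).
        return (weights * modal_embs).sum(dim=1)


if __name__ == "__main__":
    E, M, D = 8, 4, 64  # toy sizes: entities, modalities, hidden dim
    fusion = EntityLevelModalityFusion(dim=D)
    fused = fusion(torch.randn(E, M, D))
    print(fused.shape)  # torch.Size([8, 64])
```

Because the weights are computed per entity, an entity with a blurry image can downweight its visual modality while another entity relies on it heavily, which is the robustness argument the abstract makes against KG-level fusion.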