Publication | Open Access
Transferable Attention for Domain Adaptation
Year: 2019 · Citations: 315 · References: 43
Keywords: Natural Language Processing, Multimodal LLM, Image Analysis, Machine Learning, Transferable Attention, Engineering, Generative Adversarial Network, Domain Adaptation, Vision Language Model, Transfer Learning, Deep Learning, Domain Discriminator, Adversarial Domain Adaptation, Computer Vision, Machine Translation, Synthetic Image Generation
Recent work in domain adaptation bridges different domains by adversarially learning a domain-invariant representation that cannot be distinguished by a domain discriminator. Existing adversarial domain adaptation methods mainly align images globally across the source and target domains. However, not all regions of an image are transferable, and forcefully aligning untransferable regions may lead to negative transfer. Furthermore, some images are significantly dissimilar across domains, resulting in weak image-level transferability. To this end, we present Transferable Attention for Domain Adaptation (TADA), which focuses the adaptation model on transferable regions and images. We implement two complementary types of transferable attention: transferable local attention, generated by multiple region-level domain discriminators to highlight transferable regions, and transferable global attention, generated by a single image-level domain discriminator to highlight transferable images. Extensive experiments validate that our proposed models exceed state-of-the-art results on standard domain adaptation datasets.
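The core idea in the abstract — using domain-discriminator outputs to weight regions and images by transferability — can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it assumes entropy-based weighting (a discriminator that cannot tell the domains apart on a region produces a high-entropy output, which we treat as a sign of transferability), and all function names are hypothetical. The exact attention formulas are defined in the paper itself.

```python
import numpy as np

def binary_entropy(p, eps=1e-8):
    """Entropy (in nats) of a Bernoulli(p); near zero when the
    discriminator is certain, maximal (log 2) when it is confused."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def local_attention(region_disc_probs):
    """Per-region transferability weights from region-level domain
    discriminators (each prob. = 'this region looks like source').
    Assumption: a confused discriminator marks a domain-invariant,
    hence transferable, region, so it gets a larger weight."""
    w = binary_entropy(region_disc_probs) / np.log(2)  # normalize to [0, 1]
    return w / w.sum()                                  # attention sums to 1

def global_weight(image_disc_prob):
    """Image-level transferability from a single image-level
    discriminator, in a residual style (1 + normalized entropy) so
    hard-to-transfer images are down-weighted but never zeroed out."""
    return 1.0 + binary_entropy(image_disc_prob) / np.log(2)

# Regions the discriminator cannot classify (p ~ 0.5) dominate the attention;
# confidently classified regions (p ~ 0.99) are suppressed.
attn = local_attention([0.5, 0.9, 0.5, 0.99])
g = global_weight(0.5)  # maximally ambiguous image -> weight 2.0
```

In a full model these weights would scale the adversarial alignment loss per region and per image, so that alignment effort concentrates on transferable content instead of forcing untransferable regions together.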