Publication | Closed Access
MAttNet: Modular Attention Network for Referring Expression Comprehension
Citations: 807
References: 24
Year: 2018
Venue: CVPR
Topics: Engineering, Machine Learning, Natural Language Processing, Multimodal LLM, Text-to-image Retrieval, Visual Grounding, Computational Linguistics, Image Region, Visual Question Answering, Language Studies, Machine Translation, Machine Vision, Visual Attention, Vision Language Model, Deep Learning, Modular Attention Network, Semantic Parsing, Computer Vision, Expression Comprehension, Linguistics
In this paper, we address referring expression comprehension: localizing an image region described by a natural language expression. While most recent work treats expressions as a single unit, we propose to decompose them into three modular components related to subject appearance, location, and relationship to other objects. This allows us to flexibly adapt to expressions containing different types of information in an end-to-end framework. In our model, which we call the Modular Attention Network (MAttNet), two types of attention are utilized: language-based attention that learns the module weights as well as the word/phrase attention that each module should focus on; and visual attention that allows the subject and relationship modules to focus on relevant image components. Module weights combine scores from all three modules dynamically to output an overall score. Experiments show that MAttNet outperforms previous state-of-the-art methods by a large margin on both bounding-box-level and pixel-level comprehension tasks. A demo and code are provided.
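The abstract's core scoring idea, combining subject, location, and relationship module scores with expression-dependent weights, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the projection `W`, the toy expression embedding, and the per-region score dictionaries are all hypothetical placeholders for features the real model learns end-to-end.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mattnet_score(expr_feat, module_scores, W):
    """Combine per-module matching scores with language-driven weights.

    expr_feat:     expression embedding, shape (d,) -- placeholder features
    module_scores: scores from the subject, location, and relationship
                   modules for one candidate region
    W:             (3, d) projection standing in for the learned
                   language-based attention that predicts module weights
    """
    # Language-based attention: how much each module matters for this
    # expression (e.g. "the red mug on the left" stresses subject and
    # location, not relationship).
    weights = softmax(W @ expr_feat)          # (3,), sums to 1
    scores = np.array([module_scores["subject"],
                       module_scores["location"],
                       module_scores["relationship"]])
    return float(weights @ scores)            # overall region score

# Toy usage: the highest-scoring region is the predicted referent.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
expr = rng.normal(size=8)
regions = [
    {"subject": 0.9, "location": 0.2, "relationship": 0.1},
    {"subject": 0.3, "location": 0.8, "relationship": 0.4},
]
best = max(range(len(regions)),
           key=lambda i: mattnet_score(expr, regions[i], W))
```

Because the weights form a convex combination, the overall score always lies between the lowest and highest module score for a region, so a module the language attention down-weights cannot dominate the prediction.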