Publication | Open Access
Deep Compositional Question Answering with Neural Module Networks
Citations: 148
References: 35
Year: 2015
Topics: Artificial Intelligence, Engineering, Machine Learning, Natural Language Processing, Multimodal LLM, Visual Grounding, Data Science, Computational Linguistics, Neural Module Networks, Visual Question Answering, Language Studies, Machine Translation, Question Answering, Vision Language Model, Computer Science, Compositionality, Deep Learning, Visual Reasoning, Linguistics, Abstract Shapes
Visual question answering is fundamentally compositional, with questions sharing substructures such as “where is the dog?” and “what color is the dog?”. The paper aims to combine deep networks’ representational power with the compositional linguistic structure of questions. It introduces neural module networks that decompose questions into linguistic substructures, dynamically instantiate reusable modular networks, and jointly train them. Evaluated on two challenging datasets, the approach achieves state‑of‑the‑art performance on the VQA natural image dataset and a new dataset of complex abstract‑shape questions.
Visual question answering is fundamentally compositional in nature: a question like "where is the dog?" shares substructure with questions like "what color is the dog?" and "where is the cat?". This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning *neural module networks*, which compose collections of jointly-trained neural modules into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes.
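The composition idea in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the module names `find` and `describe` follow the paper's terminology, but here they operate on a toy symbolic scene (a hypothetical stand-in for learned attention over image features), and the question layout is given by hand rather than produced by a parser.

```python
# Toy scene standing in for image features: object name -> properties.
# In the actual paper, modules are neural networks over feature maps.
SCENE = {
    "dog": {"color": "brown", "position": "left"},
    "cat": {"color": "black", "position": "right"},
}

def find(obj):
    """Return a module that 'attends' to scene objects matching obj."""
    return lambda scene: {n: p for n, p in scene.items() if n == obj}

def describe(attr):
    """Return a module mapping an attended region to an answer."""
    def module(attended):
        for props in attended.values():
            return props[attr]
        return "unknown"
    return module

def compose(layout, scene):
    """Dynamically instantiate and chain modules from a parsed layout,
    e.g. ('describe', 'color', ('find', 'dog'))."""
    _op, arg, (_sub_op, sub_arg) = layout
    attended = find(sub_arg)(scene)
    return describe(arg)(attended)

# "what color is the dog?" -> describe[color](find[dog])
print(compose(("describe", "color", ("find", "dog")), SCENE))  # brown
```

The key point the sketch illustrates is reuse: the same `find` module type serves both "where is the dog?" (`describe[where](find[dog])`) and "what color is the dog?" (`describe[color](find[dog])`), which is what makes joint training across questions possible.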