VisualBERT: A Simple and Performant Baseline for Vision and Language

TLDR

The authors propose VisualBERT, a simple and flexible framework for vision-and-language tasks. VisualBERT is built as a stack of Transformer layers that implicitly align text tokens with image regions via self-attention, and it is pre-trained on image caption data with two visually-grounded language-model objectives. Experiments on VQA, VCR, NLVR2, and Flickr30K show that VisualBERT matches or surpasses state-of-the-art models while being substantially simpler, and analysis shows that it grounds language elements to image regions without explicit supervision and even captures syntactic relationships, such as associations between verbs and the image regions corresponding to their arguments.

Abstract

We propose VisualBERT, a simple and flexible framework for modeling a broad range of vision-and-language tasks. VisualBERT consists of a stack of Transformer layers that implicitly align elements of an input text and regions in an associated input image with self-attention. We further propose two visually-grounded language model objectives for pre-training VisualBERT on image caption data. Experiments on four vision-and-language tasks including VQA, VCR, NLVR2, and Flickr30K show that VisualBERT outperforms or rivals state-of-the-art models while being significantly simpler. Further analysis demonstrates that VisualBERT can ground elements of language to image regions without any explicit supervision and is even sensitive to syntactic relationships, tracking, for example, associations between verbs and image regions corresponding to their arguments.
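The core idea described above admits a compact illustration: word embeddings and projected image-region features (for example, features from an object detector) are concatenated into a single sequence and encoded jointly, so that self-attention can align the two modalities. The PyTorch sketch below is a minimal, hypothetical rendering of that idea; the class name, dimensions, and parameters are illustrative assumptions, not the authors' released implementation, and the pre-training objectives are omitted.

# Minimal sketch of a VisualBERT-style joint encoder (hypothetical names/dimensions).
# Text token embeddings and projected image-region features share one sequence,
# so a standard Transformer's self-attention can mix words and regions freely.
import torch
import torch.nn as nn

class JointTextImageEncoder(nn.Module):
    def __init__(self, vocab_size=30522, hidden=768, region_feat_dim=2048,
                 num_layers=12, num_heads=12):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, hidden)
        # Segment embeddings mark whether a position is a text token or an image region.
        self.segment_emb = nn.Embedding(2, hidden)
        # Detector region features are projected into the same hidden space as text.
        self.region_proj = nn.Linear(region_feat_dim, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, token_ids, region_feats):
        # token_ids: (batch, n_tokens); region_feats: (batch, n_regions, region_feat_dim)
        text = self.token_emb(token_ids) + self.segment_emb(torch.zeros_like(token_ids))
        region_segment = torch.ones(region_feats.shape[:2], dtype=torch.long,
                                    device=region_feats.device)
        regions = self.region_proj(region_feats) + self.segment_emb(region_segment)
        # One joint sequence of text and image "tokens"; attention aligns them implicitly.
        joint = torch.cat([text, regions], dim=1)
        return self.encoder(joint)

# Example: a batch of 2 captions with 8 tokens each and 4 detected regions per image.
model = JointTextImageEncoder()
out = model(torch.randint(0, 30522, (2, 8)), torch.randn(2, 4, 2048))
print(out.shape)  # torch.Size([2, 12, 768])

In this sketch, positional embeddings and the masked-language-model and sentence-image matching pre-training heads are left out for brevity; the point is only that no explicit word-region alignment is supplied as input.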
