Concepedia

TLDR

Relational reasoning is central to generally intelligent behavior but has proven difficult for neural networks to learn. The paper proposes Relation Networks (RNs), a simple plug-and-play module that lets existing neural architectures explicitly model relations between entities. RN-augmented networks achieve state-of-the-art, super-human performance on CLEVR visual question answering, succeed on bAbI text-based QA and on reasoning about dynamic physical systems, and outperform powerful convolutional networks on the relational Sort-of-CLEVR task, demonstrating that RNs enable deep models to learn relational reasoning.

Abstract

Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering using the bAbI suite of tasks; and complex reasoning about dynamic physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Our work shows how a deep learning architecture equipped with an RN module can implicitly discover and learn to reason about entities and their relations.
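The "plug-and-play module" described above can be sketched as a composite function over all pairs of objects, following the RN form from the paper, RN(O) = f_φ(Σ_{i,j} g_θ(o_i, o_j)). The sketch below is a minimal illustration, not the paper's implementation: `g_theta` and `f_phi` are toy stand-in functions, where the real model uses learned MLPs, and the "objects" stand in for entity embeddings (e.g. CNN feature vectors).

```python
# Minimal sketch of a Relation Network, assuming the form
# RN(O) = f_phi( sum over ordered pairs (i, j) of g_theta(o_i, o_j) ).
# g_theta and f_phi are toy stand-ins for the learned MLPs in the paper.

from itertools import product


def g_theta(o_i, o_j):
    # Toy pairwise relation function: elementwise product of the two
    # object vectors (a real RN uses an MLP here).
    return [a * b for a, b in zip(o_i, o_j)]


def f_phi(aggregate):
    # Toy readout: sum of the aggregated relation vector (a real RN
    # uses an MLP producing e.g. answer logits here).
    return sum(aggregate)


def relation_network(objects):
    """Apply g_theta to every ordered pair of objects, sum the results
    elementwise, then map the aggregate through f_phi."""
    dim = len(objects[0])
    aggregate = [0.0] * dim
    for o_i, o_j in product(objects, repeat=2):
        for k, v in enumerate(g_theta(o_i, o_j)):
            aggregate[k] += v
    return f_phi(aggregate)


# Three toy "objects" standing in for entity embeddings.
objs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(relation_network(objs))  # -> 8.0
```

Because the pairwise function is applied to every ordered pair and then summed, the module is invariant to the order of the input objects, which is the property that lets it be dropped into existing architectures.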
