TLDR

Modern relation extraction models rely on supervised learning from small hand‑labeled corpora. This work explores a paradigm that eliminates the need for labeled data, enabling domain‑agnostic extraction from corpora of any size. Freebase provides distant supervision: for each entity pair in a Freebase relation, the system finds sentences containing both entities, extracts textual features, and trains a probabilistic classifier. The system extracts 10,000 instances of 102 relations at 67.6% precision, showing that distant supervision can match supervised methods while scaling to large corpora, with syntactic parse features particularly beneficial for ambiguous relations.

Abstract

Modern models of relation extraction for tasks like ACE are based on supervised learning of relations from small hand-labeled corpora. We investigate an alternative paradigm that does not require labeled corpora, avoiding the domain dependence of ACE-style algorithms, and allowing the use of corpora of any size. Our experiments use Freebase, a large semantic database of several thousand relations, to provide distant supervision. For each pair of entities that appears in some Freebase relation, we find all sentences containing those entities in a large unlabeled corpus and extract textual features to train a relation classifier. Our algorithm combines the advantages of supervised IE (combining 400,000 noisy pattern features in a probabilistic classifier) and unsupervised IE (extracting large numbers of relations from large corpora of any domain). Our model is able to extract 10,000 instances of 102 relations at a precision of 67.6%. We also analyze feature performance, showing that syntactic parse features are particularly helpful for relations that are ambiguous or lexically distant in their expression.
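The core labeling step described above — pairing every sentence that mentions both entities of a known Freebase relation with that relation's label — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the toy knowledge base, relation names, and corpus are invented, and real systems would use named-entity tagging and feature extraction rather than raw substring matching.

```python
# Minimal sketch of distant-supervision labeling. The KB triples and
# corpus below are illustrative stand-ins for Freebase and a large
# unlabeled text collection.
from collections import defaultdict

kb = {
    ("Steven Spielberg", "Saving Private Ryan"): "film-director",
    ("Barack Obama", "Honolulu"): "person-birthplace",
}

corpus = [
    "Steven Spielberg directed Saving Private Ryan in 1998 .",
    "Barack Obama was born in Honolulu , Hawaii .",
    "Steven Spielberg praised Saving Private Ryan 's cast .",
]

def distant_label(kb, corpus):
    """Pair each sentence mentioning both entities of a KB triple with
    that triple's relation, yielding noisy positive training examples."""
    examples = defaultdict(list)
    for (e1, e2), relation in kb.items():
        for sentence in corpus:
            if e1 in sentence and e2 in sentence:
                examples[relation].append((e1, e2, sentence))
    return dict(examples)

labeled = distant_label(kb, corpus)
```

Note that the second Spielberg sentence is a false positive for the `film-director` relation — exactly the kind of noise the paper's probabilistic classifier, trained over hundreds of thousands of pattern features, is meant to absorb.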
