Concepedia

Publication | Open Access

Knowledge Graph and Text Jointly Embedding

Citations: 412 | References: 18 | Year: 2014

Abstract

We examine the embedding approach to reasoning new relational facts from a large-scale knowledge graph and a text corpus. We propose a novel method of jointly embedding entities and words into the same continuous vector space. The embedding process attempts to preserve the relations between entities in the knowledge graph and the co-occurrences of words in the text corpus. Entity names and Wikipedia anchors are used to align the embeddings of entities and words in the same space. Large-scale experiments on Freebase and a Wikipedia/NY Times corpus show that joint embedding brings promising improvements in the accuracy of predicting facts over embedding knowledge graphs and text separately. In particular, joint embedding enables the prediction of facts containing entities outside the knowledge graph, which previous embedding methods cannot handle. At the same time, regarding the quality of the word embeddings, experiments on the analogical reasoning task show that joint embedding is comparable to or slightly better than word2vec (Skip-Gram).
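The core idea above — one vector space shared by knowledge-graph entities and text words, with an alignment term tying an entity to its name — can be sketched with plain gradient steps. This is a minimal toy illustration, not the paper's actual objective: the entity/word examples and learning rates are hypothetical, the knowledge part uses a TransE-style translation score as a stand-in, and the text part uses a simple pull-together update in place of the full skip-gram objective.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Toy vocabularies (hypothetical examples, not from the paper's data).
entities = ["paris", "france", "berlin", "germany"]
relations = ["capital_of"]
words = ["paris", "france", "city", "capital"]

# All three vocabularies live in the SAME continuous space.
E = {e: rng.normal(scale=0.1, size=DIM) for e in entities}
R = {r: rng.normal(scale=0.1, size=DIM) for r in relations}
W = {w: rng.normal(scale=0.1, size=DIM) for w in words}

def triple_score(h, r, t):
    # TransE-style plausibility: a true fact (h, r, t) should satisfy
    # h + r ≈ t, so a LOWER distance means a more plausible fact.
    return np.linalg.norm(E[h] + R[r] - E[t])

def step_triple(h, r, t, lr=0.1):
    # Knowledge part: one gradient step pulling h + r toward t.
    grad = E[h] + R[r] - E[t]
    E[h] -= lr * grad
    R[r] -= lr * grad
    E[t] += lr * grad

def step_cooccur(w1, w2, lr=0.1):
    # Text part: pull co-occurring words together
    # (a crude stand-in for the skip-gram objective).
    grad = W[w1] - W[w2]
    W[w1] -= lr * grad
    W[w2] += lr * grad

def step_align(entity, word, lr=0.1):
    # Alignment part: tie an entity to its name / anchor word so the
    # entity and word embeddings share one space.
    grad = E[entity] - W[word]
    E[entity] -= lr * grad
    W[word] += lr * grad

for _ in range(200):
    step_triple("paris", "capital_of", "france")   # KG fact
    step_cooccur("paris", "capital")               # text co-occurrence
    step_align("paris", "paris")                   # name alignment
    step_align("france", "france")

# The trained fact now scores lower (better) than a corrupted one.
print(triple_score("paris", "capital_of", "france") <
      triple_score("berlin", "capital_of", "france"))
```

Because words and entities are optimized jointly, a fact involving a phrase never seen as a KG entity can still be scored through its word vector — the property the abstract highlights for out-of-KG entities.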

