Concepedia

TLDR

Continuous word representations trained on large unlabeled corpora are useful for many NLP tasks, but popular models ignore morphology by assigning a distinct vector to each word, which limits performance for languages with large vocabularies and many rare words. The authors propose a skip-gram-based approach that represents each word as a bag of character n-grams: each character n-gram receives a vector, and a word's vector is the sum of its n-gram vectors. The method trains quickly on large corpora, produces vectors for words unseen during training, and achieves state-of-the-art performance on word similarity and analogy tasks across nine languages.

Abstract

Continuous word representations, trained on large unlabeled corpora, are useful for many natural language processing tasks. Popular models that learn such representations ignore the morphology of words by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, in which each word is represented as a bag of character n-grams. A vector representation is associated with each character n-gram; words are represented as the sum of these representations. Our method is fast, allowing models to be trained on large corpora quickly, and it allows us to compute word representations for words that did not appear in the training data. We evaluate our word representations on nine different languages, on both word similarity and analogy tasks. By comparing to recently proposed morphological word representations, we show that our vectors achieve state-of-the-art performance on these tasks.
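To make the subword scheme concrete, the following is a minimal sketch of the n-gram decomposition and summation described in the abstract. The boundary markers `<` and `>`, the n-gram length range of 3 to 6, the vector dimension, and the `ngram_vectors` lookup table are illustrative assumptions for this sketch, not details stated in the abstract itself.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Extract character n-grams from a word. The word is wrapped in
    < and > so that prefixes and suffixes get distinct n-grams
    (illustrative boundary convention)."""
    wrapped = "<" + word + ">"
    grams = set()
    for n in range(n_min, n_max + 1):
        for i in range(len(wrapped) - n + 1):
            grams.add(wrapped[i:i + n])
    grams.add(wrapped)  # keep the full word itself as one unit
    return grams

def word_vector(word, ngram_vectors, dim=100):
    """Represent a word as the sum of its n-gram vectors.
    ngram_vectors is a hypothetical dict mapping n-gram strings to
    numpy arrays learned during training."""
    vec = np.zeros(dim)
    for g in char_ngrams(word):
        if g in ngram_vectors:
            vec += ngram_vectors[g]
    return vec
```

Summing over shared n-grams is what lets the model produce a vector for a word that never appeared in the training data: even an unseen word is composed of character n-grams that did occur, so `word_vector` still returns a meaningful (if approximate) representation.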
