Concepedia

TLDR

The continuous Skip‑gram model efficiently learns high‑quality distributed word vectors that capture many syntactic and semantic relationships, but it is indifferent to word order and cannot represent idiomatic phrases, as illustrated by the difficulty of composing the meanings of “Canada” and “Air” into “Air Canada.” The authors extend the Skip‑gram framework to improve both vector quality and training speed, and develop a simple phrase‑finding method that makes it feasible to learn vector representations for millions of phrases. They achieve this by subsampling frequent words, which yields a significant speedup and more regular word representations, and by introducing negative sampling as a lightweight alternative to the hierarchical softmax.

Abstract

The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of “Canada” and “Air” cannot be easily combined to obtain “Air Canada”. Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.

