Publication | Closed Access
FRAGE: Frequency-Agnostic Word Representation
Citations: 57 · References: 0 · Year: 2018
Keywords: Engineering, Machine Learning, Semantics, Large Language Model, Corpus Linguistics, Rare Word, Text Mining, Word Embeddings, Natural Language Processing, Data Science, Computational Linguistics, Embeddings, Language Studies, Frequency-agnostic Word Representation, Machine Translation, Computational Lexicology, NLP Task, Deep Learning, Distributional Semantics, Linguistics, Continuous Word Representation
Continuous word representation (aka word embedding) is a basic building block in many neural network-based models used in natural language processing tasks. Although it is widely accepted that words with similar semantics should be close to each other in the embedding space, we find that word embeddings learned in several tasks are biased towards word frequency: the embeddings of high-frequency and low-frequency words lie in different subregions of the embedding space, and the embedding of a rare word and a popular word can be far from each other even if they are semantically similar. This makes the learned word embeddings ineffective, especially for rare words, and consequently limits the performance of these neural network models. To mitigate this issue, we propose a neat, simple yet effective adversarial training method that blurs the boundary between the embeddings of high-frequency and low-frequency words. We conducted comprehensive studies on ten datasets across four natural language processing tasks: word similarity, language modeling, machine translation and text classification. Results show that our method outperforms the baselines on all tasks.
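To make the adversarial idea concrete, the sketch below is a minimal, hypothetical NumPy illustration (not the paper's implementation): a logistic-regression "discriminator" tries to predict whether an embedding belongs to a high-frequency or low-frequency word, while the embeddings are simultaneously updated by gradient ascent on the discriminator's loss, so that frequency becomes unpredictable from the embedding. All names, dimensions, and learning rates here are illustrative assumptions.

```python
# Hypothetical sketch of frequency-adversarial training on embeddings.
# Assumptions: synthetic embeddings, a linear (logistic) discriminator,
# and illustrative hyperparameters; not the paper's actual model.
import numpy as np

rng = np.random.default_rng(0)
V, d = 200, 16                        # vocabulary size, embedding dimension

# Synthetic setup: high-frequency words start in a shifted subregion,
# mimicking the frequency bias described in the abstract.
freq_label = (np.arange(V) < V // 2).astype(float)   # 1 = high-frequency
E = rng.normal(size=(V, d)) + 2.0 * freq_label[:, None]

w, b = np.zeros(d), 0.0               # discriminator parameters
lr_d, lr_e = 0.5, 0.5                 # learning rates (illustrative)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    p = sigmoid(E @ w + b)            # P(high-frequency | embedding)
    err = p - freq_label              # d(cross-entropy)/d(logit)
    # Discriminator step: gradient descent, learn to predict frequency.
    w -= lr_d * (E.T @ err) / V
    b -= lr_d * err.mean()
    # Adversarial step: gradient *ascent* on the discriminator's loss,
    # pushing embeddings so frequency cannot be read off them.
    E += lr_e * np.outer(err, w)

# After training, check how well frequency can still be predicted.
acc = ((sigmoid(E @ w + b) > 0.5) == freq_label.astype(bool)).mean()
```

In the paper's setting the same two-player objective is attached to task-specific models (e.g. a language model or translation model) rather than to free-standing vectors, and the task loss keeps semantics intact while the adversarial term removes the frequency signal.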