Publication | Closed Access
Language modeling with gated convolutional networks
Citations: 881 · References: 19 · Year: 2017
Keywords: LLM Fine-tuning · Engineering · Machine Learning · Gated Convolutional Networks · Multilingual Pretraining · Large Language Model · Speech Recognition · Natural Language Processing · Data Science · Computational Linguistics · Language Studies · Language Models · Recurrent Baseline · Machine Translation · Large AI Model · Sequence Modelling · Non-recurrent Approach · Deep Learning · Neural Machine Translation · Strong Recurrent Models · Linguistics
Recurrent neural networks have dominated language modeling because they can capture unbounded context. This work proposes a finite‑context, convolutional language model that enables parallel token processing and introduces a simplified gating mechanism to outperform prior convolutional baselines. The model stacks convolutional layers with the new gating scheme and systematically studies key architectural choices. It achieves state‑of‑the‑art results on WikiText‑103, competitive performance on Google Billion Words, reduces sentence‑scoring latency by an order of magnitude, and is the first non‑recurrent approach to rival strong recurrent models at this scale.
The predominant approach to language modeling to date is based on recurrent neural networks. Their success on this task is often linked to their ability to capture unbounded context. In this paper we develop a finite-context approach through stacked convolutions, which can be more efficient since they allow parallelization over sequential tokens. We propose a novel simplified gating mechanism that outperforms Oord et al. (2016b) and investigate the impact of key architectural decisions. The proposed approach achieves state-of-the-art results on the WikiText-103 benchmark, even though it features long-term dependencies, as well as competitive results on the Google Billion Word benchmark. Our model reduces the latency to score a sentence by an order of magnitude compared to a recurrent baseline. To our knowledge, this is the first time a non-recurrent approach has been competitive with strong recurrent models on these large-scale language tasks.
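The "simplified gating mechanism" the abstract refers to is the gated linear unit (GLU), h = (X∗W + b) ⊗ σ(X∗V + c), applied on top of causal convolutions so that no position sees future tokens. A minimal NumPy sketch of this idea follows; the function names, shapes, and the naive per-position loop are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def causal_conv1d(X, W, b):
    """Causal 1-D convolution: the output at position t depends only on
    inputs at positions t-k+1 .. t (the sequence is left-padded with zeros).
    X: (T, d_in), W: (k, d_in, d_out), b: (d_out,) -> (T, d_out)."""
    k, _, d_out = W.shape
    T = X.shape[0]
    Xp = np.concatenate([np.zeros((k - 1, X.shape[1])), X], axis=0)
    out = np.empty((T, d_out))
    for t in range(T):
        window = Xp[t:t + k]                       # (k, d_in)
        out[t] = np.einsum('ki,kio->o', window, W) + b
    return out

def glu_block(X, W, b, V, c):
    """Gated linear unit over two parallel causal convolutions:
    h = (X*W + b) * sigmoid(X*V + c)."""
    return causal_conv1d(X, W, b) * sigmoid(causal_conv1d(X, V, c))
```

Because every position's output is computed independently of later positions, all T positions can be processed in parallel at training time, which is the source of the speedup over a recurrent baseline.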