Publication | Closed Access
Extensions of recurrent neural network language model
Citations: 1.6K | References: 21 | Year: 2011 | Venue: Unknown
Natural Language Processing, Large AI Model, RNN LM, Sequence Modelling, Engineering, Machine Learning, Computational Linguistics, Computational Complexity, Language Studies, Large Language Model, Language Models, RNN Model, Recurrent Neural Network, Linguistics, Machine Translation, Speech Recognition
The recurrent neural network language model (RNN LM) outperforms many competing techniques but suffers from high computational complexity. This study proposes several modifications to the RNN LM aimed at reducing computational complexity and speeding up training and testing. The authors also explore parameter-reduction strategies to make the model smaller. The modified RNN achieves a more than 15-fold speedup in both training and testing, benefits from backpropagation through time, outperforms feedforward networks, and is smaller, faster, and more accurate than the baseline.
We present several modifications of the original recurrent neural network language model (RNN LM). While this model has been shown to significantly outperform many competitive language modeling techniques in terms of accuracy, the remaining problem is its computational complexity. In this work, we show approaches that lead to a more than 15-fold speedup for both the training and testing phases. Next, we show the importance of using the backpropagation through time algorithm. An empirical comparison with feedforward networks is also provided. Finally, we discuss possibilities for reducing the number of parameters in the model. The resulting RNN model can thus be smaller, faster during both training and testing, and more accurate than the basic one.
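The abstract does not detail how the reported speedup is obtained; one standard way to cut the dominant per-word cost of an RNN LM is to factorize the output softmax over word classes, so that P(w | h) = P(class(w) | h) * P(w | class(w), h). The sketch below is a minimal NumPy illustration of that idea under assumed sizes and a random class assignment, not a reconstruction of the paper's implementation.

```python
import numpy as np

# Hypothetical sizes; assigning V words to roughly sqrt(V)-sized classes
# turns one softmax over V outputs into two much smaller softmaxes.
V, C, H = 10000, 100, 100                      # vocabulary, classes, hidden size
rng = np.random.default_rng(0)
word2class = rng.integers(0, C, size=V)        # assumed class assignment
class_words = [np.where(word2class == c)[0] for c in range(C)]

W_class = rng.normal(scale=0.1, size=(C, H))   # hidden -> class scores
W_word = rng.normal(scale=0.1, size=(V, H))    # hidden -> in-class word scores

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def word_prob(hidden, w):
    """P(w | hidden) = P(class(w) | hidden) * P(w | class(w), hidden)."""
    c = word2class[w]
    p_class = softmax(W_class @ hidden)[c]
    members = class_words[c]                   # only score words in class c
    scores = W_word[members] @ hidden
    p_word = softmax(scores)[np.where(members == w)[0][0]]
    return p_class * p_word

h = rng.normal(size=H)
print(word_prob(h, 42))   # costs O(C + |class|) per word instead of O(V)
```

With the vocabulary split this way, each prediction scores about C + V/C outputs rather than V, which is the kind of change that can produce an order-of-magnitude speedup in both training and testing.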
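The abstract also stresses the importance of training with backpropagation through time rather than plain single-step backpropagation. The following sketch shows one truncated-BPTT update for a simple Elman-style RNN LM; the hidden size, truncation depth, and learning rate are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
V, H, TAU = 50, 16, 4                    # vocab, hidden size, BPTT truncation (assumed)
U = rng.normal(scale=0.1, size=(H, V))   # input (one-hot word) -> hidden
W = rng.normal(scale=0.1, size=(H, H))   # hidden -> hidden (recurrence)
Wo = rng.normal(scale=0.1, size=(V, H))  # hidden -> output scores

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def bptt_step(words, h0, lr=0.1):
    """One update on a short word sequence, unrolling at most TAU steps back."""
    hs, ps = [h0], []
    for w in words:                            # forward pass through the sequence
        h = np.tanh(U[:, w] + W @ hs[-1])
        hs.append(h)
        ps.append(softmax(Wo @ h))
    dU, dW, dWo = np.zeros_like(U), np.zeros_like(W), np.zeros_like(Wo)
    for t in range(len(words) - 1):            # predict word t+1 from state after word t
        dy = ps[t].copy()
        dy[words[t + 1]] -= 1.0                # cross-entropy gradient w.r.t. scores
        dWo += np.outer(dy, hs[t + 1])
        dh = Wo.T @ dy
        for k in range(t, max(t - TAU, -1), -1):   # truncated unrolling through time
            dz = dh * (1.0 - hs[k + 1] ** 2)       # backprop through tanh
            dU[:, words[k]] += dz
            dW += np.outer(dz, hs[k])
            dh = W.T @ dz
    for P, dP in ((U, dU), (W, dW), (Wo, dWo)):    # plain SGD update
        P -= lr * dP
    return hs[-1]

h = np.zeros(H)
h = bptt_step(rng.integers(0, V, size=8), h)       # one toy update
```

Setting TAU to 1 reduces this to ordinary single-step backpropagation, which is the baseline the paper's BPTT comparison argues against.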