Publication | Closed Access
LSTM time and frequency recurrence for automatic speech recognition
119 Citations · 25 References · Year: 2015 · Venue: unknown
Engineering, Machine Learning, Spoken Language Processing, Short-term Memory, Recurrent Neural Network, Speech Recognition, Natural Language Processing, Time Recurrence, Data Science, Robust Speech Recognition, LSTM Time, Real-time Language, Health Sciences, Sequence Modelling, Computer Science, Deep Learning, Speech Communication, Speech Processing, Speech Input, Speech Perception, Linguistics, Traditional Time LSTM
Long short-term memory (LSTM) recurrent neural networks (RNNs) have recently shown significant performance improvements over deep feed-forward neural networks (DNNs). A key aspect of these models is the use of time recurrence, combined with a gating architecture that ameliorates the vanishing gradient problem. Inspired by human spectrogram reading, in this paper we propose an extension to LSTMs that performs the recurrence in frequency as well as in time. This model first scans the frequency bands to generate a summary of the spectral information, and then uses the output layer activations as the input to a traditional time LSTM (T-LSTM). Evaluated on a Microsoft short message dictation task, the proposed model obtained a 3.6% relative word error rate reduction over the T-LSTM.
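As a rough illustration of the two-stage recurrence described in the abstract, the sketch below runs an LSTM along the frequency axis of each spectrogram frame and feeds the resulting per-frame spectral summaries to a time LSTM. This is a minimal NumPy sketch, not the paper's implementation: here the frequency LSTM's final hidden state summarizes each frame (the paper uses the F-LSTM's output activations as T-LSTM input), and all function names, weight shapes, and sizes are invented for the example.

```python
import numpy as np

def lstm_scan(x, Wx, Wh, b):
    """Run a single-layer LSTM over axis 0 of x (steps, in_dim).
    Wx: (in_dim, 4*hid), Wh: (hid, 4*hid), b: (4*hid,).
    Returns the hidden states, shape (steps, hid)."""
    steps = x.shape[0]
    hid = Wh.shape[0]
    h = np.zeros(hid)
    c = np.zeros(hid)
    out = np.zeros((steps, hid))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(steps):
        z = x[t] @ Wx + h @ Wh + b
        i, f, g, o = np.split(z, 4)          # input, forget, cell, output pre-activations
        c = sig(f) * c + sig(i) * np.tanh(g)  # gated cell-state update
        h = sig(o) * np.tanh(c)
        out[t] = h
    return out

def f_t_lstm(spectrogram, f_params, t_params):
    """spectrogram: (frames, bands). First scan each frame's frequency
    bands with an F-LSTM, then run a T-LSTM over the frame summaries."""
    # 1) frequency recurrence: each band's energy is a scalar input step;
    #    the final hidden state summarizes the frame's spectral content
    f_summaries = np.stack([
        lstm_scan(frame[:, None], *f_params)[-1]
        for frame in spectrogram
    ])
    # 2) time recurrence over the per-frame spectral summaries
    return lstm_scan(f_summaries, *t_params)
```

With hypothetical sizes (e.g. F-LSTM hidden size 8, T-LSTM hidden size 16), a `(frames, bands)` spectrogram yields a `(frames, 16)` output sequence that a classifier could consume.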