Publication | Closed Access
An Exploration of Self-Supervised Pretrained Representations for End-to-End Speech Recognition
Citations: 44 | References: 44 | Year: 2021
Topics: Engineering, Machine Learning, Spoken Language Processing, Speech Data, Speech Recognition, Natural Language Processing, Data Science, Self-supervised Learning, Computational Linguistics, Robust Speech Recognition, Real-time Language, Health Sciences, Computer Science, Deep Learning, Self-supervised Pretrained Representations, Speech Signal, Speech Communication, Multi-speaker Speech Recognition, Self-supervised Pretraining, Speech Processing, Speech Input, Speech Perception
Self-supervised pretraining on speech data has made substantial progress. High-fidelity representations of the speech signal are learned from large amounts of untranscribed data and show promising performance. Recently, several works have focused on evaluating the quality of self-supervised pretrained representations on various tasks without domain restriction, e.g., SUPERB. However, such evaluations do not provide a comprehensive comparison across many ASR benchmark corpora. In this paper, we focus on the general application of pretrained speech representations to advanced end-to-end automatic speech recognition (E2E-ASR) models. We select several pretrained speech representations and present experimental results on various open-source and publicly available corpora for E2E-ASR. Without any modification of the back-end model architectures or training strategy, some of the experiments with pretrained representations, e.g., WSJ and WSJ0-2mix with HuBERT, reach or outperform current state-of-the-art (SOTA) recognition performance. Moreover, we further explore scenarios in which the pretrained representations are effective, such as cross-language or overlapped speech. The scripts, configurations, and trained models have been released in ESPnet to let the community reproduce our experiments and improve upon them.