Publication | Open Access
FastSpeech: Fast, Robust and Controllable Text to Speech
Citations: 581 · References: 20 · Year: 2019
Engineering · Attention Alignments · Neural Network · Controllable Text · Speech Recognition · Natural Language Processing · Computational Linguistics · Prosody Control · Language Studies · Real-time Language · Machine Translation · Speech Synthesis · Linguistics · Speech Output · Deep Learning · Text-to-speech · Speech Communication · Multi-speaker Speech Recognition · Speech Processing · Speech Input · Speech Perception · Speech Interface
Neural network end‑to‑end TTS models, such as Tacotron 2, have improved speech quality but are slow, prone to word skipping or repetition, and lack controllability over voice speed and prosody. This work proposes a feed‑forward Transformer that generates mel‑spectrograms in parallel for TTS. The model uses attention alignments from an encoder‑decoder teacher to predict phoneme durations, and a length regulator then expands the phoneme sequence to match the mel‑spectrogram length for parallel synthesis. Experiments on LJSpeech show that FastSpeech matches autoregressive models in quality, nearly eliminates word skipping and repeating, smoothly controls voice speed, and speeds up mel‑spectrogram generation by 270× and end‑to‑end synthesis by 38×.
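The duration extraction step can be pictured with a short sketch. The NumPy snippet below is a minimal illustration under our own assumptions (the function name and toy alignment matrix are ours, not from the paper's code): each mel frame is assigned to the phoneme it attends to most in the teacher's attention matrix, and a phoneme's duration is the number of frames assigned to it. The paper additionally selects the most diagonal attention head from the teacher before this counting step, which the sketch omits.

```python
import numpy as np

def durations_from_attention(attn: np.ndarray) -> np.ndarray:
    """Extract per-phoneme durations from a teacher attention matrix.

    attn: (num_mel_frames, num_phonemes) soft alignment from the
    autoregressive teacher. Each mel frame is assigned to the phoneme
    it attends to most; a phoneme's duration is its frame count.
    (Illustrative sketch, not the paper's implementation.)
    """
    num_phonemes = attn.shape[1]
    assigned = attn.argmax(axis=1)  # most-attended phoneme per mel frame
    return np.bincount(assigned, minlength=num_phonemes)

# Toy example: 5 mel frames aligned over 3 phonemes.
attn = np.array([
    [0.9, 0.1, 0.0],
    [0.7, 0.3, 0.0],
    [0.2, 0.7, 0.1],
    [0.1, 0.3, 0.6],
    [0.0, 0.2, 0.8],
])
print(durations_from_attention(attn))  # -> [2 1 2]
```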
Neural network based end-to-end text to speech (TTS) has significantly improved the quality of synthesized speech. Prominent methods (e.g., Tacotron 2) usually first generate a mel-spectrogram from text, and then synthesize speech from the mel-spectrogram using a vocoder such as WaveNet. Compared with traditional concatenative and statistical parametric approaches, neural network based end-to-end models suffer from slow inference speed, and the synthesized speech is usually not robust (i.e., some words are skipped or repeated) and lacks controllability (over voice speed or prosody). In this work, we propose a novel feed-forward network based on Transformer to generate mel-spectrograms in parallel for TTS. Specifically, we extract attention alignments from an encoder-decoder based teacher model for phoneme duration prediction, which is used by a length regulator to expand the source phoneme sequence to match the length of the target mel-spectrogram sequence for parallel mel-spectrogram generation. Experiments on the LJSpeech dataset show that our parallel model matches autoregressive models in terms of speech quality, nearly eliminates the problem of word skipping and repeating in particularly hard cases, and can adjust voice speed smoothly. Most importantly, compared with autoregressive Transformer TTS, our model speeds up mel-spectrogram generation by 270× and end-to-end speech synthesis by 38×. Therefore, we call our model FastSpeech.
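To make the length regulator concrete, here is a minimal NumPy sketch (identifiers are illustrative assumptions, not the paper's code): each phoneme's hidden state is repeated for its predicted number of mel frames, and scaling all durations by a factor α yields the voice speed control the abstract describes (α < 1 for faster speech, α > 1 for slower).

```python
import numpy as np

def length_regulator(h: np.ndarray, durations: np.ndarray,
                     alpha: float = 1.0) -> np.ndarray:
    """Expand phoneme hidden states to mel-spectrogram length.

    h:         (num_phonemes, hidden_dim) encoder outputs
    durations: (num_phonemes,) predicted frames per phoneme
    alpha:     speed factor; durations are scaled by alpha, so
               alpha < 1.0 gives faster speech, alpha > 1.0 slower.
    (Illustrative sketch, not the paper's implementation.)
    """
    scaled = np.maximum(np.round(durations * alpha).astype(int), 0)
    # Repeat each phoneme's hidden state for its (scaled) duration.
    return np.repeat(h, scaled, axis=0)

h = np.arange(6, dtype=float).reshape(3, 2)      # 3 phonemes, hidden dim 2
d = np.array([2, 1, 3])
print(length_regulator(h, d).shape)              # (6, 2): sum of durations
print(length_regulator(h, d, alpha=0.5).shape)   # roughly half as many frames
```

Because the expanded sequence already has the target length, the decoder can generate every mel frame in one parallel pass instead of frame by frame, which is where the reported 270× speedup comes from.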