Publication | Open Access
WaveNet: A Generative Model for Raw Audio
Year: 2016 · Citations: 3.6K
Keywords: Music, Engineering, Machine Learning, Data Science, A Single WaveNet, Music Generation, Multi-speaker Speech Recognition, Raw Audio Waveforms, Speech Output, Speech Processing, Audio Retrieval, Sound Synthesis, Voice Recognition, Deep Learning, Deep Neural Network, Raw Audio, Speech Communication, Speech Recognition
Summary: WaveNet is introduced as a deep neural network designed to generate raw audio waveforms. It is a fully probabilistic autoregressive model that predicts each audio sample conditioned on all previous samples and can be trained efficiently on data with tens of thousands of samples per second. The model delivers state‑of‑the‑art text‑to‑speech quality, surpassing parametric and concatenative systems in English and Mandarin, can model multiple speakers and music with high fidelity, and also functions as a discriminative model yielding promising phoneme recognition results.
Abstract: This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.
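The autoregressive factorization described in the abstract, where each sample's predictive distribution is conditioned on all previous samples, can be sketched as follows. This is a minimal illustrative toy, not the paper's architecture: `toy_logits` is a hypothetical placeholder for the network (WaveNet itself predicts a categorical distribution over quantized amplitude levels), and the 256-level quantization is an assumption drawn from common audio companding practice.

```python
import numpy as np

QUANT_LEVELS = 256  # assumed 8-bit quantization of the waveform amplitude


def toy_logits(history):
    """Placeholder predictor standing in for the trained network.

    It crudely biases the next sample toward the previous one; a real
    model would compute these logits from the full sample history.
    """
    logits = np.zeros(QUANT_LEVELS)
    if history:
        logits[history[-1]] = 2.0  # simple continuity prior
    return logits


def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()


def generate(n_samples, rng):
    """Sample the waveform one step at a time, each step conditioned
    on everything generated so far (the autoregressive loop)."""
    samples = []
    for _ in range(n_samples):
        probs = softmax(toy_logits(samples))
        samples.append(int(rng.choice(QUANT_LEVELS, p=probs)))
    return samples


rng = np.random.default_rng(0)
audio = generate(100, rng)
```

Note that this sequential loop is exactly why naive generation is slow at tens of thousands of samples per second; training, by contrast, can evaluate all conditional distributions in parallel because the ground-truth history is known.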