Publication | Open Access

Multichannel Audio Source Separation With Deep Neural Networks

Citations: 290 · References: 47 · Year: 2016
Keywords: Source Separation, Deep Neural Networks, Engineering, Machine Learning, Data Science, Health Sciences, Speech Enhancement, Speech Separation, Speech Processing, Multi-channel Processing, EM Iteration, Deep Learning, Signal Separation, Signal Processing, Multichannel Wiener Filter, Speech Recognition
This article addresses the problem of multichannel audio source separation. We propose a framework in which deep neural networks (DNNs) are used to model the source spectra and are combined with the classical multichannel Gaussian model to exploit the spatial information. The parameters are estimated in an iterative expectation-maximization (EM) fashion and used to derive a multichannel Wiener filter. We present an extensive experimental study of the impact of different design choices on the performance of the proposed technique. We consider different cost functions for DNN training: the probabilistically motivated Itakura-Saito divergence, as well as the Kullback-Leibler, Cauchy, mean squared error, and phase-sensitive cost functions. We also study the number of EM iterations and the use of multiple DNNs, where each DNN aims to improve the spectra estimated by the preceding EM iteration. Finally, we present the application of the proposed method to a speech enhancement problem. The experimental results show the benefit of the proposed multichannel approach over a single-channel DNN-based approach and over a conventional multichannel iterative EM algorithm based on nonnegative matrix factorization.
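To make the separation step concrete, the sketch below shows a generic multichannel Wiener filter of the kind the abstract describes: each source j is modeled by a power spectrogram v_j(f,t) (here, a stand-in for the DNN output) and a per-frequency spatial covariance matrix R_j(f), and the filter is derived from the resulting Gaussian source covariances. This is a minimal NumPy illustration under assumed tensor shapes, not the authors' implementation; it covers only the filtering step, not the EM parameter updates.

```python
import numpy as np

def multichannel_wiener_filter(x_stft, v, R):
    """Separate sources with a multichannel Wiener filter (illustrative sketch).

    x_stft : (F, T, I) complex mixture STFT over I channels
    v      : (J, F, T) nonnegative source power spectra (e.g. DNN estimates)
    R      : (J, F, I, I) spatial covariance matrix per source and frequency
    Returns  (J, F, T, I) complex spatial source-image estimates.
    """
    J, F, T = v.shape
    I = x_stft.shape[-1]
    # Per-source covariance under the Gaussian model: Sigma_j(f,t) = v_j(f,t) R_j(f)
    sigma = v[..., None, None] * R[:, :, None, :, :]   # (J, F, T, I, I)
    # Mixture covariance is the sum over sources
    sigma_x = sigma.sum(axis=0)                        # (F, T, I, I)
    # Small diagonal loading for numerical stability before inversion
    sigma_x = sigma_x + 1e-9 * np.eye(I)
    sigma_x_inv = np.linalg.inv(sigma_x)
    # Wiener gain W_j = Sigma_j Sigma_x^{-1}, applied to the mixture
    W = sigma @ sigma_x_inv                            # (J, F, T, I, I)
    return np.einsum('jftik,ftk->jfti', W, x_stft)
```

Because the per-source Wiener gains sum to (approximately) the identity, the estimated source images add back up to the mixture, which is a quick sanity check for any implementation of this filter.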