Publication | Closed Access
Deep clustering: Discriminative embeddings for segmentation and separation
Citations: 1.4K
References: 25
Year: 2016
Venue: ICASSP 2016
Source Separation · Single-channel Mixtures · Engineering · Machine Learning · Unsupervised Machine Learning · Data Science · Pattern Recognition · Speaker Diarization · Machine Vision · Feature Learning · Segmentation Implicit · Deep Learning · Deep Clustering · Computer Vision · Multi-speaker Speech Recognition · Speech Separation · Speech Processing · Input Mixtures · Image Segmentation
Previous deep network separation methods perform well with a fixed number of distinct source classes, but are unsuitable for arbitrary source classes and numbers. The study proposes deep clustering to solve cocktail-party source separation. The method trains a deep network to produce contrastive embeddings for each time-frequency region of the spectrogram; these embeddings form a low-rank pairwise affinity matrix, and the segmentation implicit in them is decoded via K-means clustering to recover the sources. Experiments demonstrate that a speaker-independent model trained on two-speaker mixtures improves signal quality by about 6 dB on mixtures of unseen speakers and also performs surprisingly well on three-speaker mixtures.
We address the problem of "cocktail-party" source separation in a deep learning framework called deep clustering. Previous deep network approaches to separation have shown promising performance in scenarios with a fixed number of sources, each belonging to a distinct signal class, such as speech and noise. However, for arbitrary source classes and number, "class-based" methods are not suitable. Instead, we train a deep network to assign contrastive embedding vectors to each time-frequency region of the spectrogram in order to implicitly predict the segmentation labels of the target spectrogram from the input mixtures. This yields a deep network-based analogue to spectral clustering, in that the embeddings form a low-rank pair-wise affinity matrix that approximates the ideal affinity matrix, while enabling much faster performance. At test time, the clustering step "decodes" the segmentation implicit in the embeddings by optimizing K-means with respect to the unknown assignments. Preliminary experiments on single-channel mixtures from multiple speakers show that a speaker-independent model trained on two-speaker mixtures can improve signal quality for mixtures of held-out speakers by an average of 6dB. More dramatically, the same model does surprisingly well with three-speaker mixtures.
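The abstract's key idea, that the embeddings form a low-rank affinity matrix approximating the ideal one, corresponds to a training objective of the form ||VV^T − YY^T||_F^2, where V holds unit-norm embeddings per time-frequency bin and Y holds the ideal binary assignments; at test time, K-means on the rows of V decodes the segmentation. Below is a minimal NumPy sketch of both steps on synthetic data (the shapes, the `deep_clustering_loss` and `kmeans` helpers, and all variable names are illustrative assumptions, not the authors' code); note the loss expands so that no N × N matrix is ever formed, which is the source of the "much faster performance" claim.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative sizes): N time-frequency bins, C sources, D-dim embeddings.
N, C, D = 200, 2, 20

# Y: one-hot ideal assignments of each T-F bin to its dominant source.
labels = rng.integers(0, C, size=N)
Y = np.eye(C)[labels]                        # shape (N, C)

# V: unit-norm embeddings a trained network would emit for each T-F bin.
V = rng.standard_normal((N, D))
V /= np.linalg.norm(V, axis=1, keepdims=True)

def deep_clustering_loss(V, Y):
    """||V V^T - Y Y^T||_F^2, expanded to avoid forming the N x N affinities."""
    return (np.linalg.norm(V.T @ V, "fro") ** 2
            - 2 * np.linalg.norm(V.T @ Y, "fro") ** 2
            + np.linalg.norm(Y.T @ Y, "fro") ** 2)

# Sanity check: the low-rank expansion matches the direct N x N computation.
direct = np.linalg.norm(V @ V.T - Y @ Y.T, "fro") ** 2
assert np.isclose(deep_clustering_loss(V, Y), direct)

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: decode the segmentation from the embeddings."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)
    return assign

# Cluster labels per T-F bin act as binary masks, one per recovered source.
masks = kmeans(V, C)
```

With untrained random embeddings the clusters are of course meaningless; training drives VV^T toward YY^T so that K-means on V recovers the ideal binary masks.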