Concepedia

Publication | Closed Access

X-Vectors: Robust DNN Embeddings for Speaker Recognition

Citations: 2.6K

References: 25

Year: 2018

TLDR

Deep neural network embeddings, called x‑vectors, map variable‑length utterances to fixed‑dimensional vectors and have been shown to outperform i‑vectors, yet collecting large labeled datasets for training remains difficult. This study investigates whether data augmentation can improve the performance of x‑vector embeddings for speaker recognition. The authors augment training data with added noise and reverberation, then train x‑vectors and compare them to i‑vector baselines on Speakers in the Wild and NIST SRE 2016 Cantonese. Data augmentation benefits the PLDA classifier but not the i‑vector extractor; the supervised x‑vector DNN effectively exploits augmentation, yielding superior performance on the evaluation datasets.
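
For orientation, the sketch below shows an x-vector-style network following the architecture the paper describes: frame-level TDNN layers, a statistics-pooling layer that concatenates the mean and standard deviation over time, and segment-level layers from which the fixed-dimensional embedding is read. This is a minimal illustration in PyTorch, not the authors' implementation; the paper's recipe was built in Kaldi, and the layer sizes and names here are assumptions for clarity.

```python
# Sketch of an x-vector-style network (assumed PyTorch port; the paper's
# own recipe was built in Kaldi, and layer sizes here are illustrative).
import torch
import torch.nn as nn


class XVectorNet(nn.Module):
    def __init__(self, feat_dim=24, embed_dim=512, num_speakers=1000):
        super().__init__()
        # Frame-level layers: dilated 1-D convolutions act as a TDNN.
        self.frame_layers = nn.Sequential(
            nn.Conv1d(feat_dim, 512, kernel_size=5, dilation=1), nn.ReLU(),
            nn.Conv1d(512, 512, kernel_size=3, dilation=2), nn.ReLU(),
            nn.Conv1d(512, 512, kernel_size=3, dilation=3), nn.ReLU(),
            nn.Conv1d(512, 512, kernel_size=1), nn.ReLU(),
            nn.Conv1d(512, 1500, kernel_size=1), nn.ReLU(),
        )
        # Segment-level layers; the x-vector is read from segment1.
        self.segment1 = nn.Linear(2 * 1500, embed_dim)
        self.segment2 = nn.Linear(embed_dim, embed_dim)
        self.classifier = nn.Linear(embed_dim, num_speakers)

    def forward(self, feats):
        # feats: (batch, feat_dim, num_frames); num_frames may vary.
        h = self.frame_layers(feats)
        # Statistics pooling: mean and standard deviation over time turn
        # a variable-length sequence into one fixed-size vector.
        stats = torch.cat([h.mean(dim=2), h.std(dim=2)], dim=1)
        x_vector = self.segment1(stats)  # the fixed-dimensional embedding
        h = torch.relu(self.segment2(torch.relu(x_vector)))
        # Speaker logits are used only for training; at test time the
        # x-vector itself is scored, e.g. with a PLDA backend.
        return self.classifier(h), x_vector
```

The statistics-pooling step is what makes the mapping from variable-length utterances to fixed-dimensional vectors possible: everything before it operates per frame, everything after it per segment.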

Abstract

In this paper, we use data augmentation to improve the performance of deep neural network (DNN) embeddings for speaker recognition. The DNN, which is trained to discriminate between speakers, maps variable-length utterances to fixed-dimensional embeddings that we call x-vectors. Prior studies have found that embeddings leverage large-scale training datasets better than i-vectors. However, it can be challenging to collect substantial quantities of labeled data for training. We use data augmentation, consisting of added noise and reverberation, as an inexpensive method to multiply the amount of training data and improve robustness. The x-vectors are compared with i-vector baselines on Speakers in the Wild and NIST SRE 2016 Cantonese. We find that while augmentation is beneficial in the PLDA classifier, it is not helpful in the i-vector extractor. However, the x-vector DNN effectively exploits data augmentation, due to its supervised training. As a result, the x-vectors achieve superior performance on the evaluation datasets.
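
The two augmentations the abstract names can be sketched in a few lines of NumPy/SciPy: additive noise mixed in at a target SNR, and reverberation simulated by convolving the waveform with a room impulse response. The helper names and scaling choices below are illustrative assumptions, not the paper's exact recipe, which draws noises from the MUSAN corpus and impulse responses from a public RIR collection.

```python
# Sketch of the two augmentations the abstract names (NumPy/SciPy).
# Function names and scaling choices are illustrative; the paper draws
# noises from the MUSAN corpus and room impulse responses from a
# public RIR collection rather than using fixed inputs like these.
import numpy as np
from scipy.signal import fftconvolve


def add_noise(speech, noise, snr_db):
    """Mix `noise` into `speech` at the requested SNR in dB."""
    # Tile or trim the noise to cover the whole utterance.
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale the noise so the mixture hits the target SNR.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10.0)))
    return speech + scale * noise


def add_reverb(speech, rir):
    """Simulate a room by convolving the signal with an impulse response."""
    wet = fftconvolve(speech, rir, mode="full")[: len(speech)]
    # Rescale so the reverberant copy keeps roughly the original energy.
    norm = np.sqrt(np.mean(speech ** 2) / (np.mean(wet ** 2) + 1e-12))
    return wet * norm
```

Each clean training utterance can then be copied several times with different noises, SNRs, and impulse responses, multiplying the amount of labeled training data without any new annotation effort, which is the "inexpensive method" the abstract refers to.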

