Publication | Closed Access
MFA-Conformer: Multi-scale Feature Aggregation Conformer for Automatic Speaker Verification
Citations: 119
References: 28
Year: 2022
Topics: Convolutional Neural Network, Engineering, Machine Learning, Biometrics, Automatic Speaker Verification, Speech Recognition, Data Science, Pattern Recognition, Speaker Diarization, Robust Speech Recognition, Voice Recognition, Convolution-augmented Transformer, Computer Engineering, Computer Science, Deep Learning, Computer Vision, Speech Communication, Conformer Blocks, Multi-speaker Speech Recognition, Speech Processing, Convolution Neural Networks, Speaker Recognition
In this paper, we present the Multi-scale Feature Aggregation Conformer (MFA-Conformer), an easy-to-implement yet effective backbone for automatic speaker verification based on the Convolution-augmented Transformer (Conformer). The architecture of the MFA-Conformer is inspired by recent state-of-the-art models in speech recognition and speaker verification. Firstly, we introduce a convolution subsampling layer to decrease the computational cost of the model. Secondly, we adopt Conformer blocks, which combine Transformers and convolutional neural networks (CNNs) to capture global and local features effectively. Finally, the output feature maps from all Conformer blocks are concatenated to aggregate multi-scale representations before the final pooling. We evaluate the MFA-Conformer on widely used benchmarks. The best system obtains 0.64%, 1.29% and 1.63% EER on the VoxCeleb1-O, SITW.Dev, and SITW.Eval sets, respectively. MFA-Conformer significantly outperforms the popular ECAPA-TDNN systems in both recognition performance and inference speed. Last but not least, the ablation studies clearly demonstrate that the combination of global and local feature learning leads to robust and accurate speaker embedding extraction. We have also released the code for future comparison.
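The multi-scale aggregation step described in the abstract, where the output feature maps of all Conformer blocks are concatenated before pooling, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the dimensions are arbitrary, the Conformer block outputs are stand-in random tensors, and simple mean/std statistics pooling is used as a placeholder for the paper's final pooling layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper):
# T frames, d features per block, L Conformer blocks.
T, d, L = 50, 8, 3

# Stand-ins for the output feature maps of the L Conformer blocks,
# each of shape (T, d); in the real model these come from the network.
block_outputs = [rng.standard_normal((T, d)) for _ in range(L)]

# Multi-scale feature aggregation: concatenate all block outputs
# along the feature axis before pooling -> shape (T, L * d).
mfa = np.concatenate(block_outputs, axis=1)
assert mfa.shape == (T, L * d)

# Placeholder pooling: mean + std over the time axis, yielding a
# fixed-length utterance-level embedding of shape (2 * L * d,).
embedding = np.concatenate([mfa.mean(axis=0), mfa.std(axis=0)])
print(embedding.shape)  # -> (48,)
```

The key point the sketch captures is that pooling operates on the concatenation of shallow and deep block outputs, so the utterance embedding mixes features from multiple depths rather than only the last block.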