Publication | Closed Access
SpeechMoE2: Mixture-of-Experts Model with Improved Routing
Citations: 17 | References: 14 | Year: 2022
Keywords: Artificial Intelligence, Engineering, Machine Learning, Mixture Of Experts, Speech Recognition, Robust Speech Recognition, Automatic Recognition, Mixture-of-experts Model, Acoustic Analysis, Health Sciences, Speech Models, Router Architecture, Computer Engineering, Computer Science, Speech Communication, Speech Technology, Voice, Multi-speaker Speech Recognition, Speech Acoustics, Acoustic Models, Speech Processing, Speech Input, Speech Perception
Mixture-of-experts based acoustic models with dynamic routing mechanisms have shown promising results for speech recognition. The design of the router architecture is important for achieving large model capacity and high computational efficiency. Our previous work, SpeechMoE, uses only a local grapheme embedding to help the router make routing decisions. To further improve speech recognition performance across varying domains and accents, we propose a new router architecture that integrates additional global domain and accent embeddings into the router input to promote adaptability. Experimental results show that the proposed SpeechMoE2 achieves a lower character error rate (CER) than SpeechMoE with a comparable number of parameters on both multi-domain and multi-accent tasks. Specifically, the proposed method provides relative CER improvements of 1.6% to 4.8% on the multi-domain task and 1.9% to 17.7% on the multi-accent task. Moreover, increasing the number of experts yields consistent performance improvements while keeping the computational cost constant.
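To make the routing idea in the abstract concrete, below is a minimal PyTorch sketch of a router that concatenates the frame-level hidden state and local grapheme embedding (as in SpeechMoE) with global utterance-level domain and accent embeddings before computing expert logits. All dimensions, the module name `SpeechMoE2Router`, and the top-1 expert selection are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SpeechMoE2Router(nn.Module):
    """Sketch of a router whose input combines local (per-frame) and
    global (per-utterance) embeddings, as described in the abstract.
    Shapes and top-1 selection are assumptions for illustration."""

    def __init__(self, hidden_dim, grapheme_dim, domain_dim, accent_dim, num_experts):
        super().__init__()
        router_in = hidden_dim + grapheme_dim + domain_dim + accent_dim
        self.gate = nn.Linear(router_in, num_experts)

    def forward(self, hidden, grapheme_emb, domain_emb, accent_emb):
        # hidden:       (batch, time, hidden_dim)   frame-level features
        # grapheme_emb: (batch, time, grapheme_dim) local grapheme embedding
        # domain_emb:   (batch, domain_dim)         one global vector per utterance
        # accent_emb:   (batch, accent_dim)         one global vector per utterance
        T = hidden.size(1)
        # Broadcast the global utterance-level embeddings across all frames.
        domain = domain_emb.unsqueeze(1).expand(-1, T, -1)
        accent = accent_emb.unsqueeze(1).expand(-1, T, -1)
        router_input = torch.cat([hidden, grapheme_emb, domain, accent], dim=-1)
        logits = self.gate(router_input)      # (batch, time, num_experts)
        # Sparse top-1 routing: each frame is sent to a single expert,
        # so per-frame compute stays constant as num_experts grows.
        expert_index = logits.argmax(dim=-1)  # (batch, time)
        return logits, expert_index
```

Under this sketch, the constant-cost claim in the abstract follows from the sparse dispatch: adding experts enlarges model capacity, but each frame still activates only one expert per layer.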