
Publication | Closed Access

EduSpeak<sup>®</sup>: A speech recognition and pronunciation scoring toolkit for computer-aided language learning applications

Citations: 80

References: 17

Year: 2010

TLDR

EduSpeak® is a software toolkit that equips language‑learning developers with state‑of‑the‑art speech recognition and automatic pronunciation scoring to give overall quality feedback and pinpoint specific production errors. The authors aim to estimate the grade a human expert would assign to a paragraph or phrase’s pronunciation quality. They train machine‑score predictors on databases of nonnative speech paired with human ratings, provide phone‑level mispronunciation detection that flags specific errors, and evaluate two detection approaches on 130,000 phones from 206 speakers. The best system’s classification error for reliably transcribed phones is only slightly higher than the average pairwise disagreement among human transcribers.

Abstract

SRI International’s EduSpeak® system is a software development toolkit that enables developers of interactive language education software to use state-of-the-art speech recognition and pronunciation scoring technology. Automatic pronunciation scoring allows the computer to provide feedback on the overall quality of pronunciation and to point to specific production problems. We review our approach to pronunciation scoring, where our aim is to estimate the grade that a human expert would assign to the pronunciation quality of a paragraph or a phrase. Using databases of nonnative speech and corresponding human ratings at the sentence level, we evaluate different machine scores that can be used as predictor variables to estimate pronunciation quality. For more specific feedback on pronunciation, the EduSpeak toolkit supports a phone-level mispronunciation detection functionality that automatically flags specific phone segments that have been mispronounced. Phone-level information makes it possible to provide the student with feedback about specific pronunciation mistakes. Two approaches to mispronunciation detection were evaluated on a phonetically transcribed database of 130,000 phones uttered in continuous speech sentences by 206 nonnative speakers. Results show that the classification error of the best system, for the phones that can be reliably transcribed, is only slightly higher than the average pairwise disagreement between the human transcribers.
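The two ideas in the abstract can be illustrated with a minimal sketch: fitting a regression from sentence-level machine scores to human ratings, and thresholding a phone-level score to flag mispronunciations. Everything below is hypothetical for illustration only; the feature names, synthetic data, and threshold are assumptions and not EduSpeak's actual scores, models, or API.

```python
# Hypothetical sketch: (1) estimate the grade a human rater would assign by
# least-squares regression on machine scores, and (2) flag phone segments
# whose score falls below a threshold. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sentence-level "machine scores": two predictor variables per
# sentence (e.g. a posterior-based score and a duration score -- invented
# names), with human ratings on a roughly 1-5 scale as the target.
X = rng.uniform(0.0, 1.0, size=(50, 2))
true_w = np.array([2.5, 1.5])          # weights used to generate the data
y = 1.0 + X @ true_w + rng.normal(0.0, 0.05, size=50)

# Fit grade = b + w . scores by ordinary least squares.
A = np.hstack([np.ones((50, 1)), X])   # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_grade(scores):
    """Predicted human-style grade for a vector of machine scores."""
    return coef[0] + scores @ coef[1:]

def flag_mispronounced(phone_scores, threshold=0.3):
    """Mark a phone as mispronounced when its score is below a threshold
    (in practice the threshold would be tuned on held-out rated speech)."""
    return [s < threshold for s in phone_scores]
```

In a real system the predictors would come from the recognizer (and the regression would be trained on the nonnative-speech databases with human ratings that the abstract describes); the sketch only shows the shape of the mapping from machine scores to a grade and of the phone-level flagging decision.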
