Publication | Closed Access
The multi-channel Wall Street Journal audio visual corpus (MC-WSJ-AV): specification and initial experiments
Citations: 175
References: 10
Year: 2005
Venue: Unknown
Keywords: Music, Engineering, Communication, Speech Recognition, Pattern Recognition, Audio Signal Processing, Audio Analysis, Robust Speech Recognition, Available Speech Corpora, Health Sciences, Initial Experiments, Multimodal Signal Processing, Audio Retrieval, Computer Science, Distant Speech Recognition, Signal Processing, Speech Communication, Audio Mining, Multi-speaker Speech Recognition, Read Speech, Speech Processing, WSJCAM0 Database, Speech Input, Speech Perception, Speech Interface
The recognition of speech in meetings poses a number of challenges to current automatic speech recognition (ASR) techniques. Meetings typically take place in rooms with non-ideal acoustic conditions and significant background noise, and may contain large sections of overlapping speech. In such circumstances, headset microphones have to date provided the best recognition performance; however, participants are often reluctant to wear them. Microphone arrays provide an alternative to close-talking microphones by providing speech enhancement through directional discrimination. Unfortunately, the development of array front-end systems for state-of-the-art large vocabulary continuous speech recognition suffers from a lack of necessary resources, as most available speech corpora consist only of single-channel recordings. This paper describes the collection of an audio-visual corpus of read speech from a number of instrumented meeting rooms. The corpus, based on the WSJCAM0 database, is suitable for use in continuous speech recognition experiments and is captured using a variety of microphones, including arrays, as well as close-up and wider angle cameras. The paper also describes some initial ASR experiments on the corpus comparing the use of close-talking microphones with both a fixed and a blind array beamforming technique.
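The fixed array beamforming mentioned in the abstract is typically delay-and-sum: each microphone signal is time-aligned toward the desired source direction and the aligned channels are averaged, reinforcing the target speech and attenuating off-axis noise. A minimal sketch of that idea is below; it is an illustration under standard plane-wave assumptions, not the paper's actual front-end, and the function name and parameters are hypothetical.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, source_dir, fs, c=343.0):
    """Illustrative fixed delay-and-sum beamformer (plane-wave model).

    signals:       (n_mics, n_samples) array of microphone recordings.
    mic_positions: (n_mics, 3) microphone coordinates in metres.
    source_dir:    unit vector pointing from the array toward the source.
    fs:            sample rate in Hz; c: speed of sound in m/s.
    """
    n_mics, n_samples = signals.shape
    # Per-microphone steering delay: mics further along the source
    # direction hear the wavefront earlier, so they need a larger delay.
    delays = mic_positions @ source_dir / c        # seconds, shape (n_mics,)
    delays -= delays.min()                         # common offset, keeps alignment
    # Apply fractional delays as phase shifts in the frequency domain.
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)         # (n_freqs,)
    spectra = np.fft.rfft(signals, axis=1)                 # (n_mics, n_freqs)
    phase = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    aligned = np.fft.irfft(spectra * phase, n=n_samples, axis=1)
    # Averaging the aligned channels boosts on-axis speech relative to
    # diffuse noise (ideally by ~10*log10(n_mics) dB in SNR).
    return aligned.mean(axis=0)
```

The "blind" alternative compared in the paper estimates the steering information from the data itself rather than from known microphone geometry.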