Concepedia

Publication | Closed Access

MikeTalk: a talking facial display based on morphing visemes

Citations: 146
References: 16
Year: 2002

TLDR

We present MikeTalk, a text‑to‑audiovisual speech synthesizer that converts input text into a synchronized audiovisual speech stream. MikeTalk is built from a recorded visual corpus of visemes, uses optical‑flow–derived correspondences to morph between viseme images, concatenates these morphs into complete utterances, and drives the morphing rate and sequence with phoneme and timing data extracted from a text‑to‑speech synthesizer. This approach synchronizes the visual and audio streams, producing a photorealistic talking face.

Abstract

We present MikeTalk, a text-to-audiovisual speech synthesizer that converts input text into an audiovisual speech stream. MikeTalk is built using visemes, a set of images spanning a large range of mouth shapes. The visemes are acquired from a recorded visual corpus of a human subject, designed specifically to elicit one instantiation of each viseme. Using optical flow methods, the correspondence from every viseme to every other viseme is computed automatically. By morphing along this correspondence, a smooth transition between viseme images may be generated. A complete visual utterance is constructed by concatenating viseme transitions. Finally, phoneme and timing information extracted from a text-to-speech synthesizer is exploited to determine which viseme transitions to use and the rate at which the morphing process should occur. In this manner, we are able to synchronize the visual speech stream with the audio speech stream, and hence give the impression of a photorealistic talking face.
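The morphing step described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes grayscale NumPy images, a precomputed dense flow field (the paper derives correspondences with optical flow), and uses simple nearest-neighbour backward sampling with a cross-dissolve; the function name `morph_frame` and all parameters are illustrative.

```python
import numpy as np

def morph_frame(img_a, img_b, flow_ab, alpha):
    """Generate one intermediate frame between two viseme images.

    img_a, img_b : (H, W) grayscale images
    flow_ab      : (H, W, 2) dense flow (dx, dy) mapping img_a pixels to img_b
    alpha        : interpolation weight, 0.0 -> img_a, 1.0 -> img_b
    """
    h, w = img_a.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Approximate backward warping: pull img_a partway forward along the
    # flow, and img_b partway back, clamping coordinates at the borders.
    ax = np.clip(np.round(xs - alpha * flow_ab[..., 0]).astype(int), 0, w - 1)
    ay = np.clip(np.round(ys - alpha * flow_ab[..., 1]).astype(int), 0, h - 1)
    bx = np.clip(np.round(xs + (1 - alpha) * flow_ab[..., 0]).astype(int), 0, w - 1)
    by = np.clip(np.round(ys + (1 - alpha) * flow_ab[..., 1]).astype(int), 0, h - 1)
    warped_a = img_a[ay, ax]
    warped_b = img_b[by, bx]
    # Cross-dissolve the two warped images.
    return (1 - alpha) * warped_a + alpha * warped_b
```

A full viseme transition would sweep `alpha` from 0 to 1 at the rate dictated by the phoneme timing from the text-to-speech synthesizer, and an utterance would concatenate such transitions.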
