Publication | Open Access
Parametric multichannel audio coding: synthesis of coherence cues
Year: 2005 | Citations: 39 | References: 27
Keywords: Music, Level Difference, Auditory Modeling, Spatial Audio, Health Sciences, Engineering, Sound Rendering, Audio Signal Processing, Audio Analysis, Coherence Cues, Speech Processing, Coherence Synthesis, Multi-channel Processing, Speech Perception, Signal Processing, Speech Recognition
Parametric multichannel audio coding represents an audio signal as a single audio channel plus side information. The side information contains estimates of perceptually relevant differences between the original audio channels; usually, time difference, level difference, and coherence cues are considered. These cues determine, to a large degree, the auditory spatial image that is perceived when playing back multichannel audio signals. Level-difference and time-difference synthesis is simple: different gain factors and delays are applied to the sum signal in subbands to generate the different decoder output channels. However, it is not as obvious how coherence cues can be synthesized. Several heuristic methods for coherence synthesis have been proposed previously. In this paper, we propose a systematic approach for coherence synthesis: the coherence measured in the encoder between a pair of channels is reproduced in the decoder. For that purpose, decorrelation filters modeling late reverberation, with impulse responses several hundred milliseconds long, are used, enabling the scheme to generate natural-sounding diffuse sound. A method for reducing the computational complexity of the scheme is presented. The results of a subjective test indicate that the proposed scheme achieves good audio quality. Furthermore, the scheme was compared to a previous scheme without multichannel coherence synthesis and performs significantly better for all items tested.
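The decoder-side synthesis described in the abstract can be sketched as follows. This is a minimal, broadband illustration only: the paper works in subbands and uses carefully designed decorrelation filters, whereas here the level difference is applied as a single gain, the decorrelator is a crude exponentially decaying noise impulse response, and the mixing rule (direct signal plus an orthogonal decorrelated part, weighted to hit a target normalized cross-correlation) is an assumption, not the paper's exact scheme. All function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000  # sample rate in Hz (assumption)

def decorrelation_filter(length_s=0.3, decay_s=0.1):
    """Exponentially decaying white noise as a crude model of late
    reverberation (the paper's filters are more carefully designed)."""
    n = int(length_s * fs)
    t = np.arange(n) / fs
    h = rng.standard_normal(n) * np.exp(-t / decay_s)
    return h / np.linalg.norm(h)  # unit-energy impulse response

def synthesize_pair(x, level_diff_db=0.0, target_coherence=1.0):
    """Generate two output channels from the mono sum signal x with a
    given level difference (dB) and target inter-channel coherence."""
    g = 10 ** (level_diff_db / 20)  # level-difference synthesis: a gain
    # Decorrelated version of x, energy-matched to the direct signal.
    d = np.convolve(x, decorrelation_filter())[: len(x)]
    d *= np.linalg.norm(x) / np.linalg.norm(d)
    c = target_coherence
    # Mix direct and decorrelated parts so the normalized cross-correlation
    # between the two outputs approximates the target coherence.
    y1 = g * (c * x + np.sqrt(1.0 - c**2) * d)
    y2 = x  # reference channel keeps the direct signal
    return y1, y2
```

For white-noise input, measuring the normalized cross-correlation of the two outputs recovers roughly the requested coherence, and the RMS ratio recovers the requested level difference.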