Publication | Closed Access
Acoustic profiles in vocal emotion expression.
Citations: 1.8K
References: 0
Year: 1996
Topics: Music, Acoustic Profiles, Speech Acoustics, Affective Computing, Social Sciences, Speech Perception, Emotion, Emotion Recognition, Speech Communication, Health Sciences
The study used 224 professional actor portrayals of 14 emotions, digitally analyzed acoustic parameters, and tested predictions from Scherer's component process model. Judges decoded emotions with high accuracy, and acoustic parameters distinguished intensity and valence, confirming most theoretical predictions while revealing some needed revisions.
Professional actors' portrayals of 14 emotions varying in intensity and valence were presented to judges. The results on decoding replicate earlier findings on the ability of judges to infer vocally expressed emotions with much-better-than-chance accuracy, including consistently found differences in the recognizability of different emotions. A total of 224 portrayals were subjected to digital acoustic analysis to obtain profiles of vocal parameters for different emotions. The data suggest that vocal parameters not only index the degree of intensity typical for different emotions but also differentiate valence or quality aspects. The data are also used to test theoretical predictions on vocal patterning based on the component process model of emotion (K.R. Scherer, 1986). Although most hypotheses are supported, some need to be revised on the basis of the empirical evidence. Discriminant analysis and jackknifing show remarkably high hit rates and patterns of confusion that closely mirror those found for listener-judges.
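Note on the classification method: the "jackknifed" discriminant analysis described in the abstract corresponds to leave-one-out classification of portrayals from their acoustic profiles. The sketch below is a minimal illustration of that procedure in Python with scikit-learn; the feature set, labels, and data are placeholders and are not the authors' actual parameters or software.

# Minimal sketch (assumptions noted above): leave-one-out ("jackknife")
# linear discriminant classification of emotion portrayals from acoustic features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import accuracy_score, confusion_matrix

# X: one row per portrayal (224 in the study), columns are acoustic parameters
#    (e.g., mean F0, F0 range, intensity, speech rate, spectral energy distribution).
# y: emotion category for each portrayal (14 in the study).
# Placeholder synthetic data stands in for the real measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(224, 10))
y = rng.integers(0, 14, size=224)

lda = LinearDiscriminantAnalysis()
# Leave-one-out cross-validation: each portrayal is classified by a model
# trained on all remaining portrayals, giving the jackknifed hit rate.
pred = cross_val_predict(lda, X, y, cv=LeaveOneOut())

hit_rate = accuracy_score(y, pred)      # proportion of correctly classified portrayals
conf = confusion_matrix(y, pred)        # confusion pattern across emotion categories
print(f"jackknifed hit rate: {hit_rate:.2%}")

The confusion matrix produced this way can then be compared with the confusion patterns of human listener-judges, which is the comparison the abstract reports.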