Publication | Open Access
Words Can Shift: Dynamically Adjusting Word Representations Using Nonverbal Behaviors
Citations: 441 | References: 35 | Year: 2019
Keywords: Psycholinguistics, Spoken Language Processing, Multimodal Sentiment Analysis, Speech Recognition, Natural Language Processing, Word Embeddings, Computational Linguistics, Affective Computing, Language Studies, Nonverbal Intents, Expressive Nonverbal Representations, Health Sciences, Cognitive Science, Multimodal Signal Processing, Speech Communication, Facial Expression Recognition, Facial Animation, Speech Processing, Shifted Word Representations, Paralinguistics, Speech Perception, Linguistics, Emotion Recognition, Nonverbal Communication
Human communication conveys intentions through both verbal and nonverbal behaviors, and these behaviors vary dynamically with vocal and facial cues; language models therefore need to capture not only the literal meaning of words but also the nonverbal context in which they are spoken. The authors propose the Recurrent Attended Variation Embedding Network (RAVEN), which learns expressive nonverbal representations from fine-grained visual and acoustic patterns and dynamically shifts word embeddings according to the accompanying nonverbal cues. RAVEN applies recurrent attention over fine-grained nonverbal subword sequences to generate context-aware word embeddings, and the authors visualize the resulting shifts across different nonverbal contexts to reveal common patterns of multimodal variation. The model achieves competitive performance on two publicly available datasets for multimodal sentiment analysis and emotion recognition.
Humans convey their intentions through the usage of both verbal and nonverbal behaviors during face-to-face communication. Speaker intentions often vary dynamically depending on different nonverbal contexts, such as vocal patterns and facial expressions. As a result, when modeling human language, it is essential to not only consider the literal meaning of the words but also the nonverbal contexts in which these words appear. To better model human language, we first model expressive nonverbal representations by analyzing the fine-grained visual and acoustic patterns that occur during word segments. In addition, we seek to capture the dynamic nature of nonverbal intents by shifting word representations based on the accompanying nonverbal behaviors. To this end, we propose the Recurrent Attended Variation Embedding Network (RAVEN) that models the fine-grained structure of nonverbal subword sequences and dynamically shifts word representations based on nonverbal cues. Our proposed model achieves competitive performance on two publicly available datasets for multimodal sentiment analysis and emotion recognition. We also visualize the shifted word representations in different nonverbal contexts and summarize common patterns regarding multimodal variations of word representations.
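The core idea of the abstract — attending over the nonverbal (visual and acoustic) frames that accompany a word and using the attended summary to shift that word's embedding — can be sketched in a few lines of numpy. This is an illustrative simplification, not the paper's actual architecture: RAVEN's real model uses learned recurrent encoders and trained parameters, whereas here `W_att`, `W_shift`, and `alpha` are hypothetical placeholders with random values.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def shifted_embedding(word_emb, nonverbal_frames, W_att, W_shift, alpha=0.5):
    """Toy illustration: shift a word embedding by attended nonverbal cues.

    word_emb         : (d_w,)  base word vector
    nonverbal_frames : (T, d_n) per-frame visual/acoustic features spanning the word segment
    W_att            : (d_n,)  attention scoring vector (hypothetical parameter)
    W_shift          : (d_w, d_n) projection from nonverbal space into word-embedding space
    alpha            : scaling of the nonverbal shift
    """
    scores = nonverbal_frames @ W_att      # (T,) relevance score per subword frame
    weights = softmax(scores)              # attention distribution over frames
    context = weights @ nonverbal_frames   # (d_n,) attended nonverbal summary
    shift = W_shift @ context              # project summary into word space
    return word_emb + alpha * shift        # dynamically shifted word representation

# Tiny example with random placeholder values.
d_w, d_n, T = 4, 3, 5
w = rng.normal(size=d_w)
frames = rng.normal(size=(T, d_n))
W_att = rng.normal(size=d_n)
W_shift = rng.normal(size=(d_w, d_n))
out = shifted_embedding(w, frames, W_att, W_shift)
print(out.shape)  # (4,)
```

With `alpha = 0` the function returns the original embedding unchanged, mirroring the intuition that the nonverbal shift is an additive adjustment on top of a static word vector rather than a replacement for it.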