Publication | Open Access
Multimodal automatic assessment of acute pain through facial videos and heart rate signals utilizing transformer-based architectures
Year: 2024 · Citations: 34 · References: 31
Accurate and objective pain evaluation is crucial for developing effective pain management protocols that alleviate distress and prevent loss of patient functionality. This study introduces a multimodal framework for the automatic assessment of acute pain from facial videos and heart rate signals. The framework comprises four modules: the *Spatial Module*, which extracts embeddings from videos; the *Heart Rate Encoder*, which maps heart rate signals into a higher-dimensional space; *AugmNet*, which creates learning-based augmentations in the latent space; and the *Temporal Module*, which uses the extracted video and heart rate embeddings for the final assessment. The *Spatial Module* is pre-trained with a two-stage strategy: first with a face recognition objective to learn universal facial features, and second with an emotion recognition objective in a multitask learning setup, enabling the extraction of high-quality embeddings for automatic pain assessment. Experiments on facial videos and heart rate signals extracted from electrocardiograms of the *BioVid* database, together with a direct comparison to 29 studies, demonstrate state-of-the-art performance in both unimodal and multimodal settings while maintaining high efficiency. In the multimodal setting, the framework achieves 82.74% accuracy on the binary and 39.77% on the multi-level pain classification task, using 9.62 million parameters for the entire framework.
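To make the four-module pipeline concrete, below is a minimal PyTorch sketch of how such a framework could be wired together. Only the four module names come from the abstract; every layer choice, dimension, and the fusion strategy (concatenating video and heart rate embeddings along the time axis) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the four-module framework from the abstract.
# All internals are assumptions; only the module roles follow the paper.
import torch
import torch.nn as nn


class SpatialModule(nn.Module):
    """Extracts per-frame embeddings from facial video frames."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        # Stand-in CNN backbone; the paper pre-trains this module on face
        # recognition, then emotion recognition (multitask learning).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        x = self.backbone(frames.reshape(b * t, c, h, w))
        return x.reshape(b, t, -1)  # (batch, time, embed_dim)


class HeartRateEncoder(nn.Module):
    """Maps the 1-D heart rate signal into a higher-dimensional space."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(1, embed_dim), nn.ReLU(),
                                  nn.Linear(embed_dim, embed_dim))

    def forward(self, hr: torch.Tensor) -> torch.Tensor:
        # hr: (batch, time) -> (batch, time, embed_dim)
        return self.proj(hr.unsqueeze(-1))


class AugmNet(nn.Module):
    """Creates learning-based augmentations in the latent space."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.perturb = nn.Linear(embed_dim, embed_dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Residual perturbation of the embeddings (illustrative choice).
        return z + self.perturb(z)


class TemporalModule(nn.Module):
    """Transformer encoder over the fused video + heart rate embeddings."""
    def __init__(self, embed_dim: int = 128, num_classes: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        h = self.encoder(z).mean(dim=1)  # temporal average pooling
        return self.head(h)


# Toy forward pass: 2 clips of 16 frames (64x64 RGB) with aligned heart rate.
spatial, hr_enc = SpatialModule(), HeartRateEncoder()
augm, temporal = AugmNet(), TemporalModule(num_classes=2)
frames = torch.randn(2, 16, 3, 64, 64)
hr = torch.randn(2, 16)
z = torch.cat([spatial(frames), hr_enc(hr)], dim=1)  # fuse along time axis
logits = temporal(augm(z))
print(logits.shape)  # torch.Size([2, 2]) -> binary pain classification
```

Setting `num_classes` higher would correspond to the multi-level pain classification task reported in the abstract; the binary/multi-level distinction only changes the output head in this sketch.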