Publication | Open Access
A Multimodal-Sensor-Enabled Room for Unobtrusive Group Meeting Analysis
27 Citations · 44 References · Year: 2018 · Venue: Unknown
Keywords: Engineering, Human-machine Interaction, Communication, Interaction Management, Speech Recognition, Group Meetings, Affective Computing, Multimodal Interaction, Conversation Analysis, Multimodal Human Computer Interface, Health Sciences, Cognitive Science, Group Interaction, Multimodal Signal Processing, Multimodal-sensor-enabled Room, Speech Communication, Manual Coding, Group Communication, Social Computing, Eye Tracking, Human Interaction, Human-computer Interaction, Speech Perception, Smart Meeting Room
Group meetings can suffer from serious problems that undermine performance, including bias, "groupthink", fear of speaking, and unfocused discussion. To better understand these issues, propose interventions, and thus improve team performance, we need to study human dynamics in group meetings. However, this process currently depends heavily on manual coding and video cameras. Manual coding is tedious, inaccurate, and subjective, while active video cameras can affect the natural behavior of meeting participants. Here, we present a smart meeting room that combines microphones and unobtrusive ceiling-mounted Time-of-Flight (ToF) sensors to understand group dynamics in team meetings. We automatically process the multimodal sensor outputs with signal, image, and natural language processing algorithms to estimate participant head pose, visual focus of attention (VFOA), non-verbal speech patterns, and discussion content. We derive metrics from these automatic estimates and correlate them with user-reported rankings of emergent group leaders and major contributors to produce accurate predictors. We validate our algorithms and report results on a new dataset of 10 groups (36 individuals) performing a lunar survival task in the multimodal-sensor-enabled smart room.
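The abstract describes correlating automatically derived metrics with user-reported rankings of emergent leaders. A natural fit for comparing a continuous metric against an ordinal ranking is a rank correlation such as Spearman's rho. The sketch below is illustrative only: the metric (total speaking time), the group data, and the choice of Spearman correlation are assumptions for demonstration, not details taken from the paper.

```python
def rank(values):
    """Assign ranks (1 = largest value), averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # group tied values together
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation between two equal-length sequences."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical 4-person group: sensor-derived speaking time (seconds)
# vs. the leadership rank reported by participants (1 = top-ranked leader).
speaking_time = [312.0, 145.0, 88.0, 201.0]
leader_rank = [1, 3, 4, 2]

# Reported ranks list "best first", so negate them to align orderings.
rho = spearman(speaking_time, [-r for r in leader_rank])
print(round(rho, 3))  # → 1.0 (speaking time perfectly tracks the ranking here)
```

In practice one would compute such a correlation per metric (speaking time, VFOA received, interruption counts, etc.) across all groups to identify which metrics are accurate predictors of perceived leadership and contribution.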