Publication | Closed Access

UNDERSTANDING GESTURES IN MULTIMODAL HUMAN COMPUTER INTERACTION

Year: 2000 · Citations: 40 · References: 2
Keywords: Computer Vision, Free Hand Gestures, Engineering, Gesture Interpretation, Eye Tracking, Education, Multimodal Interaction, Human-computer Interaction, Computer Science, Multimodal Human Computer Interface, Multimodal Communication, Annotation, Linguistics, Gesture Processing, Gesture Recognition, Gesture Primitives, American Sign Language
In recent years, advances in computer vision research have made free hand gestures a viable means of human-computer interaction (HCI). Gestures in combination with speech can be an important step toward natural, multimodal HCI. However, interpreting gestures in a multimodal setting is a particularly challenging problem. In this paper, we propose an approach for studying multimodal HCI in the context of a computerized map. An implemented testbed allows us to conduct user studies and address issues in understanding hand gestures in a multimodal computer interface. The absence of an adequate gesture classification for HCI makes gesture interpretation difficult. We formalize a method for bootstrapping the interpretation process through a semantic classification of gesture primitives in an HCI context, distinguishing two main categories of gesture classes based on their spatio-temporal deixis. User studies revealed that gesture primitives, originally extracted from weather map narration, form patterns of co-occurrence with speech parts in association with their meaning in a visual display control system. The results indicated two levels of gesture meaning: the individual stroke and the motion complex. These findings suggest a direction for approaching interpretation in natural gesture-speech interfaces.
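To make the two-level structure described in the abstract concrete, the sketch below models gesture primitives and splits them into deictic and non-deictic classes, then aggregates strokes into a motion complex. All type names, field names, and the classification rule are hypothetical illustrations, not the paper's actual taxonomy or implementation.

```python
from dataclasses import dataclass

# Hypothetical gesture-primitive record; the fields are illustrative
# placeholders, not features defined in the paper.
@dataclass
class GesturePrimitive:
    stroke_type: str          # e.g. "point", "contour", "circle"
    refers_to_location: bool  # spatial deixis: does the stroke pick out a map location?
    duration_s: float         # stroke duration in seconds

def classify_deixis(g: GesturePrimitive) -> str:
    """First level of meaning (individual stroke): assign the primitive to
    one of two broad classes based on its spatio-temporal deixis."""
    return "deictic" if g.refers_to_location else "non-deictic"

def summarize_motion_complex(strokes: list) -> dict:
    """Second level of meaning (motion complex): aggregate a sequence of
    strokes and summarize its class composition."""
    counts: dict = {}
    for s in strokes:
        label = classify_deixis(s)
        counts[label] = counts.get(label, 0) + 1
    return counts
```

A downstream interpreter could then correlate these class labels with co-occurring speech parts, in the spirit of the co-occurrence patterns the studies report.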