Concepedia

Publication | Closed Access

Head, Eye, and Hand Patterns for Driver Activity Recognition

Citations: 136
References: 27
Year: 2014

Abstract

In this paper, a multiview, multimodal vision framework is proposed to characterize driver activity from head, eye, and hand cues. Leveraging the three cue types yields a richer description of the driver's state and improves activity-detection performance. First, regions of interest are extracted from two videos, one observing the driver's hands and one the driver's head. Next, hand-location hypotheses are generated and integrated with a head-pose and facial-landmark module to classify driver activity into three states: wheel-region interaction with two hands on the wheel, gear-region activity, or instrument-cluster-region activity. The method is evaluated on a video dataset captured in on-road settings.
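The fusion step described in the abstract can be sketched as a simple decision over per-frame cues. The sketch below is a hypothetical illustration only: the cue names (`left_hand_on_wheel`, `head_yaw_deg`) and the rule thresholds are assumptions for clarity, not the paper's actual classifier.

```python
# Illustrative sketch (NOT the paper's method): fuse hand-location
# hypotheses with a head-pose cue to pick one of the three activity
# states named in the abstract.
from dataclasses import dataclass

WHEEL, GEAR, INSTRUMENT = "wheel", "gear", "instrument_cluster"

@dataclass
class FrameCues:
    left_hand_on_wheel: bool   # from the hand-hypothesis module (assumed)
    right_hand_on_wheel: bool
    head_yaw_deg: float        # from the head-pose module (assumed; + = toward console)

def classify_activity(c: FrameCues) -> str:
    """Toy decision rule over fused cues; thresholds are illustrative."""
    if c.left_hand_on_wheel and c.right_hand_on_wheel:
        return WHEEL           # two hands on the wheel
    if c.head_yaw_deg > 15.0:  # head turned toward the (assumed) gear region
        return GEAR
    return INSTRUMENT          # otherwise attribute to the instrument cluster

print(classify_activity(FrameCues(True, True, 0.0)))   # → wheel
```

In the paper's framework the equivalent decision would be learned from the extracted regions of interest rather than hand-coded; the sketch only shows how the three cue streams could feed one per-frame activity label.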
