Publication | Closed Access
Pixel-Level Hand Detection in Ego-centric Videos
Citations: 240
References: 22
Year: 2013
Venue: Unknown
Keywords: Machine Vision, Image Analysis, Engineering, Hand Detection, Pattern Recognition, Gesture Recognition, Biometrics, Eye Tracking, Human Pose Estimation, Scene Understanding, Computer Science, Deep Learning, Multimodal Human Computer Interface, Computer Vision, Pixel-level Hand Detection, Hand-object Manipulation
We address the task of pixel-level hand detection in the context of ego-centric cameras. Extracting hand regions in ego-centric videos is a critical step for understanding hand-object manipulation and analyzing hand-eye coordination. However, in contrast to traditional applications of hand detection, such as gesture interfaces or sign-language recognition, ego-centric videos present new challenges such as rapid changes in illumination, significant camera motion, and complex hand-object manipulations. To quantify the challenges and performance in this new domain, we present a fully labeled indoor/outdoor ego-centric hand detection benchmark dataset containing over 200 million labeled pixels, with hand images captured under a wide variety of illumination conditions. Using both our dataset and a publicly available indoor ego-centric dataset, we provide an extensive analysis of detection performance across a wide range of local appearance features. Our analysis highlights the effectiveness of sparse features and the importance of modeling global illumination. We propose a modeling strategy based on these findings and show that our model outperforms several baseline approaches.
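The per-pixel formulation the abstract describes — scoring every pixel of a frame for hand likelihood from local appearance, while accounting for illumination — can be illustrated with a much simpler stand-in: a single-Gaussian skin-color model fit in rg chromaticity space, where normalizing out intensity gives a crude form of illumination invariance. This is a hedged sketch, not the paper's method (which uses richer local appearance features and global illumination modeling); the function names and the Mahalanobis threshold below are illustrative assumptions.

```python
import numpy as np

def fit_skin_model(pixels):
    """Fit a single Gaussian to hand-labeled pixels in normalized-rg space.

    pixels: (N, 3) array of RGB values sampled from hand regions.
    Returns the mean and inverse covariance of the rg chromaticity
    distribution (intensity is divided out, a crude illumination cue).
    """
    rgb = pixels.astype(float)
    s = rgb.sum(axis=1, keepdims=True) + 1e-8   # avoid division by zero
    rg = (rgb / s)[:, :2]                       # keep r, g; b is redundant
    mean = rg.mean(axis=0)
    cov = np.cov(rg, rowvar=False) + 1e-6 * np.eye(2)  # regularize
    return mean, np.linalg.inv(cov)

def detect_hand_pixels(image, mean, inv_cov, thresh=9.0):
    """Per-pixel hand mask from the fitted color model.

    image: (H, W, 3) RGB array. A pixel is labeled 'hand' when its
    squared Mahalanobis distance in rg space falls below `thresh`
    (an assumed cutoff, roughly a 3-sigma gate).
    Returns a boolean (H, W) mask.
    """
    rgb = image.astype(float)
    s = rgb.sum(axis=2, keepdims=True) + 1e-8
    rg = (rgb / s)[..., :2] - mean
    d2 = np.einsum('...i,ij,...j->...', rg, inv_cov, rg)
    return d2 < thresh
```

A real system in this setting would replace the single global Gaussian with classifiers trained per illumination condition, which is one way to read the abstract's point about modeling global illumination.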