Concepedia

Publication | Closed Access

Prototype Arabic Sign Language recognition using multi-sensor data fusion of two Leap Motion controllers

Year: 2015 | Citations: 38 | References: 12

Abstract

Sign language is important for facilitating communication between the hearing impaired and the rest of society. However, most hearing people do not understand sign language; hence the need to develop systems capable of translating it. Two approaches have traditionally been used in the literature: image-based and glove-based systems. Glove-based systems require the user to wear electronic gloves while performing the signs; the glove includes a number of sensors that detect different hand and finger articulations. Image-based systems use one or more cameras to acquire a sequence of images of the hand. Each of the two approaches has its own disadvantages: the glove-based method is not natural, as the user must wear a cumbersome instrument, while the camera-based system requires specific background and environmental conditions to achieve high accuracy. In this paper, we propose a new approach to Arabic Sign Language Recognition (ArSLR) that uses two Leap Motion Controllers (LMCs) to prevent one finger from being occluded by another finger or by the hand. This device detects and tracks the hand and fingers to provide position and motion information. We propose to use the two LMCs as the backbone of the ArSLR system. In addition to data acquisition, the system includes a preprocessing stage, a feature extraction stage, and a classification stage. Fusion of evidence from the two LMCs at the feature extraction and classification stages was also investigated using the Dempster-Shafer theory of evidence. Feature-level fusion of the two LMCs gives 97.7% classification accuracy with a Linear Discriminant Analysis (LDA) classifier, and classifier-level fusion gives 97.1%; both give better recognition than the use of a single LMC.
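The classifier-level fusion described in the abstract can be sketched with Dempster's rule of combination, where each controller supplies a mass function over the candidate sign classes and any mass assigned to the full frame of discernment represents that sensor's uncertainty. The sign names, mass values, and function below are illustrative assumptions for a minimal sketch, not the paper's implementation:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Fuse two mass functions (dicts: frozenset hypothesis -> mass)
    using Dempster's rule of combination."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # compatible evidence: accumulate the product mass
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:      # contradictory evidence contributes to the conflict K
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Normalize by 1 - K so the fused masses sum to one
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Hypothetical mass assignments from each controller over three sign
# classes; theta (the whole frame) carries each sensor's uncertainty.
theta = frozenset({"alef", "baa", "taa"})
m_left = {frozenset({"alef"}): 0.6, frozenset({"baa"}): 0.1, theta: 0.3}
m_right = {frozenset({"alef"}): 0.5, frozenset({"taa"}): 0.2, theta: 0.3}

fused = dempster_combine(m_left, m_right)
# Decide on the singleton hypothesis with the highest fused mass
decision = max((h for h in fused if len(h) == 1), key=fused.get)
```

Because both sensors lean toward the same class, the fused mass on that class exceeds either sensor's individual belief, which mirrors how combining the two LMCs can outperform a single controller.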
