Publication | Open Access
Comparative Analysis of Convolution Neural Network Models for Continuous Indian Sign Language Classification
Citations: 31
References: 14
Year: 2020
Keywords: Sign Language, Speech Recognition, Kinesiology, Engineering, Health Sciences, Pattern Recognition, Biometrics, Wearable Technology, Language Recognition, Robust Speech Recognition, Speech Processing, Speech Input, Comparative Analysis, Deep Learning, Indian Sign Language, Gesture Recognition, Continuous Sign Language, American Sign Language
Classification of continuous sign language is essential for the development of a sign-language-to-spoken-language translator. In this paper, classification of continuously signed sentences from Indian Sign Language is considered using data from one inertial measurement unit placed on each hand of the signer. The recorded accelerometer and gyroscope data are used to track the position of each hand in three dimensions, and these position trajectories serve as input to the classifier. The time-LeNet and multi-channel deep convolutional neural network (MC-DCNN) architectures are employed to classify sentences from the raw position data of both hands. Moreover, a modified time-LeNet architecture is proposed to address the over-fitting observed in time-LeNet. The three models are compared in terms of model complexity, loss, and classification accuracy. MC-DCNN has a large number of trainable parameters and provides an overall accuracy of 83.94%, while time-LeNet yields an average accuracy of 79.70%. The modified time-LeNet yields a classification accuracy of 81.62% with just one-sixteenth of the trainable parameters of MC-DCNN.
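The distinguishing design of MC-DCNN is that each input channel of a multivariate time series (here, the x/y/z position of each hand) is convolved independently before the per-channel feature maps are merged for classification. The following is a minimal NumPy sketch of that per-channel convolution stage; the sequence length, channel count, and kernel sizes are illustrative assumptions, not the paper's actual configuration, and the fully connected classifier stage is omitted.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation) of a single channel."""
    n, k = len(signal), len(kernel)
    return np.array([signal[i:i + k] @ kernel for i in range(n - k + 1)])

def mc_dcnn_features(position, kernels):
    """MC-DCNN-style feature stage: convolve each input channel
    independently, then concatenate the per-channel feature maps
    before the (omitted) fully connected classifier."""
    feats = [conv1d(position[:, c], kernels[c]) for c in range(position.shape[1])]
    return np.concatenate(feats)

# Toy example: 100 time steps x 6 channels (3-D position of both hands),
# one 5-tap kernel per channel. All shapes and filter values are
# hypothetical, chosen only to demonstrate the data flow.
rng = np.random.default_rng(0)
pos = rng.standard_normal((100, 6))
kers = rng.standard_normal((6, 5))
f = mc_dcnn_features(pos, kers)
print(f.shape)  # (576,) = 6 channels x (100 - 5 + 1) outputs each
```

Treating channels independently keeps the convolutional stage small, which is consistent with the paper's comparison of trainable-parameter counts across the three models.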