Publication | Closed Access
Spatio-Temporal Laban Features for Dance Style Recognition
Citations: 21
References: 28
Year: 2018
Venue: Unknown
Keywords: Music, Engineering, Machine Learning, Human Pose Estimation, Dance Style Recognition, Action Recognition (Movement Science), Action Recognition (Computer Vision), Laban Theory, Video Interpretation, Image Analysis, Kinesiology, Pattern Recognition, Spatio-temporal Laban Features, Human Motion, Music Processing, Health Sciences, Dance, Machine Vision, Dance On Camera, Laban Movement Analysis, Video Understanding, Computer Vision, Video Analysis, Music Classification, Human Movement, Activity Recognition, Motion Analysis
This work targets dance style recognition in videos as an application of human action recognition. We propose a novel Spatio-Temporal Laban Feature (STLF) descriptor for dance style recognition, grounded in Laban theory. Laban Movement Analysis has become increasingly popular as a language for describing, indexing, and recording human motion. We exploit only motion features and body-pose information, without encoding appearance. The model is tested on action recognition benchmarks and on ICD, a challenging dataset of YouTube dance videos. Unlike prior work, where Laban-based features have been used in constrained environments with static cameras, sensors, and no background noise, we apply STLF to videos in unconstrained, natural settings. The descriptor is robust to camera jitter, zoom variations, and other acquisition conditions; it is computationally cheap and performs comparably to or better than the state of the art.
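The abstract does not specify the STLF computation, but the general idea of Laban-inspired motion features from pose sequences can be sketched as follows. This is a hypothetical illustration, not the paper's descriptor: it approximates three Laban Effort qualities (Weight via speed, Time via acceleration, Space via path directness) from 2D joint trajectories, using only motion and body-pose information, with no appearance cues.

```python
import numpy as np

def laban_inspired_features(poses, fps=30.0):
    """Illustrative Laban Effort-style descriptors from a pose sequence.

    poses: array of shape (T, J, 2) -- T frames, J joints, (x, y) coords.
    Returns a 3-vector; a hypothetical sketch, not the paper's STLF.
    """
    vel = np.diff(poses, axis=0) * fps   # (T-1, J, 2) joint velocities
    acc = np.diff(vel, axis=0) * fps     # (T-2, J, 2) joint accelerations

    speed = np.linalg.norm(vel, axis=-1)  # (T-1, J) per-joint speeds
    # Effort "Weight": kinetic-energy-like term (mean squared speed)
    weight = float(np.mean(speed ** 2))
    # Effort "Time": suddenness, approximated by mean acceleration magnitude
    time_eff = float(np.mean(np.linalg.norm(acc, axis=-1)))
    # Effort "Space": directness -- net displacement divided by path length
    net = np.linalg.norm(poses[-1] - poses[0], axis=-1)  # (J,)
    path = speed.sum(axis=0) / fps + 1e-8                # (J,)
    space = float(np.mean(net / path))

    return np.array([weight, time_eff, space])

# Toy usage: 60 frames of a single joint tracing a closed circle.
t = np.linspace(0, 2 * np.pi, 60)
poses = np.stack([np.cos(t), np.sin(t)], axis=-1)[:, None, :]  # (60, 1, 2)
feats = laban_inspired_features(poses)
```

Because such features depend only on relative joint motion over time, a descriptor built this way is largely insensitive to appearance, and per-frame normalization of the pose (e.g., by torso length) would add robustness to zoom changes.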