Publication | Closed Access
Multi-Modal Sensors Fusion for Fall Detection and Action Recognition in Indoor Environment
Citations: 31
References: 55
Year: 2024
Venue: unknown
Recent research in computer vision and inertial sensing focuses on autonomously detecting human actions in order to monitor and identify falls as well as daily activities. In this paper, the proposed model uses multi-modal sensors for fall detection. Data from the inertial sensors was smoothed with a bilateral filter, chosen for its monotonic, maximally flat magnitude response in the passband. Key features, such as Gaussian Mixture Model (GMM) features and Parseval's energy, were then extracted. Fall detection was also performed on RGB (red-green-blue) videos, from which features such as rectangles, triangles, full-body ridges, and full-body curves were generated. The results from the inertial and vision streams were combined through multimodal fusion. The fused data was then optimized with the Naive Bayes approach, and a multilayer perceptron (MLP) classifier was trained for classification. The proposed system was evaluated on the UR Fall Detection (URFD) dataset, demonstrating its usefulness by reaching an accuracy of 88%.
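The abstract names Parseval's energy as one of the features extracted from the inertial signal. The paper's exact implementation is not given here, but a minimal sketch of that feature follows from Parseval's theorem, which states that a signal's energy computed in the time domain equals its energy computed from the FFT spectrum (all names below are illustrative, not taken from the paper):

```python
import numpy as np

def parseval_energy(signal: np.ndarray) -> float:
    """Signal energy computed in the frequency domain.

    By Parseval's theorem, sum(|x[n]|^2) == sum(|X[k]|^2) / N,
    where X = FFT(x); this can serve as an energy feature for a
    windowed inertial-sensor signal.
    """
    spectrum = np.fft.fft(signal)
    return float(np.sum(np.abs(spectrum) ** 2) / len(signal))

# Sanity check: frequency-domain energy matches time-domain energy.
x = np.array([1.0, 2.0, -1.0, 0.5])
time_energy = float(np.sum(x ** 2))        # 1 + 4 + 1 + 0.25 = 6.25
freq_energy = parseval_energy(x)
assert np.isclose(time_energy, freq_energy)
```

In practice such an energy value would be computed per sliding window of accelerometer/gyroscope samples and concatenated with the other inertial features before fusion.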