Publication | Closed Access
Deep learning for human activity recognition: A resource efficient implementation on low-power devices
Citations: 253 | References: 18 | Year: 2016 | Venue: Unknown
Topics: Wearable System, Convolutional Neural Network, Engineering, Machine Learning, Human Pose Estimation, Action Recognition (Movement Science), Action Recognition (Computer Vision), Wearable Technology, Inertial Sensor Data, Human Monitoring, Data Science, Pattern Recognition, Sparse Neural Network, Embedded Machine Learning, Human Activity Recognition, Video Transformer, Health Sciences, Deep Learning Methodology, Machine Vision, Computer Engineering, Computer Science, Mobile Computing, Resource Efficient Implementation, Deep Learning, Computer Vision, Mobile Sensing, Human Movement, Activity Recognition
Human Activity Recognition provides valuable contextual information for wellbeing, healthcare, and sport applications. Over the past decades, many machine learning approaches have been proposed to identify activities from inertial sensor data for specific applications. Most methods, however, are designed for offline processing rather than processing on the sensor node. In this paper, a human activity recognition technique based on a deep learning methodology is designed to enable accurate and real-time classification for low-power wearable devices. To obtain invariance against changes in sensor orientation, sensor placement, and sensor acquisition rates, we design a feature generation process that is applied to the spectral domain of the inertial data. Specifically, the proposed method uses sums of temporal convolutions of the transformed input. Accuracy of the proposed approach is evaluated against current state-of-the-art methods using both laboratory and real-world activity datasets. A systematic analysis of the feature generation parameters and a comparison of activity recognition computation times on mobile devices and sensor nodes are also presented.
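The feature generation the abstract describes — a spectral transform of the inertial signal followed by sums of temporal convolutions — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name, window size, kernel bank, and the use of a magnitude FFT per axis are all assumptions.

```python
import numpy as np

def spectral_conv_features(window, kernels):
    """Sketch of spectral-domain feature generation (assumed form).

    window  : (n_samples, n_axes) inertial data segment
    kernels : list of 1-D convolution kernels
    returns : 1-D feature vector
    """
    # Magnitude spectrum per axis; discards phase, which makes the
    # features insensitive to time shifts within the window.
    spec = np.abs(np.fft.rfft(window, axis=0))

    feats = []
    for k in kernels:
        # Temporal (here: along-frequency) convolution of each axis
        # with the kernel, then summed over the convolved output.
        conv = np.apply_along_axis(
            lambda s: np.convolve(s, k, mode="valid"), 0, spec
        )
        feats.append(conv.sum(axis=0))  # one value per axis per kernel
    return np.concatenate(feats)

rng = np.random.default_rng(0)
window = rng.standard_normal((128, 3))            # 3-axis accelerometer window
kernels = [rng.standard_normal(5) for _ in range(4)]
features = spectral_conv_features(window, kernels)
print(features.shape)  # (12,) -> 4 kernels x 3 axes
```

Summing the convolution outputs collapses the frequency axis, which is one plausible route to the sampling-rate and orientation robustness the abstract claims; the paper's exact pooling and kernel choices would differ.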