Concepedia

TLDR

Human Activity Recognition provides contextual information for wellbeing, healthcare, and sport, yet most machine‑learning approaches are offline and unsuitable for sensor nodes. The paper proposes a deep‑learning HAR technique that delivers accurate, real‑time classification on low‑power wearable devices. The method generates orientation‑, placement‑, and sampling‑rate‑invariant features by applying sums of temporal convolutions to the spectral representation of inertial data. Evaluation shows the approach outperforms state‑of‑the‑art methods on laboratory and real‑world datasets, and analysis demonstrates efficient computation times on mobile devices and sensor nodes.

Abstract

Human Activity Recognition provides valuable contextual information for wellbeing, healthcare, and sport applications. Over the past decades, many machine learning approaches have been proposed to identify activities from inertial sensor data for specific applications. Most methods, however, are designed for offline processing rather than processing on the sensor node. In this paper, a human activity recognition technique based on a deep learning methodology is designed to enable accurate and real-time classification for low-power wearable devices. To obtain invariance against changes in sensor orientation, sensor placement, and sensor acquisition rate, we design a feature generation process that is applied to the spectral domain of the inertial data. Specifically, the proposed method uses sums of temporal convolutions of the transformed input. Accuracy of the proposed approach is evaluated against current state-of-the-art methods using both laboratory and real-world activity datasets. A systematic analysis of the feature generation parameters and a comparison of activity recognition computation times on mobile devices and sensor nodes are also presented.
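The feature generation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the acceleration magnitude stands in for the orientation-invariant input, the FFT magnitude gives the spectral representation, and the convolution kernels are random placeholders where the paper would use learned ones. Function and parameter names (`spectral_features`, `win`, `n_kernels`) are hypothetical.

```python
import numpy as np

def spectral_features(accel, win=128, n_kernels=4, kernel_len=8, seed=0):
    """Sketch: orientation-invariant magnitude -> spectral domain ->
    sums of temporal convolutions. Kernels are random placeholders here;
    the paper's method would learn them."""
    # Orientation invariance: Euclidean norm of the 3-axis signal.
    mag = np.linalg.norm(accel, axis=1)
    # Spectral representation of one windowed analysis frame.
    spec = np.abs(np.fft.rfft(mag[:win] * np.hanning(win)))
    # One feature per kernel: sum of the temporal convolution output.
    rng = np.random.default_rng(seed)  # placeholder for learned weights
    kernels = rng.standard_normal((n_kernels, kernel_len))
    return np.array([np.convolve(spec, k, mode="valid").sum()
                     for k in kernels])

# Example: one 128-sample window of 3-axis accelerometer data.
x = np.random.default_rng(1).standard_normal((128, 3))
print(spectral_features(x).shape)  # (4,)
```

Because the features are computed from the magnitude spectrum rather than the raw axes, rotating the sensor or changing its placement perturbs them far less than it would time-domain features.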
