Publication | Open Access
NTU RGB+D 120: A Large-Scale Benchmark for 3D Human Activity Understanding
1.6K Citations · 97 References · 2019
Depth‑based human activity analysis has shown strong performance, yet existing RGB+D benchmarks lack large‑scale training data, diverse camera views, varied environments, and a realistic number of action categories. This work introduces a large‑scale RGB+D action dataset and investigates a novel one‑shot 3D activity recognition task. The dataset comprises more than 114,000 video samples and 8 million frames from 106 subjects across 120 action classes, covering daily, mutual, and health‑related activities. Evaluations of existing 3D methods on this benchmark show the advantage of deep learning approaches, and the proposed Action‑Part Semantic Relevance‑aware (APSR) framework achieves promising results on novel action classes. The dataset is available at http://rose1.ntu.edu.sg/Datasets/actionRecognition.asp.
Research on depth-based human activity analysis has achieved outstanding performance and demonstrated the effectiveness of 3D representations for action recognition. However, existing depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of large-scale training samples, a realistic number of distinct class categories, diversity in camera views, varied environmental conditions, and variety of human subjects. In this work, we introduce a large-scale dataset for RGB+D human action recognition, which is collected from 106 distinct subjects and contains more than 114 thousand video samples and 8 million frames. The dataset contains 120 different action classes, including daily, mutual, and health-related activities. We evaluate the performance of a series of existing 3D activity analysis methods on this dataset and show the advantage of applying deep learning methods for 3D-based human action recognition. Furthermore, we investigate a novel one-shot 3D activity recognition problem on our dataset, and propose a simple yet effective Action-Part Semantic Relevance-aware (APSR) framework for this task, which yields promising results for recognition of the novel action classes. We believe the introduction of this large-scale dataset will enable the community to apply, adapt, and develop various data-hungry learning techniques for depth-based and RGB+D-based human activity understanding. [The dataset is available at: http://rose1.ntu.edu.sg/Datasets/actionRecognition.asp]
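The one-shot evaluation protocol mentioned above (recognizing a novel action class from a single labeled exemplar) can be illustrated with a minimal nearest-exemplar sketch. Note this is a generic illustration under assumed inputs, not the paper's APSR framework: the embeddings, dimensions, and cosine-similarity matching here are all assumptions standing in for features produced by a model pretrained on the auxiliary (seen) classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one exemplar embedding per novel action class,
# as would be produced by a feature extractor pretrained on seen classes.
num_classes, dim = 20, 128
exemplars = rng.normal(size=(num_classes, dim))    # one sample per novel class
query = exemplars[7] + 0.1 * rng.normal(size=dim)  # a perturbed class-7 sample


def one_shot_predict(query, exemplars):
    """Predict the class whose single exemplar is most cosine-similar."""
    q = query / np.linalg.norm(query)
    e = exemplars / np.linalg.norm(exemplars, axis=1, keepdims=True)
    return int(np.argmax(e @ q))


print(one_shot_predict(query, exemplars))  # → 7
```

Accuracy under this protocol is then simply the fraction of novel-class test samples whose nearest exemplar belongs to the correct class.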