Concepedia

TLDR

The authors propose a flexible video-level framework, Temporal Segment Networks (TSN), for learning action models that capture long-range temporal structure. TSN divides a video into segments, sparsely samples a snippet from each, and aggregates the snippet predictions so the model is trained on the whole video; the learned models handle both trimmed and untrimmed videos via simple average pooling and multi-scale temporal window integration, and the authors also study good practices for training with limited data. TSN attains state-of-the-art accuracy on HMDB51, UCF101, THUMOS14, ActivityNet v1.2, and Kinetics400, reaches 91.0% on UCF101 at 340 FPS using RGB difference as a lightweight motion representation, and won the video classification track of the ActivityNet 2016 challenge.

Abstract

We present a general and flexible video-level framework for learning action models in videos. This method, called temporal segment network (TSN), aims to model long-range temporal structure with a new segment-based sampling and aggregation scheme. This unique design enables the TSN framework to efficiently learn action models by using the whole video. The learned models could be easily deployed for action recognition in both trimmed and untrimmed videos with simple average pooling and multi-scale temporal window integration, respectively. We also study a series of good practices for the implementation of the TSN framework given limited training samples. Our approach obtains the state-of-the-art performance on five challenging action recognition benchmarks: HMDB51 (71.0 percent), UCF101 (94.9 percent), THUMOS14 (80.1 percent), ActivityNet v1.2 (89.6 percent), and Kinetics400 (75.7 percent). In addition, using the proposed RGB difference as a simple motion representation, our method can still achieve competitive accuracy on UCF101 (91.0 percent) while running at 340 FPS. Furthermore, based on the proposed TSN framework, we won the video classification track at the ActivityNet challenge 2016 among 24 teams.
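The segment-based sampling and average-pooling aggregation described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' code: the function names, the equal-length segment layout, and the random-offset training / center-offset testing convention are assumptions made for the sketch.

```python
import numpy as np

def sample_segment_indices(num_frames, num_segments=3, train=True):
    """Split a video of num_frames frames into num_segments equal-length
    segments and pick one snippet index per segment: a random offset
    within each segment during training, the segment center at test time."""
    seg_len = num_frames // num_segments
    if train:
        offsets = np.random.randint(seg_len, size=num_segments)
    else:
        offsets = np.full(num_segments, seg_len // 2)
    return np.arange(num_segments) * seg_len + offsets

def segmental_consensus(snippet_scores):
    """Aggregate per-snippet class scores into one video-level score
    by simple average pooling (one of the consensus functions TSN uses)."""
    return np.mean(snippet_scores, axis=0)

# Example: a 30-frame video, 3 segments, deterministic test-time sampling.
indices = sample_segment_indices(30, num_segments=3, train=False)
# Each sampled snippet would be scored by a shared network; here we fake
# per-snippet class scores and average them into a video-level prediction.
fake_scores = np.array([[0.9, 0.1], [0.6, 0.4], [0.3, 0.7]])
video_score = segmental_consensus(fake_scores)
```

Because the consensus is a plain mean, gradients flow back to every sampled snippet, which is what lets the network be trained on the whole video while only ever processing a few frames per clip.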
