Publication | Open Access
The Kinetics Human Action Video Dataset
2017 · 2.9K Citations · 4 References
Keywords: Video Clips, Machine Learning, Engineering, Human Action Classes, Video Retrieval, Video Interpretation, Human-object Interaction, Human Action Classification, Kinesiology, Image Analysis, Data Science, Pattern Recognition, Video Transformer, Health Sciences, Computer Science, Video Understanding, Deep Learning, Computer Vision, Video Analysis, Human Movement, Activity Recognition
We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10 seconds and is taken from a different YouTube video. The actions are human-focused and cover a broad range of classes, including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset and how it was collected, and give baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers.
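One way to probe the kind of imbalance-versus-bias question the abstract mentions is to correlate per-class clip counts with per-class classifier accuracy. The sketch below is illustrative only, not the paper's actual analysis: the class names, clip counts, and accuracies are made-up placeholders.

```python
# Hedged sketch: check whether classes with more training clips tend to
# score higher accuracy, via a Pearson correlation. All numbers below are
# hypothetical placeholders, not real Kinetics statistics.
import statistics


def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical per-class clip counts and top-1 accuracies (assumptions).
clips_per_class = {"playing guitar": 980, "riding bike": 650, "shaking hands": 420}
accuracy_per_class = {"playing guitar": 0.81, "riding bike": 0.70, "shaking hands": 0.55}

classes = sorted(clips_per_class)
r = pearson(
    [clips_per_class[c] for c in classes],
    [accuracy_per_class[c] for c in classes],
)
print(f"count/accuracy correlation: r = {r:.2f}")
```

A strongly positive `r` on real per-class figures would suggest that under-represented classes are systematically harder for the trained classifier, which is the kind of bias the paper's preliminary analysis looks for.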