
Publication | Closed Access

VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking

Citations: 365 | References: 68 | Year: 2023

TLDR

Building powerful foundation models depends on scale, yet training video models with billions of parameters remains difficult. The study demonstrates that VideoMAE is a scalable, general self-supervised pre-trainer for video foundation models. The authors scale VideoMAE in both model and data using a dual masking strategy, in which the encoder operates on one subset of video tokens and the decoder reconstructs another subset, combined with a progressive training pipeline that moves from diverse unlabeled data to a mixed labeled dataset. Dual masking cuts computational cost, enabling efficient billion-parameter pre-training that achieves state-of-the-art results on Kinetics and Something-Something and transfers well to diverse downstream tasks.

Abstract

Scale is the primary factor for building a powerful foundation model that can generalize well to a variety of downstream tasks. However, it is still challenging to train video foundation models with billions of parameters. This paper shows that video masked autoencoder (VideoMAE) is a scalable and general self-supervised pre-trainer for building video foundation models. We scale VideoMAE in both model and data with a core design. Specifically, we present a dual masking strategy for efficient pre-training, with an encoder operating on a subset of video tokens and a decoder processing another subset of video tokens. Although VideoMAE is already very efficient due to the high masking ratio in the encoder, masking the decoder can further reduce the overall computational cost. This enables the efficient pre-training of billion-level models on video. We also use a progressive training paradigm that involves an initial pre-training on a diverse multi-sourced unlabeled dataset, followed by a post-pre-training on a mixed labeled dataset. Finally, we successfully train a video ViT model with a billion parameters, which achieves a new state-of-the-art performance on the datasets of Kinetics (90.0% on K400 and 89.9% on K600) and Something-Something (68.7% on V1 and 77.0% on V2). In addition, we extensively verify the pre-trained video ViT models on a variety of downstream tasks, demonstrating their effectiveness as a general video representation learner.
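The dual masking idea described in the abstract can be sketched in a few lines of PyTorch. This is a minimal, illustrative sketch rather than the authors' released implementation: the module sizes, the 10%/50% masking ratios, the purely random choice of the decoder's reconstruction subset (the paper uses a structured decoder masking scheme), and the use of token embeddings as regression targets (the paper regresses pixel values of the masked cubes) are all assumptions made here for brevity.

```python
import torch
import torch.nn as nn


class DualMaskingSketch(nn.Module):
    """Toy illustration of dual masking: the encoder sees only a small
    visible subset of video tokens, and the decoder reconstructs only a
    further subset of the masked tokens rather than all of them, which
    trims the decoder's cost as well."""

    def __init__(self, dim=384, num_tokens=1568,
                 visible_ratio=0.10, decoder_ratio=0.50):
        super().__init__()
        layer = lambda: nn.TransformerEncoderLayer(dim, nhead=6, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer(), num_layers=4)
        self.decoder = nn.TransformerEncoder(layer(), num_layers=2)
        self.pos = nn.Parameter(torch.zeros(1, num_tokens, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, dim)  # would predict pixel cubes in practice
        self.n_vis = int(num_tokens * visible_ratio)
        self.decoder_ratio = decoder_ratio

    def forward(self, tokens):
        # tokens: (B, N, dim) embeddings of space-time cubes from a video clip
        B, N, D = tokens.shape
        pos = self.pos.expand(B, -1, -1)
        perm = torch.rand(B, N, device=tokens.device).argsort(dim=1)
        vis_idx = perm[:, :self.n_vis]             # visible to the encoder
        masked_idx = perm[:, self.n_vis:]          # hidden from the encoder
        n_dec = int(masked_idx.shape[1] * self.decoder_ratio)
        dec_idx = masked_idx[:, :n_dec]            # only these get reconstructed

        gather = lambda x, idx: torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, D))
        latent = self.encoder(gather(tokens + pos, vis_idx))

        # mask queries carry only positional information for the chosen subset
        queries = self.mask_token.expand(B, n_dec, -1) + gather(pos, dec_idx)
        dec_out = self.decoder(torch.cat([latent, queries], dim=1))[:, -n_dec:]

        # simplification: regress the token embedding itself; the paper instead
        # reconstructs pixel values of the corresponding masked cubes
        target = gather(tokens, dec_idx)
        return nn.functional.mse_loss(self.head(dec_out), target)
```

The saving comes from the decoder attending over only the visible latents plus the selected mask queries instead of the full token sequence, which is what makes billion-parameter pre-training tractable.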
