TLDR

The paper proposes a method to generate a video sequence from a single image. The method uses a sequence of motion strokes as a control signal and trains a recurrent network with an autoencoding constraint and a GAN to produce realistic, temporally smooth animations from a single image. Experiments demonstrate that the architecture can generate arbitrary-length video sequences from a single image and a sequence of motion strokes, achieving realistic results on the MNIST, KTH, Human3.6M, Push, and Weizmann datasets.
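
To make the control signal concrete: a motion stroke can be read as a per-frame displacement of the animated object, and such strokes can be harvested from an existing video by tracking a bounding box (as noted in the abstract below). The following is a minimal sketch of that idea; the function name and the displacement-vector representation are assumptions for illustration, not the paper's exact encoding.

```python
import numpy as np

def strokes_from_track(boxes):
    """Convert a bounding-box track into a sequence of motion strokes.

    boxes: (T, 4) array of [x_min, y_min, x_max, y_max], one box per frame.
    Returns a (T-1, 2) array of displacement vectors between consecutive
    box centers -- one stroke direction per generated frame.
    """
    boxes = np.asarray(boxes, dtype=np.float32)
    centers = np.stack(
        [(boxes[:, 0] + boxes[:, 2]) / 2.0,   # x center
         (boxes[:, 1] + boxes[:, 3]) / 2.0],  # y center
        axis=1,
    )
    return centers[1:] - centers[:-1]  # per-step displacement

# Example: a box sliding right, then down.
track = [[0, 0, 10, 10], [5, 0, 15, 10], [5, 5, 15, 15]]
print(strokes_from_track(track))  # [[5. 0.] [0. 5.]]
```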

Abstract

We present a method to generate a video sequence given a single image. Because the items in an image can be animated in arbitrarily many different ways, we introduce a sequence of motion strokes as a control signal. Such a control signal can be transferred automatically from other videos, e.g., via bounding-box tracking. Each motion stroke specifies the direction of motion for the moving object in the input image, and we aim to train a network to generate an animation that follows a sequence of such directions. To address this task we design a novel recurrent architecture, which can be trained easily and effectively thanks to an explicit separation of past, future, and current states. As we demonstrate in the experiments, our proposed architecture is capable of generating an arbitrary number of frames from a single image and a sequence of motion strokes. Key components of our architecture are an autoencoding constraint, which ensures consistency with the past, and a generative adversarial scheme, which ensures that the generated images look realistic and are temporally smooth. We demonstrate the effectiveness of our approach on the MNIST, KTH, Human3.6M, Push, and Weizmann datasets.
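
For readers who think in code, here is a minimal PyTorch-style sketch of the training setup the abstract describes: a recurrent generator that consumes one motion stroke per step, a reconstruction (autoencoding) term for consistency with observed frames, and an adversarial term for realism. All class and function names here are hypothetical, the explicit past/future/current state separation of the actual architecture is omitted, and the loss weighting is a guess; only the generator side is shown.

```python
import torch
import torch.nn as nn

class StrokeConditionedGenerator(nn.Module):
    """Hypothetical recurrent generator: encode the input image into a
    state, advance the state with one motion stroke per step, and decode
    a frame at each step."""

    def __init__(self, frame_dim=64 * 64, stroke_dim=2, hidden_dim=512):
        super().__init__()
        self.encode = nn.Linear(frame_dim, hidden_dim)  # image -> initial state
        self.cell = nn.GRUCell(stroke_dim, hidden_dim)  # stroke advances the state
        self.decode = nn.Linear(hidden_dim, frame_dim)  # state -> frame

    def forward(self, image, strokes):
        # image: (B, frame_dim); strokes: (B, T, stroke_dim)
        h = torch.tanh(self.encode(image))
        frames = []
        for t in range(strokes.size(1)):  # T is unconstrained: arbitrary length
            h = self.cell(strokes[:, t], h)
            frames.append(torch.sigmoid(self.decode(h)))
        return torch.stack(frames, dim=1)  # (B, T, frame_dim)

# Per-frame discriminator (a stand-in; any clip discriminator would do).
disc = nn.Sequential(nn.Linear(64 * 64, 1), nn.Sigmoid())

def generator_loss(gen, image, strokes, real_frames, adv_weight=0.01):
    """Autoencoding constraint (match the observed frames) plus a
    non-saturating GAN term (push the discriminator to score frames as real)."""
    fake = gen(image, strokes)
    rec = (fake - real_frames).abs().mean()       # consistency with the past
    adv = -torch.log(disc(fake) + 1e-8).mean()    # realism / temporal smoothness
    return rec + adv_weight * adv
```

Because the loop consumes one stroke per step, the generator runs for however many strokes are supplied, which is what allows an arbitrary number of frames to be emitted from a single input image. The corresponding discriminator update (real frames scored high, generated frames low) is omitted for brevity.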
