Concepedia

Publication | Closed Access

Conditional Image-to-Video Generation with Latent Flow Diffusion Models

Citations: 101 | References: 55 | Year: 2023

TLDR

The key challenge of conditional image-to-video generation is simultaneously producing realistic spatial appearance and temporal dynamics that match a given image and condition. The authors propose latent flow diffusion models (LFDM), which synthesize an optical flow sequence in latent space conditioned on an image and a label, then warp the image to generate realistic videos. LFDM is trained in two stages: an unsupervised latent flow auto-encoder learns spatial content and flow prediction, followed by a conditional 3D-UNet diffusion model that generates temporally coherent latent flows, enabling efficient motion synthesis in a low-dimensional flow space. Experiments on multiple datasets show that LFDM consistently outperforms prior methods, produces finer spatial details and smoother motion, and can be adapted to new domains with simple decoder fine-tuning. Code is available at https://github.com/nihaomiao/CVPR23_LFDM.

Abstract

Conditional image-to-video (cI2V) generation aims to synthesize a new plausible video starting from an image (e.g., a person's face) and a condition (e.g., an action class label like smile). The key challenge of the cI2V task lies in the simultaneous generation of realistic spatial appearance and temporal dynamics corresponding to the given image and condition. In this paper, we propose an approach for cI2V using novel latent flow diffusion models (LFDM) that synthesize an optical flow sequence in the latent space based on the given condition to warp the given image. Compared to previous direct-synthesis-based works, our proposed LFDM can better synthesize spatial details and temporal motion by fully utilizing the spatial content of the given image and warping it in the latent space according to the generated temporally-coherent flow. The training of LFDM consists of two separate stages: (1) an unsupervised learning stage to train a latent flow auto-encoder for spatial content generation, including a flow predictor to estimate latent flow between pairs of video frames, and (2) a conditional learning stage to train a 3D-UNet-based diffusion model (DM) for temporal latent flow generation. Unlike previous DMs operating in pixel space or a latent feature space that couples spatial and temporal information, the DM in our LFDM only needs to learn a low-dimensional latent flow space for motion generation, and is thus more computationally efficient. We conduct comprehensive experiments on multiple datasets, where LFDM consistently outperforms prior art. Furthermore, we show that LFDM can be easily adapted to new domains by simply fine-tuning the image decoder. Our code is available at https://github.com/nihaomiao/CVPR23_LFDM.
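The core idea of the pipeline described above — encoding the image once and warping its latent with a generated flow sequence — can be sketched in a few lines. The following is an illustrative NumPy toy, not the authors' implementation: it uses nearest-neighbor backward warping for brevity (real pipelines such as LFDM use differentiable bilinear sampling, e.g. `grid_sample` in PyTorch), and the names `warp_latent` and `synthesize_video` are hypothetical.

```python
import numpy as np

def warp_latent(z, flow):
    """Backward-warp a latent map z of shape (C, H, W) with a dense flow
    field of shape (2, H, W) holding per-pixel (dx, dy) source offsets.
    Nearest-neighbor sampling keeps the sketch short; a trainable model
    would use differentiable bilinear sampling instead."""
    C, H, W = z.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Each output pixel (y, x) samples the source latent at (x+dx, y+dy),
    # clamped to the latent's borders.
    src_x = np.clip(np.round(xs + flow[0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[1]).astype(int), 0, H - 1)
    return z[:, src_y, src_x]

def synthesize_video(z0, flows):
    """Warp the single encoded-image latent z0 with each flow in the
    generated sequence, yielding one latent per frame (which an image
    decoder would then map back to pixels)."""
    return np.stack([warp_latent(z0, f) for f in flows])
```

Note that the diffusion model only has to generate the low-dimensional `flows` sequence; the appearance stays in `z0`, which is why the authors report better spatial detail and lower cost than direct pixel- or feature-space synthesis.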

