Publication | Closed Access

Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation

Citations: 107
References: 16
Year: 2023

TLDR

Large text‑to‑image diffusion models generate high‑quality images, yet maintaining temporal consistency when applied to video remains a major challenge. The study introduces a zero‑shot, text‑guided video‑to‑video translation framework that adapts image diffusion models for video. The framework first generates key frames with an adapted diffusion model and hierarchical cross‑frame constraints, then propagates these key frames to the full video using temporal‑aware patch matching and frame blending, and is compatible with existing image diffusion techniques such as LoRA and ControlNet. The method delivers globally consistent style and locally consistent texture at low cost, outperforming prior approaches in producing high‑quality, temporally coherent videos, with code available at the project page.

Abstract

Large text-to-image diffusion models have exhibited impressive proficiency in generating high-quality images. However, when applying these models to the video domain, ensuring temporal consistency across video frames remains a formidable challenge. This paper proposes a novel zero-shot text-guided video-to-video translation framework to adapt image models to videos. The framework includes two parts: key frame translation and full video translation. The first part uses an adapted diffusion model to generate key frames, with hierarchical cross-frame constraints applied to enforce coherence in shapes, textures, and colors. The second part propagates the key frames to other frames with temporal-aware patch matching and frame blending. Our framework achieves global style and local texture temporal consistency at a low cost (without re-training or optimization). The adaptation is compatible with existing image diffusion techniques, allowing our framework to take advantage of them, such as customizing a specific subject with LoRA and introducing extra spatial guidance with ControlNet. Extensive experimental results demonstrate the effectiveness of our proposed framework over existing methods in rendering high-quality and temporally coherent videos. Code is available at our project page: https://www.mmlab-ntu.com/project/rerender/
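
The abstract describes a two-stage pipeline: key frames are translated with an adapted diffusion model under hierarchical cross-frame constraints, then propagated to the remaining frames. The minimal Python sketch below illustrates only that control flow, under stated assumptions: `translate_key_frame` is a stub standing in for the constrained diffusion step, and a simple linear blend between the two nearest translated key frames stands in for the paper's temporal-aware patch matching and frame blending. All names, the `key_stride` parameter, and the blend itself are illustrative, not the authors' implementation.

```python
import numpy as np
from typing import Optional

def translate_key_frame(frame: np.ndarray,
                        prev: Optional[np.ndarray]) -> np.ndarray:
    """Stub for the adapted diffusion model. In the paper, `prev` would act
    as a hierarchical cross-frame constraint on shape, texture, and color;
    here the input is returned unchanged so the sketch stays runnable."""
    return frame.copy()

def rerender_video(frames, key_stride=10):
    # Stage 1: key frame translation, each key frame conditioned on the
    # previously translated one for coherence.
    key_idx = list(range(0, len(frames), key_stride))
    translated, prev = {}, None
    for i in key_idx:
        translated[i] = translate_key_frame(frames[i], prev)
        prev = translated[i]

    # Stage 2: propagate to non-key frames. A linear blend between the two
    # nearest translated key frames stands in for the paper's temporal-aware
    # patch matching and frame blending.
    out = []
    for t in range(len(frames)):
        left = max(i for i in key_idx if i <= t)
        right = min((i for i in key_idx if i >= t), default=left)
        if left == right:
            out.append(translated[left])
        else:
            w = (t - left) / (right - left)
            out.append((1.0 - w) * translated[left] + w * translated[right])
    return out

# Usage: 30 random 64x64 RGB frames, key frames every 10 frames.
video = [np.random.rand(64, 64, 3) for _ in range(30)]
result = rerender_video(video, key_stride=10)
assert len(result) == len(video)
```

The two-stage split is what keeps the cost low: the expensive diffusion step runs only on key frames, while the cheap propagation step fills in the rest.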
