
Publication | Closed Access

StoryGAN: A Sequential Conditional GAN for Story Visualization

Citations: 206 · References: 31 · Year: 2019

TLDR

Story visualization prioritizes global consistency across dynamic scenes and characters over frame-to-frame continuity, a challenge unmet by existing single-image and video generation methods. The authors introduce Story Visualization as a new task and propose StoryGAN, a sequential conditional GAN that generates one image per sentence of a multi-sentence paragraph. StoryGAN combines a deep Context Encoder that tracks the story flow with two discriminators, one at the image level and one at the story level, to improve image quality and sequence consistency. Evaluated on the CLEVR-SV and Pororo-SV datasets, which were created by modifying existing datasets, StoryGAN surpasses state-of-the-art models on image quality, contextual consistency, and human evaluation.

Abstract

In this work, we propose a new task called Story Visualization. Given a multi-sentence paragraph, the story is visualized by generating a sequence of images, one for each sentence. In contrast to video generation, story visualization focuses less on the continuity of the generated images (frames) and more on the global consistency across dynamic scenes and characters, a challenge that no single-image or video generation method has addressed. We therefore propose a new story-to-image-sequence generation model, StoryGAN, based on the sequential conditional GAN framework. Our model is unique in that it consists of a deep Context Encoder that dynamically tracks the story flow, and two discriminators, at the story and image levels, that enhance the image quality and the consistency of the generated sequences. To evaluate the model, we modified existing datasets to create the CLEVR-SV and Pororo-SV datasets. Empirically, StoryGAN outperformed state-of-the-art models in image quality, contextual consistency metrics, and human evaluation.
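
The architecture described above has three parts: a Context Encoder that maintains a running story state, a generator that renders one frame per sentence, and two discriminators, one judging individual frames and one judging the whole sequence. Below is a minimal PyTorch sketch of that structure; the GRU-based encoder, the layer sizes, and the 64x64 output resolution are illustrative assumptions for this sketch, not the paper's actual design.

```python
# Minimal sketch of a StoryGAN-style sequential conditional GAN.
# All module shapes and the GRU encoder are assumptions, not the paper's exact layers.
import torch
import torch.nn as nn

EMB, CTX, IMG = 128, 256, 64  # assumed sentence-embedding, context, and image sizes


class ContextEncoder(nn.Module):
    """Tracks the story flow: one recurrent state per sentence."""
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(EMB, CTX, batch_first=True)

    def forward(self, sentences):           # (B, T, EMB)
        states, _ = self.gru(sentences)     # (B, T, CTX): one state per frame
        return states


class Generator(nn.Module):
    """Maps each per-sentence context state to one image (frame)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CTX, 8 * 8 * 64), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),   # 16x16
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),   # 32x32
            nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Tanh(),    # 64x64
        )

    def forward(self, states):               # (B, T, CTX)
        B, T, _ = states.shape
        imgs = self.net(states.reshape(B * T, CTX))
        return imgs.view(B, T, 3, IMG, IMG)


class ImageDiscriminator(nn.Module):
    """Judges each frame individually, conditioned on its sentence."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),     # 32x32
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),    # 16x16
            nn.Flatten(),
        )
        self.out = nn.Linear(64 * 16 * 16 + EMB, 1)

    def forward(self, img, sent):
        return self.out(torch.cat([self.conv(img), sent], dim=1))


class StoryDiscriminator(nn.Module):
    """Judges the whole sequence at once for global consistency."""
    def __init__(self, T):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3 * T, 64, 4, 2, 1), nn.LeakyReLU(0.2),  # frames stacked on channels
            nn.Conv2d(64, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        self.out = nn.Linear(64 * 16 * 16 + T * EMB, 1)

    def forward(self, imgs, sents):           # (B, T, 3, H, W), (B, T, EMB)
        B, T = imgs.shape[:2]
        stacked = imgs.reshape(B, T * 3, IMG, IMG)
        return self.out(torch.cat([self.conv(stacked), sents.reshape(B, -1)], dim=1))


if __name__ == "__main__":
    B, T = 2, 5                               # 2 stories of 5 sentences each
    sents = torch.randn(B, T, EMB)            # stand-in sentence embeddings
    states = ContextEncoder()(sents)
    frames = Generator()(states)              # (2, 5, 3, 64, 64)
    d_img = ImageDiscriminator()(frames[:, 0], sents[:, 0])
    d_story = StoryDiscriminator(T)(frames, sents)
    print(frames.shape, d_img.shape, d_story.shape)
```

The key design point the sketch mirrors is the split supervision: the image discriminator enforces per-frame fidelity to its sentence, while the story discriminator sees all frames stacked along the channel dimension together with all sentence embeddings, which is what pushes the generator toward globally consistent characters and scenes.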
