Concepedia

TLDR

Story generation faces a vast, highly personalized output space, yet existing end-to-end models produce monotonous narratives because they are limited to the vocabulary and knowledge of a single training dataset. The authors introduce KG-Story, a three-stage framework that leverages external knowledge graphs, and show that it outperforms existing methods on visual storytelling, where the input prompt is a sequence of five photos. KG-Story distills representative words from the input prompts, enriches the word set via external knowledge graphs, and generates stories from the enriched set; this distill-enrich-generate design lets external resources inform all three stages. In human ranking evaluations, KG-Story's stories rank higher on average than those of state-of-the-art systems, and the authors release code and sample stories on GitHub.

Abstract

Stories are diverse and highly personalized, resulting in a large possible output space for story generation. Existing end-to-end approaches produce monotonous stories because they are limited to the vocabulary and knowledge in a single training dataset. This paper introduces KG-Story, a three-stage framework that allows the story generation model to take advantage of external Knowledge Graphs to produce interesting stories. KG-Story distills a set of representative words from the input prompts, enriches the word set by using external knowledge graphs, and finally generates stories based on the enriched word set. This distill-enrich-generate framework allows the use of external resources not only for the enrichment phase, but also for the distillation and generation phases. In this paper, we show the superiority of KG-Story for visual storytelling, where the input prompt is a sequence of five photos and the output is a short story. Per the human ranking evaluation, stories generated by KG-Story are on average ranked better than those of the state-of-the-art systems. Our code and output stories are available at https://github.com/zychen423/KE-VIST.
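The distill-enrich-generate pipeline described above can be sketched in miniature. Everything below is a hypothetical illustration: the function names, the per-photo term extraction, and the knowledge-graph lookup are stand-in assumptions, not the authors' actual models or API (which use learned distillation and generation modules).

```python
# Hypothetical sketch of the KG-Story distill-enrich-generate pipeline.
# All names and logic here are illustrative assumptions, not the paper's
# actual implementation (which uses trained neural modules at each stage).

def distill(photos):
    """Stage 1: extract one representative term per photo (stand-in logic)."""
    return [photo["salient_term"] for photo in photos]

def enrich(terms, knowledge_graph):
    """Stage 2: insert bridging concepts between adjacent terms by
    querying an external knowledge graph (stand-in: a dict lookup)."""
    enriched = []
    for first, second in zip(terms, terms[1:]):
        enriched.append(first)
        bridge = knowledge_graph.get((first, second))
        if bridge:
            enriched.append(bridge)  # concept linking the two terms
    enriched.append(terms[-1])
    return enriched

def generate(terms):
    """Stage 3: stand-in for the story generator, one clause per term."""
    return " ".join(f"...{term}..." for term in terms)

def kg_story(photos, knowledge_graph):
    """Run the full three-stage pipeline on a photo sequence."""
    terms = distill(photos)
    enriched = enrich(terms, knowledge_graph)
    return generate(enriched)
```

The key structural point the sketch captures is that the knowledge graph is consulted *between* distillation and generation, so the generator can mention concepts that never appear in the photos or the training vocabulary.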

