Publication | Closed Access
Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis
Citations: 423
References: 23
Year: 2019
Venue: Unknown
Image Analysis · Machine Learning · Diverse Image Synthesis · Single Conditional Context · Engineering · Generative Adversarial Network · Image Synthesis · Generative Models · Regularization Term · Effective Regularization Term · Generative Model · Generative AI · Deep Learning · Generative System · Computer Vision · Synthetic Image Generation
Conditional generation tasks demand diverse outputs, yet cGANs often ignore the input noise vectors, causing mode collapse and prompting costly, task-specific fixes. This paper introduces a simple regularization term to mitigate mode collapse in cGANs. The regularizer maximizes the ratio of the distance between generated images to the distance between their latent codes, encouraging the generator to explore minor modes; it applies to various conditional tasks without extra training overhead. Experiments on categorical generation, image-to-image translation, and text-to-image synthesis show that the method improves diversity while preserving quality.
Most conditional generation tasks expect diverse outputs given a single conditional context. However, conditional generative adversarial networks (cGANs) often focus on the prior conditional information and ignore the input noise vectors, which contribute to the output variations. Recent attempts to resolve the mode collapse issue for cGANs are usually task-specific and computationally expensive. In this work, we propose a simple yet effective regularization term to address the mode collapse issue for cGANs. The proposed method explicitly maximizes the ratio of the distance between generated images with respect to the corresponding latent codes, thus encouraging the generators to explore more minor modes during training. This mode seeking regularization term is readily applicable to various conditional generation tasks without imposing training overhead or modifying the original network structures. We validate the proposed algorithm on three conditional image synthesis tasks including categorical generation, image-to-image translation, and text-to-image synthesis with different baseline models. Both qualitative and quantitative results demonstrate the effectiveness of the proposed regularization method for improving diversity without loss of quality.
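The core of the method is a single ratio: the distance between two images generated from the same condition, divided by the distance between the latent codes that produced them. The sketch below illustrates that computation with NumPy; the function name, the use of mean absolute (L1) distances, and the `eps` stabilizer are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mode_seeking_regularizer(img1, img2, z1, z2, eps=1e-8):
    """Ratio of the distance between two generated images to the
    distance between their latent codes. The generator maximizes this
    term (e.g. by minimizing its negative alongside the adversarial
    loss), pushing nearby latent codes toward distinct outputs."""
    d_img = np.mean(np.abs(img1 - img2))  # image-space distance
    d_z = np.mean(np.abs(z1 - z2))        # latent-space distance
    return d_img / (d_z + eps)            # eps guards against z1 == z2
```

A collapsed generator maps both latent codes to (nearly) the same image, so the ratio falls toward zero; maximizing it therefore directly penalizes mode collapse without changing the network structure.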