Publication | Open Access
Progressive Growing of GANs for Improved Quality, Stability, and Variation
1.6K Citations | 29 References | Year: 2017
Keywords: Image Analysis, Machine Learning, Data Science, Engineering, Generative Adversarial Network, CelebA Dataset, Generative Models, Image Quality, Computer Science, Generative AI, Deep Learning, Progressive Growing, New Training Methodology, Generative System, Computer Vision, Synthetic Image Generation
The authors introduce a progressive training method for GANs that improves image quality, stability, and variation, and they also propose a new evaluation metric and a higher-quality version of the CelebA dataset. The method progressively adds layers to both the generator and the discriminator, starting from a low resolution and refining increasingly fine details, which accelerates and stabilizes training while boosting image quality and diversity; several implementation details further discourage unhealthy competition between the two networks. The approach produces high-resolution images (e.g., 1024×1024 CelebA faces) and achieves a record unsupervised CIFAR-10 inception score of 8.80.
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024×1024. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR-10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
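When a new resolution is grown in, the paper blends the upsampled output of the previous (lower-resolution) block with the output of the new block, ramping a weight alpha from 0 to 1 so the new layers fade in smoothly. The following NumPy sketch illustrates that blend; the function names and array layout (H, W, C) are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def upsample_nearest(img):
    # Nearest-neighbor 2x upsampling: (H, W, C) -> (2H, 2W, C).
    return img.repeat(2, axis=0).repeat(2, axis=1)

def faded_output(low_res_rgb, high_res_rgb, alpha):
    # During the fade-in phase, the generator's output is a convex
    # combination of the upsampled old output and the new block's
    # output; alpha ramps linearly from 0 to 1 over training.
    return (1.0 - alpha) * upsample_nearest(low_res_rgb) + alpha * high_res_rgb
```

At alpha = 0 the network behaves exactly as it did before growing; at alpha = 1 the new layers fully take over, which is what stabilizes the transition between resolutions.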