Publication | Open Access
Training Generative Adversarial Networks with Limited Data
Citations: 931
References: 25
Year: 2020
Keywords: Artificial Intelligence, Data Augmentation, Engineering, Machine Learning, Data Science, Training Images, Generative Adversarial Network, Generative Models, Generative Model, Generative Adversarial Networks, Computer Science, Limited Data Regimes, Generative AI, Deep Learning, Generative System, Computer Vision, Synthetic Image Generation
GAN training with limited data often causes discriminator overfitting and divergence. The study proposes an adaptive discriminator augmentation mechanism to stabilize GAN training with limited data. The method augments the discriminator without altering loss functions or architectures, and is applicable to training from scratch or fine‑tuning. On several datasets, the approach achieves results comparable to StyleGAN2 with an order of magnitude fewer images, improves CIFAR‑10 FID from 5.59 to 2.42, and opens new application domains.
Training generative adversarial networks (GAN) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes. The approach does not require changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images. We expect this to open up new application domains for GANs. We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.
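The adaptive mechanism described above can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: it assumes the overfitting heuristic r_t = E[sign(D(real))] from the paper and a fixed target value, while the class name, step size, and helper functions are hypothetical.

```python
import random

class AdaptiveAugment:
    """Sketch of adaptive discriminator augmentation (ADA).

    Assumption: overfitting is estimated from the sign of the
    discriminator's raw outputs on real images, and the augmentation
    probability p is nudged toward keeping that estimate at a target.
    """

    def __init__(self, target=0.6, step=0.01):
        self.p = 0.0          # probability of augmenting a discriminator input
        self.target = target  # desired value of the heuristic r_t
        self.step = step      # adjustment to p per update (illustrative value)

    def update(self, real_logits):
        # r_t in [-1, 1]: mean sign of the discriminator's outputs on reals.
        # Values near 1 suggest the discriminator is overfitting.
        r_t = sum(1 if l > 0 else -1 for l in real_logits) / len(real_logits)
        # Strengthen augmentation when overfitting, weaken it otherwise;
        # clamp p to the valid range [0, 1].
        self.p += self.step if r_t > self.target else -self.step
        self.p = min(max(self.p, 0.0), 1.0)
        return self.p

    def maybe_augment(self, image, augment_fn):
        # Apply the augmentation pipeline with probability p.
        return augment_fn(image) if random.random() < self.p else image
```

Because p is adjusted on the fly, no dataset-specific tuning of augmentation strength is needed, which is what makes the approach applicable both to training from scratch and to fine-tuning.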