Publication | Closed Access
Adversarial nets with perceptual losses for text-to-image synthesis
Year: 2017 · Venue: Unknown · Citations: 34 · References: 16
Keywords: Generative Artificial Intelligence, Adversarial Nets, Image Analysis, Machine Learning, Engineering, Generative Adversarial Network, Compelling Synthetic Images, Image Synthesis, Synthetic Image Generator, Generative Models, Descriptive Text, Style Transfer, Generative AI, Deep Learning, Computer Vision, Machine Translation, Synthetic Image Generation
Recent approaches in generative adversarial networks (GANs) can automatically synthesize realistic images from descriptive text. Despite fair overall quality, the generated images often exhibit visible flaws and lack structural definition in the object of interest. In this paper, we aim to extend the state of the art in GAN-based text-to-image synthesis by improving the perceptual quality of the generated images. Unlike previous work, our synthetic image generator optimizes perceptual loss functions that measure pixel, feature-activation, and texture differences against a natural image. We present visually more compelling synthetic images of birds and flowers generated from text descriptions, compared with some of the most prominent existing work.
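The three perceptual terms named in the abstract (pixel, feature activation, and texture differences) can be sketched as loss functions. This is a minimal NumPy illustration, not the paper's implementation: the feature extractor `extract`, the loss weights, and all function names are assumptions for exposition; in practice the features would come from a pretrained network such as VGG.

```python
import numpy as np

def pixel_loss(x, y):
    # Mean squared error over raw pixel values.
    return np.mean((x - y) ** 2)

def feature_loss(fx, fy):
    # Mean squared error between feature activation maps.
    return np.mean((fx - fy) ** 2)

def gram_matrix(features):
    # features: (C, H, W) activation map. The Gram matrix of
    # channel-wise inner products summarizes texture statistics.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def texture_loss(fx, fy):
    # Match texture by comparing Gram matrices of the two feature maps.
    return np.mean((gram_matrix(fx) - gram_matrix(fy)) ** 2)

def total_perceptual_loss(x, y, extract, w_pix=1.0, w_feat=1.0, w_tex=1.0):
    # x: generated image, y: natural reference image, both (C, H, W).
    # extract: a stand-in feature extractor (hypothetical; e.g. VGG layers).
    # The weights w_* are illustrative, not values from the paper.
    fx, fy = extract(x), extract(y)
    return (w_pix * pixel_loss(x, y)
            + w_feat * feature_loss(fx, fy)
            + w_tex * texture_loss(fx, fy))
```

During GAN training, a combined objective of this form would be added to the generator's adversarial loss, pulling generated samples toward a natural image in pixel, feature, and texture space simultaneously.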