Publication | Closed Access
GLIGEN: Open-Set Grounded Text-to-Image Generation
Citations: 417
References: 51
Year: 2023
Venue: Unknown
Keywords: Natural Language Processing, Artificial Intelligence, Large Margin, Multimodal LLM, Image Analysis, Machine Learning, Engineering, Grounded Text2img Generation, Visual Grounding, Grounding Ability, Vision Language Model, Visual Question Answering, Computer Science, Human Image Synthesis, Deep Learning, Computer Vision, Machine Translation, Synthetic Image Generation
Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN's zero-shot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin.
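The abstract's core mechanism, freezing the pre-trained weights and injecting grounding information through new gated layers, can be sketched in a few lines. The following is a minimal, illustrative sketch (not the paper's implementation): a simplified single-head attention in NumPy where visual tokens attend over themselves plus grounding tokens, and the result is blended back through a `tanh(gamma)` gate. The function and variable names here are assumptions for illustration; the key property shown is that with the gate initialized to zero, the new layer is an identity, so the frozen model's original behavior is preserved at the start of training.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def gated_grounding_attention(x, grounding, gamma):
    """Illustrative gated attention step: visual tokens x (Nv, d)
    attend over the concatenation of themselves and grounding
    tokens (Ng, d); the output is added back through a
    tanh(gamma) gate. At gamma == 0 the layer reduces to the
    identity, leaving the frozen pre-trained pathway untouched."""
    tokens = np.concatenate([x, grounding], axis=0)   # (Nv + Ng, d)
    scores = x @ tokens.T / np.sqrt(x.shape[-1])      # (Nv, Nv + Ng)
    attn_out = softmax(scores) @ tokens               # (Nv, d)
    return x + np.tanh(gamma) * attn_out              # gated residual

# Toy check: with the gate at zero, grounding tokens have no effect.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))          # 4 visual tokens, dim 8
g = rng.normal(size=(2, 8))          # 2 grounding tokens (e.g. box embeddings)
assert np.allclose(gated_grounding_attention(x, g, gamma=0.0), x)
```

The zero-initialized gate is the design point worth noting: it lets the new trainable layers start as a no-op and gradually learn to mix grounding information in, rather than disrupting the pre-trained model's outputs from the first step.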