Publication | Closed Access

DualGAN: Unsupervised Dual Learning for Image-to-Image Translation

Citations: 2.1K · References: 15 · Year: 2017

TLDR

Conditional GANs for cross-domain image-to-image translation require large numbers of labeled image pairs, which are expensive to obtain and often unavailable. The authors propose DualGAN, a dual-learning framework that trains image translators using only unlabeled images from two domains. DualGAN employs a primal GAN translating from domain U to V and a dual GAN performing the inverse, with a closed-loop reconstruction loss guiding training. Experiments on multiple tasks show DualGAN outperforms a single GAN and can match or slightly exceed conditional GANs trained on fully labeled data.

Abstract

Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently [7, 8, 21, 12, 4, 18]. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation [23], we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images from two domains. In our architecture, the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translators. Experiments on multiple image translation tasks with unlabeled data show a considerable performance gain of DualGAN over a single GAN. For some tasks, DualGAN can even achieve comparable or slightly better results than a conditional GAN trained on fully labeled data.
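The closed loop described in the abstract lends itself to a compact training objective: translate U→V→U and V→U→V, and penalize the reconstruction error alongside the adversarial terms. Below is a minimal PyTorch sketch of that objective. The tiny Translator and Critic networks, the function names, and the loss weights lam_u/lam_v are illustrative stand-ins, not the authors' exact architectures or hyperparameters; the adversarial term here follows a WGAN-style critic score, in the spirit of the paper's setup.

```python
# Minimal sketch of DualGAN's closed-loop objective (illustrative only;
# not the paper's exact networks or hyperparameters).
import torch
import torch.nn as nn

class Translator(nn.Module):
    """Toy stand-in for the paper's encoder-decoder generators."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Critic(nn.Module):
    """Toy stand-in for the discriminators; outputs one score per image."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))  # one scalar score per image

G_UV, G_VU = Translator(), Translator()  # primal: U -> V, dual: V -> U
D_V, D_U = Critic(), Critic()            # one critic per target domain

def generator_loss(u, v, lam_u=10.0, lam_v=10.0):
    """Adversarial terms plus the closed-loop reconstruction error."""
    fake_v, fake_u = G_UV(u), G_VU(v)
    rec_u, rec_v = G_VU(fake_v), G_UV(fake_u)       # U -> V -> U and V -> U -> V
    adv = -D_V(fake_v).mean() - D_U(fake_u).mean()  # WGAN-style generator term
    rec = lam_u * (rec_u - u).abs().mean() + lam_v * (rec_v - v).abs().mean()
    return adv + rec

# Two unlabeled, unpaired batches are all the loss needs.
u = torch.rand(4, 3, 64, 64) * 2 - 1
v = torch.rand(4, 3, 64, 64) * 2 - 1
print(generator_loss(u, v).item())
```

In the full method the critics would be trained in alternation with the translators; the reconstruction term above is the piece that removes the need for paired labels, since it is computed entirely from unpaired samples of the two domains.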
