Concepedia

Publication | Closed Access

Detecting GAN-Generated Imagery Using Saturation Cues

Citations: 208
References: 10
Year: 2019

TLDR

Image forensics is increasingly relevant for combating online disinformation, especially with the rise of GAN-generated imagery such as deepfakes, which recent GANs can produce with near-realistic quality. By analyzing the structure of a popular GAN's generating network, the study shows that the network's treatment of exposure differs markedly from that of a real camera. This exposure cue enables effective discrimination between GAN-generated imagery and real camera images, including those used to train the GAN.

Abstract

Image forensics is an increasingly relevant problem, as it can potentially address online disinformation campaigns and mitigate problematic aspects of social media. Of particular interest, given its recent successes, is the detection of imagery produced by Generative Adversarial Networks (GANs), e.g., "deepfakes". Leveraging large training sets and extensive computing resources, recent GANs can be trained to generate synthetic imagery which is (in some ways) indistinguishable from real imagery. We analyze the structure of the generating network of a popular GAN implementation [1], and show that the network's treatment of exposure is markedly different from a real camera. We further show that this cue can be used to distinguish GAN-generated imagery from camera imagery, including effective discrimination between GAN imagery and real camera images used to train the GAN.
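The abstract's central observation is that GAN generators handle exposure differently from real cameras, which clip (saturate) intensities at the sensor's limits. A minimal way to illustrate such a saturation cue is to measure the fraction of fully dark and fully bright pixels in an image; the sketch below uses this simple statistic for illustration only, and the thresholds and feature definition are assumptions, not the authors' published method.

```python
import numpy as np

def saturation_features(image, low=0, high=255):
    """Fraction of under- and over-exposed pixels in an 8-bit image.

    Illustrative sketch: real cameras clip intensities at the sensor
    limits, so camera images often contain saturated pixels, while a
    GAN generator need not reproduce this behavior. The thresholds
    and statistic here are assumptions for demonstration.
    """
    img = np.asarray(image)
    n = img.size
    under = np.count_nonzero(img <= low) / n   # fully dark pixels
    over = np.count_nonzero(img >= high) / n   # fully bright pixels
    return under, over

# Usage: an image with no clipped pixels vs. one with blown highlights.
flat = np.full((8, 8), 128, dtype=np.uint8)
clipped = flat.copy()
clipped[:2, :] = 255          # 16 of 64 pixels saturated
print(saturation_features(flat))     # -> (0.0, 0.0)
print(saturation_features(clipped))  # -> (0.0, 0.25)
```

In a forensic pipeline, such per-image statistics would feed a classifier that separates GAN output from camera imagery.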
