Publication | Closed Access
ARWGAN: Attention-Guided Robust Image Watermarking Model Based on GAN
Citations: 65
References: 62
Year: 2023
Keywords: Digital Watermarking, Convolutional Neural Network, Machine Vision, Image Analysis, Machine Learning, Image Features, Pattern Recognition, Attention Module, Engineering, Feature Learning, Generative Adversarial Network, Multimedia Security, Deep Learning, Feature Fusion, Computer Vision, Synthetic Image Generation
In existing deep-learning-based watermarking models, the image features extracted for fusion with the watermark are not abundant enough and, more critically, the features essential for robust watermarking are not highlighted during learning; both limitations restrict watermarking performance. To address these two drawbacks, this paper proposes an attention-guided robust image watermarking model based on a generative adversarial network (ARWGAN). To acquire rich representational image features, a feature fusion module (FFM) is devised to learn shallow and deep features effectively for multi-layer fusion with the watermark; meanwhile, reuse of those features through dense connections enhances robustness. To alleviate the image distortion caused by watermark embedding, an attention module (AM) computes an attention mask by mining the global features of the original image. Guided by this mask, image features representing inconspicuous and textured regions are enhanced for high-strength watermark embedding, while other features are suppressed to improve watermarking performance. Furthermore, a noise sub-network enhances robustness by simulating various image attacks during iterative training, and a discriminator distinguishes the encoded image from the original image to continuously improve watermarking invisibility. Experimental results demonstrate that ARWGAN outperforms existing state-of-the-art watermarking models, and ablation experiments confirm the effectiveness of the FFM and the AM. The code is available at https://github.com/river-huang/ARWGAN.
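The core idea of attention-guided embedding can be illustrated with a minimal, non-learned sketch: derive a per-pixel mask that is large in textured regions and small in smooth ones, then modulate the embedding strength by that mask. This is only a toy analogue of the paper's learned attention module; the function names, the gradient-magnitude mask, and the additive embedding rule here are illustrative assumptions, not the actual ARWGAN architecture (which uses a CNN-based AM and multi-layer feature fusion).

```python
import numpy as np

def attention_mask(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for the attention module (AM): weight each pixel by
    its local gradient magnitude, so textured / inconspicuous regions
    receive higher embedding strength. The real AM is a learned CNN
    operating on global image features."""
    gy, gx = np.gradient(image.astype(np.float64))
    mag = np.sqrt(gx ** 2 + gy ** 2)
    # Normalize to [0, 1] so the mask acts as a per-pixel strength scale.
    return mag / (mag.max() + 1e-8)

def embed(image: np.ndarray, watermark: np.ndarray, alpha: float = 4.0) -> np.ndarray:
    """Additive embedding modulated by the attention mask: strong
    embedding where the mask is high (texture), weak where it is low
    (smooth regions), limiting visible distortion. Watermark bits are
    in {0, 1} and are mapped to {-1, +1} before scaling."""
    mask = attention_mask(image)
    return image + alpha * mask * (2.0 * watermark - 1.0)

# Hypothetical usage with a random grayscale image and random bit plane.
rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(32, 32))
wm = rng.integers(0, 2, size=(32, 32)).astype(np.float64)
encoded = embed(img, wm)
```

Because the mask is bounded by 1, the per-pixel distortion is bounded by `alpha`, which is the mechanism the paper exploits: the learned mask concentrates that budget where the human eye is least sensitive.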