Publication | Closed Access
Context Adaptive Network for Image Inpainting
Year: 2023 · Citations: 27 · References: 43
Keywords: Convolutional Neural Network, Engineering, Machine Learning, Context Adaptive Network, Convolution Kernel, Image Classification, Image Analysis, Pattern Recognition, Video Transformer, Video Restoration, Vision Recognition, Context Adaptive Block, Synthetic Image Generation, Machine Vision, Typical Image, Computer Science, Medical Image Computing, Deep Learning, Computer Vision, Inpainting
In a typical image inpainting task, the location and shape of the damaged or masked area are often random and irregular. The vanilla convolutions widely used in learning-based inpainting models treat all spatial features as valid and share parameters across regions, making it difficult for them to cope with such irregular damage, so models tend to produce inpainting results with color discrepancy and blurriness. In this paper, we propose a novel Context Adaptive Network (CANet) to address this issue. The main idea of the proposed CANet is to generate different weights depending on the input, which helps complete images with diverse damage patterns in a flexible way. Specifically, the proposed CANet has two novel context adaptive modules, namely, the context adaptive block (CAB) and the cross-scale contextual attention (CSCA), which use attention mechanisms to cope with diverse content breakdowns. During the forward pass, the proposed CAB uses an adaptive term to weigh the relative importance of the adaptive term and the convolution kernel, dynamically balancing features according to the degree of breakage (confidence level or soft mask); the overall computation is formulated as a classic convolution with an additional attention term that describes local structure. In addition, the proposed CSCA not only takes advantage of the contextual attention module but also exploits cross-scale information transfer to generate reasonable features for damaged areas, thus alleviating the limited long-range modeling capability of convolutional neural networks. Qualitative and quantitative experiments show that our method outperforms state-of-the-art methods, producing clearer, more coherent, and visually plausible inpainting results. The code can be found at github.com/dengyecode/CANet_image_inpainting.
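The mask-conditioned reweighting idea behind the CAB can be illustrated with a toy NumPy sketch in the style of a partial convolution: each window's response is renormalized by the fraction of valid (unmasked) pixels it covers, and a per-pixel confidence map is produced. This is an illustrative assumption about the general mechanism, not the authors' exact formulation; all names and the renormalization scheme here are hypothetical.

```python
import numpy as np

def mask_aware_conv(feat, mask, kernel, eps=1e-8):
    """Toy 2D convolution that ignores masked pixels and renormalizes
    each window by its valid-pixel count (partial-convolution style),
    returning the response and a soft confidence map. Illustrative only."""
    k = kernel.shape[0]
    pad = k // 2
    f = np.pad(feat, pad)          # zero-pad feature map
    m = np.pad(mask, pad)          # zero-pad validity mask (1 = valid)
    out = np.zeros_like(feat, dtype=float)
    conf = np.zeros_like(feat, dtype=float)
    for i in range(feat.shape[0]):
        for j in range(feat.shape[1]):
            win_f = f[i:i + k, j:j + k]
            win_m = m[i:i + k, j:j + k]
            # renormalize by valid count so partially masked windows
            # are not systematically attenuated
            out[i, j] = (win_f * win_m * kernel).sum() * (k * k) / (win_m.sum() + eps)
            conf[i, j] = win_m.sum() / (k * k)   # fraction of valid pixels
    return out, conf

# Example: constant feature map with one damaged pixel
feat = np.ones((5, 5))
mask = np.ones((5, 5))
mask[2, 2] = 0                      # one masked (damaged) pixel
kernel = np.ones((3, 3)) / 9.0      # averaging kernel
out, conf = mask_aware_conv(feat, mask, kernel)
# renormalization recovers the constant value everywhere,
# while conf drops near the damaged pixel and image borders
```

A learned network such as CANet would replace this fixed renormalization with adaptive, attention-derived weights, but the sketch shows why mask awareness avoids the color discrepancy that plain convolutions produce around holes.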