Publication | Closed Access
SAT-Net: Structure-Aware Transformer-Based Attention Fusion Network for Low-Quality Retinal Fundus Image Enhancement
Citations: 38
References: 34
Year: 2025
In ophthalmology, high-fidelity fundus images are essential for disease diagnosis and intervention. However, many real-world clinical conditions may degrade the quality of the acquired images and thus affect clinical diagnostic accuracy. Traditional convolutional neural network-based retinal fundus image enhancement methods cannot always capture long-range dependencies, which reduces the overall visual quality of images, especially for real retinal fundus images. Furthermore, existing enhancement methods often fail to fully exploit low-resolution structural detail, which can lead to inaccurate reconstruction of pivotal fundus vessel topology or capillary details. In this paper, we propose a novel Structure-Aware Transformer-based attention fusion Network (SAT-Net) for low-quality retinal fundus image enhancement. First, we introduce a Transformer-based attention fusion module that combines window-based self-attention with channel self-attention, capturing global spatial dependencies while simultaneously emphasizing important feature channels. This fusion significantly improves the overall perceptual quality of the image by enhancing both local details and the uniformity of non-vessel background regions. Second, we introduce a cross-quality knowledge distillation technique that bridges the quality gap between high-quality and low-quality fundus images. By designing a high-performing teacher network to guide a lightweight student network, the student network learns to capture detailed features from low-quality fundus images, preserving critical diagnostic information and fine topological structures. Moreover, we design a structure-aware multi-scale loss function that uses a trainable subnetwork to extract edge structure at different scales, better constraining pivotal fundus vessel structures and capillary details.
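For intuition, the dual-branch attention fusion can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation: `attention_fusion`, the additive fusion of the two branches, the window size, and the unprojected query/key/value (the raw features are reused for all three) are all simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_self_attention(feat):
    # feat: (C, N) flattened feature map; attention computed across channels,
    # re-weighting feature channels by their pairwise similarity.
    C, N = feat.shape
    attn = softmax(feat @ feat.T / np.sqrt(N), axis=-1)  # (C, C)
    return attn @ feat

def window_self_attention(feat, win):
    # feat: (H, W, C); self-attention restricted to non-overlapping win x win
    # windows, capturing spatial dependencies at reduced cost.
    H, W, C = feat.shape
    out = np.empty_like(feat)
    for i in range(0, H, win):
        for j in range(0, W, win):
            w = feat[i:i + win, j:j + win].reshape(-1, C)    # (win*win, C)
            attn = softmax(w @ w.T / np.sqrt(C), axis=-1)    # (win*win, win*win)
            out[i:i + win, j:j + win] = (attn @ w).reshape(win, win, C)
    return out

def attention_fusion(feat, win=4):
    # Hypothetical fusion: simply sum the spatial (window) and channel branches.
    H, W, C = feat.shape
    spatial = window_self_attention(feat, win)
    channel = channel_self_attention(feat.reshape(-1, C).T).T.reshape(H, W, C)
    return spatial + channel
```

In a real network each branch would apply learned query/key/value projections and the fusion would typically be learned rather than a plain sum; the sketch only shows why the two branches are complementary: one attends over spatial positions, the other over channels.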
Comprehensive quantitative and qualitative experiments on both synthetic and real fundus image datasets robustly validate that our proposed SAT-Net outperforms other state-of-the-art methods for fundus image enhancement. In addition, extensive comparative experiments on both the vessel segmentation and Optic Disc/Cup detection tasks further validate the effectiveness and superiority of our proposed method.
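The structure-aware multi-scale loss described above can also be sketched. In this hypothetical NumPy version, a fixed Sobel operator stands in for the paper's trainable edge subnetwork, and 2x average pooling stands in for its multi-scale decomposition; both substitutions are assumptions for illustration only.

```python
import numpy as np

def sobel_edges(img):
    # Fixed Sobel edge magnitude; a stand-in for the trainable edge subnetwork.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    H, W = img.shape
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def downsample(img):
    # 2x average pooling to move to the next, coarser scale.
    H, W = img.shape
    return img[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

def multiscale_edge_loss(pred, target, scales=3):
    # Mean L1 distance between edge maps, averaged over several scales,
    # so vessel topology is constrained both finely and coarsely.
    loss = 0.0
    for _ in range(scales):
        loss += np.abs(sobel_edges(pred) - sobel_edges(target)).mean()
        pred, target = downsample(pred), downsample(target)
    return loss / scales
```

Identical images give a loss of zero, and penalties at coarse scales discourage the enhancement network from distorting large-scale vessel structure even when fine details match.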