Concepedia

Publication | Open Access

DeepU-Net: A Parallel Dual-Branch Model for Deeply Fusing Multiscale Features for Road Extraction From High-Resolution Remote Sensing Images

Citations: 31
References: 45
Year: 2025

Abstract

Existing encoder–decoder models, with or without atrous convolutions, show limitations under the diverse conditions found in high-resolution remote sensing images, such as varying road scales, shadows, building occlusions, and vegetation. This article therefore introduces a dual-branch deep fusion network, named "DeepU-Net," that captures global and local information in parallel. Two novel modules are designed: 1) a spatial and coordinate squeeze-and-excitation fusion attention module, which sharpens the focus on spatial positions and target channel information; and 2) an efficient multiscale convolutional attention module, which strengthens the handling of multiscale road information. The proposed model is validated on two datasets, CHN6-CUG and DeepGlobe, drawn from urban and rural areas, respectively, and compared with six commonly used models: U-Net, PSPNet, DeepLabv3+, HRNet, CoANet, and SegFormer. The experimental results show that the proposed model achieves mean intersection over union (mIoU) scores of 83.18% and 81.43% on the two datasets, average improvements of 1.93% and 1.02%, respectively, over the six baseline models. These results indicate that the proposed model achieves greater accuracy than the six widely applied models.
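The headline metric in the abstract is mean intersection over union (mIoU), averaged over classes (here, road and background). The following is a minimal sketch of that metric on flat label lists; the function name and the list encoding are illustrative choices, not taken from the paper, which reports mIoU on full segmentation masks.

```python
def mean_iou(pred, target, num_classes):
    """Mean intersection over union across classes.

    pred, target: flat sequences of integer class labels (e.g. 0 = background, 1 = road).
    Classes absent from both prediction and target are skipped, so they
    do not distort the average.
    """
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)


# Toy example: one false-positive road pixel.
# Class 0: intersection 2, union 3 -> 2/3; class 1: intersection 1, union 2 -> 1/2.
score = mean_iou([1, 1, 0, 0], [1, 0, 0, 0], num_classes=2)
```

A perfect prediction yields 1.0, so the reported 83.18% and 81.43% correspond to mIoU values of 0.8318 and 0.8143 under this definition.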
