Publication | Open Access
Refined Extraction of Building Outlines from High-Resolution Remote Sensing Imagery Based on a Multifeature Convolutional Neural Network and Morphological Filtering
Citations: 68
References: 59
Year: 2020
Keywords: Convolutional Neural Network, Engineering, Machine Learning, Multi-image Fusion, Image Classification, Image Analysis, Mathematical Morphology, Pattern Recognition, Edge Detection, Machine Vision, Object Detection, Geography, Automatic Extraction, Deep Learning, Optical Image Recognition, Morphological Filtering, Building Boundaries, Computer Vision, Remote Sensing, Image Segmentation, Refined Building Boundaries
The automatic extraction of building outlines from high-resolution images is an important and challenging task. Convolutional neural networks have shown excellent results compared with traditional building extraction methods because of their ability to extract high-level abstract features from images. However, current building extraction methods struggle to fully exploit the multiple features available in an image; consequently, the resulting building boundaries are often irregular. To overcome these limitations, we propose a method for extracting buildings from high-resolution images using a multifeature convolutional neural network (MFCNN) and morphological filtering. Our method consists of two steps. First, the MFCNN, which consists of a residual connected unit, a dilated perception unit, and a pyramid aggregation unit, is used to achieve pixel-level segmentation of the buildings. Second, morphological filtering is used to optimize the building boundaries, improve boundary regularity, and obtain refined building outlines. The Massachusetts and Inria datasets are selected for experimental analysis. Under the same experimental conditions, the extraction results achieved with the proposed MFCNN are compared with the results of other deep learning models that have been commonly used in recent years: FCN-8s, SegNet, and U-Net. The results on both datasets reveal that the proposed model improves the F1-score by 3.31%-5.99%, increases the overall accuracy (OA) by 1.85%-3.07%, and increases the intersection over union (IoU) by 3.47%-8.82%. These findings illustrate that the proposed method is effective at extracting buildings from complex scenes.
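The second step above, morphological post-processing of a pixel-level segmentation mask, can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: the structuring-element size and the opening-then-closing order are assumptions, and the `refine_mask` and `iou` helper names are hypothetical. Opening removes small false-positive speckles; closing fills small holes inside building regions, and the IoU function matches the evaluation metric reported in the abstract.

```python
import numpy as np
from scipy import ndimage


def refine_mask(mask: np.ndarray, size: int = 3) -> np.ndarray:
    """Regularize a binary building mask with morphological filtering.

    Opening (erosion then dilation) removes isolated false positives
    smaller than the structuring element; closing (dilation then erosion)
    fills comparably small holes and gaps along the boundary.
    """
    structure = np.ones((size, size), dtype=bool)
    opened = ndimage.binary_opening(mask, structure=structure)
    return ndimage.binary_closing(opened, structure=structure)


def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter) / union if union else 0.0


# Toy example: a 6x6 "building" plus a one-pixel speckle of noise.
truth = np.zeros((10, 10), dtype=bool)
truth[2:8, 2:8] = True
noisy = truth.copy()
noisy[0, 0] = True  # spurious prediction far from the building

refined = refine_mask(noisy)
# The speckle is removed and the building block survives intact.
```

A 3x3 square structuring element is a common default; larger elements smooth boundaries more aggressively but risk erasing small buildings, so the size would normally be tuned per dataset resolution.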