Publication | Open Access
Learning Fully Dense Neural Networks for Image Semantic Segmentation
45 Citations | 27 References | Year: 2019
Keywords: Convolutional Neural Network, Machine Vision, Machine Learning, Image Analysis, Data Science, Pixel-wise Classification, Engineering, Autoencoders, Feature Learning, Scene Understanding, Semantic Segmentation, Image Semantic Segmentation, Computer Science, Dense Neural Network, Deep Learning, Critical Spatial Information, Image Segmentation, Computer Vision
Semantic segmentation is pixel-wise classification that retains critical spatial information. "Feature map reuse" is commonly adopted in CNN-based approaches to exploit feature maps from early layers for later spatial reconstruction. Along this direction, we go a step further by proposing a fully dense neural network with an encoder-decoder structure, abbreviated as FDNet. For each stage in the decoder module, the feature maps of all previous blocks are adaptively aggregated and fed forward as input. On the one hand, this reconstructs spatial boundaries accurately; on the other, it learns more efficiently thanks to more efficient gradient backpropagation. In addition, we propose a boundary-aware loss function that focuses more attention on pixels near object boundaries, which improves the labeling of these "hard examples". FDNet achieves the best performance on two benchmark datasets, PASCAL VOC 2012 and NYUDv2, among previous works that do not train on additional datasets.
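The boundary-aware idea described in the abstract, up-weighting pixels near label boundaries in the pixel-wise loss, can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the 4-neighborhood boundary test, the `boost` factor, and the plain weighted cross-entropy are illustrative assumptions.

```python
import numpy as np

def boundary_weights(labels, boost=2.0):
    """Per-pixel weights that emphasize pixels adjacent to a label boundary.

    A pixel counts as "near a boundary" if any of its 4-neighbors carries a
    different class label. `boost` is an illustrative hyperparameter.
    """
    near = np.zeros(labels.shape, dtype=bool)
    # Horizontal neighbors: mark both sides of every left/right label change.
    hdiff = labels[:, 1:] != labels[:, :-1]
    near[:, 1:] |= hdiff
    near[:, :-1] |= hdiff
    # Vertical neighbors: mark both sides of every up/down label change.
    vdiff = labels[1:, :] != labels[:-1, :]
    near[1:, :] |= vdiff
    near[:-1, :] |= vdiff
    return np.where(near, boost, 1.0)

def boundary_aware_ce(probs, labels, boost=2.0):
    """Weighted pixel-wise cross-entropy: boundary pixels count more.

    probs: (H, W, C) per-pixel class probabilities; labels: (H, W) int map.
    """
    h, w = labels.shape
    # Probability assigned to the true class of each pixel.
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    wts = boundary_weights(labels, boost)
    return float(np.sum(wts * -np.log(p_true + 1e-12)) / np.sum(wts))
```

With `boost > 1`, mislabeled pixels along object contours contribute more to the loss than interior pixels, which is the stated intent of focusing training on boundary "hard examples".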