Published October 16, 2022 | Version v1
Conference paper

Fully convolutional and feedforward networks for the semantic segmentation of remotely sensed images

Description

This paper presents a novel method for the semantic segmentation of very high resolution remotely sensed images, based on fully convolutional networks (FCNs) and feedforward neural networks (FFNNs). The proposed model exploits the intrinsic multiscale information extracted at the different convolutional blocks of an FCN by integrating FFNNs, thus incorporating information at multiple scales. The goal is to obtain accurate classification results on realistic data sets characterized by sparse ground truth (GT) data, by benefiting from multiscale and long-range spatial information. The final loss function is computed as a linear combination of the weighted cross-entropy losses of the FFNNs and of the FCN. The modeling of spatial-contextual information is further addressed through an additional loss term that integrates spatial information between neighboring pixels. The experimental validation is conducted on the ISPRS 2D Semantic Labeling Challenge data set over the city of Vaihingen, Germany. The results are promising: the proposed approach obtains higher average classification accuracy than the state-of-the-art techniques considered, especially in the case of scarce, suboptimal GTs.
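
To make the loss composition described above concrete, the following PyTorch-style sketch is one plausible rendering, not the authors' implementation: the module names (MultiScaleSegmenter, backbone, heads, fcn_head), the combination weights (lambdas, mu), the ignore-index handling of sparse GT, and the simple pairwise smoothness term standing in for the paper's spatial-context loss are all assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleSegmenter(nn.Module):
    """Hypothetical sketch: an FCN backbone whose intermediate blocks
    feed small FFNN heads, as the abstract describes at a high level."""
    def __init__(self, backbone, heads, fcn_head):
        super().__init__()
        self.backbone = backbone            # assumed to return a list of multiscale feature maps
        self.heads = nn.ModuleList(heads)   # one feedforward head per convolutional block
        self.fcn_head = fcn_head            # final FCN decoder producing dense logits

    def forward(self, x):
        feats = self.backbone(x)
        aux_logits = [head(f) for head, f in zip(self.heads, feats)]
        return self.fcn_head(feats), aux_logits

def combined_loss(main_logits, aux_logits, target, class_weights, lambdas,
                  mu=0.1, ignore_index=255):
    """Linear combination of weighted cross-entropy losses (FCN + FFNN heads),
    plus a placeholder neighborhood term for the spatial-context loss.
    Unlabeled pixels are skipped via ignore_index, one way to handle sparse GT."""
    ce = nn.CrossEntropyLoss(weight=class_weights, ignore_index=ignore_index)
    loss = ce(main_logits, target)
    for lam, logits in zip(lambdas, aux_logits):
        # Auxiliary predictions are upsampled to full resolution before the loss.
        logits = F.interpolate(logits, size=target.shape[-2:],
                               mode="bilinear", align_corners=False)
        loss = loss + lam * ce(logits, target)
    # Assumed spatial term: penalize disagreement between adjacent pixels;
    # the paper's exact formulation is not given in this abstract.
    probs = F.softmax(main_logits, dim=1)
    smooth = (probs[..., 1:, :] - probs[..., :-1, :]).abs().mean() \
           + (probs[..., :, 1:] - probs[..., :, :-1]).abs().mean()
    return loss + mu * smooth
```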

Additional details

Created: December 3, 2022
Modified: December 1, 2023