
Deep multi-task learning for a geographically-regularized semantic segmentation of aerial images

Type of publication Peer-reviewed
Publication form Original article (peer-reviewed)
Authors Volpi Michele, Tuia Devis
Project Multimodal machine learning for remote sensing information fusion


Journal ISPRS Journal of Photogrammetry and Remote Sensing
Volume (Issue) 144
Page(s) 48 - 60
DOI 10.1016/j.isprsjprs.2018.06.007

Open Access

Type of Open Access Repository (Green Open Access)


© 2018 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS)

When approaching the semantic segmentation of overhead imagery in the decimeter spatial resolution range, successful strategies usually combine powerful methods for learning the visual appearance of the semantic classes (e.g. convolutional neural networks) with strategies for spatial regularization (e.g. graphical models such as conditional random fields). In this paper, we propose a method to learn evidence in the form of semantic class likelihoods, semantic boundaries across classes and shallow-to-deep visual features, each one modeled by a multi-task convolutional neural network architecture. We combine this bottom-up information with top-down spatial regularization encoded by a conditional random field model that optimizes the label space across a hierarchy of segments, with constraints related to structural, spatial and data-dependent pairwise relationships between regions. Our results show that such a strategy provides better regularization than a series of strong baselines reflecting state-of-the-art technologies. The proposed strategy offers a flexible and principled framework for including several sources of visual and structural information, while allowing for different degrees of spatial regularization accounting for priors about the expected output structures.
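As a hedged illustration of the top-down step described above, the sketch below implements a toy conditional-random-field energy over a handful of segments: unary terms come from hypothetical per-segment class likelihoods (as a CNN softmax might produce), and a Potts-style pairwise term, down-weighted where predicted boundary evidence is strong, encourages neighboring segments to share a label. The labeling is refined by iterated conditional modes (ICM). All names, numbers, and the ICM optimizer are illustrative assumptions, not the paper's actual model or inference scheme.

```python
import math

# Toy setup: 4 segments, 2 classes (0 and 1), with unary class
# likelihoods per segment (e.g. from a CNN softmax). Illustrative values.
likelihoods = [
    [0.9, 0.1],   # segment 0: strongly class 0
    [0.6, 0.4],   # segment 1: weakly class 0
    [0.2, 0.8],   # segment 2: class 1
    [0.1, 0.9],   # segment 3: strongly class 1
]

# Adjacent segment pairs with a boundary strength in [0, 1]: a strong
# predicted semantic boundary (close to 1) weakens the smoothness
# prior across that edge, so label changes there are cheap.
edges = {(0, 1): 0.1, (1, 2): 0.9, (2, 3): 0.2}

def energy(labels, lam=1.0):
    """CRF energy: negative-log-likelihood unaries plus a
    boundary-modulated Potts pairwise penalty."""
    unary = -sum(math.log(likelihoods[i][l]) for i, l in enumerate(labels))
    pairwise = sum(lam * (1.0 - b)
                   for (i, j), b in edges.items() if labels[i] != labels[j])
    return unary + pairwise

def icm(labels, n_iters=5):
    """Iterated conditional modes: greedily relabel each segment
    to the class that lowers the total energy, until convergence."""
    labels = list(labels)
    for _ in range(n_iters):
        for i in range(len(labels)):
            labels[i] = min(range(2),
                            key=lambda l: energy(labels[:i] + [l] + labels[i + 1:]))
    return labels

init = [0, 1, 0, 1]       # noisy initial labeling
refined = icm(init)
print(refined)            # → [0, 0, 1, 1]: segments regroup, and the split
                          # lands on the strong boundary between 1 and 2
```

Note how the strong boundary evidence on edge (1, 2) makes the label discontinuity cheap exactly there, while the weak boundaries pull segments 0/1 and 2/3 to agree; this is the data-dependent pairwise behavior the abstract refers to, here in a deliberately minimal form.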