
Dense semantic labeling of subdecimeter resolution images with convolutional neural networks

Type of publication Peer-reviewed
Publication format Original article (peer-reviewed)
Authors Volpi Michele, Tuia Devis
Project Multimodal machine learning for remote sensing information fusion

Original article (peer-reviewed)

Journal IEEE Transactions on Geoscience and Remote Sensing
Volume (Issue) 55(2)
Page(s) 881 - 893
DOI 10.1109/TGRS.2016.2616585

Open Access

Type of Open Access Repository (Green Open Access)


Semantic labeling (or pixel-level land-cover classification) in ultrahigh-resolution imagery (<10 cm) requires statistical models able to learn high-level concepts from spatial data with large appearance variations. Convolutional neural networks (CNNs) achieve this goal by discriminatively learning a hierarchy of representations of increasing abstraction. In this paper, we present a CNN-based system relying on a downsample-then-upsample architecture. Specifically, it first learns a coarse spatial map of high-level representations by means of convolutions and then learns to upsample this map back to the original resolution by deconvolutions. By doing so, the CNN learns to densely label every pixel at the original resolution of the image. This brings many advantages, including: 1) state-of-the-art numerical accuracy; 2) improved geometric accuracy of predictions; and 3) high efficiency at inference time. We test the proposed system on the Vaihingen and Potsdam subdecimeter resolution data sets, involving the semantic labeling of aerial images of 9- and 5-cm resolution, respectively. These data sets are composed of many large, fully annotated tiles, allowing an unbiased evaluation of models making use of spatial information. We do so by comparing two standard CNN architectures with the proposed one: standard patch classification, prediction of local label patches employing only convolutions, and full patch labeling employing deconvolutions. All the systems compare favorably with or outperform a state-of-the-art baseline relying on superpixels and powerful appearance descriptors. The proposed full patch labeling CNN outperforms these models by a large margin, while also showing a very appealing inference time.
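The downsample-then-upsample idea described in the abstract can be illustrated with a minimal sketch: strided convolutions produce a coarse map of high-level features, and transposed convolutions ("deconvolutions") upsample it back so every pixel receives a label at the original resolution. This is a hypothetical toy network for illustration only; the layer sizes and depths are assumptions and do not reproduce the architecture of the paper.

```python
import torch
import torch.nn as nn

class DownUpNet(nn.Module):
    """Toy downsample-then-upsample CNN for dense labeling.

    Hypothetical layer sizes; not the paper's exact architecture.
    """

    def __init__(self, in_ch=3, n_classes=6):
        super().__init__()
        # Encoder: strided convolutions learn a coarse spatial map of
        # high-level representations (spatial size divided by 4).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Decoder: transposed convolutions learn to upsample the coarse
        # map back to the original resolution (spatial size times 4).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, n_classes, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

net = DownUpNet()
x = torch.randn(1, 3, 64, 64)   # one RGB tile, 64 x 64 pixels
scores = net(x)                 # per-pixel class scores, same H x W as input
labels = scores.argmax(dim=1)   # dense label map: one class per pixel
print(scores.shape, labels.shape)
```

Because the decoder restores the input resolution, a whole tile is labeled in a single forward pass, which is what gives such architectures their favorable inference time compared with per-pixel patch classification.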