Special Issue "Deep Learning in Remote Sensing Application"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 December 2022.

Special Issue Editors

Dr. Weijia Li
Guest Editor
CUHK-Sensetime Joint Lab, Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong, China
Interests: remote sensing image understanding; computer vision; deep learning
Dr. Lichao Mou
Guest Editor
1. Data Science in Earth Observation, Technical University of Munich (TUM), Arcisstraße 21, 80333 München, Germany
2. Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Münchener Straße 20, 82234 Weßling, Germany
Interests: earth observation; remote sensing; computer vision; machine/deep learning
Dr. Angelica I. Aviles-Rivero
Guest Editor
DAMTP, University of Cambridge, Wilberforce Rd, Cambridge CB3 0WA, UK
Interests: semi-supervised learning; hyperspectral analysis; street level analysis; deep learning; graph-based techniques
Runmin Dong
Guest Editor Assistant
Ministry of Education Key Laboratory for Earth System Modeling, Department of Earth System Science, Tsinghua University, Beijing 100084, China
Interests: remote sensing image understanding; deep learning; land cover mapping; image super-resolution reconstruction
Juepeng Zheng
Guest Editor Assistant
Ministry of Education Key Laboratory for Earth System Modeling, Department of Earth System Science, Tsinghua University, Beijing 100084, China
Interests: remote sensing image understanding; deep learning; high performance computing

Special Issue Information

Dear Colleagues,

Remote sensing images have recorded diverse information about the Earth's surface for decades and have been broadly applied in many crucial areas, e.g., urban planning, national security, agriculture, forestry, climate, and hydrology. Extracting essential information from this substantial volume of imagery efficiently and accurately is therefore important. In recent years, artificial intelligence, and deep learning in particular, has had a significant impact on the remote sensing domain, showing great potential in land cover and land use mapping, crop monitoring, object detection, building and road extraction, change detection, super-resolution, and many other remote sensing applications. However, many challenges remain due to the limited number of annotated datasets, the specific characteristics of different sensors and data sources, the complexity and diversity of large-scale areas, and other problems specific to real-world applications. In this Special Issue, we welcome new research progress and contributions to deep-learning-based remote sensing applications, including novel datasets, algorithm designs, and application domains. The scope of this Special Issue includes, but is not limited to:

  • Image classification;
  • Object detection;
  • Semantic segmentation;
  • Instance segmentation;
  • Weakly supervised learning;
  • Semi-supervised learning;
  • Self-supervised learning;
  • Unsupervised learning;
  • Domain adaptation;
  • Transfer learning;
  • Novel datasets;
  • Novel tasks/applications;
  • 3D vision for monocular images;
  • Multi-view stereo;
  • Point cloud data;
  • Change detection;
  • Time-series data analysis;
  • Multispectral or hyperspectral image analysis;
  • Image super-resolution/restoration;
  • Data fusion;
  • Multi-modal data analysis.

Dr. Weijia Li
Dr. Lichao Mou
Dr. Angelica I. Aviles-Rivero
Guest Editors
Runmin Dong
Juepeng Zheng
Guest Editor Assistants

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deep learning
  • Computer vision
  • Remote sensing image analysis
  • Classification/detection/segmentation
  • Novel datasets
  • Remote sensing applications

Published Papers (3 papers)


Research

Article
Multi-Object Segmentation in Complex Urban Scenes from High-Resolution Remote Sensing Data
Remote Sens. 2021, 13(18), 3710; https://doi.org/10.3390/rs13183710 - 16 Sep 2021
Abstract
Terrestrial feature extraction, such as extracting roads and buildings from aerial images with an automatic system, has many uses across a wide range of fields, including disaster management, change detection, land cover assessment, and urban planning. This task is commonly difficult in complex scenes, such as urban scenes, where building and road objects are surrounded by shadows, vehicles, trees, etc., and appear in heterogeneous forms with lower inter-class and higher intra-class contrast. Moreover, such extraction is time-consuming and expensive to perform manually by human specialists. Deep convolutional models have shown considerable performance for feature segmentation from remote sensing data in recent years. However, for large and continuous areas of obstructions, most of these techniques still cannot detect roads and buildings well. Hence, this work's principal goal is to introduce two novel deep convolutional models based on the UNet family for multi-object segmentation, such as roads and buildings, from aerial imagery. We focused on buildings and road networks because these objects constitute a large part of urban areas. The presented models are called multi-level context gating UNet (MCG-UNet) and bi-directional ConvLSTM UNet (BCL-UNet). The proposed methods retain the advantages of the UNet model and use densely connected convolutions, bi-directional ConvLSTM, and a squeeze-and-excitation module to produce high-resolution segmentation maps and maintain boundary information even under complicated backgrounds. Additionally, we implemented a simple yet efficient loss function called boundary-aware loss (BAL) that allows the network to concentrate on hard semantic segmentation regions, such as overlapping areas, small objects, sophisticated objects, and object boundaries, and to produce high-quality segmentation maps. The presented networks were tested on the Massachusetts building and road datasets.
MCG-UNet improved the average F1 accuracy by 1.85% and 1.19% for road extraction and by 6.67% and 5.11% for building extraction, compared with UNet and BCL-UNet, respectively. Additionally, the presented MCG-UNet and BCL-UNet networks were compared with other state-of-the-art deep-learning-based networks, and the results demonstrate their superiority in multi-object segmentation tasks.
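The paper does not spell out the boundary-aware loss here, but one common way to realize the idea of concentrating on object boundaries is to up-weight boundary pixels in a per-pixel cross-entropy loss. The following is a minimal sketch of that general technique; the function name, neighbourhood-based boundary detection, and weighting scheme are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def boundary_weighted_bce(pred, target, w_boundary=5.0):
    """Per-pixel binary cross-entropy, up-weighted at mask boundaries.

    pred: probabilities in (0, 1); target: binary ground-truth mask.
    """
    # Boundary map: a pixel is a boundary pixel if its 4-neighbourhood
    # (edge-padded) contains both classes.
    pad = np.pad(target, 1, mode="edge")
    up, down = pad[:-2, 1:-1], pad[2:, 1:-1]
    left, right = pad[1:-1, :-2], pad[1:-1, 2:]
    neigh_max = np.maximum.reduce([up, down, left, right])
    neigh_min = np.minimum.reduce([up, down, left, right])
    boundary = (neigh_max != neigh_min).astype(float)

    weights = 1.0 + w_boundary * boundary  # boundary pixels count extra
    eps = 1e-7
    bce = -(target * np.log(pred + eps)
            + (1 - target) * np.log(1 - pred + eps))
    return float(np.mean(weights * bce))
```

With this weighting, errors on boundary pixels contribute (1 + w_boundary) times as much to the loss as interior errors, which pushes the network toward sharper segment edges.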
(This article belongs to the Special Issue Deep Learning in Remote Sensing Application)

Article
Learning Adjustable Reduced Downsampling Network for Small Object Detection in Urban Environments
Remote Sens. 2021, 13(18), 3608; https://doi.org/10.3390/rs13183608 - 10 Sep 2021
Abstract
Detecting small objects (e.g., manhole covers, license plates, and roadside milestones) in urban images is a long-standing challenge, mainly due to the small scale of the objects and background clutter. Although convolutional neural network (CNN)-based methods have made significant progress and achieved impressive results in generic object detection, the problem of small object detection remains unsolved. To address this challenge, in this study we developed an end-to-end network architecture with three significant characteristics compared to previous works. First, we designed a backbone network module, named Reduced Downsampling Network (RD-Net), to extract informative feature representations with high spatial resolution and preserve local information for small objects. Second, we introduced an Adjustable Sample Selection (ADSS) module, which frees the Intersection-over-Union (IoU) threshold hyperparameters and defines positive and negative training samples based on the statistical characteristics between generated anchors and ground reference bounding boxes. Third, we incorporated the generalized Intersection-over-Union (GIoU) loss for bounding box regression, which efficiently bridges the gap between distance-based optimization losses and area-based evaluation metrics. We demonstrated the effectiveness of our method through extensive experiments on the public Urban Element Detection (UED) dataset acquired by Mobile Mapping Systems (MMS). The Average Precision (AP) of the proposed method was 81.71%, an improvement of 1.2% over the popular detection framework Faster R-CNN.
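The GIoU loss mentioned in the abstract has a standard definition: the usual IoU minus the fraction of the smallest enclosing box not covered by the union, subtracted from 1 to form a loss. A minimal sketch for axis-aligned boxes follows; the function name and the (x1, y1, x2, y2) box format are illustrative, not taken from the paper's code:

```python
def giou_loss(box_a, box_b):
    """GIoU loss for axis-aligned boxes given as (x1, y1, x2, y2).

    Returns 1 - GIoU, which lies in [0, 2]: 0 for identical boxes,
    approaching 2 for distant, non-overlapping boxes.
    """
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    iou = inter / union

    # Smallest axis-aligned box enclosing both inputs
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)

    giou = iou - (c_area - union) / c_area
    return 1.0 - giou
```

Unlike plain IoU, which gives a zero (useless) gradient for non-overlapping boxes, the enclosing-box term still penalizes distance between disjoint boxes, which is what "bridges the gap" between distance-based losses and area-based metrics.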
(This article belongs to the Special Issue Deep Learning in Remote Sensing Application)

Article
Making Low-Resolution Satellite Images Reborn: A Deep Learning Approach for Super-Resolution Building Extraction
Remote Sens. 2021, 13(15), 2872; https://doi.org/10.3390/rs13152872 - 22 Jul 2021
Abstract
Existing methods for building extraction from remotely sensed images rely strongly on aerial or satellite images with very high resolution, which are usually limited by spatiotemporal accessibility and cost. In contrast, relatively low-resolution images have better spatial and temporal availability but cannot directly contribute to fine- and/or high-resolution building extraction. In this paper, based on image super-resolution and segmentation techniques, we propose a two-stage framework (SRBuildingSeg) for super-resolution (SR) building extraction from relatively low-resolution remotely sensed images. SRBuildingSeg fully utilizes the inherent information of the given low-resolution images to achieve high-resolution building extraction. In contrast to existing building extraction methods, we first use an internal pairs generation module (IPG) to obtain an SR training dataset from the given low-resolution images and an edge-aware super-resolution module (EASR) to improve the perceptual features, followed by a dual-encoder building segmentation module (DES). Both qualitative and quantitative experimental results demonstrate that our proposed approach achieves high-resolution (e.g., 0.5 m) building extraction at 2×, 4× and 8× SR. Our approach outperforms eight other methods, improving the mean Intersection over Union (mIoU) of the extraction results by 9.38%, 8.20%, and 7.89% at SR factors of 2, 4, and 8, respectively. The results indicate that the edges and borders reconstructed in the super-resolved images play a pivotal role in subsequent building extraction and reveal the potential of the proposed approach for super-resolution building extraction.
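The internal-pairs idea, generating SR training data from the low-resolution input itself by downsampling it further so that the original image serves as the "high-resolution" target, can be sketched roughly as follows. The box-filter downsampling, patch sizes, and function names here are assumptions for illustration, not the paper's actual IPG module:

```python
import numpy as np

def make_internal_pairs(lr_image, scale=2, patch=32, stride=16):
    """Build (input, target) training pairs from a single LR image.

    The LR image is downsampled by `scale`; patches of the downsampled
    copy become inputs, and the corresponding patches of the original
    LR image become their higher-resolution targets.
    """
    h, w = lr_image.shape[:2]
    # Naive box-filter downsampling: average scale x scale blocks
    crop = lr_image[: h // scale * scale, : w // scale * scale]
    small = crop.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

    pairs = []
    for y in range(0, small.shape[0] - patch + 1, stride):
        for x in range(0, small.shape[1] - patch + 1, stride):
            lr_patch = small[y:y + patch, x:x + patch]
            hr_patch = lr_image[y * scale:(y + patch) * scale,
                                x * scale:(x + patch) * scale]
            pairs.append((lr_patch, hr_patch))
    return pairs
```

An SR network trained on such pairs learns the downsampling-to-original mapping at the image's own scale, which can then be applied to the original LR image to hallucinate a yet higher resolution, without any external high-resolution training data.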
(This article belongs to the Special Issue Deep Learning in Remote Sensing Application)
