
Deep Learning for Remote Sensing Data

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Environmental Remote Sensing".

Deadline for manuscript submissions: closed (31 October 2021) | Viewed by 39041

Special Issue Editors


Guest Editor
The University of Western Australia
Interests: deep learning; remote sensing; hyperspectral image analysis; classification; tracking; data fusion; video analysis; 3D point cloud analysis; LiDAR data analysis

Guest Editor
School of Information and Communication Technology, Griffith University, Nathan, QLD 4111, Australia
Interests: pattern recognition; computer vision and spectral imaging with their applications to remote sensing and environmental informatics

Guest Editor
University of Western Australia
Interests: deep learning; remote sensing; hyperspectral image analysis; adversarial attacks and defenses

Guest Editor
1. Helmholtz Institute Freiberg for Resource Technology, Helmholtz-Zentrum Dresden-Rossendorf (HZDR), D-09599 Freiberg, Germany
2. Institute of Advanced Research in Artificial Intelligence (IARAI), 1030 Wien, Austria
Interests: hyperspectral image interpretation; multisensor and multitemporal data fusion

Guest Editor
Faculty of Science, Engineering and Built Environment, School of Information Technology, Geelong Waurn Ponds Campus, Deakin University, Geelong, Australia
Interests: reflectance models; pattern recognition; machine learning; computer vision; segmentation; graph-matching; imaging spectroscopy; shape-from-X; environmental management

Guest Editor
School of Computer Science, The University of Adelaide, Adelaide, SA 5005, Australia
Interests: 3D sensing, processing and analysis; LiDAR data analysis; augmented reality; large-scale optimization

Special Issue Information

The past decade has seen a quantum leap in the accuracies of numerous signal and image processing tasks due to deep learning. Deep learning can model very complex nonlinear mathematical functions in a data-driven manner, which makes it an attractive technology for numerous tasks in the field of remote sensing. Moreover, the recent rise in the number of Earth-observing satellites has also resulted in large volumes of data, which makes the application of deep learning even more appealing for remote sensing data. The ever-increasing computational capacity of GPUs and efficient implementation of deep learning algorithms in public software libraries are additional factors that are currently shifting the focus of the remote sensing community towards deep learning as the main data analysis tool.

This Special Issue on “Deep Learning for Remote Sensing Data” aims to capture recent advances and trends in exploiting deep learning for complex remote sensing data analysis tasks. The Special Issue welcomes contributions on both theoretical advances in deep learning in the context of remote sensing and applications of this technology to remote sensing data. Topics of interest include, but are not limited to:

  • Deep learning for remote sensing image processing, e.g., pan-sharpening, super-resolution;
  • Remote sensing data analysis with deep learning;
  • Specialized network architectures and deep learning algorithms for remote sensing data;
  • Transfer learning and cross-domain learning;
  • Real and synthetic remote sensing data generation;
  • Multimodality data fusion with deep models;
  • Pixel-level and subpixel-level classification, e.g., hyperspectral unmixing, segmentation.

Keywords

  • remote sensing
  • deep learning
  • hyperspectral imaging
  • segmentation
  • pan-sharpening
  • hyperspectral unmixing

Published Papers (10 papers)


Research


22 pages, 32560 KiB  
Article
Decision Fusion of Deep Learning and Shallow Learning for Marine Oil Spill Detection
by Junfang Yang, Yi Ma, Yabin Hu, Zongchen Jiang, Jie Zhang, Jianhua Wan and Zhongwei Li
Remote Sens. 2022, 14(3), 666; https://doi.org/10.3390/rs14030666 - 30 Jan 2022
Cited by 22 | Viewed by 3347
Abstract
Marine oil spills are highly damaging emergencies and have become a hot topic in marine environmental monitoring research. Optical remote sensing is an important means of monitoring marine oil spills. Clouds, weather, and illumination limit the amount of available data, which often restricts feature characterization with a single classifier and therefore makes accurate monitoring of marine oil spills difficult. In this paper, we develop a decision fusion algorithm that integrates deep learning methods and shallow learning methods based on multi-scale features to improve oil spill detection accuracy when samples are limited. Based on the multi-scale features obtained after a wavelet transform, two deep learning methods and two classical shallow learning algorithms are used to extract oil slick information from hyperspectral oil spill images. A decision fusion algorithm based on fuzzy membership degree is introduced to fuse the multi-source oil spill information. The research shows that oil spill detection accuracy using the decision fusion algorithm is higher than that of the single detection algorithms. Notably, detection accuracy is affected by features at different scales, and the decision fusion algorithm under the first-level scale features further improves the accuracy of oil spill detection. The overall classification accuracy of the proposed method is 91.93%, which is 2.03%, 2.15%, 1.32%, and 0.43% higher than that of the SVM, DBN, 1D-CNN, and MRF-CNN algorithms, respectively.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Data)
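As a rough illustration of the fusion step described above, the sketch below averages per-classifier class-probability maps read as fuzzy membership degrees and takes a max-membership decision. The paper's exact membership function and weighting scheme are not reproduced; the `fuse_decisions` helper and its reliability weights are illustrative assumptions.

```python
import numpy as np

def fuse_decisions(prob_maps, weights=None):
    """Fuse per-classifier class-probability maps read as fuzzy membership
    degrees, with an optional reliability weight per classifier, then take
    a max-membership decision. Illustrative only; the paper's membership
    function and weighting are not reproduced.

    prob_maps: list of (H, W, n_classes) arrays, rows summing to 1.
    weights:   per-classifier reliabilities, e.g. validation accuracies.
    """
    stacked = np.stack(prob_maps, axis=0)       # (n_clf, H, W, C)
    w = np.ones(len(prob_maps)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    fused = np.tensordot(w, stacked, axes=1)    # weighted mean membership
    return fused.argmax(axis=-1)                # (H, W) label map

# Example: fuse four classifiers (e.g. SVM, DBN, 1D-CNN, MRF-CNN outputs).
rng = np.random.default_rng(0)
maps = [rng.dirichlet(np.ones(2), size=(64, 64)) for _ in range(4)]
labels = fuse_decisions(maps, weights=[0.90, 0.90, 0.91, 0.92])
```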

17 pages, 26327 KiB  
Article
Incorporating Aleatoric Uncertainties in Lake Ice Mapping Using RADARSAT-2 SAR Images and CNNs
by Nastaran Saberi, Katharine Andrea Scott and Claude Duguay
Remote Sens. 2022, 14(3), 644; https://doi.org/10.3390/rs14030644 - 29 Jan 2022
Cited by 6 | Viewed by 2066
Abstract
With the increasing availability of SAR imagery in recent years, more research is being conducted using deep learning (DL) for the classification of ice and open water; however, ice and open water classification using conventional DL methods such as convolutional neural networks (CNNs) is not yet accurate enough to replace manual analysis for operational ice chart mapping. Understanding the uncertainties associated with CNN model predictions can help to quantify errors and, therefore, guide efforts on potential enhancements using more advanced DL models and/or synergistic approaches. This paper evaluates an approach for estimating the aleatoric uncertainty (a measure of the noise inherent in the data) of CNN probabilities to map ice and open water, using a custom loss function applied to RADARSAT-2 HH and HV observations. The images were acquired during the 2014 ice season on Lake Erie and Lake Ontario, two of the five Laurentian Great Lakes of North America. Operational image analysis charts from the Canadian Ice Service (CIS), which are based on visual interpretation of SAR imagery, provide training and testing labels for the CNN model and are used to evaluate the accuracy of the model predictions. Bathymetry, a variable that affects the ice regime of lakes, was also incorporated during model training in supplementary experiments. Adding the aleatoric loss and bathymetry information improved the accuracy of mapping water and ice. Results are evaluated quantitatively (accuracy metrics) and qualitatively (visual comparisons). Ice and open water scores improved in some sections of the lakes with the aleatoric loss and bathymetry included. In Lake Erie, the ice score improved by ∼2 on average in the shallow near-shore zone as a result of better mapping of dark ice (low backscatter) in the western basin. For Lake Ontario, the open water score improved by ∼6 on average in the deepest profundal off-shore zone.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Data)
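The custom aleatoric loss is not spelled out in the abstract; a common formulation (Kendall and Gal, 2017) perturbs the logits with noise scaled by a predicted variance and averages the cross-entropy over Monte Carlo samples. The sketch below shows that generic formulation, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def aleatoric_ce_loss(logits, log_var, target, n_samples=20):
    """Monte Carlo aleatoric classification loss in the spirit of Kendall
    and Gal (2017): corrupt the logits with noise scaled by a predicted
    variance and average the cross-entropy. A generic sketch, not the
    authors' custom loss.

    logits:  (B, C, H, W) class scores
    log_var: (B, 1, H, W) predicted log-variance (the aleatoric head)
    target:  (B, H, W) integer labels, e.g. 0 = water, 1 = ice
    """
    sigma = torch.exp(0.5 * log_var)
    losses = [F.cross_entropy(logits + sigma * torch.randn_like(logits),
                              target)
              for _ in range(n_samples)]
    return torch.stack(losses).mean()
```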

20 pages, 6930 KiB  
Article
Transferable Deep Learning from Time Series of Landsat Data for National Land-Cover Mapping with Noisy Labels: A Case Study of China
by Xuemei Zhao, Danfeng Hong, Lianru Gao, Bing Zhang and Jocelyn Chanussot
Remote Sens. 2021, 13(21), 4194; https://doi.org/10.3390/rs13214194 - 20 Oct 2021
Cited by 5 | Viewed by 1669
Abstract
Large-scale land-cover classification using a supervised algorithm is a challenging task. Enormous manual effort has gone into processing and checking the production of national land-cover maps, which leads to complex pre- and post-processing and even to inaccurate mapping products derived from large-scale remote sensing images. Inspired by the recent success of deep learning techniques, in this study we provide a feasible automatic solution for improving the quality of national land-cover maps. However, the application of deep learning to national land-cover mapping remains limited because only small-scale noisy labels are available. To this end, a mutual transfer network (MTNet) was developed. MTNet is capable of learning better feature representations by mutually transferring pre-trained models from time series of data and fine-tuning on the current data. Such an interactive training strategy can effectively alleviate the effects of inaccurate or noisy labels and unbalanced sample distributions, thus yielding a relatively stable classification system. Extensive experiments on several representative regions were conducted to evaluate the classification results of the proposed method. Quantitative results showed that the proposed MTNet outperformed its baseline model by about 1%, and that accuracy can be improved by up to 6.45% compared with a model trained on the training set of another year. We also visualized the national classification maps generated by MTNet for two different time periods to qualitatively analyze the performance gain. It is concluded that the proposed MTNet provides an efficient method for large-scale land-cover mapping.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Data)
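The mutual transfer idea rests on warm-starting from a model pre-trained on another year's time series and then fine-tuning on the current data. The generic sketch below shows only that warm-start-and-fine-tune step; MTNet's full interactive training strategy is more involved, and the `transfer_and_finetune` helper is a simplification assumed for illustration.

```python
import torch

def transfer_and_finetune(model, prev_year_ckpt, loader, epochs=5, lr=1e-4):
    """Warm-start from a model pre-trained on another year's time series,
    then fine-tune on the current year's (noisy) labels. A simplification
    of the mutual transfer strategy, for illustration only.
    """
    model.load_state_dict(torch.load(prev_year_ckpt))  # transfer step
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # small LR: fine-tune
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```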

21 pages, 3424 KiB  
Article
Self-Attention-Based Conditional Variational Auto-Encoder Generative Adversarial Networks for Hyperspectral Classification
by Zhitao Chen, Lei Tong, Bin Qian, Jing Yu and Chuangbai Xiao
Remote Sens. 2021, 13(16), 3316; https://doi.org/10.3390/rs13163316 - 21 Aug 2021
Cited by 14 | Viewed by 6025
Abstract
Hyperspectral classification is an important technique for remote sensing image analysis. For current classification methods, limited training data affect the classification results. Recently, the Conditional Variational Autoencoder Generative Adversarial Network (CVAEGAN) has been used to generate virtual samples to augment the training data, which can improve classification performance. To improve performance further, building on CVAEGAN, we propose a Self-Attention-Based Conditional Variational Autoencoder Generative Adversarial Network (SACVAEGAN). Compared with CVAEGAN, we first use random latent vectors to obtain more varied virtual samples, which improves generalization. We then introduce a self-attention mechanism into the model to force the training process to pay more attention to global information, which yields better classification accuracy. Moreover, we address model stability by incorporating the WGAN-GP loss function into the model to reduce the probability of mode collapse. Experiments on three data sets and a comparison with state-of-the-art methods show that SACVAEGAN offers clear accuracy advantages over state-of-the-art HSI classification methods.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Data)
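The WGAN-GP loss mentioned in the abstract adds a gradient penalty on interpolates between real and generated samples (Gulrajani et al., 2017). Below is a standard PyTorch sketch of that penalty term, not the authors' implementation.

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    """WGAN-GP penalty (Gulrajani et al., 2017): push the critic's gradient
    norm toward 1 on random interpolates of real and generated samples.
    Standard sketch, not the paper's code.
    """
    b = real.size(0)
    eps = torch.rand(b, *([1] * (real.dim() - 1)), device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(outputs=critic(x_hat).sum(), inputs=x_hat,
                                create_graph=True)[0]
    norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((norm - 1.0) ** 2).mean()
```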

21 pages, 14384 KiB  
Article
Remote Sensing Image Scene Classification via Label Augmentation and Intra-Class Constraint
by Hao Xie, Yushi Chen and Pedram Ghamisi
Remote Sens. 2021, 13(13), 2566; https://doi.org/10.3390/rs13132566 - 30 Jun 2021
Cited by 15 | Viewed by 3166
Abstract
In recent years, many convolutional neural network (CNN)-based methods have been proposed to address scene classification tasks for remote sensing (RS) images. Since the number of training samples in RS datasets is generally small, data augmentation is often used to expand the training set. It is, however, problematic that conventional data augmentation methods change the content of an image while keeping its label fixed. In this study, label augmentation (LA) is presented to fully utilize the training set by assigning a joint label to each generated image, which considers the label and the data augmentation at the same time. Moreover, the outputs of images obtained by different data augmentations are aggregated during testing. The augmented samples, however, increase the intra-class diversity of the training set, which poses a challenge for the subsequent classification process. To address this issue and further improve classification accuracy, the Kullback–Leibler (KL) divergence is used to constrain the output distributions of two training samples with the same scene category to be consistent. Extensive experiments were conducted on the widely used UCM, AID and NWPU datasets. The proposed method surpasses other state-of-the-art methods in terms of classification accuracy; for example, on the challenging NWPU dataset, a competitive overall accuracy of 91.05% is obtained with a 10% training ratio.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Data)
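The KL-based intra-class constraint can be sketched directly: given logits for two augmented samples of the same scene category, penalize the divergence between their output distributions. The symmetric form used below is an assumption; the paper may use a one-sided KL.

```python
import torch
import torch.nn.functional as F

def kl_consistency(logits_a, logits_b):
    """Pull together the output distributions of two samples from the same
    scene category. The symmetric form is an assumption; a one-sided KL
    works the same way.
    """
    p = F.log_softmax(logits_a, dim=1)
    q = F.log_softmax(logits_b, dim=1)
    kl_pq = F.kl_div(q, p.exp(), reduction="batchmean")  # KL(p || q)
    kl_qp = F.kl_div(p, q.exp(), reduction="batchmean")  # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)
```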

19 pages, 4472 KiB  
Article
Unsupervised Haze Removal for High-Resolution Optical Remote-Sensing Images Based on Improved Generative Adversarial Networks
by Anna Hu, Zhong Xie, Yongyang Xu, Mingyu Xie, Liang Wu and Qinjun Qiu
Remote Sens. 2020, 12(24), 4162; https://doi.org/10.3390/rs12244162 - 19 Dec 2020
Cited by 24 | Viewed by 3552
Abstract
A major limitation of remote-sensing images is bad weather, such as haze, which significantly reduces the accuracy of satellite image interpretation. To solve this problem, this paper proposes a novel unsupervised method to remove haze from high-resolution optical remote-sensing images. The proposed method, based on cycle generative adversarial networks, is called the edge-sharpening cycle-consistent adversarial network (ES-CCGAN). Most importantly, unlike existing methods, this approach does not require prior information: training is unsupervised, which eases the burden of preparing the training data set. To enhance the ability to extract ground-object information, the generative network replaces a residual neural network (ResNet) with a dense convolutional network (DenseNet). An edge-sharpening loss function is designed to recover clear ground-object edges and obtain more detailed information from hazy images. For the high-frequency information extraction model, this study re-trained the Visual Geometry Group (VGG) network on remote-sensing images. Experimental results reveal that the proposed method can successfully recover different kinds of scenes from hazy images with excellent color consistency. Moreover, its ability to recover clear edges and rich texture information makes it superior to existing methods.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Data)
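An edge-sharpening loss of the kind the abstract describes can be built by comparing edge maps of the dehazed output against a reference. The sketch below uses a Sobel operator and an L1 penalty; the paper's exact edge operator and weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def edge_sharpening_loss(pred, ref):
    """Sobel-based edge loss: an L1 penalty between edge maps of the dehazed
    output and a reference image. A sketch of the edge-sharpening idea only;
    the paper's exact operator and weighting are not reproduced.

    pred, ref: (B, C, H, W) images.
    """
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t().contiguous()
    c = pred.size(1)
    kx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1).to(pred.device)
    ky = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1).to(pred.device)

    def sobel(img, k):
        return F.conv2d(img, k, padding=1, groups=c)  # depthwise gradient

    grad_p = torch.sqrt(sobel(pred, kx) ** 2 + sobel(pred, ky) ** 2 + 1e-6)
    grad_r = torch.sqrt(sobel(ref, kx) ** 2 + sobel(ref, ky) ** 2 + 1e-6)
    return F.l1_loss(grad_p, grad_r)
```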

18 pages, 13716 KiB  
Article
F3-Net: Feature Fusion and Filtration Network for Object Detection in Optical Remote Sensing Images
by Xinhai Ye, Fengchao Xiong, Jianfeng Lu, Jun Zhou and Yuntao Qian
Remote Sens. 2020, 12(24), 4027; https://doi.org/10.3390/rs12244027 - 09 Dec 2020
Cited by 19 | Viewed by 6112
Abstract
Object detection in remote sensing (RS) images is a challenging task due to small object sizes, varied appearances, and complex backgrounds. Although many methods have been developed to address this problem, few can fully exploit multilevel context information while handling cluttered backgrounds in RS images. To this end, we propose a feature fusion and filtration network (F3-Net) to improve object detection in RS images, which has a higher capacity for combining context information at multiple scales while suppressing interference from the background. Specifically, F3-Net leverages a feature adaptation block with a residual structure to adjust the backbone network in an end-to-end manner, better accounting for the characteristics of RS images. The network then learns the context information of objects at multiple scales by hierarchically fusing the feature maps from different layers. To suppress interference from cluttered backgrounds, the fused features are projected into a low-dimensional subspace by an additional feature filtration module. As a result, more relevant and accurate context information is extracted for further detection. Extensive experiments on the DOTA, NWPU VHR-10, and UCAS-AOD datasets demonstrate that the proposed detector achieves very promising detection performance.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Data)
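The two core ideas, hierarchical fusion of multi-level features and projection into a low-dimensional subspace for filtration, can be sketched in a few lines. The module below is a toy stand-in with assumed channel sizes, not F3-Net's actual blocks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuseAndFilter(nn.Module):
    """Toy stand-in for the two steps named in the abstract: hierarchically
    fuse coarse-to-fine backbone features, then project the result into a
    low-dimensional subspace (the 'filtration'). Channel sizes are assumed.
    """
    def __init__(self, in_channels=(1024, 512, 256), mid=256, low=64):
        super().__init__()
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, mid, kernel_size=1) for c in in_channels)
        self.filtrate = nn.Conv2d(mid, low, kernel_size=1)  # low-dim projection

    def forward(self, feats):
        # feats: list of feature maps ordered coarse (deep) to fine (shallow).
        fused = self.lateral[0](feats[0])
        for lat, f in zip(self.lateral[1:], feats[1:]):
            fused = F.interpolate(fused, size=f.shape[-2:], mode="nearest")
            fused = fused + lat(f)      # hierarchical fusion across levels
        return self.filtrate(fused)     # background-suppressing projection

feats = [torch.randn(1, 1024, 16, 16), torch.randn(1, 512, 32, 32),
         torch.randn(1, 256, 64, 64)]
out = FuseAndFilter()(feats)            # -> (1, 64, 64, 64)
```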

18 pages, 4585 KiB  
Article
High-Rankness Regularized Semi-Supervised Deep Metric Learning for Remote Sensing Imagery
by Jian Kang, Rubén Fernández-Beltrán, Zhen Ye, Xiaohua Tong, Pedram Ghamisi and Antonio Plaza
Remote Sens. 2020, 12(16), 2603; https://doi.org/10.3390/rs12162603 - 12 Aug 2020
Cited by 9 | Viewed by 5310
Abstract
Deep metric learning has recently received special attention in the field of remote sensing (RS) scene characterization, owing to its prominent capabilities for modeling distances among RS images based on their semantic information. Most existing deep metric learning methods exploit pairwise and triplet losses to learn feature embeddings that preserve semantic similarity, which requires constructing image pairs and triplets from supervised information (e.g., class labels). However, generating such semantic annotations becomes completely unaffordable in large-scale RS archives, which may eventually constrain the availability of sufficient training data for such models. To address this issue, we reformulate the deep metric learning scheme in a semi-supervised manner to effectively characterize RS scenes. Specifically, we aim to learn metric spaces by utilizing the supervised information from a small number of labeled RS images and exploring the potential decision boundaries for massive sets of unlabeled aerial scenes. To reach this goal, a joint loss function, composed of a normalized softmax loss with margin and a high-rankness regularization term, is proposed, together with its corresponding optimization algorithm. The conducted experiments (covering different state-of-the-art methods and two benchmark RS archives) validate the effectiveness of the proposed approach for RS image classification, clustering, and retrieval tasks. The code for this paper is publicly available.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Data)
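The labeled half of the joint objective, a normalized softmax loss with margin, is a known construction and can be sketched as below; the high-rankness regularizer on unlabeled batches is omitted, and the scale and margin values are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormSoftmaxMargin(nn.Module):
    """Normalized softmax loss with an additive margin on the target class,
    the labeled half of the joint objective; the high-rankness regularizer
    is omitted, and scale/margin values are illustrative.
    """
    def __init__(self, dim, n_classes, scale=16.0, margin=0.1):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(n_classes, dim))
        self.scale, self.margin = scale, margin

    def forward(self, emb, labels):
        # Cosine similarities between unit-norm embeddings and class proxies.
        logits = F.linear(F.normalize(emb), F.normalize(self.proxies))
        logits = logits - self.margin * F.one_hot(labels, logits.size(1))
        return F.cross_entropy(self.scale * logits, labels)

loss_fn = NormSoftmaxMargin(dim=128, n_classes=45)
loss = loss_fn(torch.randn(8, 128), torch.randint(0, 45, (8,)))
```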

Other


15 pages, 4180 KiB  
Technical Note
Integrating EfficientNet into an HAFNet Structure for Building Mapping in High-Resolution Optical Earth Observation Data
by Luca Ferrari, Fabio Dell’Acqua, Peng Zhang and Peijun Du
Remote Sens. 2021, 13(21), 4361; https://doi.org/10.3390/rs13214361 - 29 Oct 2021
Cited by 8 | Viewed by 2011
Abstract
Automated extraction of buildings from Earth observation (EO) data is important for various applications, including map updating, risk assessment, urban planning, and policy-making. Combining data from different sensors, such as high-resolution multispectral images (HRI) and light detection and ranging (LiDAR) data, has shown great potential in building extraction. Deep learning (DL) is increasingly used in multi-modal data fusion and urban object extraction. However, DL-based multi-modal fusion networks may under-perform due to insufficient learning of “joint features” from multiple sources and oversimplified approaches to fusing multi-modal features. Recently, a hybrid attention-aware fusion network (HAFNet) was proposed for building extraction from a dataset of co-located Very-High-Resolution (VHR) optical images and LiDAR data. The system reported good performance thanks to the attention mechanism adapting to the information content of the three streams, but it suffered from model over-parametrization, which inevitably leads to long training times and a heavy computational load. In this paper, the authors propose restructuring the scheme by replacing the VGG-16-like encoders with the recently proposed EfficientNet, whose advantages directly counteract the issues found with the HAFNet scheme. The novel configuration was tested on multiple benchmark datasets, reporting substantial improvements in processing time as well as in accuracy. The new scheme, called HAFNetE (HAFNet with EfficientNet integration), indeed achieves good results with fewer parameters, translating into better computational efficiency. Based on these findings, we conclude that, given the current advancements in single-thread schemes, the classical multi-thread HAFNet scheme can be effectively transformed into the HAFNetE scheme by replacing VGG-16 with EfficientNet blocks in each thread. The remarkable reduction in computational requirements moves the system one step closer to on-board implementation in a possible future “urban mapping” satellite constellation.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Data)
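Swapping a VGG-16-like encoder for an EfficientNet trunk is straightforward with a modern library. The sketch below uses torchvision's efficientnet_b0 as a stand-in; the paper's exact EfficientNet variant and feature taps are assumptions.

```python
import torch
import torchvision

def efficientnet_encoder(pretrained=True):
    """Drop-in convolutional trunk built from torchvision's efficientnet_b0,
    standing in for the VGG-16-like encoders of HAFNet; the exact variant
    and feature taps used by HAFNetE are assumptions.
    """
    weights = (torchvision.models.EfficientNet_B0_Weights.DEFAULT
               if pretrained else None)
    net = torchvision.models.efficientnet_b0(weights=weights)
    return net.features  # keep the feature extractor, drop the classifier

enc = efficientnet_encoder()
feats = enc(torch.randn(1, 3, 224, 224))  # -> (1, 1280, 7, 7)
```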

15 pages, 4876 KiB  
Technical Note
PlaNet: A Neural Network for Detecting Transverse Aeolian Ridges on Mars
by Timothy Nagle-McNaughton, Timothy McClanahan and Louis Scuderi
Remote Sens. 2020, 12(21), 3607; https://doi.org/10.3390/rs12213607 - 03 Nov 2020
Cited by 19 | Viewed by 3459
Abstract
Transverse aeolian ridges (TARs) are unusual bedforms on the surface of Mars. TARs are common but sparsely distributed on Mars; TAR fields are small, rarely continuous, and scattered, making manual mapping impractical. There have been many efforts to automatically classify the Martian surface, but none has successfully located TARs explicitly. Here, we present a simple adaptation of the off-the-shelf neural network RetinaNet designed to identify the presence of TARs at a 50-m scale. Once trained, the network was able to identify TARs with high precision (92.9%). Our model also shows promising results when applied to other surficial features such as ripples and polygonal terrain. In the future, we hope to apply this model more broadly and generate a large database of TAR distributions on Mars.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Data)
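Adapting an off-the-shelf RetinaNet to a new single-class task is a short exercise with torchvision; the sketch below stands in for PlaNet's configuration, whose backbone and training details are assumptions here.

```python
import torch
import torchvision

def tar_detector(num_classes=2):
    """Off-the-shelf RetinaNet re-headed for a single-object task
    (background + TAR); a stand-in configuration, not PlaNet's exact setup.
    """
    return torchvision.models.detection.retinanet_resnet50_fpn(
        weights=None,  # detection heads are trained from scratch
        weights_backbone=torchvision.models.ResNet50_Weights.DEFAULT,
        num_classes=num_classes)

model = tar_detector().eval()
with torch.no_grad():
    preds = model([torch.rand(3, 512, 512)])  # [{'boxes', 'scores', 'labels'}]
```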
