

Convolutional Neural Networks Applications in Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 June 2019) | Viewed by 135460

Special Issue Editors


Dr. Davide Cozzolino
Guest Editor
University of Naples Federico II, Naples, Italy
Interests: image processing; remote sensing; image forensics; deep learning

Dr. Raffaele Gaetano
Guest Editor
CIRAD, UMR TETIS, Maison de la Télédétection, Montpellier, France
Interests: image processing; remote sensing; multi-sensor data fusion; machine and deep learning

Dr. Francescopaolo Sica
Guest Editor
DLR, German Aerospace Center, Münchner Str. 20, BY 82234 Weßling, Germany
Interests: image and signal processing; machine and deep learning; synthetic aperture radar (SAR) and SAR interferometry (InSAR); data fusion for land applications

Special Issue Information

Dear Colleagues,

In the last few years, convolutional neural networks (CNNs) have been applied in a large set of fields in which image processing is fundamental, from multimedia to medicine and robotics. Along with the rise of deep learning (DL), CNNs have emerged as a particularly powerful tool, both providing outstanding performance on conventional tasks and enabling a wide variety of unprecedented applications in computer vision.

Meanwhile, the remote sensing (RS) community's interest in innovative image processing approaches has grown strongly, directed in particular towards CNNs and the panoply of DL architectures proposed in the computer vision literature. With the recent exponential growth in remote sensing systems offering a large variety of sensors (optical, multi- and hyperspectral, synthetic aperture radar, temperature and microwave radiometers, altimeters, etc.), CNN-based approaches can profit extensively from this abundance of data.

The current open challenge is hence to properly exploit CNN tools to address the needs and constraints of the wide variety of remote sensing applications. Indeed, both the nature of the observed phenomena and the quality and availability of data are very diverse, depending on the scientific domain (biosphere, geosphere, cryosphere and hydrosphere), the geographical areas of interest and the specific application tasks.

This Special Issue aims to foster the application of convolutional neural networks to remote sensing problems. Authors are encouraged to submit original papers of both a theoretical and application-based nature.

Topics of interest include, but are not limited to, the following:

  • Convolutional neural networks for RS image understanding (e.g., land use/land cover classification, image retrieval, change detection, semantic labeling);
  • Convolutional neural networks for RS image restoration (e.g., enhancement, denoising, estimation problems);
  • Strategies of data fusion based on convolutional neural networks for RS applications (e.g., multi-sensor data fusion, multi-modal data fusion, pan-sharpening);
  • Strategies of transfer learning based on convolutional neural networks for RS applications (e.g., cross-sensor transfer learning, cross-modality transfer learning, guided despeckling);
  • Analysis and processing of RS multi-temporal series through convolutional neural networks;
  • Large-scale RS datasets for training and evaluating convolutional neural networks.
Dr. Davide Cozzolino
Dr. Raffaele Gaetano
Dr. Francescopaolo Sica
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image processing
  • remote sensing
  • deep learning
  • neural network
  • convolutional neural networks
  • data fusion
  • transfer learning

Published Papers (20 papers)


Research

17 pages, 26591 KiB  
Article
Enhanced Feature Representation in Detection for Optical Remote Sensing Images
by Kun Fu, Zhuo Chen, Yue Zhang and Xian Sun
Remote Sens. 2019, 11(18), 2095; https://doi.org/10.3390/rs11182095 - 08 Sep 2019
Cited by 25 | Viewed by 3825
Abstract
In recent years, deep learning has led to a remarkable breakthrough in object detection in remote sensing images. In practice, two-stage detectors perform well regarding detection accuracy but are slow. On the other hand, one-stage detectors integrate the detection pipeline of two-stage detectors to simplify the detection process and are faster, but with lower detection accuracy. Enhancing the capability of feature representation may be a way to improve the detection accuracy of one-stage detectors. Toward this goal, this paper proposes a novel one-stage detector with an enhanced capability of feature representation. The enhancement benefits from two proposed structures: a dual top-down module and a dense-connected inception module. The former efficiently utilizes multi-scale features from multiple layers of the backbone network; the latter both widens and deepens the network to enhance feature representation at limited extra computational cost. To evaluate the effectiveness of the proposed structures, we conducted experiments on horizontal bounding box detection on the challenging DOTA dataset and obtained 73.49% mean Average Precision (mAP), achieving state-of-the-art performance. Furthermore, our method runs significantly faster than the best public two-stage detector on the DOTA dataset.
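
The dense-connected inception idea lends itself to a compact sketch: parallel convolution branches with different receptive fields whose outputs are concatenated with the block input, so later layers see every earlier feature map. This is only a plausible reading of the abstract, not the authors' implementation; the branch layout and channel widths below are assumptions.

```python
import torch
import torch.nn as nn

class DenseInceptionBlock(nn.Module):
    """Hypothetical dense-connected inception block (illustrative only)."""

    def __init__(self, in_ch, branch_ch=32):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, branch_ch, kernel_size=1),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=1),
        )
        self.b5 = nn.Sequential(
            nn.Conv2d(in_ch, branch_ch, kernel_size=1),
            # A dilated 3x3 widens the receptive field at low cost.
            nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=2, dilation=2),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Dense connectivity: the input is concatenated with all branch
        # outputs, so later layers see every earlier feature map.
        return self.relu(torch.cat([x, self.b1(x), self.b3(x), self.b5(x)], dim=1))

y = DenseInceptionBlock(64)(torch.randn(1, 64, 128, 128))
print(y.shape)  # (1, 64 + 3 * 32, 128, 128)
```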

23 pages, 8353 KiB  
Article
Towards Automated Ship Detection and Category Recognition from High-Resolution Aerial Images
by Yingchao Feng, Wenhui Diao, Xian Sun, Menglong Yan and Xin Gao
Remote Sens. 2019, 11(16), 1901; https://doi.org/10.3390/rs11161901 - 14 Aug 2019
Cited by 36 | Viewed by 4175
Abstract
Ship category classification in high-resolution aerial images has attracted great interest in applications such as maritime security, naval construction, and port management. However, previous methods have been limited mainly by the following issues: (i) existing ship category classification methods classify accurately-cropped image patches, which is unsatisfactory in practical applications because the location of a ship within a patch obtained by object detection varies greatly; (ii) factors such as target scale variation and class imbalance have a great influence on the performance of ship category classification. To address these issues, we propose a novel ship detection and category classification framework in which classification builds on accurate localization. The detection network generates more precise rotated bounding boxes in large-scale aerial images by introducing a novel Sequence Local Context (SLC) module. Besides, three different ship category classification networks are proposed to eliminate the effect of scale variation, and a Spatial Transform Crop (STC) operation is used to obtain aligned image patches. To handle both insufficient samples and class imbalance, a Proposals Simulation Generator (PSG) is introduced. The state-of-the-art performance of our framework is demonstrated by experiments on the 19-class ship dataset HRSC2016 and our multiclass warship dataset.

21 pages, 11702 KiB  
Article
Alternately Updated Spectral–Spatial Convolution Network for the Classification of Hyperspectral Images
by Wenju Wang, Shuguang Dou and Sen Wang
Remote Sens. 2019, 11(15), 1794; https://doi.org/10.3390/rs11151794 - 31 Jul 2019
Cited by 20 | Viewed by 3535
Abstract
The connection structure in the convolutional layers of most deep learning-based algorithms used for the classification of hyperspectral images (HSIs) has typically been in the forward direction. In this study, an end-to-end alternately updated spectral–spatial convolutional network (AUSSC) with a recurrent feedback structure is used to learn refined spectral and spatial features for HSI classification. The proposed AUSSC includes alternately updated blocks in which each layer serves as both an input and an output for the other layers. The AUSSC can refine spectral and spatial features many times under fixed parameters. A center loss function is introduced as an auxiliary objective to improve the discrimination of features acquired by the model. Additionally, the AUSSC utilizes smaller convolutional kernels than other convolutional neural network (CNN)-based methods to reduce the number of parameters and alleviate overfitting. The proposed method was implemented on four HSI data sets: Indian Pines, Kennedy Space Center, Salinas Scene, and Houston. Experimental results demonstrate that the proposed AUSSC outperforms state-of-the-art deep learning-based methods in HSI classification accuracy with a small number of training samples.
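
Center loss, the auxiliary objective mentioned above, is a published technique that can be sketched in a few lines: each class keeps a learnable feature centre, and embeddings are pulled toward the centre of their class. The feature dimension, class count and 0.01 weight below are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Auxiliary center loss: pull features toward their class centre."""

    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # Mean squared distance between each feature and its class centre.
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

features = torch.randn(8, 128)   # embeddings from the network
logits = torch.randn(8, 16)      # classifier outputs (16 classes, say)
labels = torch.randint(0, 16, (8,))
loss = nn.CrossEntropyLoss()(logits, labels) \
       + 0.01 * CenterLoss(16, 128)(features, labels)
```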

22 pages, 7072 KiB  
Article
Deep Feature Fusion with Integration of Residual Connection and Attention Model for Classification of VHR Remote Sensing Images
by Jicheng Wang, Li Shen, Wenfan Qiao, Yanshuai Dai and Zhilin Li
Remote Sens. 2019, 11(13), 1617; https://doi.org/10.3390/rs11131617 - 08 Jul 2019
Cited by 28 | Viewed by 5448
Abstract
The classification of very-high-resolution (VHR) remote sensing images is essential in many applications. However, high intraclass and low interclass variations in these kinds of images pose serious challenges. Fully convolutional network (FCN) models, which benefit from a powerful feature learning ability, have shown impressive performance and great potential. Nevertheless, only coarse-resolution classification results can be obtained from the original FCN method. Deep feature fusion is often employed to improve the resolution of outputs, but existing fusion strategies cannot properly utilize low-level features or account for the importance of features at different scales. This paper proposes a novel, end-to-end, fully convolutional network that integrates a multiconnection ResNet model and a class-specific attention model into a unified framework to overcome these problems. The former fuses multilevel deep features without introducing redundant information from low-level features; the latter learns the contributions of the different features of each geo-object at each scale. Extensive experiments on two open datasets indicate that the proposed method achieves class-specific, scale-adaptive classification results and outperforms other state-of-the-art methods. The results were submitted to the International Society for Photogrammetry and Remote Sensing (ISPRS) online contest for comparison with more than 50 other methods; the proposed method (ID: SWJ_2) ranks first in overall accuracy, even though the additional digital surface model (DSM) data offered by ISPRS were not used and no postprocessing was applied.

20 pages, 2905 KiB  
Article
Deep Learning for SAR Image Despeckling
by Francesco Lattari, Borja Gonzalez Leon, Francesco Asaro, Alessio Rucci, Claudio Prati and Matteo Matteucci
Remote Sens. 2019, 11(13), 1532; https://doi.org/10.3390/rs11131532 - 28 Jun 2019
Cited by 95 | Viewed by 10438
Abstract
Speckle filtering is an unavoidable step in applications that involve amplitude or intensity images acquired by coherent systems, such as Synthetic Aperture Radar (SAR). Speckle is a target-dependent phenomenon; thus, its estimation and reduction require identifying specific properties of the image features. Speckle filtering is one of the most prominent topics in the SAR image processing research community, which first tackled the issue with filters based on handcrafted features. Even though classical algorithms have slowly and progressively achieved better performance, the more recent convolutional neural networks (CNNs) have proven to be a promising alternative in light of their outstanding ability to efficiently learn task-specific filters. Currently, only simplistic CNN architectures have been exploited for the speckle filtering task. While these architectures outperform classical algorithms, they still show some weakness in texture preservation. In this work, a deep encoder–decoder CNN architecture, focused on the specific context of SAR images, is proposed to enhance speckle filtering capabilities alongside texture preservation. This objective is addressed by adapting the U-Net CNN, which has been modified and optimized accordingly. The architecture allows for the extraction of features at different scales, and it can produce detailed reconstructions through its system of skip connections. A two-phase learning strategy is adopted: the model is first pre-trained on a synthetic dataset and then adapted to the real SAR image domain through a fast fine-tuning procedure. During the fine-tuning phase, a modified version of total variation (TV) regularization is introduced to improve network performance on real SAR data. Finally, experiments were carried out on simulated and real data to compare the performance of the proposed method with state-of-the-art methodologies.
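
The total variation term used during fine-tuning can be illustrated with a plain anisotropic TV penalty added to a reconstruction loss. The paper uses a modified TV formulation, so treat the exact form and the weight below as assumptions.

```python
import torch
import torch.nn.functional as F

def tv_loss(img):
    # Sum of absolute differences between neighbouring pixels (anisotropic TV).
    dh = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    dv = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    return dh + dv

pred = torch.rand(4, 1, 256, 256)   # despeckled network output
clean = torch.rand(4, 1, 256, 256)  # reference (available in synthetic pre-training)
loss = F.mse_loss(pred, clean) + 2e-4 * tv_loss(pred)  # weight is illustrative
```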

20 pages, 6116 KiB  
Article
Automatic Extraction of Gravity Waves from All-Sky Airglow Image Based on Machine Learning
by Chang Lai, Jiyao Xu, Jia Yue, Wei Yuan, Xiao Liu, Wei Li and Qinzeng Li
Remote Sens. 2019, 11(13), 1516; https://doi.org/10.3390/rs11131516 - 27 Jun 2019
Cited by 10 | Viewed by 4036
Abstract
With the development of ground-based all-sky airglow imager (ASAI) technology, a large amount of airglow image data needs to be processed for studying atmospheric gravity waves. We developed a program to automatically extract gravity wave patterns from ASAI images. The auto-extraction program includes a classification model based on a convolutional neural network (CNN) and an object detection model based on a faster region-based convolutional neural network (Faster R-CNN). The classification model selects images of clear nights from the raw ASAI images, and the object detection model locates the regions of wave patterns. The wave parameters (horizontal wavelength, period, direction, etc.) can then be calculated within the located regions. In addition to auto-extraction, we applied a wavelength check to remove the interference of wave-like mist near the imager. To validate the auto-extraction program, a case study was conducted on images captured in 2014 at Linqu (36.2°N, 118.7°E), China. Compared to a manual check, the auto-extraction recognized fewer wave-containing images (28.9% of the manual result) due to its strict threshold, but the result shows the same seasonal variation as the references. The auto-extraction program applies a uniform criterion to avoid the accidental errors of manually identifying gravity waves and offers a reliable method for processing large volumes of ASAI images to efficiently study the climatology of atmospheric gravity waves.

23 pages, 2284 KiB  
Article
Unsupervised Domain Adaptation Using Generative Adversarial Networks for Semantic Segmentation of Aerial Images
by Bilel Benjdira, Yakoub Bazi, Anis Koubaa and Kais Ouni
Remote Sens. 2019, 11(11), 1369; https://doi.org/10.3390/rs11111369 - 07 Jun 2019
Cited by 161 | Viewed by 10021
Abstract
Segmenting aerial images is of great potential in surveillance and scene understanding of urban areas. It provides a means for the automatic reporting of different events that happen in inhabited areas, which remarkably promotes public safety and traffic management applications. Since the wide adoption of convolutional neural network methods, the accuracy of semantic segmentation algorithms can easily surpass 80% when a robust dataset is provided. Despite this success, deploying a pretrained segmentation model to survey a new city that is not included in the training set significantly decreases accuracy. This is due to the domain shift between the source dataset on which the model is trained and the new target domain of the new city's images. In this paper, we address this issue and consider the challenge of domain adaptation in the semantic segmentation of aerial images. We designed an algorithm that reduces the impact of domain shift using generative adversarial networks (GANs). In the experiments, we tested the proposed methodology on the International Society for Photogrammetry and Remote Sensing (ISPRS) semantic segmentation dataset and found that our method improves overall accuracy from 35% to 52% when passing from the Potsdam domain (considered the source domain) to the Vaihingen domain (considered the target domain). In addition, the method efficiently recovers classes inverted by sensor variation, improving their average segmentation accuracy from 14% to 61%.
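
As a rough illustration of GAN-based domain-shift reduction, the sketch below pairs a small translator network with a patch discriminator that scores domain membership. The actual architectures and loss weighting of the paper are not reproduced; everything here is an assumption for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Translator: pushes source-domain images toward the target-domain style.
G = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 3, 3, padding=1))
# Patch discriminator: scores whether local patches look like the target domain.
D = nn.Sequential(nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(32, 1, 4, stride=2, padding=1))
bce = nn.BCEWithLogitsLoss()

src, tgt = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
fake_tgt = G(src)
# Discriminator: real target patches -> 1, translated patches -> 0.
d_real, d_fake = D(tgt), D(fake_tgt.detach())
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
# Generator: fool the discriminator while staying close to the source content.
g_fake = D(fake_tgt)
g_loss = bce(g_fake, torch.ones_like(g_fake)) + 0.1 * F.l1_loss(fake_tgt, src)
```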

22 pages, 8849 KiB  
Article
A Convolutional Neural Network with Fletcher–Reeves Algorithm for Hyperspectral Image Classification
by Chen Chen, Yi Ma and Guangbo Ren
Remote Sens. 2019, 11(11), 1325; https://doi.org/10.3390/rs11111325 - 02 Jun 2019
Cited by 23 | Viewed by 4656
Abstract
Deep learning models, especially convolutional neural networks (CNNs), are very active in hyperspectral remote sensing image classification. In order to better apply the CNN model to hyperspectral classification, we propose a CNN model based on the Fletcher–Reeves algorithm (F–R CNN), which uses the Fletcher–Reeves (F–R) algorithm for gradient updating to optimize the convergence of the model during classification. Given that few training samples are available in practical applications, we further propose a method of increasing the number of samples by adding perturbed samples, which also allows testing the anti-interference ability of classification methods. Furthermore, we analyze the anti-interference and convergence performance of the proposed model with respect to different training sample sets, different numbers of batch training samples, and iteration time. In this paper, we describe the experimental process in detail and comprehensively evaluate the proposed model on the classification of CHRIS hyperspectral imagery covering coastal wetlands, as well as on a commonly used hyperspectral benchmark dataset. The experimental results show that the accuracy of both models improves after increasing the training samples and adjusting the number of batch training samples. When the number of batch training samples is increased to 350, the classification accuracy of the proposed method remains above 80.7%, which is 2.9% higher than that of the traditional CNN, while requiring less computation time at the same accuracy level. It can be concluded that the proposed method is robust to interference and outperforms the traditional CNN in terms of batch computing adaptability and convergence speed.
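
The Fletcher–Reeves update itself is standard conjugate-gradient machinery: β_k = ‖g_k‖² / ‖g_{k−1}‖², with search direction d_k = −g_k + β_k·d_{k−1}. A minimal sketch of such a weight update follows; the fixed learning rate (conjugate-gradient methods normally use a line search) and the parameter handling are illustrative assumptions, not the paper's implementation.

```python
import torch

def fletcher_reeves_step(params, prev_grads, prev_dirs, lr=1e-3):
    """One Fletcher-Reeves conjugate-gradient update over a list of parameters."""
    new_grads, new_dirs = [], []
    for p, g_prev, d_prev in zip(params, prev_grads, prev_dirs):
        g = p.grad.detach().clone()
        # beta_k = ||g_k||^2 / ||g_{k-1}||^2  (Fletcher-Reeves)
        beta = g.pow(2).sum() / g_prev.pow(2).sum().clamp_min(1e-12)
        d = -g + beta * d_prev    # conjugate search direction
        p.data.add_(lr * d)       # fixed step in lieu of a line search
        new_grads.append(g)
        new_dirs.append(d)
    return new_grads, new_dirs
```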

16 pages, 4906 KiB  
Article
SAR ATR of Ground Vehicles Based on ESENet
by Li Wang, Xueru Bai and Feng Zhou
Remote Sens. 2019, 11(11), 1316; https://doi.org/10.3390/rs11111316 - 01 Jun 2019
Cited by 53 | Viewed by 4190
Abstract
In recent studies, synthetic aperture radar (SAR) automatic target recognition (ATR) algorithms based on convolutional neural networks (CNNs) have achieved high recognition rates on the moving and stationary target acquisition and recognition (MSTAR) dataset. However, in a SAR ATR task, feature maps with little information that are automatically learned by the CNN can disturb the classifier. We design a new enhanced squeeze and excitation (enhanced-SE) module to solve this problem, and then propose a new SAR ATR network, the enhanced squeeze and excitation network (ESENet). Compared to existing CNN structures designed for SAR ATR, the ESENet can extract more effective features from SAR images and obtain better generalization performance. On the MSTAR dataset containing pure targets, the proposed method achieves a recognition rate of 97.32%, exceeding the available CNN-based SAR ATR algorithms. Additionally, it has shown robustness to large depression angle variation, configuration variants, and version variants.
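
For reference, a standard squeeze-and-excitation block looks as follows: global average pooling "squeezes" each channel to a scalar, and a small bottleneck MLP "excites" per-channel gates. The paper's enhanced-SE module modifies this idea in ways not reproduced here.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation channel gating."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        # Squeeze: global average pool; excite: per-channel gating weights.
        w = self.fc(x.mean(dim=(2, 3)))
        return x * w[:, :, None, None]

y = SEBlock(64)(torch.randn(1, 64, 32, 32))  # same shape, reweighted channels
```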

24 pages, 8314 KiB  
Article
Dual Learning-Based Siamese Framework for Change Detection Using Bi-Temporal VHR Optical Remote Sensing Images
by Bo Fang, Li Pan and Rong Kou
Remote Sens. 2019, 11(11), 1292; https://doi.org/10.3390/rs11111292 - 30 May 2019
Cited by 51 | Viewed by 6622
Abstract
As a fundamental and profound task in remote sensing, change detection from very-high-resolution (VHR) images plays a vital role in a wide range of applications and attracts considerable attention. Current methods generally focus on simultaneously modeling and discriminating the changed and unchanged features. In practice, for bi-temporal VHR optical remote sensing images, temporal spectral variability tends to exist in all bands throughout the entire paired images, making it difficult to distinguish non-changes from changes with a single model. Motivated by this observation, we propose a novel hybrid end-to-end framework named the dual learning-based Siamese framework (DLSF) for change detection. The framework comprises two parallel streams: dual learning-based domain transfer and Siamese-based change decision. The former reduces the domain differences of the two paired images and retains the intrinsic information by translating them into each other's domain, while the latter learns a decision strategy for identifying changes in the two domains, respectively. By training the framework with change map references, the method learns a cross-domain translation that suppresses the differences of unchanged regions and highlights the differences of changed regions in the two domains, and then focuses on the detection of changed regions. To the best of our knowledge, the idea of incorporating a dual learning framework and a Siamese network for change detection is novel. Experimental results on two datasets and comparisons with other state-of-the-art methods verify the efficiency and superiority of the proposed DLSF.
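
The Siamese change-decision stream can be pictured with a minimal sketch: a weight-shared encoder embeds both acquisition dates and a small head classifies the per-pixel feature difference. The dual-learning translation stream and the actual DLSF layers are omitted; all sizes below are assumptions.

```python
import torch
import torch.nn as nn

class SiameseChange(nn.Module):
    """Weight-shared encoder + change head over the feature difference."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(32, 1, 1)  # per-pixel change logit

    def forward(self, t1, t2):
        f1, f2 = self.encoder(t1), self.encoder(t2)  # same weights for both dates
        return self.head(torch.abs(f1 - f2))

logits = SiameseChange()(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))
```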

18 pages, 11150 KiB  
Article
Fig Plant Segmentation from Aerial Images Using a Deep Convolutional Encoder-Decoder Network
by Jorge Fuentes-Pacheco, Juan Torres-Olivares, Edgar Roman-Rangel, Salvador Cervantes, Porfirio Juarez-Lopez, Jorge Hermosillo-Valadez and Juan Manuel Rendón-Mancha
Remote Sens. 2019, 11(10), 1157; https://doi.org/10.3390/rs11101157 - 15 May 2019
Cited by 34 | Viewed by 5959
Abstract
Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown under difficult open-field conditions, with complex lighting and the non-ideal crop maintenance practices of local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, as well as the aerial image data set and a hand-made, pixel-precise ground truth segmentation, to facilitate comparison among different algorithms.

19 pages, 43781 KiB  
Article
A Stacked Fully Convolutional Networks with Feature Alignment Framework for Multi-Label Land-cover Segmentation
by Guangming Wu, Yimin Guo, Xiaoya Song, Zhiling Guo, Haoran Zhang, Xiaodan Shi, Ryosuke Shibasaki and Xiaowei Shao
Remote Sens. 2019, 11(9), 1051; https://doi.org/10.3390/rs11091051 - 03 May 2019
Cited by 24 | Viewed by 4753
Abstract
Applying deep-learning methods, especially fully convolutional networks (FCNs), has become a popular option for land-cover classification and segmentation in remote sensing. Compared with traditional solutions, these approaches have shown promising generalization capabilities and precision levels in various datasets of different scales, resolutions, and imaging conditions. To achieve superior performance, a lot of research has focused on constructing more complex or deeper networks. However, using an ensemble of different fully convolutional models to achieve better generalization and prevent overfitting has long been ignored. In this research, we design four stacked fully convolutional networks (SFCNs) and a feature alignment framework for multi-label land-cover segmentation. The proposed framework introduces an alignment loss on the features extracted from the basic models to balance their similarity and variety. Experiments on a very high resolution (VHR) image dataset with six land-cover categories indicate that the proposed SFCNs achieve better performance than existing deep learning methods. For the second SFCN variant, optimal feature alignment yields gains of 4.2% (0.772 vs. 0.741) in F1-score, 6.8% (0.629 vs. 0.589) in Jaccard index, and 5.5% (0.727 vs. 0.689) in kappa coefficient.
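
One simple way to realise an alignment loss between the features of ensembled models is a pairwise distance penalty, sketched below. The paper's exact formulation for balancing similarity against variety may differ, so treat this as an assumption-laden illustration.

```python
import torch
import torch.nn.functional as F

def alignment_loss(feature_maps):
    """Pairwise MSE between the basic models' feature maps."""
    loss = 0.0
    for i in range(len(feature_maps)):
        for j in range(i + 1, len(feature_maps)):
            loss = loss + F.mse_loss(feature_maps[i], feature_maps[j])
    return loss

feats = [torch.rand(2, 16, 64, 64) for _ in range(4)]  # four stacked FCNs
print(alignment_loss(feats))
```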

20 pages, 7798 KiB  
Article
Spatial Resolution Enhancement of Satellite Microwave Radiometer Data with Deep Residual Convolutional Neural Network
by Weidong Hu, Yade Li, Wenlong Zhang, Shi Chen, Xin Lv and Leo Ligthart
Remote Sens. 2019, 11(7), 771; https://doi.org/10.3390/rs11070771 - 30 Mar 2019
Cited by 18 | Viewed by 3968
Abstract
Satellite microwave radiometer data are affected by many degradation factors during the imaging process, such as the sampling interval, antenna pattern and scan mode, leading to reduced spatial resolution. In this paper, a deep residual convolutional neural network (CNN) is proposed to address these degradation problems by learning an end-to-end mapping between low- and high-resolution images. Unlike traditional methods that handle each degradation factor separately, our network jointly learns both the sampling interval limitation and the comprehensive degradation factors, including the antenna pattern, receiver sensitivity and scan mode, during the training process. Moreover, owing to the powerful mapping capability of the deep residual CNN, our method achieves better resolution enhancement results, both quantitatively and qualitatively, than methods in the literature. Microwave radiation imager (MWRI) data from the Fengyun-3C (FY-3C) satellite are used to demonstrate the validity and effectiveness of the method.
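
A common pattern behind learned resolution enhancement with residual CNNs is to predict a residual that is added back to the (pre-upsampled) low-resolution input. The sketch below shows that pattern only; depth, width and the single-channel input are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class ResidualSR(nn.Module):
    """Predict a residual and add it back to the low-resolution input."""

    def __init__(self, channels=1, width=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)  # global residual connection

lr_up = torch.rand(1, 1, 128, 128)  # brightness-temperature map, pre-upsampled
hr = ResidualSR()(lr_up)
```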

29 pages, 16226 KiB  
Article
Chimera: A Multi-Task Recurrent Convolutional Neural Network for Forest Classification and Structural Estimation
by Tony Chang, Brandon P. Rasmussen, Brett G. Dickson and Luke J. Zachmann
Remote Sens. 2019, 11(7), 768; https://doi.org/10.3390/rs11070768 - 29 Mar 2019
Cited by 27 | Viewed by 9431
Abstract
More consistent and current estimates of forest land cover type and forest structural metrics are needed to guide national policies on forest management, carbon sequestration, and ecosystem health. In recent years, the increased availability of high-resolution (<30 m) imagery and advancements in machine learning algorithms have opened up a new opportunity to fuse multiple datasets of varying spatial, spectral, and temporal resolutions. Here, we present a new model, based on a deep learning architecture, that performs both classification and regression concurrently, thereby consolidating what were previously several independent tasks and models into one stream. The model, a multi-task recurrent convolutional neural network that we call the Chimera, integrates varying-resolution, freely available aerial and satellite imagery, as well as relevant environmental factors (e.g., climate, terrain), to simultaneously classify five forest cover types ('conifer', 'deciduous', 'mixed', 'dead', 'none' (non-forest)) and to estimate four continuous forest structure metrics (above ground biomass, quadratic mean diameter, basal area, canopy cover). We demonstrate the performance of our approach by training an ensemble of Chimera models on 9967 georeferenced (true locations) Forest Inventory and Analysis field plots from the USDA Forest Service within California and Nevada. Classification diagnostics for the Chimera ensemble on an independent test set produce an overall average precision, recall, and F1-score of 0.92. Class-wise F1-scores were high for the 'none' (0.99) and 'conifer' (0.85) cover classes, and moderate for the 'mixed' (0.74) class, demonstrating a strong ability to discriminate locations with and without trees. Regression diagnostics on the test set indicate very high accuracy for ensembled estimates of above ground biomass (R² = 0.84, RMSE = 37.28 Mg/ha), quadratic mean diameter (R² = 0.81, RMSE = 3.74 inches), basal area (R² = 0.87, RMSE = 25.88 ft²/ac), and canopy cover (R² = 0.89, RMSE = 8.01 percent). Comparative analysis of the Chimera ensemble versus support vector machine and random forest approaches demonstrates increased performance over both methods. Future implementations of the Chimera ensemble on a distributed computing platform could provide continuous, annual estimates of forest structure for other forested landscapes at regional or national scales.
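
The multi-task idea of one shared trunk feeding a classification head and a regression head, with the two losses summed, can be sketched briefly. The recurrent and ensemble components of the Chimera are omitted, and all shapes and the equal loss weighting below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Shared trunk feeding both task heads.
trunk = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
cls_head = nn.Linear(32, 5)  # conifer / deciduous / mixed / dead / none
reg_head = nn.Linear(32, 4)  # biomass, QMD, basal area, canopy cover

x = torch.rand(8, 3, 64, 64)
feats = trunk(x)
cls_loss = nn.CrossEntropyLoss()(cls_head(feats), torch.randint(0, 5, (8,)))
reg_loss = nn.MSELoss()(reg_head(feats), torch.rand(8, 4))
loss = cls_loss + reg_loss  # joint objective; weighting is an assumption
```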

20 pages, 4156 KiB  
Article
Multispectral Transforms Using Convolution Neural Networks for Remote Sensing Multispectral Image Compression
by Jin Li and Zilong Liu
Remote Sens. 2019, 11(7), 759; https://doi.org/10.3390/rs11070759 - 28 Mar 2019
Cited by 36 | Viewed by 5330
Abstract
A multispectral image is a third-order tensor, i.e., a three-dimensional matrix with one spectral dimension and two spatial dimensions. Multispectral image compression can therefore exploit the advantages of tensor decomposition (TD), such as Nonnegative Tucker Decomposition (NTD). Unfortunately, TD suffers from high computational complexity and cannot be used in on-board, low-complexity settings (e.g., multispectral cameras) where hardware resources and power are limited. Here, we propose a low-complexity compression approach for multispectral images based on convolutional neural networks (CNNs) with NTD. We construct a new spectral transform using CNNs that maps the three-dimensional spectral tensor from a large-scale to a small-scale version, so that NTD resources are allocated only to the small-scale tensor, improving computational efficiency. We obtain the optimized small-scale spectral tensor by minimizing the difference between the original and reconstructed three-dimensional spectral tensors in self-learning CNNs. NTD is then applied to the optimized tensor in the DCT domain to achieve high compression performance. We experimentally validated the proposed method on multispectral images. Compared to applying the method without the new CNN spectral transform at the same compression bit-rates, the reconstructed image quality is improved. Compared with the full NTD-based method, computational efficiency is clearly improved at the cost of only a small sacrifice in PSNR, without affecting the quality of the images.
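
The CNN spectral transform can be read as learned band mixing: 1×1 convolutions shrink the spectral dimension so that the downstream NTD operates on a much smaller tensor, trained by reconstructing the original cube. This is a loose reading of the abstract; the band counts and the plain reconstruction objective below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

spectral_down = nn.Conv2d(8, 3, kernel_size=1)  # 8 bands -> 3 "virtual" bands
spectral_up = nn.Conv2d(3, 8, kernel_size=1)    # inverse mapping for decoding

cube = torch.rand(1, 8, 256, 256)  # multispectral tensor (bands, rows, cols)
small = spectral_down(cube)        # compact tensor handed to NTD downstream
recon = spectral_up(small)
loss = F.mse_loss(recon, cube)     # self-learning reconstruction objective
```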

19 pages, 8219 KiB  
Article
Convolutional Neural Network and Guided Filtering for SAR Image Denoising
by Shuaiqi Liu, Tong Liu, Lele Gao, Hailiang Li, Qi Hu, Jie Zhao and Chong Wang
Remote Sens. 2019, 11(6), 702; https://doi.org/10.3390/rs11060702 - 23 Mar 2019
Cited by 56 | Viewed by 5285
Abstract
Coherent noise often interferes with synthetic aperture radar (SAR) images, which has a huge impact on subsequent processing and analysis. This paper puts forward a novel SAR image denoising algorithm involving a convolutional neural network (CNN) and guided filtering, which combines the advantages of model-based optimization and discriminative learning while considering how to retain the most image information and improve image resolution. In the proposed method, an SAR image is first filtered by five denoisers of different levels, built on an efficient and effective CNN denoiser prior, to obtain five denoised images. A guided filtering-based fusion algorithm then integrates the five denoised images into a final denoised image. The experimental results indicate that, although the algorithm cannot eliminate the noise completely, it significantly improves the visual quality of the image, allowing it to outperform some recent denoising methods in this field.
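
The guided filter at the heart of the fusion step is a published operator (He et al.) with a short closed-form implementation. A basic single-channel version is sketched below, with window size and epsilon as illustrative choices; the paper's five-image fusion builds on this operator but is not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Single-channel guided filter (He et al.), box-filter formulation."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    var_g = uniform_filter(guide * guide, size) - mean_g ** 2
    a = cov_gs / (var_g + eps)  # local linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

noisy = np.random.rand(256, 256).astype(np.float32)
smoothed = guided_filter(noisy, noisy)  # edge-preserving self-guided smoothing
```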

20 pages, 3542 KiB  
Article
A Multiscale Deep Middle-level Feature Fusion Network for Hyperspectral Classification
by Zhaokui Li, Lin Huang and Jinrong He
Remote Sens. 2019, 11(6), 695; https://doi.org/10.3390/rs11060695 - 22 Mar 2019
Cited by 45 | Viewed by 9826
Abstract
Recently, few networks have considered spectral-spatial information from multiscale inputs, and even those that do cannot guarantee that the features extracted from each scale input are optimal; nor do they consider the complementary and related information among different scale features. To address these issues, a multiscale deep middle-level feature fusion network (MMFN) is proposed in this paper for hyperspectral classification. The MMFN fully fuses the strongly complementary and related information among different scale features to extract more discriminative features. The training of the network comprises two stages: the first stage obtains the optimal model corresponding to each scale input and extracts the middle-level features under the corresponding scale model, guaranteeing that the multiscale middle-level features are optimal. The second stage fuses the optimal multiscale middle-level features in a convolutional layer, and the subsequent residual blocks learn the complementary and related information among the different scale middle-level features. Moreover, the idea of identity mapping from residual learning helps the network attain higher accuracy as it grows deeper. The effectiveness of our method is demonstrated on four HSI data sets, and the experimental results show that it outperforms other state-of-the-art methods, especially with small training samples.

19 pages, 13625 KiB  
Article
Detection of Fir Trees (Abies sibirica) Damaged by the Bark Beetle in Unmanned Aerial Vehicle Images with Deep Learning
by Anastasiia Safonova, Siham Tabik, Domingo Alcaraz-Segura, Alexey Rubtsov, Yuriy Maglinets and Francisco Herrera
Remote Sens. 2019, 11(6), 643; https://doi.org/10.3390/rs11060643 - 16 Mar 2019
Cited by 112 | Viewed by 13021
Abstract
Invasion of the Polygraphus proximus Blandford bark beetle causes catastrophic damage to fir forests (Abies sibirica Ledeb) in Russia, especially in Central Siberia. Determining the tree damage stage based on the shape, texture and colour of the tree crown in unmanned aerial vehicle (UAV) images could help assess forest health in a faster and cheaper way. However, this task is challenging since (i) fir trees at different damage stages coexist and overlap in the canopy, and (ii) the distribution of fir trees in nature is irregular, so distinguishing between different crowns is hard, even for the human eye. Motivated by the latest advances in computer vision and machine learning, this work proposes a two-stage solution: in the first stage, we built a detection strategy that finds the regions of the input UAV image most likely to contain a crown; in the second stage, we developed a new convolutional neural network (CNN) architecture that predicts the fir tree damage stage in each candidate region. Our experiments show that the proposed approach achieves satisfactory results on UAV Red, Green, Blue (RGB) images of forest areas in the state nature reserve "Stolby" (Krasnoyarsk, Russia).

16 pages, 15828 KiB  
Article
Road Extraction from High-Resolution Remote Sensing Imagery Using Refined Deep Residual Convolutional Neural Network
by Lin Gao, Weidong Song, Jiguang Dai and Yang Chen
Remote Sens. 2019, 11(5), 552; https://doi.org/10.3390/rs11050552 - 06 Mar 2019
Cited by 92 | Viewed by 8070
Abstract
Road extraction is one of the most significant tasks for modern transportation systems. The task is normally difficult due to complex backgrounds: rural roads have heterogeneous appearances with large intraclass and low interclass variations, while urban roads are covered by vehicles, pedestrians and the shadows of surrounding trees or buildings. In this paper, we propose a novel method for extracting roads from optical satellite images using a refined deep residual convolutional neural network (RDRCNN) with a postprocessing stage. The RDRCNN consists of a residual connected unit (RCU) and a dilated perception unit (DPU), and its structure is symmetric so that the outputs have the same size as the inputs. Mathematical morphology and a tensor voting algorithm are used to improve RDRCNN performance during postprocessing. Experiments were conducted on two datasets of high-resolution images to demonstrate the performance of the proposed network architecture, and its results were compared with those of other network architectures. The results demonstrate the effective performance of the proposed method for extracting roads from complex scenes.
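
The two named ingredients, residual connections and dilated ("perception") convolutions, combine naturally into a unit like the following sketch. The actual RDRCNN layout is not reproduced here, and the hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class DilatedResidualUnit(nn.Module):
    """Residual unit whose convolutions are dilated to widen the receptive field."""

    def __init__(self, ch, dilation=2):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(x + out)  # identity shortcut keeps gradients flowing

y = DilatedResidualUnit(32)(torch.rand(1, 32, 128, 128))
```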

22 pages, 17974 KiB  
Article
Semantic Segmentation on Remotely Sensed Images Using an Enhanced Global Convolutional Network with Channel Attention and Domain Specific Transfer Learning
by Teerapong Panboonyuen, Kulsawasd Jitkajornwanich, Siam Lawawirojwong, Panu Srestasathiern and Peerapon Vateekul
Remote Sens. 2019, 11(1), 83; https://doi.org/10.3390/rs11010083 - 04 Jan 2019
Cited by 82 | Viewed by 8915
Abstract
In the remote sensing domain, it is crucial to perform semantic segmentation of raster images into classes such as river, building, and forest. The deep convolutional encoder–decoder (DCED) network is the state-of-the-art semantic segmentation method for remotely sensed images. However, its accuracy is still limited, since the network is not designed for remotely sensed images and training data in this domain are scarce. In this paper, we propose a novel CNN for semantic segmentation, particularly for remote sensing corpora, with three main contributions. First, we propose applying a recent CNN called the global convolutional network (GCN), since it can capture different resolutions by extracting multi-scale features from different stages of the network; we further enhance the network by enlarging its backbone with more layers, which is suitable for medium resolution remotely sensed images. Second, "channel attention" is introduced into our network to select the most discriminative filters (features). Third, "domain-specific transfer learning" is introduced to alleviate the data scarcity issue by utilizing other remotely sensed corpora with different resolutions as pre-trained data. Experiments were conducted on two datasets: (i) medium resolution data collected from the Landsat-8 satellite and (ii) very high resolution data from the ISPRS Vaihingen Challenge Dataset. The results show that our network outperformed DCED in terms of F1 score by 17.48% and 2.49% on the medium and very high resolution corpora, respectively.
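
Domain-specific transfer learning as described amounts to initialising the target-domain model with weights trained on another remote-sensing corpus and then fine-tuning. A minimal sketch follows, with toy networks standing in for the real backbones; only parameters whose names and shapes match are copied.

```python
import torch.nn as nn

# Source model: assumed pre-trained on another remote-sensing corpus (5 classes).
source = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
                       nn.Conv2d(64, 5, 1))
# Target model: same backbone, different number of output classes (6 here).
target = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
                       nn.Conv2d(64, 6, 1))

src_state, tgt_state = source.state_dict(), target.state_dict()
# Copy only parameters whose names and shapes match (the shared backbone);
# the mismatched output layer keeps its fresh initialisation.
tgt_state.update({k: v for k, v in src_state.items()
                  if k in tgt_state and v.shape == tgt_state[k].shape})
target.load_state_dict(tgt_state)
# Fine-tuning would then proceed on the target corpus at a reduced learning rate.
```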
