Special Issue "Computational Intelligence and Advanced Learning Techniques in Remote Sensing"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 July 2021).

Special Issue Editors

Dr. Edoardo Pasolli
Guest Editor
Department of Agricultural Sciences, University of Naples Federico II, Via Università 100, 80055 Portici, Naples, Italy
Interests: multi/hyperspectral remote sensing; image processing and analysis; machine learning; pattern recognition; computer vision
Dr. Zhou Zhang
Guest Editor
Department of Biological Systems Engineering, University of Wisconsin-Madison, 230 Agricultural Engineering Building, 460 Henry Mall, Madison, WI 53706, USA
Interests: hyperspectral remote sensing; machine learning; unmanned aerial vehicle (UAV)-based imaging platform developments; precision agriculture; high-throughput plant phenotyping
Dr. Zhengxia Zou
Guest Editor
Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, North Campus Research Complex Building 300, Ann Arbor, MI 48105, USA
Interests: remote sensing image processing and analysis; computer vision; machine learning; pattern recognition
Dr. ZhiYong Lv
Guest Editor
School of Computer Science and Engineering, Xi'an University of Technology, No. 5 Jin Hua South Road, Xi'an, Shaanxi Province 710054, China
Interests: very high-resolution remote sensing images; land cover change detection; landslide inventory mapping; land cover classification and pattern recognition; remote sensing application; machine learning

Special Issue Information

Dear Colleagues,

Over the last couple of decades, remote sensing has become a fundamental technology for monitoring urban and natural areas at local and global scales. Important achievements have been made thanks to the growing availability of sensors with improved spatial and spectral resolutions, deployed on platforms such as satellites, aircraft, and newly developed UAV systems.

These improvements in acquisition capability, however, raise significant challenges for processing methodologies. In particular, traditional image analysis techniques are impractical and ineffective for extracting meaningful information from the growing amount of collected data. New strategies, on both the methodological and the computational sides, are required to deal with this massive amount of data.

In this Special Issue, we welcome methodological contributions on innovative computational intelligence and learning techniques, as well as applications of advanced methodologies to relevant remote sensing scenarios. We invite you to submit the most recent advances in the following and related topics:

  • Machine learning and pattern recognition methodologies for remote sensing image analysis
  • Deep, transfer, and active learning from single and multiple sources
  • Semantic and image segmentation
  • Manifold learning
  • Large-scale image analysis
  • Change and target detection in single- and multi-temporal analysis
  • Multi-modal data fusion
  • Near-real time and real-time processing

Dr. Edoardo Pasolli
Dr. Zhou Zhang
Dr. Zhengxia Zou
Dr. ZhiYong Lv
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Remote sensing
  • Machine learning
  • Pattern recognition
  • Deep learning
  • Domain adaptation
  • Active learning
  • Manifold learning
  • Semantic segmentation
  • Data fusion

Published Papers (10 papers)


Research


Article
A Method of Ground-Based Cloud Motion Predict: CCLSTM + SR-Net
Remote Sens. 2021, 13(19), 3876; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13193876 - 28 Sep 2021
Abstract
Ground-based cloud images can provide information on weather and cloud conditions, which play an important role in cloud cover monitoring and photovoltaic power generation forecasting. However, cloud motion prediction from ground-based cloud images still lacks advanced and complete methods, and traditional techniques based on image processing and motion vector calculation struggle to predict cloud morphological changes. In this paper, we propose a cloud motion prediction method based on Cascade Causal Long Short-Term Memory (CCLSTM) and a Super-Resolution Network (SR-Net). First, CCLSTM is used to estimate the shape and speed of cloud motion. Second, the Super-Resolution Network, built on perceptual losses, reconstructs the CCLSTM output to make it clearer. We tested our method on Atmospheric Radiation Measurement (ARM) Climate Research Facility total sky imager (TSI) images. The experiments showed that the method is able to predict sky cloud changes over the next few steps.
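As a point of reference, the traditional motion-vector approach that the abstract contrasts against can be sketched with plain NumPy: estimate a single dominant displacement between two consecutive frames by FFT cross-correlation and extrapolate it forward. This is a hedged illustration of the classical baseline, not the paper's CCLSTM + SR-Net (which is learned and also predicts morphological change); the blob example and frame sizes are invented for illustration.

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate the dominant (dy, dx) displacement between two frames
    from the peak of their FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(curr) * np.conj(np.fft.fft2(prev))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the midpoint wrap around to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

def extrapolate(curr, shift):
    """Naive next-frame prediction: shift the current frame by the
    estimated motion vector (wraps at the borders)."""
    return np.roll(curr, shift, axis=(0, 1))

# Toy example: a bright blob moving 3 px down and 2 px right per frame.
frame0 = np.zeros((64, 64)); frame0[10:15, 10:15] = 1.0
frame1 = np.roll(frame0, (3, 2), axis=(0, 1))
dy, dx = estimate_shift(frame0, frame1)
pred = extrapolate(frame1, (dy, dx))
```

Note the limitation the abstract points out: this baseline can only translate the last frame, so it cannot model the cloud shape changes that the learned approach targets.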
Article
Aircraft Detection in High Spatial Resolution Remote Sensing Images Combining Multi-Angle Features Driven and Majority Voting CNN
Remote Sens. 2021, 13(11), 2207; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13112207 - 04 Jun 2021
Cited by 2 | Viewed by 683
Abstract
Aircraft are a means of transportation and weaponry, and detecting them from remote sensing images is crucial for both civil and military fields. However, detecting aircraft effectively remains a problem due to the diversity of aircraft pose, size, and position and the variety of objects in the image. At present, target detection methods based on convolutional neural networks (CNNs) lack sufficient extraction of remote sensing image information and post-processing of detection results, which leads to high missed detection and false alarm rates for complex and dense targets. To address these issues, we propose a target detection model based on Faster R-CNN that combines multi-angle feature extraction and a majority voting strategy. Specifically, we designed a multi-angle transformation module that transforms the input image to realize multi-angle feature extraction of the targets in the image. In addition, we added a majority voting mechanism at the end of the model to process the results of the multi-angle feature extraction. The average precision (AP) of this method reaches 94.82% and 95.25% on the public and private datasets, respectively, which is 6.81% and 8.98% higher than that of Faster R-CNN. The experimental results show that the method can detect aircraft effectively, obtaining better performance than mature target detection networks.
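A majority voting step over multi-angle detections can be sketched as follows. This is a hypothetical reconstruction of the idea, not the authors' code: the IoU threshold, the vote rule, and the assumption that boxes have already been rotated back to the original image orientation are all illustrative choices.

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def majority_vote(detections_per_angle, iou_thr=0.5):
    """Keep a detection only if boxes from more than half of the rotation
    angles overlap it (IoU above threshold). `detections_per_angle` is a
    list of box lists, one per angle, with every box already mapped back
    to the original image orientation."""
    candidates = [b for boxes in detections_per_angle for b in boxes]
    kept, needed = [], len(detections_per_angle) // 2 + 1
    for box in candidates:
        votes = sum(
            any(iou(box, other) >= iou_thr for other in boxes)
            for boxes in detections_per_angle
        )
        # Keep boxes with a majority of votes, suppressing duplicates.
        if votes >= needed and not any(iou(box, k) >= iou_thr for k in kept):
            kept.append(box)
    return kept
```

In this sketch a box seen from only one rotation angle (a likely false alarm) is discarded, while a box confirmed at most angles survives.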

Article
Single Object Tracking in Satellite Videos: Deep Siamese Network Incorporating an Interframe Difference Centroid Inertia Motion Model
Remote Sens. 2021, 13(7), 1298; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13071298 - 29 Mar 2021
Cited by 1 | Viewed by 604
Abstract
Satellite video single object tracking has attracted wide attention. The development of remote sensing platforms for earth observation makes it increasingly convenient to acquire high-resolution satellite videos, which greatly accelerates ground target tracking. However, overlarge images with small object sizes, high similarity among multiple moving targets, and poor distinguishability between objects and background make this task highly challenging. To solve these problems, a deep Siamese network (DSN) incorporating an interframe difference centroid inertia motion (ID-CIM) model is proposed in this paper. In object tracking tasks, the DSN inherently includes a template branch and a search branch; it extracts features from these two branches and employs a Siamese region proposal network to obtain the position of the target in the search branch. The ID-CIM mechanism is proposed to alleviate model drift. These two modules build the ID-DSN framework and mutually reinforce the final tracking results. In addition, we adopted existing object detection datasets for remotely sensed images to generate training datasets suitable for satellite video single object tracking. Ablation experiments were performed on six high-resolution satellite videos acquired from the International Space Station and "Jilin-1" satellites. We compared the proposed ID-DSN with 11 other state-of-the-art trackers, including different networks and backbones. The comparison results show that our ID-DSN obtained a precision criterion of 0.927 and a success criterion of 0.694 at 32.117 frames per second (FPS) on a single NVIDIA GTX1070Ti GPU.
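The centroid inertia idea can be illustrated with a minimal sketch: blend a constant-velocity ("inertia") prediction from the track history with the centroid of the interframe difference image, which marks where motion occurred. The blending weight and difference threshold below are invented for illustration and are not taken from the paper.

```python
import numpy as np

def diff_centroid(prev_frame, curr_frame, thr=0.2):
    """Centroid of the interframe difference: where motion happened."""
    mask = np.abs(curr_frame.astype(float) - prev_frame) > thr
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def predict_position(track, prev_frame, curr_frame, alpha=0.5):
    """Blend an inertia (constant-velocity) prediction with the
    interframe-difference centroid; fall back to inertia alone when
    no motion is detected. `track` holds past (y, x) positions."""
    p_prev, p_curr = np.asarray(track[-2]), np.asarray(track[-1])
    inertia = p_curr + (p_curr - p_prev)  # constant-velocity extrapolation
    centroid = diff_centroid(prev_frame, curr_frame)
    if centroid is None:
        return inertia
    return alpha * inertia + (1 - alpha) * centroid
```

The motion prior keeps the tracker from locking onto a similar-looking distractor, which is the drift problem the ID-CIM module addresses.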

Article
Enlighten-GAN for Super Resolution Reconstruction in Mid-Resolution Remote Sensing Images
Remote Sens. 2021, 13(6), 1104; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13061104 - 14 Mar 2021
Viewed by 934
Abstract
Generative adversarial networks (GANs) have been widely applied to super resolution reconstruction (SRR) methods, which turn low-resolution (LR) images into high-resolution (HR) ones. However, because these methods recover high-frequency information from what they observed in other images, they tend to produce artifacts when processing unfamiliar images. Optical satellite remote sensing images contain far more complicated scenes than natural images. Therefore, applying previous networks to remote sensing images, especially mid-resolution ones, leads to unstable convergence and thus unpleasant artifacts. In this paper, we propose Enlighten-GAN for SRR tasks on large-size optical mid-resolution remote sensing images. Specifically, we design enlighten blocks to induce the network to converge to a reliable point, and introduce a Self-Supervised Hierarchical Perceptual Loss that improves performance beyond other loss functions. Furthermore, limited by memory, large-scale images need to be cropped into patches and passed through the network separately. To merge the reconstructed patches into a whole without seam lines, we employ an internal inconsistency loss and a cropping-and-clipping strategy. Experimental results confirm that Enlighten-GAN outperforms state-of-the-art methods in terms of the gradient similarity metric (GSM) on mid-resolution Sentinel-2 remote sensing images.
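The cropping-and-clipping strategy can be sketched independently of the network: process overlapping patches, then keep only each patch's central region when stitching, so patch borders (where artifacts concentrate) never reach the output. A minimal sketch under assumed patch/stride bookkeeping, not the paper's implementation:

```python
import numpy as np

def merge_patches(patches, image_shape, patch, stride):
    """Stitch overlapping patches back into a full image, clipping each
    patch's border so only its central region is written. `patches` maps
    (y, x) top-left corners to processed patch arrays."""
    out = np.zeros(image_shape)
    margin = (patch - stride) // 2
    for (y, x), p in patches.items():
        # Keep the border only at the image edge, where no neighbouring
        # patch exists to cover it.
        t = 0 if y == 0 else margin
        l = 0 if x == 0 else margin
        b = patch if y + patch >= image_shape[0] else patch - margin
        r = patch if x + patch >= image_shape[1] else patch - margin
        out[y + t:y + b, x + l:x + r] = p[t:b, l:r]
    return out
```

With an identity "network", cropping and merging should reproduce the input exactly; with a real SRR network, the clipped borders are what suppresses visible seam lines.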

Article
A Spatial-Temporal Attention-Based Method and a New Dataset for Remote Sensing Image Change Detection
Remote Sens. 2020, 12(10), 1662; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12101662 - 22 May 2020
Cited by 28 | Viewed by 4807
Abstract
Remote sensing image change detection (CD) aims to identify significant changes between bitemporal images. Given two co-registered images taken at different times, illumination variations and misregistration errors can overwhelm the real object changes, and exploring the relationships among different spatial-temporal pixels may improve the performance of CD methods. In this work, we propose a novel Siamese-based spatial-temporal attention neural network. In contrast to previous methods that separately encode the bitemporal images without referring to any useful spatial-temporal dependency, we design a CD self-attention mechanism to model the spatial-temporal relationships. We integrate a new CD self-attention module into the feature extraction procedure. Our self-attention module calculates the attention weights between any two pixels at different times and positions and uses them to generate more discriminative features. Considering that objects may have different scales, we partition the image into multi-scale subregions and apply the self-attention in each subregion. In this way, we capture spatial-temporal dependencies at various scales, thereby generating better representations to accommodate objects of various sizes. We also introduce a CD dataset, LEVIR-CD, which is two orders of magnitude larger than other public datasets in this field. LEVIR-CD consists of a large set of bitemporal Google Earth images, with 637 image pairs (1024 × 1024) and over 31 k independently labeled change instances. Our proposed attention module improves the F1-score of our baseline model from 83.9 to 87.3 with acceptable computational overhead. Experimental results on a public remote sensing image CD dataset show that our method outperforms several other state-of-the-art methods.
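The core of such a self-attention module can be sketched in NumPy: stack the pixels of both dates, compute pairwise affinities, and re-weight every pixel's features with a softmax over all spatial-temporal positions. For brevity this sketch uses identity projections in place of the learned query/key/value convolutions a real module would have:

```python
import numpy as np

def spatial_temporal_attention(feat_t1, feat_t2):
    """Self-attention over the pixels of two temporal feature maps:
    every pixel (at either date) attends to every pixel at both dates.
    feat_t1, feat_t2: (H, W, C) features of the bitemporal images."""
    h, w, c = feat_t1.shape
    x = np.concatenate([feat_t1.reshape(-1, c),
                        feat_t2.reshape(-1, c)])      # (2HW, C)
    scores = x @ x.T / np.sqrt(c)                     # pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)           # softmax over all pixels
    out = attn @ x                                    # re-weighted features
    return out[:h * w].reshape(h, w, c), out[h * w:].reshape(h, w, c)
```

Because each output pixel aggregates information from both dates, co-registration noise at a single position can be compensated by consistent context elsewhere, which is the intuition behind the CD self-attention design.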

Article
Hierarchical Multi-View Semi-Supervised Learning for Very High-Resolution Remote Sensing Image Classification
Remote Sens. 2020, 12(6), 1012; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12061012 - 21 Mar 2020
Cited by 4 | Viewed by 1359
Abstract
Traditional classification methods for very high-resolution (VHR) remote sensing images require a large number of labeled samples to obtain high classification accuracy, and labeled samples are difficult and costly to obtain. Semi-supervised learning, which combines labeled and unlabeled samples for classification, is therefore an effective paradigm. In semi-supervised learning, the key issue is to enlarge the training set by selecting highly reliable unlabeled samples. Observing the samples from multiple views helps improve the accuracy of label prediction for unlabeled samples; hence, a reasonable view partition is very important for improving classification performance. In this paper, a hierarchical multi-view semi-supervised learning framework with CNNs (HMVSSL) is proposed for VHR remote sensing image classification. First, a superpixel-based sample enlargement method is proposed to increase the number of training samples in each view. Second, a view partition method is designed to split the training set into two independent views whose subsets are inter-distinctive and intra-compact. Finally, a collaborative classification strategy is proposed for the final classification. Experiments on three VHR remote sensing images show that the proposed method performs better than several state-of-the-art methods.

Article
Multiscale Deep Spatial Feature Extraction Using Virtual RGB Image for Hyperspectral Imagery Classification
Remote Sens. 2020, 12(2), 280; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12020280 - 15 Jan 2020
Cited by 10 | Viewed by 1565
Abstract
In recent years, deep learning has been widely used in hyperspectral image classification and has achieved good performance. However, deep networks need a large number of training samples, which conflicts with the limited labeled samples of hyperspectral images. Traditional deep networks usually treat each pixel as a subject, ignoring the integrity of the hyperspectral data, and feature extraction-based methods are likely to lose the edge information that plays a crucial role in pixel-level classification. To overcome the limited annotated samples, we propose a new three-channel image construction method (virtual RGB image) with which networks trained on natural images can be used to extract spatial features; through the trained network, the hyperspectral data are processed as a whole. Meanwhile, we propose a multiscale feature fusion method to combine both detailed and semantic characteristics, thus improving classification accuracy. Experiments show that the proposed method achieves better results than state-of-the-art methods. In addition, the virtual RGB image can be extended to other hyperspectral processing methods that need three-channel images.
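One plausible way to build such a virtual RGB image is to average contiguous band groups into three channels so that a backbone pretrained on natural images can consume the result. The equal-thirds band grouping and [0, 255] rescaling below are assumptions for illustration; the paper's exact band-to-channel mapping may differ.

```python
import numpy as np

def virtual_rgb(cube, splits=(0, 1 / 3, 2 / 3, 1)):
    """Compress a hyperspectral cube (H, W, B) into a three-channel
    'virtual RGB' image by averaging contiguous band groups, so that
    networks pretrained on natural images can extract spatial features."""
    h, w, b = cube.shape
    edges = [int(round(s * b)) for s in splits]
    channels = [cube[:, :, lo:hi].mean(axis=2)
                for lo, hi in zip(edges, edges[1:])]
    img = np.stack(channels, axis=2)
    # Rescale to [0, 255] like an ordinary natural image.
    img = (img - img.min()) / (img.max() - img.min() + 1e-12) * 255.0
    return img.astype(np.uint8)
```

The point of the construction is that the whole scene, not individual pixels, goes through the pretrained network, preserving the spatial integrity the abstract emphasizes.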

Article
Do Game Data Generalize Well for Remote Sensing Image Segmentation?
Remote Sens. 2020, 12(2), 275; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12020275 - 14 Jan 2020
Cited by 2 | Viewed by 1865
Abstract
Despite recent progress in deep learning and remote sensing image interpretation, adapting a deep learning model between different sources of remote sensing data remains a challenge. This paper investigates an interesting question: do synthetic data generalize well for remote sensing image applications? To answer this question, we take building segmentation as an example, training a deep learning model on the city map of the well-known video game "Grand Theft Auto V" and then adapting the model to real-world remote sensing images. We propose a generative adversarial training based segmentation framework to improve the adaptability of the segmentation model. Our model consists of a CycleGAN model and a ResNet-based segmentation network: the former is a well-known image-to-image translation framework that learns a mapping from the game domain to the remote sensing domain, and the latter learns to predict pixel-wise building masks from the transformed data. All models in our method can be trained end-to-end, and the segmentation model can be trained without any additional ground truth reference for the real-world images. Experimental results on a public building segmentation dataset suggest the effectiveness of our adaptation method, which shows superiority over other state-of-the-art semantic segmentation methods such as Deeplab-v3 and UNet. Another advantage of our method is that introducing semantic information into the image-to-image translation framework further improves the image style conversion.

Article
Geo-Object-Based Land Cover Map Update for High-Spatial-Resolution Remote Sensing Images via Change Detection and Label Transfer
Remote Sens. 2020, 12(1), 174; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12010174 - 03 Jan 2020
Cited by 6 | Viewed by 1385
Abstract
Land cover (LC) information plays an important role in geoscience applications such as land resources and ecological environment monitoring. Enhancing the automation of fine-scale LC classification and updating by remote sensing has become a key problem, as the capability of remote sensing data acquisition constantly improves in spatial and temporal resolution. However, present methods of generating LC information are relatively inefficient, requiring manual selection of training samples among multitemporal observations, which is becoming the bottleneck of application-oriented LC mapping. The objective of this study is therefore to speed up LC information acquisition and update. We propose a rapid LC map updating approach at the geo-object scale for high-spatial-resolution (HSR) remote sensing. The challenge is to develop methodologies for rapid sampling; hence, the core step of our methodology is an automatic method of collecting samples from historical LC maps by combining change detection and label transfer. A data set of Chinese Gaofen-2 (GF-2) HSR satellite images is used to evaluate the effectiveness of our method for multitemporal updating of LC maps. Prior labels in a historical LC map prove effective in the LC updating task, improving the update by automatically generating training samples for supervised classification. The experimental outcomes demonstrate that the proposed method enhances the automation of LC map updating and allows geo-object-based, up-to-date LC mapping with high accuracy, while greatly reducing the complexity of visual sample acquisition. Furthermore, the accuracy of LC types and the fineness of polygon boundaries in the updated LC maps effectively reflect the characteristics of geo-object changes on the ground surface, making the proposed method suitable for applications requiring refined LC maps.
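The label-transfer idea can be sketched simply: detect changed pixels between the two dates, then reuse the historical map's labels only where nothing changed, yielding training samples at no manual cost. The per-pixel spectral-difference threshold below is a simplified stand-in for the paper's geo-object-based change detection step.

```python
import numpy as np

def transfer_labels(old_map, image_t1, image_t2, change_thr=0.2):
    """Transfer labels from a historical land-cover map to a new date:
    pixels whose mean spectral difference stays below the threshold are
    considered unchanged, and their old labels become training samples
    for classifying the new image; changed pixels are left unlabeled (-1)."""
    diff = np.abs(image_t2.astype(float) - image_t1).mean(axis=-1)
    unchanged = diff < change_thr
    new_labels = np.where(unchanged, old_map, -1)
    return new_labels, unchanged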

Other


Letter
SPMF-Net: Weakly Supervised Building Segmentation by Combining Superpixel Pooling and Multi-Scale Feature Fusion
Remote Sens. 2020, 12(6), 1049; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12061049 - 24 Mar 2020
Cited by 6 | Viewed by 1387
Abstract
The lack of pixel-level labeling limits the practicality of deep learning-based building semantic segmentation, and weakly supervised semantic segmentation based on image-level labels yields incomplete object regions and missing boundary information. This paper proposes a weakly supervised semantic segmentation method for building detection. The proposed method takes the image-level label as supervision information in a classification network that combines superpixel pooling and multi-scale feature fusion structures. The main advantage of the proposed strategy is its ability to improve the completeness and boundary accuracy of a detected building. Our method achieves impressive results on two 2D semantic labeling datasets, outperforming several competing weakly supervised methods and coming close to the fully supervised result.
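Superpixel pooling itself is easy to sketch: average CNN features within each superpixel and broadcast the mean back, so that coarse activations snap to superpixel (and hence building) boundaries. A minimal NumPy sketch, with the superpixel segmentation assumed given:

```python
import numpy as np

def superpixel_pool(features, segments):
    """Superpixel pooling: average the feature vectors of all pixels in
    each superpixel and broadcast the mean back, aligning coarse CNN
    activations with superpixel boundaries (hence sharper edges).
    features: (H, W, C) feature map; segments: (H, W) superpixel ids."""
    pooled = np.empty_like(features, dtype=float)
    for sp in np.unique(segments):
        mask = segments == sp
        pooled[mask] = features[mask].mean(axis=0)
    return pooled
```

In practice the segmentation would come from an algorithm such as SLIC; pooling over it is what lets image-level supervision recover crisp building outlines.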
