Pattern Analysis in Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: closed (15 October 2022) | Viewed by 23520

Special Issue Editors


Dr. Mohamed Lamine Mekhalfi
Guest Editor
Technologies of Vision, Digital Industry Center, Fondazione Bruno Kessler, Via Sommarive, 18, 38123 Povo, TN, Italy
Interests: pattern recognition; computer vision; remote sensing

Special Issue Information

Dear Colleagues,

Remote sensing constitutes an essential instrument for monitoring changes on the Earth's surface. It has been effectively adopted in the pre- and post-event analysis of various civilian (e.g., urban expansion), environmental (e.g., natural disaster prevention/aftermath), and military applications (e.g., target detection and localization). Furthermore, the onset of several European and American missions has brought about voluminous data, often in the form of multispectral and hyperspectral images, whose handling and analysis require particular attention if their full value is to be extracted. Moreover, the advent of miniaturized unmanned aerial vehicles (UAVs) has enabled the acquisition of high-resolution data, which facilitates the pinpointing of fine details.

In the last decade, cutting-edge performance has been achieved in several remote sensing applications, such as object detection and segmentation, owing in large part to deep learning architectures. However, remote sensing has evidently been lagging behind computer vision, which has quickly moved past traditional tasks such as object recognition and is already paving the way toward bigger challenges, such as image description and annotation, commonly termed the “next frontier”.

In this regard, this Special Issue encourages the submission of papers that offer challenging applications and innovative solutions within the broad topic of remote sensing image analysis. In particular, topics that fall within (but are not limited to) the following areas are welcome:

  • Multispectral/hyperspectral remote sensing image classification, segmentation, fusion;
  • Natural disaster analysis (e.g., evolution, aftermath) via remote sensing data;
  • Object detection and/or estimation/counting in remote sensing images;
  • Mapping between natural language and remote sensing images;
  • Deep architectures in remote sensing data;
  • Change detection in remote sensing images;
  • UAV image analysis.

Dr. Mohamed Lamine Mekhalfi
Dr. Yakoub Bazi
Dr. Edoardo Pasolli
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing image analysis
  • object detection
  • deep learning
  • image description
  • change detection

Published Papers (7 papers)


Research

22 pages, 13298 KiB  
Article
Comparing Object-Based and Pixel-Based Methods for Local Climate Zones Mapping with Multi-Source Data
by Ziyun Yan, Lei Ma, Weiqiang He, Liang Zhou, Heng Lu, Gang Liu and Guoan Huang
Remote Sens. 2022, 14(15), 3744; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14153744 - 04 Aug 2022
Cited by 10 | Viewed by 2121
Abstract
The local climate zone (LCZ) system, a standard framework characterizing urban form and environment, effectively promotes urban remote sensing research, especially urban heat island (UHI) research. However, whether mapping with objects is more advantageous than mapping with pixels for LCZ mapping remains uncertain. This study compares object-based and pixel-based LCZ mapping with multi-source data in detail. By comparing the object-based method with the pixel-based method at 50 m and 100 m resolution, respectively, we found that the object-based method performed better, with an overall accuracy (OA) approximately 2% and 5% higher, respectively. In the per-class analysis, the object-based method showed a clear advantage for the land cover types and competitive performance for the built types, while LCZ2, LCZ5, and LCZ6 performed better with the pixel-based method at 50 m. We further employed correlation-based feature selection (CFS) to evaluate feature importance in the object-based paradigm, finding that building height (BH), sky view factor (SVF), building surface fraction (BSF), permeable surface fraction (PSF), and land use exhibited high selection frequency, while image bands were scarcely selected. In summary, we conclude that the object-based method is capable of LCZ mapping and performs better than the pixel-based method under the same training conditions, except in under-segmentation cases.
(This article belongs to the Special Issue Pattern Analysis in Remote Sensing)
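
To make the selection step above concrete, here is a minimal sketch of correlation-based feature selection with a greedy forward search, assuming Pearson correlation as the association measure and synthetic toy data; the paper's exact CFS variant and feature set are not reproduced here.

```python
# A minimal sketch of correlation-based feature selection (CFS) with greedy
# forward search, assuming continuous features and Pearson correlation as the
# association measure (an assumption; the paper's exact variant may differ).
import numpy as np

def cfs_merit(X, y, subset):
    """Merit of a feature subset: k*r_cf / sqrt(k + k*(k-1)*r_ff)."""
    k = len(subset)
    # mean absolute feature-class correlation
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        return r_cf
    # mean absolute feature-feature correlation over all pairs in the subset
    pairs = [(a, b) for i, a in enumerate(subset) for b in subset[i + 1:]]
    r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1]) for a, b in pairs])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def cfs_forward_select(X, y, max_features=10):
    remaining, selected, best = list(range(X.shape[1])), [], -np.inf
    while remaining and len(selected) < max_features:
        score, j = max((cfs_merit(X, y, selected + [j]), j) for j in remaining)
        if score <= best:            # stop when the merit no longer improves
            break
        best = score
        selected.append(j)
        remaining.remove(j)
    return selected

# Toy usage: 200 samples, 6 candidate features (stand-ins for BH, SVF, BSF, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(float)
print(cfs_forward_select(X, y))
```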

26 pages, 9001 KiB  
Article
Robust Object Categorization and Scene Classification over Remote Sensing Images via Features Fusion and Fully Convolutional Network
by Yazeed Yasin Ghadi, Adnan Ahmed Rafique, Tamara al Shloul, Suliman A. Alsuhibany, Ahmad Jalal and Jeongmin Park
Remote Sens. 2022, 14(7), 1550; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14071550 - 23 Mar 2022
Cited by 16 | Viewed by 2246
Abstract
The latest visionary technologies have made an evident impact on remote sensing scene classification. Scene classification is one of the most challenging yet important tasks in understanding high-resolution aerial and remote sensing scenes. In this discipline, deep learning models, particularly convolutional neural networks (CNNs), have made outstanding accomplishments. Deep feature extraction from a CNN model is a frequently utilized technique in these approaches. Although CNN-based techniques have achieved considerable success, there is ample room for improvement in their classification accuracies. Certainly, fusion with other features has the potential to extensively improve the performance of remote sensing scene classification. This paper, thus, offers an effective hybrid model based on the concept of feature-level fusion. We use the fuzzy C-means segmentation technique to appropriately segment various objects in the remote sensing images. The segmented regions of the image are then labeled using a Markov random field (MRF). After the segmentation and labeling of the objects, classical and CNN features are extracted and combined to classify the objects. After categorizing the objects, object-to-object relations are studied. Finally, these objects are passed to a fully convolutional network (FCN) for scene classification along with their relationship triplets. The experimental evaluation on three publicly available standard datasets reveals the phenomenal performance of the proposed system.
(This article belongs to the Special Issue Pattern Analysis in Remote Sensing)
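
As an illustration of the segmentation step the abstract describes, below is a minimal numpy sketch of standard fuzzy C-means clustering applied to pixel features; it shows the textbook update rules only, not the authors' implementation or the subsequent MRF labeling and feature fusion.

```python
# A minimal numpy sketch of fuzzy C-means (FCM) clustering using the standard
# membership/center update equations, applied to per-pixel feature vectors.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """X: (n_pixels, n_features). Returns memberships U (n, c) and centers (c, f)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)           # memberships sum to 1 per pixel
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distance from every pixel to every cluster center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)))      # standard FCM membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return U, centers

# Toy usage: cluster pixel colors of a random 32x32 RGB "image" into 3 regions
img = np.random.default_rng(1).random((32, 32, 3))
U, centers = fuzzy_c_means(img.reshape(-1, 3), c=3)
labels = U.argmax(axis=1).reshape(32, 32)       # hard segmentation map
```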

20 pages, 35718 KiB  
Article
NaGAN: Nadir-like Generative Adversarial Network for Off-Nadir Object Detection of Multi-View Remote Sensing Imagery
by Lei Ni, Chunlei Huo, Xin Zhang, Peng Wang, Luyang Zhang, Kangkang Guo and Zhixin Zhou
Remote Sens. 2022, 14(4), 975; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14040975 - 16 Feb 2022
Cited by 2 | Viewed by 2182
Abstract
Detecting off-nadir objects is a well-known challenge in remote sensing due to distortion and mutable representations. Existing methods mainly focus on a narrow range of view angles and ignore broad-view pantoscopic remote sensing imagery. To address the off-nadir object detection problem in remote sensing, a new nadir-like generative adversarial network (NaGAN) is proposed in this paper that narrows the representation differences between off-nadir and nadir objects. NaGAN consists of a generator and a discriminator: the generator learns to transform an off-nadir object into a nadir-like one so that the two are difficult for the discriminator to distinguish, and the discriminator competes with the generator to learn more nadir-like features. With the progressive competition between the generator and discriminator, the performance of off-nadir object detection is improved significantly. Extensive evaluations on the challenging SpaceNet benchmark for remote sensing demonstrate the superiority of NaGAN over well-established state-of-the-art methods in detecting off-nadir objects.
(This article belongs to the Special Issue Pattern Analysis in Remote Sensing)
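
The adversarial scheme described above follows the familiar generator/discriminator training pattern; the schematic PyTorch sketch below illustrates that pattern on toy feature vectors. The tiny networks, dimensions, and random data are placeholders, not the NaGAN architecture itself.

```python
# A schematic GAN training loop: a generator maps off-nadir features toward
# nadir-like ones while a discriminator learns to tell them apart.
import torch
import torch.nn as nn

feat_dim = 256
G = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))
D = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    nadir = torch.randn(32, feat_dim)       # stand-in for nadir-view features
    off_nadir = torch.randn(32, feat_dim)   # stand-in for off-nadir features

    # discriminator step: real = nadir features, fake = translated off-nadir
    fake = G(off_nadir).detach()
    loss_d = bce(D(nadir), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # generator step: fool the discriminator into scoring its output as nadir
    loss_g = bce(D(G(off_nadir)), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```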

25 pages, 134662 KiB  
Article
Detection of Windthrown Tree Stems on UAV-Orthomosaics Using U-Net Convolutional Networks
by Stefan Reder, Jan-Peter Mund, Nicole Albert, Lilli Waßermann and Luis Miranda
Remote Sens. 2022, 14(1), 75; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14010075 - 24 Dec 2021
Cited by 8 | Viewed by 2623
Abstract
The increasing number of severe storm events is threatening European forests. Besides the primary damage directly caused by storms, there is secondary damage such as bark beetle outbreaks and tertiary damage due to negative effects on the market. This subsequent damage can be minimized if a detailed overview of the affected area and the amount of damaged wood can be obtained quickly and included in the planning of clearance measures. The present work utilizes UAV orthophotos and an adaptation of the U-Net architecture for the semantic segmentation and localization of windthrown stems. The network was pre-trained with generic datasets, randomly combining stem and background samples in a copy–paste augmentation, and afterwards trained with a specific dataset of a particular windthrow. The models pre-trained with generic datasets containing 10, 50, and 100 augmentations per annotated windthrown stem achieved F1-scores of 73.9% (S1Mod10), 74.3% (S1Mod50), and 75.6% (S1Mod100), outperforming the baseline model (F1-score of 72.6%), which was not pre-trained. These results emphasize the applicability of the method to correctly identify windthrown trees and suggest collecting training samples from other tree species and windthrow areas to improve the ability to generalize. Further enhancements of the network architecture are considered to improve the classification performance and to minimize the computational costs.
(This article belongs to the Special Issue Pattern Analysis in Remote Sensing)
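
The copy–paste augmentation used for pre-training can be illustrated with a short numpy sketch: a masked stem crop is pasted at a random position onto a background tile to synthesize an image/mask training pair. All array names and sizes below are illustrative assumptions, not the paper's data.

```python
# A minimal sketch of copy-paste augmentation for segmentation pre-training:
# paste a masked stem crop onto a background tile and emit the paired mask.
import numpy as np

def copy_paste(background, stem_patch, stem_mask, rng):
    """Paste one masked stem crop onto a background tile; returns image + mask."""
    img = background.copy()
    mask = np.zeros(background.shape[:2], dtype=np.uint8)
    ph, pw = stem_patch.shape[:2]
    y = rng.integers(0, background.shape[0] - ph + 1)   # random paste position
    x = rng.integers(0, background.shape[1] - pw + 1)
    region = img[y:y + ph, x:x + pw]
    region[stem_mask > 0] = stem_patch[stem_mask > 0]   # copy only stem pixels
    mask[y:y + ph, x:x + pw] = stem_mask
    return img, mask

rng = np.random.default_rng(42)
bg = rng.random((256, 256, 3))                  # background sample
patch = rng.random((40, 120, 3))                # cropped windthrown stem
pmask = np.zeros((40, 120), dtype=np.uint8)
pmask[10:30, :] = 1                             # crude stem silhouette
image, target = copy_paste(bg, patch, pmask, rng)
```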

20 pages, 29188 KiB  
Article
SFRS-Net: A Cloud-Detection Method Based on Deep Convolutional Neural Networks for GF-1 Remote-Sensing Images
by Xiaolong Li, Hong Zheng, Chuanzhao Han, Wentao Zheng, Hao Chen, Ying Jing and Kaihan Dong
Remote Sens. 2021, 13(15), 2910; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13152910 - 24 Jul 2021
Cited by 12 | Viewed by 2621
Abstract
Clouds constitute a major obstacle to the application of optical remote-sensing images, as they destroy the continuity of the ground information in the images and reduce their utilization rate. Cloud detection has therefore become an important preprocessing step for optical remote-sensing image applications. Because the cloud features used in current cloud-detection methods are mostly manually designed and the information in remote-sensing images is complex, the accuracy and generalization of these methods are unsatisfactory. As cloud detection aims to extract cloud regions from the background, it can be regarded as a semantic segmentation problem. A cloud-detection method based on deep convolutional neural networks (DCNNs), namely a spatial folding–unfolding remote-sensing network (SFRS-Net), is introduced in this paper, together with the reason for the inaccuracy of DCNNs during cloud region segmentation and the concept of space folding/unfolding. The backbone network of the proposed method adopts an encoder–decoder structure, in which the pooling operation in the encoder is replaced by a folding operation and the upsampling operation in the decoder is replaced by an unfolding operation. As a result, the accuracy of cloud detection is improved while generalization is preserved. In the experiments, multispectral data from the GaoFen-1 (GF-1) satellite were collected to form a dataset, on which the overall accuracy (OA) of the method reaches 96.98%, a satisfactory result. This study aims to develop a method that is suitable for cloud detection and can complement other cloud-detection methods, providing a reference for researchers interested in cloud detection in remote-sensing images.
(This article belongs to the Special Issue Pattern Analysis in Remote Sensing)
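
One plausible reading of the folding/unfolding operations is a lossless space-to-depth rearrangement, available in PyTorch as pixel_unshuffle/pixel_shuffle; the sketch below illustrates that interpretation only and should not be taken as the exact SFRS-Net operator.

```python
# Assumed interpretation: "folding" as space-to-depth (pixel_unshuffle) in
# place of pooling, "unfolding" as depth-to-space (pixel_shuffle) in place of
# upsampling, so that no spatial detail is discarded along the way.
import torch
import torch.nn.functional as F

x = torch.randn(1, 16, 64, 64)            # (batch, channels, H, W)

folded = F.pixel_unshuffle(x, 2)          # -> (1, 64, 32, 32): halve H and W,
print(folded.shape)                       #    quadruple channels, losslessly

unfolded = F.pixel_shuffle(folded, 2)     # -> (1, 16, 64, 64): exact inverse
assert torch.equal(unfolded, x)           # no information was lost
```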

22 pages, 4936 KiB  
Article
Early Identification of Root Rot Disease by Using Hyperspectral Reflectance: The Case of Pathosystem Grapevine/Armillaria
by Federico Calamita, Hafiz Ali Imran, Loris Vescovo, Mohamed Lamine Mekhalfi and Nicola La Porta
Remote Sens. 2021, 13(13), 2436; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13132436 - 22 Jun 2021
Cited by 24 | Viewed by 4343
Abstract
The Armillaria genus represents one of the most common causes of chronic root rot disease in woody plants. Prompt recognition of diseased plants is crucial to controlling the pathogen. However, current disease detection methods are limited at the field scale, and an alternative approach is needed. In this study, we investigated the potential of hyperspectral techniques to distinguish fungus-infected from healthy plants of Vitis vinifera. We used the hyperspectral imaging sensor Specim-IQ to acquire leaf reflectance data of the Teroldego Rotaliano grapevine cultivar. We analyzed three different groups of plants: healthy, asymptomatic, and diseased. Highly significant differences were found in the near-infrared (NIR) spectral region, with a decreasing pattern from healthy to diseased plants attributable to changes in the leaf mesophyll. Asymptomatic plants emerged from the other groups due to a lower reflectance in the red edge spectrum (around 705 nm), ascribable to an accumulation of secondary metabolites involved in plant defense strategies. Further significant differences were observed at wavelengths close to 550 nm in diseased vs. asymptomatic plants. We evaluated several machine learning paradigms to differentiate the plant groups. The Naïve Bayes (NB) algorithm, combined with the most discriminant variables among vegetation indices and spectral narrow bands, provided the best results, with an overall accuracy of 90% and 75% in healthy vs. diseased and healthy vs. asymptomatic plants, respectively. To our knowledge, this study represents the first report on the possibility of using hyperspectral data for root rot disease diagnosis in woody plants. Although further validation studies are required, the spectral reflectance technique, possibly implemented on unmanned aerial vehicles (UAVs), could be a promising tool for cost-effective, non-invasive Armillaria disease diagnosis and mapping in the field, contributing to a significant step forward in precision viticulture.
(This article belongs to the Special Issue Pattern Analysis in Remote Sensing)
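
A minimal scikit-learn sketch of the classification step follows, assuming a Gaussian Naïve Bayes model over a few spectral features; the three features and their values below are synthetic placeholders, not the paper's measured vegetation indices or narrow bands.

```python
# A minimal Gaussian Naive Bayes classifier on toy per-leaf spectral features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n = 120
# Synthetic stand-ins for per-leaf features, e.g., NIR mean, red edge (705 nm),
# and green (550 nm) reflectance; means/spreads are invented for illustration.
healthy = rng.normal([0.55, 0.30, 0.12], 0.04, size=(n, 3))
diseased = rng.normal([0.45, 0.26, 0.10], 0.04, size=(n, 3))
X = np.vstack([healthy, diseased])
y = np.array([0] * n + [1] * n)             # 0 = healthy, 1 = diseased

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```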

15 pages, 77805 KiB  
Article
Damage-Map Estimation Using UAV Images and Deep Learning Algorithms for Disaster Management System
by Dai Quoc Tran, Minsoo Park, Daekyo Jung and Seunghee Park
Remote Sens. 2020, 12(24), 4169; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12244169 - 19 Dec 2020
Cited by 32 | Viewed by 5103
Abstract
Estimating the damaged area after a forest fire is important for responding to this natural catastrophe. With the support of aerial remote sensing, typically with unmanned aerial vehicles (UAVs), aerial imagery of forest-fire areas can be easily obtained; however, retrieving the burnt area from the imagery is still a challenge. We implemented a new approach for segmenting burnt areas from UAV images using deep learning algorithms. First, the data were collected from a forest fire in Andong, the Republic of Korea, in April 2020. Then, the proposed two-patch-level deep learning models were implemented. A patch-level 1 network was trained using the UNet++ architecture, and its output prediction was used as a position input for the second network, which used UNet; the second network took the reference position from the first network as its input and refined the results. Finally, the performance of our proposed method was compared with a state-of-the-art image segmentation algorithm to demonstrate its robustness. Comparative research on the loss functions was also performed. Our proposed approach demonstrated its effectiveness in extracting burnt areas from UAV images and can contribute to estimating maps of the areas damaged by forest fires.
(This article belongs to the Special Issue Pattern Analysis in Remote Sensing)
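
The two-stage idea, where a second network refines the first network's prediction by receiving it as an extra input channel, can be sketched in PyTorch as follows; the tiny convolutional stacks are placeholders standing in for the paper's UNet++ (stage 1) and UNet (stage 2).

```python
# A schematic two-stage segmentation pipeline: stage 2 consumes the RGB patch
# plus stage 1's coarse mask as a fourth input channel and refines it.
import torch
import torch.nn as nn

class TinySeg(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),     # 1-channel burnt-area logit
        )
    def forward(self, x):
        return self.net(x)

stage1 = TinySeg(in_ch=3)                   # coarse position network
stage2 = TinySeg(in_ch=4)                   # refinement: RGB + stage-1 mask

img = torch.randn(2, 3, 128, 128)           # toy UAV image patches
coarse = torch.sigmoid(stage1(img))         # coarse burnt-area probability
refined = stage2(torch.cat([img, coarse], dim=1))  # refine with position prior
print(refined.shape)                        # -> torch.Size([2, 1, 128, 128])
```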
