Deep Learning Techniques Applied in Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: 31 May 2024 | Viewed by 8731

Special Issue Editors


Dr. Mohamed Lamine Mekhalfi
Guest Editor
Technologies of Vision, Digital Industry Center, Fondazione Bruno Kessler, Trento, Italy
Interests: pattern recognition; computer vision; remote sensing

Dr. Mawloud Guermoui
Guest Editor
Centre de Développement des Energies Renouvelables (CDER), Unité de Recherche Appliquée en Energies Renouvelables, Ghardaia, Algeria
Interests: machine learning; pattern recognition; classification; computer vision; image processing

Special Issue Information

Dear Colleagues,

Remote sensing has become a highly active area of research over the past few years. Today, unlike in the past, large amounts of data are accessible, and many applications now arise at the crossover of imaging, sensing, machine vision, and artificial intelligence.

In particular, the advent of deep learning has marked the onset of a new era of research in machine vision in general and remote sensing in particular, owing mainly to unprecedented performance across a variety of applications. Nevertheless, there is always room for improvement. To this end, this Special Issue covers a wide range of applications of deep learning methodologies to remote sensing data. For instance, potential submissions may cover (but are not limited to) the following deep learning-driven topics:

  • Image classification, segmentation, fusion, super-resolution and inpainting;
  • Adversarial techniques;
  • Area reconstruction or removal;
  • UAV image analysis;
  • Change detection;
  • Remote sensing of aquatic environments;
  • Remote sensing in forestry;
  • Object detection/tracking/re-identification;
  • Assessing natural hazards;
  • Image captioning;
  • Visual question answering;
  • Image retrieval (including text to image retrieval and vice versa);
  • Land use/cover;
  • Urban development assessment;
  • Soil analysis (e.g., mineral estimation);
  • Precision farming;
  • Remote sensing datasets;
  • Domain adaptation;
  • Internet of Things, big data;
  • Remote sensing data protection/security;
  • Forecasting (e.g., weather data);
  • Remote sensing for renewable energy assessment (e.g., PV power/solar radiation).

Dr. Mohamed Lamine Mekhalfi
Prof. Dr. Yakoub Bazi
Dr. Edoardo Pasolli
Dr. Mawloud Guermoui
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • deep learning
  • data analysis
  • modeling
  • machine vision
  • artificial intelligence

Published Papers (7 papers)


Research


21 pages, 6228 KiB  
Article
An Improved SAR Ship Classification Method Using Text-to-Image Generation-Based Data Augmentation and Squeeze and Excitation
by Lu Wang, Yuhang Qi, P. Takis Mathiopoulos, Chunhui Zhao and Suleman Mazhar
Remote Sens. 2024, 16(7), 1299; https://0-doi-org.brum.beds.ac.uk/10.3390/rs16071299 - 07 Apr 2024
Viewed by 534
Abstract
Synthetic aperture radar (SAR) plays a crucial role in maritime surveillance due to its capability for all-weather, all-day operation. However, SAR ship recognition faces challenges, primarily due to the imbalance and inadequacy of ship samples in publicly available datasets, along with the presence of numerous outliers. To address these issues, this paper proposes a SAR ship classification method based on text-generated images to tackle dataset imbalance. Firstly, an image generation module is introduced to augment SAR ship data. This method generates images from textual descriptions to overcome the problem of insufficient samples and the imbalance between ship categories. Secondly, given the limited information content in the black background of SAR ship images, the Tokens-to-Token Vision Transformer (T2T-ViT) is employed as the backbone network. This approach effectively combines local information on the basis of global modeling, facilitating the extraction of features from SAR images. Finally, a Squeeze-and-Excitation (SE) module is incorporated into the backbone network to enhance the network’s focus on essential features, thereby improving the model’s generalization ability. To assess the model’s effectiveness, extensive experiments were conducted on the OpenSARShip2.0 and FUSAR-Ship datasets. The performance evaluation results indicate that the proposed method achieves higher classification accuracy in the context of imbalanced datasets compared to eight existing methods.
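For readers unfamiliar with the Squeeze-and-Excitation mechanism the abstract refers to, the following is a minimal PyTorch sketch of a generic SE block; the layer sizes are illustrative and do not reflect the authors' configuration.

```python
# Minimal sketch of a Squeeze-and-Excitation (SE) block of the kind the
# paper adds to its T2T-ViT backbone; channel and reduction sizes are
# illustrative assumptions, not the authors' settings.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(                     # excitation: channel bottleneck
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweight channels by importance

feats = torch.randn(2, 64, 32, 32)                   # e.g. SAR feature maps
print(SEBlock(64)(feats).shape)                      # torch.Size([2, 64, 32, 32])
```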

17 pages, 6789 KiB  
Article
Multi-Year Time Series Transfer Learning: Application of Early Crop Classification
by Matej Račič, Krištof Oštir, Anže Zupanc and Luka Čehovin Zajc
Remote Sens. 2024, 16(2), 270; https://0-doi-org.brum.beds.ac.uk/10.3390/rs16020270 - 10 Jan 2024
Cited by 2 | Viewed by 1198
Abstract
Crop classification is an important task in remote sensing with many applications, such as estimating yields, detecting crop diseases and pests, and ensuring food security. In this study, we combined knowledge from remote sensing, machine learning, and agriculture to investigate the application of transfer learning with a transformer model for variable-length satellite image time series (SITS). The objective was to produce a map of agricultural land, reduce the required interventions, and limit in-field visits. Specifically, we aimed to provide reliable agricultural land class predictions in a timely manner and quantify the number of reference parcels necessary to achieve these outcomes. Our dataset consisted of Sentinel-2 satellite imagery and reference crop labels for Slovenia spanning the years 2019, 2020, and 2021. We evaluated adaptability through fine-tuning in a real-world scenario of early crop classification with limited up-to-date reference data. The base model trained on a different year achieved an average F1 score of 82.5% for the target year without any reference data from the target year. To increase accuracy with a new model trained from scratch, an average of 48,000 samples is required from the target year. Using transfer learning, the pre-trained models can be efficiently adapted to an unknown year, requiring fewer than 0.3% (1500) of the dataset's samples. Building on this, we show that transfer learning can outperform the baseline in the context of early classification with only 9% of the data after 210 days in the year.
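The fine-tuning setup described above can be pictured with a short, hedged sketch: an encoder pre-trained on a previous year is frozen, and only a new classification head is retrained on a small target-year sample. The tiny GRU model and random data below are stand-ins, not the authors' transformer or dataset.

```python
# Hedged sketch of the transfer-learning step: freeze a pre-trained
# time-series encoder, retrain only a fresh head on target-year labels.
# Model, sizes, and data are illustrative assumptions.
import torch
import torch.nn as nn

class SITSClassifier(nn.Module):
    def __init__(self, n_bands=10, hidden=64, n_classes=12):
        super().__init__()
        self.encoder = nn.GRU(n_bands, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                       # x: (batch, time, bands)
        _, h = self.encoder(x)
        return self.head(h[-1])

model = SITSClassifier()                        # pretend this was trained on 2019
for p in model.encoder.parameters():            # freeze the pre-trained encoder
    p.requires_grad = False
model.head = nn.Linear(64, 12)                  # fresh head for the target year

opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
x = torch.randn(32, 40, 10)                     # toy batch; ~1500 parcels in practice
y = torch.randint(0, 12, (32,))
for _ in range(5):                              # brief fine-tuning loop
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
```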

20 pages, 10636 KiB  
Article
A Near Real-Time Mapping of Tropical Forest Disturbance Using SAR and Semantic Segmentation in Google Earth Engine
by John Burns Kilbride, Ate Poortinga, Biplov Bhandari, Nyein Soe Thwal, Nguyen Hanh Quyen, Jeff Silverman, Karis Tenneson, David Bell, Matthew Gregory, Robert Kennedy and David Saah
Remote Sens. 2023, 15(21), 5223; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15215223 - 03 Nov 2023
Cited by 2 | Viewed by 1799
Abstract
Satellite-based forest alert systems are an important tool for ecosystem monitoring, planning conservation, and increasing public awareness of forest cover change. Continuous monitoring in tropical regions, such as those experiencing pronounced monsoon seasons, can be complicated by spatially extensive and persistent cloud cover. One solution is to use Synthetic Aperture Radar (SAR) imagery acquired by the European Space Agency’s Sentinel-1A and B satellites, which collect C-band radar data that penetrates cloud cover and can be acquired during the day or night. One challenge associated with the operational use of radar imagery is that the speckle in the backscatter values can complicate traditional pixel-based analysis approaches. A potential solution is to use deep learning semantic segmentation models that can capture predictive features that are more robust to pixel-level noise. In this analysis, we present a prototype SAR-based forest alert system that utilizes deep learning classifiers, deployed using the Google Earth Engine cloud computing platform, to identify forest cover change in near real time over two Cambodian wildlife sanctuaries. By leveraging a pre-existing forest cover change dataset derived from multispectral Landsat imagery, we present a method for efficiently developing a SAR-based semantic segmentation dataset. In practice, the proposed framework achieved performance comparable to an existing forest alert system while offering more flexibility and ease of development from an operational standpoint.
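As a hedged illustration of the input side of such a system, the sketch below uses the Earth Engine Python API to assemble a speckle-reduced Sentinel-1 composite; the area of interest and date range are placeholders, and the segmentation model itself is omitted.

```python
# Minimal Earth Engine sketch of the Sentinel-1 input pipeline: filter
# C-band GRD scenes over an area of interest and median-composite them.
# Assumes prior `earthengine authenticate`; AOI and dates are hypothetical.
import ee

ee.Initialize()

aoi = ee.Geometry.Rectangle([104.5, 12.0, 105.5, 13.0])  # placeholder Cambodian AOI

s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(aoi)
      .filterDate('2021-01-01', '2021-02-01')
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
      .select(['VV', 'VH']))

# a temporal median damps speckle before imagery is handed to a
# semantic segmentation classifier (not shown here)
composite = s1.median().clip(aoi)
print(composite.bandNames().getInfo())
```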

17 pages, 1045 KiB  
Article
Ship Detection via Multi-Scale Deformation Modeling and Fine Region Highlight-Based Loss Function
by Chao Li, Jianming Hu, Dawei Wang, Hanfu Li and Zhile Wang
Remote Sens. 2023, 15(17), 4337; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15174337 - 03 Sep 2023
Viewed by 1115
Abstract
Ship detection in optical remote sensing images plays a vital role in numerous civil and military applications, encompassing maritime rescue, port management, and sea area surveillance. However, the multi-scale and deformation characteristics of ships in remote sensing images, as well as complex scene interferences such as varying degrees of cloud, obvious shadows, and complex port facilities, pose challenges for ship detection performance. To address these problems, we propose a novel ship detection method combining multi-scale deformation modeling with a fine region highlight-based loss function. First, a visual saliency extraction network based on multiple receptive fields and deformable convolution is proposed, which employs multiple receptive fields to mine the differences between the target and the background and accurately extracts the complete features of the target through deformable convolution, thus improving the ability to distinguish the target from a complex background. Then, a customized loss function for fine target region highlighting is employed, which comprehensively considers the brightness, contrast, and structural characteristics of ship targets, thus improving classification performance in complex scenes with interferences. The experimental results on a high-quality ship dataset indicate that our method achieves state-of-the-art performance compared to the eleven detection models considered.
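The deformable convolution at the heart of the saliency network can be sketched with torchvision's generic DeformConv2d, in which a plain convolution predicts per-location sampling offsets; this is a stand-in for, not a reproduction of, the authors' module.

```python
# Sketch of deformable convolution, the mechanism used to follow ship
# deformation; torchvision's DeformConv2d serves as a generic stand-in.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    def __init__(self, c_in: int, c_out: int, k: int = 3):
        super().__init__()
        # a plain conv predicts a (dy, dx) offset per kernel tap per location
        self.offset = nn.Conv2d(c_in, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform = DeformConv2d(c_in, c_out, kernel_size=k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))    # sampling grid bends toward the target

x = torch.randn(1, 32, 64, 64)
print(DeformBlock(32, 64)(x).shape)              # torch.Size([1, 64, 64, 64])
```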

25 pages, 2163 KiB  
Article
LPMSNet: Location Pooling Multi-Scale Network for Cloud and Cloud Shadow Segmentation
by Xin Dai, Kai Chen, Min Xia, Liguo Weng and Haifeng Lin
Remote Sens. 2023, 15(16), 4005; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15164005 - 12 Aug 2023
Cited by 6 | Viewed by 928
Abstract
Cloud and cloud shadow segmentation is among the most difficult problems in contemporary satellite image processing. Due to substantial background noise interference, existing cloud and cloud shadow segmentation techniques suffer from false detections and missed detections. We propose a Location Pooling Multi-Scale Network (LPMSNet) in this study. A residual network is utilised as the backbone in this method to acquire semantic information at various levels. Simultaneously, the Location Attention Multi-Scale Aggregation Module (LAMA) is introduced to obtain the image’s multi-scale information, and the Channel Spatial Attention Module (CSA) is introduced to boost the network’s focus on segmentation targets. Finally, in view of the problem that the edge details of clouds and cloud shadows are easily lost, this work designs the Scale Fusion Restoration Module (SFR), which performs image upsampling and recovers edge detail information for both clouds and cloud shadows. The mean intersection over union (MIoU) accuracy of this network reached 94.36% and 81.60% on the Cloud and Cloud Shadow Dataset and the five-category L8SPARCS dataset, respectively. On the two-category HRC-WHU Dataset, the network's intersection over union (IoU) accuracy reached 90.51%. In addition, on the Cloud and Cloud Shadow Dataset, our network achieves precision (P), recall (R), and F1 scores of 97.17%, 96.83%, and 97.00%, respectively, in the cloud segmentation task, and 95.70%, 96.38%, and 96.04%, respectively, in the cloud shadow segmentation task. Therefore, this method has a significant advantage over current cloud and cloud shadow segmentation methods.
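For reference, the mean intersection over union (MIoU) figures quoted above are conventionally computed from a confusion matrix as in the following sketch; this is the standard metric definition, not the authors' evaluation code.

```python
# Standard MIoU computation from predicted and ground-truth label maps.
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, n_classes: int) -> float:
    # confusion matrix: rows = ground truth, columns = prediction
    cm = np.bincount(n_classes * gt.ravel() + pred.ravel(),
                     minlength=n_classes ** 2).reshape(n_classes, n_classes)
    inter = np.diag(cm)                                  # per-class true positives
    union = cm.sum(0) + cm.sum(1) - inter                # pred + gt - intersection
    return float(np.mean(inter / np.maximum(union, 1)))  # guard empty classes

pred = np.random.randint(0, 3, (256, 256))               # toy 3-class maps
gt = np.random.randint(0, 3, (256, 256))
print(mean_iou(pred, gt, 3))
```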

22 pages, 3814 KiB  
Article
MBCNet: Multi-Branch Collaborative Change-Detection Network Based on Siamese Structure
by Dehao Wang, Liguo Weng, Min Xia and Haifeng Lin
Remote Sens. 2023, 15(9), 2237; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15092237 - 23 Apr 2023
Cited by 11 | Viewed by 1570
Abstract
The change-detection task is essentially a binary semantic segmentation task separating changed and unchanged regions. However, it is much more difficult than simple binary tasks, as the changing areas typically include multiple terrains such as factories, farmland, roads, buildings, and mining areas, which places high demands on the network's feature-extraction ability. To this end, we propose a multi-branch collaborative change-detection network based on a Siamese structure (MBCNet). In the model, three branches, the difference branch, global branch, and similarity branch, are constructed to refine and extract semantic information from remote-sensing images. Four modules, a cross-scale feature-attention module (CSAM), a global semantic filtering module (GSFM), a double-branch information-fusion module (DBIFM), and a similarity-enhancement module (SEM), are proposed to help the three branches extract semantic information better. The CSAM module extracts the change-related semantic information in the remote-sensing image from the difference branch, the GSFM module filters the rich semantic information in the remote-sensing image, and the DBIFM module fuses the semantic information extracted from the difference branch and the global branch. Finally, the SEM module uses the similarity information extracted by the similarity branch to correct the details of the feature map in the feature-recovery stage.
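The Siamese backbone the abstract describes can be summarized in a short sketch: the two acquisition dates pass through a weight-shared encoder, and a difference branch operates on the absolute feature difference. The toy encoder below is illustrative only; MBCNet's branches and modules are considerably richer.

```python
# Hedged sketch of a Siamese change-detection skeleton: a shared encoder
# processes both dates, and change logits come from the feature difference.
import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    def __init__(self, c_in: int = 3, c_feat: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(                  # weights shared across dates
            nn.Conv2d(c_in, c_feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_feat, c_feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(c_feat, 2, 1)            # change / no-change logits

    def forward(self, t1, t2):
        f1, f2 = self.encoder(t1), self.encoder(t2)
        return self.head(torch.abs(f1 - f2))           # a minimal "difference branch"

t1 = torch.randn(1, 3, 128, 128)                       # image at time 1
t2 = torch.randn(1, 3, 128, 128)                       # image at time 2
print(SiameseChangeNet()(t1, t2).shape)                # torch.Size([1, 2, 128, 128])
```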

Other


15 pages, 4440 KiB  
Technical Note
Multi-Feature Dynamic Fusion Cross-Domain Scene Classification Model Based on Lie Group Space
by Chengjun Xu, Jingqian Shu and Guobin Zhu
Remote Sens. 2023, 15(19), 4790; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15194790 - 30 Sep 2023
Cited by 1 | Viewed by 734
Abstract
To address the problem of the expensive and time-consuming annotation of high-resolution remote sensing images (HRRSIs), scholars have proposed cross-domain scene classification models, which can utilize learned knowledge to classify unlabeled data samples. Because of the significant distribution difference between a source domain (training sample set) and a target domain (test sample set), scholars have proposed domain adaptation models based on deep learning to reduce these differences. However, the existing models have the following shortcomings: (1) insufficient learning of feature information, resulting in feature loss and restricting the spatial extent of domain-invariant features; (2) models easily focus on background feature information, resulting in negative transfer; (3) the relationship between the marginal distribution and the conditional distribution is not fully considered, and the weight parameters between them are set manually, which is time-consuming and may fall into a local optimum. To address these problems, this study proposes a novel remote sensing cross-domain scene classification model based on Lie group spatial attention and adaptive multi-feature distribution. Concretely, the model first introduces Lie group feature learning and maps the samples to the Lie group manifold space. By learning and fusing features at different levels and scales, richer features are obtained, and the spatial scope of domain-invariant features is expanded. In addition, we design an attention mechanism based on dynamic feature fusion alignment, which effectively enhances the weight of key regions and dynamically balances the importance of the marginal and conditional distributions. Extensive experiments are conducted on three publicly available and challenging datasets, and the results show the advantages of our proposed method over other state-of-the-art deep domain adaptation methods.
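The dynamic balancing of marginal and conditional distributions mentioned above can be sketched with a common maximum mean discrepancy (MMD) formulation, where a learnable factor replaces the hand-set weight; this is a generic stand-in, not the authors' Lie-group loss.

```python
# Sketch of dynamic marginal/conditional balancing: a factor mu mixes a
# whole-domain alignment term with per-class terms. RBF-MMD is a common
# discrepancy choice and an assumption here, not the paper's exact loss.
import torch

def rbf_mmd(x, y, sigma=1.0):
    """Maximum mean discrepancy between two samples, Gaussian kernel."""
    k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

torch.manual_seed(0)
src = torch.randn(100, 16)                     # source-domain features
tgt = torch.randn(100, 16) + 0.5               # shifted target-domain features
src_cls = [src[:50], src[50:]]                 # features grouped by (pseudo-)label
tgt_cls = [tgt[:50], tgt[50:]]

mu = torch.tensor(0.0, requires_grad=True)     # balance factor, learnable here
marginal = rbf_mmd(src, tgt)                   # whole-domain gap
conditional = torch.stack([rbf_mmd(s, t)       # average per-class gap
                           for s, t in zip(src_cls, tgt_cls)]).mean()
loss = torch.sigmoid(mu) * marginal + (1 - torch.sigmoid(mu)) * conditional
print(float(loss))
```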
