
Artificial Intelligence-Based Learning Approaches for Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (31 July 2022) | Viewed by 50227

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Dr. Gwanggil Jeon
Guest Editor
Department of Embedded Systems Engineering, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon 22012, Republic of Korea
Interests: remote sensing; deep learning; artificial intelligence; image processing; signal processing

Special Issue Information

Dear Colleagues,

Remote sensing is a tool for comprehending the Earth and supporting human–Earth communications. Recent advances in remote sensing have enabled high-resolution monitoring of the Earth on a global scale, providing a massive amount of Earth observation data. These data must be processed with new levels of accuracy, complexity, security, and reliability. Therefore, applicable and consistent research on artificial intelligence-based learning methods and their application to image processing is needed for remote sensing. These methods can be general or specific tools of artificial intelligence, including regression models, neural networks, decision trees, information retrieval, reinforcement learning, graphical models, and decision processes. We trust that artificial intelligence, deep learning, and data science methods will provide promising tools for overcoming many challenging issues in remote sensing in terms of accuracy and reliability. This Special Issue is the second edition of “Advanced Machine Learning for Time Series Remote Sensing Data Analysis”. It aims to report the latest advances and trends in advanced artificial intelligence and data science techniques applied to remote sensing data processing. Papers of both a theoretical and an applied nature, as well as contributions presenting new advanced artificial intelligence and data science techniques for the remote sensing research community, are welcome.

Dr. Gwanggil Jeon
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI (architectures, models, learning, etc.) and data science approach for remote sensing
  • Explainable and interpretable machine learning
  • HPC-based and distributed machine learning for large-scale image analysis
  • Reinforcement learning for remote sensing
  • Information retrieval for remote sensing
  • Big data analytics for beyond 5G
  • Edge/fog computing for remote sensing
  • IoT data analytics in remote sensing
  • Data-driven applications in remote sensing

Published Papers (17 papers)


Editorial

Jump to: Research

4 pages, 172 KiB  
Editorial
Artificial Intelligence-Based Learning Approaches for Remote Sensing
by Gwanggil Jeon
Remote Sens. 2022, 14(20), 5203; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14205203 - 18 Oct 2022
Cited by 1 | Viewed by 1257
Abstract
Remote sensing (RS) is a method for understanding the ground and for facilitating human–ground communications [...] Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)

Research

Jump to: Editorial

18 pages, 5473 KiB  
Article
Concrete Bridge Defects Identification and Localization Based on Classification Deep Convolutional Neural Networks and Transfer Learning
by Hajar Zoubir, Mustapha Rguig, Mohamed El Aroussi, Abdellah Chehri, Rachid Saadane and Gwanggil Jeon
Remote Sens. 2022, 14(19), 4882; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14194882 - 30 Sep 2022
Cited by 14 | Viewed by 2617
Abstract
Conventional practices of bridge visual inspection present several limitations, including a tedious process of analyzing images manually to identify potential damages. Vision-based techniques, particularly Deep Convolutional Neural Networks, have been widely investigated to automatically identify, localize, and quantify defects in bridge images. However, massive datasets with different annotation levels are required to train these deep models. This paper presents a dataset of more than 6900 images featuring three common defects of concrete bridges (i.e., cracks, efflorescence, and spalling). To overcome the challenge of limited training samples, three Transfer Learning approaches in fine-tuning the state-of-the-art Visual Geometry Group network were studied and compared to classify the three defects. The best-proposed approach achieved a high testing accuracy (97.13%), combined with high F1-scores of 97.38%, 95.01%, and 97.35% for cracks, efflorescence, and spalling, respectively. Furthermore, the effectiveness of interpretable networks was explored in the context of weakly supervised semantic segmentation using image-level annotations. Two gradient-based backpropagation interpretation techniques were used to generate pixel-level heatmaps and localize defects in test images. Qualitative results showcase the potential use of interpretation maps to provide relevant information on defect localization in a weak supervision framework. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)
Show Figures

Figure 1
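The transfer-learning recipe this abstract describes (a pretrained backbone reused as a fixed feature extractor, with only a new classification head trained for the three defect classes) can be sketched generically. The following is an illustrative NumPy toy, not the authors' fine-tuned VGG pipeline: the synthetic, class-separable "backbone features" stand in for real VGG activations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for frozen-backbone features: 90 images, 16-dim vectors,
# with class-informative dimensions mimicking a good pretrained extractor.
y = np.repeat(np.arange(3), 30)            # crack / efflorescence / spalling
X = rng.normal(size=(90, 16))
X[np.arange(90), y] += 4.0                 # separability injected for the demo

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W = np.zeros((16, 3))                      # the only trainable parameters
onehot = np.eye(3)[y]
for _ in range(300):                       # gradient descent on cross-entropy
    p = softmax(X @ W)
    W -= 0.1 * X.T @ (p - onehot) / len(X)

acc = (softmax(X @ W).argmax(axis=1) == y).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

With real data, `X` would hold the frozen convolutional features, and deeper fine-tuning (as studied in the paper) would additionally unfreeze upper backbone layers.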

33 pages, 97899 KiB  
Article
Using Multi-Source Real Landform Data to Predict and Analyze Intercity Remote Interference of 5G Communication with Ducting and Troposcatter Effects
by Kai Yang, Xing Guo, Zhensen Wu, Jiaji Wu, Tao Wu, Kun Zhao, Tan Qu and Longxiang Linghu
Remote Sens. 2022, 14(18), 4515; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14184515 - 09 Sep 2022
Cited by 4 | Viewed by 1740
Abstract
At present, 5G base stations are densely deployed in major cities, owing to high user concentrations and the large demand for services in urban hotspot areas. Moreover, 5G communication requires more accurate monitoring of communication propagation loss (PL). Low-rise areas, such as suburbs and rural areas, are prone to forming relatively stable tropospheric ducts, which can bend the signal to the surface in the duct-trapping layer for multiple reflections. Due to the random flow of the atmospheric air mass, each reflection of the communication signal is re-scattered in the troposphere through the top of the duct layer, thereby expanding the propagation range of the signal and changing the expected effect of radio wave propagation. If ducting and troposcatter effects occur in the 5G base station antenna layer, co-channel interference (CCI) could arise, affecting the quality of electromagnetic propagation. Urban links in plain areas have no major terrain obstacles, but ground fluctuations and land cover scattering have a greater impact on signal scattering at the bottom of the duct. On the basis of forward-propagation theory, this paper incorporates factors such as duct forecasts computed from real weather parameters, terrain, and land cover-type distributions to evaluate the CCI of over-the-horizon communications on intercity links. Based on 1300 sets of randomly generated terrains and landforms, two deep learning (DL) models were used to predict the PL of over-the-horizon communications between cities in a land-based ducting environment. The accuracy of LSTM prediction could reach 98.4%. The verification of PL prediction using DL in this paper allows for quick and efficient prediction of PL in the land-based ducting of intercity links using land cover characteristics. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)
Show Figures

Figure 1
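The paper's PL model accounts for ducting and troposcatter effects; as a point of reference only, the free-space baseline against which any duct-induced enhancement or loss is measured follows the Friis relation, PL(dB) ≈ 32.44 + 20·log10(d/km) + 20·log10(f/MHz):

```python
import math

def free_space_path_loss_db(d_km: float, f_mhz: float) -> float:
    """Free-space path loss in dB (Friis equation; d in km, f in MHz)."""
    return 32.44 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

# 10 km link at 3.5 GHz (a typical 5G mid-band carrier, chosen for illustration)
print(round(free_space_path_loss_db(10, 3500), 1))   # ≈123.3 dB
```

Ducted propagation can deliver far less loss than this baseline at over-the-horizon distances, which is exactly why intercity CCI becomes a concern.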

25 pages, 3806 KiB  
Article
SORAG: Synthetic Data Over-Sampling Strategy on Multi-Label Graphs
by Yijun Duan, Xin Liu, Adam Jatowt, Hai-tao Yu, Steven Lynden, Kyoung-Sook Kim and Akiyoshi Matono
Remote Sens. 2022, 14(18), 4479; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14184479 - 08 Sep 2022
Cited by 4 | Viewed by 1632
Abstract
In many real-world networks of interest in the field of remote sensing (e.g., public transport networks), nodes are associated with multiple labels, and node classes are imbalanced; that is, some classes have significantly fewer samples than others. However, the research problem of imbalanced multi-label graph node classification remains unexplored. This non-trivial task challenges the existing graph neural networks (GNNs) because the majority class can dominate the loss functions of GNNs and result in the overfitting of the majority class features and label correlations. On non-graph data, minority over-sampling methods (such as the synthetic minority over-sampling technique and its variants) have been demonstrated to be effective for the imbalanced data classification problem. This study proposes and validates a new hypothesis: over-sampling with unlabeled data, although meaningless for imbalanced non-graph data, can facilitate the representation learning of imbalanced graphs through the feature propagation and topological interplay mechanisms between graph nodes. Furthermore, we determine empirically that ensemble data synthesis through the creation of virtual minority samples in the central region of a minority class and the generation of virtual unlabeled samples in the boundary region between minority and majority classes is the best practice for the imbalanced multi-label graph node classification task. Our proposed novel data over-sampling framework is evaluated using multiple real-world network datasets, and it outperforms diverse, strong benchmark models by a large margin. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)
Show Figures

Figure 1
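The classical, non-graph baseline the abstract references (SMOTE) creates synthetic minority samples by interpolating between a minority point and a minority neighbour. A minimal illustrative version (k = 1 nearest neighbour; not the paper's graph-aware SORAG method) looks like this:

```python
import numpy as np

def smote_like(minority: np.ndarray, n_new: int, rng) -> np.ndarray:
    """Create synthetic minority samples by interpolating between a random
    minority point and its nearest minority neighbour (SMOTE with k=1)."""
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        d[i] = np.inf                       # exclude the point itself
        j = d.argmin()                      # nearest minority neighbour
        t = rng.random()                    # interpolation factor in [0, 1)
        synth.append(minority[i] + t * (minority[j] - minority[i]))
    return np.array(synth)

rng = np.random.default_rng(1)
minority = rng.normal(size=(5, 2))          # a toy 5-sample minority class
new = smote_like(minority, 8, rng)
print(new.shape)                            # 8 synthetic 2-D samples
```

SORAG extends this idea by additionally synthesizing *unlabeled* boundary samples and letting GNN feature propagation exploit them, which plain SMOTE cannot do.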

20 pages, 4671 KiB  
Article
M-O SiamRPN with Weight Adaptive Joint MIoU for UAV Visual Localization
by Kailin Wen, Jie Chu, Jiayan Chen, Yu Chen and Jueping Cai
Remote Sens. 2022, 14(18), 4467; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14184467 - 07 Sep 2022
Cited by 5 | Viewed by 1612
Abstract
Vision-based unmanned aerial vehicle (UAV) localization is capable of providing real-time coordinates independently during GNSS interruption, which is important in security, agriculture, industrial mapping, and other fields. However, there are problems with shadows, the tiny size of targets, interfering objects, and motion blurred edges in aerial images captured by UAVs. Therefore, a multi-order Siamese region proposal network (M-O SiamRPN) with a weight adaptive joint multiple intersection over union (MIoU) loss function is proposed to overcome the above limitations. The normalized covariance of 2-O information based on 1-O features is introduced in the Siamese convolutional neural network to improve the representation and sensitivity of the network to edges. We innovatively propose a spatial continuity criterion to select 1-O features with richer local details for the calculation of 2-O information, to ensure the effectiveness of M-O features. To reduce the effect of unavoidable positive and negative sample imbalance in target detection, weight adaptive coefficients were designed to automatically modify the penalty factor of cross-entropy loss. Moreover, the MIoU was constructed to constrain the anchor box regression from multiple perspectives. In addition, we proposed an improved Wallis shadow automatic compensation method to pre-process aerial images, providing the basis for subsequent image matching procedures. We also built a consumer-grade UAV acquisition platform to construct an aerial image dataset for experimental validation. The results show that our framework achieved excellent performance for each quantitative and qualitative metric, with the highest precision being 0.979 and a success rate of 0.732. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)
Show Figures

Graphical abstract
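The MIoU loss constrains anchor-box regression "from multiple perspectives"; its common building block is the plain axis-aligned intersection over union, sketched here for reference (the paper's full multi-IoU formulation is not reproduced):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))   # intersection 1, union 7 -> 1/7
```

IoU-based regression losses (typically 1 − IoU, or variants thereof) are preferred over coordinate-wise losses because they directly optimize the overlap metric used at evaluation time.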

21 pages, 12944 KiB  
Article
Spectral-Spatial Interaction Network for Multispectral Image and Panchromatic Image Fusion
by Zihao Nie, Lihui Chen, Seunggil Jeon and Xiaomin Yang
Remote Sens. 2022, 14(16), 4100; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14164100 - 21 Aug 2022
Cited by 5 | Viewed by 1615
Abstract
Recently, with the rapid development of deep learning (DL), an increasing number of DL-based methods are applied in pansharpening. Benefiting from the powerful feature extraction capability of deep learning, DL-based methods have achieved state-of-the-art performance in pansharpening. However, most DL-based methods simply fuse multi-spectral (MS) images and panchromatic (PAN) images by concatenating, which cannot make full use of the spectral information and spatial information of MS and PAN images, respectively. To address this issue, we propose a spectral-spatial interaction network (SSIN) for pansharpening. Different from previous works, we extract the features of PAN and MS, respectively, and then interact them repeatedly to incorporate spectral and spatial information progressively. In order to enhance the spectral-spatial information fusion, we further propose a spectral-spatial attention (SSA) module to yield a more effective spatial-spectral information transfer in the network. Extensive experiments on QuickBird, WorldView-4, and WorldView-2 images demonstrate that our SSIN significantly outperforms other methods in terms of both objective assessment and visual quality. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)
Show Figures

Figure 1
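The abstract does not detail the SSA module's internals; as a purely hypothetical illustration of the generic channel-attention mechanism such modules build on (a squeeze-and-excitation-style gate, not the paper's exact design), the idea is to pool each channel globally, pass the descriptors through a small bottleneck, and rescale channels by a sigmoid gate:

```python
import numpy as np

def channel_attention(feat: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-excitation-style gating: global-average-pool each channel,
    pass through a small bottleneck MLP, then rescale channels by a sigmoid
    gate. feat has shape (C, H, W); w1/w2 are the bottleneck weights."""
    squeeze = feat.mean(axis=(1, 2))                 # (C,) channel descriptors
    hidden = np.maximum(0.0, w1 @ squeeze)           # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid gate in (0, 1)
    return feat * gate[:, None, None]                # per-channel rescaling

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 4, 4))                    # toy feature map, C=8
w1 = rng.normal(size=(2, 8))                         # reduction C=8 -> 2
w2 = rng.normal(size=(8, 2))                         # expansion 2 -> C=8
out = channel_attention(feat, w1, w2)
print(out.shape)
```

A spectral-spatial variant would apply such gating along the channel (spectral) axis and a companion spatial map over (H, W), which is consistent with the abstract's description of transferring information between the two branches.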

25 pages, 3066 KiB  
Article
NeXtNow: A Convolutional Deep Learning Model for the Prediction of Weather Radar Data for Nowcasting Purposes
by Alexandra-Ioana Albu, Gabriela Czibula, Andrei Mihai, Istvan Gergely Czibula, Sorin Burcea and Abdelkader Mezghani
Remote Sens. 2022, 14(16), 3890; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14163890 - 11 Aug 2022
Cited by 7 | Viewed by 2177
Abstract
With the recent increase in the occurrence of severe weather phenomena, the development of accurate weather nowcasting is of paramount importance. Among the computational methods that are used to predict the evolution of weather, deep learning techniques offer a particularly appealing solution due to their capability for learning patterns from large amounts of data and their fast inference times. In this paper, we propose a convolutional network for weather forecasting that is based on radar product prediction. Our model (NeXtNow) adapts the ResNeXt architecture that has been proposed in the computer vision literature to solve the spatiotemporal prediction problem. NeXtNow consists of an encoder–decoder convolutional architecture, which maps radar measurements from the past onto radar measurements that are recorded in the future. The ResNeXt architecture was chosen as the basis for our network due to its flexibility, which allows for the design of models that can be customized for specific tasks by stacking multiple blocks of the same type. We validated our approach using radar data that were collected from the Romanian National Meteorological Administration (NMA) and the Norwegian Meteorological Institute (MET) and we empirically showed that the inclusion of multiple past radar measurements led to more accurate predictions further in the future. We also showed that NeXtNow could outperform XNow, which is a convolutional architecture that has previously been proposed for short-term radar data prediction and has a performance that is comparable to those of other similar approaches in the nowcasting literature. Compared to XNow, NeXtNow provided improvements to the critical success index that ranged from 1% to 17% and improvements to the root mean square error that ranged from 5% to 6%. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)
Show Figures

Graphical abstract
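The critical success index (CSI) used above to compare NeXtNow with XNow is a standard nowcasting skill score: the fraction of correctly forecast events among all occasions where the event was forecast or observed.

```python
def critical_success_index(hits: int, misses: int, false_alarms: int) -> float:
    """CSI (threat score) = hits / (hits + misses + false alarms).
    Correct negatives are deliberately excluded from the denominator."""
    denom = hits + misses + false_alarms
    return hits / denom if denom else 0.0

print(critical_success_index(30, 10, 10))   # 30 / 50 = 0.6
```

Because correct rejections do not enter the score, CSI is well suited to rare events such as heavy precipitation, where "no rain everywhere" would otherwise dominate accuracy.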

23 pages, 10291 KiB  
Article
RBFA-Net: A Rotated Balanced Feature-Aligned Network for Rotated SAR Ship Detection and Classification
by Zikang Shao, Xiaoling Zhang, Tianwen Zhang, Xiaowo Xu and Tianjiao Zeng
Remote Sens. 2022, 14(14), 3345; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14143345 - 11 Jul 2022
Cited by 25 | Viewed by 2219
Abstract
Ship detection with rotated bounding boxes in synthetic aperture radar (SAR) images is currently a hot topic. However, there are still some obstacles, such as multi-scale ships, misalignment between rotated anchors and features, and the opposite requirements for spatial sensitivity of regression tasks and classification tasks. In order to solve these problems, we propose a rotated balanced feature-aligned network (RBFA-Net) where three targeted networks are designed. They are, respectively, a balanced attention feature pyramid network (BAFPN), an anchor-guided feature alignment network (AFAN) and a rotational detection network (RDN). BAFPN is an improved FPN, with an attention module for fusing and enhancing multi-level features, by which we can decrease the negative impact of multi-scale ship feature differences. In AFAN, we adopt an alignment convolution layer to adaptively align the convolution features according to rotated anchor boxes for solving the misalignment problem. In RDN, we propose a task decoupling module (TDM) to adjust the feature maps, respectively, for solving the conflict between the regression task and classification task. In addition, we adopt a balanced L1 loss to balance the classification loss and regression loss. Based on the SAR rotation ship detection dataset, we conduct extensive ablation experiments and compare our RBFA-Net with eight other state-of-the-art rotated detection networks. The experiment results show that RBFA-Net achieves a 7.19% improvement in mean average precision over the second-best network. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)
Show Figures

Graphical abstract
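The "balanced L1 loss" the abstract adopts is, presumably, the commonly used form introduced by Libra R-CNN (an assumption; the paper may use a variant): it promotes gradients from inliers relative to smooth L1 while keeping the loss and its gradient continuous at |x| = 1.

```python
import math

def balanced_l1(x: float, alpha: float = 0.5, gamma: float = 1.5) -> float:
    """Balanced L1 loss (Libra R-CNN form). b is chosen so that
    alpha * ln(b + 1) = gamma, which makes the gradient continuous at |x| = 1;
    the constant c makes the loss itself continuous there."""
    b = math.exp(gamma / alpha) - 1.0
    ax = abs(x)
    if ax < 1.0:
        return alpha / b * (b * ax + 1.0) * math.log(b * ax + 1.0) - alpha * ax
    c = alpha / b * (b + 1.0) * math.log(b + 1.0) - alpha - gamma
    return gamma * ax + c

print(round(balanced_l1(0.0), 6))   # zero loss at zero regression error
```

Relative to smooth L1, the inner branch grows faster near moderate errors, which rebalances the contribution of "easy" samples against outliers, the stated goal of balancing classification and regression losses.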

14 pages, 3019 KiB  
Article
A Classifying-Inversion Method of Offshore Atmospheric Duct Parameters Using AIS Data Based on Artificial Intelligence
by Jie Han, Jiaji Wu, Lijun Zhang, Hongguang Wang, Qinglin Zhu, Chao Zhang, Hui Zhao and Shoubao Zhang
Remote Sens. 2022, 14(13), 3197; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14133197 - 03 Jul 2022
Cited by 8 | Viewed by 1689
Abstract
Atmospheric duct parameters inversion is an important aspect of microwave-band radar and communication system performance evaluation. AIS (Automatic Identification System) is one of the signal sources used for atmospheric duct parameters inversion. Before the inversion of atmospheric duct parameters, determining the type of atmospheric duct plays an important role in the inversion results, but current inversion methods ignore this point. We outline a classifying-inversion method of atmospheric duct parameters using AIS signals combined with artificial intelligence. The method consists of an atmospheric duct classification model and a parameter inversion model. The classification model judges the type of atmospheric duct, and the inversion model inverts the atmospheric duct parameters according to the type of atmospheric duct. Our findings demonstrated that the accuracy of the atmospheric duct classification model based on a deep neural network (DNN) exceeds 97%, and the atmospheric duct parameters inversion model has better inversion accuracy than the traditional method, thereby illustrating the effectiveness and accuracy of this novel method. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)
Show Figures

Figure 1

18 pages, 5309 KiB  
Article
Using Open Vector-Based Spatial Data to Create Semantic Datasets for Building Segmentation for Raster Data
by Szymon Glinka, Tomasz Owerko and Karolina Tomaszkiewicz
Remote Sens. 2022, 14(12), 2745; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14122745 - 07 Jun 2022
Cited by 6 | Viewed by 4558
Abstract
With increasing access to open spatial data, it is possible to improve the quality of analyses carried out in the preliminary stages of the investment process. The extraction of buildings from raster data is an important process, especially for urban, planning and environmental studies. After processing, it allows buildings captured in a given image to be represented, e.g., in a vector format. With an up-to-date image, it is possible to obtain current information on the location of buildings in a defined area. At the same time, in recent years, there has been huge progress in the use of machine learning algorithms for object identification purposes. In particular, the semantic segmentation algorithms of deep convolutional neural networks, which are based on the extraction of features from an image by means of masking, have proven themselves here. The main problem with the application of semantic segmentation is the limited availability of masks, i.e., labelled data for training the network. Creating datasets based on manual labelling of data is a tedious, time-consuming and capital-intensive process. Furthermore, any errors may be reflected in later analysis results. Therefore, this paper aims to show how to automate the process of data labelling of cadastral data from open spatial databases using convolutional neural networks, and to identify and extract buildings from high resolution orthophotomaps based on this data. The conducted research has shown that automatic feature extraction using semantic ML segmentation on the basis of data from open spatial databases is possible and can provide adequate quality of results. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)
Show Figures

Figure 1
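The automated labelling step (burning vector building footprints into raster training masks) reduces, at its core, to polygon rasterization. A minimal pure-NumPy even-odd rasterizer is sketched below for illustration; real pipelines would use GDAL/rasterio, and the rectangular "footprint" here is invented:

```python
import numpy as np

def rasterize_polygon(poly, h, w):
    """Burn a polygon (list of (x, y) vertices) into an h x w binary mask
    using an even-odd (ray casting) point-in-polygon test at pixel centres."""
    mask = np.zeros((h, w), dtype=np.uint8)
    for r in range(h):
        for c in range(w):
            x, y = c + 0.5, r + 0.5          # pixel centre
            inside = False
            n = len(poly)
            for i in range(n):
                x1, y1 = poly[i]
                x2, y2 = poly[(i + 1) % n]
                if (y1 > y) != (y2 > y):     # edge straddles the scanline
                    if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                        inside = not inside
            mask[r, c] = inside
    return mask

# A 6 x 4 "building footprint" inside an 8 x 10 tile (pixel units)
mask = rasterize_polygon([(2, 2), (8, 2), (8, 6), (2, 6)], 8, 10)
print(int(mask.sum()))   # 6 x 4 = 24 covered pixels
```

In the paper's setting, the polygon coordinates would come from open cadastral vector layers, georeferenced to the orthophotomap grid before rasterization.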

22 pages, 27684 KiB  
Article
A Sparse-Model-Driven Network for Efficient and High-Accuracy InSAR Phase Filtering
by Nan Wang, Xiaoling Zhang, Tianwen Zhang, Liming Pu, Xu Zhan, Xiaowo Xu, Yunqiao Hu, Jun Shi and Shunjun Wei
Remote Sens. 2022, 14(11), 2614; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14112614 - 30 May 2022
Cited by 1 | Viewed by 1497
Abstract
Phase filtering is a vital step for interferometric synthetic aperture radar (InSAR) terrain elevation measurements. Existing phase filtering methods can be divided into two categories: traditional model-based and deep learning (DL)-based. Previous studies have shown that DL-based methods are frequently superior to traditional ones. However, most of the existing DL-based methods are purely data-driven and neglect the filtering model, so that they often need to use a large-scale complex architecture to fit the huge training sets. The issue brings a challenge to improve the accuracy of interferometric phase filtering without sacrificing speed. Therefore, we propose a sparse-model-driven network (SMD-Net) for efficient and high-accuracy InSAR phase filtering by unrolling the sparse regularization (SR) algorithm that solves the filtering model into a network. Unlike the existing DL-based filtering methods, the SMD-Net models the physical process of filtering in the network and contains fewer layers and parameters. It is thus expected to ensure the accuracy of the filtering without sacrificing speed. In addition, unlike the traditional SR algorithm, which sets the sparse transform by hand, a convolutional neural network (CNN) module was established to adaptively learn such a transform, which significantly improved the filtering performance. Extensive experimental results on the simulated and measured data demonstrated that the proposed method outperformed several advanced InSAR phase filtering methods in both accuracy and speed. In addition, to verify the filtering performance of the proposed method under small training samples, the training samples were reduced to 10%. The results show that the performance of the proposed method was comparable on the simulated data and superior on the real data compared with another DL-based method, which demonstrates that our method is not constrained by the requirement of a huge number of training samples. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)
Show Figures

Graphical abstract
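SMD-Net unrolls a sparse regularization solver into network layers. The classical iteration underlying such unrolling is ISTA (iterative shrinkage-thresholding); the generic sketch below shows the loop that becomes a stack of layers when unrolled, with each soft-thresholding step playing the role of a nonlinearity. This is an illustration of the unrolling idea on a toy compressed-sensing problem, not the paper's learned-transform network:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=200):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    Each loop iteration corresponds to one 'layer' after unrolling."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 60)) / np.sqrt(30)  # underdetermined measurement matrix
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]       # a 3-sparse ground truth
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
print(round(float(np.linalg.norm(x_hat - x_true)), 3))
```

In a model-driven network such as SMD-Net, the fixed analytic pieces of this iteration (the sparse transform, step sizes, thresholds) are replaced by small learnable modules, which is why far fewer parameters suffice than in a purely data-driven architecture.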

24 pages, 9513 KiB  
Article
Analysing Process and Probability of Built-Up Expansion Using Machine Learning and Fuzzy Logic in English Bazar, West Bengal
by Tanmoy Das, Shahfahad, Mohd Waseem Naikoo, Swapan Talukdar, Ayesha Parvez, Atiqur Rahman, Swades Pal, Md Sarfaraz Asgher, Abu Reza Md. Towfiqul Islam and Amir Mosavi
Remote Sens. 2022, 14(10), 2349; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14102349 - 12 May 2022
Cited by 14 | Viewed by 2252
Abstract
The study sought to investigate the process of built-up expansion and the probability of built-up expansion in the English Bazar Block of West Bengal, India, using multitemporal Landsat satellite images and an integrated machine learning algorithm and fuzzy logic model. The land use and land cover (LULC) classification was prepared using a support vector machine (SVM) classifier for 2001, 2011, and 2021. The landscape fragmentation technique using the landscape fragmentation tool (an extension for ArcGIS software) and a frequency approach were proposed to model the process of built-up expansion. To create the built-up expansion probability model, the dominance, diversity, and connectivity indices of the built-up areas for each year were created and then integrated with fuzzy logic. The results showed that, during 2001–2021, the built-up areas increased by 21.67%, while vegetation and water bodies decreased by 9.28 and 4.63%, respectively. The accuracy of the LULC maps for 2001, 2011, and 2021 was 90.05, 93.67, and 96.24%, respectively. According to the built-up expansion model, 9.62% of the new built-up areas were created in recent decades. The built-up expansion probability model predicted that 21.46% of regions would be converted into built-up areas. This study will assist decision-makers in proposing management strategies for systematic urban growth that do not damage the environment. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)
Show Figures

Graphical abstract
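The abstract integrates the dominance, diversity, and connectivity indices "with fuzzy logic" without naming the operator. A standard choice for such overlays (an assumption here, purely for illustration) is the fuzzy gamma operator, which blends the fuzzy algebraic product (pessimistic AND) and the fuzzy algebraic sum (optimistic OR):

```python
import math

def fuzzy_gamma(memberships, gamma=0.9):
    """Fuzzy gamma overlay: (algebraic sum)^gamma * (product)^(1 - gamma),
    where algebraic sum = 1 - prod(1 - mu_i) and product = prod(mu_i).
    gamma=0 reduces to the product; gamma=1 to the algebraic sum."""
    prod = math.prod(memberships)
    alg_sum = 1.0 - math.prod(1.0 - m for m in memberships)
    return alg_sum ** gamma * prod ** (1.0 - gamma)

# e.g., dominance, diversity, connectivity memberships of one pixel (invented values)
mu = fuzzy_gamma([0.8, 0.6, 0.7], gamma=0.9)
print(round(mu, 3))
```

The result always lies between the product and the algebraic sum of the inputs, so `gamma` tunes how strongly a single low index can suppress the combined built-up probability.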

21 pages, 8564 KiB  
Article
GCBANet: A Global Context Boundary-Aware Network for SAR Ship Instance Segmentation
by Xiao Ke, Xiaoling Zhang and Tianwen Zhang
Remote Sens. 2022, 14(9), 2165; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14092165 - 30 Apr 2022
Cited by 16 | Viewed by 4510
Abstract
Synthetic aperture radar (SAR) is an advanced microwave sensor whose operation is unaffected by light and weather, and it has been widely used in ocean surveillance. SAR ship instance segmentation provides not only the box-level ship location but also the pixel-level ship contour, which plays an important role in ocean surveillance. However, most existing methods have limited box-positioning ability, hindering further accuracy improvement of instance segmentation. To solve this problem, we propose a global context boundary-aware network (GCBANet) for better SAR ship instance segmentation. Specifically, we propose two novel blocks to guarantee GCBANet’s performance: a global context information modeling block (GCIM-Block), which captures spatial global long-range dependencies of a ship’s contextual surroundings, enabling larger receptive fields, and a boundary-aware box prediction block (BABP-Block), which estimates ship boundaries, achieving better cross-scale box prediction. We conduct ablation studies to confirm each block’s effectiveness. Ultimately, on the two public datasets SSDD and HRSID, GCBANet outperforms nine other competitive models. On SSDD, it achieves 2.8% higher box average precision (AP) and 3.5% higher mask AP than the previous best model; on HRSID, the margins are 2.7% and 1.9%, respectively. Full article
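The box AP figures quoted above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. As a self-contained illustration (not code from the paper), IoU for axis-aligned boxes in (x1, y1, x2, y2) form can be computed as:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(box_iou((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes -> 1.0
print(box_iou((0, 0, 2, 2), (1, 0, 3, 2)))  # half-overlapping -> 1/3
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (e.g., 0.5), and AP is then the area under the resulting precision–recall curve.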
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)

19 pages, 5378 KiB  
Article
SDTGAN: Generation Adversarial Network for Spectral Domain Translation of Remote Sensing Images of the Earth Background Based on Shared Latent Domain
by Biao Wang, Lingxuan Zhu, Xing Guo, Xiaobing Wang and Jiaji Wu
Remote Sens. 2022, 14(6), 1359; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14061359 - 11 Mar 2022
Cited by 2 | Viewed by 2005
Abstract
The synthesis of spectral remote sensing images of the Earth’s background is affected by various factors such as the atmosphere, illumination, and terrain, which makes it difficult to simulate random disturbances and real textures. Based on the shared-latent-domain hypothesis and the generative adversarial network, this paper proposes the SDTGAN method, which mines the correlation between spectral bands and directly generates target spectral remote sensing images of the Earth’s background from source spectral images. The introduction of a shared latent domain allows multiple spectral domains to connect to each other without building a one-to-one model for every pair. Meanwhile, additional feature maps are introduced to fill in missing spectral information and improve geographic accuracy. Supervised training with a paired dataset, a cycle consistency loss, and a perceptual loss guarantees the uniqueness of the output. Finally, experiments on Fengyun satellite observation data show that the proposed SDTGAN method outperforms the baseline models in remote sensing image spectrum translation. Full article
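The cycle consistency loss mentioned above penalizes the round trip A→B→A for drifting from the original input. A minimal numpy sketch (the gain/offset "translators" are illustrative stand-ins, not the SDTGAN networks):

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """L1 cycle loss: translate A -> B -> A and compare with the input."""
    return np.abs(g_ba(g_ab(x)) - x).mean()

# Toy translators: a gain/offset mapping, its exact inverse, and an
# imperfect inverse that leaves a constant offset after the round trip.
g_ab = lambda x: 2.0 * x + 1.0
g_ba_exact = lambda x: (x - 1.0) / 2.0
g_ba_bad = lambda x: x / 2.0

x = np.linspace(0.0, 1.0, 5)
print(cycle_consistency_loss(x, g_ab, g_ba_exact))  # 0.0
print(cycle_consistency_loss(x, g_ab, g_ba_bad))    # 0.5
```

In the GAN setting, both translators are learned networks, and this loss is added to the adversarial and perceptual terms to pin the translation down to a unique mapping.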
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)

27 pages, 9867 KiB  
Article
Lite-YOLOv5: A Lightweight Deep Learning Detector for On-Board Ship Detection in Large-Scene Sentinel-1 SAR Images
by Xiaowo Xu, Xiaoling Zhang and Tianwen Zhang
Remote Sens. 2022, 14(4), 1018; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14041018 - 20 Feb 2022
Cited by 119 | Viewed by 9288
Abstract
Synthetic aperture radar (SAR) satellites can provide microwave remote sensing images without weather and light constraints, so they are widely applied in maritime monitoring. Current SAR ship detection methods based on deep learning (DL) are difficult to deploy on satellites because they usually involve complex models and heavy computation. To solve this problem, based on the You Only Look Once version 5 (YOLOv5) algorithm, we propose a lightweight on-board SAR ship detector called Lite-YOLOv5, which (1) reduces the model volume, (2) decreases the floating-point operations (FLOPs), and (3) realizes on-board ship detection without sacrificing accuracy. First, to obtain a lightweight network, we design a lightweight cross stage partial (L-CSP) module to reduce computation, and we apply network pruning for a more compact detector. Then, to ensure excellent detection performance, we integrate a histogram-based pure backgrounds classification (HPBC) module, a shape distance clustering (SDC) module, a channel and spatial attention (CSA) module, and a hybrid spatial pyramid pooling (H-SPP) module. To evaluate the on-board SAR ship detection ability of Lite-YOLOv5, we also transplant it to the embedded platform NVIDIA Jetson TX2. Experimental results on the Large-Scale SAR Ship Detection Dataset-v1.0 (LS-SSDD-v1.0) show that Lite-YOLOv5 achieves a lightweight architecture with a 2.38 M model volume (14.18% of the model size of YOLOv5), on-board ship detection with low computation cost (26.59% of the FLOPs of YOLOv5), and superior detection accuracy (a 1.51% F1 improvement over YOLOv5). Full article
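Network pruning, one of the lightweighting steps above, is commonly done by zeroing the smallest-magnitude weights. A minimal numpy sketch of magnitude pruning (this is a generic illustration, not the authors' specific pruning scheme; the weight matrix and sparsity level are invented):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(1)
w = rng.standard_normal((8, 8))
w_pruned = magnitude_prune(w, 0.75)
print(f"weights kept: {np.count_nonzero(w_pruned) / w.size:.0%}")
```

After pruning, the sparse network is usually fine-tuned to recover any accuracy lost, which is how compact detectors keep their performance close to the full model.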
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)

30 pages, 12327 KiB  
Article
ShadowDeNet: A Moving Target Shadow Detection Network for Video SAR
by Jinyu Bao, Xiaoling Zhang, Tianwen Zhang and Xiaowo Xu
Remote Sens. 2022, 14(2), 320; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14020320 - 11 Jan 2022
Cited by 11 | Viewed by 2633
Abstract
Most existing SAR moving target shadow detectors not only tend to generate missed detections because of their limited feature extraction capacity in complex scenes, but also tend to produce numerous false alarms due to their poor foreground–background discrimination capacity. To solve these problems, this paper proposes a novel deep learning network called “ShadowDeNet” for better shadow detection of moving ground targets in video synthetic aperture radar (SAR) images. It utilizes five major tools to guarantee its superior detection performance: (1) histogram equalization shadow enhancement (HESE) for enhancing shadow saliency to facilitate feature extraction; (2) a transformer self-attention mechanism (TSAM) for focusing on regions of interest to suppress clutter interference; (3) shape deformation adaptive learning (SDAL) for learning the deformed shadows of moving targets to cope with motion speed variations; (4) semantic-guided anchor-adaptive learning (SGAAL) for generating optimized anchors to match shadow location and shape; and (5) online hard-example mining (OHEM) for selecting typical difficult negative samples to improve background discrimination capacity. We conduct extensive ablation studies to confirm the effectiveness of each of the above contributions, and perform experiments on the public Sandia National Laboratories (SNL) video SAR data. Experimental results reveal the state-of-the-art performance of ShadowDeNet, with a best F1 score of 66.01%, against five other competitive methods. Specifically, ShadowDeNet outperforms the Faster R-CNN experimental baseline by 9.00% F1 and the previous best model by 4.96% F1, while sacrificing only a slight amount of detection speed, within an acceptable range. Full article
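Histogram equalization, used by the HESE step above to raise shadow saliency, remaps grey levels through the image's cumulative histogram so that a crowded intensity range is spread over the full scale. A minimal numpy sketch (the dark toy image is illustrative, not video SAR data):

```python
import numpy as np

def histogram_equalize(img, levels=256):
    """Remap grey levels through the cumulative histogram, stretching
    low-contrast regions (such as target shadows) across the full range."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0][0]  # CDF value of the darkest occupied level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    return lut.astype(np.uint8)[img]

# A dark, low-contrast toy image: values crowded into [40, 60].
rng = np.random.default_rng(2)
img = rng.integers(40, 61, size=(32, 32), dtype=np.uint8)
eq = histogram_equalize(img)
print(img.min(), img.max(), "->", eq.min(), eq.max())
```

After equalization the occupied grey levels span the full 0–255 range, which makes dark shadow regions far easier for a downstream feature extractor to separate from clutter.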
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)

20 pages, 19385 KiB  
Article
A Deep Learning-Based Generalized System for Detecting Pine Wilt Disease Using RGB-Based UAV Images
by Jie You, Ruirui Zhang and Joonwhoan Lee
Remote Sens. 2022, 14(1), 150; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14010150 - 30 Dec 2021
Cited by 22 | Viewed by 3109
Abstract
Pine wilt is a devastating disease that typically kills affected pine trees within a few months. In this paper, we confront the problem of detecting pine wilt disease. The image samples that have been used for pine wilt disease detection are highly ambiguous due to poor image resolution and the presence of “disease-like” objects. We therefore created a new dataset using large orthophotographs collected from 32 cities, 167 regions, and 6121 pine wilt disease hotspots in South Korea. Our system detects pine wilt disease in two stages. In the first stage, disease and hard negative samples are collected using a convolutional neural network. Because the diseased areas vary in size and color, and because the disease manifests differently from the early stage to the late stage, the hard negative samples were further categorized into six classes to reduce the complexity of the dataset. In the second stage, we used an object detection model to localize the disease and the “disease-like” hard negative samples. We used several image augmentation methods to boost system performance and avoid overfitting. The test process was divided into two phases: a patch-based test and a real-world test. During the patch-based test, we used test-time augmentation to average our system’s predictions across multiple augmented samples; the results showed a mean average precision of 89.44% in five-fold cross validation, an increase of around 5% over the alternative system. In the real-world test, we collected 10 orthophotographs of various resolutions and areas, and our system successfully detected 711 out of 730 potential disease spots. Full article
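Test-time augmentation, as used in the patch-based test above, runs the model on several augmented copies of the input, undoes each augmentation on the output, and averages the results. A minimal numpy sketch with flips and a toy per-pixel scorer (illustrative only, not the authors' detection model):

```python
import numpy as np

def tta_predict(predict, img):
    """Average soft predictions over flip augmentations, de-augmenting
    each output so the prediction maps align with the original input."""
    augs = [
        (lambda x: x,          lambda p: p),           # identity
        (lambda x: x[:, ::-1], lambda p: p[:, ::-1]),  # horizontal flip
        (lambda x: x[::-1, :], lambda p: p[::-1, :]),  # vertical flip
    ]
    preds = [undo(predict(aug(img))) for aug, undo in augs]
    return np.mean(preds, axis=0)

# Toy per-pixel "disease score" model: input brightness plus noise.
rng = np.random.default_rng(3)
model = lambda x: x + 0.1 * rng.standard_normal(x.shape)
img = np.full((4, 4), 0.5)
p = tta_predict(model, img)
print(p.shape)  # (4, 4)
```

Averaging over augmentations reduces the variance of the noisy per-sample predictions, which is why TTA typically buys a small but consistent accuracy gain at the cost of extra inference passes.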
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)
