2nd Edition GeoAI: Integration of Artificial Intelligence, Machine Learning and Deep Learning with Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (15 June 2022) | Viewed by 18948

Special Issue Editors


Guest Editor
Department of Geoinformatics, University of Salzburg, 5020 Salzburg, Austria
Interests: artificial intelligence for remote sensing (AI4RS); artificial intelligence for natural hazards (AI4NH); land surface monitoring and change detection

Guest Editor
Interfaculty Department of Geoinformatics - Z GIS, University of Salzburg, A-5020 Salzburg, Austria
Interests: GIS; remote sensing; spatial analysis and GIS-based spatial decision support systems; object-based image processing

Special Issue Information

Dear Colleagues,

This Special Issue focuses on advances in, and innovative methods and solutions for, Artificial Intelligence (AI) in remote sensing (RS) and Earth observation (EO). In particular, we call for contributions that describe methods and ongoing research, including algorithm development, data training strategies, and implementations.

Recent advancements in hardware and high-performance computing platforms have enabled the development and implementation of several state-of-the-art machine learning approaches (e.g., decision tree learning, reinforcement learning, inductive logic programming, Bayesian networks, and clustering) that can be applied to satellite image analysis. In particular, deep learning methods have become a fast-growing trend in RS applications; above all, supervised deep convolutional neural networks have attracted considerable interest in the computer vision and image processing communities.

These developments are driven by an increasing need to mine the large amounts of data generated by a new generation of satellites, including, for example, the European Copernicus system with its Sentinel satellites and the many satellites recently launched in China. The volume of data generated today all but necessitates the use of AI for the exploration of big data.

Still, many AI algorithms are in their infancy with regard to scientific explanation. For instance, CNNs are often constructed by trial and error: how many layers should really be used? Researchers have access to a massive pool of AI algorithms, but AI needs to be used together with physical principles and scientific interpretation.

This Special Issue seeks to clarify how AI methods can be selected and used in a way that makes them practicable and appropriate for RS applications. The performance of these choices may depend on the application case, the theory behind the AI algorithms, and how the algorithms and AI architectures are developed and trained. Moreover, the capabilities of novel and hybrid AI algorithms have not yet been investigated equally across different fields. There is a need to determine the performance of standalone and hybrid approaches in satellite image analysis.

To highlight new AI-based solutions for RS image understanding tasks and problems, manuscript submissions are encouraged from a broad range of related topics, which may include but are not limited to the following:

  • Big data
  • Data fusion
  • Satellite images
  • Image processing and classification
  • Superpixels
  • Multiscale and multisensor data calibration
  • Hierarchical feature learning processes
  • Data augmentation strategies
  • Feature representation
  • Patch-wise semantic segmentation
  • Data processing from UAVs
  • Hyperspectral imagery
  • Scale issues and hierarchical analysis
  • Scale parameter estimation
  • Training/testing data collection
  • Multiresolution segmentation
  • Semantic segmentation
  • Classifiers
  • Object detection and instance segmentation
  • Change detection and monitoring
  • Natural hazard monitoring and susceptibility mapping (e.g., landslide, flood, wildfire, soil erosion)
  • Disaster assessment, mapping, and quantification
  • Humanitarian operations
  • Scene recognition
  • Urban land use classification
  • Land use/land cover
  • Complex ecosystem dynamics, e.g., wetland and coastal mapping
  • Agriculture and crop mapping
  • Vegetation monitoring
  • Time series analysis

Dr. Omid Ghorbanzadeh
Dr. Omid Rahmati
Prof. Thomas Blaschke
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Remote sensing
  • Pixel-based classification
  • Object-based image analysis (OBIA)
  • Artificial intelligence
  • Machine learning
  • Deep learning
  • Convolutional neural networks (CNNs)
  • Integrated architectures

Published Papers (8 papers)


Research

22 pages, 12511 KiB  
Article
Full-Coupled Convolutional Transformer for Surface-Based Duct Refractivity Inversion
by Jiajing Wu, Zhiqiang Wei, Jinpeng Zhang, Yushi Zhang, Dongning Jia, Bo Yin and Yunchao Yu
Remote Sens. 2022, 14(17), 4385; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14174385 - 03 Sep 2022
Cited by 1 | Viewed by 1092
Abstract
A surface-based duct (SBD) is an abnormal atmospheric structure with a low probability of occurrence but a strong ability to trap electromagnetic waves. However, existing research is based on the assumption that the range direction of the surface duct is homogeneous, which leads to low productivity and large errors when applied in a real marine environment. To alleviate these issues, we propose a framework for the inversion of inhomogeneous SBD M-profiles based on a full-coupled convolutional Transformer (FCCT) deep learning network. We first designed a one-dimensional residual dilated causal convolution autoencoder to extract feature representations from high-dimensional, range-direction-inhomogeneous M-profiles. Second, to improve efficiency and precision, we proposed the FCCT, which incorporates dilated causal convolutional layers to gain exponential receptive-field growth over the M-profile and to help Transformer-like models enlarge the receptive field for each range-direction-inhomogeneous SBD M-profile. We tested the proposed method on two sets of simulated sea clutter power data, where the inversion accuracy reached 96.99% and 97.69%, outperforming the existing baseline methods. Full article
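The residual dilated causal convolution encoder described above can be illustrated with a minimal sketch (PyTorch); the channel count, kernel size, and dilation below are illustrative assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

class DilatedCausalResBlock(nn.Module):
    """One residual block with a dilated, causal 1-D convolution."""
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 2):
        super().__init__()
        # Left-pad so the convolution never sees "future" samples (causal).
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, length), e.g. an M-profile sampled along range.
        y = nn.functional.pad(x, (self.pad, 0))   # causal (left-only) padding
        y = self.act(self.conv(y))
        return x + y                              # residual connection

x = torch.randn(4, 16, 128)                       # toy batch of profiles
print(DilatedCausalResBlock(16)(x).shape)         # torch.Size([4, 16, 128])
```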

23 pages, 3388 KiB  
Article
Dual-Branch Remote Sensing Spatiotemporal Fusion Network Based on Selection Kernel Mechanism
by Weisheng Li, Fengyan Wu and Dongwen Cao
Remote Sens. 2022, 14(17), 4282; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14174282 - 30 Aug 2022
Cited by 3 | Viewed by 1545
Abstract
Popular deep-learning-based spatiotemporal fusion methods for creating high-temporal–high-spatial-resolution images have certain limitations. The reconstructed images suffer from insufficient retention of high-frequency information, and the models suffer from poor robustness owing to the lack of training datasets. We propose a dual-branch remote sensing spatiotemporal fusion network based on a selection kernel mechanism. The network model comprises a super-resolution network module, a high-frequency feature extraction module, and a difference reconstruction module. Convolution kernel adaptive mechanisms are added to the high-frequency feature extraction module and the difference reconstruction module to improve robustness. The super-resolution module upgrades the coarse image to a transition image matching the fine image; the high-frequency feature extraction module extracts the high-frequency features of the fine image to supplement the high-frequency features for the difference reconstruction module; the difference reconstruction module uses structural similarity for fine-difference image reconstruction. The fusion result is obtained by combining the reconstructed fine-difference image with the known fine image. A compound loss function is used to aid network training. Experiments are carried out on three datasets, and five representative spatiotemporal fusion algorithms are used for comparison. Subjective and objective evaluations validate the superiority of the proposed method. Full article
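The "convolution kernel adaptive mechanism" mentioned above follows the selective-kernel idea; the sketch below (PyTorch) shows one such layer with two branches of different kernel sizes, where the channel count and reduction ratio are illustrative assumptions rather than the authors' configuration:

```python
import torch
import torch.nn as nn

class SelectiveKernelConv(nn.Module):
    """Two parallel convolutions whose outputs are fused by learned, per-channel weights."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)   # small receptive field
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)   # larger receptive field
        hidden = max(channels // reduction, 4)
        self.fc = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(),
                                nn.Linear(hidden, 2 * channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        u3, u5 = self.branch3(x), self.branch5(x)
        s = (u3 + u5).mean(dim=(2, 3))                       # global descriptor, (b, c)
        a = self.fc(s).view(b, 2, c, 1, 1).softmax(dim=1)    # per-channel branch weights
        return a[:, 0] * u3 + a[:, 1] * u5                   # adaptively fused response

x = torch.randn(2, 32, 64, 64)
print(SelectiveKernelConv(32)(x).shape)                      # torch.Size([2, 32, 64, 64])
```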

29 pages, 7328 KiB  
Article
Double-Stack Aggregation Network Using a Feature-Travel Strategy for Pansharpening
by Weisheng Li, Maolin He and Minghao Xiang
Remote Sens. 2022, 14(17), 4224; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14174224 - 27 Aug 2022
Viewed by 1221
Abstract
Pansharpening methods based on deep learning can obtain high-quality, high-resolution multispectral images and are gradually becoming an active research topic. To combine deep learning and remote sensing domain knowledge more efficiently, we propose a double-stack aggregation network using a feature-travel strategy for pansharpening. The proposed network comprises two important designs. First, we propose a double-stack feature aggregation module that can efficiently retain useful feature information by aggregating features extracted at different levels. The module introduces a new multiscale, large-kernel convolutional block in the feature extraction stage to maintain the overall computational power while expanding the receptive field and obtaining detailed feature information. We also introduce a feature-travel strategy to effectively complement feature details on multiple scales. By resampling the source images, we use three pairs of source images at various scales as the input to the network. The feature-travel strategy lets the extracted features loop through the three scales to supplement the effective feature details. Extensive experiments on three satellite datasets show that the proposed model achieves significant improvements in both spatial and spectral quality measurements compared to state-of-the-art methods. Full article
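The feature-travel strategy starts from the source images resampled to several scales; a minimal sketch (PyTorch) of that input preparation step is given below, with the scale factors and image sizes as illustrative assumptions (the aggregation network itself is not shown):

```python
import torch
import torch.nn.functional as F

def multiscale_pairs(pan: torch.Tensor, ms: torch.Tensor, scales=(1.0, 0.5, 0.25)):
    """Return (pan, ms) pairs resampled to the given scales.

    pan: (b, 1, H, W) panchromatic image; ms: (b, C, h, w) multispectral image.
    """
    pairs = []
    for s in scales:
        p = F.interpolate(pan, scale_factor=s, mode="bilinear", align_corners=False)
        m = F.interpolate(ms, scale_factor=s, mode="bilinear", align_corners=False)
        pairs.append((p, m))
    return pairs

pan = torch.randn(1, 1, 256, 256)    # toy panchromatic image
ms = torch.randn(1, 4, 64, 64)       # toy 4-band multispectral image
for p, m in multiscale_pairs(pan, ms):
    print(p.shape, m.shape)
```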

18 pages, 33075 KiB  
Article
Large Aerial Image Tie Point Matching in Real and Difficult Survey Areas via Deep Learning Method
by Xiuliu Yuan, Xiuxiao Yuan, Jun Chen and Xunping Wang
Remote Sens. 2022, 14(16), 3907; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14163907 - 12 Aug 2022
Cited by 3 | Viewed by 2220
Abstract
Image tie point matching is an essential task in real aerial photogrammetry, especially for model tie points. In current photogrammetry production, SIFT is still the main matching algorithm because of its high robustness for most aerial image tie point matching. However, when a surveying area contains a certain number of weak-texture images (mountain, grassland, woodland, etc.), the models often lack tie points, resulting in the failure to build an airline network. Some studies have shown that image matching methods based on deep learning outperform SIFT and other traditional methods to some extent (even for weak-texture images). Unfortunately, these methods are often only used on small images and cannot be directly applied to large image tie point matching in real photogrammetry. Considering actual photogrammetry needs and motivated by Block-SIFT and SuperGlue, this paper proposes a SuperGlue-based LR-Superglue matching method for large aerial image tie point matching, which makes learned image matching possible in photogrammetry applications and advances photogrammetry towards artificial intelligence. Experiments on real and difficult aerial surveying areas show that LR-Superglue obtains more model tie points in the forward direction (on average, 60 more model points per model) and more image tie points between airlines (on average, 36 more tie points per pair of adjacent images). Most importantly, the LR-Superglue method obtains a certain number of tie points between each pair of adjacent models, whereas the Block-SIFT method left a few models with no tie points. At the same time, the relative orientation accuracy of the image tie points matched by the proposed method is significantly better than that of Block-SIFT, decreasing from 3.64 μm to 2.85 μm on average in each model (the camera pixel is 4.6 μm). Full article
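For readers unfamiliar with block-based tie point matching, the sketch below (OpenCV/Python) illustrates the general idea using the Block-SIFT baseline mentioned above: a large, roughly co-registered image pair is split into tiles and matched tile by tile. The tile size, ratio-test threshold, and co-registration assumption are illustrative simplifications; LR-Superglue replaces the SIFT matcher with a learned one.

```python
import cv2
import numpy as np

def match_tile(img1: np.ndarray, img2: np.ndarray, ratio: float = 0.8):
    """Match SIFT keypoints between two grayscale uint8 tiles (Lowe's ratio test)."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    if d1 is None or d2 is None:
        return []
    good = []
    for pair in cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append((k1[pair[0].queryIdx].pt, k2[pair[0].trainIdx].pt))
    return good

def match_blocks(img1: np.ndarray, img2: np.ndarray, tile: int = 1024):
    """Split two roughly co-registered large images into tiles and match per tile."""
    points = []
    h, w = img1.shape[:2]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            for p1, p2 in match_tile(img1[y:y + tile, x:x + tile],
                                     img2[y:y + tile, x:x + tile]):
                # Shift tile-local coordinates back to full-image coordinates.
                points.append(((p1[0] + x, p1[1] + y), (p2[0] + x, p2[1] + y)))
    return points
```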

14 pages, 1995 KiB  
Article
Time Series of Remote Sensing Data for Interaction Analysis of the Vegetation Coverage and Dust Activity in the Middle East
by Soodabeh Namdari, Ali Ibrahim Zghair Alnasrawi, Omid Ghorbanzadeh, Armin Sorooshian, Khalil Valizadeh Kamran and Pedram Ghamisi
Remote Sens. 2022, 14(13), 2963; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14132963 - 21 Jun 2022
Cited by 6 | Viewed by 2199
Abstract
Motivated by the lack of research on land cover and dust activity in the Middle East, this study seeks to increase the understanding of the sensitivity of dust centers to climatic and surface conditions in this specific region. In this regard, we explore vegetation cover and dust emission interactions using 16-day long-term Normalized Difference Vegetation Index (NDVI) data and daily Aerosol Optical Depth (AOD) data from the Moderate Resolution Imaging Spectroradiometer (MODIS) and conduct spatiotemporal and statistical analyses. Eight major dust hotspots were identified based on long-term AOD data (2000–2019). Despite the relatively uniform climate conditions prevailing throughout the region during the study period, there is considerable spatial variability in the interannual relationships between AOD and NDVI. Three subsets of periods (2000–2006, 2007–2013, 2014–2019) were examined to assess periodic spatiotemporal changes. In the second period (2007–2013), AOD increased significantly (6% to 32%) across the studied hotspots, simultaneously with a decrease in NDVI (−0.9% to −14.3%), except in Yemen−Oman. Interannual changes over 20 years showed a strong relationship between reduced vegetation cover and increased dust intensity. The correlation between NDVI and AOD (−0.63) for the cumulative region confirms the significant effect of vegetation canopy on annual dust fluctuations. According to the results, changes in vegetation cover play an essential role in dust storm fluctuations. Therefore, this factor must be considered, along with wind speed and other climatic factors, in research and management efforts related to Middle East dust hotspots. Full article
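The interannual analysis described above essentially correlates yearly mean NDVI and AOD series per hotspot; a minimal sketch (NumPy/SciPy) with synthetic placeholder data is shown below (the arrays are not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
years = np.arange(2000, 2020)
ndvi = rng.uniform(0.05, 0.30, size=years.size)                  # placeholder yearly mean NDVI
aod = 0.5 - 0.8 * ndvi + rng.normal(0.0, 0.02, size=years.size)  # placeholder yearly mean AOD

# Pearson correlation between the two interannual series for one hotspot.
r, p = pearsonr(ndvi, aod)
print(f"NDVI-AOD correlation over {years.size} years: r = {r:.2f} (p = {p:.3f})")
```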

23 pages, 3026 KiB  
Article
Inversion of Ocean Subsurface Temperature and Salinity Fields Based on Spatio-Temporal Correlation
by Tao Song, Wei Wei, Fan Meng, Jiarong Wang, Runsheng Han and Danya Xu
Remote Sens. 2022, 14(11), 2587; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14112587 - 27 May 2022
Cited by 11 | Viewed by 2270
Abstract
Ocean observation is essential for studying ocean dynamics, climate change, and carbon cycles. Due to the difficulty and high cost of in situ observations, existing ocean observations are inadequate, and satellite observations are mostly surface observations. Previous work has not adequately considered the spatio-temporal correlation within the ocean itself. This paper proposes a new method, a convolutional long short-term memory network (ConvLSTM), for the inversion of the ocean subsurface temperature and salinity fields from sea surface satellite observations (sea surface temperature, sea surface salinity, sea surface height, and sea surface wind) and subsurface Argo reanalysis data. Given the time dependence and spatial correlation of ocean dynamic parameters, the ConvLSTM model can improve inversion models' robustness and generalizability by considering the significant spatial and temporal correlation characteristics of ocean variability. Taking the 2018 results as an example, our average inversion yields an overall normalized root mean square error (NRMSE) of 0.0568 °C/0.0027 PSS and a correlation coefficient (R) of 0.9819/0.9997 for subsurface temperature (ST)/subsurface salinity (SS). The results show that SSTA, SSSA, SSHA, and SSWA together are valuable parameters for obtaining accurate ST/SS estimates, and that the use of multiple channels in shallow seas is effective. This study demonstrates that ConvLSTM is superior for modeling the subsurface temperature and salinity fields, fully taking the spatial and temporal correlation of global ocean data into account, and that it outperforms the classic random forest and LSTM approaches in predicting subsurface temperature and salinity fields. Full article
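A minimal sketch of a single ConvLSTM cell, the building block named above, is given below (PyTorch); the channel counts, grid size, and number of time steps are illustrative assumptions, and the full inversion model (stacked cells plus an output head mapping to temperature and salinity) is not shown:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """LSTM cell whose gates are computed by a 2-D convolution over the spatial grid."""
    def __init__(self, in_ch: int, hid_ch: int, kernel: int = 3):
        super().__init__()
        # One convolution produces the input, forget, output, and candidate gates.
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        c = f * c + i * g
        h = o * c.tanh()
        return h, c

# Toy run: 4 surface fields (e.g., SSTA, SSSA, SSHA, SSWA) on a small grid.
cell = ConvLSTMCell(in_ch=4, hid_ch=8)
x = torch.randn(1, 4, 32, 32)
h = c = torch.zeros(1, 8, 32, 32)
for _ in range(12):            # e.g., 12 monthly time steps
    h, c = cell(x, (h, c))
print(h.shape)                 # torch.Size([1, 8, 32, 32])
```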

20 pages, 25917 KiB  
Article
Memory-Augmented Transformer for Remote Sensing Image Semantic Segmentation
by Xin Zhao, Jiayi Guo, Yueting Zhang and Yirong Wu
Remote Sens. 2021, 13(22), 4518; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13224518 - 10 Nov 2021
Cited by 9 | Viewed by 2603
Abstract
The semantic segmentation of remote sensing images requires distinguishing local regions of different classes and exploiting a uniform global representation of the same-class instances. Such requirements make it necessary for segmentation methods to extract discriminative local features between different classes and to explore representative features for all instances of a given class. While common deep convolutional neural networks (DCNNs) can effectively focus on local features, they are limited by their receptive field in obtaining consistent global information. In this paper, we propose a memory-augmented transformer (MAT) to effectively model both the local and global information. The feature extraction pipeline of the MAT is split into a memory-based global relationship guidance module and a local feature extraction module. The local feature extraction module mainly consists of a transformer, which is used to extract features from the input images. The global relationship guidance module maintains a memory bank for the consistent encoding of the global information. Global guidance is performed by memory interaction. Bidirectional information flow between the global and local branches is realized by a memory-query module and a memory-update module, respectively. Experimental results on the ISPRS Potsdam and ISPRS Vaihingen datasets demonstrate that our method performs competitively with state-of-the-art methods. Full article
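The memory-query step can be sketched as cross-attention from local tokens to a learned memory bank, as below (PyTorch); the memory size, feature dimension, and head count are illustrative assumptions, and the memory-update direction is omitted:

```python
import torch
import torch.nn as nn

class MemoryQuery(nn.Module):
    """Local features attend to a learned global memory bank via cross-attention."""
    def __init__(self, dim: int = 64, memory_slots: int = 16, heads: int = 4):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(memory_slots, dim))     # global memory bank
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, tokens, dim), e.g., flattened patch features of one image.
        b = feats.size(0)
        mem = self.memory.unsqueeze(0).expand(b, -1, -1)
        # Local tokens query the shared memory; the output is added back as guidance.
        guided, _ = self.attn(query=feats, key=mem, value=mem)
        return feats + guided

feats = torch.randn(2, 256, 64)          # 2 images, 16x16 patch tokens, 64-dim features
print(MemoryQuery()(feats).shape)        # torch.Size([2, 256, 64])
```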

19 pages, 7112 KiB  
Article
Knowledge-Driven GeoAI: Integrating Spatial Knowledge into Multi-Scale Deep Learning for Mars Crater Detection
by Chia-Yu Hsu, Wenwen Li and Sizhe Wang
Remote Sens. 2021, 13(11), 2116; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13112116 - 28 May 2021
Cited by 23 | Viewed by 3992
Abstract
This paper introduces a new GeoAI solution to support automated mapping of global craters on the Mars surface. Traditional crater detection algorithms suffer from the limitation of working only in a semiautomated or multi-stage manner, and most were developed to handle a specific dataset in a small subarea of Mars' surface, hindering their transferability to global crater detection. As an alternative, we propose a GeoAI solution based on deep learning to tackle this problem effectively. Three innovative features are integrated into our object detection pipeline: (1) a feature pyramid network is leveraged to generate feature maps with rich semantics across multiple object scales; (2) prior geospatial knowledge based on the Hough transform is integrated to enable more accurate localization of potential craters; and (3) a scale-aware classifier is adopted to increase the prediction accuracy for both large and small crater instances. The results show that the proposed strategies bring a significant increase in crater detection performance over the popular Faster R-CNN model. The integration of geospatial domain knowledge into data-driven analytics moves GeoAI research to the next level, enabling knowledge-driven GeoAI. This research can be applied to a wide variety of object detection and image analysis tasks. Full article
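The Hough-transform prior mentioned above can be illustrated with a minimal sketch (OpenCV/Python) that proposes circular crater candidates from a grayscale image tile, which could then be passed to the detector as geospatial priors; all parameter values are illustrative assumptions, not those used in the paper:

```python
import cv2
import numpy as np

def crater_candidates(tile: np.ndarray):
    """Return (x, y, radius) circle candidates for one grayscale uint8 image tile."""
    blurred = cv2.medianBlur(tile, 5)                 # suppress speckle before the Hough transform
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=20,
                               param1=100, param2=40, minRadius=5, maxRadius=100)
    return [] if circles is None else circles[0].tolist()

tile = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # placeholder tile
print(len(crater_candidates(tile)), "candidate circles")
```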