Explainable Deep Neural Networks for Remote Sensing Image Understanding

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (15 April 2022) | Viewed by 35479

Special Issue Editors

Guest Editor
School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi’an 710021, China
Interests: image processing; machine learning

Guest Editor
School of Geophysics and Geomatics, China University of Geosciences, Wuhan, China
Interests: intelligent representation and calculation of geological information; geological environment monitoring and evaluation; geospatial information

Guest Editor
Department of Electronic and Electrical Engineering, Brunel University London, London, UK
Interests: signal processing; wireless communication; machine condition monitoring; biomedical signal processing; data analytics; machine learning; higher order statistics

Special Issue Information

Dear Colleagues,

Deep convolutional neural networks have been widely used in remote sensing image analysis and applications, e.g., classification, detection, regression, and inversion. Although these networks have been quite successful in remote sensing image understanding, they still face a black-box problem, since both the feature extraction and the classifier design are learned automatically. This problem seriously limits the development of deep learning and its applications in the field of remote sensing image understanding. In recent years, many explainable deep network models have been reported in the machine learning community, such as channel attention, spatial attention, self-attention, and non-local networks. These networks, to some extent, promote the development of explainable deep learning and address some important problems in remote sensing image analysis. On the other hand, remote sensing applications usually involve exact physical models, e.g., the radiative transfer model, the linear unmixing model, and spatiotemporal autocorrelation, which can effectively model the process linking remote sensing data to land-cover observation and environmental parameter monitoring. How to effectively integrate popular deep neural networks with these traditional remote sensing physical models is currently the main challenge in remote sensing image understanding. Research on theoretically and physically explainable deep convolutional neural networks is therefore one of the most active topics and can offer important advantages in the applications of remote sensing image understanding.

This Special Issue aims to publish high-quality research papers as well as salient and informative review articles addressing emerging trends in remote sensing image understanding that combine explainable deep networks with remote sensing physical models. Original contributions, not currently under review in a journal or conference, are solicited in relevant areas including, but not limited to, the following:

  • Attention-aware deep convolutional neural networks for object detection, segmentation, and recognition in remote sensing images.
  • Non-local convolutional neural networks for remote sensing image applications.
  • Compact deep network models for remote sensing image applications.
  • Physical models integrated with deep convolutional neural networks for remote sensing image applications.
  • Hybrid models combining data-driven and model-driven approaches for remote sensing image applications.
  • Incorporating geographical laws into deep convolutional neural networks for remote sensing image applications.
  • Review/Surveys of remote sensing image processing.
  • New remote sensing image datasets.

Dr. Tao Lei
Dr. Tao Chen
Dr. Lefei Zhang
Prof. Dr. Asoke K. Nandi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • remote sensing image analysis
  • land-cover observation
  • Earth environmental monitoring
  • attention mechanism
  • data-driven and model-driven
  • physical models

Published Papers (11 papers)

Research

19 pages, 4304 KiB  
Article
SAR-BagNet: An Ante-hoc Interpretable Recognition Model Based on Deep Network for SAR Image
by Peng Li, Cunqian Feng, Xiaowei Hu and Zixiang Tang
Remote Sens. 2022, 14(9), 2150; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14092150 - 30 Apr 2022
Cited by 7 | Viewed by 2178
Abstract
Convolutional neural networks (CNNs) have been widely used in SAR image recognition and have achieved high recognition accuracy on some public datasets. However, due to the opacity of their decision-making mechanism, the reliability and credibility of CNNs are currently insufficient, which hinders their application in important fields such as SAR image recognition. In recent years, various interpretable network structures have been proposed to discern the relationship between a CNN’s decision and image regions. Unfortunately, most interpretable networks are based on optical images, perform poorly on SAR images, and cannot accurately explain the relationship between image parts and classification decisions. To address these problems, we present SAR-BagNet, a novel interpretable recognition framework for SAR images. SAR-BagNet provides a clear heatmap that accurately reflects the impact of each part of a SAR image on the final network decision. In addition to its good interpretability, SAR-BagNet also achieves high recognition accuracy, with 98.25% test accuracy.
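
As a rough illustration of the BagNet principle that SAR-BagNet builds on (class evidence computed from small local patches and averaged, so the per-location logits double as an ante-hoc heatmap), here is a minimal PyTorch sketch; all layer sizes and names are illustrative assumptions, not the authors' architecture.

```python
# Minimal BagNet-style classifier: a CNN whose receptive field stays small,
# so per-location class logits form an ante-hoc interpretable heatmap.
# Layer sizes are illustrative assumptions, not the SAR-BagNet architecture.
import torch
import torch.nn as nn

class TinyBagNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # A few small convolutions without downsampling keep the receptive field local.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3), nn.ReLU(),
            nn.Conv2d(32, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 128, 1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(128, num_classes, 1)  # per-location logits

    def forward(self, x):
        local_logits = self.classifier(self.features(x))  # (B, C, H', W')
        heatmap = local_logits                            # class evidence per patch
        logits = local_logits.mean(dim=(2, 3))            # average for the image-level decision
        return logits, heatmap

model = TinyBagNet()
logits, heatmap = model(torch.randn(1, 1, 128, 128))      # heatmap[0, c] explains class c
```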

20 pages, 40098 KiB  
Article
SHAP-Based Interpretable Object Detection Method for Satellite Imagery
by Hiroki Kawauchi and Takashi Fuse
Remote Sens. 2022, 14(9), 1970; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14091970 - 19 Apr 2022
Cited by 3 | Viewed by 6994
Abstract
There is a growing need for algorithms that automatically detect objects in satellite images. Object detection algorithms using deep learning have demonstrated significant improvements in detection performance; however, deep-learning models have difficulty in interpreting the features used for inference. This difficulty is practically problematic when analyzing Earth-observation images, which are often used as evidence for public decision-making, and for the same reason it is difficult to set an explicit policy or criteria for improving the models. To deal with these challenges, we introduce a feature attribution method that defines an approximate model and calculates the attribution of input features to the output of a deep-learning model. For object detection models applied to satellite images with complex textures, we propose a method to visualize the basis of inference using pixel-wise feature attribution. Furthermore, we propose new methods for model evaluation, regularization, and data selection based on feature attribution. Experimental results demonstrate the feasibility of the proposed methods for basis visualization and model evaluation. Moreover, the results illustrate that a model trained with the proposed regularization method avoids over-fitting and achieves higher performance, and that the proposed data selection method allows for the efficient selection of new training data.
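
The paper tailors attribution to object detectors; purely as a generic illustration of SHAP-style pixel attribution for an image classifier, a sketch using the shap package follows. The model, background set, and inputs are placeholders, not the authors' pipeline.

```python
# Generic SHAP-style pixel attribution for an image classifier using the shap
# package; a sketch only, since the paper adapts attribution to object detectors.
import torch
import torchvision.models as models
import shap

model = models.resnet18(weights=None).eval()   # placeholder classifier
background = torch.randn(8, 3, 224, 224)       # background samples for the expectation
test_images = torch.randn(2, 3, 224, 224)      # placeholder satellite chips

explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(test_images)  # per-class pixel attributions
```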

16 pages, 16607 KiB  
Article
Super-Resolution Network for Remote Sensing Images via Preclassification and Deep–Shallow Features Fusion
by Xiuchao Yue, Xiaoxuan Chen, Wanxu Zhang, Hang Ma, Lin Wang, Jiayang Zhang, Mengwei Wang and Bo Jiang
Remote Sens. 2022, 14(4), 925; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14040925 - 14 Feb 2022
Cited by 6 | Viewed by 1978
Abstract
A novel super-resolution (SR) method is proposed in this paper to reconstruct high-resolution (HR) remote sensing images. Different scenes of remote sensing images vary greatly in structural complexity, yet most existing SR methods ignore these differences, which increases the difficulty of training an SR network. We therefore first propose a preclassification strategy and adopt different SR networks to process remote sensing images of different structural complexity. Furthermore, the main edges of the low-resolution images are extracted as shallow features and fused with the deep features extracted by the network to solve the blurry-edge problem in remote sensing images. Finally, an edge loss function and a cycle-consistent loss function are added to guide the training process so as to keep the edge details and main structures in a reconstructed image. Extensive comparative experiments on two typical remote sensing image datasets (WHURS and AID) illustrate that our approach outperforms state-of-the-art approaches in both quantitative indicators and visual quality. The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) values of the proposed method are improved by 0.5353 dB and 0.0262, respectively, over the average values of five typical deep learning methods on the ×4 AID testing set. Our method obtains satisfactory reconstructed images for subsequent applications based on HR remote sensing images.
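
The exact loss formulation is given in the paper; below is a hedged sketch of one common way to implement an edge loss with Sobel filters in PyTorch, assuming single-channel images.

```python
# Sobel-based edge loss of the kind described (the paper's exact formulation may
# differ): penalize L1 distance between edge maps of the SR output and HR target.
import torch
import torch.nn.functional as F

def sobel_edges(img):
    # img: (B, 1, H, W), single channel assumed for this sketch
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)                     # Sobel kernel for the y direction
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)  # gradient magnitude

def edge_loss(sr, hr):
    return F.l1_loss(sobel_edges(sr), sobel_edges(hr))

loss = edge_loss(torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64))
```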

14 pages, 1755 KiB  
Article
LIME-Based Data Selection Method for SAR Images Generation Using GAN
by Mingzhe Zhu, Bo Zang, Linlin Ding, Tao Lei, Zhenpeng Feng and Jingyuan Fan
Remote Sens. 2022, 14(1), 204; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14010204 - 03 Jan 2022
Cited by 7 | Viewed by 2929
Abstract
Deep learning has obtained remarkable achievements in computer vision, especially in image and video processing. However, in synthetic aperture radar (SAR) image recognition, the application of deep neural networks (DNNs) is usually restricted by data insufficiency. To augment datasets, generative adversarial networks (GANs) are commonly used to generate numerous photo-realistic SAR images. Although many pixel-level metrics measure a GAN’s performance based on the quality of the generated SAR images, few measurements evaluate whether a generated SAR image includes the most representative features of the target. In that case, the classifier may categorize a SAR image into the corresponding class based on a “wrong” criterion, i.e., “Clever Hans”. In this paper, local interpretable model-agnostic explanation (LIME) is utilized to evaluate whether a generated SAR image possesses the most representative features of a specific kind of target. First, LIME is used to visualize the positive contributions of the input SAR image to the correct prediction of the classifier. Subsequently, representative SAR images can be readily selected by evaluating how well the positive-contribution region matches the target. Experimental results demonstrate that the proposed method can greatly alleviate the “Clever Hans” phenomenon caused by the spurious relationship between generated SAR images and the corresponding classes.
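
A minimal sketch of the first step, visualizing positive contributions with the lime package; classifier_fn and image below are placeholders for a trained SAR classifier and an input chip, not the authors' code.

```python
# Visualizing positive contributions with the lime package, as in step one of the
# selection procedure; classifier_fn and image are placeholders.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(batch):                      # placeholder: your trained classifier,
    return np.random.rand(batch.shape[0], 10)  # must return class probabilities

image = np.random.rand(128, 128, 3)            # placeholder SAR chip (3-channel)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, classifier_fn,
                                         top_labels=1, num_samples=1000)
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(label, positive_only=True,
                                           num_features=5, hide_rest=False)
overlay = mark_boundaries(img, mask)           # positive-contribution regions
```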

19 pages, 5189 KiB  
Article
A scSE-LinkNet Deep Learning Model for Daytime Sea Fog Detection
by Xiaofei Guo, Jianhua Wan, Shanwei Liu, Mingming Xu, Hui Sheng and Muhammad Yasir
Remote Sens. 2021, 13(24), 5163; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13245163 - 20 Dec 2021
Cited by 6 | Viewed by 2829
Abstract
Sea fog is a hazardous weather phenomenon that affects transportation at sea. The accuracy of threshold methods for sea fog detection is limited by time and region. In comparison, deep learning methods learn the features of objects through different network layers, can therefore accurately extract fog data, and are less affected by temporal and spatial factors. This study proposes a scSE-LinkNet model for daytime sea fog detection that leverages residual blocks to encode feature maps and an attention module to learn the features of sea fog data by considering spectral and spatial information. With the help of satellite lidar data from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP), a ground-sample database was extracted from Moderate Resolution Imaging Spectroradiometer (MODIS) L1B data. The scSE-LinkNet was trained on the training set, and a quantitative evaluation was performed on the test set. Results showed that the probability of detection (POD), false alarm rate (FAR), critical success index (CSI), and Heidke skill score (HSS) were 0.924, 0.143, 0.800, and 0.864, respectively. Compared with other neural networks (FCN, U-Net, and LinkNet), the CSI of scSE-LinkNet was improved, with a maximum increase of nearly 8%. Moreover, the sea fog detection results were consistent with the measured data and CALIOP products.
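
For reference, a sketch of the concurrent spatial and channel squeeze-excitation (scSE) block as commonly defined in the literature (Roy et al.); its exact placement inside the LinkNet architecture is specified in the paper.

```python
# Concurrent spatial and channel squeeze-excitation (scSE) block as commonly
# defined; channel and spatial attention are computed and recombined additively.
import torch
import torch.nn as nn

class SCSEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.cse = nn.Sequential(                       # channel attention branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())  # spatial branch

    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)

y = SCSEBlock(64)(torch.randn(2, 64, 32, 32))
```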

14 pages, 4836 KiB  
Article
SC-SM CAM: An Efficient Visual Interpretation of CNN for SAR Images Target Recognition
by Zhenpeng Feng, Hongbing Ji, Ljubiša Stanković, Jingyuan Fan and Mingzhe Zhu
Remote Sens. 2021, 13(20), 4139; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13204139 - 15 Oct 2021
Cited by 6 | Viewed by 1831
Abstract
Convolutional neural networks (CNNs) have achieved high accuracy in synthetic aperture radar (SAR) target recognition; however, the opacity of CNNs is still a limiting or even disqualifying factor. Therefore, visually interpreting CNNs with SAR images has recently drawn increasing attention. Various class activation mapping (CAM) methods have been adopted to discern the relationship between a CNN’s decision and image regions. Unfortunately, most existing CAM methods are designed for optical images and thus usually yield a limited visualization effect for SAR images. Although the recently proposed Self-Matching CAM obtains a satisfactory effect for SAR images, it is quite time-consuming because hundreds of self-matching operations are required per image. G-SM-CAM reduces the time of this operation dramatically, but at the cost of visualization quality. Given the limitations of the above methods, we propose an efficient method: Spectral-Clustering Self-Matching CAM (SC-SM CAM). Spectral clustering is first adopted to divide feature maps into groups for efficient computation. In each group, similar feature maps are merged into an enhanced feature map whose energy is more concentrated in a specific region; the saliency heatmaps may thus more accurately tally with the target. Experimental results demonstrate that SC-SM CAM outperforms other state-of-the-art CAM methods in both effect and efficiency.
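
A sketch of the grouping step only, assuming scikit-learn's spectral clustering on flattened feature maps; the cluster count and affinity below are arbitrary choices, not the paper's settings.

```python
# Grouping feature maps by spectral clustering before CAM-style merging;
# cluster count and affinity are illustrative assumptions.
import numpy as np
from sklearn.cluster import SpectralClustering

feature_maps = np.random.rand(256, 14, 14)          # (channels, H, W) from a conv layer
flat = feature_maps.reshape(256, -1)

labels = SpectralClustering(n_clusters=8, affinity="nearest_neighbors",
                            random_state=0).fit_predict(flat)

# Merge each group into one enhanced feature map with more concentrated energy.
merged = np.stack([flat[labels == k].mean(axis=0).reshape(14, 14)
                   for k in range(8)])
```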

23 pages, 8565 KiB  
Article
Converting Optical Videos to Infrared Videos Using Attention GAN and Its Impact on Target Detection and Classification Performance
by Mohammad Shahab Uddin, Reshad Hoque, Kazi Aminul Islam, Chiman Kwan, David Gribben and Jiang Li
Remote Sens. 2021, 13(16), 3257; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163257 - 18 Aug 2021
Cited by 16 | Viewed by 3789
Abstract
To apply powerful deep-learning-based algorithms for object detection and classification in infrared videos, more training data are necessary in order to build high-performance models. However, in many surveillance applications, far more optical videos than infrared videos are available. This lack of infrared video datasets can be mitigated if optical-to-infrared video conversion is possible. In this paper, we present a new approach for converting optical videos to infrared videos using deep learning. The basic idea is to focus on target areas using an attention generative adversarial network (attention GAN), which preserves the fidelity of target areas. The approach does not require paired images. The performance of the proposed attention GAN has been demonstrated using objective and subjective evaluations. Most importantly, the impact of the attention GAN has been demonstrated through improved target detection and classification performance on real infrared videos.
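
One common attention-GAN composition rule, shown as a hedged sketch rather than the authors' exact generator: the network predicts a translated image and an attention mask, and the two are blended so that the attended target areas keep their fidelity.

```python
# One common attention-GAN composition rule (a sketch, not necessarily the
# paper's exact generator): blend a translated image with the input using a
# predicted attention mask.
import torch

def compose(optical, content, attention_mask):
    # attention_mask in [0, 1]: 1 = use the translated pixel, 0 = keep the input
    return attention_mask * content + (1.0 - attention_mask) * optical

optical = torch.rand(1, 3, 256, 256)     # input optical frame
content = torch.rand(1, 3, 256, 256)     # generator's translated (infrared-like) frame
mask = torch.rand(1, 1, 256, 256)        # generator's attention head output
infrared_like = compose(optical, content, mask)
```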

21 pages, 8157 KiB  
Article
Lithology Classification Using TASI Thermal Infrared Hyperspectral Data with Convolutional Neural Networks
by Huize Liu, Ke Wu, Honggen Xu and Ying Xu
Remote Sens. 2021, 13(16), 3117; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163117 - 06 Aug 2021
Cited by 18 | Viewed by 2778
Abstract
In recent decades, lithological mapping techniques using hyperspectral remotely sensed imagery have developed rapidly, and processing chains using visible-near infrared (VNIR) and shortwave infrared (SWIR) hyperspectral data have proven practical. The thermal infrared (TIR) portion of the electromagnetic spectrum also has considerable potential for mineral and lithology mapping; in particular, the spectral features of rocks at wavelengths of 8–12 μm are discriminative and can be exploited for lithology classification. However, most lithology mapping and classification with hyperspectral thermal infrared data is still carried out by traditional spectral matching methods, which are not very reliable given the complex diversity of geological lithology. In recent years, deep learning has made great achievements in feature extraction for hyperspectral imagery classification; it usually captures abstract features through a multilayer network, and convolutional neural networks (CNNs) in particular have received attention due to their unique advantages. Hence, in this paper, lithology classification with CNNs was tested on thermal infrared hyperspectral data acquired by a Thermal Airborne Spectrographic Imager (TASI) at three small sites in Liuyuan, Gansu Province, China. Three CNN variants, a one-dimensional CNN (1-D CNN), a two-dimensional CNN (2-D CNN), and a three-dimensional CNN (3-D CNN), were implemented and compared with six relevant state-of-the-art methods. At the three sites, the maximum overall accuracy (OA) based on CNNs was 94.70%, 96.47%, and 98.56%, representing improvements of 22.58%, 25.93%, and 16.88% over the worst OA. Meanwhile, the average accuracy over all classes (AA) and the kappa coefficient were consistent with the OA, confirming that the proposed methods effectively improved accuracy and outperformed the other methods.
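
For orientation, a minimal 1-D CNN over per-pixel spectra of the kind compared in the paper; the band count and layer sizes below are illustrative assumptions, not the authors' configuration.

```python
# Minimal 1-D CNN over per-pixel emissivity spectra; band count and layer
# sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Spectral1DCNN(nn.Module):
    def __init__(self, bands=32, classes=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(32 * (bands // 4), classes),
        )

    def forward(self, x):            # x: (batch, 1, bands), one spectrum per pixel
        return self.net(x)

logits = Spectral1DCNN()(torch.randn(8, 1, 32))
```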

16 pages, 1859 KiB  
Article
Self-Matching CAM: A Novel Accurate Visual Explanation of CNNs for SAR Image Interpretation
by Zhenpeng Feng, Mingzhe Zhu, Ljubiša Stanković and Hongbing Ji
Remote Sens. 2021, 13(9), 1772; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13091772 - 01 May 2021
Cited by 31 | Viewed by 3249
Abstract
Synthetic aperture radar (SAR) image interpretation has long been an important but challenging task in SAR image processing. Generally, SAR image interpretation comprises complex procedures, including filtering, feature extraction, image segmentation, and target recognition, which greatly reduce the efficiency of data processing. In the era of deep learning, numerous automatic target recognition methods based on convolutional neural networks (CNNs) have been proposed, owing to their strong capabilities for data abstraction and mining. In contrast to general methods, CNNs have an end-to-end structure in which complex data preprocessing is not needed, so efficiency can be improved dramatically once a CNN is well trained. However, the recognition mechanism of a CNN is unclear, which hinders its application in many scenarios. In this paper, Self-Matching class activation mapping (CAM) is proposed to visualize what a CNN learns from SAR images when making a decision. Self-Matching CAM assigns a pixel-wise weight matrix to the feature maps of different channels by matching them with the input SAR image. By using Self-Matching CAM, the detailed information of the target is well preserved in an accurate visual explanation heatmap of a CNN for SAR image interpretation. Numerous experiments on a benchmark dataset (MSTAR) verify the validity of Self-Matching CAM.
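
A hedged sketch of the self-matching idea described above: each upsampled feature map is weighted by its pixel-wise agreement with the input image, and the channels are aggregated into one heatmap. The normalization and matching function here are assumptions, not the paper's exact definitions.

```python
# Sketch of the self-matching idea: weight each upsampled feature map by its
# pixel-wise agreement with the input image, then sum into a heatmap.
# Normalization and matching details are assumptions, not the paper's definitions.
import torch
import torch.nn.functional as F

def self_matching_cam(image, feature_maps):
    # image: (1, 1, H, W); feature_maps: (1, C, h, w) from a chosen conv layer
    fmaps = F.interpolate(feature_maps, size=image.shape[-2:],
                          mode="bilinear", align_corners=False)
    img = (image - image.min()) / (image.max() - image.min() + 1e-8)
    lo = fmaps.amin(dim=(2, 3), keepdim=True)
    hi = fmaps.amax(dim=(2, 3), keepdim=True)
    fmaps = (fmaps - lo) / (hi - lo + 1e-8)
    weights = 1.0 - (fmaps - img).abs()         # pixel-wise match with the input
    cam = (weights * fmaps).sum(dim=1)          # aggregate channels into one heatmap
    return F.relu(cam)

cam = self_matching_cam(torch.rand(1, 1, 128, 128), torch.rand(1, 64, 16, 16))
```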

19 pages, 3894 KiB  
Article
Deep Metric Learning with Online Hard Mining for Hyperspectral Classification
by Yanni Dong, Cong Yang and Yuxiang Zhang
Remote Sens. 2021, 13(7), 1368; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13071368 - 02 Apr 2021
Cited by 19 | Viewed by 2934
Abstract
Recently, deep learning has developed rapidly and has been applied quite successfully in the field of hyperspectral classification. Generally, training the parameters of a deep neural network is the core step of a deep learning-based method, which usually requires a large number of labeled samples. However, in remote sensing analysis tasks, only limited labeled data are available because of the high cost of collection. Therefore, in this paper, we propose a deep metric learning with online hard mining (DMLOHM) method for hyperspectral classification, which maximizes the inter-class distance and minimizes the intra-class distance, utilizing a convolutional neural network (CNN) as the embedding network. First, a triplet network is used to learn better representations of the raw data so that their dimensionality can be reduced. Afterward, an online hard mining method mines the most valuable information from the limited hyperspectral data. To verify the performance of the proposed DMLOHM, we used three well-known hyperspectral datasets: Salinas Scene, Pavia University, and HyRANK. Compared with a CNN and DMLTN, the experimental results show that the proposed method improves classification accuracy by 0.13% to 4.03% with 85 labeled samples per class.
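
A sketch of batch-hard online mining for a triplet loss, a standard formulation; the paper's DMLOHM variant may differ in details. For each anchor, the hardest positive and hardest negative are selected within the mini-batch of CNN embeddings.

```python
# Batch-hard online mining for a triplet loss: for each anchor, pick the
# farthest same-class sample and the closest other-class sample in the batch.
import torch

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    dist = torch.cdist(embeddings, embeddings)             # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    hardest_pos = (dist * same.float()).max(dim=1).values  # hardest positive
    masked = dist + same.float() * 1e9                     # exclude same-class pairs
    hardest_neg = masked.min(dim=1).values                 # hardest negative
    return torch.relu(hardest_pos - hardest_neg + margin).mean()

emb = torch.randn(16, 64, requires_grad=True)              # CNN embedding outputs
labels = torch.randint(0, 4, (16,))
loss = batch_hard_triplet_loss(emb, labels)
loss.backward()
```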

Other

18 pages, 32802 KiB  
Technical Note
Application of Shape Moments for Cloudiness Assessment in Marine Environmental Research
by Marcin Paszkuta, Adam Krężel and Natalia Ryłko
Remote Sens. 2022, 14(4), 883; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14040883 - 12 Feb 2022
Cited by 2 | Viewed by 1534
Abstract
The search for clouds in satellite images is a challenging subject that continues to attract attention, owing to the amount and quality of data, which are growing at a tremendous pace, the development of satellite techniques and methods, inexpensive equipment, and the automation of satellite imaging processes. This paper presents a new approach to the assessment of cloudiness based on the theory of moments with invariants. The values of the moment invariants, determined from the available cloudiness maps, create a new, valuable data set: the geometrical parameters of the scene representing the cloud cover. In further research, the obtained data sets will be used in machine learning and deep learning methods. The method is applied under different conditions, including different angular positions of the Sun and different time periods. Its effectiveness is checked by comparing the entropy of the input maps after subtracting clouds masked by various methods. The obtained results additionally indicate the potential of the moments method as a support for existing methods of estimating cloudiness over the sea surface.
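
As a generic illustration of moment invariants for a binary cloudiness map, a sketch using OpenCV's Hu moments follows; the specific invariant set used in the paper may differ.

```python
# Shape-moment invariants for a binary cloud mask using OpenCV's Hu moments;
# a generic illustration, not necessarily the paper's invariant set.
import cv2
import numpy as np

cloud_mask = np.zeros((256, 256), dtype=np.uint8)   # placeholder cloudiness map
cv2.circle(cloud_mask, (128, 128), 40, 255, -1)     # a synthetic "cloud"

m = cv2.moments(cloud_mask, binaryImage=True)
hu = cv2.HuMoments(m).flatten()                     # 7 translation-, scale- and
                                                    # rotation-invariant descriptors
log_hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # common log scaling for comparison
```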
