Special Issue "Explainable Deep Neural Networks for Remote Sensing Image Understanding"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 December 2021.

Special Issue Editors

Dr. Tao Lei
Guest Editor
School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi’an 710021, China
Interests: machine learning; remote sensing image analysis; visual understanding
Dr. Tao Chen
Guest Editor
Institute of Geophysics and Geomatics, China University of Geosciences, Wuhan 430074, China
Interests: image processing; machine learning; geological remote sensing
Prof. Dr. Lefei Zhang
Guest Editor
School of Computer Science, Wuhan University, Wuhan 430072, China
Interests: pattern recognition; machine learning; image processing; remote sensing
Prof. Dr. Asoke K. Nandi
Guest Editor
Department of Electronic and Electrical Engineering, Brunel University London, Uxbridge UB8 3PU, UK
Interests: machine learning; applications of remote sensing; image processing

Special Issue Information

Dear Colleagues,

Deep convolutional neural networks have been widely used in remote sensing image analysis and its applications, e.g., classification, detection, regression, and inversion. Although these networks have been quite successful in remote sensing image understanding, they still face a black-box problem, since both feature extraction and classifier design are learned automatically. This problem seriously limits the development of deep learning and its applications in the field of remote sensing image understanding. In recent years, many explainable deep network models have been reported in the machine learning community, such as channel attention, spatial attention, self-attention, and non-local networks. These networks have, to some extent, promoted the development of explainable deep learning and addressed some important problems in remote sensing image analysis. On the other hand, remote sensing applications usually involve well-defined physical models, e.g., the radiative transfer model, the linear unmixing model, and spatiotemporal autocorrelation, which can also effectively model the process leading from remote sensing data to land-cover observation and environmental parameter monitoring. However, how to effectively integrate popular deep neural networks with traditional remote sensing physical models remains the main challenge in remote sensing image understanding. Research on theoretically and physically explainable deep convolutional neural networks is therefore one of the most active topics and can offer important advantages in the applications of remote sensing image understanding. This Special Issue aims to publish high-quality research papers as well as salient and informative review articles addressing emerging trends in remote sensing image understanding that combine explainable deep networks with remote sensing physical models.
Original contributions, not currently under review in a journal or a conference, are solicited in relevant areas including, but not limited to, the following:

  • Attention-aware deep convolutional neural networks for object detection, segmentation, and recognition in remote sensing images.
  • Non-local convolutional neural networks for remote sensing image applications.
  • Compact deep network models for remote sensing image applications.
  • Physical models integrated with deep convolutional neural networks for remote sensing image applications.
  • Hybrid data-driven and model-driven approaches for remote sensing image applications.
  • Incorporating geographical laws into deep convolutional neural networks for remote sensing image applications.
  • Reviews/surveys of remote sensing image processing.
  • New remote sensing image datasets.

Dr. Tao Lei
Dr. Tao Chen
Prof. Dr. Lefei Zhang
Prof. Dr. Asoke K. Nandi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • remote sensing image analysis
  • land-cover observation
  • Earth environmental monitoring
  • attention mechanism
  • data-driven and model-driven
  • physical models

Published Papers (4 papers)


Research

Article
Converting Optical Videos to Infrared Videos Using Attention GAN and Its Impact on Target Detection and Classification Performance
Remote Sens. 2021, 13(16), 3257; https://doi.org/10.3390/rs13163257 - 18 Aug 2021
Viewed by 305
Abstract
To apply powerful deep-learning-based algorithms for object detection and classification in infrared videos, more training data are needed to build high-performance models. However, in many surveillance applications, optical videos are far more plentiful than infrared videos. This lack of IR video datasets can be mitigated if optical-to-infrared video conversion is possible. In this paper, we present a new approach for converting optical videos to infrared videos using deep learning. The basic idea is to focus on target areas using an attention generative adversarial network (attention GAN), which preserves the fidelity of target areas. The approach does not require paired images. The performance of the proposed attention GAN has been demonstrated using objective and subjective evaluations. Most importantly, the impact of attention GAN has been demonstrated by improved target detection and classification performance on real infrared videos.
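The attention-masked composition the abstract describes (translate the target region, pass the background through largely unchanged) can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' code; all function and array names are made up.

```python
import numpy as np

def attention_compose(source, translated, attention):
    """Blend a translated frame with its source frame using an attention
    mask in [0, 1]: attended (target) pixels take the translated values,
    the rest pass through from the source, preserving background fidelity."""
    attention = np.clip(attention, 0.0, 1.0)
    return attention * translated + (1.0 - attention) * source

# toy 4x4 single-channel "frames"
src = np.zeros((4, 4))            # stand-in for the optical frame
trans = np.ones((4, 4))           # stand-in for the GAN-translated IR frame
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0              # attention concentrated on the target
out = attention_compose(src, trans, mask)
```

In the paper's setting the mask itself is predicted by the generator; here it is hand-set just to show the blending step.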

Article
Lithology Classification Using TASI Thermal Infrared Hyperspectral Data with Convolutional Neural Networks
Remote Sens. 2021, 13(16), 3117; https://doi.org/10.3390/rs13163117 - 06 Aug 2021
Viewed by 476
Abstract
In recent decades, lithological mapping techniques using hyperspectral remotely sensed imagery have developed rapidly. Processing chains using visible-near infrared (VNIR) and shortwave infrared (SWIR) hyperspectral data have proven practical, and the thermal infrared (TIR) portion of the electromagnetic spectrum has considerable potential for mineral and lithology mapping. In particular, rock spectra at wavelengths of 8–12 μm were found to be discriminative, a characteristic that can be applied to lithology classification. However, most lithology mapping and classification with hyperspectral thermal infrared data is still carried out by traditional spectral matching methods, which are not very reliable given the complex diversity of geological lithology. In recent years, deep learning has made great achievements in feature extraction for hyperspectral imagery classification. It captures abstract features through a multilayer network; convolutional neural networks (CNNs) in particular have received attention due to their unique advantages. Hence, in this paper, lithology classification with CNNs was tested on thermal infrared hyperspectral data from a Thermal Airborne Spectrographic Imager (TASI) at three small sites in Liuyuan, Gansu Province, China. Three CNN variants, a one-dimensional CNN (1-D CNN), a two-dimensional CNN (2-D CNN), and a three-dimensional CNN (3-D CNN), were implemented and compared to six relevant state-of-the-art methods. At the three sites, the maximum overall accuracy (OA) achieved by the CNNs was 94.70%, 96.47% and 98.56%, representing improvements of 22.58%, 25.93% and 16.88% over the worst OA. The average accuracy over all classes (AA) and the kappa coefficient were consistent with the OA, confirming that the proposed method effectively improved accuracy and outperformed the other methods.
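Of the CNN variants the abstract compares, the 1-D case is the simplest to illustrate: a small filter slides along a single pixel's spectral vector and a nonlinearity follows. A minimal numpy sketch of that one step, with made-up kernel weights (nothing here is learned from TASI data):

```python
import numpy as np

def conv1d_valid(spectrum, kernel):
    """'Valid' 1-D convolution (cross-correlation, in the CNN convention)
    of one filter over a single pixel's spectral vector."""
    n, k = len(spectrum), len(kernel)
    return np.array([float(np.dot(spectrum[i:i + k], kernel))
                     for i in range(n - k + 1)])

def relu(x):
    return np.maximum(x, 0.0)

# toy 8-band spectrum; the 3-tap kernel is illustrative, not learned
spectrum = np.array([0.1, 0.4, 0.9, 0.8, 0.3, 0.2, 0.6, 0.5])
kernel = np.array([-1.0, 2.0, -1.0])   # band-to-band contrast filter
features = relu(conv1d_valid(spectrum, kernel))
```

A real 1-D CNN stacks many such filters with learned weights and pooling; the 2-D and 3-D variants add spatial (and joint spatial-spectral) dimensions to the same operation.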

Article
Self-Matching CAM: A Novel Accurate Visual Explanation of CNNs for SAR Image Interpretation
Remote Sens. 2021, 13(9), 1772; https://doi.org/10.3390/rs13091772 - 01 May 2021
Cited by 2 | Viewed by 604
Abstract
Synthetic aperture radar (SAR) image interpretation has long been an important but challenging task in SAR image processing. Generally, SAR image interpretation comprises complex procedures including filtering, feature extraction, image segmentation, and target recognition, which greatly reduce the efficiency of data processing. In the era of deep learning, numerous automatic target recognition methods have been proposed based on convolutional neural networks (CNNs) due to their strong capabilities for data abstraction and mining. In contrast to general methods, CNNs have an end-to-end structure in which complex data preprocessing is not needed, so efficiency can improve dramatically once a CNN is well trained. However, the recognition mechanism of a CNN is unclear, which hinders its application in many scenarios. In this paper, Self-Matching class activation mapping (CAM) is proposed to visualize what a CNN learns from SAR images to make a decision. Self-Matching CAM assigns a pixel-wise weight matrix to feature maps of different channels by matching them with the input SAR image. By using Self-Matching CAM, the detailed information of the target can be well preserved in an accurate visual explanation heatmap of a CNN for SAR image interpretation. Numerous experiments on a benchmark dataset (MSTAR) verify the validity of Self-Matching CAM.
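The matching idea can be illustrated with a deliberately simplified sketch: each channel's feature map is scored by how well it correlates with the input image, and well-matching channels dominate the heatmap. Note this uses one scalar weight per channel, whereas the paper assigns a pixel-wise weight matrix; the normalization choices are also illustrative, not the paper's.

```python
import numpy as np

def self_matching_cam(image, feature_maps):
    """Weight each channel's (already upsampled) feature map by its
    normalized correlation with the input image, keep only positively
    matching channels, and sum into a heatmap scaled to [0, 1]."""
    img = (image - image.mean()) / (image.std() + 1e-8)
    heat = np.zeros_like(image, dtype=float)
    for fmap in feature_maps:
        f = (fmap - fmap.mean()) / (fmap.std() + 1e-8)
        weight = max(float((img * f).mean()), 0.0)  # drop anti-matching channels
        heat += weight * fmap
    heat -= heat.min()
    return heat / (heat.max() + 1e-8)

# toy SAR "image" with a bright target and two candidate channels
image = np.zeros((6, 6))
image[2:4, 2:4] = 1.0
target_channel = image.copy()          # responds exactly on the target
clutter_channel = np.zeros((6, 6))
clutter_channel[0, 0] = 1.0            # responds off-target
heat = self_matching_cam(image, [target_channel, clutter_channel])
```

The heatmap concentrates on the target region because only the channel that matches the input receives a positive weight.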

Article
Deep Metric Learning with Online Hard Mining for Hyperspectral Classification
Remote Sens. 2021, 13(7), 1368; https://doi.org/10.3390/rs13071368 - 02 Apr 2021
Cited by 2 | Viewed by 651
Abstract
Recently, deep learning has developed rapidly and has been applied quite successfully in the field of hyperspectral classification. Training the parameters of a deep neural network is the core step of a deep learning-based method and usually requires a large number of labeled samples. However, in remote sensing analysis tasks, only limited labeled data are available because of the high cost of collection. Therefore, in this paper, we propose a deep metric learning with online hard mining (DMLOHM) method for hyperspectral classification, which maximizes the inter-class distance and minimizes the intra-class distance, utilizing a convolutional neural network (CNN) as an embedding network. First, we utilized a triplet network to learn better representations of the raw data so that their dimensionality could be reduced. Afterward, an online hard mining method was used to mine the most valuable information from the limited hyperspectral data. To verify the performance of the proposed DMLOHM, we utilized three well-known hyperspectral datasets: Salinas Scene, Pavia University, and HyRANK. Compared with CNN and DMLTN, experimental results showed that the proposed method improved classification accuracy by 0.13% to 4.03% with 85 labeled samples per class.
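The "online hard mining" step pairs naturally with the triplet loss: within each batch, every anchor is matched with its farthest positive and nearest negative before the hinge is applied. A minimal numpy sketch of this standard batch-hard scheme (the margin value is illustrative, and this is not the authors' DMLOHM implementation):

```python
import numpy as np

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Batch-hard online mining: for every anchor, pick its farthest
    positive and nearest negative in the batch, then apply the triplet
    hinge max(d(a, p_hard) - d(a, n_hard) + margin, 0)."""
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1) + 1e-12)   # pairwise distances
    same = labels[:, None] == labels[None, :]
    idx = np.arange(len(labels))
    losses = []
    for a in idx:
        pos = dist[a][same[a] & (idx != a)]            # same class, not self
        neg = dist[a][~same[a]]                        # different class
        if len(pos) and len(neg):
            losses.append(max(pos.max() - neg.min() + margin, 0.0))
    return float(np.mean(losses))

# two well-separated classes: every hard triplet already clears the margin
emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
loss = batch_hard_triplet_loss(emb, labels)
```

Mining the hardest pairs inside the batch is what makes the limited labeled samples go further: easy triplets contribute no gradient, so training focuses on the informative ones.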
