Remote Sensing Data and Classification Algorithms

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (1 June 2022) | Viewed by 21746

Special Issue Editor


Prof. Dr. Kacem Chehdi
Guest Editor
University of Rennes 1/IETR UMR CNRS, 6 rue de Kerampont, 22305 Lannion, France
Interests: decision-making systems; image filtering; blind restoration; unsupervised classification; optimization; estimation; hyperspectral image; remote sensing.

Special Issue Information

Dear Colleagues,

Decision-making systems represent a growing and innovative field of research, covering a wide range of applications. The development of these systems is based on reliable remote sensing data and efficient classification algorithms. The data exploited in these systems must be accurate regardless of the application field. Thus, the analysis and interpretation of remote sensing images must be objective and relevant, considering their physical nature. To effectively achieve these objectives and to meet expectations, classification algorithms must provide optimal partitions according to one or more criteria. Despite numerous existing methods in the literature, achieving accurate image partitioning remains a challenge. Indeed, to resolve this problem, one must take into account the diversity of objects and their classes in an image, the partial and limited knowledge of ground truth data, and spatial resolution, regardless of the remote sensing data acquisition system (satellite, aircraft, UAV). Classification algorithms must therefore be adapted to the nature of the data for better image partitioning. The classification of remote sensing data remains an open and complex problem for the three categories of supervised, semi-supervised, and unsupervised classification methods. Thus, the development of an efficient classification algorithm remains the main contributor to the quality of decision-making systems.

Prof. Dr. Kacem Chehdi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • decision-making systems
  • optimization
  • algorithms
  • classification
  • semi-supervised classification
  • unsupervised classification
  • remote sensing
  • hyperspectral image
  • assessment
  • validation

Published Papers (6 papers)

Research

18 pages, 3307 KiB  
Article
Detection of Small Floating Target on Sea Surface Based on Gramian Angular Field and Improved EfficientNet
by Caiping Xi and Renqiao Liu
Remote Sens. 2022, 14(17), 4364; https://doi.org/10.3390/rs14174364 - 02 Sep 2022
Cited by 13 | Viewed by 1601
Abstract
In order to exploit the advantages of CNN models in the detection of small floating targets on the sea surface, this paper proposes a new framework for encoding radar echo Doppler spectral sequences into images and explores two different ways of encoding time series: the Gramian Angular Summation Field (GASF) and the Gramian Angular Difference Field (GADF). To emphasize the importance of the location of texture information in the GAF-encoded map, this paper introduces the coordinate attention (CA) mechanism into the mobile inverted bottleneck convolution (MBConv) structure in EfficientNet and optimizes model convergence with the adaptive AdamW optimization algorithm. Finally, the improved EfficientNet model is trained and tested on the constructed GADF and GASF datasets, respectively. The experimental results demonstrate the effectiveness of the proposed algorithm. The recognition accuracy of the improved EfficientNet model reaches 96.13% and 96.28% on the GADF and GASF datasets, respectively, which is 1.74% and 2.06% higher than that of the pre-improved network model. The number of parameters of the improved EfficientNet model is 5.38 M, which is 0.09 M higher than that of the pre-improved network model. Compared with classical image classification algorithms, the proposed algorithm achieves higher accuracy while keeping the computation lighter.
(This article belongs to the Special Issue Remote Sensing Data and Classification Algorithms)
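As a rough illustration of the encoding step described in the abstract (not the authors' code), the following NumPy sketch computes GASF and GADF images from a 1-D Doppler spectral sequence; the rescaling to [-1, 1] and the toy sequence length are assumptions.

```python
import numpy as np

def gramian_angular_fields(x):
    """Encode a 1-D sequence as GASF and GADF images.

    Illustrative sketch of the standard Gramian Angular Field transform;
    not the implementation from the paper.
    """
    x = np.asarray(x, dtype=float)
    # Rescale the sequence to [-1, 1] so that arccos is defined.
    x_tilde = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    # Polar encoding: angular value of each rescaled sample.
    phi = np.arccos(np.clip(x_tilde, -1.0, 1.0))
    # GASF(i, j) = cos(phi_i + phi_j); GADF(i, j) = sin(phi_i - phi_j).
    gasf = np.cos(phi[:, None] + phi[None, :])
    gadf = np.sin(phi[:, None] - phi[None, :])
    return gasf, gadf

# Example: encode a toy Doppler spectral sequence of length 128.
gasf_img, gadf_img = gramian_angular_fields(np.random.rand(128))
print(gasf_img.shape, gadf_img.shape)  # (128, 128) (128, 128)
```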

26 pages, 1719 KiB  
Article
A Combination of Lie Group Machine Learning and Deep Learning for Remote Sensing Scene Classification Using Multi-Layer Heterogeneous Feature Extraction and Fusion
by Chengjun Xu, Guobin Zhu and Jingqian Shu
Remote Sens. 2022, 14(6), 1445; https://doi.org/10.3390/rs14061445 - 17 Mar 2022
Cited by 13 | Viewed by 2583
Abstract
Discriminative feature learning is the key to remote sensing scene classification. Previous research has found that most existing convolutional neural networks (CNN) focus on global semantic features and ignore shallower features (low-level and middle-level features). This study proposes a novel Lie Group deep learning model for remote sensing scene classification to address these challenges. First, we extract shallower and higher-level features from images based on Lie Group machine learning (LGML) and deep learning to improve the feature representation ability of the model. In addition, a parallel dilated convolution, a kernel decomposition, and a Lie Group kernel function are adopted to reduce the model's parameters and prevent model degradation and over-fitting caused by the deepening of the model. Then, a spatial attention mechanism is used to enhance local semantic features and suppress irrelevant feature information. Finally, feature-level fusion is adopted to reduce redundant features and improve computational performance, and a cross-entropy loss function based on label smoothing is used to improve the classification accuracy of the model. Comparative experiments on three public and challenging large-scale remote-sensing datasets show that our model improves the discriminative ability of features and achieves competitive accuracy against other state-of-the-art methods.
(This article belongs to the Special Issue Remote Sensing Data and Classification Algorithms)
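The label-smoothing cross-entropy mentioned at the end of the abstract is a standard loss; the NumPy sketch below shows its usual form, with the smoothing factor (0.1) and class count chosen purely for illustration, not taken from the paper.

```python
import numpy as np

def label_smoothing_cross_entropy(logits, labels, num_classes, eps=0.1):
    """Cross-entropy with uniformly smoothed one-hot targets.

    Generic sketch of the loss named in the abstract; eps=0.1 is an
    assumed smoothing factor, not a value from the paper.
    """
    # Log-softmax over the class dimension (numerically stable).
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Smoothed targets: eps spread uniformly, 1 - eps added to the true class.
    targets = np.full((len(labels), num_classes), eps / num_classes)
    targets[np.arange(len(labels)), labels] += 1.0 - eps
    # Mean negative log-likelihood against the smoothed targets.
    return -(targets * log_probs).sum(axis=1).mean()

# Example: 4 samples, 30 scene classes.
rng = np.random.default_rng(0)
loss = label_smoothing_cross_entropy(rng.normal(size=(4, 30)),
                                     np.array([0, 3, 7, 29]), num_classes=30)
print(float(loss))
```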

22 pages, 4765 KiB  
Article
Hierarchical Unsupervised Partitioning of Large Size Data and Its Application to Hyperspectral Images
by Jihan Alameddine, Kacem Chehdi and Claude Cariou
Remote Sens. 2021, 13(23), 4874; https://doi.org/10.3390/rs13234874 - 30 Nov 2021
Cited by 5 | Viewed by 1563
Abstract
In this paper, we propose a truly unsupervised method to partition large-size images, where the number of classes, training samples, and other a priori information are not known; partitioning an image without any such knowledge is a great challenge. This novel adaptive and hierarchical classification method is based on affinity propagation, where all criteria and parameters are adaptively calculated from the image to be partitioned. It reliably and objectively discovers the classes of an image without user intervention and therefore satisfies all the objectives of an unsupervised method. The adopted hierarchical partitioning allows the user to analyze and interpret the data very finely. The optimal partition, maximizing an objective criterion, provides the number of classes and the exemplar of each class. The efficiency of the proposed method is demonstrated through experimental results on hyperspectral images, which show its superiority over the most widely used unsupervised and semi-supervised methods. The developed method can be used in several application domains to partition large-size images or data. It allows the user to consider all or part of the obtained classes and makes it possible to select samples in an objective way during a learning process.
(This article belongs to the Special Issue Remote Sensing Data and Classification Algorithms)
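For readers unfamiliar with affinity propagation, the scikit-learn sketch below clusters the spectra of a synthetic hyperspectral cube; the random data, the pixel subsampling, and the default preference are assumptions, and the paper's adaptive criteria and hierarchical strategy are not reproduced here.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Toy hyperspectral cube: 50 x 50 pixels, 100 spectral bands (synthetic data).
rng = np.random.default_rng(1)
cube = rng.normal(size=(50, 50, 100))
pixels = cube.reshape(-1, cube.shape[-1])

# Subsample pixels to keep this toy example fast; scaling to full large-size
# images is exactly what the paper's hierarchical scheme addresses.
subset = pixels[rng.choice(len(pixels), size=500, replace=False)]

# Affinity propagation with scikit-learn's default negative squared Euclidean
# similarity; the 'preference', which drives the number of exemplars, is left
# at its default (median similarity), an assumed choice rather than the
# paper's adaptively computed criterion.
ap = AffinityPropagation(damping=0.9, max_iter=500, random_state=0).fit(subset)
print("estimated number of classes:", len(ap.cluster_centers_indices_))
```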

17 pages, 1467 KiB  
Article
UAV Recognition Based on Micro-Doppler Dynamic Attribute-Guided Augmentation Algorithm
by Caidan Zhao, Gege Luo, Yilin Wang, Caiyun Chen and Zhiqiang Wu
Remote Sens. 2021, 13(6), 1205; https://doi.org/10.3390/rs13061205 - 22 Mar 2021
Cited by 15 | Viewed by 3137
Abstract
A micro-Doppler signature (m-DS) based on the rotation of drone blades is an effective way to detect and identify small drones. Deep-learning-based recognition algorithms can achieve higher recognition performance, but they need a large amount of sample data to train models. In addition to the hovering state, the signal samples of small unmanned aerial vehicles (UAVs) should also cover flight dynamics, such as vertical, pitch, forward and backward, roll, lateral, and yaw motion. However, it is difficult to collect all dynamic UAV signal samples under actual flight conditions, and these dynamic flight characteristics lead to deviations of the original features, thus affecting the performance of the recognizer. In this paper, we propose a small UAV m-DS recognition algorithm based on dynamic feature enhancement. We extract combined principal component analysis and discrete wavelet transform (PCA-DWT) time–frequency characteristics and texture features of the UAV's micro-Doppler signal and use a dynamic attribute-guided augmentation (DAGA) algorithm to expand the feature domain for model training, achieving an adaptive, accurate, and efficient multiclass recognition model in complex environments. Once the trained model is stable, the average recognition accuracy reaches 98% during dynamic flight.
(This article belongs to the Special Issue Remote Sensing Data and Classification Algorithms)
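A minimal sketch of combining PCA with a discrete wavelet transform on a spectrogram is given below; the synthetic spectrogram, the 'db4' wavelet, the decomposition level, and the number of retained components are all assumptions, and the DAGA augmentation itself is not reproduced.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

# Synthetic micro-Doppler spectrogram: 64 Doppler bins x 256 time frames.
rng = np.random.default_rng(2)
spectrogram = np.abs(rng.normal(size=(64, 256)))

# PCA across time frames: keep a few dominant Doppler components.
# The number of components (8) is an assumed value for illustration.
pca = PCA(n_components=8)
pca_features = pca.fit_transform(spectrogram.T)   # shape (256, 8)

# DWT along time for each retained component ('db4', 3 levels assumed),
# concatenating the sub-band coefficients into one feature vector.
dwt_features = np.concatenate(
    [np.concatenate(pywt.wavedec(pca_features[:, k], "db4", level=3))
     for k in range(pca_features.shape[1])]
)
print(dwt_features.shape)
```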

27 pages, 4938 KiB  
Article
Effects of Training Set Size on Supervised Machine-Learning Land-Cover Classification of Large-Area High-Resolution Remotely Sensed Data
by Christopher A. Ramezan, Timothy A. Warner, Aaron E. Maxwell and Bradley S. Price
Remote Sens. 2021, 13(3), 368; https://doi.org/10.3390/rs13030368 - 21 Jan 2021
Cited by 70 | Viewed by 8826
Abstract
The size of the training data set is a major determinant of classification accuracy. Nevertheless, the collection of a large training data set for supervised classifiers can be a challenge, especially for studies covering a large area, which may be typical of many real-world applied projects. This work investigates how variations in training set size, ranging from a large sample size (n = 10,000) to a very small sample size (n = 40), affect the performance of six supervised machine-learning algorithms applied to classify large-area high-spatial-resolution (HR) (1–5 m) remotely sensed data within the context of a geographic object-based image analysis (GEOBIA) approach. GEOBIA, in which adjacent similar pixels are grouped into image-objects that form the unit of the classification, offers the potential benefit of allowing multiple additional variables, such as measures of object geometry and texture, thus increasing the dimensionality of the classification input data. The six supervised machine-learning algorithms are support vector machines (SVM), random forests (RF), k-nearest neighbors (k-NN), single-layer perceptron neural networks (NEU), learning vector quantization (LVQ), and gradient-boosted trees (GBM). RF, the algorithm with the highest overall accuracy, was notable for its negligible decrease in overall accuracy, 1.0%, when the training sample size decreased from 10,000 to 315 samples. GBM provided similar overall accuracy to RF; however, the algorithm was very expensive in terms of training time and computational resources, especially with large training sets. In contrast to RF and GBM, NEU and SVM were particularly sensitive to decreasing sample size, with NEU classifications generally producing overall accuracies that were on average slightly higher than SVM classifications for larger sample sizes, but lower than SVM for the smallest sample sizes. NEU, however, required a longer processing time. The k-NN classifier saw less of a drop in overall accuracy than NEU and SVM as training set size decreased; however, the overall accuracies of k-NN were typically lower than those of the RF, NEU, and SVM classifiers. LVQ generally had the lowest overall accuracy of all six methods, but was relatively insensitive to sample size, down to the smallest sample sizes. Overall, due to its relatively high accuracy with small training sample sets, minimal variations in overall accuracy between very large and small sample sets, and relatively short processing time, RF was a good classifier for large-area land-cover classifications of HR remotely sensed data, especially when training data are scarce. However, as the performance of different supervised classifiers varies in response to training set size, investigating multiple classification algorithms is recommended to achieve optimal accuracy for a project.
(This article belongs to the Special Issue Remote Sensing Data and Classification Algorithms)
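The core experimental design, retraining classifiers on progressively smaller training sets and tracking overall accuracy, can be sketched with scikit-learn as below; the synthetic features stand in for the paper's GEOBIA object variables, only two of the six algorithms are shown, and the size grid (apart from 10,000, 315, and 40) is an assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for GEOBIA object features (spectral, geometry, texture).
X, y = make_classification(n_samples=12000, n_features=30, n_informative=15,
                           n_classes=5, random_state=0)
X_train_full, y_train_full = X[:10000], y[:10000]
X_test, y_test = X[10000:], y[10000:]

classifiers = {"RF": RandomForestClassifier(n_estimators=200, random_state=0),
               "SVM": SVC(kernel="rbf", gamma="scale")}

# Shrink the training set from 10,000 down to 40 samples and track accuracy.
for n in (10000, 2500, 630, 315, 160, 80, 40):
    idx = np.random.default_rng(0).choice(len(X_train_full), size=n, replace=False)
    for name, clf in classifiers.items():
        clf.fit(X_train_full[idx], y_train_full[idx])
        acc = accuracy_score(y_test, clf.predict(X_test))
        print(f"n={n:5d}  {name}: overall accuracy = {acc:.3f}")
```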

24 pages, 30333 KiB  
Article
Learning Rotation Domain Deep Mutual Information Using Convolutional LSTM for Unsupervised PolSAR Image Classification
by Lei Wang, Xin Xu, Rong Gui, Rui Yang and Fangling Pu
Remote Sens. 2020, 12(24), 4075; https://doi.org/10.3390/rs12244075 - 12 Dec 2020
Cited by 9 | Viewed by 2239
Abstract
Deep learning can achieve state-of-the-art performance in polarimetric synthetic aperture radar (PolSAR) image classification with plenty of labeled data. However, obtaining a large number of accurately labeled PolSAR samples is very hard, which limits the practical use of deep learning. Therefore, deep-learning-based unsupervised PolSAR image classification is worthy of further investigation. Inspired by the superior performance of deep mutual information in natural image feature learning and clustering, an end-to-end Convolutional Long Short-Term Memory (ConvLSTM) network is used to learn the deep mutual information of polarimetric coherency matrices in the rotation domain with different polarimetric orientation angles (POAs) for unsupervised PolSAR image classification. First, for each pixel, paired “POA-spatio” samples are generated from the polarimetric coherency matrices with different POAs. Second, a specially designed ConvLSTM network, along with deep mutual information losses, is used to learn a discriminative deep mutual information feature representation of the paired data. Finally, the classification results can be output directly from the trained network model. The proposed method is trained in an end-to-end manner and does not have cumbersome pipelines. Experiments on four real PolSAR datasets show that the performance of the proposed method surpasses that of some state-of-the-art deep learning unsupervised classification methods.
(This article belongs to the Special Issue Remote Sensing Data and Classification Algorithms)
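The mutual-information objective can be illustrated with the usual joint-probability formulation over paired soft cluster assignments; the NumPy sketch below is a generic version of such a loss (in the spirit of invariant information clustering), not the authors' ConvLSTM pipeline, and the assignments are synthetic.

```python
import numpy as np

def mutual_information_loss(p1, p2, eps=1e-12):
    """Negative mutual information between paired soft cluster assignments.

    p1, p2: (n_samples, n_clusters) softmax outputs for the two views of
    each pixel (here the two views would correspond to different
    polarimetric orientation angles). Generic formulation for illustration.
    """
    # Joint distribution over cluster pairs, symmetrized and normalized.
    joint = p1.T @ p2 / len(p1)
    joint = (joint + joint.T) / 2.0
    joint = joint / joint.sum()
    # Marginal distributions.
    pi = joint.sum(axis=1, keepdims=True)
    pj = joint.sum(axis=0, keepdims=True)
    # MI = sum_ij P_ij * (log P_ij - log P_i - log P_j); return its negative.
    mi = (joint * (np.log(joint + eps) - np.log(pi + eps) - np.log(pj + eps))).sum()
    return -mi

# Example with 1000 paired samples and 6 clusters (random soft assignments).
rng = np.random.default_rng(3)
logits = rng.normal(size=(1000, 6))
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(float(mutual_information_loss(p, p)))
```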
