
Classification and Feature Extraction for Remote Sensing Image Analysis

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 September 2018) | Viewed by 121432

Special Issue Editors

University of Toulouse, INP-ENSAT, UMR 1201 DYNAFOR, F-31326 Castanet Tolosan, France
Interests: signal and image processing; pattern recognition; remote sensing; kernel methods
CESBIO/CNES, BPI 811, 18 Avenue E. Berlin, 31401 Toulouse, CEDEX 9, France
Interests: land cover mapping; satellite image time series; image classification
Luxembourg Institute of Science and Technology (LIST), 41, rue du Brill, L-4422 Belvaux, Luxembourg
Interests: flood mapping; earthquakes damage detection; analysis of multitemporal data; classification; feature extraction; data fusion; segmentation; SAR and optical data; SAR interferometry
DigitalGlobe, Inc.
Interests: image classification; geospatial big data; time series; change detection; image

Special Issue Information

Dear Colleagues,

With the recent launch of several satellites with various modalities and resolutions, a huge amount of Earth observation remote sensing data must now be dealt with. For the same geographical area, satellite image time series, very high spatial resolution images, and hyperspectral images are now easily available. Some of them are freely available through open-access programs, such as Copernicus or the Landsat Open Archive. One of the most important applications of these remote sensing data is the classification of pixels in terms of land cover, land use, or cover changes.

This massive flow of data, in terms of spatial coverage and temporal sampling, opens the way to applications at a global scale. However, conventional algorithms designed for small or moderately sized remote sensing images do not scale well, so dedicated work is needed to fully exploit the data. In particular, the information relevant for classification may be hidden within the full set of features. Hence, feature extraction from the spectral, temporal, and spatial dimensions is mandatory to produce detailed and accurate thematic maps.

This Special Issue will focus on state-of-the-art and newly developed methods for classification and feature extraction for remote sensing images. It will cover (but will not be limited to) the following topics: spatial, temporal, and spectral feature extraction; data-driven feature extraction; and classification of multimodal remote sensing data for any thematic application (urban, agricultural, or ecological) and at any scale, from local to global.

Dr. Mathieu Fauvel
Dr. Jordi Inglada
Dr. Marco Chini
Dr. Fabio Pacifici
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • big Earth observation data
  • very high resolution
  • linear and non-linear feature extraction
  • high-performance computing
  • classification
  • active and passive remote sensing image time series

Published Papers (19 papers)

Research

14 pages, 10148 KiB  
Article
Fully Connected Conditional Random Fields for High-Resolution Remote Sensing Land Use/Land Cover Classification with Convolutional Neural Networks
by Bin Zhang, Cunpeng Wang, Yonglin Shen and Yueyan Liu
Remote Sens. 2018, 10(12), 1889; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10121889 - 27 Nov 2018
Cited by 21 | Viewed by 4298
Abstract
The interpretation of land use and land cover (LULC) is an important issue in the fields of high-resolution remote sensing (RS) image processing and land resource management. Fully training a new or existing convolutional neural network (CNN) architecture for LULC classification requires a large number of remote sensing images; thus, fine-tuning a pre-trained CNN for LULC detection is required. To improve the classification accuracy for high-resolution remote sensing images, it is also necessary to use additional feature descriptors and to adopt a classifier for post-processing. A fully connected conditional random field (FC-CRF), which combines the fine-tuned CNN layers, spectral features, and fully connected pairwise potentials, is proposed for the classification of high-resolution remote sensing images. First, an existing CNN model is adopted, and its parameters are fine-tuned on the training datasets; the probabilities that image pixels belong to each class are then calculated. Second, the spectral features and the digital surface model (DSM) are combined with a support vector machine (SVM) classifier to determine the probabilities of belonging to each LULC class; combined with the probabilities produced by the fine-tuned CNN, new feature descriptors are built. Finally, the FC-CRF is introduced to produce the classification results, where the unary potentials are obtained from the new feature descriptors and the SVM classifier, and the pairwise potentials are obtained from the three-band RS imagery and the DSM. Experimental results show that the proposed classification scheme achieves good performance, with a total accuracy of about 85%. Full article
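The fusion of CNN and SVM class probabilities into CRF unary potentials can be sketched in a few lines of NumPy. This is only an illustration: the equal weighting is an assumption, and the fully connected pairwise term (which would require a dense-CRF library) is omitted.

```python
import numpy as np

def fused_unary(p_cnn, p_svm, w=0.5, eps=1e-12):
    """Fuse per-pixel class probabilities from a fine-tuned CNN and an
    SVM (trained on spectral + DSM features) into CRF unary potentials.
    p_cnn, p_svm: arrays of shape (H, W, n_classes) summing to 1 per pixel.
    Returns unary energies: lower energy = more likely class."""
    p = w * p_cnn + (1.0 - w) * p_svm      # simple convex combination
    p /= p.sum(axis=-1, keepdims=True)     # renormalise
    return -np.log(p + eps)                # unary potential = -log p

# toy example: a 1x2 "image" with 3 classes
p_cnn = np.array([[[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]])
p_svm = np.array([[[0.6, 0.3, 0.1], [0.2, 0.6, 0.2]]])
unary = fused_unary(p_cnn, p_svm)
labels = unary.argmin(axis=-1)             # MAP label without the pairwise term
```

In the full method, these unaries would be passed to the FC-CRF together with pairwise potentials built from the RGB imagery and the DSM.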

19 pages, 41006 KiB  
Article
Towards a 20 m Global Building Map from Sentinel-1 SAR Data
by Marco Chini, Ramona Pelich, Renaud Hostache, Patrick Matgen and Carlos Lopez-Martinez
Remote Sens. 2018, 10(11), 1833; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10111833 - 19 Nov 2018
Cited by 44 | Viewed by 8607
Abstract
This study introduces a technique for automatically mapping built-up areas using synthetic aperture radar (SAR) backscattering intensity and interferometric multi-temporal coherence generated from Sentinel-1 data in the framework of the Copernicus program. The underlying hypothesis is that, in SAR images, built-up areas exhibit very high backscattering values that are coherent in time. Several particular characteristics of the Sentinel-1 satellite mission are put to good use, such as its high revisit time, the availability of dual-polarized data, and its small orbital tube. The newly developed algorithm is based on an adaptive parametric thresholding that first identifies pixels with high backscattering values in both VV and VH polarimetric channels. The interferometric SAR coherence is then used to reduce false alarms. These are caused by land cover classes (other than buildings) that are characterized by high backscattering values that are not coherent in time (e.g., certain types of vegetated areas). The algorithm was tested on Sentinel-1 Interferometric Wide Swath data from five different test sites located in semiarid and arid regions in the Mediterranean region and Northern Africa. The resulting building maps were compared with the Global Urban Footprint (GUF) derived from the TerraSAR-X mission data and, on average, a 92% agreement was obtained. Full article
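The double criterion described above (bright in both polarimetric channels, then coherent in time) can be sketched with boolean masks. The constant thresholds below are illustrative placeholders; the paper derives them adaptively via parametric thresholding.

```python
import numpy as np

def builtup_mask(vv_db, vh_db, coherence,
                 t_vv=-5.0, t_vh=-12.0, t_coh=0.5):
    """Toy built-up detector: a pixel is a candidate if backscatter is
    high in BOTH VV and VH, and is kept only if it is also temporally
    coherent, which suppresses bright-but-incoherent false alarms
    (e.g. certain vegetated areas). Thresholds are illustrative only."""
    candidate = (vv_db > t_vv) & (vh_db > t_vh)   # bright in both channels
    return candidate & (coherence > t_coh)        # coherent in time

vv  = np.array([[-3.0, -2.0], [-8.0, -1.0]])      # dB backscatter
vh  = np.array([[-10.0, -9.0], [-15.0, -8.0]])
coh = np.array([[0.8, 0.3], [0.9, 0.7]])
mask = builtup_mask(vv, vh, coh)
```

The second pixel is bright in both channels but incoherent, so it is rejected; the third is coherent but too dark in VV.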

18 pages, 14531 KiB  
Article
Exploiting SAR Tomography for Supervised Land-Cover Classification
by Olivier D’Hondt, Ronny Hänsch, Nicolas Wagener and Olaf Hellwich
Remote Sens. 2018, 10(11), 1742; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10111742 - 05 Nov 2018
Cited by 5 | Viewed by 4348
Abstract
In this paper, we provide the first in-depth evaluation of exploiting Tomographic Synthetic Aperture Radar (TomoSAR) for the task of supervised land-cover classification. Our main contribution is the design of specific TomoSAR features to reach this objective. In particular, we show that classification based on TomoSAR significantly outperforms PolSAR data provided relevant features are extracted from the tomograms. We also provide a comparison of classification results obtained from covariance matrices versus tomogram features as well as obtained by different reference methods, i.e., the traditional Wishart classifier and the more sophisticated Random Forest. Extensive qualitative and quantitative results are shown on a fully polarimetric and multi-baseline dataset from the E-SAR sensor from the German Aerospace Center (DLR). Full article

22 pages, 5818 KiB  
Article
Multi-Stream Convolutional Neural Network for SAR Automatic Target Recognition
by Pengfei Zhao, Kai Liu, Hao Zou and Xiantong Zhen
Remote Sens. 2018, 10(9), 1473; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10091473 - 14 Sep 2018
Cited by 64 | Viewed by 7050
Abstract
Despite the fact that automatic target recognition (ATR) in synthetic aperture radar (SAR) images has been extensively researched due to its practical use in both military and civil applications, it remains an unsolved problem. The major challenges of ATR in SAR stem from severe data scarcity and the great variation of SAR images. Recent work has started to adopt convolutional neural networks (CNNs), which, however, remain unable to handle the aforementioned challenges due to their high dependency on large quantities of data. In this paper, we propose a novel deep convolutional learning architecture, called Multi-Stream CNN (MS-CNN), for ATR in SAR by leveraging SAR images from multiple views. Specifically, we deploy a multi-input architecture that fuses information from multiple views of the same target in different aspects; the elaborated multi-view design of MS-CNN therefore enables it to make full use of limited SAR image data to improve recognition performance. We design a Fourier feature fusion framework, derived from kernel approximation based on random Fourier features, which allows us to unravel the highly nonlinear relationship between images and classes. More importantly, MS-CNN offers easy and quick manoeuvrability in real SAR ATR scenarios, because it only needs to acquire real-time GPS information from the airborne SAR to calculate the aspect differences used for constructing testing samples. The effectiveness and generalization ability of MS-CNN have been demonstrated by extensive experiments under both the Standard Operating Condition (SOC) and Extended Operating Condition (EOC) on the MSTAR dataset. Experimental results show that the proposed MS-CNN achieves high recognition rates and outperforms other state-of-the-art ATR methods. Full article
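The kernel approximation behind the Fourier feature fusion is the classic random Fourier feature construction: an explicit map whose inner products approximate an RBF kernel. The sketch below shows that construction only, not the authors' fusion architecture; the bandwidth and feature count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_map(X, n_features=2000, sigma=1.0, rng=rng):
    """Random Fourier feature map z(x) = sqrt(2/D) * cos(xW + b), whose
    inner products approximate the RBF kernel exp(-||x-y||^2 / (2 sigma^2))
    (Rahimi-Recht construction)."""
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(5, 3))
Z = rff_map(X)
K_approx = Z @ Z.T                                             # linear in feature space
K_exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / 2.0)
```

With a few thousand random features, the approximate Gram matrix is already close to the exact kernel, which is what lets a linear fusion layer capture a highly nonlinear image-class relationship.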

16 pages, 5130 KiB  
Article
Hyperspectral Image Classification Based on Parameter-Optimized 3D-CNNs Combined with Transfer Learning and Virtual Samples
by Xuefeng Liu, Qiaoqiao Sun, Yue Meng, Min Fu and Salah Bourennane
Remote Sens. 2018, 10(9), 1425; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10091425 - 07 Sep 2018
Cited by 36 | Viewed by 5163
Abstract
Recent research has shown that spatial-spectral information can help to improve the classification of hyperspectral images (HSIs). Therefore, three-dimensional convolutional neural networks (3D-CNNs) have been applied to HSI classification. However, a lack of HSI training samples restricts the performance of 3D-CNNs. To solve this problem and improve the classification, an improved method based on 3D-CNNs combined with parameter optimization, transfer learning, and virtual samples is proposed in this paper. Firstly, to optimize the network performance, the parameters of the 3D-CNN for the HSI to be classified (target data) are adjusted according to the single-variable principle. Secondly, in order to relieve the problem caused by insufficient samples, the weights in the bottom layers of the parameter-optimized 3D-CNN for the target data are transferred from another 3D-CNN, well trained on an HSI (source data) with enough samples and the same feature space as the target data. Then, virtual samples are generated from the original samples of the target data to further alleviate the lack of HSI training samples. Finally, the parameter-optimized 3D-CNN with transfer learning is trained on the training samples consisting of the virtual and the original samples. Experimental results on real-world hyperspectral satellite images show that the proposed method has great potential for HSI classification. Full article
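The transfer step above amounts to copying the bottom-layer weights of a well-trained source network into the target network before fine-tuning. A minimal sketch, with networks modelled as ordered name-to-array dicts (in a real framework this would be a partial state-dict load; all names here are hypothetical):

```python
import numpy as np

def transfer_bottom_layers(target_params, source_params, n_transfer):
    """Copy the first n_transfer (bottom) layers' weights from the
    source-domain network into the target-domain network; the upper
    layers keep their own initialisation and are trained from scratch."""
    out = dict(target_params)
    for name in list(source_params)[:n_transfer]:
        out[name] = source_params[name].copy()   # transferred, then fine-tuned
    return out

rng = np.random.default_rng(1)
source = {f"conv{i}": rng.normal(size=(2, 2)) for i in range(4)}
target = {f"conv{i}": np.zeros((2, 2)) for i in range(4)}
new_target = transfer_bottom_layers(target, source, n_transfer=2)
```

Only `conv0` and `conv1` are transferred; `conv2` and `conv3` remain at their target initialisation.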

28 pages, 19299 KiB  
Article
A Super-Resolution and Fusion Approach to Enhancing Hyperspectral Images
by Chiman Kwan, Joon Hee Choi, Stanley H. Chan, Jin Zhou and Bence Budavari
Remote Sens. 2018, 10(9), 1416; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10091416 - 06 Sep 2018
Cited by 63 | Viewed by 5146
Abstract
High resolution (HR) hyperspectral (HS) images have found widespread applications in terrestrial remote sensing applications, including vegetation monitoring, military surveillance and reconnaissance, fire damage assessment, and many others. They also find applications in planetary missions such as Mars surface characterization. However, resolutions of most HS imagers are limited to tens of meters. Existing resolution enhancement techniques either require additional multispectral (MS) band images or use a panchromatic (pan) band image. The former poses hardware challenges, whereas the latter may have limited performance. In this paper, we present a new resolution enhancement algorithm for HS images that only requires an HR color image and a low resolution (LR) HS image cube. Our approach integrates two newly developed techniques: (1) A hybrid color mapping (HCM) algorithm, and (2) A Plug-and-Play algorithm for single image super-resolution. Comprehensive experiments (objective (five performance metrics), subjective (synthesized fused images in multiple spectral ranges), and pixel clustering) using real HS images and comparative studies with 20 representative algorithms in the literature were conducted to validate and evaluate the proposed method. Results demonstrated that the new algorithm is very promising. Full article

25 pages, 24979 KiB  
Article
Dense Connectivity Based Two-Stream Deep Feature Fusion Framework for Aerial Scene Classification
by Yunlong Yu and Fuxian Liu
Remote Sens. 2018, 10(7), 1158; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10071158 - 23 Jul 2018
Cited by 63 | Viewed by 6315
Abstract
Aerial scene classification is an active and challenging problem in high-resolution remote sensing imagery understanding. Deep learning models, especially convolutional neural networks (CNNs), have achieved prominent performance in this field. The extraction of deep features from the layers of a CNN model is widely used in these CNN-based methods. Although the CNN-based approaches have obtained great success, there is still plenty of room to further increase the classification accuracy. In fact, fusion with other features has great potential to improve the performance of aerial scene classification. Therefore, we propose two effective architectures based on the idea of feature-level fusion. The first architecture, i.e., the texture coded two-stream deep architecture, uses the raw RGB network stream and the mapped local binary patterns (LBP) coded network stream to extract two different sets of features and fuses them using a novel deep feature fusion model. In the second architecture, i.e., the saliency coded two-stream deep architecture, we employ the saliency coded network stream as the second stream and fuse it with the raw RGB network stream using the same feature fusion model. For the sake of validation and comparison, our proposed architectures are evaluated via comprehensive experiments with three publicly available remote sensing scene datasets. The classification accuracies of the saliency coded two-stream architecture with our feature fusion model reach 97.79%, 98.90%, 94.09%, 95.99%, 85.02%, and 87.01% on the UC-Merced dataset (50% and 80% training samples), the Aerial Image Dataset (AID) (20% and 50% training samples), and the NWPU-RESISC45 dataset (10% and 20% training samples), respectively, outperforming state-of-the-art methods. Full article
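The texture stream is driven by LBP codes. The vanilla 8-neighbour local binary pattern can be computed as below; note the paper uses a *mapped* LBP variant suitable for CNN input, so this is only the basic code behind it.

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour local binary pattern: each interior pixel gets
    an 8-bit code, one bit per neighbour, set when the neighbour is >=
    the centre pixel. Border pixels are dropped for simplicity."""
    # neighbour offsets, clockwise from top-left; bit i <- offset i
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offs):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= ((neigh >= centre).astype(np.uint8) << bit)
    return out

img = np.array([[9, 1, 1],
                [1, 5, 1],
                [1, 1, 9]], dtype=np.uint8)
code = lbp8(img)   # bits 0 (top-left) and 4 (bottom-right) are set
```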

25 pages, 9937 KiB  
Article
A Novel Object-Based Supervised Classification Method with Active Learning and Random Forest for PolSAR Imagery
by Wensong Liu, Jie Yang, Pingxiang Li, Yue Han, Jinqi Zhao and Hongtao Shi
Remote Sens. 2018, 10(7), 1092; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10071092 - 09 Jul 2018
Cited by 21 | Viewed by 4669
Abstract
Most of the traditional supervised classification methods using full-polarimetric synthetic aperture radar (PolSAR) imagery are dependent on sufficient training samples, whereas the results of pixel-based supervised classification methods show a high false alarm rate due to the influence of speckle noise. In this paper, to solve these problems, an object-based supervised classification method with an active learning (AL) method and random forest (RF) classifier is presented, which can enhance the classification performance for PolSAR imagery. The first step of the proposed method is used to reduce the influence of speckle noise through the generalized statistical region merging (GSRM) algorithm. A reliable training set is then selected from the different polarimetric features of the PolSAR imagery by the AL method. Finally, the RF classifier is applied to identify the different types of land cover in the three PolSAR images acquired by different sensors. The experimental results demonstrate that the proposed method can not only better suppress the influence of speckle noise, but can also significantly improve the overall accuracy and Kappa coefficient of the classification results, when compared with the traditional supervised classification methods. Full article
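The active-learning step selects a reliable training set by querying the samples the classifier is least certain about. A generic margin-sampling (breaking-ties) heuristic is sketched below; it is a standard AL criterion, not necessarily the exact one used in the paper.

```python
import numpy as np

def margin_sampling(proba, n_query):
    """Query the samples whose top two class probabilities are closest,
    i.e. where the classifier is least decided. `proba` has shape
    (n_samples, n_classes), e.g. per-object class probabilities from a
    random forest. Returns the indices of the n_query most ambiguous
    samples, most ambiguous first."""
    part = np.sort(proba, axis=1)
    margin = part[:, -1] - part[:, -2]     # best minus second-best
    return np.argsort(margin)[:n_query]    # smallest margins first

proba = np.array([[0.90, 0.05, 0.05],      # confident
                  [0.40, 0.35, 0.25],      # ambiguous
                  [0.34, 0.33, 0.33]])     # most ambiguous
query = margin_sampling(proba, 2)
```

The queried samples would then be labelled and added to the training set before retraining the random forest.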

22 pages, 8345 KiB  
Article
Bilateral Filter Regularized L2 Sparse Nonnegative Matrix Factorization for Hyperspectral Unmixing
by Zuoyu Zhang, Shouyi Liao, Hexin Zhang, Shicheng Wang and Yongchao Wang
Remote Sens. 2018, 10(6), 816; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10060816 - 24 May 2018
Cited by 18 | Viewed by 3490
Abstract
Hyperspectral unmixing (HU) is one of the most active hyperspectral image (HSI) processing research fields; it aims to identify the materials and their corresponding proportions in each HSI pixel. Extensions of nonnegative matrix factorization (NMF) have proved effective for HU, usually exploiting the sparsity of the abundances and the correlation between pixels to alleviate the non-convexity of the problem. However, the commonly used L1/2 sparse constraint introduces additional local minima because of its non-convexity, and the correlation between pixels is not fully utilized because the spatial and structural information are treated separately. To overcome these limitations, a novel bilateral filter regularized L2 sparse NMF is proposed for HU. Firstly, the L2-norm is utilized to improve the sparsity of the abundance matrix. Secondly, a bilateral filter regularizer is adopted to explore both the spatial information and the manifold structure of the abundance maps. In addition, NeNMF is used to solve the objective function in order to improve the convergence rate. The results of simulated and real data experiments demonstrate the advantage of the proposed method. Full article

18 pages, 3202 KiB  
Article
Deep Cube-Pair Network for Hyperspectral Imagery Classification
by Wei Wei, Jinyang Zhang, Lei Zhang, Chunna Tian and Yanning Zhang
Remote Sens. 2018, 10(5), 783; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10050783 - 18 May 2018
Cited by 30 | Viewed by 6877
Abstract
Advanced classification methods, which can fully utilize the 3D characteristics of a hyperspectral image (HSI) and generalize well to test data given only limited labeled training samples (i.e., a small training dataset), have long been the research objective for the HSI classification problem. Witnessing the success of deep-learning-based methods, a cube-pair-based convolutional neural network (CNN) classification architecture is proposed to meet this objective in this study, where cube pairs are used to address the small-training-dataset problem as well as preserve the 3D local structure of HSI data. Within this architecture, a 3D fully convolutional network is further modeled, which has fewer parameters than a traditional CNN. Given the same amount of training samples, the modeled network can go deeper than a traditional CNN and thus has superior generalization ability. Experimental results on several HSI datasets demonstrate that the proposed method achieves superior classification results compared with other state-of-the-art competing methods. Full article

22 pages, 14801 KiB  
Article
Multilayer Perceptron Neural Network for Surface Water Extraction in Landsat 8 OLI Satellite Images
by Wei Jiang, Guojin He, Tengfei Long, Yuan Ni, Huichan Liu, Yan Peng, Kenan Lv and Guizhou Wang
Remote Sens. 2018, 10(5), 755; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10050755 - 15 May 2018
Cited by 85 | Viewed by 9004
Abstract
Surface water mapping is essential for monitoring climate change, water resources, ecosystem services and the hydrological cycle. In this study, we adopt a multilayer perceptron (MLP) neural network to identify surface water in Landsat 8 satellite images. To evaluate the performance of the proposed method when extracting surface water, eight images of typical regions are collected, and a water index and support vector machine are employed for comparison. Through visual inspection and a quantitative index, the performance of the proposed algorithm in terms of the entire scene classification, various surface water types and noise suppression is comprehensively compared with those of the water index and support vector machine. Moreover, band optimization, image preprocessing and a training sample for the proposed algorithm are analyzed and discussed. We find that (1) based on the quantitative evaluation, the performance of the surface water extraction for the entire scene when using the MLP is better than that when using the water index or support vector machine. The overall accuracy of the MLP ranges from 98.25% to 100%, and the kappa coefficients of the MLP range from 0.965 to 1. (2) The MLP can precisely extract various surface water types and effectively suppress noise caused by shadows and ice/snow. (3) The 1–7-band composite provides a better band optimization strategy for the proposed algorithm, and image preprocessing and high-quality training samples benefit the classification accuracy. In future studies, the automation and universality of the proposed algorithm can be further enhanced with the generation of training samples based on newly-released global surface water products. Therefore, this method has the potential to map surface water based on Landsat series images or other high-resolution images and can be implemented for global surface water mapping, which will help us better understand our changing planet. Full article
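The water-index baseline that the MLP is compared against is typically a normalized difference of the green and near-infrared bands (McFeeters' NDWI) with a threshold. A minimal sketch; the threshold value and band reflectances below are illustrative only, and the paper's exact index choice is not specified here.

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """NDWI = (green - NIR) / (green + NIR); water pixels reflect
    strongly in green and weakly in NIR, so NDWI above the threshold is
    labelled water. A small epsilon avoids division by zero."""
    index = (green - nir) / (green + nir + 1e-12)
    return index > threshold

green = np.array([0.30, 0.10, 0.25])   # toy surface reflectances
nir   = np.array([0.05, 0.40, 0.25])
water = ndwi_water_mask(green, nir)    # only the first pixel is water
```

Per-pixel classifiers such as the MLP improve on this kind of baseline mainly on shadowed and ice/snow pixels, where a single index threshold misclassifies.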

20 pages, 2455 KiB  
Article
Training Small Networks for Scene Classification of Remote Sensing Images via Knowledge Distillation
by Guanzhou Chen, Xiaodong Zhang, Xiaoliang Tan, Yufeng Cheng, Fan Dai, Kun Zhu, Yuanfu Gong and Qing Wang
Remote Sens. 2018, 10(5), 719; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10050719 - 07 May 2018
Cited by 47 | Viewed by 7852
Abstract
Scene classification, which aims to identify the land-cover categories of remotely sensed image patches, is now a fundamental task in the remote sensing image analysis field. Deep-learning-model-based algorithms are widely applied in scene classification and achieve remarkable performance, but these high-level methods are computationally expensive and time-consuming. Consequently, in this paper, we introduce a knowledge distillation framework, currently a mainstream model compression method, into remote sensing scene classification to improve the performance of smaller and shallower network models. Our knowledge distillation training method makes the high-temperature softmax output of a small and shallow student model match that of a large and deep teacher model. In our experiments, we evaluate the knowledge distillation training method for remote sensing scene classification on four public datasets: the AID, UCMerced, NWPU-RESISC, and EuroSAT datasets. The results show that our proposed training method is effective and increases overall accuracy (by 3% in the AID experiments, 5% in the UCMerced experiments, and 1% in the NWPU-RESISC and EuroSAT experiments) for small and shallow models. We further explored the performance of the student model on small and unbalanced datasets. Our findings indicate that knowledge distillation can improve the performance of small network models on datasets with lower-spatial-resolution images, numerous categories, and fewer training samples. Full article
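The high-temperature softmax matching at the core of knowledge distillation can be sketched as a cross-entropy between teacher and student soft targets. The temperature value is a typical choice, not the paper's; in practice this soft loss is combined with the usual hard-label loss.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T gives softer distributions."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Soft-target distillation loss: cross-entropy between the
    teacher's and the student's high-temperature softmax outputs."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return -(p_t * np.log(p_s + 1e-12)).sum(axis=-1).mean()

teacher   = np.array([[5.0, 1.0, -2.0]])
aligned   = np.array([[5.0, 1.0, -2.0]])    # student agrees with teacher
shuffled  = np.array([[-2.0, 1.0, 5.0]])    # student disagrees
loss_match = distillation_loss(aligned, teacher)
loss_mismatch = distillation_loss(shuffled, teacher)
```

Minimising this loss pushes the small student's softened output distribution toward the large teacher's, transferring the "dark knowledge" in the non-argmax probabilities.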

22 pages, 4212 KiB  
Article
Supervised Classification High-Resolution Remote-Sensing Image Based on Interval Type-2 Fuzzy Membership Function
by Chunyan Wang, Aigong Xu and Xiaoli Li
Remote Sens. 2018, 10(5), 710; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10050710 - 04 May 2018
Cited by 14 | Viewed by 5826
Abstract
Because of the degradation of classification accuracy caused by the uncertainty of pixel class and classification decisions in high-resolution remote-sensing images, we propose a supervised classification method based on an interval type-2 fuzzy membership function for such images. We analyze the data features of a high-resolution remote-sensing image and construct a type-1 membership function model in a homogeneous region by supervised sampling in order to characterize the uncertainty of the pixel class. On the basis of the fuzzy membership function model in the homogeneous region, and in accordance with the 3σ criterion of the normal distribution, we propose a method for modeling three types of interval type-2 membership functions, and we analyze the different types of functions to better express the uncertainty of the pixel class captured by the type-1 fuzzy membership function and to enhance the accuracy of the classification decision. Following the principle that importance increases as the distance decreases between the original, upper, and lower fuzzy memberships of the training data and the corresponding frequency value in the histogram, we use the weighted average of the three types of fuzzy membership as the new fuzzy membership of the pixel to be classified, and we then integrate the neighborhood pixel relations to construct a classification decision model. We use the proposed method to classify real high-resolution remote-sensing images and synthetic images, and we qualitatively and quantitatively evaluate the test results. The results show that a higher classification accuracy can be achieved with the proposed algorithm. Full article
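An interval type-2 membership function replaces a single type-1 membership curve with an upper and a lower bound. One standard construction, sketched below, perturbs the standard deviation of a type-1 Gaussian membership; the paper instead derives its three interval types from the 3σ criterion of the fitted normal model, so treat this as an illustrative stand-in.

```python
import numpy as np

def gauss(x, mu, sigma):
    """Type-1 Gaussian membership function (peaks at 1 when x == mu)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def it2_bounds(x, mu, sigma, spread=0.5):
    """Interval type-2 Gaussian set with uncertain standard deviation
    sigma * (1 +/- spread): the wider Gaussian gives the upper
    membership, the narrower one the lower membership."""
    upper = gauss(x, mu, sigma * (1.0 + spread))
    lower = gauss(x, mu, sigma * (1.0 - spread))
    return lower, upper

x = np.linspace(-3, 3, 7)
lower, upper = it2_bounds(x, mu=0.0, sigma=1.0)
```

The gap between `lower` and `upper` (the footprint of uncertainty) is what lets the classifier represent the uncertainty of a pixel's class membership rather than a single crisp degree.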

24 pages, 32944 KiB  
Article
Object-Based Features for House Detection from RGB High-Resolution Images
by Renxi Chen, Xinhui Li and Jonathan Li
Remote Sens. 2018, 10(3), 451; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10030451 - 13 Mar 2018
Cited by 54 | Viewed by 10130
Abstract
Automatic building extraction from satellite images, an open research topic in remote sensing, has remained a challenge and received substantial attention for decades. This paper presents an object-based, machine-learning approach for automatic house detection from RGB high-resolution images. The images are first segmented by an algorithm combining a thresholding watershed transformation with hierarchical merging, and shadows and vegetation are then eliminated from the initial segmented regions to generate building candidates. Subsequently, features are extracted from the candidate regions to generate training data. To capture the characteristics of house regions well, we propose two new kinds of features, namely edge regularity indices (ERI) and shadow line indices (SLI). Finally, three classifiers, namely AdaBoost, random forests, and Support Vector Machine (SVM), are employed to identify houses in test images, and quality assessments are conducted. The experiments show that our method is effective and applicable for house identification. The proposed ERI and SLI features improve precision and recall by 5.6% and 11.2%, respectively. Full article
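A simple shape-regularity descriptor in the same spirit as the ERI feature can be sketched as a rectangularity ratio (region area over bounding-box area); note this is an illustrative proxy, not the paper's exact ERI definition.

```python
import numpy as np

def rectangularity(mask):
    """Region area divided by the area of its axis-aligned bounding box --
    a simple shape-regularity proxy (NOT the paper's exact ERI formula).
    Regular, rectangular house footprints score close to 1."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    bbox_area = (r1 - r0 + 1) * (c1 - c0 + 1)
    return mask.sum() / bbox_area

square = np.zeros((10, 10), dtype=bool)
square[2:8, 2:8] = True            # regular, house-like footprint
print(rectangularity(square))      # → 1.0
```

Such per-segment scalars are exactly the kind of object-based features that feed the AdaBoost/random forest/SVM classifiers described in the abstract.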

20 pages, 7442 KiB  
Article
Classifying Wheat Hyperspectral Pixels of Healthy Heads and Fusarium Head Blight Disease Using a Deep Neural Network in the Wild Field
by Xiu Jin, Lu Jie, Shuai Wang, Hai Jun Qi and Shao Wen Li
Remote Sens. 2018, 10(3), 395; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10030395 - 04 Mar 2018
Cited by 132 | Viewed by 8106
Abstract
Classifying healthy and diseased wheat heads rapidly and non-destructively for early diagnosis of Fusarium head blight disease is difficult. Our work applies a deep neural network classification algorithm to the pixels of hyperspectral images to accurately discern the diseased area. The spectra of hyperspectral image pixels in a manually selected region of interest are preprocessed via mean removal to eliminate interference due to the time interval and the environment. The generalization of the classification model is considered, and two improvements are made to the model framework. First, the pixel spectra are reshaped into a two-dimensional data structure for the input layer of a Convolutional Neural Network (CNN). After training both types of CNNs, the assessment shows that a two-dimensional CNN model is more efficient than a one-dimensional CNN. Second, a hybrid neural network with a convolutional layer and a bidirectional recurrent layer is constructed to improve the generalization of the model. Considering the characteristics of the dataset and models, the confusion matrices based on the testing dataset indicate that the classification model is effective for background and disease classification of hyperspectral image pixels. The two-dimensional convolutional bidirectional gated recurrent unit neural network (2D-CNN-BidGRU) achieves an F1 score of 0.75 and an accuracy of 0.743 on the total testing dataset. A comparison of all the models shows that the hybrid 2D-CNN-BidGRU is the best at preventing over-fitting and optimizing generalization. Our results illustrate that a hybrid deep neural network is an excellent classification algorithm for distinguishing healthy heads from Fusarium head blight disease in field hyperspectral imagery. Full article
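The two preprocessing steps named in the abstract (per-pixel mean removal, then reshaping each 1-D spectrum into a 2-D array for the CNN input layer) can be sketched as below; the band count and target image size are illustrative assumptions, and zero-padding is one plausible way to handle a band count that is not a perfect square.

```python
import numpy as np

def preprocess_and_reshape(spectra, side):
    """Per-pixel mean removal, then reshape each spectrum (n_bands,) into a
    (side, side) image for a 2D CNN, zero-padding when n_bands < side*side."""
    centered = spectra - spectra.mean(axis=1, keepdims=True)
    n, n_bands = centered.shape
    padded = np.zeros((n, side * side), dtype=centered.dtype)
    padded[:, :n_bands] = centered
    return padded.reshape(n, side, side)

pixels = np.random.rand(4, 203)           # 203 bands is an illustrative count
images = preprocess_and_reshape(pixels, 15)
print(images.shape)                       # (4, 15, 15)
```

The resulting (side, side) arrays can be fed to any standard 2-D convolutional stack in place of the raw 1-D spectra.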

21 pages, 5996 KiB  
Article
Sparse Subspace Clustering-Based Feature Extraction for PolSAR Imagery Classification
by Bo Ren, Biao Hou, Jin Zhao and Licheng Jiao
Remote Sens. 2018, 10(3), 391; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10030391 - 02 Mar 2018
Cited by 10 | Viewed by 4919
Abstract
Features play an important role in the learning and pattern recognition methods used for polarimetric synthetic aperture radar (PolSAR) image interpretation. In this paper, building on subspace clustering algorithms, we combine sparse representation, low-rank representation, and manifold graphs to investigate the intrinsic properties of PolSAR data. In this framework, features are projected through a projection matrix with sparse and/or low-rank characteristics into a low-dimensional space. Meanwhile, different kinds of manifold graphs explore the geometric structure of PolSAR data to make the projected features more discriminative. The learned matrices, constrained by the sparsity and low-rank terms, can select a few representative samples and capture the global structure. The proposed algorithms construct a projection matrix from the subspace clustering algorithms to obtain features that benefit the subsequent PolSAR image classification. Experiments test different combinations of these constraints and demonstrate that the proposed algorithms outperform other state-of-the-art linear and nonlinear approaches, with better quantitative and visual performance on PolSAR data from spaceborne and airborne platforms. Full article
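The manifold-graph ingredient mentioned above is typically a k-nearest-neighbor affinity graph whose Laplacian regularizes the projection. A minimal sketch, assuming a Gaussian-weighted k-NN graph (the paper's specific graph constructions may differ):

```python
import numpy as np

def knn_graph_laplacian(X, k=3):
    """Build a symmetrized k-NN affinity graph over samples X and return the
    unnormalized graph Laplacian L = D - W, the kind of manifold term used
    to keep nearby samples close after projection."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    W = np.zeros_like(d2)
    for i in range(len(X)):
        nn = np.argsort(d2[i])[1:k + 1]                   # skip self (dist 0)
        W[i, nn] = np.exp(-d2[i, nn])
    W = np.maximum(W, W.T)                                # symmetrize
    return np.diag(W.sum(axis=1)) - W

X = np.random.rand(10, 5)      # 10 hypothetical PolSAR feature vectors
L = knn_graph_laplacian(X)
```

Adding a trace term tr(PᵀXᵀLXP) to the sparse/low-rank objective then penalizes projections P that separate neighboring samples.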

26 pages, 7779 KiB  
Article
Land Cover Classification Using Integrated Spectral, Temporal, and Spatial Features Derived from Remotely Sensed Images
by Yongguang Zhai, Zhongyi Qu and Lei Hao
Remote Sens. 2018, 10(3), 383; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10030383 - 01 Mar 2018
Cited by 31 | Viewed by 6503
Abstract
Obtaining accurate and timely land cover information is an important topic in many remote sensing applications. Using satellite image time series data should achieve high-accuracy land cover classification. However, most satellite image time-series classification methods do not fully exploit the available data to mine the features that effectively identify different land cover types. A classification method is therefore needed that takes full advantage of the rich information provided by time-series data to improve the accuracy of land cover classification. In this paper, a novel method for time-series land cover classification using spectral, temporal, and spatial information at an annual scale is introduced. Based on all the available time-series remote sensing images, a refined nonlinear dimensionality reduction method is used to extract the spectral and temporal features, and a modified graph segmentation method is used to extract the spatial features. The proposed classification method was applied in three study areas with complex land cover: Illinois, South Dakota, and Texas. All Landsat time-series data from 2014 were used, with different amounts of invalid data in each study area. A series of comparative experiments were conducted on the annual time-series images using training data generated from the Cropland Data Layer. The results demonstrate higher overall and per-class classification accuracies and kappa index values for the proposed spectral-temporal-spatial method compared to spectral-temporal classification methods. We also discuss the implications of this study and possibilities for future applications and developments of the method. Full article
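One common way to inject segment-level spatial information into a per-pixel time-series classification, in the spirit of the spatial step above, is a within-segment majority vote. This sketch is a generic illustration under that assumption, not the paper's exact procedure:

```python
import numpy as np

def segment_majority_vote(pixel_labels, segments):
    """Replace each pixel's label with the majority label of its segment.
    `segments` can come from any segmentation method (e.g. graph-based)."""
    out = pixel_labels.copy()
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        values, counts = np.unique(pixel_labels[mask], return_counts=True)
        out[mask] = values[np.argmax(counts)]
    return out

labels = np.array([0, 0, 1, 1, 1, 2])   # noisy per-pixel predictions
segs   = np.array([0, 0, 0, 1, 1, 1])   # two spatial segments
print(segment_majority_vote(labels, segs))   # [0 0 0 1 1 1]
```

The vote cleans up isolated per-pixel errors inside homogeneous land-cover objects, which is where spectral-temporal-only methods tend to be noisy.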

22 pages, 1619 KiB  
Article
SAR Target Recognition via Local Sparse Representation of Multi-Manifold Regularized Low-Rank Approximation
by Meiting Yu, Ganggang Dong, Haiyan Fan and Gangyao Kuang
Remote Sens. 2018, 10(2), 211; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10020211 - 01 Feb 2018
Cited by 55 | Viewed by 4882
Abstract
The extraction of a valuable set of features and the design of a discriminative classifier are crucial for target recognition in SAR images. Although various features and classifiers have been proposed over the years, target recognition under extended operating conditions (EOCs) is still a challenging problem, e.g., targets with configuration variations, different capture orientations, and articulation. To address these problems, this paper presents a new strategy for target recognition. We first propose a low-dimensional representation model that incorporates a multi-manifold regularization term into the low-rank matrix factorization framework. Two rules, pairwise similarity and local linearity, are employed for constructing the multiple manifold regularization. By alternately optimizing the matrix factorization and manifold selection, the feature representation model not only acquires the optimal low-rank approximation of the original samples, but also captures the intrinsic manifold structure. Then, to take full advantage of the local structure of the features and further improve the discriminative ability, local sparse representation is proposed for classification. Finally, extensive experiments on the moving and stationary target acquisition and recognition (MSTAR) database demonstrate the effectiveness of the proposed strategy, including target recognition under EOCs and robustness to small training sizes. Full article
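The low-rank factorization backbone can be sketched with a truncated SVD, which gives the best rank-r approximation of the sample matrix; the paper's model adds the multi-manifold regularization and alternating optimization on top of this, which are not shown here.

```python
import numpy as np

def low_rank_approx(X, r):
    """Best rank-r approximation of X via truncated SVD (Eckart-Young).
    Returns the approximation and the r-dimensional sample features."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    features = U[:, :r] * s[:r]          # low-dimensional representation
    return features @ Vt[:r], features

X = np.random.rand(20, 8)                # 20 hypothetical SAR sample vectors
X_hat, F = low_rank_approx(X, 3)
print(F.shape)                           # (20, 3)
```

The rows of `F` would then be classified with the local sparse representation step described in the abstract.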

17 pages, 5142 KiB  
Article
Classification of PolSAR Images Using Multilayer Autoencoders and a Self-Paced Learning Approach
by Wenshuai Chen, Shuiping Gou, Xinlin Wang, Xiaofeng Li and Licheng Jiao
Remote Sens. 2018, 10(1), 110; https://0-doi-org.brum.beds.ac.uk/10.3390/rs10010110 - 15 Jan 2018
Cited by 33 | Viewed by 5329
Abstract
In this paper, a novel polarimetric synthetic aperture radar (PolSAR) image classification method based on multilayer autoencoders and self-paced learning (SPL) is proposed. The multilayer autoencoder network is used to learn features that convert the raw data into more abstract expressions. Softmax regression is then applied to produce the predicted probability distribution over all classes for each pixel. When optimizing the multilayer autoencoder network, self-paced learning is used to accelerate convergence and achieve stronger generalization. Under this learning paradigm, the network learns easier samples first and gradually involves more difficult samples in the training process. The proposed method achieves overall classification accuracies of 94.73%, 94.82% and 78.12% on the Flevoland dataset from AIRSAR, the Flevoland dataset from RADARSAT-2, and the Yellow River delta dataset, respectively. These results are comparable with other state-of-the-art methods. Full article
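The easy-samples-first paradigm can be sketched with the classic hard self-paced weighting scheme: samples whose current loss falls below an "age" parameter λ get weight 1, the rest are excluded, and λ grows each epoch so harder samples join later. This is the standard SPL formulation, offered as an illustration rather than the paper's exact variant.

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Hard SPL weighting: include only samples whose current loss is below
    the age parameter lam; growing lam admits harder samples over time."""
    return (losses < lam).astype(float)

losses = np.array([0.1, 0.9, 0.3, 1.5])   # hypothetical per-sample losses
print(self_paced_weights(losses, lam=0.5))   # [1. 0. 1. 0.]  (easy only)
print(self_paced_weights(losses, lam=2.0))   # [1. 1. 1. 1.]  (all samples)
```

In training, the weighted loss `(weights * losses).sum()` is minimized at each step, so early epochs fit only the confidently classified pixels.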
