Classification and Feature Extraction Based on Remote Sensing Imagery

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 March 2021) | Viewed by 27789

Special Issue Editors

Dr. Bryan Gardiner
Lecturer/Course Director MSc Data Science, Ulster University, Northland Rd, Derry BT48 8HE, UK
Interests: digital image processing; computer vision; feature extraction; pattern recognition; object classification; machine learning; cognitive robotics; multi-modal sensing
Dr. Chris McGonigle
Senior Lecturer, Ulster University, Cromore Road, Coleraine, Northern Ireland BT52 1SA, UK
Interests: benthic habitat mapping; benthic ecology; marine geophysics; landscape ecology

Special Issue Information

Dear Colleagues,

Classification and feature extraction for remote sensing image analysis is applicable to a wide range of different environments and ecological systems, at a range of spatial and temporal scales. Emerging methodological approaches include big data analytics, deep learning, machine learning, and object-based image analysis (OBIA), many of which are now commonplace in a range of different contexts, from geomorphological time-lapse analysis to the broad-scale characterization of terrestrial and aquatic ecosystems. These approaches are allowing environmental, earth, and marine scientists to unlock the potential capacity for research into vitally important areas, such as climate change, susceptibility to geohazards, biodiversity loss, and habitat fragmentation.

For remote sensing image analysis, the process of feature extraction and classification is applicable at the scale of the landscape (e.g., geomorphometry) and also to ground validation where this is achieved by optical means (e.g., photoquadrats). Boundaries between these spatial scales of observation and analysis are increasingly becoming blurred with developments in sensors and computing power, allowing for mapping of larger contexts at higher resolutions. Independent of spatial scale, feature extraction from landscape-level data and from ground validation imagery is united by its potential for automation in the analytical process.

In spite of recent technological advances, a great challenge remains in the development of new computational procedures for gaining a more accurate representation of complex environments. Recent breakthroughs in computer vision methods and deep learning models for image fusion, image classification, and object detection assist with obtaining a much more accurate model of environmental features than could be achieved previously; however, further investigation is required on the development of new algorithms for automatic feature extraction, monitoring, and integration of high-quality multi-modal data.

This Special Issue focuses on feature extraction and classification using remote sensing data and novel machine learning techniques. It aims to explore the potential of new ideas and technologies from the field of machine learning and pattern recognition in remote sensing applications in a variety of different environments and spatial scales (from landscape geomorphometry to ground validation) and to further investigate the overlap between remote sensing and computer vision/image analysis.

This Special Issue will include, but not be limited to, the following topics:

  • Feature extraction approaches related to the characterization of terrestrial and marine ecosystems;
  • Novel technologies or procedures for dynamic acquisition and processing of 3D point clouds, from a variety of sensors (e.g., LiDAR, laser line scanner, multibeam echosounder, photogrammetry);
  • Pattern recognition/machine learning/deep learning for remote sensing;
  • Innovative approaches to the classification of remote sensing data, from the scale of landscapes to ground validation data;
  • Novel approaches for the quantification of biodiversity from remote sensing data;
  • Automated approaches to analysis of ecological information from photographs.

Contributions with an emphasis on open-source code and data sharing are particularly welcome.

Dr. Bryan Gardiner
Dr. Chris McGonigle
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • feature extraction
  • object detection
  • 3D point clouds
  • machine learning
  • remote sensing
  • deep learning
  • laser line scanner (LLS)
  • benthic mapping

Published Papers (8 papers)

Research

18 pages, 37106 KiB  
Article
Horizon Line Detection in Historical Terrestrial Images in Mountainous Terrain Based on the Region Covariance
by Sebastian Mikolka-Flöry and Norbert Pfeifer
Remote Sens. 2021, 13(9), 1705; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13091705 - 28 Apr 2021
Viewed by 2577
Abstract
Horizon line detection is an important prerequisite for numerous tasks, including the automatic estimation of the unknown camera parameters for images taken in mountainous terrain. In contrast to modern images, historical photographs contain no color information and have reduced image quality. In particular, missing color information in combination with high alpine terrain, partly covered with snow or glaciers, poses a challenge for automatic horizon detection. Therefore, a robust and accurate approach for horizon line detection in historical monochrome images in mountainous terrain was developed. For the detection of potential horizon pixels, an edge detector is learned based on the region covariance as a texture descriptor. In combination with a shortest-path search, the horizon in monochrome images is accurately detected. We evaluated our approach on 250 selected historical monochrome images dating, on average, back to 1950. In 85% of the images, the horizon was detected with an error of less than 10 pixels. To further evaluate the performance, an additional dataset consisting of modern color images was used. Our method, using only grayscale information, achieves results comparable with those of methods based on color information. In comparison with other methods using only grayscale information, the accuracy of the detected horizons is significantly improved. Furthermore, the influence of color, the choice of neighborhood for the shortest-path calculation, and the patch size for the calculation of the region covariance were investigated. The results show that both the availability of color information and increasing the patch size for the calculation of the region covariance improve the accuracy of the detected horizons.
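As an illustration of the region covariance descriptor at the core of this approach, the short sketch below computes a covariance matrix of per-pixel features over a grayscale patch. The feature set (intensity plus first- and second-order derivatives) is an assumption for illustration only, not necessarily the authors' configuration.

```python
# Minimal sketch of a region-covariance texture descriptor for a grayscale
# image patch. The per-pixel feature set (intensity plus first- and
# second-order derivatives) is an illustrative assumption.
import numpy as np


def region_covariance(patch: np.ndarray) -> np.ndarray:
    """Return the covariance matrix of per-pixel features inside `patch`."""
    gy, gx = np.gradient(patch.astype(float))        # first derivatives
    gyy, _ = np.gradient(gy)
    _, gxx = np.gradient(gx)
    feats = np.stack([patch.ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel(),
                      np.abs(gxx).ravel(), np.abs(gyy).ravel()], axis=0)
    return np.cov(feats)                              # 5 x 5 descriptor


# Toy usage: descriptors of a "sky" and a "rock" patch could then feed an
# edge classifier for candidate horizon pixels.
sky = np.random.normal(200, 2, (15, 15))
rock = np.random.normal(90, 25, (15, 15))
print(region_covariance(sky).shape, region_covariance(rock)[0, 0])
```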
(This article belongs to the Special Issue Classification and Feature Extraction Based on Remote Sensing Imagery)

22 pages, 16654 KiB  
Article
Examining the Links between Multi-Frequency Multibeam Backscatter Data and Sediment Grain Size
by Robert Mzungu Runya, Chris McGonigle, Rory Quinn, John Howe, Jenny Collier, Clive Fox, James Dooley, Rory O’Loughlin, Jay Calvert, Louise Scott, Colin Abernethy and Will Evans
Remote Sens. 2021, 13(8), 1539; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13081539 - 15 Apr 2021
Cited by 9 | Viewed by 4244
Abstract
Acoustic methods are routinely used to provide broad-scale information on the geographical distribution of benthic marine habitats and sedimentary environments. Although single-frequency multibeam echosounder surveys have dominated seabed characterisation for decades, multifrequency approaches are now gaining favour in order to capture different frequency responses from the same seabed type. The aim of this study is to develop a robust modelling framework for testing the potential application and value of multifrequency (30, 95, and 300 kHz) multibeam backscatter responses for characterizing sediment grain size in an area with strong geomorphological gradients and benthic ecological variability. We fit generalized linear models to the multibeam backscatter and its derivatives to examine the explanatory power of single-frequency and multifrequency models with respect to the mean sediment grain size obtained from grab samples. A strong and statistically significant (p < 0.05) correlation between the mean backscatter and the absolute values of the mean sediment grain size was noted. The root mean squared error (RMSE) values identified the 30 kHz model as the best-performing model, explaining the most variation (84.3%) in mean grain size (p < 0.05, adjusted r² = 0.82). Overall, the single low-frequency sources showed a marginal gain over the multifrequency model, with the 30 kHz model driving the significance of the multifrequency model, and the inclusion of the higher frequencies diminished the level of agreement. We recommend collecting more detailed and sufficient ground-truth data to better predict sediment properties, discriminate benthic habitats, and enhance the reliability of multifrequency backscatter data for the monitoring and management of marine protected areas.
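For readers who wish to reproduce this kind of single-frequency model, the sketch below fits a Gaussian GLM of mean grain size against mean 30 kHz backscatter using statsmodels. The column names and synthetic data are illustrative assumptions, not the study's data or final model structure.

```python
# Minimal sketch of a single-frequency model of the kind described above:
# mean sediment grain size regressed on mean 30 kHz backscatter with a
# Gaussian GLM. The synthetic data and variable names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "backscatter_30khz_db": rng.uniform(-35, -15, 60),  # mean backscatter per grab site
})
# Synthetic grain size (phi units) loosely tied to backscatter, plus noise.
df["mean_grain_size_phi"] = 0.25 * df["backscatter_30khz_db"] + rng.normal(0, 0.8, 60) + 9

X = sm.add_constant(df[["backscatter_30khz_db"]])
model = sm.GLM(df["mean_grain_size_phi"], X, family=sm.families.Gaussian()).fit()
print(model.summary())
```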
(This article belongs to the Special Issue Classification and Feature Extraction Based on Remote Sensing Imagery)

20 pages, 8404 KiB  
Article
Development of a Parcel-Level Land Boundary Extraction Algorithm for Aerial Imagery of Regularly Arranged Agricultural Areas
by Rokgi Hong, Jinseok Park, Seongju Jang, Hyungjin Shin, Hakkwan Kim and Inhong Song
Remote Sens. 2021, 13(6), 1167; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13061167 - 18 Mar 2021
Cited by 20 | Viewed by 3834
Abstract
The boundary extraction of an object from remote sensing imagery has long been an important research topic. The automation of farmland boundary extraction is particularly in demand for rapid updates of the digital farm maps in Korea. This study aimed to develop a boundary extraction algorithm by systematically reconstructing a series of computational and mathematical methods, including the Suzuki85 algorithm, Canny edge detection, and the Hough transform. Since most irregular farmlands in Korea have been consolidated into large rectangular arrangements for agricultural productivity, the boundary between two adjacent land parcels was assumed to be a straight line. The developed algorithm was applied over six different study sites to evaluate its performance at the boundary level and the area level. The correctness, completeness, and quality of the extracted boundaries were approximately 80.7%, 79.7%, and 67.0% at the boundary level, and 89.7%, 90.0%, and 81.6% at the area level, respectively. These performances are comparable with the results of previous studies on similar subjects; thus, the algorithm can be used for land parcel boundary extraction. The developed algorithm tended to subdivide land parcels at distinctive features, such as greenhouse structures or isolated irregular land parcels within the land blocks. The developed algorithm is currently applicable only to regularly arranged land parcels, and further study coupled with a decision tree or artificial intelligence may allow for boundary extraction from irregularly shaped land parcels.
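A minimal sketch of the straight-boundary pipeline named above (Canny edge detection followed by a probabilistic Hough transform, using OpenCV) is given below; thresholds and the input file are illustrative assumptions rather than the paper's tuned parameters.

```python
# Minimal sketch of the straight-boundary assumption described above:
# Canny edges followed by a probabilistic Hough transform to recover
# candidate parcel boundary lines. Parameters and input path are assumed.
import cv2
import numpy as np

img = cv2.imread("aerial_tile.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input tile
blurred = cv2.GaussianBlur(img, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)

# Detect long, near-straight segments that can serve as parcel boundaries.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)

overlay = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(overlay, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite("parcel_boundaries.png", overlay)
```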
(This article belongs to the Special Issue Classification and Feature Extraction Based on Remote Sensing Imagery)

20 pages, 1743 KiB  
Article
MFANet: A Multi-Level Feature Aggregation Network for Semantic Segmentation of Land Cover
by Bingyu Chen, Min Xia and Junqing Huang
Remote Sens. 2021, 13(4), 731; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13040731 - 17 Feb 2021
Cited by 62 | Viewed by 5750
Abstract
Detailed information regarding land utilization/cover is a valuable resource in various fields. In recent years, remote sensing images, especially aerial images, have increased in resolution and now span longer periods and larger areas, and because objects of the same category can exhibit different spectra, relying on spectral features alone is often insufficient to accurately segment the target objects. In convolutional neural networks, down-sampling operations are usually used to extract abstract semantic features, which leads to a loss of detail and fuzzy edges. To solve these problems, this paper proposes a Multi-level Feature Aggregation Network (MFANet), which is improved in two aspects: deep feature extraction and up-sampling feature fusion. Firstly, the proposed Channel Feature Compression module extracts the deep features and filters the redundant channel information from the backbone to optimize the learned context. Secondly, the proposed Multi-level Feature Aggregation Upsample module applies, in a nested fashion, the idea that high-level features provide guidance information for low-level features, which is of great significance for restoring spatial detail in high-resolution remote sensing images. Finally, the proposed Channel Ladder Refinement module is used to refine the restored high-resolution feature maps. Experimental results show that the proposed method achieves state-of-the-art performance of 86.45% mean IoU on the LandCover dataset.
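The general idea behind the two modules described above, channel compression with a 1x1 convolution and high-level features guiding upsampled fusion, can be sketched roughly as follows in PyTorch; layer sizes and structure are illustrative assumptions, not the MFANet architecture itself.

```python
# Rough sketch of the general idea only: a 1x1 convolution compresses
# channel-heavy backbone features, and high-level features are upsampled
# and fused with low-level ones. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelCompress(nn.Module):
    """Reduce redundant channels with a 1x1 convolution."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.proj = nn.Sequential(nn.Conv2d(in_ch, out_ch, kernel_size=1),
                                  nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.proj(x)


class AggregateUp(nn.Module):
    """Upsample high-level features and let them guide low-level ones."""
    def __init__(self, ch: int):
        super().__init__()
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1)

    def forward(self, low, high):
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                             align_corners=False)
        return self.fuse(torch.cat([low, high], dim=1))


low = torch.randn(1, 64, 64, 64)        # low-level, high-resolution features
high = ChannelCompress(512, 64)(torch.randn(1, 512, 16, 16))
print(AggregateUp(64)(low, high).shape)  # torch.Size([1, 64, 64, 64])
```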
(This article belongs to the Special Issue Classification and Feature Extraction Based on Remote Sensing Imagery)

21 pages, 8074 KiB  
Article
Dual-Weighted Kernel Extreme Learning Machine for Hyperspectral Imagery Classification
by Xumin Yu, Yan Feng, Yanlong Gao, Yingbiao Jia and Shaohui Mei
Remote Sens. 2021, 13(3), 508; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13030508 - 01 Feb 2021
Cited by 21 | Viewed by 2662
Abstract
Due to its excellent performance in high-dimensional space, the kernel extreme learning machine has been widely used in pattern recognition and machine learning. In this paper, we propose a dual-weighted kernel extreme learning machine for hyperspectral imagery classification. First, diverse spatial features are extracted by guided filtering. Then, the spatial features and spectral features are combined in a weighted kernel summation form. Finally, the weighted extreme learning machine is employed for the hyperspectral imagery classification task. This dual-weighted framework guarantees that subtle spatial features are extracted, while the importance of minority samples is emphasized. Experiments carried out on three public data sets demonstrate that the proposed dual-weighted kernel extreme learning machine (DW-KELM) performs better than other kernel methods in terms of classification accuracy and can achieve satisfactory results.
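The two weighting ideas, a weighted summation of spectral and spatial kernels and a sample-weighted kernel ELM solved in closed form, can be sketched as follows; the weighting scheme, kernel parameters, and class-balance weights are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of a weighted composite kernel plus a sample-weighted
# kernel ELM with a closed-form solution. All parameters are assumptions.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel


def weighted_kelm_train(X_spec, X_spat, y, mu=0.6, C=100.0, gamma=1.0):
    n_classes = y.max() + 1
    T = np.eye(n_classes)[y]                        # one-hot targets
    K = mu * rbf_kernel(X_spec, gamma=gamma) + (1 - mu) * rbf_kernel(X_spat, gamma=gamma)
    # Emphasise minority classes by weighting samples inversely to class size.
    counts = np.bincount(y, minlength=n_classes)
    W = np.diag(1.0 / counts[y])
    beta = np.linalg.solve(np.eye(len(y)) / C + W @ K, W @ T)
    return beta


def weighted_kelm_predict(X_spec, X_spat, Xtr_spec, Xtr_spat, beta, mu=0.6, gamma=1.0):
    K = mu * rbf_kernel(X_spec, Xtr_spec, gamma=gamma) + \
        (1 - mu) * rbf_kernel(X_spat, Xtr_spat, gamma=gamma)
    return (K @ beta).argmax(axis=1)
```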
(This article belongs to the Special Issue Classification and Feature Extraction Based on Remote Sensing Imagery)

24 pages, 15164 KiB  
Article
PolSAR Image Classification Using a Superpixel-Based Composite Kernel and Elastic Net
by Yice Cao, Yan Wu, Ming Li, Wenkai Liang and Peng Zhang
Remote Sens. 2021, 13(3), 380; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13030380 - 22 Jan 2021
Cited by 2 | Viewed by 2049
Abstract
The presence of speckles and the absence of discriminative features make it difficult for pixel-level polarimetric synthetic aperture radar (PolSAR) image classification to achieve more accurate and coherent interpretation results, especially in the case of limited available training samples. To this end, this paper presents a composite kernel-based elastic net classifier (CK-ENC) for better PolSAR image classification. First, based on superpixel segmentation at different scales, three types of features are extracted to consider more discriminative information, thereby effectively suppressing the interference of speckles and achieving better target contour preservation. Then, a composite kernel (CK) is constructed to map these features and effectively implement feature fusion under the kernel framework. The CK exploits the correlation and diversity between different features to improve the representation and discrimination capabilities of features. Finally, an ENC integrated with the CK (CK-ENC) is proposed to achieve better PolSAR image classification performance with limited training samples. Experimental results on airborne and spaceborne PolSAR datasets demonstrate that the proposed CK-ENC can achieve better visual coherence and yield higher classification accuracies than other state-of-the-art methods, especially in the case of limited training samples.
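A rough sketch of the composite kernel idea combined with an elastic-net representation classifier is given below; the feature inputs, kernel weights, and penalty settings are assumptions for illustration and do not reproduce the paper's exact CK-ENC formulation.

```python
# Rough sketch: a weighted sum of RBF kernels over several feature types,
# followed by an elastic-net representation of each test sample over the
# training set and a minimum-residual class decision. Settings are assumed.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.metrics.pairwise import rbf_kernel


def composite_kernel(feats_a, feats_b, weights=(0.5, 0.3, 0.2), gamma=0.5):
    """Weighted sum of RBF kernels over several feature types."""
    return sum(w * rbf_kernel(fa, fb, gamma=gamma)
               for w, (fa, fb) in zip(weights, zip(feats_a, feats_b)))


def ck_enc_predict(train_feats, y_train, test_feats, alpha=1e-3, l1_ratio=0.5):
    K_tr = composite_kernel(train_feats, train_feats)   # n_train x n_train
    K_te = composite_kernel(test_feats, train_feats)    # n_test  x n_train
    classes = np.unique(y_train)
    labels = []
    for k in K_te:
        enet = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False)
        coef = enet.fit(K_tr, k).coef_                   # representation coefficients
        # Assign the class whose training samples best reconstruct the test row.
        residuals = [np.linalg.norm(k - K_tr[:, y_train == c] @ coef[y_train == c])
                     for c in classes]
        labels.append(classes[int(np.argmin(residuals))])
    return np.array(labels)
```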
(This article belongs to the Special Issue Classification and Feature Extraction Based on Remote Sensing Imagery)

Other

12 pages, 4453 KiB  
Technical Note
Identification of Typical Solid Hazardous Chemicals Based on Hyperspectral Imaging
by Yanlong Sun, Xinming Qian, Yangyang Liu, Jianwei Wang, Qunbo Lv and Mengqi Yuan
Remote Sens. 2021, 13(13), 2608; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13132608 - 02 Jul 2021
Cited by 2 | Viewed by 1672
Abstract
The identification of hazardous chemicals based on hyperspectral imaging is an important emerging means for the prevention of explosion accidents and the early warning of secondary hazards. In this study, we used a combination of spectral curve matching based on full-waveform characteristics and spectral matching based on spectral characteristics to identify hazardous chemicals, and proposed a method to quantitatively characterize the matching degree of the spectral curves of hazardous chemicals. The results showed that four of the hazardous chemicals (sulfur, red phosphorus, potassium permanganate, and corn starch) had bright colors, distinct spectral curve characteristics, and obvious changes in reflectivity, and were therefore easy to identify. Moreover, the matching degree of their spectral curves was positively correlated with their reflectivity. However, the spectral characteristics of carbon powder, strontium nitrate, wheat starch, and magnesium–aluminum alloy powder were not obvious, with no distinct characteristic peaks or trends of change in reflectivity. While the reflectivity and matching degree of the carbon powder remained low, the reflectivities of the remaining three samples were very similar, making them difficult to identify from the spectral curves alone; color information should therefore be considered for further identification.
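A minimal sketch of how the matching degree of a measured reflectance curve against a reference spectrum might be quantified is given below, combining the spectral angle with the full-waveform correlation; the weighting of the two terms is an assumption for illustration, not the paper's exact metric.

```python
# Minimal sketch of a spectral matching degree combining the spectral angle
# and the Pearson correlation of the full waveform. Weights are assumed.
import numpy as np


def matching_degree(measured: np.ndarray, reference: np.ndarray, w_angle=0.5) -> float:
    """Return a 0..1 score; higher means a closer spectral match."""
    cos_angle = np.dot(measured, reference) / (
        np.linalg.norm(measured) * np.linalg.norm(reference))
    angle_score = 1.0 - np.arccos(np.clip(cos_angle, -1.0, 1.0)) / np.pi
    corr_score = (np.corrcoef(measured, reference)[0, 1] + 1.0) / 2.0
    return w_angle * angle_score + (1.0 - w_angle) * corr_score


# Toy usage with synthetic 200-band reflectance curves.
bands = np.linspace(400, 1000, 200)
sulfur_ref = 0.2 + 0.6 / (1 + np.exp(-(bands - 520) / 15))   # step-like reflectance edge
measured = sulfur_ref + np.random.normal(0, 0.02, bands.size)
print(round(matching_degree(measured, sulfur_ref), 3))
```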
(This article belongs to the Special Issue Classification and Feature Extraction Based on Remote Sensing Imagery)

15 pages, 8684 KiB  
Letter
JL-GFDN: A Novel Gabor Filter-Based Deep Network Using Joint Spectral-Spatial Local Binary Pattern for Hyperspectral Image Classification
by Tao Zhang, Puzhao Zhang, Weilin Zhong, Zhen Yang and Fan Yang
Remote Sens. 2020, 12(12), 2016; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12122016 - 23 Jun 2020
Cited by 10 | Viewed by 3327
Abstract
The traditional local binary pattern (LBP, hereinafter also called the two-dimensional local binary pattern, 2D-LBP) is unable to depict the spectral characteristics of a hyperspectral image (HSI). To remedy this deficiency, this paper develops a joint spectral-spatial 2D-LBP feature (J2D-LBP) by averaging three different 2D-LBP features in a three-dimensional hyperspectral data cube. Subsequently, J2D-LBP is added into the Gabor filter-based deep network (GFDN), and a novel classification method, JL-GFDN, is proposed. Different from the original GFDN framework, JL-GFDN further fuses the spectral and spatial features together for HSI classification. Three real data sets are adopted to evaluate the effectiveness of JL-GFDN, and the experimental results verify that (i) JL-GFDN achieves better classification accuracy than the original GFDN and (ii) J2D-LBP is more effective in HSI classification than the traditional 2D-LBP.
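A minimal sketch of the averaging idea behind J2D-LBP is given below, computing 2D-LBP codes on three orthogonal planes of the hyperspectral cube with scikit-image and averaging them; the plane choice, radius, and uniform mapping are illustrative assumptions, not necessarily the paper's exact J2D-LBP.

```python
# Minimal sketch: average 2D-LBP codes computed on the x-y, x-band, and
# y-band planes of a hyperspectral cube. Parameters are assumptions.
import numpy as np
from skimage.feature import local_binary_pattern

P, R = 8, 1  # neighbours and radius for the 2D-LBP


def j2d_lbp(cube: np.ndarray) -> np.ndarray:
    """cube has shape (rows, cols, bands); returns an averaged LBP cube."""
    lbp_xy = np.stack([local_binary_pattern(cube[:, :, b], P, R, method="uniform")
                       for b in range(cube.shape[2])], axis=2)
    lbp_xb = np.stack([local_binary_pattern(cube[r, :, :], P, R, method="uniform")
                       for r in range(cube.shape[0])], axis=0)
    lbp_yb = np.stack([local_binary_pattern(cube[:, c, :], P, R, method="uniform")
                       for c in range(cube.shape[1])], axis=1)
    return (lbp_xy + lbp_xb + lbp_yb) / 3.0


print(j2d_lbp(np.random.rand(32, 32, 20)).shape)  # (32, 32, 20)
```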
(This article belongs to the Special Issue Classification and Feature Extraction Based on Remote Sensing Imagery)
