Geospatial Intelligence in Remote Sensing: Scene Perception, Semantic Interpretation, and Sensor Fusion

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (31 May 2021) | Viewed by 7096

Special Issue Editors

Dr. Yusheng Xu
Department of Photogrammetry and Remote Sensing, Technical University of Munich, 80333 Munich, Germany
Interests: spaceborne photogrammetry; LiDAR; point cloud processing; 3D reconstruction

Dr. Rubén Fernández-Beltrán
Institute of New Imaging Technologies, University Jaume I, Castelló de la Plana, Spain
Interests: pattern recognition, image analysis, data fusion, and their applications in remote sensing; land-cover visual understanding; image classification and retrieval; spectral unmixing and image super-resolution

Dr. Jian Kang
Faculty of Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
Interests: machine learning, signal processing, and their applications in remote sensing; radar imaging; SAR interferometry and denoising; geophysical parameter estimation; semantic segmentation; scene classification and image retrieval

Prof. Wei Yao
Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong, China
Interests: LiDAR; 3D scene perception and analysis; environmental remote sensing; sensor fusion

Special Issue Information

Dear Colleagues,

Visual interpretation of 2D and 3D remote sensing data plays a vital role in a wide variety of critical urban mapping tasks, including construction monitoring, forest investigation, population estimation, and urban land-cover mapping. In recent years, the rapid growth of machine learning techniques, especially deep learning, has greatly expanded the potential for developing algorithms and broadening applications in urban mapping while requiring less manual effort. In addition, an increasing number of multi-source 2D and 3D urban datasets are becoming accessible, providing rich and diverse information that facilitates the development of data-driven methods and the application of smart data processing techniques. To this end, we present this Special Issue focusing on cutting-edge intelligent interpretation of 2D and 3D remote sensing data for urban mapping.

Potential topics for this Special Issue include, but are not limited to, the following:

  • Learning-based algorithms for low-level remote sensing image processing, including image super-resolution, image denoising, image fusion, etc.
  • Intelligent methods for 3D point cloud processing, including point cloud filtering, point cloud registration, etc.
  • Advanced semantic interpretation methods of 2D images and 3D point clouds in urban scenarios, including semantic segmentation, scene classification, object detection, etc.
  • Advanced data fusion methods based on machine learning techniques using 2D images and 3D point clouds
  • Innovative mapping applications using multi-source 2D images and 3D point clouds

 

Dr. Yusheng Xu
Dr. Rubén Fernández-Beltrán
Dr. Jian Kang
Prof. Wei Yao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Geospatial data
  • Artificial intelligence
  • Scene perception
  • Semantic interpretation
  • Sensor fusion

Published Papers (2 papers)


Research


16 pages, 38354 KiB  
Article
Component Decomposition-Based Hyperspectral Resolution Enhancement for Mineral Mapping
by Puhong Duan, Jibao Lai, Pedram Ghamisi, Xudong Kang, Robert Jackisch, Jian Kang and Richard Gloaguen
Remote Sens. 2020, 12(18), 2903; https://doi.org/10.3390/rs12182903 - 07 Sep 2020
Cited by 14 | Viewed by 3088
Abstract
Combining both spectral and spatial information with enhanced resolution provides not only elaborated qualitative information on surface mineralogy but also mineral interactions of abundance, mixture, and structure. This enhancement in resolution helps geomineralogic features such as small intrusions and mineralization become detectable. In this paper, we investigate the potential of resolution enhancement of hyperspectral images (HSIs) with the guidance of RGB images for mineral mapping. In more detail, a novel resolution enhancement method is proposed based on component decomposition. Inspired by the principle of the intrinsic image decomposition (IID) model, the HSI is viewed as the combination of a reflectance component and an illumination component. Based on this idea, the proposed method comprises several steps. First, the RGB image is transformed into the luminance component and the blue-difference and red-difference chroma components (YCbCr), and the luminance channel is considered the illumination component of the HSI with an ideal high spatial resolution. Then, the reflectance component of the ideal HSI is estimated from the downsampled HSI and the downsampled luminance channel. Finally, the HSI with high resolution can be reconstructed by utilizing the obtained illumination and reflectance components. Experimental results verify that the fused results can successfully achieve mineral mapping, producing qualitatively and quantitatively better results than single-sensor data.
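To make the decomposition idea concrete, the following is a minimal Python sketch (not the authors' implementation) of an IID-style fusion as outlined in the abstract: the RGB luminance supplies the high-resolution illumination, the band-wise reflectance is estimated at low resolution, and the two are recombined. The function names, the simple band-wise division used to estimate reflectance, and the bilinear resampling are illustrative assumptions.

```python
# Minimal sketch of an intrinsic-image-decomposition (IID) style fusion of a
# low-resolution hyperspectral image (HSI) with a high-resolution RGB image,
# loosely following the pipeline outlined in the abstract. All names and the
# reflectance estimation step are illustrative, not the authors' exact method.
import numpy as np
from scipy.ndimage import zoom


def rgb_to_luminance(rgb: np.ndarray) -> np.ndarray:
    """BT.601 luminance (the Y channel of YCbCr) from an RGB image in [0, 1]."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]


def iid_fusion(hsi_low: np.ndarray, rgb_high: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Fuse a low-resolution HSI (h, w, bands) with a high-resolution RGB image (H, W, 3)."""
    scale = rgb_high.shape[0] / hsi_low.shape[0]

    # 1. High-resolution illumination from the RGB luminance channel.
    illum_high = rgb_to_luminance(rgb_high)

    # 2. Low-resolution reflectance: divide the observed HSI by the
    #    downsampled luminance, band by band.
    illum_low = zoom(illum_high, 1.0 / scale, order=1)
    reflectance_low = hsi_low / (illum_low[..., None] + eps)

    # 3. Reconstruct the high-resolution HSI as upsampled reflectance
    #    modulated by the high-resolution illumination.
    reflectance_high = zoom(reflectance_low, (scale, scale, 1), order=1)
    return reflectance_high * illum_high[..., None]


if __name__ == "__main__":
    hsi_low = np.random.rand(50, 50, 100)   # synthetic low-resolution HSI
    rgb_high = np.random.rand(200, 200, 3)  # synthetic high-resolution RGB
    print(iid_fusion(hsi_low, rgb_high).shape)  # (200, 200, 100)
```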

Other


12 pages, 2353 KiB  
Letter
Deep Transfer Learning for Vulnerable Road Users Detection using Smartphone Sensors Data
by Mohammed Elhenawy, Huthaifa I. Ashqar, Mahmoud Masoud, Mohammed H. Almannaa, Andry Rakotonirainy and Hesham A. Rakha
Remote Sens. 2020, 12(21), 3508; https://doi.org/10.3390/rs12213508 - 25 Oct 2020
Cited by 7 | Viewed by 2468
Abstract
As the Autonomous Vehicle (AV) industry is rapidly advancing, the classification of non-motorized (vulnerable) road users (VRUs) becomes essential to ensure their safety and the smooth operation of road applications. The typical practice of non-motorized road user classification usually takes significant training time and ignores the temporal evolution and behavior of the signal. In this research effort, we attempt to detect VRUs with high accuracy by proposing a novel framework that uses Deep Transfer Learning, which saves training time and cost, to classify images constructed from Recurrence Quantification Analysis (RQA) that reflect the temporal dynamics and behavior of the signal. Recurrence Plots (RPs) were constructed from low-power smartphone sensors without using GPS data. The resulting RPs were used as inputs for different pre-trained Convolutional Neural Network (CNN) classifiers: 227 × 227 images for AlexNet and SqueezeNet, and 224 × 224 images for VGG16 and VGG19. Results show that the classification accuracy of Convolutional Neural Network Transfer Learning (CNN-TL) reaches 98.70%, 98.62%, 98.71%, and 98.71% for AlexNet, SqueezeNet, VGG16, and VGG19, respectively. Moreover, we trained resnet101 and shufflenet for a very short time using one epoch of data and then used them as weak learners, which yielded 98.49% classification accuracy. The results of the proposed framework outperform other results in the literature (to the best of our knowledge) and show that using CNN-TL is promising for VRU classification. Because of its relative straightforwardness, ability to be generalized and transferred, and potentially high accuracy, we anticipate that this framework might be able to solve various problems related to signal classification.
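As a rough illustration of the pre-processing described above, the following Python sketch (not the authors' code) builds a recurrence plot from a 1-D smartphone sensor signal and resizes it to a 224 × 224 three-channel image suitable for a VGG-style pretrained CNN. The distance threshold, the resizing method, and the channel replication are illustrative assumptions.

```python
# Minimal sketch of turning a 1-D sensor signal into a recurrence-plot image
# sized for a pretrained CNN (224 x 224 for VGG16/VGG19), in the spirit of the
# framework described in the abstract. Threshold and resizing are illustrative.
import numpy as np
from scipy.ndimage import zoom


def recurrence_plot(signal: np.ndarray, threshold: float | None = None) -> np.ndarray:
    """Binary recurrence plot: R[i, j] = 1 where |x_i - x_j| <= threshold."""
    dist = np.abs(signal[:, None] - signal[None, :])   # pairwise distances
    if threshold is None:
        threshold = 0.1 * dist.max()                   # illustrative default
    return (dist <= threshold).astype(np.float32)


def rp_image(signal: np.ndarray, size: int = 224) -> np.ndarray:
    """Resize the recurrence plot to (size, size, 3) so a pretrained CNN accepts it."""
    rp = recurrence_plot(signal)
    rp_resized = zoom(rp, size / rp.shape[0], order=1)[:size, :size]  # bilinear resize
    return np.stack([rp_resized] * 3, axis=-1)         # replicate to 3 channels


if __name__ == "__main__":
    accel = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.1 * np.random.randn(500)
    img = rp_image(accel, size=224)
    print(img.shape)  # (224, 224, 3), ready for a VGG-style feature extractor
```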
