Special Issue "Geospatial Intelligence in Remote Sensing: Scene Perception, Semantic Interpretation, and Sensor Fusion"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (31 May 2021).

Special Issue Editors

Dr. Yusheng Xu
Guest Editor
Photogrammetry and Remote Sensing, Technische Universität München, Munich, Germany
Interests: point cloud processing; spaceborne photogrammetry; computer vision; image analysis; 3D reconstruction
Dr. Rubén Fernández-Beltrán
Guest Editor
Institute of New Imaging Technologies, University Jaume I, Castelló de la Plana, Spain
Interests: pattern recognition, image analysis, data fusion, and their applications in remote sensing; land-cover visual understanding; image classification and retrieval; spectral unmixing and image super-resolution
Dr. Jian Kang
Guest Editor
Faculty of Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
Interests: machine learning, signal processing, and their applications in remote sensing; radar imaging; SAR interferometry and denoising; geophysical parameter estimation; semantic segmentation; scene classification and image retrieval
Prof. Dr. Wei Yao
Guest Editor
Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong, China
Interests: LiDAR; 3D scene perception and analysis; environmental remote sensing; sensor fusion

Special Issue Information

Dear Colleagues,

Visual interpretation of 2D and 3D remote sensing data plays a vital role in a wide variety of critical urban mapping tasks, including construction monitoring, forest investigation, population estimation, and urban land-cover mapping. In recent years, the rapid growth of machine learning techniques, especially deep learning, has unlocked the potential for developing algorithms and broadening applications in urban mapping with less manual effort. In addition, an increasing number of multi-source 2D and 3D urban datasets are becoming accessible, providing rich and diverse information that facilitates the development of data-driven methods and the application of smart data processing techniques. To this end, we present this Special Issue, focusing on cutting-edge intelligent interpretation for urban mapping based on 2D and 3D remote sensing data.

Potential topics for this Special Issue include, but are not limited to, the following:

  • Learning-based algorithms for low-level remote sensing image processing, including image super-resolution, image denoising, image fusion, etc.
  • Intelligent methods for 3D point cloud processing, including point cloud filtering, point cloud registration, etc.
  • Advanced semantic interpretation methods of 2D images and 3D point clouds in urban scenarios, including semantic segmentation, scene classification, object detection, etc.
  • Advanced data fusion methods based on machine learning techniques using 2D images and 3D point clouds
  • Innovative mapping applications using multi-source 2D images and 3D point clouds

 

Dr. Yusheng Xu
Dr. Rubén Fernández-Beltrán
Dr. Jian Kang
Prof. Dr. Wei Yao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Geospatial data
  • Artificial intelligence
  • Scene perception
  • Semantic interpretation
  • Sensor fusion

Published Papers (2 papers)


Research


Article
Component Decomposition-Based Hyperspectral Resolution Enhancement for Mineral Mapping
Remote Sens. 2020, 12(18), 2903; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12182903 - 07 Sep 2020
Abstract
Combining both spectral and spatial information with enhanced resolution provides not only elaborated qualitative information on surface mineralogy but also mineral interactions of abundance, mixture, and structure. This enhancement in resolution helps geomineralogic features such as small intrusions and mineralization become detectable. In this paper, we investigate the potential of the resolution enhancement of hyperspectral images (HSIs) with the guidance of RGB images for mineral mapping. In more detail, a novel resolution enhancement method is proposed based on component decomposition. Inspired by the principle of the intrinsic image decomposition (IID) model, the HSI is viewed as the combination of a reflectance component and an illumination component. Based on this idea, the proposed method comprises several steps. First, the RGB image is transformed into the luminance component, blue-difference and red-difference chroma components (YCbCr), and the luminance channel is considered as the illumination component of the HSI with an ideal high spatial resolution. Then, the reflectance component of the ideal HSI is estimated with the downsampled HSI image and the downsampled luminance channel. Finally, the HSI with high resolution can be reconstructed by utilizing the obtained illumination and reflectance components. Experimental results verify that the fused results successfully achieve mineral mapping, producing better results qualitatively and quantitatively than single-sensor data.
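The decomposition pipeline described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction only, not the authors' implementation: the function name, the use of BT.601 weights for the luminance (Y) channel, block-average downsampling, and nearest-neighbour upsampling are all our assumptions.

```python
import numpy as np

def enhance_hsi(hsi_low, rgb_high, scale):
    """IID-style fusion sketch.

    hsi_low:  (h, w, B) low-resolution hyperspectral cube
    rgb_high: (h*scale, w*scale, 3) co-registered RGB image
    Returns a (h*scale, w*scale, B) enhanced cube.
    """
    # 1. Luminance channel of the RGB image (ITU-R BT.601 weights),
    #    taken as the high-resolution illumination component.
    y_high = (0.299 * rgb_high[..., 0]
              + 0.587 * rgb_high[..., 1]
              + 0.114 * rgb_high[..., 2])
    # 2. Downsample the luminance to the HSI grid by block averaging.
    h, w, _ = hsi_low.shape
    y_low = y_high.reshape(h, scale, w, scale).mean(axis=(1, 3))
    # 3. Low-resolution reflectance: HSI divided by illumination.
    refl_low = hsi_low / (y_low[..., None] + 1e-8)
    # 4. Upsample the reflectance (nearest neighbour for simplicity).
    refl_high = refl_low.repeat(scale, axis=0).repeat(scale, axis=1)
    # 5. Reconstruct: reflectance times high-resolution illumination.
    return refl_high * y_high[..., None]
```

A real implementation would interpolate the reflectance with an edge-aware or bicubic scheme rather than nearest neighbour; the point here is only the illumination/reflectance split.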

Other


Letter
Deep Transfer Learning for Vulnerable Road Users Detection using Smartphone Sensors Data
Remote Sens. 2020, 12(21), 3508; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12213508 - 25 Oct 2020
Abstract
As the Autonomous Vehicle (AV) industry is rapidly advancing, the classification of non-motorized (vulnerable) road users (VRUs) becomes essential to ensure their safety and the smooth operation of road applications. The typical practice of non-motorized road users' classification usually takes significant training time and ignores the temporal evolution and behavior of the signal. In this research effort, we attempt to detect VRUs with high accuracy by proposing a novel framework that uses Deep Transfer Learning, which saves training time and cost, to classify images constructed from Recurrence Quantification Analysis (RQA) that reflect the temporal dynamics and behavior of the signal. Recurrence Plots (RPs) were constructed from low-power smartphone sensors without using GPS data. The resulting RPs were used as inputs for different pre-trained Convolutional Neural Network (CNN) classifiers, including 227 × 227 images for AlexNet and SqueezeNet, and 224 × 224 images for VGG16 and VGG19. Results show that the classification accuracy of Convolutional Neural Network Transfer Learning (CNN-TL) reaches 98.70%, 98.62%, 98.71%, and 98.71% for AlexNet, SqueezeNet, VGG16, and VGG19, respectively. Moreover, we trained ResNet-101 and ShuffleNet for a very short time using one epoch of data and then used them as weak learners, which yielded 98.49% classification accuracy. The results of the proposed framework outperform other results in the literature (to the best of our knowledge) and show that using CNN-TL is promising for VRU classification. Because of its relative straightforwardness, ability to be generalized and transferred, and potential high accuracy, we anticipate that this framework might be able to solve various problems related to signal classification.
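The recurrence-plot construction at the heart of this framework can be sketched as follows. This is a generic textbook formulation, not the paper's code: the `recurrence_plot` name and the default embedding dimension, delay, and threshold are illustrative assumptions, and the paper's RQA settings may differ.

```python
import numpy as np

def recurrence_plot(signal, dim=3, delay=1, eps=0.1):
    """Thresholded recurrence plot of a 1-D sensor signal.

    dim, delay: time-delay embedding parameters
    eps:        recurrence threshold (distance units of the signal)
    Returns a binary (n, n) matrix with n = len(signal) - (dim-1)*delay.
    """
    signal = np.asarray(signal, dtype=float)
    n = len(signal) - (dim - 1) * delay
    # Time-delay embedding: each row is one reconstructed state vector.
    states = np.stack(
        [signal[i * delay: i * delay + n] for i in range(dim)], axis=1)
    # Pairwise Euclidean distances between all state vectors.
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    # Binary recurrence matrix: 1 where two states are within eps.
    return (dists < eps).astype(np.uint8)
```

To feed such a matrix to a pre-trained CNN, it would then be rendered and resized to the network's input resolution (227 × 227 for AlexNet/SqueezeNet, 224 × 224 for VGG16/VGG19, per the abstract); that resizing step is omitted here.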
