Special Issue "Multi-Sensor Fusion Technology in Remote Sensing: Datasets, Algorithms and Applications"

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: 30 June 2022.

Special Issue Editors

Dr. Fahimeh Farahnakian
Guest Editor
Prof. Dr. Jukka Heikkonen
Guest Editor
Department of Information Technology, University of Turku, Turku, Finland
Interests: machine learning; computer vision; deep learning; multi-sensor fusion; data analysis
Prof. Dr. Dimitrios Makris
Guest Editor
Department of Computer Science, Kingston University, London, UK
Interests: computer vision; machine learning; pattern recognition; video and motion analysis; human motion analysis

Special Issue Information

Dear Colleagues,

Multi-sensor fusion technology is widely used in real-world applications such as remote sensing, military systems, robotics, and autonomous driving. Extensive research has been dedicated to intelligent and advanced multi-sensor fusion methods for accurate monitoring, complete information acquisition, and optimal decision-making. However, multi-sensor fusion methods face three main challenges: (1) automatic calibration of the sensors to bring their readings into a common coordinate frame, (2) feature extraction from heterogeneous types of sensory data, and (3) selection of a suitable fusion level.
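As an informal illustration of challenge (3), the three common fusion levels can be sketched as follows; the arrays, shapes, and classifier scores below are hypothetical stand-ins, not from any particular paper in this issue.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical readings from two co-registered sensors (e.g., optical and radar),
# already calibrated into a common coordinate frame (challenge 1).
optical = rng.random((64, 64, 4))   # 4 spectral bands
radar = rng.random((64, 64, 1))     # 1 backscatter channel

# Early (data-level) fusion: stack raw channels before any processing.
early = np.concatenate([optical, radar], axis=-1)   # shape (64, 64, 5)

# Feature-level fusion: extract features per sensor (challenge 2), then combine.
feat_opt = optical.mean(axis=(0, 1))                # per-band mean, shape (4,)
feat_rad = radar.mean(axis=(0, 1))                  # shape (1,)
feature = np.concatenate([feat_opt, feat_rad])      # shape (5,)

# Late (decision-level) fusion: each sensor yields its own class scores,
# which are then combined, here by simple averaging.
scores_opt = np.array([0.7, 0.3])   # hypothetical per-sensor classifier outputs
scores_rad = np.array([0.4, 0.6])
late = (scores_opt + scores_rad) / 2                # shape (2,)
```

The right level depends on sensor synchronization, data rates, and how correlated the modalities are; each submission is expected to motivate its own choice.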

The aim of this Special Issue is to provide an opportunity to explore these challenges in multi-sensor fusion for remote sensing. Topics include, but are not limited to, multi-sensor, multi-source, and multi-process information fusion. Articles are expected to emphasize one or more of three facets: data, architectures, and algorithms. Applications of multi-sensor fusion technologies and systems are also welcome.

Dr. Fahimeh Farahnakian
Prof. Dr. Jukka Heikkonen
Prof. Dr. Dimitrios Makris
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and completing the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multi-sensor fusion
  • Image fusion
  • Data fusion
  • Multi-source fusion
  • Remote sensing
  • Machine learning
  • Deep learning
  • Applications

Published Papers (3 papers)


Research

Article
The Survey of Lava Tube Distribution in Jeju Island by Multi-Source Data Fusion
Remote Sens. 2022, 14(3), 443; https://doi.org/10.3390/rs14030443 - 18 Jan 2022
Abstract
Lava tubes, a major geomorphic element of volcanic terrain, have recently been highlighted both as testbeds of habitable environments and as natural hazards prone to unpredictable collapse. In our case study, we detected and monitored the risk of lava tube collapse on Jeju, an island off the Korean peninsula’s southern tip with more than 200 lava tubes, by conducting Interferometric Synthetic Aperture Radar (InSAR) time series analysis and a synthesized analysis of its outputs fused with spatial clues. We identified deformations of up to 10 mm/year over InSAR Persistent Scatterers (PSs) obtained from Sentinel-1 time series processing over 3-year periods, concentrated along a specific geological unit. Using machine learning algorithms trained on the time series deformations of samples together with clues from the spatial background, we classified candidate potential lava tube networks, primarily over coastal lava flows. The detections were validated via comparison with geophysical and ground surveys. Given that cavities in the lava tubes could pose serious risks, a detailed physical exploration and threat assessment of potential cave groups are required before the planned intensive construction of infrastructure on Jeju Island. We also recommend using the approach established in our study to detect undiscovered collapse risks in cavities, especially over lava tube networks, and to explore lava tubes on planetary surfaces using proposed terrestrial and planetary InSAR sensors.
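The core time series step described above, estimating a deformation rate over a Persistent Scatterer, can be sketched as a least-squares fit of displacement against time; the sampling interval, rate, and noise level below are hypothetical numbers chosen only to resemble the setting, not values from the study.

```python
import numpy as np

# Hypothetical displacement time series (mm) for one Persistent Scatterer,
# sampled at Sentinel-1-like 12-day intervals over roughly 3 years.
days = np.arange(0, 3 * 365, 12)
true_rate_mm_per_year = -10.0   # assumed subsidence rate for illustration
rng = np.random.default_rng(0)
displacement = true_rate_mm_per_year * days / 365 + rng.normal(0, 1.0, days.size)

# Least-squares linear fit: displacement = rate * (days / 365) + offset.
# np.polyfit returns coefficients from the highest degree down.
rate, offset = np.polyfit(days / 365, displacement, 1)
```

Rates fitted this way over many scatterers are the kind of per-point feature a downstream classifier can combine with spatial clues.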

Article
Semantic Boosting: Enhancing Deep Learning Based LULC Classification
Remote Sens. 2021, 13(16), 3197; https://doi.org/10.3390/rs13163197 - 12 Aug 2021
Abstract
The classification of land use and land cover (LULC) is a well-studied task within the domain of remote sensing and geographic information science. It traditionally relies on remotely sensed imagery and therefore models land cover classes with respect to their electromagnetic reflectances, aggregated in pixels. This paper introduces a methodology which enables the inclusion of geographical object semantics (from vector data) in the LULC classification procedure. As such, information on the types of geographic objects (e.g., Shop, Church, Peak) can improve LULC classification accuracy. In this paper, we demonstrate how semantics can be fused with imagery to classify LULC. Three experiments were performed to explore and highlight the impact and potential of semantics for this task. In each experiment, CORINE LULC data was used as ground truth and predicted using imagery from Sentinel-2 and semantics from LinkedGeoData with deep learning. Our results reveal that LULC can be classified from semantics alone and that fusing semantics with imagery—Semantic Boosting—yielded significantly higher LULC classification accuracies. Some LULC classes are better predicted using only semantics, others using only imagery; importantly, much of the improvement was due to the ability to separate similar land use classes. A number of key considerations are discussed.
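The fusion of object semantics with imagery described above can be sketched, in a deliberately simplified form, as appending counts of geographic object types to per-cell spectral features before classification; the feature values, object types, and cell layout below are hypothetical, and the paper itself uses deep learning rather than raw concatenation.

```python
import numpy as np

# Hypothetical per-cell spectral features (e.g., mean Sentinel-2 band values).
spectral = np.array([
    [0.12, 0.30, 0.25, 0.60],   # cell A
    [0.45, 0.40, 0.38, 0.20],   # cell B
])

# Hypothetical semantic features: counts of geographic object types
# (e.g., Shop, Church, Peak) inside each cell, taken from vector data.
object_types = ["Shop", "Church", "Peak"]
semantics = np.array([
    [12, 1, 0],                 # cell A: urban-looking object mix
    [0, 0, 3],                  # cell B: mountainous object mix
])

# Fuse both views into one feature vector per cell, which any
# classifier can then consume: 4 spectral + 3 semantic features.
fused = np.concatenate([spectral, semantics], axis=1)
```

The semantic columns carry exactly the kind of land *use* signal (shops vs. peaks) that reflectance alone struggles to separate.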

Article
A Quantitative Validation of Multi-Modal Image Fusion and Segmentation for Object Detection and Tracking
Remote Sens. 2021, 13(12), 2364; https://doi.org/10.3390/rs13122364 - 17 Jun 2021
Abstract
In previous works, we have shown the efficacy of using Deep Belief Networks, paired with clustering, to identify distinct classes of objects within remotely sensed data via cluster analysis and qualitative comparison of the output with reference data. In this paper, we quantitatively validate the methodology against datasets currently being generated and used within the remote sensing community, and demonstrate the capabilities and benefits of the data fusion methodologies used. The experiments take the output of our unsupervised fusion and segmentation methodology and map it to various labeled datasets at different levels of global coverage and granularity, in order to test our models’ ability to represent structure at finer and broader scales using many different kinds of instrumentation, fused where applicable. In all cases tested, our models show a strong ability to segment the objects within input scenes, use multiple fused datasets where appropriate to improve results, and, at times, outperform the pre-existing datasets. This success will allow the methodology to be used in concrete use cases and to become the basis for future dynamic object tracking across datasets from various remote sensing instruments.
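The pipeline described — fusing multi-modal inputs and clustering the resulting representation to segment objects — can be sketched with a minimal k-means step; the feature vectors, modalities, and cluster count below are hypothetical stand-ins (the paper uses Deep Belief Network representations, not raw features).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fused feature vectors for 100 pixels of a scene: two modalities
# concatenated per pixel, forming two well-separated groups of 50 pixels each.
modality_a = np.vstack([rng.normal(0, 0.1, (50, 3)), rng.normal(1, 0.1, (50, 3))])
modality_b = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(1, 0.1, (50, 2))])
fused = np.concatenate([modality_a, modality_b], axis=1)   # shape (100, 5)

def kmeans(x, k, iters=20):
    """Minimal k-means: assign each point to its nearest centroid, then move
    each centroid to the mean of its members; repeat a fixed number of times."""
    # Deterministic init: spread initial centroids across the data.
    centroids = x[:: max(1, len(x) // k)][:k].copy()
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        centroids = np.array([
            x[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels

labels = kmeans(fused, k=2)   # unsupervised segmentation into 2 clusters
```

Clustering the fused representation, rather than each modality separately, is what lets complementary instruments reinforce one another in the segmentation.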
