

Multi-Sensor Systems and Data Fusion in Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 June 2021) | Viewed by 38173

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Prof. Dr. Piotr Kaniewski
Guest Editor
Faculty of Electronics, Military University of Technology, 00-908 Warsaw, Poland
Interests: multi-sensor data fusion; statistical estimation; integrated navigation systems; simultaneous localization and mapping; synthetic aperture radars; unmanned aerial systems

Prof. Dr. Mateusz Pasternak
Guest Editor
Military University of Technology, ul. gen. Sylwestra Kaliskiego 2, 00-908 Warsaw 46, Poland
Interests: remote sensing; echolocation; ultrawideband radar systems; antenna technology; sensor modelling; acoustoelectronic devices; unmanned ground vehicles

Prof. Dr. Stefano Mattoccia
Guest Editor
Department of Computer Science and Engineering (DISI), University of Bologna, Viale Risorgimento 2, 40136 Bologna, Italy
Interests: computer vision; machine learning; 3D vision; embedded computer vision

Special Issue Information

Dear Colleagues,

Remote sensing is developing at a rapid pace today thanks to technological progress in many interconnected fields. This includes the emergence of new sensors and the refinement of traditional ones, the development of ever more sophisticated space, aerial, and ground platforms for carrying those sensors, and advances in signal and data processing algorithms. The progress in radar, optoelectronic, acoustic, magnetic, chemical, and other sensor technologies is truly stunning. Yet, although these sensors are increasingly sensitive and accurate, with improved resolutions, data rates, and dynamic ranges, they still have intrinsic deficiencies and limitations. Multi-sensor systems with joint processing of their signals or data have long been considered an effective way to reduce these disadvantages and make the best use of each sensor's strengths, leading to a synergy effect. The emergence of new types of cutting-edge sensors creates an excellent opportunity for scientists and engineers to propose and develop new, more capable integrated multi-sensor systems.

At the same time, users' demands and expectations with respect to the size of the observed area or volume, data resolution, accuracy, speed of operation, and functionality of remote sensing systems keep increasing. Extended frequency bands, the improved resolutions and data rates of new sensors, and the increasingly common use of systems composed of many spatially distributed sensors all increase the influx of data in contemporary multi-sensor systems. These facts pose new challenges for data fusion algorithms, which must often employ the newest techniques and achievements from big data mining, statistical estimation, artificial intelligence, and other recently developed concepts. A fresh insight into the newest developments in multi-sensor systems and data fusion would therefore be of great interest to the remote sensing community. We would like to invite you to submit theoretical or application-oriented papers presenting new developments including, but not limited to, the following topics:

  • Multi-sensor remote-sensing systems in Earth science, environmental monitoring, robotics, transportation, industrial process monitoring, security, and military applications
  • Unconventional multi-sensor solutions
  • Spatially distributed networks of sensors
  • Distributed signal and data processing
  • Multi-sensor data fusion on raw data level, feature level and decision level
  • Statistical estimation in remote sensing
  • Artificial intelligence in remote sensing
  • Big data processing in remote sensing
  • Machine learning in remote sensing

Prof. Dr. Piotr Kaniewski
Prof. Dr. Mateusz Pasternak
Prof. Dr. Stefano Mattoccia
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Remote sensing
  • Multi-sensor systems
  • Multi-sensor data fusion
  • Sensor networks
  • Multi-sensor signal processing
  • Multi-sensor data processing
  • Artificial intelligence
  • Big data mining
  • Machine learning
  • Distributed processing

Published Papers (11 papers)


Research

17 pages, 30178 KiB  
Article
Hexagonal Grid-Based Framework for Mobile Robot Navigation
by Piotr Duszak, Barbara Siemiątkowska and Rafał Więckowski
Remote Sens. 2021, 13(21), 4216; https://doi.org/10.3390/rs13214216 - 21 Oct 2021
Cited by 7 | Viewed by 2592
Abstract
The paper addresses the problem of mobile robot navigation using a hexagonal lattice. We carried out experiments in which we used a vehicle equipped with a set of sensors. Based on the collected data, a traversability map was created. The experimental results proved that hexagonal maps of an environment can be easily built from sensor readings. The path planning method has many advantages: situations in which obstacles surround the position of the robot or the target are easily detected, and we can influence the properties of the path, e.g., the distance from obstacles or the type of surface can be taken into account. A path can also be smoothed more easily than with a rectangular grid.
(This article belongs to the Special Issue Multi-Sensor Systems and Data Fusion in Remote Sensing)
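
As a rough illustration of why hexagonal lattices are convenient for path planning, the sketch below runs A* over axial hex coordinates with a per-cell traversability cost. It is a minimal sketch, not the paper's method; the cost-map convention and all identifiers are hypothetical.

```python
# Hedged sketch: A* path planning on an axial-coordinate hexagonal grid.
import heapq

# The six neighbour offsets of a hex cell in axial coordinates (q, r).
HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_distance(a, b):
    """Grid distance between two axial-coordinate cells (admissible heuristic)."""
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

def plan_path(traversable, start, goal):
    """A* over the hex lattice; `traversable` maps cell -> cost (None = obstacle)."""
    frontier = [(0.0, start)]
    came_from, g = {start: None}, {start: 0.0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            break
        for dq, dr in HEX_DIRS:
            nxt = (cur[0] + dq, cur[1] + dr)
            cost = traversable.get(nxt)
            if cost is None:
                continue  # obstacle or unmapped cell
            new_g = g[cur] + cost
            if new_g < g.get(nxt, float("inf")):
                g[nxt] = new_g
                came_from[nxt] = cur
                heapq.heappush(frontier, (new_g + hex_distance(nxt, goal), nxt))
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from.get(node)
    return path[::-1]
```

Each hex cell has six equidistant neighbours, which is one reason paths on a hexagonal grid are easier to smooth than paths on a rectangular grid.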

27 pages, 6810 KiB  
Article
Stratified Particle Filter Monocular SLAM
by Pawel Slowak and Piotr Kaniewski
Remote Sens. 2021, 13(16), 3233; https://doi.org/10.3390/rs13163233 - 14 Aug 2021
Cited by 10 | Viewed by 2420
Abstract
This paper presents a solution to the problem of simultaneous localization and mapping (SLAM), developed from a particle filter, utilizing a monocular camera as its main sensor. It implements a novel sample-weighting idea, based on the sorting of particles into sets and separating those sets with an importance-factor offset. The grouping criterion for samples is the number of landmarks correctly matched by a given particle. This results in the stratification of samples and amplifies weight differences. The proposed system is designed for a UAV navigating outdoors with a downward-pointed camera. To evaluate the proposed method, it is compared with different sample-weighting approaches, using simulated and real-world data. The conducted experiments show that the developed SLAM solution is more accurate and robust than other particle-filter methods, as it allows the employment of a smaller number of particles, lowering the overall computational complexity.
(This article belongs to the Special Issue Multi-Sensor Systems and Data Fusion in Remote Sensing)
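
To make the stratified-weighting idea concrete, here is a minimal sketch under stated assumptions: particles are grouped by their count of correctly matched landmarks, and each stratum is lifted by an importance-factor offset before normalization. The function name, the offset scheme, and its magnitude are illustrative, not the paper's implementation.

```python
# Hedged sketch of stratified particle weighting: strata with more correct
# landmark matches dominate resampling via an additive weight offset.
import numpy as np

def stratified_weights(raw_weights, n_matched, offset=1e3):
    """raw_weights: per-particle likelihoods; n_matched: matched-landmark counts."""
    raw_weights = np.asarray(raw_weights, dtype=float)
    n_matched = np.asarray(n_matched)
    strata = np.unique(n_matched)          # sorted ascending
    w = np.zeros_like(raw_weights)
    for rank, stratum in enumerate(strata):
        mask = n_matched == stratum
        # The offset grows with the stratum rank, amplifying weight
        # differences between sets of particles.
        w[mask] = raw_weights[mask] + rank * offset
    return w / w.sum()                     # normalized importance weights
```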

19 pages, 8005 KiB  
Article
Infrared and Visible Image Object Detection via Focused Feature Enhancement and Cascaded Semantic Extension
by Xiaowu Xiao, Bo Wang, Lingjuan Miao, Linhao Li, Zhiqiang Zhou, Jinlei Ma and Dandan Dong
Remote Sens. 2021, 13(13), 2538; https://doi.org/10.3390/rs13132538 - 29 Jun 2021
Cited by 9 | Viewed by 3230
Abstract
Infrared and visible images (multi-sensor or multi-band images) have many complementary features which can effectively boost the performance of object detection. Recently, convolutional neural networks (CNNs) have seen frequent use for object detection in multi-band images. However, it is very difficult for CNNs to extract complementary features from infrared and visible images. To solve this problem, a difference maximum loss function is proposed in this paper. The loss function can guide the learning directions of the two base CNNs and maximize the difference between features from the two base CNNs, so as to extract complementary and diverse features. In addition, we design a focused feature-enhancement module to make features in the shallow convolutional layer more significant. In this way, the detection performance for small objects can be effectively improved without increasing the computational cost in the testing stage. Furthermore, since the actual receptive field is usually much smaller than the theoretical receptive field, the deep convolutional layer would not have sufficient semantic features for accurate detection of large objects. To overcome this drawback, a cascaded semantic extension module is added to the deep layer. Through simple multi-branch convolutional layers and dilated convolutions with different dilation rates, the cascaded semantic extension module can effectively enlarge the actual receptive field and increase the detection accuracy for large objects. We compare our detection network with five other state-of-the-art infrared and visible image object detection networks. Qualitative and quantitative experimental results prove the superiority of the proposed detection network.
(This article belongs to the Special Issue Multi-Sensor Systems and Data Fusion in Remote Sensing)
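
As a rough sketch of what a "difference maximum" regularizer between two base CNNs could look like, the snippet below penalizes cosine similarity between the infrared and visible feature maps, pushing the branches toward complementary features. This is an assumption-laden reconstruction; the paper's actual loss formulation may differ.

```python
# Hedged sketch: a loss term that rewards decorrelated (complementary)
# features from the infrared and visible branches.
import torch
import torch.nn.functional as F

def difference_max_loss(feat_ir, feat_vis, eps=1e-8):
    """feat_ir, feat_vis: (N, C, H, W) feature maps from the two branches."""
    a = feat_ir.flatten(1)
    b = feat_vis.flatten(1)
    # Cosine similarity per sample; minimizing its magnitude maximizes
    # the difference between the two feature sets.
    cos = F.cosine_similarity(a, b, dim=1, eps=eps)
    return cos.abs().mean()

# Typical use as a regularizer next to the usual detection loss:
# total_loss = det_loss + lambda_diff * difference_max_loss(f_ir, f_vis)
```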

18 pages, 6193 KiB  
Article
Sentinel-1 and 2 Time-Series for Vegetation Mapping Using Random Forest Classification: A Case Study of Northern Croatia
by Dino Dobrinić, Mateo Gašparović and Damir Medak
Remote Sens. 2021, 13(12), 2321; https://doi.org/10.3390/rs13122321 - 13 Jun 2021
Cited by 49 | Viewed by 6131
Abstract
Land-cover (LC) mapping in a morphologically heterogeneous landscape is a challenging task since various LC classes (e.g., crop types in agricultural areas) are spectrally similar. Most research still relies mainly on optical satellite imagery for these tasks, whereas synthetic aperture radar (SAR) imagery is often neglected. Therefore, this research assessed the classification accuracy of recent Sentinel-1 (S1) SAR and Sentinel-2 (S2) time-series data for LC mapping, especially of vegetation classes. Additionally, ancillary data, such as texture features and spectral indices from S1 and S2, respectively, as well as a digital elevation model (DEM), were used in different classification scenarios. Random Forest (RF) was used for the classification tasks with a proposed hybrid reference dataset derived from the European Land Use and Coverage Area Frame Survey (LUCAS), CORINE, and the Land Parcel Identification System (LPIS) LC databases. Based on RF variable selection using Mean Decrease Accuracy (MDA), the combination of S1 and S2 data yielded the highest overall accuracy (OA) of 91.78%, with a total disagreement of 8.22%. The most pertinent features for vegetation mapping were the GLCM Mean and Variance for S1 and the NDVI, along with the red and SWIR bands, for S2, whereas the DEM produced a major classification enhancement as an input feature. The results of this study demonstrate that the described approach (i.e., RF using a hybrid reference dataset) is well suited for vegetation mapping using Sentinel imagery and can be applied to large-scale LC classifications.
(This article belongs to the Special Issue Multi-Sensor Systems and Data Fusion in Remote Sensing)
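
A minimal sketch of such a workflow, assuming a precomputed feature stack: Random Forest on stacked S1/S2/DEM features, with scikit-learn's permutation importance standing in for MDA-style variable ranking. File names and hyperparameters are placeholders, not the study's settings.

```python
# Hedged sketch: RF land-cover classification with MDA-style feature ranking.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# X: (n_pixels, n_features) stack of S1 GLCM textures, S2 bands/indices, DEM;
# y: LC labels from a hybrid LUCAS/CORINE/LPIS reference dataset.
X, y = np.load("features.npy"), np.load("labels.npy")  # hypothetical files
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y)

rf = RandomForestClassifier(n_estimators=500, n_jobs=-1).fit(X_tr, y_tr)
print("Overall accuracy:", rf.score(X_te, y_te))

# Permutation importance approximates MDA: the drop in accuracy when a
# feature is shuffled, averaged over repeats.
imp = permutation_importance(rf, X_te, y_te, n_repeats=5, n_jobs=-1)
ranking = np.argsort(imp.importances_mean)[::-1]
```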

21 pages, 591 KiB  
Article
An Interval Temporal Logic for Time Series Specification and Data Integration
by Piotr Kosiuczenko
Remote Sens. 2021, 13(12), 2236; https://doi.org/10.3390/rs13122236 - 08 Jun 2021
Cited by 1 | Viewed by 2060
Abstract
The analysis of temporal series—in particular, the analysis of multisensor data—is a complex problem. It depends on the application domain, the way the data are to be used, and the sensors available, among other factors. Various models, algorithms, and technologies have been designed for this goal. Temporal logics are used to describe temporal properties of systems. The properties may specify the occurrence and order of events in time, recurring patterns, complex behaviors, and processes. In this paper, a new interval logic, called duration calculus for functions (DC4F), is proposed for the specification of temporal series corresponding to multisensor data. DC4F is a natural extension of the well-known duration calculus, an interval temporal logic for the specification of process durations. The adequacy of the proposed logic is analyzed in the case of multisensor data concerning volcanic eruption monitoring. It turns out that the relevant behavior concerns time intervals, not only the accumulated history described in other kinds of temporal logics. The analyzed examples demonstrate that a description language is required to specify time series of various kinds relative to time intervals. The duration calculus alone cannot be successfully applied to this task. The proposed calculus allows one to specify temporal series and complex interval-dependent behaviors, and to evaluate the corresponding data within a unifying logical framework. It also allows one to formulate hypotheses concerning volcanic eruption phenomena. However, the expressivity of DC4F comes at the cost of its decidability.
(This article belongs to the Special Issue Multi-Sensor Systems and Data Fusion in Remote Sensing)
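
DC4F itself is a logic, not code, but the flavour of an interval-relative specification can be sketched numerically: the helper below checks a duration-style property ("P holds for at least d seconds within [t0, t1]") over a sampled series. This only gestures at the kind of statement DC4F formalizes; the calculus, its syntax, and its semantics are in the paper.

```python
# Hedged sketch: evaluating a duration-calculus-flavoured property over a
# sampled time series, interval by interval.
import numpy as np

def duration_at_least(t, x, pred, t0, t1, d):
    """t: sample times; x: values; pred: vectorized state predicate."""
    mask = (t >= t0) & (t <= t1)
    ts, xs = t[mask], x[mask]
    holds = pred(xs[:-1])                   # predicate on each left sample
    return np.sum(np.diff(ts)[holds]) >= d  # accumulated time where P holds

# Example: "during [0, 3600] s, tremor amplitude exceeded 5.0 for >= 600 s":
# duration_at_least(t, amp, lambda v: v > 5.0, 0.0, 3600.0, 600.0)
```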

34 pages, 15088 KiB  
Article
Modeling and Simulation of Very High Spatial Resolution UXOs and Landmines in a Hyperspectral Scene for UAV Survey
by Milan Bajić, Jr. and Milan Bajić
Remote Sens. 2021, 13(5), 837; https://doi.org/10.3390/rs13050837 - 24 Feb 2021
Cited by 9 | Viewed by 3232
Abstract
This paper presents methods for the modeling and simulation of explosive target placement in terrain spectral images (i.e., real hyperspectral 90-channel VNIR data), considering unexploded ordnance, landmines, and improvised explosive devices. The models used for landmine detection operate at sub-pixel levels. The presented research uses very fine spatial resolutions, 0.945 × 0.945 mm for targets and 1.868 × 1.868 cm for the scene, where the number of target pixels ranges from 52 to 116. Whereas previous research has used the mean spectral value of the target, it is omitted in this paper. The model considers the probability of detection and its confidence intervals, which are derived and used in the analysis of the considered explosive targets. The detection results are better when target endmembers at a decreased resolution are used to match the scene resolution, rather than endmembers at the full resolution of the target. Unmanned aerial vehicles, as carriers of snapshot hyperspectral cameras, enable flexible target resolution selection and good area coverage.
(This article belongs to the Special Issue Multi-Sensor Systems and Data Fusion in Remote Sensing)
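
Sub-pixel target implantation is commonly modeled as linear mixing of a target endmember with the background pixel spectrum; the sketch below shows that convention under the assumption that it matches the paper's placement model. Function and variable names are illustrative.

```python
# Hedged sketch: implanting a sub-pixel target into a hyperspectral scene
# by linear mixing of the target endmember with the background spectrum.
import numpy as np

def implant_target(scene, row, col, target_spectrum, fill_fraction):
    """scene: (H, W, B) hyperspectral cube; target_spectrum: (B,) endmember."""
    p = fill_fraction  # area fraction of the pixel covered by the target
    scene = scene.copy()
    scene[row, col] = p * target_spectrum + (1.0 - p) * scene[row, col]
    return scene

# E.g., a target sampled on a 0.945 mm grid aggregated into a 1.868 cm scene
# pixel yields per-pixel fill fractions well below 1, hence sub-pixel models.
```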

18 pages, 4861 KiB  
Article
The Analysis and Modelling of the Quality of Information Acquired from Weather Station Sensors
by Marek Stawowy, Wiktor Olchowik, Adam Rosiński and Tadeusz Dąbrowski
Remote Sens. 2021, 13(4), 693; https://doi.org/10.3390/rs13040693 - 14 Feb 2021
Cited by 28 | Viewed by 3056
Abstract
This article explores the quality of information acquired from weather station sensors. A review of the literature in this field shows that most publications concern the analysis of data acquired from weather station sensors and their characteristic properties, the estimation of missing values in the data, and the assessment of the quality of weather information. Despite the large collection of studies devoted to these issues, there is no comprehensive approach that considers the modelling of information uncertainty. Therefore, the article presents a proprietary method of analysing and modelling the uncertainty of the information quality of weather station sensors. For this purpose, the structure of a real meteorological station and the measurement data obtained from it were analysed. Next, an information quality model was developed using certainty factor (CF) calculations for hypotheses. The developed method was verified on a real meteorological station. It was found that this method enables improving the quality of information obtained and processed in a multi-sensor system. This becomes practical once the influence of individual measurement system elements on the information quality reaching the recipient is determined. As an example, we demonstrate the use of two sensors to improve information quality.
(This article belongs to the Special Issue Multi-Sensor Systems and Data Fusion in Remote Sensing)
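
As an illustration of how certainty factors from two sensors can raise information quality, here is the classic MYCIN-style CF combination rule; the abstract confirms that CF calculations are used, but the paper's exact calculus may differ from this textbook version.

```python
# Hedged sketch: MYCIN-style combination of certainty factors from two
# independent sensors supporting the same hypothesis.
def combine_cf(cf1, cf2):
    """Combine two certainty factors in [-1, 1] for the same hypothesis."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two temperature sensors, each trusted with CF = 0.8:
# combine_cf(0.8, 0.8) -> 0.96, i.e., higher certainty than either alone.
```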

20 pages, 11704 KiB  
Article
Multi-Instance Inertial Navigation System for Radar Terrain Imaging
by Michal Labowski and Piotr Kaniewski
Remote Sens. 2020, 12(21), 3639; https://doi.org/10.3390/rs12213639 - 06 Nov 2020
Viewed by 1954
Abstract
Navigation systems used for the motion compensation (MOCO) of radar terrain images have several limitations, including the maximum duration of the measurement session, the time duration of the synthetic aperture, and a focus only on minimizing long-term positioning errors of the radar host. To overcome these limitations, a novel multi-instance inertial navigation system (MINS) is proposed by the authors. In this approach, the classic inertial navigation system (INS), which works from the beginning to the end of the measurement session, is replaced by short INS instances. The initialization of each INS instance is performed using an INS/GPS system and is triggered when the positioning error of the currently operating instance becomes excessive. Following this procedure, both INS instances operate simultaneously, and their parallel work continues until an image line can be calculated using navigation data originating only from the new instance. The described mechanism aims to perform instance switching in a manner that does not disturb the initial phases of echo signals processed in a single aperture. The obtained results indicate that the proposed method improves the imaging quality compared to methods using the classic INS or the INS/GPS system.
(This article belongs to the Special Issue Multi-Sensor Systems and Data Fusion in Remote Sensing)
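
A schematic sketch of the instance-switching logic described above, with a deliberately simplified error model and hypothetical names (`DRIFT_RATE`, `ins_gps_fix`): a standby INS instance is spawned from an INS/GPS fix when the active instance's error grows too large, and the switch happens only after the standby has covered a full synthetic aperture.

```python
# Hedged sketch: multi-instance INS switching for radar motion compensation.
class InsInstance:
    DRIFT_RATE = 0.5  # m/s of position-error growth; placeholder value

    def __init__(self, state_from_ins_gps, t_start):
        self.state, self.t_start = state_from_ins_gps, t_start

    def predicted_error(self, t):
        # INS position error grows with time since initialization (schematic).
        return self.DRIFT_RATE * (t - self.t_start)

def step(active, standby, t, err_max, aperture_time, ins_gps_fix):
    """ins_gps_fix: callable t -> INS/GPS state used to seed a new instance."""
    if standby is None and active.predicted_error(t) > err_max:
        standby = InsInstance(ins_gps_fix(t), t)  # spawn a new instance
    if standby is not None and t - standby.t_start >= aperture_time:
        active, standby = standby, None           # switch after a full aperture
    return active, standby
```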

27 pages, 15509 KiB  
Article
A Robust Algorithm Based on Phase Congruency for Optical and SAR Image Registration in Suburban Areas
by Lina Wang, Mingchao Sun, Jinghong Liu, Lihua Cao and Guoqing Ma
Remote Sens. 2020, 12(20), 3339; https://doi.org/10.3390/rs12203339 - 13 Oct 2020
Cited by 21 | Viewed by 2958
Abstract
Automatic registration of optical and synthetic aperture radar (SAR) images is a challenging task due to the influence of SAR speckle noise and nonlinear radiometric differences. This study proposes a robust algorithm based on phase congruency to register optical and SAR images (ROS-PC). It consists of a uniform Harris feature detection method based on the multi-moments of the phase congruency map (UMPC-Harris) and a local feature descriptor based on the histogram of phase congruency orientation on multi-scale max amplitude index maps (HOSMI). UMPC-Harris detects corners and edge points using a voting strategy, the multi-moments of phase congruency maps, and an overlapping block strategy, which together yield stable and uniformly distributed keypoints. Subsequently, the HOSMI descriptor is derived for each keypoint by utilizing the histogram of phase congruency orientation on multi-scale max amplitude index maps, which effectively increases the discriminability and robustness of the final descriptor. Experimental results obtained using simulated images show that the UMPC-Harris detector has a superior repeatability rate. Registration results obtained on test images show that ROS-PC is robust against SAR speckle noise and nonlinear radiometric differences and can tolerate some rotational and scale changes.
(This article belongs to the Special Issue Multi-Sensor Systems and Data Fusion in Remote Sensing)
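
One ingredient that can be sketched compactly is the overlapping-block strategy for uniformly distributed keypoints: keep the strongest responses per (overlapping) tile of a corner-strength map. The phase-congruency moments and the voting step are omitted here, and all parameters are placeholders.

```python
# Hedged sketch: overlapping-block keypoint selection for a uniform spatial
# distribution over a corner-strength (e.g., PC-moment-based) response map.
import numpy as np

def block_keypoints(response, block=64, stride=32, per_block=5):
    """response: 2D corner-strength map; returns (row, col) keypoints."""
    h, w = response.shape
    pts = set()
    for y0 in range(0, h - block + 1, stride):      # overlapping tiles
        for x0 in range(0, w - block + 1, stride):
            tile = response[y0:y0 + block, x0:x0 + block]
            idx = np.argsort(tile, axis=None)[-per_block:]
            for i in idx:
                dy, dx = np.unravel_index(i, tile.shape)
                pts.add((y0 + dy, x0 + dx))
    return sorted(pts)
```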

23 pages, 14037 KiB  
Article
Modality-Free Feature Detector and Descriptor for Multimodal Remote Sensing Image Registration
by Song Cui, Miaozhong Xu, Ailong Ma and Yanfei Zhong
Remote Sens. 2020, 12(18), 2937; https://doi.org/10.3390/rs12182937 - 10 Sep 2020
Cited by 15 | Viewed by 4150
Abstract
The nonlinear radiation distortions (NRD) among multimodal remote sensing images bring enormous challenges to image registration. Traditional feature-based registration methods commonly use image intensity or gradient information to detect and describe features, which are sensitive to NRD. The nonlinear mapping between the corresponding features of multimodal images then often causes the feature matching, as well as the image registration, to fail. In this paper, a modality-free multimodal remote sensing image registration method (SRIFT) is proposed for the registration of multimodal remote sensing images which is invariant to scale, radiation, and rotation. In SRIFT, a nonlinear diffusion scale (NDS) space is first established to construct a multi-scale space. A local orientation and scale phase congruency (LOSPC) algorithm is then used to map the features of images with NRD into a one-to-one correspondence and thereby obtain sufficiently stable keypoints. In the feature description stage, a rotation-invariant coordinate (RIC) system is adopted to build the descriptor, without requiring estimation of a main direction. The experiments undertaken in this study included one set of simulated-data experiments and nine groups of experiments with different types of real multimodal remote sensing images with rotation and scale differences (including synthetic aperture radar (SAR)/optical, digital surface model (DSM)/optical, light detection and ranging (LiDAR) intensity/optical, near-infrared (NIR)/optical, short-wave infrared (SWIR)/optical, classification/optical, and map/optical image pairs), testing the proposed algorithm from both quantitative and qualitative aspects. The experimental results show that the proposed method is strongly robust to NRD, is invariant to scale, radiation, and rotation, and achieves registration precision better than that of the state-of-the-art methods.
(This article belongs to the Special Issue Multi-Sensor Systems and Data Fusion in Remote Sensing)
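
To suggest how a descriptor can avoid main-direction estimation, the sketch below samples concentric rings around a keypoint and takes FFT magnitudes along each ring, a standard rotation-invariance trick. This is only an analogy for the RIC idea; SRIFT's actual descriptor construction differs in detail.

```python
# Hedged sketch: a ring-sampled, rotation-invariant local descriptor.
import numpy as np

def ring_descriptor(patch, radii=(4, 8, 12), samples=16):
    """patch: 2D array centered on the keypoint."""
    cy, cx = np.array(patch.shape) // 2
    desc = []
    for r in radii:
        ang = 2 * np.pi * np.arange(samples) / samples
        ys = np.clip((cy + r * np.sin(ang)).astype(int), 0, patch.shape[0] - 1)
        xs = np.clip((cx + r * np.cos(ang)).astype(int), 0, patch.shape[1] - 1)
        ring = patch[ys, xs]
        # The FFT magnitude of a ring is invariant to circular shifts, i.e.,
        # to patch rotations by multiples of the angular sample step.
        desc.append(np.abs(np.fft.rfft(ring)))
    d = np.concatenate(desc)
    return d / (np.linalg.norm(d) + 1e-12)
```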

23 pages, 10933 KiB  
Article
PWNet: An Adaptive Weight Network for the Fusion of Panchromatic and Multispectral Images
by Junmin Liu, Yunqiao Feng, Changsheng Zhou and Chunxia Zhang
Remote Sens. 2020, 12(17), 2804; https://doi.org/10.3390/rs12172804 - 29 Aug 2020
Cited by 12 | Viewed by 4262
Abstract
Pansharpening is a typical image fusion problem, which aims to produce a high-resolution multispectral (HRMS) image by integrating a high-spatial-resolution panchromatic (PAN) image with a low-spatial-resolution multispectral (MS) image. Prior art has used either component substitution (CS)-based methods or multiresolution analysis (MRA)-based methods for this purpose. Although they are simple and easy to implement, they usually suffer from spatial or spectral distortions and cannot fully exploit the spatial and/or spectral information present in the PAN and MS images. Considering their complementary performance, and with the goal of combining their advantages, we propose a pansharpening weight network (PWNet) to adaptively average the fusion results obtained by different methods. The proposed PWNet works by learning adaptive weight maps for different CS-based and MRA-based methods through an end-to-end trainable neural network (NN). As a result, the proposed PWNet inherits the data adaptability and flexibility of NNs while maintaining the advantages of traditional methods. Extensive experiments on data sets acquired by three different kinds of satellites demonstrate the superiority of the proposed PWNet and its competitiveness with the state-of-the-art methods.
(This article belongs to the Special Issue Multi-Sensor Systems and Data Fusion in Remote Sensing)
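
A minimal sketch of the adaptive-averaging idea, assuming a toy architecture: a small CNN predicts per-pixel softmax weights over M candidate fusion results produced by classical CS/MRA methods, and the output is their weighted average. Layer sizes and inputs are illustrative, not PWNet's actual design.

```python
# Hedged sketch: learning per-pixel weight maps to blend classical
# pansharpening results.
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    def __init__(self, n_methods, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_methods, 3, padding=1),
        )

    def forward(self, x, candidates):
        # x: (N, in_ch, H, W), e.g., PAN stacked with the upsampled MS image;
        # candidates: (N, M, B, H, W) fusion results of M classical methods.
        w = torch.softmax(self.net(x), dim=1)        # (N, M, H, W), sums to 1
        return (w.unsqueeze(2) * candidates).sum(1)  # per-pixel weighted average
```

Training end-to-end against a reference HRMS image lets the network decide, pixel by pixel, which classical method to trust, which is the flexibility the abstract attributes to NN-based blending.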
