Special Issue "Application of Multi-Sensor Fusion Technology in Target Detection and Recognition"

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: closed (31 January 2021).

Special Issue Editors

Prof. Dr. Jukka Heikkonen
Guest Editor
Department of Information Technology, University of Turku, Turku, Finland
Interests: machine learning; computer vision; deep learning; multi-sensor fusion; data analysis
Dr. Fahimeh Farahnakian
Guest Editor

Special Issue Information

Dear Colleagues,

The application of multi-sensor fusion technology has drawn considerable industrial and academic interest in recent years. Multi-sensor fusion methods are widely used in applications such as autonomous systems, remote sensing, video surveillance, and the military domain. By considering multiple sensors, these methods can capture complementary properties of targets; moreover, they can achieve a detailed description of the environment and accurate detection of targets of interest based on the information from the different sensors.

This Special Issue aims to explore developments in the field of multi-sensor, multi-source, and multi-process information fusion. Articles are expected to emphasize one or more of three facets: architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses as well as those demonstrating applications to real-world problems are welcome. The journal publishes original papers and, from time to time, invited review articles in all areas related to information fusion, including, but not limited to, the following suggested topics:

  • Data/Image, Feature, Decision, and Multilevel Fusion
  • Multi-Sensor, Multi-Source Fusion System Architectures
  • Target Detection and Tracking
  • Higher Level Fusion Topics Including Situation Awareness and Management
  • Multi-Sensor Management and Real-Time Applications
  • Adaptive and Self-improving Fusion System Architectures
  • Applications such as Robotics, Space and Transportation
  • Fusion Learning in Imperfect, Imprecise, and Incomplete Environments
  • Intelligent Techniques for Fusion Processing
  • Fusion System Design and Algorithmic Issues
Prof. Dr. Jukka Heikkonen
Dr. Fahimeh Farahnakian
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multi-sensor fusion
  • Target detection and recognition
  • Remote sensing
  • Machine learning
  • Deep learning
  • Autonomous vehicles

Published Papers (7 papers)


Research


Article
ABOships—An Inshore and Offshore Maritime Vessel Detection Dataset with Precise Annotations
Remote Sens. 2021, 13(5), 988; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13050988 - 05 Mar 2021
Abstract
The availability of domain-specific datasets is a core problem in object detection. Datasets of inshore and offshore maritime vessels are no exception, and only a limited number of studies have addressed maritime vessel detection on such datasets. For that reason, we collected a dataset of images of maritime vessels that takes several factors into account: background variation, atmospheric conditions, illumination, visible proportion, occlusion, and scale variation. Vessel instances (covering nine vessel types), seamarks, and miscellaneous floaters were precisely annotated: we carried out a first round of labelling and then used the CSRT tracker to trace inconsistencies and relabel inadequate label instances. Moreover, we evaluated the out-of-the-box performance of four prevalent object detection algorithms (Faster R-CNN, R-FCN, SSD, and EfficientDet), each pre-trained on the Microsoft COCO dataset, and compared their accuracy by feature extractor and object size. Our experiments show that Faster R-CNN with Inception-ResNet v2 outperforms the other algorithms, except in the large-object category, where EfficientDet surpasses it.
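The tracker-assisted relabelling step described above can be sketched as an IoU consistency check between tracker-predicted boxes (e.g. from OpenCV's CSRT tracker) and the first-round annotations. This is a minimal sketch with illustrative function names, not the paper's actual pipeline:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def flag_inconsistent(tracked_boxes, annotated_boxes, iou_threshold=0.5):
    """Return the frame indices where the annotation disagrees with the
    tracker prediction, i.e. candidates for a second labelling pass."""
    return [i for i, (t, a) in enumerate(zip(tracked_boxes, annotated_boxes))
            if iou(t, a) < iou_threshold]
```

Frames flagged this way would then be re-inspected and relabelled by hand.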

Article
Specular Reflection Detection and Inpainting in Transparent Object through MSPLFI
Remote Sens. 2021, 13(3), 455; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13030455 - 28 Jan 2021
Abstract
Multispectral polarimetric light field imagery (MSPLFI) contains significant information about a transparent object's distribution over spectra, the inherent properties of its surface and its directional movement, as well as intensity, which together can distinguish its specular reflection. Because multispectral polarimetric signatures are limited to an object's properties, detecting specular pixels on a transparent object is difficult, as the object lacks its own texture. In this work, we propose a two-fold approach for specular reflection detection (SRD) and specular reflection inpainting (SRI) in transparent objects. Firstly, we capture and decode 18 different transparent objects with specularity signatures obtained using a light field (LF) camera. In our image acquisition system, we place multispectral filters from visible bands and polarimetric filters at different orientations to capture images from multisensory cues containing MSPLFI features. We then propose a change detection algorithm for detecting specular reflected pixels across spectra, in which a Mahalanobis distance is calculated from the mean and the covariance of the polarized and unpolarized images of an object. Secondly, an inpainting algorithm that captures pixel movements among the sub-aperture images of the LF is proposed. Here, a distance matrix for the four connected neighboring pixels is computed from the common pixel intensities of each color channel of both the polarized and the unpolarized images, and the most correlated pixel pattern is selected for inpainting each sub-aperture image. This process is repeated over all sub-aperture images to complete the final SRI task. The experimental results demonstrate that the proposed two-fold approach significantly improves the accuracy of detection and the quality of inpainting.
Furthermore, the proposed approach improves the SRD metrics (mean F1-score, G-mean, and accuracy of 0.643, 0.656, and 0.981, respectively) and SRI metrics (mean structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), mean squared error (IMMSE), and mean absolute deviation (MAD) of 0.966, 0.735, 0.073, and 0.226, respectively) over all sub-apertures of the 18 transparent objects in the MSPLFI dataset, compared with the methods from the literature considered in this paper. Future work will exploit the integration of machine learning for better SRD accuracy and SRI quality.
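The Mahalanobis-distance test at the heart of the SRD step can be sketched as follows; the synthetic data, the threshold of 3, and the function name are illustrative assumptions, not values from the paper:

```python
import numpy as np

def specular_mask(pixels, mean, cov, threshold=3.0):
    """Flag pixels whose Mahalanobis distance from the background
    statistics exceeds the threshold (candidate specular pixels)."""
    inv_cov = np.linalg.inv(cov)
    diff = pixels - mean                                  # (N, C)
    d2 = np.einsum('nc,cd,nd->n', diff, inv_cov, diff)    # squared distances
    return np.sqrt(d2) > threshold

# background statistics, here estimated from synthetic "unpolarised" pixels
rng = np.random.default_rng(0)
background = rng.normal(0.3, 0.05, size=(1000, 3))
mean, cov = background.mean(axis=0), np.cov(background, rowvar=False)

# five ordinary pixels plus one bright outlier standing in for a highlight
test_pixels = np.vstack([background[:5], [[0.95, 0.95, 0.95]]])
mask = specular_mask(test_pixels, mean, cov)
```

The bright pixel lies far from the background distribution and is flagged, while typical background pixels fall under the threshold.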

Article
Towards Semantic SLAM: 3D Position and Velocity Estimation by Fusing Image Semantic Information with Camera Motion Parameters for Traffic Scene Analysis
Remote Sens. 2021, 13(3), 388; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13030388 - 23 Jan 2021
Abstract
In this paper, an EKF (Extended Kalman Filter)-based algorithm is proposed to estimate the 3D position and velocity components of different cars in a scene by fusing semantic information and a car model, extracted from successive frames, with camera motion parameters. First, a 2D virtual image of the scene is generated using prior knowledge of the 3D Computer Aided Design (CAD) models of the detected cars and their predicted positions. Then, a discrepancy, i.e., a distance, between the actual image and the virtual image is calculated. The 3D position and velocity components are recursively estimated by minimizing this discrepancy using the EKF. Experiments on the KITTI dataset show good performance of the proposed algorithm, with a position estimation error of up to 3–5% at 30 m and a velocity estimation error of up to 1 m/s.
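The recursive estimation can be illustrated with a minimal constant-velocity Kalman filter. Unlike the paper's EKF, this sketch assumes a linear position measurement rather than an image-discrepancy measurement, and the frame rate and noise parameters are assumptions:

```python
import numpy as np

dt = 1.0 / 30.0                                   # assumed 30 fps frame interval
F = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])     # constant-velocity motion model
H = np.hstack([np.eye(3), np.zeros((3, 3))])      # we observe position only
Q = 1e-3 * np.eye(6)                              # process noise (assumed)
R = 1e-2 * np.eye(3)                              # measurement noise (assumed)

x = np.zeros(6)                                   # state: [position, velocity]
P = np.eye(6)

def kf_step(x, P, z):
    """One predict/update cycle of the filter."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                            # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    return x_pred + K @ y, (np.eye(6) - K @ H) @ P_pred

# track a car moving at 1 m/s along the x axis
for k in range(100):
    z = np.array([k * dt * 1.0, 0.0, 0.0])
    x, P = kf_step(x, P, z)
```

After a few dozen frames the velocity estimate `x[3]` converges near the true 1 m/s; the paper replaces the linear update with an image-discrepancy minimization.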

Article
A Co-Operative Autonomous Offshore System for Target Detection Using Multi-Sensor Technology
Remote Sens. 2020, 12(24), 4106; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12244106 - 16 Dec 2020
Abstract
This article studies the design, modeling, and implementation challenges of a target detection algorithm using the multi-sensor technology of a co-operative autonomous offshore system formed by an unmanned surface vehicle (USV) and an autonomous underwater vehicle (AUV). First, the study develops an accurate mathematical model of the USV to be included in a simulation environment for testing the guidance, navigation, and control (GNC) algorithms. Then, a guidance system is addressed based on an underwater coverage path for the AUV, which uses a mechanical imaging sonar as the primary AUV perception sensor and an ultra-short baseline (USBL) system for positioning. Once the target is detected, the AUV sends its location to the USV, which creates a straight-line path to follow with obstacle avoidance capabilities, using a LiDAR as the main USV perception sensor. Communication in the co-operative autonomous offshore system relies on a decentralized Robot Operating System (ROS) framework with a master node on each vehicle. Additionally, each vehicle uses a modular approach for the GNC architecture, including target detection, path-following, and guidance control modules. Finally, implementation challenges in a field test scenario involving both the AUV and the USV are addressed to validate the target detection algorithm.
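Straight-line path following of the kind the USV performs is commonly realized with a line-of-sight (LOS) guidance law. The sketch below is a standard textbook formulation under an assumed lookahead distance, not necessarily the paper's exact controller:

```python
import math

def los_heading(pos, wp_a, wp_b, lookahead=5.0):
    """Line-of-sight guidance for straight-line path following: return the
    desired heading that drives the vehicle back onto the line from
    waypoint wp_a to waypoint wp_b."""
    path_angle = math.atan2(wp_b[1] - wp_a[1], wp_b[0] - wp_a[0])
    # cross-track error: signed lateral distance from the vehicle to the path
    dx, dy = pos[0] - wp_a[0], pos[1] - wp_a[1]
    cross_track = -dx * math.sin(path_angle) + dy * math.cos(path_angle)
    # steer toward a point `lookahead` metres ahead on the path
    return path_angle + math.atan2(-cross_track, lookahead)

# vehicle offset 2 m from an eastward path: the desired heading
# angles back toward the line (negative correction)
psi = los_heading((10.0, 2.0), (0.0, 0.0), (100.0, 0.0))
```

The lookahead distance trades convergence speed against oscillation; obstacle avoidance would modify the commanded heading on top of this law.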

Article
Deep Learning Based Multi-Modal Fusion Architectures for Maritime Vessel Detection
Remote Sens. 2020, 12(16), 2509; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12162509 - 05 Aug 2020
Abstract
Object detection is a fundamental computer vision task for many real-world applications. In the maritime environment, this task is challenging due to varying light, view distances, weather conditions, and sea waves. In addition, light reflection, camera motion, and illumination changes may cause false detections. To address these challenges, we present three fusion architectures for fusing two imaging modalities: visible and infrared. These architectures combine complementary information from the two modalities at different levels: pixel-level, feature-level, and decision-level. They employ deep learning for both fusion and detection. We investigate the performance of the proposed architectures on a real marine image dataset captured by color and infrared cameras on board a vessel in the Finnish archipelago. The cameras are used for developing autonomous ships and collect data across a range of operating and climatic conditions. Experiments show that the feature-level fusion architecture outperforms the fusion architectures at the other levels.
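The three fusion levels can be illustrated schematically with toy arrays; the image sizes, feature-map shapes, and confidence scores below are assumptions for illustration only, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))       # toy visible image
ir = rng.random((64, 64, 1))        # toy infrared image

# pixel-level fusion: stack the raw modalities before any network sees them
pixel_fused = np.concatenate([rgb, ir], axis=-1)              # (64, 64, 4)

# feature-level fusion: concatenate per-modality feature maps mid-network
feat_rgb = rng.random((8, 8, 128))  # stand-ins for CNN feature maps
feat_ir = rng.random((8, 8, 128))
feature_fused = np.concatenate([feat_rgb, feat_ir], axis=-1)  # (8, 8, 256)

# decision-level fusion: combine per-modality detection confidences
score_rgb, score_ir = 0.9, 0.6
decision = 0.5 * (score_rgb + score_ir)
```

In the deep learning setting, each concatenation would feed into further convolutional layers; the point of the sketch is only where in the pipeline the two modalities meet.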

Article
Gudalur Spectral Target Detection (GST-D): A New Benchmark Dataset and Engineered Material Target Detection in Multi-Platform Remote Sensing Data
Remote Sens. 2020, 12(13), 2145; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12132145 - 03 Jul 2020
Abstract
Target detection in remote sensing imagery, i.e., the mapping of sparsely distributed materials, has vital applications in defense, security and surveillance, mineral exploration, agriculture, environmental monitoring, etc. The detection probability and the quality of retrievals are functions of various parameters of the sensor, the platform, target–background dynamics, the targets' spectral contrast, and atmospheric influence. Generally, target detection in remote sensing imagery has been approached using various statistical detection algorithms under an assumption of linearity in the image formation process. Knowledge of the image acquisition geometry and of spectral features and their stability across different imaging platforms is vital for designing a spectral target detection system. We carried out an integrated target detection experiment for the detection of various artificial target materials. As part of this work, we acquired a benchmark multi-platform hyperspectral and multispectral remote sensing dataset named the 'Gudalur Spectral Target Detection (GST-D)' dataset. Positioning artificial targets on different surface backgrounds, we acquired remote sensing data with terrestrial, airborne, and space-borne sensors on 20 March 2018. Various statistical and subspace detection algorithms were applied to the benchmark dataset for the detection of targets, considering different sources of reference target spectra, backgrounds, and spectral continuity across the platforms. We validated the detection results using the receiver operating characteristic (ROC) curve for different combinations of detection algorithms and imaging platforms. The results indicate, for some combinations of algorithms and imaging platforms, consistent detection of specific material targets with a detection rate of about 80% at a false alarm rate between 10⁻² and 10⁻³.
Target detection in satellite imagery using reference target spectra from airborne hyperspectral imagery matches closely with detection using reference spectra derived from the satellite imagery itself. Ground-based in-situ reference spectra offer quantifiable detection in airborne or satellite imagery. Moreover, ground-based hyperspectral imagery has provided equivalent target detection in airborne and satellite imagery, paving the way for rapid acquisition of reference target spectra. The benchmark dataset generated in this work is a valuable resource for addressing intriguing questions in target detection using hyperspectral imagery from a realistic landscape perspective.
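A representative statistical detector of the kind evaluated in such benchmarks is the spectral matched filter, which whitens each pixel by the background covariance and correlates it with a reference target spectrum. The sketch below uses synthetic data and makes no claim about the paper's specific algorithm set:

```python
import numpy as np

def matched_filter_scores(cube, target_spectrum):
    """Spectral matched filter: score each pixel of a hyperspectral cube by
    its background-whitened correlation with the reference target spectrum.
    Scores are ~1 at pure target pixels and ~0 on background."""
    pixels = cube.reshape(-1, cube.shape[-1])          # (N, bands)
    mu = pixels.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(pixels, rowvar=False))
    d = target_spectrum - mu
    w = inv_cov @ d / (d @ inv_cov @ d)                # MF weight vector
    return (pixels - mu) @ w

rng = np.random.default_rng(1)
bands = 20
cube = rng.normal(0.4, 0.02, size=(32, 32, bands))     # synthetic background
target = np.linspace(0.1, 0.9, bands)                  # synthetic target spectrum
cube[5, 5] = target                                    # implant one target pixel
scores = matched_filter_scores(cube, target)
```

Thresholding the scores and sweeping the threshold is exactly what produces the ROC curves used for validation above.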

Other


Technical Note
Inversion of Phytoplankton Pigment Vertical Profiles from Satellite Data Using Machine Learning
Remote Sens. 2021, 13(8), 1445; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13081445 - 08 Apr 2021
Abstract
Observing the vertical dynamics of phytoplankton in the water column is essential to understanding the evolution of ocean primary productivity under climate change and the efficiency of the CO2 biological pump; this is usually done through in-situ measurements. In this paper, we propose a machine learning methodology to infer the vertical distribution of phytoplankton pigments from surface satellite observations, allowing their global estimation with high spatial and temporal resolution. After imputing missing values with iterative-completion Self-Organizing Maps, then smoothing and reducing the vertical distributions through principal component analysis, we used a Self-Organizing Map to cluster the reduced profiles together with the satellite observations. The referent vectors of these clusters were then used to invert the vertical profiles of phytoplankton pigments. The methodology was trained and validated on the MAREDAT dataset and tested on the Tara Oceans dataset. The regression coefficients R2 between observed and estimated vertical profiles of pigment concentration are, on average, greater than 0.7. This approach could allow monitoring of the vertical distribution of phytoplankton types in the global ocean.
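The reduce-cluster-invert pipeline can be sketched with toy data. Here random cluster assignments stand in for a trained Self-Organizing Map, and all array shapes and names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
depths = 50
profiles = rng.random((500, depths))        # toy vertical pigment profiles
surface = profiles[:, :4]                   # toy "satellite-like" surface features

# PCA: reduce each vertical profile to a few principal components
mean_p = profiles.mean(axis=0)
_, _, vt = np.linalg.svd(profiles - mean_p, full_matrices=False)
components = vt[:5]                         # keep 5 principal components
reduced = (profiles - mean_p) @ components.T

# stand-in for the SOM: fixed cluster assignments over joint
# [surface, reduced-profile] vectors, with per-cluster referent means
n_clusters = 40
idx = rng.integers(0, n_clusters, size=500)
cent_surf = np.array([surface[idx == k].mean(axis=0) for k in range(n_clusters)])
cent_red = np.array([reduced[idx == k].mean(axis=0) for k in range(n_clusters)])

def invert_profile(surface_obs):
    """Map a surface observation to an estimated vertical profile via the
    nearest cluster's referent, then un-project from PCA space."""
    k = np.argmin(np.linalg.norm(cent_surf - surface_obs, axis=1))
    return mean_p + cent_red[k] @ components

estimate = invert_profile(surface[0])
```

A real SOM would learn the referent vectors by competitive training rather than simple cluster means, but the inversion step, nearest referent followed by PCA back-projection, has this shape.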
