
Single and Multi-UAS-Based Remote Sensing and Data Fusion

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: closed (1 August 2023) | Viewed by 15229

Special Issue Editors

Department of Industrial Engineering, University of Naples "Federico II", P.le Tecchio 80, 80125 Naples, Italy
Interests: unmanned aircraft systems; vision-based applications; avionics; guidance and navigation; detect and avoid; target tracking; path planning; data fusion; swarms; distributed space systems; formation flying; in orbit proximity operations; space surveillance
Department of Industrial Engineering, University of Naples Federico II, Piazzale Tecchio 80, 80125 Naples, Italy
Interests: spacecraft guidance, navigation and control; spacecraft relative navigation; pose determination; electro-optical sensors; LIDAR; star tracker; unmanned aerial vehicles; autonomous navigation; sense and avoid; visual detection and tracking

Special Issue Information

Dear Colleagues,

Unmanned aircraft systems (UASs) currently represent a powerful tool for a large variety of civil and military remote sensing applications, being highly versatile, easy to deploy, and capable of quickly covering wide areas. In this respect, significant advancements are being made in two technological directions. On the one hand, the miniaturization of sensing systems is fostering the installation of multi-sensor architectures, including not only passive optical sensors but also active instruments such as radars and LIDARs (especially those exploiting solid-state technology), even on board small platforms. On the other hand, many researchers are investigating the potential of formations or swarms of cooperative vehicles to improve efficiency and performance with respect to single-UAS operations. For both single- and multi-UAS-based concepts, data fusion represents a key algorithmic paradigm to unleash the full potential of multi-sensor applications. Indeed, data fusion plays a critical role not only concerning mission payloads, but also in the framework of real-time navigation and control, as well as trajectory/attitude reconstruction.
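The data-fusion principle described above can be illustrated with a minimal sketch: inverse-variance weighting of independent sensor readings of the same quantity, the simplest linear fusion rule. The function name and numbers are illustrative only, not drawn from any system cited in this Special Issue.

```python
import numpy as np

def fuse(measurements, variances):
    """Fuse independent scalar measurements of the same quantity by
    inverse-variance weighting (the minimum-variance linear estimator)."""
    w = 1.0 / np.asarray(variances, dtype=float)
    estimate = np.sum(w * np.asarray(measurements, dtype=float)) / np.sum(w)
    fused_variance = 1.0 / np.sum(w)  # always below the smallest input variance
    return estimate, fused_variance

# Two sensors observing the same range: the fused variance halves.
est, var = fuse([10.2, 9.8], [0.04, 0.04])
```

The same weighting generalizes to the covariance-weighted updates used in multi-sensor navigation filters.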

Hence, this Special Issue welcomes original research contributions and state-of-the-art reviews from academia and industry regarding innovative technologies and algorithms for remote sensing applications by single or multiple UASs. Contributions highlighting the role of data fusion in augmenting performance with respect to single-sensor systems, for both remote sensing and navigation purposes, are particularly welcome. The Special Issue topics include, but are not limited to:

  • Cameras, LIDARs, magnetic sensors, radars, and other sensors for single and multi-UAS based remote sensing;
  • Original sensing concepts for active and passive UAS-based remote sensing;
  • Design, integration, and calibration of innovative multi-sensor architectures for UAS-based remote sensing;
  • Fusion of multi-source UAS remote sensing data;
  • Multi-sensor fusion for navigation and trajectory/attitude reconstruction;
  • Distributed sensing and data fusion;
  • Mission-oriented path planning, guidance, navigation, and control for single and multi-UAS architectures;
  • Data fusion for enhanced situational awareness.

Prof. Dr. Giancarmine Fasano
Prof. Dr. Roberto Opromolla
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • unmanned aircraft systems
  • unmanned aerial vehicles
  • sensor data fusion
  • multi-sensor systems
  • cameras
  • LIDARs
  • radars
  • magnetic sensors
  • UAV swarms
  • multi-UAV systems
  • path planning
  • guidance navigation and control

Published Papers (5 papers)


Research

45 pages, 30368 KiB  
Article
Encounter Risk Evaluation with a Forerunner UAV
by Péter Bauer, Antal Hiba, Mihály Nagy, Ernő Simonyi, Gergely István Kuna, Ádám Kisari, István Drotár and Ákos Zarándy
Remote Sens. 2023, 15(6), 1512; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15061512 - 09 Mar 2023
Cited by 2 | Viewed by 1402
Abstract
A forerunner UAV is an unmanned aerial vehicle equipped with a downward-looking camera that flies ahead of advancing emergency ground vehicles (EGVs) to notify the driver about hidden dangers (e.g., other vehicles). A feasibility demonstration in an urban environment, with a multicopter as the forerunner UAV and two cars as the emergency and dangerous ground vehicles, was carried out at the ZalaZONE Proving Ground, Hungary. After describing the system hardware and software components, the test scenarios, and object detection and tracking, the main contribution of the paper is the development and evaluation of encounter risk decision methods. First, the basic collision risk evaluation applied in the demonstration is summarized; then, the detailed development of an improved method is presented. It starts with a comparison of different velocity and acceleration estimation methods. Then, vehicle motion prediction is conducted, considering the estimated data and its uncertainty. The prediction time horizon is determined based on the actual EGV speed and, hence, the braking time. If the predicted trajectories intersect, the EGV driver is notified about the danger. Some special relations between the EGV and the other vehicle are also handled. Tuning and comparison of the basic and improved methods are performed based on real data from the demonstration. The improved method can notify the driver for a longer time, identifies special relations between the vehicles, and adapts to the actual EGV speed and braking characteristics; therefore, it is selected for future application.
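The trajectory-intersection idea the abstract describes can be sketched in its simplest form: propagate both vehicles over the prediction horizon and raise an alert when the predicted separation drops below a threshold. This is a generic constant-velocity illustration, not the authors' actual algorithm; the positions, speeds, and 5 m threshold are made up.

```python
import numpy as np

def min_separation(p_egv, v_egv, p_other, v_other, horizon, dt=0.1):
    """Propagate both vehicles with a constant-velocity model over the
    prediction horizon and return the minimum predicted separation [m]."""
    t = np.arange(0.0, horizon + dt, dt)
    rel_p = np.asarray(p_other, float) - np.asarray(p_egv, float)
    rel_v = np.asarray(v_other, float) - np.asarray(v_egv, float)
    sep = np.linalg.norm(rel_p[None, :] + t[:, None] * rel_v[None, :], axis=1)
    return sep.min()

# EGV heading east; another vehicle heading north toward the same intersection.
d = min_separation([0, 0], [10, 0], [50, -50], [0, 10], horizon=6.0)
alert = d < 5.0  # notify the driver if predicted separation falls below 5 m
```

In the paper's setting, the horizon would be tied to the EGV's speed-dependent braking time rather than fixed, and the motion model would account for estimation uncertainty.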
(This article belongs to the Special Issue Single and Multi-UAS-Based Remote Sensing and Data Fusion)

18 pages, 17318 KiB  
Article
UAV Thermal Imaging for Unexploded Ordnance Detection by Using Deep Learning
by Milan Bajić, Jr. and Božidar Potočnik
Remote Sens. 2023, 15(4), 967; https://0-doi-org.brum.beds.ac.uk/10.3390/rs15040967 - 09 Feb 2023
Cited by 5 | Viewed by 2322
Abstract
A few promising solutions for thermal imaging Unexploded Ordnance (UXO) detection were proposed after the start of the military conflict in Ukraine in 2014. At the same time, most landmine clearance protocols and practices are based on old, 20th-century technologies. More than 60 countries worldwide are still affected by explosive remnants of war, and new areas are contaminated almost every day. To date, no automated solutions exist for surface UXO detection using thermal imaging, partly because no publicly available data exist. This research bridges both gaps by introducing an automated UXO detection method and by publishing thermal imaging data. During a project in Bosnia and Herzegovina in 2019, the organisation Norwegian People's Aid collected data about unexploded ordnance and made them available for this research. Thermal images with a size of 720 × 480 pixels were collected using an Unmanned Aerial Vehicle at a height of 3 m, thus achieving a very small Ground Sampling Distance (GSD). One goal of our research was also to verify whether the detection accuracy for explosive remnants of war could be improved further by using Convolutional Neural Networks (CNNs). We experimented with various modern CNN architectures for object identification, from which the YOLOv5 model was selected as the most promising for retraining. An eleven-class object detection problem was solved in this study. Our data were annotated semi-manually. Five versions of the YOLOv5 model, fine-tuned with a grid search, were trained end-to-end on 640 randomly selected training images and 80 validation images from our dataset. The trained models were verified on the remaining 88 images from our dataset.
Objects from each of the eleven classes were identified with more than 90% probability, with a Mean Average Precision (mAP) at a 0.5 threshold of 99.5%, and an mAP at thresholds from 0.5 to 0.95 of 87.0% up to 90.5%, depending on the model's complexity. Our results are comparable to the state of the art, where such object detection methods have been tested on other similar small datasets of thermal images. Our study is one of the few in the field of automated UXO detection using thermal images, and the first to solve the problem of identifying more than one class of objects. Moreover, the publicly available thermal images with a relatively small GSD will enable and stimulate the development of new detection algorithms, for which our method and results can serve as a baseline. Only truly accurate automatic UXO detection solutions will help to solve one of the least explored worldwide life-threatening problems.
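For readers unfamiliar with the mAP thresholds quoted above: a detection is matched to ground truth via Intersection-over-Union (IoU), and at the mAP@0.5 operating point a predicted box counts as correct when its IoU with a ground-truth box is at least 0.5. A minimal sketch with illustrative box coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping by half: IoU = 50 / 150 = 1/3, below 0.5,
# so this detection would not count as a true positive at mAP@0.5.
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

mAP@0.5:0.95 averages the precision over IoU thresholds from 0.5 to 0.95 in steps of 0.05, which is why it is the stricter of the two figures reported.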
(This article belongs to the Special Issue Single and Multi-UAS-Based Remote Sensing and Data Fusion)

26 pages, 6548 KiB  
Article
Sensing Requirements and Vision-Aided Navigation Algorithms for Vertical Landing in Good and Low Visibility UAM Scenarios
by Paolo Veneruso, Roberto Opromolla, Carlo Tiana, Giacomo Gentile and Giancarmine Fasano
Remote Sens. 2022, 14(15), 3764; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14153764 - 05 Aug 2022
Cited by 2 | Viewed by 2740
Abstract
To support the rapid development of the Urban Air Mobility framework, safe navigation must be ensured for Vertical Take-Off and Landing aircraft, especially in the approach and landing phases. Visual sensors have the potential to provide accurate measurements with reduced budgets, although integrity issues, as well as performance degradation in low-visibility and highly dynamic environments, may pose challenges. In this context, this paper focuses on autonomous navigation during vertical approach and landing procedures and provides three main contributions. First, visual sensing requirements relevant to Urban Air Mobility scenarios are defined considering realistic landing trajectories, landing pad dimensions, and wind effects. Second, a multi-sensor navigation architecture based on an Extended Kalman Filter is presented, which integrates visual estimates with inertial and GNSS measurements and includes different operating modes and ad hoc integrity checks. The presented processing pipeline is built to provide the required navigation performance in different conditions, including day/night flight, atmospheric disturbances, and low visibility, and can support the autonomous initialization of a missed approach procedure. Third, performance assessment of the proposed architecture is conducted within a highly realistic simulation environment which reproduces real-world scenarios and includes variable weather and illumination conditions. Results show that the proposed architecture is robust with respect to dynamic and environmental challenges, providing cm-level positioning uncertainty in the final landing phase. Furthermore, autonomous initialization of a missed approach procedure is demonstrated in case of loss of visual contact with the landing pad and the consequent increase of the self-estimated navigation uncertainty.
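The measurement update at the core of such an Extended Kalman Filter architecture is standard: the same routine can serve GNSS, inertial, or vision-derived observations simply by swapping the measurement model H and noise covariance R. The sketch below is generic, not the paper's implementation; the 2D position state and the numbers are illustrative.

```python
import numpy as np

def ekf_update(x, P, z, H, R):
    """Standard (E)KF measurement update; returns the updated state,
    covariance, and the innovation terms useful for integrity gating."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new, y, S

# Fuse a (hypothetical) vision-based position fix into a 2D position state.
x = np.array([0.0, 0.0]); P = np.eye(2) * 4.0   # prior state and covariance
z = np.array([1.0, 2.0]); H = np.eye(2); R = np.eye(2) * 1.0
x, P, y, S = ekf_update(x, P, z, H, R)
```

An integrity check of the kind the abstract mentions is often implemented as a chi-square gate on the normalized innovation `y.T @ inv(S) @ y`, rejecting measurements whose innovations are statistically implausible.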
(This article belongs to the Special Issue Single and Multi-UAS-Based Remote Sensing and Data Fusion)

24 pages, 20876 KiB  
Article
LLFE: A Novel Learning Local Features Extraction for UAV Navigation Based on Infrared Aerial Image and Satellite Reference Image Matching
by Xupei Zhang, Zhanzhuang He, Zhong Ma, Zhongxi Wang and Li Wang
Remote Sens. 2021, 13(22), 4618; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13224618 - 16 Nov 2021
Cited by 4 | Viewed by 3297
Abstract
Local feature extraction is a crucial technology for image-matching navigation of an unmanned aerial vehicle (UAV), where the aim is to accurately and robustly match a real-time image and a geo-referenced image to obtain position update information for the UAV. However, this is a challenging task due to inconsistent image capture conditions, which lead to extreme appearance changes, especially given the different imaging principles of infrared and RGB images. In addition, the sparsity and labeling complexity of existing public datasets hinder the development of learning-based methods in this research area. This paper proposes a novel learning-based local feature extraction method, which uses local features extracted by a deep neural network to find corresponding features in the satellite RGB reference image and the real-time infrared image. First, we propose a single convolutional neural network that simultaneously extracts dense local features and their corresponding descriptors. This network combines the advantages of a high-repeatability local feature detector and high-reliability local feature descriptors to match the reference image and the real-time image under extreme appearance changes. Second, to make full use of the sparse dataset, an iterative training scheme is proposed to automatically generate high-quality corresponding features for algorithm training. During this scheme, dense correspondences are automatically extracted, and geometric constraints are added to continuously improve their quality. With these improvements, the proposed method achieves state-of-the-art performance for matching infrared aerial (UAV-captured) images with satellite reference images, showing 4–6% improvements in precision, recall, and F1-score compared to other methods. Moreover, the application experiments show its potential and effectiveness for localization in UAV navigation and trajectory reconstruction.
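Once descriptors have been extracted from both images, a common way to establish correspondences is mutual nearest-neighbour matching: a pair is kept only when each descriptor is the other's best match. This is a generic illustration of that step, not necessarily the matching strategy used in the paper; the toy descriptors are made up.

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Match L2-normalized descriptors by mutual nearest neighbour:
    (i, j) is kept only if j is i's best match and i is j's best match."""
    sim = desc_a @ desc_b.T              # cosine similarity matrix
    nn_ab = sim.argmax(axis=1)           # best column in b for each row of a
    nn_ba = sim.argmax(axis=0)           # best row in a for each column of b
    return [(i, int(j)) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

# Toy descriptors: rows 0/1 of A correspond to rows 1/0 of B.
A = np.eye(3)[:2]
B = np.eye(3)[[1, 0]]
matches = mutual_nn_matches(A, B)
```

In practice such putative matches are then filtered with a geometric model (e.g., a RANSAC-estimated homography), which plays the same role as the geometric constraints described in the training scheme above.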
(This article belongs to the Special Issue Single and Multi-UAS-Based Remote Sensing and Data Fusion)

37 pages, 8089 KiB  
Article
An Unmanned Lighter-Than-Air Platform for Large Scale Land Monitoring
by Piero Gili, Marco Civera, Rinto Roy and Cecilia Surace
Remote Sens. 2021, 13(13), 2523; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13132523 - 28 Jun 2021
Cited by 11 | Viewed by 4043
Abstract
The concept and preliminary design of an unmanned lighter-than-air (LTA) platform instrumented with different remote sensing technologies are presented. The aim is to assess the feasibility of using a remotely controlled airship for land monitoring of medium-sized (up to 10⁷ m²) urban or rural areas at relatively low altitudes (below 1000 m) and its potential advantages with respect to other standard remote and in-situ sensing systems. The proposal includes equipment for high-definition visual, thermal, and hyperspectral imaging as well as LiDAR scanning. The data collected from these different sources can then be combined to obtain geo-referenced products such as land use/land cover (LULC), soil water content (SWC), land surface temperature (LST), and leaf area index (LAI) maps, among others. The potential uses for diffuse structural health monitoring over built-up areas are discussed as well. Several mission typologies are considered.
(This article belongs to the Special Issue Single and Multi-UAS-Based Remote Sensing and Data Fusion)
