
Sensors for Unmanned Aerial Vehicles (UAV) Navigation and Localization

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Navigation and Positioning".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 5200

Special Issue Editors

Navigation Research Centre, College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
Interests: autonomous navigation technology of aircraft; intelligent unmanned swarm navigation technology

Special Issue Information

Dear Colleagues,

The development of unmanned aerial vehicles (UAVs) is currently on the rise, and they play an important role in many fields such as transport, rescue, environmental protection, and economic development. Navigation and localization technologies, which comprehensively combine sensor measurement, information processing, and positioning solutions, are critical for UAVs. A variety of integrated navigation technologies have been continuously developed and applied to UAVs, including inertial/GNSS, inertial/radio, and inertial/visual integrated navigation systems, as well as multi-sensor fusion navigation systems. In addition, collaborative navigation technology for UAVs has received particular attention in recent years. Considering the motion features, task requirements, and navigation sensor measurement characteristics of different UAV platforms, novel UAV navigation and localization technologies help promote autonomous UAV control performance.

This Special Issue therefore aims to put together original research and review articles on recent advances, technologies, solutions, applications, and new challenges in the field of Sensors for Unmanned Aerial Vehicles (UAV) Navigation and Localization.

Potential topics include but are not limited to:

  • Navigation Sensors on UAVs
  • Integrated Navigation System for UAVs
  • Information Fusion Algorithm in UAV Navigation
  • Collaborative Localization for UAVs
  • Robust Navigation for UAVs
  • Localization Technologies in UAV Applications

Dr. Rong Wang
Prof. Dr. Shuanggen Jin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • unmanned aerial vehicles (UAV)
  • integrated navigation
  • cooperative localization
  • robust navigation
  • multi-sensor fusion navigation

Published Papers (3 papers)


Research

20 pages, 6411 KiB  
Article
An Adaptive Multi-Mode Navigation Method with Intelligent Virtual Sensor Based on Long Short-Term Memory in GNSS Restricted Environment
by Rong Wang, Yu Rui, Jingxin Zhao, Zhi Xiong and Jianye Liu
Sensors 2023, 23(8), 4076; https://doi.org/10.3390/s23084076 - 18 Apr 2023
Cited by 1 | Viewed by 989
Abstract
To address the rapid divergence of a pure inertial navigation system left uncorrected in a GNSS-restricted environment, this paper proposes a multi-mode navigation method with an intelligent virtual sensor based on long short-term memory (LSTM). Training, predicting, and validation modes for the intelligent virtual sensor are designed. The modes switch flexibly according to the GNSS rejection situation and the status of the intelligent virtual sensor's LSTM network. The inertial navigation system (INS) is then corrected, and the availability of the LSTM network is maintained. Meanwhile, the fireworks algorithm is adopted to optimize the LSTM hyperparameters (the learning rate and the number of hidden layers) to improve estimation performance. The simulation results show that the proposed method can maintain the prediction accuracy of the intelligent virtual sensor online and adaptively shorten the training time according to performance requirements. Under small-sample conditions, the training efficiency and availability ratio of the proposed intelligent virtual sensor are significantly better than those of a back-propagation (BP) neural network and a conventional LSTM network, effectively and efficiently improving navigation performance in GNSS-restricted environments.
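The mode-switching idea in this abstract can be illustrated with a minimal sketch. The rule below is a hypothetical reconstruction, not the paper's actual logic: the `error_threshold` value and function names are assumptions for illustration only.

```python
from enum import Enum

class Mode(Enum):
    TRAINING = "training"      # GNSS available: collect data, train the LSTM
    PREDICTING = "predicting"  # GNSS denied: LSTM supplies pseudo-measurements to correct the INS
    VALIDATION = "validation"  # GNSS available: check LSTM prediction error before trusting it again

def next_mode(current, gnss_available, lstm_error, error_threshold=5.0):
    """Hypothetical switching rule for the intelligent virtual sensor."""
    if not gnss_available:
        # No GNSS: the virtual sensor must stand in for it.
        return Mode.PREDICTING
    if current is Mode.PREDICTING:
        # GNSS just restored: validate the network against real measurements first.
        return Mode.VALIDATION
    if lstm_error > error_threshold:
        # Network has drifted: retrain on fresh GNSS-aided data.
        return Mode.TRAINING
    return Mode.VALIDATION
```

In this sketch, validation acts as the default state while GNSS is healthy, so the network is continuously checked and only retrained when its error exceeds the threshold.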

17 pages, 6170 KiB  
Article
Precision Landing of a Quadcopter Drone by Smartphone Video Guidance Sensor in a GPS-Denied Environment
by Nicolas Bautista, Hector Gutierrez, John Inness and John Rakoczy
Sensors 2023, 23(4), 1934; https://doi.org/10.3390/s23041934 - 9 Feb 2023
Cited by 4 | Viewed by 2120
Abstract
This paper describes the deployment, integration, and demonstration of a Smartphone Video Guidance Sensor (SVGS) as a novel technology for autonomous 6-DOF proximity maneuvers and precision landing of a quadcopter drone. The proposed approach uses a vision-based photogrammetric position and attitude sensor (SVGS) to estimate the position of a landing target after video capture. A visual inertial odometry (VIO) sensor provides position estimates of the UAV in a ground coordinate system during flight in a GPS-denied environment. Integrating the SVGS and VIO sensors enables accurate updating of position setpoints during landing, providing improved performance compared with VIO-only landing, as shown in landing experiments. The proposed technique also shows significant operational advantages compared with state-of-the-art sensors for indoor landing, such as those based on augmented reality (AR) markers.
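The setpoint update described here amounts to expressing the SVGS target estimate (body frame) in the VIO ground frame. A minimal planar sketch of that frame transform, with hypothetical names and a 2D (x, y, yaw) simplification of the full 6-DOF case:

```python
import math

def target_in_ground_frame(vio_position, vio_yaw, svgs_target_body):
    """Rotate the SVGS target estimate by the VIO yaw and translate by the
    VIO position to obtain a ground-frame landing setpoint.
    Planar (x, y) case for illustration only."""
    c, s = math.cos(vio_yaw), math.sin(vio_yaw)
    bx, by = svgs_target_body
    gx = vio_position[0] + c * bx - s * by
    gy = vio_position[1] + s * bx + c * by
    return (gx, gy)
```

Each time SVGS produces a fresh target fix, the controller would feed the transformed point to the position loop as the new landing setpoint, rather than relying on a VIO-only estimate that drifts over time.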

18 pages, 6223 KiB  
Article
A Resilient Method for Visual–Inertial Fusion Based on Covariance Tuning
by Kailin Li, Jiansheng Li, Ancheng Wang, Haolong Luo, Xueqiang Li and Zidi Yang
Sensors 2022, 22(24), 9836; https://doi.org/10.3390/s22249836 - 14 Dec 2022
Cited by 1 | Viewed by 1225
Abstract
To improve the localization and pose precision of visual–inertial simultaneous localization and mapping (viSLAM) in complex scenarios, it is necessary to tune the weights of the visual and inertial inputs during sensor fusion. To this end, we propose a resilient viSLAM algorithm based on covariance tuning. During back-end optimization of the viSLAM process, the unit-weight root-mean-square error (RMSE) of the visual reprojection and IMU preintegration residuals in each optimization is computed to construct a covariance tuning function, producing a new covariance matrix. This matrix is used to perform another round of nonlinear optimization, effectively improving pose and localization precision without closed-loop detection. In the validation experiment, our algorithm outperformed the OKVIS, R-VIO, and VINS-Mono open-source viSLAM frameworks in pose and localization precision on the EuRoC dataset at all difficulty levels.
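The covariance-tuning idea can be sketched as follows. This is a simplified, hypothetical reconstruction (diagonal covariance, a squared-ratio tuning function, and an assumed `reference_rmse` parameter), not the paper's actual formulation:

```python
import math

def unit_weight_rmse(residuals):
    """Root-mean-square of a list of residuals."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

def tuned_covariance(base_cov, residuals, reference_rmse=1.0):
    """Scale a diagonal covariance by the squared ratio of the observed
    unit-weight RMSE to a reference value. Larger residuals inflate the
    covariance, which shrinks that sensor's weight in the next round of
    nonlinear optimization."""
    scale = (unit_weight_rmse(residuals) / reference_rmse) ** 2
    return [c * scale for c in base_cov]
```

Applied separately to the visual reprojection and IMU preintegration terms, this kind of rule rebalances the two sensors between optimization rounds: whichever factor fits poorly gets down-weighted in the refit.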
