Article

Enhanced Drone Navigation in GNSS Denied Environment Using VDM and Hall Effect Sensor

1 Department of Geomatics Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
2 Department of Electrical Engineering, Port-Said University, Port Said 42523, Egypt
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2019, 8(4), 169; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi8040169
Submission received: 25 December 2018 / Revised: 21 March 2019 / Accepted: 29 March 2019 / Published: 2 April 2019
(This article belongs to the Special Issue Applications and Potential of UAV Photogrammetric Survey)

Abstract

The last decade has witnessed a widespread adoption of small drones in many civil and military applications. With the massive advancement in the manufacture of small and lightweight Inertial Navigation Systems (INS), navigation in challenging environments has become feasible. Navigation of these small drones mainly depends on the integration of Global Navigation Satellite Systems (GNSS) and INS. However, the navigation performance of these small drones deteriorates quickly when GNSS signals are lost, due to the accumulated errors of the low-cost INS typically used in these drones. During GNSS signal outages, another aiding sensor is required to bound the drift exhibited by the INS. Before adding any sensor on board a drone, some limitations must be taken into consideration, including the limited availability of power, space, weight, and size. This paper presents a novel unconventional method to enhance the navigation of autonomous drones in GNSS denied environments through a new utilization of a Hall effect sensor acting as a flying odometer ("Air-Odo") and a vehicle dynamic model (VDM) for heading estimation. The proposed approach enhances the navigation solution by estimating the unmanned aerial vehicle (UAV) velocity and heading and fusing these measurements in the Extended Kalman Filter (EKF) of the integrated system.

1. Introduction

Without GNSS signals, the navigation system of small unmanned aerial vehicles (UAVs) deteriorates rapidly because of the low-cost microelectromechanical sensor (MEMS)-based INS typically utilized. These MEMS-based INSs exhibit large sensor errors, which degrade the navigation performance during GNSS signal outages [1]. Due to the rapid increase in applications employing drones in daily life, to save time, effort, and cost and to perform dangerous tasks, a versatile drone that can fulfill its task in different conditions/situations is essential. Relying on GNSS alone to bound the INS drift is not an adequate solution, as GNSS signals are susceptible to jamming, spoofing, multipath, or blockage. Other aiding sensors must be utilized to substitute for GNSS during its unavailability periods. Different sensors and methods can be used, e.g., single or multiple cameras [2,3,4,5,6,7,8,9], Light Detection and Ranging (LIDAR) [10,11,12,13,14,15], and Radio Detection and Ranging (RADAR) [16,17,18,19]. Due to major technological advancements, it is now feasible to manufacture radar small enough to be mounted on small UAVs. In [18], the authors made an experimental flight (hexacopter drone) to assess the ability of small radar systems (e.g., PulsON 440) to provide useful information for different purposes (e.g., localization, navigation, obstacle avoidance, imagery, and surveillance). Their experimental results showed the ability of the radar system to distinguish three artificial targets placed in an outdoor field. In [20], the authors mounted a forward-looking Micro Frequency Modulated Continuous Wave (FMCW) radar on a small quadcopter drone (3DR X8+). The information obtained from this radar system is the drone velocity (radar odometry) and the height Above Ground Level (AGL).
The radar odometry process is based on three steps: target detection, Multitarget Tracking (MTT), and Singular Value Decomposition (SVD) to estimate the drone rotations and translations. Height AGL is obtained from the first received radar echo. Their results proved that the information from the radar can lead to enhanced estimation of the drone motion. LIDAR is also a strong candidate to aid small UAVs in navigation. The algorithms in [21] are based on fusing the LIDAR measurements (ranges and scan angles) with INS, where the INS role is to compensate for the errors in the LIDAR information resulting from scan-plane changes during drone attitude maneuvers. The authors in [22] benefited from the high mobility of the drone to monitor the effects of natural disasters such as earthquakes. Their navigation system employs two main aiding sensors, a LIDAR (Hokuyo, 40 Hz) and a down-facing camera. They successfully localized the vehicle through previously known landmarks and reconstructed the geometric features of the surroundings based on the LIDAR data. Their results showed the ability to use UAVs for disaster monitoring in a GNSS denied environment using LIDAR and a monocular camera. In [23], the authors utilized LIDAR for indoor localization. The proposed registration process is based on the Iterative Closest Point (ICP) algorithm [24,25,26], as it can work in unstructured environments. Their approach aims to decrease the computational effort while preserving the robustness of the localization process. The algorithm labels the extracted features, and the matching process is done only between matched labels between scans. This approach proved its ability to decrease mismatches and reduce the iterations required, which reduces the computational effort.
Despite the great improvement in the navigation solution when utilizing these sensors during GNSS outage periods, they impose additional challenges due to the limited availability of space, size, power, cost, and computational power. Therefore, other aiding methods that respect all the limitations imposed on these small/commercial drones are crucial. One of these methods uses the Vehicle Dynamic Model (VDM) [27] as an update to the EKF of the navigation system. The VDM is the relation between the drone's motor/actuator movements (which cause changes in the drone's aerodynamic forces and moments) and its states. In [28,29,30], the authors tried to accurately model the UAV vehicle dynamics to act as an update (observation) to the EKF, while the INS remained the main process model; such an approach still depends mainly on the INS, so if the INS fails, the whole navigation system fails. To overcome this issue, [31] tried a different approach by utilizing the VDM as the main process model; in this case, GNSS/INS measurements, when available, serve as updates to the KF. Another approach was used in [32], where the authors dealt with both the INS and the VDM in a multiprocess unified KF that allows both processes to propagate simultaneously with their own uncertainties, after removing the duplicate states and augmenting the KF states to include both the VDM and INS models. Despite the enhancement shown in the navigation system performance after utilizing the VDM without adding any cost, size, weight, or power, it still requires special equipment and massive time and effort to accurately model the UAV VDM parameters, as any perturbation in these modeled parameters may render the VDM ineffective as an aiding system.
Other unconventional methods/sensors respecting the limitations imposed on these small drones were utilized to aid the navigation system, as in [33], where thermopiles (heat-radiation measuring sensors) were used in a different manner to estimate the UAV pitch and roll angles, based on the temperature difference between the ground and the sky. In [34], the authors used an unconventional combination of two unmanned aerial vehicles, where one of the two drones must have a perfect GNSS signal and visually track the other UAV, which has weak GNSS reception, in order to supply the second UAV with an indirect position update.
Following the same approach, to aid the navigation system during GNSS signal outages while accounting for the limitations imposed on these types of drones, this paper presents a fusion of two approaches: (a) an unconventional utilization of a contactless rotary magnetic encoder (Hall effect sensor), which is typically used for RPM measurement, fluid flow-rate measurement, speed measurement, and proximity sensing [35,36,37,38], to estimate the forward drone speed ("Air-Odo", acting as a flying odometer); and (b) the utilization of the VDM as a constraint to estimate the heading of the drone. The information from these two approaches (velocity and heading) is fused with the INS through an EKF. The proposed approach was verified through a hardware experiment with a 3DR quadcopter equipped with a low-cost MEMS-based Inertial Measurement Unit (IMU).

2. System Overview

Unlike fixed-wing drones, quadcopters cannot benefit from a pitot tube to estimate their velocity, due to its sensitivity to the direction of the airflow and its requirement for a significant differential pressure change, which cannot be achieved with such drones [39]. Therefore, other means of estimating the velocity of the quadcopter are crucial, not only to enhance the position estimation, but also to enhance the attitude estimation because of the Schuler effect. The "Air-Odo" presented in this paper is a technique to estimate the forward velocity of the quadcopter based on a Hall effect magnetic sensor.

2.1. Air-Odo

Air-Odo is composed of a resisting plate, a rotating part (two opposite rotating magnets), and a stationary part (a contactless 360-degree Hall effect angle position sensor), as shown in Figure 1. Its working principle is based on the aerodynamic force imposed on bodies immersed in an airflow. The resisting plate is hinged to a freely rotating rod, and the rotating part (the two opposite magnets) is attached at the edge of this rod. When the quadcopter starts to move, airflow is induced in the direction opposite to the movement, which causes the resisting plate to rotate due to the force imposed on it by the air, Equation (1). This rotation causes the two opposite magnets to rotate, and from the position of these two magnetic peaks, the 360-degree Hall effect angle sensor measures the amount of rotation caused by the quadcopter movement. This angle is then used to estimate the velocity of the quadcopter through a simple least-squares (LS) method.
$$L = \frac{1}{2}\rho V^2 A C_D$$
where $L$ is the drag force, $\rho$ is the air density, $V$ is the velocity of the flow, $A$ is the area of the resisting plate (immersed body), and $C_D$ is the drag coefficient.
The motion of the quadcopter will cause airflow opposite to the direction of motion, and this airflow will cause the resisting plate to deviate from its initial position, as shown in Figure 2, due to the drag force imposed over the resisting plate from the airflow. According to the resisting plate deviation angle, the velocity of the quadcopter can be estimated. However, the relation between the drone velocity and the measured angle is not a 1:1 relation, but it can be derived according to the mathematical model shown in Equations (2)–(7).
$$L \cos(\theta) = mg \cos(90^\circ - \theta)$$
$$L \cos(\theta) = mg \sin(\theta)$$
where,
  • L is the drag force from the airflow.
  • m is the resisting plate mass.
  • g is the gravity constant.
  • θ is the deflection angle made by the resisting plate due to drag force.
A relation between θ and V can be achieved if we substitute the drag force term (L) in Equation (3) by its definition in Equation (1) as follows.
$$\frac{1}{2}\rho V^2 A C_D \cos(\theta) = mg \sin(\theta)$$
$$\tan(\theta) = \frac{\rho V^2 A C_D}{2mg}$$
Since the mass of the resisting plate ($m$), the gravity constant ($g$), the area of the resisting plate ($A$), the air density ($\rho$), and the drag coefficient ($C_D$) can all be considered constants, Equation (5) can be rewritten as shown in Equation (6).
$$\tan(\theta) = \mathrm{const} \cdot V^2$$
$$\mathrm{const} = \frac{\rho A C_D}{2mg}$$
Figure 3 shows the relationship between the Air-Odo angle and the drone velocity using representative values. The figure clearly shows that the vehicle velocity can be estimated once the constant in Equation (7) is calculated.
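The angle-to-velocity relation in Equations (6) and (7) can be sketched in code. The snippet below is a minimal illustration only: it assumes the lumped constant is calibrated by least squares against GNSS-derived speeds while the signal is available, and the function names are hypothetical.

```python
import numpy as np

def calibrate_airodo(angles_rad, gnss_speeds):
    """Least-squares estimate of the lumped constant in tan(theta) = const * V^2.

    angles_rad:  Air-Odo deflection angles collected while GNSS is available.
    gnss_speeds: matching forward speeds from the GNSS/INS solution (m/s).
    """
    # Linear model y = const * x with y = tan(theta) and x = V^2:
    x = np.asarray(gnss_speeds, dtype=float) ** 2
    y = np.tan(np.asarray(angles_rad, dtype=float))
    # Closed-form one-parameter least squares: const = (x . y) / (x . x)
    return float(x @ y / (x @ x))

def airodo_speed(angle_rad, const):
    """Invert Equation (6): V = sqrt(tan(theta) / const)."""
    return float(np.sqrt(max(np.tan(angle_rad), 0.0) / const))
```

In practice, the least-squares fit absorbs effects not captured by the analytical constant of Equation (7), such as hinge friction, which is why the calibration is done against GNSS speeds rather than computed from plate geometry alone.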

2.2. Vehicle Dynamic Model

The drone utilized in this work is a quadcopter, composed of four brushless DC motors that control the drone motion according to their speeds. Typically, a quadcopter employs one of two configurations, either the plus or the X configuration (the X configuration is used in this work), as shown in Figure 4.
A quadcopter has three main rotational motions: pitch, roll, and yaw. For the X configuration, pitch is achieved by increasing the speeds of motors 1 and 2 while decreasing the speeds of motors 3 and 4 (or vice versa); roll is achieved by increasing the RPMs of motors 2 and 3 while decreasing those of motors 1 and 4 (or vice versa); and yaw is realized by increasing the RPMs of motors 2 and 4 while decreasing those of motors 1 and 3 (or vice versa).
The Quadcopter mathematical model can be expressed as follows:

2.2.1. Thrust

Thrust ($T$) is the force normal to the X–Y plane, which is responsible for the quadcopter's ability to maneuver, and can be calculated as follows.
$$T_i = C_T \rho A_r r^2 w_r^2$$
where,
  • $C_T$ is the motor thrust coefficient, which depends on the geometry of the motor.
  • $\rho$ is the air density.
  • $A_r$ is the propeller cross-section area.
  • $r$ is the motor radius.
  • $w_r$ is the motor angular velocity.
Since the air density, the propeller cross-section area, and the motor radius can be considered constant for a given motor, Equation (9) can be simplified to
$$T_i = C_T w_r^2$$

2.2.2. Torque

The motors' thrust causes torques, which are responsible for the quadcopter roll, pitch, and yaw motions (rotations around the X, Y, and Z axes), and can be calculated as shown in Equation (10).
$$\tau = C_Q w_r^2$$
where,
  • C Q is the motor torque coefficient.
For the X configuration, Equation (11) gives the total thrust and the torques in matrix form.
$$\begin{bmatrix} \Sigma T \\ \tau_\phi \\ \tau_\theta \\ \tau_\psi \end{bmatrix} = \begin{bmatrix} C_T & C_T & C_T & C_T \\ -dC_T & dC_T & dC_T & -dC_T \\ dC_T & dC_T & -dC_T & -dC_T \\ -C_Q & C_Q & -C_Q & C_Q \end{bmatrix} \begin{bmatrix} w_1^2 \\ w_2^2 \\ w_3^2 \\ w_4^2 \end{bmatrix}$$
where,
  • d represents the distance between the quadcopter center and the motor.
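The mapping from squared motor speeds to total thrust and body torques in Equation (11) can be illustrated as follows. This is a sketch only: the coefficient values are made-up placeholders, and the sign pattern follows the X-configuration motion description above rather than a calibrated airframe.

```python
import numpy as np

# Hypothetical coefficients; in practice C_T, C_Q, and d come from
# motor/propeller calibration and the airframe geometry.
C_T, C_Q, d = 1.2e-5, 2.0e-7, 0.25

def forces_moments(w):
    """Map the four motor angular speeds to [total thrust, roll, pitch, yaw
    torques], following the structure of Equation (11).

    Sign pattern assumed from the X-configuration description in the text:
    pitch from motors 1,2 vs 3,4; roll from 2,3 vs 1,4; yaw from 2,4 vs 1,3.
    """
    w2 = np.asarray(w, dtype=float) ** 2
    M = np.array([
        [    C_T,    C_T,     C_T,     C_T],  # total thrust
        [ -d*C_T,  d*C_T,   d*C_T,  -d*C_T],  # roll torque
        [  d*C_T,  d*C_T,  -d*C_T,  -d*C_T],  # pitch torque
        [   -C_Q,    C_Q,    -C_Q,     C_Q],  # yaw torque
    ])
    return M @ w2
```

With all four motors at equal speed, the torque rows cancel and only the thrust row is non-zero, which matches the hover condition.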

2.2.3. Quadcopter Dynamic Model Equation of Motion

The quadcopter dynamic equations of motion can be divided into four parts: the angular-velocity state Equations (12)–(14), which describe the angular rates of the quadcopter pitch, roll, and yaw; the kinematic Euler Equation (15), which describes the rate of change of the Euler angles ($\phi$, $\theta$, and $\psi$); the velocity state Equation (16), which accounts for the acceleration of the rigid-body center of mass resulting from the acting forces; and the position state Equation (17), which relates the quadcopter body-frame linear velocities to position.
$$\dot{W}^b = \begin{bmatrix} \dot{w}_x \\ \dot{w}_y \\ \dot{w}_z \end{bmatrix} = (I^b)^{-1}\left[ M^b_{A,T} - \Omega^b I^b W^b \right]$$
where,
  • $I^b$ is the 3 × 3 matrix defining the inertia of the quadcopter in the body frame, as shown in Equation (13).
  • $M^b_{A,T}$ represents the moments produced by the aerodynamic thrusts and torques in the body frame.
  • $\Omega^b$ is the skew-symmetric matrix of the body angular rates.
  • $W^b$ is the body rotational rate (angular velocity).
$$I^b = \begin{bmatrix} I_{xx} & 0 & 0 \\ 0 & I_{yy} & 0 \\ 0 & 0 & I_{zz} \end{bmatrix}$$
$$M^b_{A,T} = \begin{bmatrix} dC_T \bar{w}_2^2 - dC_T \bar{w}_4^2 + G_x \\ -dC_T \bar{w}_1^2 + dC_T \bar{w}_3^2 + G_y \\ -C_Q \bar{w}_1^2 + C_Q \bar{w}_2^2 - C_Q \bar{w}_3^2 + C_Q \bar{w}_4^2 \end{bmatrix}$$
where,
  • G x and G y are the gyroscopic forces X and Y directions, respectively.
$$\dot{\Phi} = \begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix} = \begin{bmatrix} 1 & \sin(\phi)\tan(\theta) & \cos(\phi)\tan(\theta) \\ 0 & \cos(\phi) & -\sin(\phi) \\ 0 & \sin(\phi)/\cos(\theta) & \cos(\phi)/\cos(\theta) \end{bmatrix} \begin{bmatrix} w_x \\ w_y \\ w_z \end{bmatrix}$$
$$\dot{V}^b = \begin{bmatrix} \dot{v}_x \\ \dot{v}_y \\ \dot{v}_z \end{bmatrix} = \frac{1}{m} F^b_{A,T} + g^b - \Omega^b V^b$$
where,
  • V b is the velocity of the quadcopter in the body frame (X, Y, and Z).
  • m is the quadcopter total mass.
  • F A , T b represents the force acting on the body due to the motors thrust.
  • g b is the gravitational force acting on the body frame.
$$\dot{P}^b = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{bmatrix} = R^i_b V^b$$
where,
  • $R^i_b$ is the rotation from the body frame to the inertial frame.
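The angular-velocity and kinematic Euler state equations (Equations (12) and (15)) can be sketched numerically. This is a minimal illustration assuming a diagonal inertia matrix as in Equation (13); note that the term $\Omega^b I^b W^b$, with $\Omega^b$ skew-symmetric, reduces to the cross product $W^b \times (I^b W^b)$.

```python
import numpy as np

def euler_rates(phi, theta, w_body):
    """Kinematic Euler equation (Equation (15)): body rates -> Euler angle rates."""
    t, c = np.tan(theta), np.cos(theta)
    T = np.array([
        [1.0, np.sin(phi) * t,  np.cos(phi) * t],
        [0.0, np.cos(phi),     -np.sin(phi)],
        [0.0, np.sin(phi) / c,  np.cos(phi) / c],
    ])
    return T @ np.asarray(w_body, dtype=float)

def angular_accel(w_body, moments, inertia_diag):
    """Angular-velocity state equation (Equation (12)) with diagonal inertia.

    Omega^b I^b W^b is computed as the equivalent cross product.
    """
    w = np.asarray(w_body, dtype=float)
    I = np.diag(inertia_diag)
    return np.linalg.inv(I) @ (np.asarray(moments, dtype=float) - np.cross(w, I @ w))
```

At level attitude ($\phi = \theta = 0$) the Euler-rate matrix is the identity, so the Euler angle rates equal the body rates, which is a quick sanity check on the implementation.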

2.3. Sensor Integration

The overall workflow of the sensor integration is shown in Figure 5. The workflow is composed of two parts. The first part is the quadcopter velocity estimation, which is achieved with the aid of Air-Odo. To estimate the velocity of the quadcopter, the Air-Odo measurements (angles) pass through a least-squares method, during Global Navigation Satellite Systems (GNSS) signal availability, to calculate the constant in Equation (7); as there are unmodeled parameters, such as friction at point O, that are not taken into consideration, this calibrated constant is then used to calculate the velocity of the quadcopter with the aid of Equation (6). The second part is the quadcopter heading estimation with the aid of the VDM. As shown in Equation (11), yaw movement occurs when there is a change in the RPMs of motors 1 and 3 opposite to the change in the RPMs of motors 2 and 4. The output of this motor relation is compared against an experimental threshold, as shown in Figure 6. If the result is within the threshold bound, the previous heading value is used as an update to the KF (as in this case the VDM indicates that there is no change in heading), while if the result of the yaw VDM relation is greater than this threshold (meaning the VDM indicates a change in the heading angle), the heading from the mechanization is considered as an update to the KF.
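The heading-constraint decision described above can be sketched as follows. The yaw relation is taken from the yaw row of Equation (11); the threshold value and the function name are hypothetical (the paper determines the threshold experimentally, Figure 6).

```python
def heading_update(w, prev_heading, mech_heading, threshold):
    """Choose the heading update for the KF from the VDM yaw relation.

    w:            four motor speeds [w1, w2, w3, w4]
    prev_heading: last accepted heading (used when the VDM sees no yaw motion)
    mech_heading: heading from the INS mechanization
    threshold:    experimentally determined bound (hypothetical value here)
    """
    # Yaw row of Equation (11): motors 2 and 4 act against motors 1 and 3.
    yaw_term = -w[0]**2 + w[1]**2 - w[2]**2 + w[3]**2
    if abs(yaw_term) < threshold:
        return prev_heading   # VDM indicates no heading change
    return mech_heading       # VDM indicates a heading change
```

The constraint thus acts as a heading "zero-rate update" whenever the motor speeds show no net yawing moment.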

2.4. Extended Kalman Filter Framework

All the sensor measurements are fused through a loosely coupled EKF scheme. The error state vector is composed of 21 states, as shown in Equation (18).
$$\delta x = \begin{bmatrix} \delta r^n & \delta v^n & \delta \varepsilon^n & b & d & s_a & s_g \end{bmatrix}$$
where,
  • $\delta r^n$, $\delta v^n$, and $\delta\varepsilon^n$ are the position, velocity, and attitude error vectors, respectively.
  • $b$ and $d$ are the accelerometer biases and gyro drifts, respectively.
  • $s_a$ and $s_g$ are the accelerometer and gyro scale factors, respectively.
The EKF starts with state and uncertainty initialization, followed by the prediction step: Equation (19) predicts the states, and Equation (20) propagates their associated uncertainties.
$$\hat{x}_{k+1} = \Phi_k \hat{x}_k + G_k w_k$$
$$P_{k+1} = \Phi_k P_k \Phi_k^T + G_k Q_k G_k^T$$
where,
  • Φ k is the state transition matrix.
  • x k is the error states.
  • G k is the noise coefficient matrix.
  • P k + 1 is the predicted covariance matrix.
  • w k is the system noise.
  • Q k is the covariance matrix of system noise.
It’s worth noting that the stochastic errors of the inertial sensor were modeled as a first-order Gauss–Markov.
The prediction step is followed by an update step, as shown in Equations (21)–(23), to incorporate the data from the aiding sensors when available, e.g., GNSS, Air-Odo (velocity), and heading (VDM).
$$K_k = P_k H_k^T (H_k P_k H_k^T + R_k)^{-1}$$
$$\hat{x}_k^+ = \hat{x}_k + K_k \delta z_k$$
$$P_k^+ = (I - K_k H_k) P_k$$
where,
  • $H_k$ is the design matrix.
  • $K_k$ is the Kalman gain.
  • $\hat{x}_k^+$ is the updated state vector.
  • $P_k^+$ is the updated covariance matrix.
  • $\delta z_k$ is the innovation sequence.
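The loosely coupled EKF cycle of Equations (19)–(23) can be sketched as follows. This is an illustrative sketch, not the paper's 21-state filter: the matrices are placeholders, and in the prediction step the process noise $w_k$ enters only through the covariance propagation, as is standard in implementation.

```python
import numpy as np

def ekf_predict(x, P, Phi, G, Q):
    """Prediction step, Equations (19)-(20)."""
    x_pred = Phi @ x                          # state propagation
    P_pred = Phi @ P @ Phi.T + G @ Q @ G.T    # covariance propagation
    return x_pred, P_pred

def ekf_update(x, P, H, R, dz):
    """Update step, Equations (21)-(23), applied whenever an aiding
    measurement (GNSS, Air-Odo velocity, or VDM heading) is available."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x_new = x + K @ dz                             # state correction
    P_new = (np.eye(len(x)) - K @ H) @ P           # covariance update
    return x_new, P_new
```

Each aiding source simply supplies its own design matrix $H_k$, measurement covariance $R_k$, and innovation $\delta z_k$; the filter structure is unchanged.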

3. Experiments and Results

3.1. Hardware Setup

To verify the proposed approach in aiding the navigation solution during GNSS signal outages, an experimental flight was carried out. The experiment uses the 3DR Solo quadcopter equipped with MEMS-based low-cost IMU (MPU-9250) and U-Blox GNSS receiver. Air-Odo was attached to the top of the quadcopter as shown in Figure 7.

3.2. Indoor Experiment

Before conducting the outdoor flight experiment, the mathematically derived relation between the angles and the velocities needed to be verified. To do so, an indoor experiment was conducted in a wind tunnel, as shown in Figure 8.
The measurements from the pitot tube (wind-tunnel velocities) were compared to the angle measurements from the Air-Odo; the relation between them is shown in Figure 9.
Figure 9 verifies the relationship between the Air-Odo angles and the wind velocities shown earlier in Figure 3, which was based on the theoretical model. The experiment also shows that the dynamic range of this Air-Odo design spans 0.65 m/s to 6 m/s. However, velocities derived from Air-Odo angles beyond 80 degrees should not be used, as the angle increments no longer track the velocity increments.
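The wind-tunnel limits above suggest a simple validity gate before an Air-Odo measurement is fed to the filter. A sketch under the stated assumptions (0.65–6 m/s usable range, angles above 80 degrees rejected); the function name is hypothetical.

```python
# Limits taken from the wind-tunnel test: usable speeds 0.65-6 m/s,
# and angles above ~80 degrees no longer discriminate velocity.
MIN_SPEED, MAX_SPEED, MAX_ANGLE_DEG = 0.65, 6.0, 80.0

def airodo_measurement_valid(angle_deg, speed):
    """Gate Air-Odo measurements before passing them to the EKF update."""
    return angle_deg <= MAX_ANGLE_DEG and MIN_SPEED <= speed <= MAX_SPEED
```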

3.3. Outdoor Experiment

This autonomous flight is composed of 21 waypoints, as shown in Figure 10, with velocities varying between 3 and 5 m/s over a total time of 350 s.

3.3.1. INS Solution in Standalone Mode (Dead-Reckoning)

To evaluate the performance of the standalone INS, the navigation solution during a 65 s GNSS signal outage was estimated using the EKF in prediction mode without any updates. Figure 11 shows the performance of the INS alone in dead-reckoning mode.
The figure clearly shows the large deterioration of the low-cost INS performance in the absence of updates, with the Root Mean Square Error (RMSE) reaching 415 m and 450 m in the north and east directions, respectively.

3.3.2. Air-Odo/VDM/INS integration

During the outdoor flight, the measurements of the Air-Odo represent the velocity of the UAV contaminated by any existing wind. To estimate the UAV velocity alone, four intentional stops were carried out during GNSS signal availability. During these stops, the Air-Odo measurements were collected (representing the wind velocity only). During GNSS signal outages, the previously collected wind speed measurements are then used to adjust the Air-Odo measurements in an attempt to estimate the UAV velocity only. Figure 12 shows the velocity of the vehicle estimated from the Air-Odo (after removing the effect of wind) compared to the reference velocity of the drone.
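The stop-and-sample wind compensation described above can be sketched as a scalar correction along the forward axis. The class and its interface are hypothetical, and a real implementation would also have to account for the wind direction relative to the flight path.

```python
class WindCompensator:
    """Track wind from intentional hover stops and remove it from Air-Odo speeds.

    During an intentional stop the drone velocity is zero, so the Air-Odo
    reading is pure wind; subsequent readings are corrected by the most
    recent hover sample.
    """

    def __init__(self):
        self.wind = 0.0  # latest wind-speed sample (m/s), assumed along-track

    def record_hover(self, airodo_reading):
        """Store the Air-Odo reading taken while the drone is stationary."""
        self.wind = airodo_reading

    def ground_speed(self, airodo_reading):
        """Air-relative reading minus the latest wind sample."""
        return airodo_reading - self.wind
```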
The previous figure shows that Air-Odo was able to estimate the velocity of the UAV with an RMSE of ~0.6 m/s. It also shows differences between the reference and estimated velocities in some segments, resulting from wind gusts.
Proposed approach performance (65 s GNSS outage)
Figure 13 and Table 1 show the performance of the proposed approach (Air-Odo/INS integration), without the VDM heading update, during an artificial GNSS signal loss of 65 s.
This experiment showed the ability of the Air-Odo to enhance the navigation solution and bound the drift exhibited by the INS during the GNSS signal outage. It also showed that, despite the large enhancement in the navigation solution, the solution still suffers from heading drift, as the heading is not strongly observable from velocity updates alone.
Figure 14 shows the performance of the Air-Odo for the same period of GNSS outage when adding the VDM as a constraint to heading update.
Table 2 and Figure 14 show the effectiveness of utilizing the VDM as heading constraint, in addition to the Air-Odo for velocity estimation.
Performance during long GNSS signal outages.
The performance of the proposed approach is shown in Figure 15 and Table 3, for longer (2 min) of GNSS signal outage.
Figure 15 clearly shows the ability of the proposed approach to bound the drift exhibited by the INS for a longer GNSS outage period (125 s). Figure 16 shows the effect of a wind gust, which caused the UAV to drift along the 4th link.
Figure 17 shows the performance of the UAV navigation system based on Air-Odo and VDM for a 185 s GNSS signal outage. The figure demonstrates the effectiveness of the proposed approach in bounding the INS drift, and Table 4 summarizes the performance of the proposed aiding system.
Table 5 provides a summary of the proposed approach during different artificial GNSS signal outages ranging from 60 to 185 s. These experiments showed that utilizing the Air-Odo system to estimate the velocity of the drone and the VDM to constrain the UAV heading greatly enhanced the navigation performance of the UAV during GNSS signal outages. Although wind gusts caused errors in the UAV velocity estimation, the Air-Odo velocity updates still offer a far superior navigation solution compared to the standalone INS solution during short wind anomalies.

4. Conclusions

Low-cost INSs, which are typically utilized in small and commercial UAVs, suffer from large drift in the absence of GNSS signals. This paper presented a new approach to enhance the navigation solution during GNSS signal outages based on a Hall effect sensor for velocity estimation ("Air-Odo") and a vehicle dynamic model for heading constraint (update). The Air-Odo system acts as an odometer for flying vehicles. It is composed of a stationary component and a rotating component (the resisting plate). During the quadcopter motion, the resulting airflow causes the resisting plate to rotate; from this rotation, the velocity of the quadcopter is estimated with the aid of a least-squares approach applied to a mathematically derived relation between angles and drone velocities. The VDM, which is the relation between the quadcopter motor speeds and its states, is utilized to detect whether there is a change in the quadcopter heading; according to the output of this relation, the heading of the drone is either constrained to the previous heading state or not. The proposed approach was verified with an indoor experiment (wind tunnel) and an outdoor experiment through a real flight. Different GNSS signal outages, ranging from 60 to 185 s, were introduced to prove the ability of the proposed system to aid the navigation of the UAV. The results showed that the navigation system 2D RMSE was improved by more than 99% compared to the INS navigation solution (dead-reckoning mode). The proposed approach also respects all the limitations imposed on this kind of drone (small UAVs) in terms of power, weight, size, and computation: the Air-Odo weighs less than 50 grams with a very low power consumption of 150 mW and low cost (~20 dollars, which can be cheaper in mass production), while the VDM does not require any additional space, cost, or weight, as the feedback from the UAV actuators is utilized.

5. Patents

Air-Odo was filed as a patent on 6 June 2018. The application number is 62681233.

Author Contributions

Conceptualization, Shady Zahran, Adel Moussa and Naser El-Sheimy; methodology, Shady Zahran, Adel Moussa and Naser El-Sheimy; software, Shady Zahran; validation, Shady Zahran; formal analysis, Shady Zahran; investigation, Shady Zahran; data curation, Shady Zahran; writing—original draft preparation, Shady Zahran; writing—review and editing, Shady Zahran, Adel Moussa and Naser El-Sheimy; visualization, Shady Zahran; supervision, Naser El-Sheimy; project administration, Naser El-Sheimy; funding acquisition, Naser El-Sheimy.

Funding

This work was supported by Naser El-Sheimy research funds from NSERC and Canada Research Chairs programs.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lau, T.K.; Liu, Y.; Lin, K.W. Inertial-Based Localization for Unmanned Helicopters against GNSS Outage. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 1932–1949. [Google Scholar] [CrossRef]
  2. Mostafa, M.M.; Moussa, A.M.; El-Sheimy, N.; Sesay, A.B. Optical Flow Based Approach for Vision Aided Inertial Navigation Using Regression Trees. In Proceedings of the 2017 International Technical Meeting of The Institute of Navigation, Monterey, CA, USA, 30 January–02 February 2017; pp. 856–865. [Google Scholar] [CrossRef]
  3. Chugo, D.; Yokota, S.; Matsushima, S.; Takase, K. Camera-based indoor navigation for service robots. In Proceedings of the SICE Annual Conference 2010, Taipei, Taiwan, 18–21 August 2010; pp. 1008–1013. [Google Scholar]
  4. Huang, K.-C.; Tseng, S.-H.; Mou, W.-H.; Fu, L.-C. Simultaneous localization and scene reconstruction with monocular camera. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, St. Paul, MN, USA, 14–18 May 2012; pp. 2102–2107. [Google Scholar] [CrossRef]
  5. Zarándy, Á.; Nemeth, M.; Nagy, Z.; Kiss, A.; Santha, L.; Zsedrovits, T. A real-time multi-camera vision system for UAV collision warning and navigation. J. Real-Time Image Process. 2016, 12, 709–724. [Google Scholar] [CrossRef]
  6. Achtelik, M.; Achtelik, M.; Weiss, S.; Siegwart, R. Onboard IMU and monocular vision based control for MAVs in unknown in- and outdoor environments. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 3056–3063. [Google Scholar] [CrossRef]
  7. Wang, C.; Wang, T.; Liang, J.; Chen, Y.; Zhang, Y.; Wang, C. Monocular visual SLAM for small UAVs in GPS-denied environments. In Proceedings of the 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), Guangzhou, China, 11–14 December 2012; pp. 896–901. [Google Scholar] [CrossRef]
  8. Urzua, S.; Munguía, R.; Grau, A. Vision-based SLAM system for MAVs in GPS-denied environments. Int. J. Micro Air Veh. 2017, 9, 283–296. [Google Scholar] [CrossRef]
  9. Chuanqi, C.; Xiangyang, H.; Zhenjie, Z.; Mandan, Z. Monocular visual odometry based on optical flow and feature matching. In Proceedings of the 2017 29th Chinese Control and Decision Conference (CCDC), Chongqing, China, 28–30 May 2017; pp. 4554–4558. [Google Scholar] [CrossRef]
  10. Gao, Y.; Liu, S.; Atia, M.M.; Noureldin, A. INS/GPS/LiDAR Integrated Navigation System for Urban and Indoor Environments Using Hybrid Scan Matching Algorithm. Sensors 2015, 15, 23286–23302. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Kumar, G.A.; Patil, A.K.; Patil, R.; Park, S.S.; Chai, Y.H. A LiDAR and IMU Integrated Indoor Navigation System for UAVs and Its Application in Real-Time Pipeline Classification. Sensors 2017, 17, 1268. [Google Scholar] [CrossRef]
  12. Mohamed, H.; Moussa, A.; Elhabiby, M.; El-Sheimy, N.; Sesay, A. A Novel Real-Time Reference Key Frame Scan Matching Method. Sensors 2017, 17, 1060. [Google Scholar] [CrossRef]
  13. Bailey, T.; Durrant-Whyte, H. Simultaneous localization and mapping (SLAM): part II. IEEE Robot. Autom. Mag. 2006, 13, 108–117. [Google Scholar] [CrossRef]
  14. Hemann, G.; Singh, S.; Kaess, M. Long-range GPS-denied aerial inertial navigation with LIDAR localization. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 1659–1666. [Google Scholar] [CrossRef]
  15. Tang, J.; Chen, Y.; Niu, X.; Wang, L.; Chen, L.; Liu, J.; Shi, C.; Hyyppä, J. LiDAR Scan Matching Aided Inertial Navigation System in GNSS-Denied Environments. Sensors 2015, 15, 16710–16728. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Mostafa, M.; Zahran, S.; Moussa, A.; El-Sheimy, N.; Sesay, A. Radar and Visual Odometry Integrated System Aided Navigation for UAVS in GNSS Denied Environment. Sensors 2018, 18, 2776. [Google Scholar] [CrossRef] [PubMed]
  17. Zahran, S.; Mostafa, M.M.; Masiero, A.; Moussa, A.; Vettore, A.; El-Sheimy, N. Micro-Radar and UWB Aided UAV Navigation In GNSS Denied Environment. In The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Proceedings of the 2018 ISPRS TC I Mid-term Symposium: Innovative Sensing – From Sensors to Methods and Applications, Karlsruhe, Germany, 10–12 October 2018; ISPRS: Hannover, Germany, 2018; pp. 469–476. [Google Scholar] [CrossRef]
  18. Fasano, G.; Renga, A.; Vetrella, A.R.; Ludeno, G.; Catapano, I.; Soldovieri, F. Proof of concept of micro-UAV-based radar imaging, International Conference on Unmanned Aircraft Systems (ICUAS). In Proceedings of the 2017 International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 13–16 June 2017; pp. 1316–1323. [Google Scholar] [CrossRef]
  19. Quist, E.B.; Beard, R.W. Radar odometry on fixed-wing small unmanned aircraft. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 396–410. [Google Scholar] [CrossRef]
  20. Scannapieco, A.F.; Renga, A.; Fasano, G.; Moccia, A. Ultralight radar for small and micro-UAV navigation. In The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Proceedings of the International Conference on Unmanned Aerial Vehicles in Geomatics, Bonn, Germany, 4–7 September 2017; ISPRS: Hannover, Germany, 2017; pp. 333–338. [Google Scholar] [CrossRef]
  21. Li, R.; Liu, J.; Zhang, L.; Hang, Y. LIDAR/MEMS IMU integrated navigation (SLAM) method for a small UAV in indoor environments. In Proceedings of the 2014 DGON Inertial Sensors and Systems (ISS), Karlsruhe, Germany, 16–17 September 2014; pp. 1–15. [Google Scholar] [CrossRef]
22. Hirose, M.; Xiao, Y.; Zuo, Z.; Kamat, V.R.; Zekkos, D.; Lynch, J. Implementation of UAV localization methods for a mobile post-earthquake monitoring system. In Proceedings of the 2015 IEEE Workshop on Environmental, Energy, and Structural Monitoring Systems (EESMS), Trento, Italy, 9–10 July 2015; pp. 66–71. [Google Scholar] [CrossRef]
  23. Wang, Y.-T.; Peng, C.-C.; Ravankar, A.A.; Ravankar, A. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm. Sensors 2018, 18, 1294. [Google Scholar] [CrossRef] [PubMed]
  24. Wang, F.; Zhao, Z. A survey of iterative closest point algorithm. In Proceedings of the Chinese Automation Congress (CAC), Jinan, China, 20–22 October 2017; pp. 4395–4399. [Google Scholar] [CrossRef]
  25. Rusinkiewicz, S.; Levoy, M. Efficient variants of the ICP algorithm. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, Quebec City, QC, Canada, 28 May–1 June 2001; pp. 145–152. [Google Scholar] [CrossRef]
  26. Censi, A. An ICP variant using a point-to-line metric. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; pp. 19–25. [Google Scholar] [CrossRef]
  27. Zahran, S.; Moussa, A.; El-Sheimy, N.; Sesay, A.B. Hybrid Machine Learning VDM for UAVs in GNSS-denied Environment. Nav. J. Inst. Nav. 2018, 65, 477–492. [Google Scholar] [CrossRef]
  28. Koifman, M.; Bar-Itzhack, I.Y. Inertial navigation system aided by aircraft dynamics. IEEE Trans. Control Syst. Technol. 1999, 7, 487–493. [Google Scholar] [CrossRef]
  29. Bryson, M.; Sukkarieh, S. Vehicle model aided inertial navigation for a UAV using low-cost sensors. In Proceedings of the Australasian Conference on Robotics and Automation, Canberra, Australia, 6–8 December 2004; ISBN 0-9587583-6-0. [Google Scholar]
  30. Vissière, D.; Bristeau, P.J.; Martin, A.P.; Petit, N. Experimental autonomous flight of a small-scaled helicopter using accurate dynamics model and low-cost sensors. IFAC Proc. Vol. 2008, 41, 14642–14650. [Google Scholar] [CrossRef]
  31. Khaghani, M.; Skaloud, J. Autonomous Vehicle Dynamic Model-Based Navigation for Small UAVs. Nav. J. Inst. Nav. 2016, 63, 345–358. [Google Scholar] [CrossRef]
  32. Philipp, F.; Lorenz, G.; Gert, F.T.; Florian, H. Unified Model Technique for Inertial Navigation Aided by Vehicle Dynamics Model. Nav. J. Inst. Nav. 2013, 60, 179–193. [Google Scholar] [CrossRef]
  33. Barton, J.D. Fundamentals of Small Unmanned Aircraft Flight. Johns Hopkins APL Tech. Dig. 2012, 31, 132–149. [Google Scholar]
  34. Cledat, E.; Cucci, D.A. Mapping GNSS Restricted Environments with a Drone Tandem and Indirect Position Control. In The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Proceedings of the International Conference on Unmanned Aerial Vehicles in Geomatics, Bonn, Germany, 4–7 September 2017; ISPRS: Hannover, Germany, 2017. [Google Scholar]
  35. Paul, S.; Chang, J. A New Approach to Detect Mover Position in Linear Motors Using Magnetic Sensors. Sensors 2015, 15, 26694–26708. [Google Scholar] [CrossRef] [PubMed] [Green Version]
36. Jezný, J.; Čurilla, M. Position Measurement with Hall Effect Sensors. Am. J. Mech. Eng. 2013, 1, 231–235. [Google Scholar]
  37. Ferrazzin, D.; di Domizio, G.; Salsedo, F.; Avizzano, C.A.; Tecchia, F.; Bergamasco, M. Hall effect sensor-based linear transducer. In Proceedings of the 8th IEEE International Workshop on Robot and Human Interaction. RO-MAN ’99 (Cat. No.99TH8483), Pisa, Italy, 27–29 September 1999; pp. 219–224. [Google Scholar] [CrossRef]
  38. Bienczyk, K. Angle measurement using a miniature hall effect position sensor. In Proceedings of the 2009 2nd International Students Conference on Electrodynamic and Mechatronics, Silesia, Poland, 19–21 May 2009; pp. 21–22. [Google Scholar] [CrossRef]
  39. Mecholic. Applications, Advantages and Limitations of Pitot Tubes. Available online: http://www.mecholic.com/2017/05/applications-advantages-limitations-pitot-tube.html (accessed on 8 November 2017).
Figure 1. Air-Odo Computer-Aided Design (CAD) model.
Figure 2. Air-Odo theory of operation, showing the effect of wind on the resisting plate.
Figure 3. Air-Odo theoretical relationship between angle and velocity.
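The theoretical angle–velocity curve in Figure 3 follows from a torque balance on the hinged resisting plate: at equilibrium, the aerodynamic drag torque equals the gravitational restoring torque. A minimal sketch of one plausible, simplified form of this relation (the plate mass `m`, area `area`, and drag coefficient `cd` below are illustrative assumptions, not the paper's values):

```python
import math

RHO = 1.225  # air density at sea level, kg/m^3


def deflection_angle(v, m=0.005, area=0.002, cd=1.2, g=9.81):
    """Equilibrium deflection angle (rad) of a hinged plate in airflow.

    Simplified torque balance about the hinge (uniform plate; the
    plate length cancels, drag taken at full frontal area):
        tan(theta) = F_drag / (m * g),  F_drag = 0.5 * rho * v^2 * cd * area
    """
    return math.atan(0.5 * RHO * v**2 * cd * area / (m * g))


def velocity_from_angle(theta, m=0.005, area=0.002, cd=1.2, g=9.81):
    """Invert the relation: airspeed (m/s) from a measured Hall-sensor angle."""
    return math.sqrt(2.0 * m * g * math.tan(theta) / (RHO * cd * area))
```

Inverting the same balance gives the velocity lookup used when the Hall effect sensor reports a deflection angle; in practice the paper calibrates this curve against wind-tunnel data (Figure 9) rather than relying on the idealized model alone.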
Figure 4. The quadcopter typically employs one of two configurations: plus or X.
Figure 5. Proposed approach workflow and Air-Odo/vehicle dynamic model/INS integration, through an EKF scheme.
Figure 6. Quadcopter vehicle dynamic model heading–RPM relation, with upper and lower heading-constraint thresholds. Within the heading-constraint limits, no heading change occurs.
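The heading constraint of Figure 6 can be sketched as a threshold test on the rotors' differential RPMs: quadcopter yaw torque is driven by the imbalance between the clockwise and counter-clockwise rotor pairs, so while that imbalance stays inside the threshold band, a zero heading-change pseudo-measurement can be fed to the EKF. A minimal sketch under an assumed motor ordering and an assumed threshold (both hypothetical, not the paper's calibration):

```python
def heading_is_constant(rpm, threshold=150.0):
    """Zero-yaw-rate test from the four rotor RPMs of a quadcopter.

    rpm: (m1, m2, m3, m4), with m1/m3 spinning CW and m2/m4 CCW
    (assumed ordering). Yaw torque grows with the imbalance between
    the two pairs; if the differential RPM stays inside the threshold
    band, the vehicle is assumed to hold its heading and a zero
    heading-change update can be applied in the EKF.
    """
    m1, m2, m3, m4 = rpm
    differential = (m1 + m3) - (m2 + m4)  # CW pair minus CCW pair
    return abs(differential) < threshold
```

The threshold corresponds to the upper/lower constraint lines drawn in Figure 6; widening it trades more frequent heading updates against the risk of constraining a genuine yaw maneuver.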
Figure 7. Air-Odo System attached to the top of the quadcopter.
Figure 8. Indoor wind-tunnel experiment showing the relationship between Air-Odo angles and wind velocity.
Figure 9. Experimental relationship between wind velocities and Air-Odo angles, compared with the theoretical relation.
Figure 10. The trajectory of the quadcopter for the experimental flight.
Figure 11. INS standalone navigational solution, showing the large drift exhibited by the INS when no aiding sensor bounds the accumulation of errors.
Figure 12. Air-Odo velocity estimate (after compensating for existing wind) compared against reference velocity.
Figure 13. Air-Odo/INS integration performance during 65 s of complete GNSS signal outage.
Figure 14. Air-Odo/vehicle dynamic model/INS integration performance during 65 s of complete GNSS signal outage; Air-Odo provides the velocity update and the VDM the heading constraint.
Figure 15. Air-Odo/vehicle dynamic model/INS integration performance during 125 s of complete GNSS signal outage.
Figure 16. Effect of wind gust on Air-Odo/vehicle dynamic model/INS integration and navigational performance.
Figure 17. Air-Odo/vehicle dynamic model/INS integration performance during 185 s of complete GNSS signal outage.
Table 1. Root mean square error (RMSE) and maximum errors for the Air-Odo/INS integration performance during 65 s of Global Navigation Satellite Systems (GNSS) outage.

        RMSE     Max Error
North   1.48 m   5 m
East    5.49 m   15 m
Table 2. RMSE and maximum errors for the Air-Odo/VDM/INS integration performance during 65 s of GNSS outage.

        RMSE     Max Error
North   1.35 m   4.36 m
East    1.41 m   3.55 m
Table 3. RMSE and maximum errors for the Air-Odo/VDM/INS integration performance during 125 s of GNSS outage.

        RMSE      Max Error
North   2.50 m    7.70 m
East    12.40 m   24.04 m
Table 4. RMSE and maximum errors for the Air-Odo/VDM/INS integration performance during 185 s of GNSS outage.

        RMSE      Max Error
North   3.64 m    8.78 m
East    18.97 m   36.29 m
Table 5. RMSE and maximum errors for the Air-Odo/VDM/INS integration performance compared to the INS (dead-reckoning) solution during different GNSS outage periods.

                INS (60 s Outage)   Proposed (65 s Outage)   Proposed (125 s Outage)   Proposed (185 s Outage)
RMSE North      415 m               1.35 m                   2.50 m                    3.64 m
Max Error N     1200 m              4.36 m                   7.70 m                    8.78 m
RMSE East       450 m               1.41 m                   12.40 m                   18.97 m
Max Error E     1350 m              3.55 m                   24.04 m                   36.29 m
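The RMSE and maximum-error figures reported in Tables 1–5 follow the standard definitions over the per-epoch North and East position errors accumulated during each outage. A minimal sketch of how such entries are computed (the series here are illustrative, not flight data):

```python
import math


def error_stats(estimated, reference):
    """RMSE and max absolute error between two equal-length position series (m)."""
    errors = [e - r for e, r in zip(estimated, reference)]
    rmse = math.sqrt(sum(err**2 for err in errors) / len(errors))
    max_err = max(abs(err) for err in errors)
    return rmse, max_err
```

Applying this separately to the North and East components of the integrated solution against the reference trajectory during each simulated outage window yields the per-axis entries of the tables above.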

Share and Cite

Zahran, S.; Moussa, A.; El-Sheimy, N. Enhanced Drone Navigation in GNSS Denied Environment Using VDM and Hall Effect Sensor. ISPRS Int. J. Geo-Inf. 2019, 8, 169. https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi8040169