Article

Perception in the Dark; Development of a ToF Visual Inertial Odometry System

Department of Mechanical Engineering and Interdisciplinary Division of Aeronautical and Aviation Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong
*
Author to whom correspondence should be addressed.
Submission received: 11 January 2020 / Revised: 21 February 2020 / Accepted: 24 February 2020 / Published: 26 February 2020
(This article belongs to the Collection Positioning and Navigation)

Abstract

Visual inertial odometry (VIO) is the front-end of visual simultaneous localization and mapping (vSLAM) methods and has been actively studied in recent years. In this context, a time-of-flight (ToF) camera, with its high accuracy of depth measurement and strong resilience to ambient light of variable intensity, draws our interest. Thus, in this paper, we present a real-time visual inertial system based on a low-cost ToF camera. The iterative closest point (ICP) methodology is adopted, incorporating salient point-selection criteria and a robustness-weighting function. In addition, an error-state Kalman filter is used and fused with inertial measurement unit (IMU) data. To test its capability, the ToF–VIO system is mounted on an unmanned aerial vehicle (UAV) platform and operated in a variable light environment. The estimated flight trajectory is compared with the ground truth data captured by a motion capture system. Real flight experiments are also conducted in a dark indoor environment, demonstrating good agreement with estimated performance. The current system is thus shown to be accurate and efficient for use in UAV applications in dark and Global Navigation Satellite System (GNSS)-denied environments.

1. Introduction

This study is motivated by the need to design and build an efficient and low-cost robot-mounted localization system for indoor industrial facility inspection, tunnel inspection, and search and rescue missions inside earthquake- or explosion-damaged buildings under poor lighting and GNSS-denied conditions. The system must be able to track the movement of the inspection vehicle in real time without the assistance of GNSS, GPS or other localization infrastructure. A vision-based state estimator is particularly suitable for such a scenario.
Consequently, vision-based state estimation has become one of the most active research areas in the past few years, and there have been many notable VO, VIO or vSLAM works, such as parallel tracking and mapping (PTAM) [1], SVO+MSF (semidirect visual odometry + multiple sensor fusion) [2,3,4], MSCKF (multi-state constraint Kalman filter) [5], OKVIS (open keyframe-based visual–inertial SLAM) [6], VINS (visual–inertial navigation system) [7,8], VI-ORB-SLAM (visual–inertial oriented FAST and rotated BRIEF-SLAM) [9], VI-DSO (visual–inertial direct sparse odometry) [10] and KinectFusion [11]. These works can be categorized by their pose estimation methods and the ways they fuse IMU data (Table 1). Currently, approaches to depth-information-based pose estimation generally fall into one of two categories: the optimization method and the iterative closest point (ICP) method.
The optimization method models the estimation task as a minimization problem [15,16]. Generally, two kinds of optimization methods are widely used: the image-based optimization method and the direct optimization method. In the traditional image-based optimization method, the objective function is modeled by the re-projection error of the features obtained through feature matching, while in the direct optimization method, with the photometric camera model, the objective function is chosen as the intensity residual between frames. A comprehensive comparison of these two optimization methods can be found in Delmerico and Scaramuzza [17]. The objective functions are solved by a gradient descent optimizer. As the image transformation in the traditional image-based method is represented by the Jacobian matrix in the objective function, which is a first-order linearization of the real model, this method performs poorly in scenarios involving significant rotation, due to the high non-linearity implicit in the real model.
Apart from the traditional image-based optimization method, the ICP method is considered a relatively new approach of this kind. The ICP method relies on the classic registration workflow [18], where the ICP algorithm begins with two sequential point clouds and an initial prediction of the transformation. The correspondence between the two images (i.e., the source point cloud and the target point cloud) is found by searching the closest neighborhoods of the source points and estimating the progressive transformation that minimizes the distances between the source points and target points. The iteration process continues until transformation convergence is achieved. The value of the initial guess for the ICP is important, as a poor guess may lead to the iterative solution becoming trapped in a local minimum. Also, ICP requires high computational power, especially when the number of points increases.
Additionally, there are other methods, such as the normal distribution transformation (NDT), in which the transform is calculated based on the probability density function inside a celled map. These methods require a huge amount of depth data and are normally used in Lidar sensor applications. Notable works using the NDT method include 3D-NDT [13,19] and Direct Depth SLAM [14].
Most of the methods described above use passive cameras and were developed for use in different experimental environments in daylight, meaning that camera exposure for good image quality is not a problem. However, many real application scenarios, such as indoor industrial facility inspection and search and rescue missions inside earthquake- or explosion-damaged buildings, are much more challenging than the MH_05_difficult benchmark case in the EuRoC MAV dataset [20], as they take place in poor lighting conditions. In order to achieve perception in such dark or changing ambient light environments, we replace the conventional passive camera with an active ToF camera in this study. The pros and cons of using the conventional passive camera and the active ToF camera in motion estimation are listed in Table 2.
Notably, in a passive camera, the sensor is exposed to the ambient light during the shutter opening period. The depth information can be triangulated through the motion of the camera or by using two cameras with calibrated extrinsic parameters. In contrast, the ToF camera emits modulated light and observes the reflected light. The phase shift between the illuminating light and the reflected light is measured and translated to distance [21]. Since such a ToF camera can measure the distance in “one shot” without any post-processing, it offers good resilience to changes in environmental brightness [22]. The output of the ToF camera provides a depth image and an NIR intensity image simultaneously (Figure 1).
In this paper, we propose an optimized VIO framework using a combination of a pmd Flexx ToF camera and Pixhawk v2 hardware as the IMU sensor. Salient point selection with a statistics-based weighting factor is applied to the basic ICP algorithm, and the estimation results are fused with the IMU data using an error-state Kalman filter. The algorithm can achieve a real-time processing rate for UAV applications without needing a graphics-processing unit. The main contributions of this work are:
  • Improvement of the conventional ICP workflow with salient point selection criteria and a statistics-based robust weighting factor.
  • Development of an error-state Kalman filter based framework to fuse the ToF camera alignment results with the IMU data, which achieves locally accurate localization and high computational efficiency.
  • An evaluation of the proposed system both in a motion capture environment and in real experiments. A ToF–IMU dataset with ground truth is published.
  • Open-source code of the error-state Kalman filter based ToF–VIO for the research community.

2. ToF–VIO Platform

Our sensor platform consists of a pmd technologies Flexx ToF camera and an IMU sensor embedded in a Pixhawk v2 flight controller, as shown in Figure 2. The Flexx ToF camera has a resolution of 224 × 171 pixels and a depth detection range of 4 m, with the frame rate set to 15 Hz. The Pixhawk v2 IMU updates at a rate of 250 Hz. The timestamps of the two sensors are synchronized in software, and the installation geometry of the camera and the IMU is represented by Equation (1).
$$ T_{IC} = \begin{bmatrix} 0 & 0 & 1 & 0.1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{1} $$

3. ICP Alignment

3.1. Visual Odometry by the ToF Camera Module

The ToF camera outputs a depth image p z = ( u , v , z ) and a near infrared (NIR) intensity image p i = ( u , v , i ) simultaneously, where z and i represent the depth and intensity information, respectively, of the pixel ( u , v ) . Because these two images are captured by the same optical sensing module and aligned, they share the same camera parameters ( f x , c x , f y , c y ), where f and c are the focal length and the image center, respectively. For simplicity, we can use p = ( u , v , z , i ) to represent the camera output, which is linked to its corresponding 3D point in the Camera Frame with the intensity, P c = ( X c , Y c , Z c , I ) T , through the projection model of the camera as shown in Equation (2). Accordingly, P c = π ( p ) can be derived explicitly in Equation (3).
$$ \begin{bmatrix} u \\ v \\ 1 \\ z \\ i \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x & 0 & 0 \\ 0 & f_y & c_y & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c / Z_c \\ Y_c / Z_c \\ 1 \\ Z_c \\ I \end{bmatrix} \tag{2} $$
$$ P_c = \pi(p) = \left( \frac{(u - c_x)\,z}{f_x},\; \frac{(v - c_y)\,z}{f_y},\; z,\; i \right)^T \tag{3} $$
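For illustration, the projection relations of Equations (2) and (3) can be written as a short helper, shown below. This is a minimal sketch assuming a standard pinhole model; the intrinsic values are placeholders, not the calibration of the Flexx camera, and the function names are ours.

```python
# Minimal sketch of the camera model in Equations (2) and (3).
# pi(p): pixel + depth + intensity -> 3D point with intensity; pi_inv: the reverse.
import numpy as np

FX, FY, CX, CY = 212.0, 212.0, 112.0, 85.5   # placeholder intrinsics (assumed)

def pi(u, v, z, i):
    """Back-project a pixel (u, v) with depth z and intensity i to (Xc, Yc, Zc, I)."""
    xc = (u - CX) * z / FX
    yc = (v - CY) * z / FY
    return np.array([xc, yc, z, i])

def pi_inv(point_c):
    """Project a 3D point (Xc, Yc, Zc, I) back to the image as (u, v, z, i)."""
    xc, yc, zc, inten = point_c
    u = FX * xc / zc + CX
    v = FY * yc / zc + CY
    return np.array([u, v, zc, inten])
```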
Assuming the world frame is fixed, the rigid motion of an object in front of a still camera can be described by a transformation matrix T = [ R | t ] , where R is the rotational matrix in S O ( 3 ) and t represents the translational vector. Equation (4) describes such a transformation from the world frame to the camera frame, where P w = ( X w , Y w , Z w , I ) T is the 3D point in the world Frame with its intensity.
$$ P_c = \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ I \end{bmatrix} = T \cdot P_w = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ I \end{bmatrix} \tag{4} $$
As shown in Figure 3, a moving camera captures the point clouds $\mathbf{P}_{t1}$ and $\mathbf{P}_{t2}$ at two sequential times $t_1$ and $t_2$, respectively. Notably, a bold $\mathbf{P}$ denotes a point cloud and $P$ a single point. The captured point cloud $\mathbf{P}_{t1}$ can be represented by Equation (5), while $\mathbf{P}_{t2}$ can be regarded as the inter-frame transformation of $\mathbf{P}_{t1}$ due to the motion of the camera in the world frame, as seen in Equation (6). Here, $T_{cam}$ represents the coordinate transformation of the camera in the world frame between $\mathbf{P}_{t1}$ and $\mathbf{P}_{t2}$.
$$ \mathbf{P}_{t1} = T_{t1} \cdot \mathbf{P}_w \tag{5} $$
$$ \mathbf{P}_{t2} = T_{cam}^{-1} \mathbf{P}_{t1} = T_{cam}^{-1} \cdot T_{t1} \cdot \mathbf{P}_w \tag{6} $$
The frame-to-frame alignment can also be expressed through the transformation matrix $T$, which aligns the two point clouds as $\mathbf{P}_{t2} = T \cdot \mathbf{P}_{t1}$. Combining this with Equations (5) and (6), we find that the motion of the camera $T_{cam}$ equals the inverse of the point cloud transformation matrix $T$ (Equation (7)). In other words, the camera motion can be derived from the point cloud alignment.
$$ \mathbf{P}_{t2} = T_{cam}^{-1} \cdot T_{t1} \cdot \mathbf{P}_w = T \cdot T_{t1} \cdot \mathbf{P}_w \;\;\Rightarrow\;\; T_{cam} = T^{-1} \tag{7} $$
In this work, the ICP algorithm is used to derive the transformation $T$. The workflow of the basic ICP algorithm can be summarized as:
  • The point cloud alignment algorithm starts with the source point cloud P s and the target point cloud P t and an initial prediction of the transformation T 0 between these two clouds.
  • For every point in the target cloud ($P_i \in \mathbf{P}_t$), search for the corresponding point $P_i'$ in the transformed source point cloud that has the closest distance:
$$ P_i' = P_j, \quad j = \operatorname*{argmin}_{j} \| P_i - T \cdot P_j \|, \quad P_j \in \mathbf{P}_s \tag{8} $$
  • Estimate the incremental transformation from the point pairs that minimizes the error metric:
$$ \Delta T = \operatorname*{argmin}_{\Delta T} \| P - \Delta T \cdot P' \| \tag{9} $$
  • Apply the incremental transformation:
$$ T_{n+1} \leftarrow \Delta T \cdot T_n \tag{10} $$
  • By applying the above steps iteratively until the transformation of the two point clouds converges (Equation (11), where $T_{TH}$ is the convergence threshold) or a stopping criterion is met (e.g., the maximum number of iterations is reached), the final transformation is obtained. A minimal sketch of this loop is given after the list.
$$ \Delta T < T_{TH} \tag{11} $$
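The following is a minimal point-to-point sketch of the loop above, not the authors' implementation: the closest-point search, the SVD-based closed-form alignment step, and the convergence test are standard choices assumed here, and the function and parameter names are ours.

```python
# Minimal point-to-point ICP sketch following the workflow of Equations (8)-(11).
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, T0=np.eye(4), max_iters=15, tol=1e-6):
    """source, target: (N, 3) and (M, 3) arrays; returns the 4x4 transform T."""
    T = T0.copy()
    tree = cKDTree(target)
    for _ in range(max_iters):
        src_h = np.c_[source, np.ones(len(source))]      # homogeneous coordinates
        src_t = (T @ src_h.T).T[:, :3]                   # transformed source cloud
        _, idx = tree.query(src_t)                       # closest-point pairing, Eq. (8)
        matched = target[idx]
        # closed-form rigid alignment of the pairs (SVD / Umeyama step), Eq. (9)
        mu_s, mu_t = src_t.mean(0), matched.mean(0)
        H = (src_t - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                         # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        dT = np.eye(4); dT[:3, :3] = R; dT[:3, 3] = t
        T = dT @ T                                       # apply the increment, Eq. (10)
        if np.linalg.norm(dT - np.eye(4)) < tol:         # convergence test, Eq. (11)
            break
    return T
```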
Many variants of the basic ICP concept have been introduced to enhance the performance, i.e., to speed up the algorithm, improve robustness or increase accuracy. In the work of Rusinkiewicz and Levoy [23], these variants are classified according to their effect on one of six stages of the algorithm: selection, matching, weighting, rejecting, assigning the error metric and minimizing the error metric. Inspired by the work of Li and Lee [24], we develop salient point selection criteria to accelerate the ICP process. A statistics-based weighting function is applied to improve the robustness of the ICP process.

3.2. Selection of Salient Points

The resolution of the ToF camera in this study is 38 k, i.e., every point cloud image contains about 38,000 points. It is impractical to align such a huge number of points on an embedded computer in real time. Therefore, several criteria to select the salient points from the raw image are applied in this work. As shown in Figure 4, these criteria can be divided into rejection and acceptance groups.

3.2.1. Rejection Rules

Two kinds of rejection rules are applied: background points and motion-induced outliers. A cloud point that satisfies either of these two criteria is rejected.

Background Point

As shown in Figure 5, the background points might be blocked due to the movement of the camera, in contrast to the object points (which are consistent in different frames). As these background points will introduce errors into the point cloud alignment, they must be removed.
The background points are directly identified from the depth image. Thus, if an abrupt decrease of depth is found in the area near a point, this point is inferred to be a background point. These criteria can be expressed by Equation (12), where $z(u,v)$ represents the depth value $z$ of the point $p = (u, v, z, i)^T$ at the $(u,v)$ position and $\pi_{sh,bg}$ is a threshold. Notably, the relevant background points tend to exist near the edges of an object; this means that comparing $z(u,v)$ with the depth of the point four pixels away is the most effective means of background point removal. In addition, $\pi_{sh,bg}$ is related to the detection range of the selected camera (see Table 3 in Section 5).
$$ \begin{aligned} & z(u,v) - z(u+4,v) > \pi_{sh,bg}\, z(u,v) \;\lor\; z(u,v) - z(u-4,v) > \pi_{sh,bg}\, z(u,v) \;\lor \\ & z(u,v) - z(u,v+4) > \pi_{sh,bg}\, z(u,v) \;\lor\; z(u,v) - z(u,v-4) > \pi_{sh,bg}\, z(u,v) \end{aligned} \tag{12} $$
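A possible vectorized form of this rule is sketched below, assuming a dense depth image of shape (H, W) and the threshold value of Table 3; the helper name and padding choice are ours.

```python
# Sketch of the background-point rejection rule of Equation (12).
import numpy as np

def background_mask(z, pi_bg=0.01):
    """True where a pixel is a background point, i.e., its depth exceeds that of a
    neighbour four pixels away (in any of the four directions) by pi_bg * z."""
    pad = np.pad(z, 4, mode='edge')
    center = pad[4:-4, 4:-4]
    neighbours = (pad[8:, 4:-4], pad[:-8, 4:-4], pad[4:-4, 8:], pad[4:-4, :-8])
    mask = np.zeros(z.shape, dtype=bool)
    for nb in neighbours:
        mask |= (center - nb) > pi_bg * center
    return mask
```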

Motion Induced Outlier

The initial coordinate transformation $T_0$ can be predicted by the IMU integration process (which will be introduced in the next section) or by using the transformation of the previous frame. Together with the projection model of the camera, Equations (2)–(4), the predicted positions of the points in the new frame can be calculated from the transformation of the source point cloud $\mathbf{P}_s$ by Equation (13). If a predicted point lies outside the FOV (field of view) of the camera (Equation (14)), it is removed before the ICP alignment. Note that this step is implemented when the point pairs are formed.
$$ p = (u, v, z, i)^T = \pi^{-1}(T_0 \cdot P_s) \tag{13} $$
$$ u_{predict} < 0 \;\lor\; u_{predict} > u_{max}\ (\text{i.e., width}) \;\lor\; v_{predict} < 0 \;\lor\; v_{predict} > v_{max}\ (\text{i.e., height}) \tag{14} $$
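As a small illustration of Equation (14), the check below drops predicted pixels that fall outside the image; the function name is ours and the image size follows the 224 × 171 sensor resolution.

```python
# Sketch of the motion-induced outlier check of Equation (14).
import numpy as np

def in_fov(u_pred, v_pred, width=224, height=171):
    """Boolean mask of predicted pixel coordinates that stay inside the camera FOV."""
    return (u_pred >= 0) & (u_pred < width) & (v_pred >= 0) & (v_pred < height)
```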

3.2.2. Acceptance Rules

Three kinds of acceptance rules are further applied: gradient-based criteria, depth extreme points and Canny features. Any point that satisfies one of these three rules is accepted. The rejection and acceptance rules are combined with a logical "and" (∧), which means that a salient point needs to satisfy at least one acceptance rule and must not be a rejected point.

Gradient-Based Criteria

Positions where the depth or intensity changes dramatically are believed to carry important position information or features. Thus, criteria based on the intensity gradient (Equation (15)) and the depth gradient (Equation (16)) are developed.
$$ i(u+2,v) - i(u-2,v) > \pi_{sh,i} \;\lor\; i(u,v+2) - i(u,v-2) > \pi_{sh,i} \tag{15} $$
$$ z(u+2,v) - z(u-2,v) > \pi_{sh,z}\, z(u,v) \;\lor\; z(u,v+2) - z(u,v-2) > \pi_{sh,z}\, z(u,v) \tag{16} $$
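A possible vectorized form of these two criteria is sketched below, with the thresholds of Table 3; absolute differences are used here as an assumption, since a gradient of either sign marks a feature, and the helper name is ours.

```python
# Sketch of the gradient-based acceptance rules of Equations (15) and (16).
import numpy as np

def gradient_mask(z, intensity, pi_i=100.0, pi_z=0.07):
    """True where the intensity gradient or the relative depth gradient is large."""
    gi_u = np.abs(np.roll(intensity, -2, axis=1) - np.roll(intensity, 2, axis=1))
    gi_v = np.abs(np.roll(intensity, -2, axis=0) - np.roll(intensity, 2, axis=0))
    gz_u = np.abs(np.roll(z, -2, axis=1) - np.roll(z, 2, axis=1))
    gz_v = np.abs(np.roll(z, -2, axis=0) - np.roll(z, 2, axis=0))
    return (gi_u > pi_i) | (gi_v > pi_i) | (gz_u > pi_z * z) | (gz_v > pi_z * z)
```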

Depth Extreme Point

As shown in Figure 5, the depth extreme points are considered to contain 3D features and thus need to be selected as salient points. An extreme point of the depth (a local maximum or minimum of the depth) on a continuous surface can be found where the depth gradient is zero, $g_u(u,v) = \partial z(u,v)/\partial u = 0$ or $g_v(u,v) = \partial z(u,v)/\partial v = 0$. As the cloud points are discrete, the extreme points are detected by checking the monotonicity of the gradient in an interrogation window of 5 × 5 pixels. Thus, the points that satisfy Equations (17) and (18) are considered to be the extreme points in the u direction. A similar procedure is conducted to extract the extreme points in the v direction.
$$ g_u(u,v) = z(u+1,v) - z(u,v) \tag{17} $$
$$ \big( g_u(u-2,v) < 0 \,\land\, g_u(u-1,v) < 0 \,\land\, g_u(u,v) > 0 \,\land\, g_u(u+1,v) > 0 \big) \;\lor\; \big( g_u(u-2,v) > 0 \,\land\, g_u(u-1,v) > 0 \,\land\, g_u(u,v) < 0 \,\land\, g_u(u+1,v) < 0 \big) \tag{18} $$
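The sketch below applies this sign-change test along the u direction on a dense depth image (the v direction is analogous); it is an illustration under our own naming, not the authors' code.

```python
# Sketch of the depth extreme-point test of Equations (17) and (18), u direction.
import numpy as np

def depth_extreme_u(z):
    """Boolean mask of local depth extrema along the u (column) direction."""
    g = np.diff(z, axis=1)                      # g_u(u, v) = z(u+1, v) - z(u, v)
    gm2, gm1, g0, gp1 = g[:, :-3], g[:, 1:-2], g[:, 2:-1], g[:, 3:]
    local_min = (gm2 < 0) & (gm1 < 0) & (g0 > 0) & (gp1 > 0)
    local_max = (gm2 > 0) & (gm1 > 0) & (g0 < 0) & (gp1 < 0)
    mask = np.zeros(z.shape, dtype=bool)
    mask[:, 2:-2] = local_min | local_max       # interior columns only
    return mask
```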

Canny Features

The Canny edge detector has been one of the most popular edge detection methods ever since it was developed [25]. It has been shown that the introduction of the Canny features increases the alignment accuracy. Therefore, this detector is applied to the NIR image to extract the edge points as a supplement to the salient point detection. Once a Canny edge is detected, the corresponding points in the point cloud are accepted. The Canny detector from OpenCV, with the parameters threshold1 = 150, threshold2 = 300 and apertureSize = 3, is used in this work.
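The corresponding OpenCV call is shown below with the parameters quoted above; the input is assumed to be the 8-bit NIR intensity image, and the wrapper name is ours.

```python
# Canny edge extraction on the NIR image, used as an acceptance rule.
import cv2

def canny_mask(nir_u8):
    """Boolean mask of NIR edge pixels that are accepted as salient points."""
    edges = cv2.Canny(nir_u8, threshold1=150, threshold2=300, apertureSize=3)
    return edges > 0
```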

3.3. Weighting of the Point Pairs

In Kerl's work [15], the intensity residuals are found to approximately follow the t-distribution. We measured the position error distribution of the point pairs from two images. The results of three different cases are shown in Figure 6. In the first case, the camera is fixed and the point cloud is captured twice. The second and third cases show the position error distributions of the point clouds captured by a moving camera, with and without running four ICP loops, respectively. As shown in Figure 6, the t-distribution approximates the error distribution better than the normal distribution in every case.
Based on our observation of the distribution, we modify the error metric (Equation (9)) with a robust weighting factor:
$$ \Delta T = \operatorname*{argmin}_{\Delta T} \sum_{i}^{size(\mathbf{P})} w(i)\, \| P_i - \Delta T \cdot P_i' \| \tag{19} $$
$$ w(i) = \frac{\nu + 1}{\nu + \left( \dfrac{\| P_i - T \cdot P_i' \| - u}{\sigma} \right)^2} \tag{20} $$
In Equation (20), $\nu$ is the degree of freedom. As indicated in Figure 6, $\nu$ is close to 2 when a large motion occurs and the alignment of the two frames is moderate, while $\nu$ increases when the alignment is improved. In the following experiments, $\nu = 4$ is set for the weight factor calculation. Assuming the point clouds are well aligned, the mean of the error $u$ is set to zero. The deviation can then be calculated recursively by the following equation:
$$ \sigma = \frac{1}{n} \sum_{i=1}^{n} \| P_i - T \cdot P_i' \| \, \frac{\nu + 1}{\nu + \left( \dfrac{\| P_i - T \cdot P_i' \|}{\sigma} \right)^2} \tag{21} $$
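A small sketch of this weighting scheme, following the reconstructed Equations (20) and (21) with u = 0 and ν = 4, is given below; the fixed-point iteration count and the function name are our own choices.

```python
# Sketch of the t-distribution robust weighting of Equations (19)-(21).
import numpy as np

def t_weights(residuals, nu=4.0, iters=5):
    """residuals: nonnegative pair distances ||P_i - T P_i'||; returns weights w(i)."""
    sigma = np.std(residuals) + 1e-9                 # initial scale guess
    for _ in range(iters):                           # recursive refinement, Eq. (21)
        sigma = np.mean(residuals * (nu + 1.0) / (nu + (residuals / sigma) ** 2))
    return (nu + 1.0) / (nu + (residuals / sigma) ** 2)   # Eq. (20) with u = 0
```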

4. Data Fusion

4.1. Modelling Equations of IMU Sensors

The MEMS IMU sensor consists of a 3-axis gyroscope and a 3-axis accelerometer. The sensor measurements of the angular velocities by the gyroscope ω m = ( ω x , ω y , ω z ) T and the accelerations by the accelerometer a m = ( a x , a y , a z ) T can be described by the following model:
$$ \begin{aligned} \omega_m &= \omega_{real} + \omega_b + \omega_n, & a_m &= R(q)(a_{real} - g) + a_b + a_n \\ \omega_n &\sim N(0, \sigma_\omega^2), & \dot{\omega}_b &= \omega_{bn} \sim N(0, \sigma_{\omega b}^2) \\ a_n &\sim N(0, \sigma_a^2), & \dot{a}_b &= a_{bn} \sim N(0, \sigma_{ab}^2) \end{aligned} \tag{22} $$
where $\omega_{real}$ and $a_{real}$ represent the true angular velocity and acceleration, respectively; $\omega_n$ and $a_n$ refer to the additive noises of the sensor, which follow Gaussian distributions; and the bias parts $\omega_b$ and $a_b$ can be described as random walk processes. The derivatives of the gyroscope bias, $\omega_{bn} = \dot{\omega}_b$, and the accelerometer bias, $a_{bn} = \dot{a}_b$, also follow Gaussian distributions. The error variances $\sigma_\omega^2$ and $\sigma_a^2$ can be found in the datasheet of the IMU. Some high-precision IMUs also provide $\sigma_{\omega b}^2$ and $\sigma_{ab}^2$. If not, these two values can be derived from the IMU calibration process, or one can set them as the squares of the additive noises. $q = (q_w, q_x, q_y, q_z)^T$ is the quaternion representation of the rotation from the inertial frame to the IMU frame, and $R(q)$ is the rotation matrix corresponding to the quaternion vector.

4.2. Error State Kalman Filter

The error-state Kalman filter (ESKF) is adopted to estimate the errors in every state by using the differences between the IMU data and the frame-to-frame alignment results. Figure 7 shows the workflow of the proposed estimator. When IMU data are fed in, the integration process integrates the nominal state to provide a predicted pose for the frame-to-frame alignment module. At the same time, the error state is also updated according to the ESKF processing model. After the alignment process finishes, the output of the alignment module serves as the measurement to correct the error state. Finally, the new state is calculated by the composition of the error state and the nominal state.
In the current work, the quaternion (rotation) $q$, position $p$, velocity $v$, together with the biases of the gyroscope $\omega_b$ and the accelerometer $a_b$, are used to describe the system. Following the notation style of Santamaria-Navarro et al. [26], the total system state is described by the true state $x_t = (q_t, p_t, v_t, \omega_{bt}, a_{bt})$, the nominal state $x = (q, p, v, \omega_b, a_b)$ and the error state $\delta x = (\delta\theta, \delta p, \delta v, \delta\omega_b, \delta a_b)$. In the error state, Euler angles are selected to represent the rotation, so the dimension of the rotation component is 3. The composition of the state follows:
$$ \begin{aligned} q_t &= \delta q \otimes q, & \delta q &= q(\delta\theta) = e^{\delta\theta/2} \\ p_t &= p + \delta p, & v_t &= v + \delta v \\ \omega_{bt} &= \omega_b + \delta\omega_b, & a_{bt} &= a_b + \delta a_b \end{aligned} \tag{23} $$
where ⊗ indicates quaternion multiplication. In the ESKF fusion, the inputs are divided into three parts: $u$, $n$ and $z$. The measurement input $u$ contains the readouts of the gyroscope and the accelerometer, $u = (\omega_m, a_m)$. The noise impulse input $n$ is the assumed IMU noise, $n = (\omega_n, a_n, \omega_{bn}, a_{bn})$, and the observation $z$ includes the rotation and translation from the ICP alignment process, $z = (q, t)$. According to the IMU measurement model, every component in the noise impulse input follows a Gaussian distribution with zero mean. This arrangement is intended to separate the error from the other parts and keep it in the error state, while the nominal state always stores the best fusion result. The ESKF workflow can be divided into two steps: (1) the ESKF updating process driven by the IMU and (2) the ESKF innovation step driven by the ICP alignment (see Figure 7), detailed as follows:
(1)
Update the nominal state through the nominal-state processing model $x_n = f_{ns}(x_{t-1}, u_t)$, and update the error state and the covariance matrix through the error-state processing model $\delta x_n = f_{es}(x_{t-1}, u_t, n_t)$
(2)
Compute the Kalman gain K, innovate the error state and the covariance matrix, and then inject the error state into the nominal state

4.3. ESKF Updating Process

4.3.1. Kinematics of True and Nominal State

The system kinematics of the IMU can be described by the equation:
$$ \begin{aligned} \dot{q}_t &= \tfrac{1}{2}\, q_t \otimes \omega_t = \tfrac{1}{2}\, q_t \otimes (\omega_m - \omega_{bt} - \omega_n) \\ \dot{p}_t &= v_t, \qquad \dot{v}_t = R_t^{-1}(a_m - a_{bt} - a_n) + g \\ \dot{\omega}_{bt} &= \omega_{bn}, \qquad \dot{a}_{bt} = a_{bn} \end{aligned} \tag{24} $$
where $q \otimes \omega$ represents $q \otimes [0, \omega]^T$, and the multiplication of the quaternion and the angular velocity can be calculated by $q_t \otimes \omega_t = \Omega(\omega_t)\, q_t$, where $\Omega(\omega)$ is the quaternion integration matrix:
$$ \Omega(\omega) = \begin{bmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{bmatrix} \tag{25} $$
The nominal state is updated following the system model. The noise of the IMU is not considered in the nominal state model, i.e., the nominal state is only updated by the measurement input $u_t = (\omega_m, a_m)$. The kinematics in continuous time are as follows:
$$ \begin{aligned} \dot{q} &= \tfrac{1}{2}\, q \otimes (\omega_m - \omega_b) = \tfrac{1}{2}\, \Omega(\omega_m - \omega_b)\, q \\ \dot{p} &= v, \qquad \dot{v} = R(q)^{-1}(a_m - a_b) + g \\ \dot{\omega}_b &= 0, \qquad \dot{a}_b = 0 \end{aligned} \tag{26} $$
In discrete time, the nominal state can be calculated by:
$$ \begin{aligned} q &\leftarrow q \otimes q\{(\omega_m - \omega_b)\Delta t\} \\ p &\leftarrow p + v\,\Delta t + \tfrac{1}{2}\big(R(a_m - a_b) + g\big)\Delta t^2 \\ v &\leftarrow v + \big(R(a_m - a_b) + g\big)\Delta t \\ \omega_b &\leftarrow \omega_b, \qquad a_b \leftarrow a_b \end{aligned} \tag{27} $$
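A minimal sketch of this discrete propagation step is shown below. It assumes R is the body-to-world rotation of the nominal quaternion and uses SciPy's Rotation class (with its (x, y, z, w) quaternion ordering) purely for convenience; the gravity constant and function name are ours.

```python
# Sketch of the nominal-state propagation of Equation (27), driven by one IMU sample.
import numpy as np
from scipy.spatial.transform import Rotation

GRAVITY = np.array([0.0, 0.0, -9.81])   # assumed world-frame gravity vector

def propagate_nominal(q_xyzw, p, v, w_b, a_b, w_m, a_m, dt):
    """One update of the nominal state (q, p, v, w_b, a_b); biases stay constant."""
    rot = Rotation.from_quat(q_xyzw)                 # nominal orientation
    acc_w = rot.apply(a_m - a_b) + GRAVITY           # acceleration in the world frame
    p_new = p + v * dt + 0.5 * acc_w * dt ** 2
    v_new = v + acc_w * dt
    dq = Rotation.from_rotvec((w_m - w_b) * dt)      # q{(w_m - w_b) dt}
    q_new = (rot * dq).as_quat()
    return q_new, p_new, v_new, w_b, a_b
```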

4.3.2. Updating Process of Error State and Covariance Matrix

The error state kinematics can be described by the following equations.
$$ \begin{aligned} \dot{\delta\theta} &= -R\,\delta\omega_b - R\,\omega_n \\ \dot{\delta p} &= \delta v \\ \dot{\delta v} &= -[R(a_m - a_b)]_\times\, \delta\theta - R\,\delta a_b - R\,a_n \\ \dot{\delta\omega}_b &= \omega_{bn}, \qquad \dot{\delta a}_b = a_{bn} \end{aligned} \tag{28} $$
The equations for $\dot{\delta p}$, $\dot{\delta\omega}_b$ and $\dot{\delta a}_b$ can be derived directly from the definition of the error state. The derivations of the error velocity $\dot{\delta v}$ and the error orientation $\dot{\delta\theta}$ can be found in Appendix A. The error state in the discrete time domain follows:
$$ \begin{aligned} \delta\theta &\leftarrow \delta\theta - R\,\delta\omega_b\,\Delta t - R\,\omega_n \\ \delta p &\leftarrow \delta p + \delta v\,\Delta t \\ \delta v &\leftarrow \delta v + \big({-[R(a_m - a_b)]_\times\,\delta\theta} - R\,\delta a_b\big)\Delta t - R\,a_n \\ \delta\omega_b &\leftarrow \delta\omega_b + \omega_{bn}, \qquad \delta a_b \leftarrow \delta a_b + a_{bn} \end{aligned} \tag{29} $$
Taking $n = [\omega_n, a_n, \omega_{bn}, a_{bn}]^T$ as the input that drives the system forward, and introducing the system transition matrix $F$ and the input matrix $F_i$, the kinematics of the error state and the covariance of the error state can be represented by:
$$ \delta x_t = F\,\delta x_{t-1}, \qquad \Sigma = F\,\Sigma\,F^T + F_i\,Q_{imu}\,F_i^T \tag{30} $$
$$ F = \begin{bmatrix} I & 0 & 0 & F_1 & 0 \\ 0 & I & F_2 & 0 & 0 \\ F_3 & 0 & I & 0 & F_1 \\ 0 & 0 & 0 & I & 0 \\ 0 & 0 & 0 & 0 & I \end{bmatrix}, \qquad F_i = \begin{bmatrix} I & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & I & 0 \\ 0 & 0 & 0 & I \end{bmatrix} $$
$$ F_1 = -R\,\Delta t, \qquad F_2 = I\,\Delta t, \qquad F_3 = -[R(a_m - a_b)]_\times\,\Delta t $$
Here $[\,\cdot\,]_\times$ is the skew operator, which produces the skew-symmetric matrix. According to the IMU measurement model, the covariance matrix $Q_{imu}$ of the perturbation input $n = [\omega_n, a_n, \omega_{bn}, a_{bn}]^T$ is:
$$ Q_{imu} = \mathrm{diag}\big(\sigma_\omega^2\,\Delta t^2,\; \sigma_a^2\,\Delta t^2,\; \sigma_{\omega b}^2\,\Delta t,\; \sigma_{ab}^2\,\Delta t\big) \tag{31} $$
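A compact sketch of this covariance propagation is given below, with the error-state blocks ordered as (δθ, δp, δv, δω_b, δa_b), each of dimension 3; the minus signs follow the reconstructed Equation (29), and the function name is ours.

```python
# Sketch of the error-state covariance propagation of Equations (30) and (31).
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def propagate_covariance(Sigma, R, a_m, a_b, dt, sig_w, sig_a, sig_wb, sig_ab):
    """Sigma <- F Sigma F^T + Fi Q_imu Fi^T for one IMU step (15x15 covariance)."""
    I3 = np.eye(3)
    F = np.eye(15)
    F[0:3, 9:12] = -R * dt                          # F1: gyro bias error -> d_theta
    F[3:6, 6:9] = I3 * dt                           # F2: velocity error -> d_p
    F[6:9, 0:3] = -skew(R @ (a_m - a_b)) * dt       # F3: orientation error -> d_v
    F[6:9, 12:15] = -R * dt                         # F1: accel bias error -> d_v
    Fi = np.zeros((15, 12))
    Fi[0:3, 0:3] = I3                               # gyro noise -> d_theta
    Fi[6:9, 3:6] = I3                               # accel noise -> d_v
    Fi[9:12, 6:9] = I3                              # gyro bias noise -> d_w_b
    Fi[12:15, 9:12] = I3                            # accel bias noise -> d_a_b
    Q = np.diag(np.concatenate([np.full(3, sig_w**2 * dt**2),
                                np.full(3, sig_a**2 * dt**2),
                                np.full(3, sig_wb**2 * dt),
                                np.full(3, sig_ab**2 * dt)]))
    return F @ Sigma @ F.T + Fi @ Q @ Fi.T
```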

4.4. ESKF Innovation Process

As shown in Figure 7, the output of the ICP alignment is the orientation $q$ and the position $p$ of the camera in the world frame. It is necessary to transfer the pose to the IMU link through Equation (32):
$$ T_{WI} = T_{WC} \cdot T_{IC}^{-1} \tag{32} $$
where $T_{WC}$ is the ICP result representing the transformation of the camera in the world frame, $T_{IC} = [R_{IC} | t_{IC}]$ is the installation geometry of the camera in the IMU link (see Figure 2), and $T_{WI} = [R_{WI} | t_{WI}]$ is the IMU pose in the world frame. $T_{IC}$ in the current setup is given by Equation (1). $T_{WI}$ is then used as the input of the filter, $y = [\theta, p]^T$. Since the estimate of the error state is zero ($\bar{\delta x} = 0$), the innovation $z$ and the innovation covariance $Z$ are:
$$ z = y - h(x) \tag{33} $$
$$ Z = H\,\Sigma\,H^T + Q_{icp} \tag{34} $$
where $h(x)$ is the measurement model of the system state, $H$ is the Jacobian matrix of the measurement model and $Q_{icp}$ is the covariance of the ICP. As the orientation and the position are directly measured by the ICP, $H$ is the corresponding identity matrix. Therefore, the Kalman gain, the estimate of the error state and the covariance matrix can be innovated as:
$$ K \leftarrow \Sigma\,H^T Z^{-1} \tag{35} $$
$$ \bar{\delta x} \leftarrow K z \tag{36} $$
$$ \Sigma \leftarrow \Sigma - K\,Z\,K^T \tag{37} $$
After the innovation process is done, the error state is injected into the nominal state following the general composition rules defined in Equation (23). The injection process makes sure that the nominal state is always up to date. Afterwards, the error state is reset to zero ($\delta x = 0$) and the orientation part of the covariance matrix needs to be updated according to the newest nominal state, as follows:
$$ \Sigma_{\delta\theta} \leftarrow G\,\Sigma_{\delta\theta}\,G^T \tag{38} $$
where $G$ is the Jacobian matrix of the nominal state with respect to the error state and
$$ G = I + \left[\tfrac{1}{2}\,\delta\theta\right]_\times \tag{39} $$
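For illustration, one possible form of this innovation-and-reset step is sketched below, with H acting as identity over the (δθ, δp) block as stated above; after it returns, the error estimate would be injected into the nominal state following Equation (23). The state layout and names are our own assumptions.

```python
# Sketch of the ESKF innovation and covariance reset, Equations (33)-(39).
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def eskf_innovate(Sigma, y, h_x, Q_icp):
    """y, h_x: stacked (theta, p) 6-vectors; Sigma: 15x15 error covariance."""
    H = np.zeros((6, 15))
    H[0:3, 0:3] = np.eye(3)              # orientation error observed directly
    H[3:6, 3:6] = np.eye(3)              # position error observed directly
    z = y - h_x                          # innovation, Eq. (33)
    Z = H @ Sigma @ H.T + Q_icp          # innovation covariance, Eq. (34)
    K = Sigma @ H.T @ np.linalg.inv(Z)   # Kalman gain, Eq. (35)
    dx = K @ z                           # error-state estimate, Eq. (36)
    Sigma_new = Sigma - K @ Z @ K.T      # covariance update, Eq. (37)
    G = np.eye(3) + skew(0.5 * dx[0:3])  # reset Jacobian, Eq. (39)
    Sigma_new[0:3, 0:3] = G @ Sigma_new[0:3, 0:3] @ G.T   # Eq. (38)
    return dx, Sigma_new
```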

4.5. Integration and Re-Integration

In the VIO system, the IMU data are updated at a high rate (i.e., >200 Hz for a typical MEMS IMU) while the camera capture rate is relatively slow (i.e., <30 Hz). In addition, the ICP processing time is dependent on the numerical convergence rate and is not constant. Therefore, we need to maintain the state queue and update it properly using the integration/re-integration method.
As shown in Figure 8, the above procedure results in a fixed-length state queue being maintained. The integration of the state is driven by the IMU measurement, i.e., whenever an IMU measurement is inputted, the state will be updated and stored in the queue. When the ICP and filter innovation processes are concluded, the state will be updated according to the capture timestamp. Then, the re-integration process integrates the state from the innovation state to the most recent state. As a result, the system state is always updated and the VIO output rate can keep pace with the IMU update rate.
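A simplified sketch of such a state queue is shown below: each IMU sample extends the queue, and when a delayed ICP/ESKF correction arrives, the state at the capture timestamp is replaced and the newer IMU samples are re-integrated on top of it. The class layout and the integrate() callback (e.g., the propagation of Equation (27)) are our own illustration.

```python
# Sketch of the fixed-length state queue with re-integration (Figure 8).
from collections import deque

class StateQueue:
    """Keeps (timestamp, imu_sample, state) tuples for re-integration."""
    def __init__(self, init_state, maxlen=300):
        self.buf = deque(maxlen=maxlen)
        self.latest = init_state

    def push_imu(self, t, imu, integrate):
        # integrate(state, imu) performs one nominal-state propagation step
        self.latest = integrate(self.latest, imu)
        self.buf.append((t, imu, self.latest))
        return self.latest

    def correct(self, t_capture, corrected_state, integrate):
        """Apply a delayed correction at t_capture, then re-integrate newer samples."""
        newer = [(t, imu) for (t, imu, _) in self.buf if t > t_capture]
        state = corrected_state
        for _, imu in newer:
            state = integrate(state, imu)
        self.latest = state
        return state
```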

5. Results and Discussion

Some threshold parameters are relevant to the camera resolution and its detection range (Table 3). Also, the sampling rates of different sensors and camera calibration parameters are listed in Table 4.
Because there is no existing public ToF–IMU dataset, we first conducted a handheld test and a UAV test and compared the results with those of a motion capture system, which serve as the ground truth benchmarks. The accuracy of the odometry is quantified by the root mean square error (RMSE) of the translational drift. Both the absolute trajectory error (ATE) and the one-step relative pose error (RPE) are considered [27]. The definitions of ATE and RPE are shown below:
$$ E_{ATE,i} = T_{gt,i}^{-1}\, S\, T_{est,i} \tag{40} $$
$$ E_{RPE,i} = \big( T_{gt,i}^{-1}\, T_{gt,i+1} \big)^{-1} \big( T_{est,i}^{-1}\, T_{est,i+1} \big) \tag{41} $$
$$ RMSE(E_{1:n}) := \left( \frac{1}{n} \sum_{i=1}^{n} \| \mathrm{trans}(E_i) \|^2 \right)^{\frac{1}{2}} \tag{42} $$
where $T_{gt,i}$ is the ground truth transformation of frame $i$, $T_{est,i}$ is the estimated transformation of frame $i$, and $S$ is the least-squares solution that maps the estimated trajectory $T_{est,1:n}$ onto the ground truth trajectory $T_{gt,1:n}$. We then carried out a UAV field test and an exploration test on a ground moving platform. The results were compared with those obtained by a RealSense T265 VIO sensor. All data are provided as rosbags and are compatible with the TUM dataset format [28].
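For reference, a small sketch of these error measures for lists of 4 × 4 poses is given below; the alignment S is assumed to be computed beforehand (e.g., with the evo package [27]), and the helper names are ours.

```python
# Sketch of ATE, RPE and the translational RMSE of Equations (40)-(42).
import numpy as np

def ate(T_gt, T_est, S=np.eye(4)):
    """Eq. (40): per-frame absolute trajectory error matrices."""
    return [np.linalg.inv(Tg) @ S @ Te for Tg, Te in zip(T_gt, T_est)]

def rpe(T_gt, T_est):
    """Eq. (41): one-step relative pose error matrices."""
    errors = []
    for i in range(len(T_gt) - 1):
        d_gt = np.linalg.inv(T_gt[i]) @ T_gt[i + 1]
        d_est = np.linalg.inv(T_est[i]) @ T_est[i + 1]
        errors.append(np.linalg.inv(d_gt) @ d_est)
    return errors

def rmse_trans(errors):
    """Eq. (42): RMSE of the translation parts of the error matrices."""
    t = np.array([E[:3, 3] for E in errors])
    return float(np.sqrt(np.mean(np.sum(t**2, axis=1))))
```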

5.1. Handheld Test

In the handheld test, we held the VIO platform and successively moved it in the x, y, and z directions. Good agreement between the estimated trajectory and the ground truth can be observed in Figure 9 and Figure 10. The length of the trajectory is 12.86 m, and the ATE and RPE of the estimated trajectory are 0.047 m and 0.017 m/s, respectively.
Based on this dataset, Figure 11 compares the ground truth trajectory with those obtained using the conventional ICP (icp), the ICP with the salient point selection criteria (icp+s), the ICP with the salient point selection criteria and the robust weighting factor (icp+s+w), the random sub-sampling based ICP (icp+rd) and the ICP with the RANSAC pipeline (icp+r) [29]. The corresponding accuracy and processing time of each algorithm are evaluated and listed in Table 5. It can be seen that the conventional ICP workflow provides accurate pose estimation, but its processing time is unfavorable for real-time applications. Both the random sub-sampling based ICP and the ICP with our salient point selection method dramatically reduce the processing time. Nevertheless, the former induces a large ATE error. The ICP with the RANSAC method achieves the best RPE but has the largest ATE error and the second largest average processing time. Notably, the conventional ICP, icp+s, icp+r, and icp+rd all use equal weights for the points. Together with the robust weighting factor (the weights of the points set according to the t-distribution), the proposed method (icp+s+w) reaches almost the same ATE accuracy as the conventional ICP and a 26% improvement in RPE accuracy. Meanwhile, the processing time of icp+s+w is only 25% of that of the conventional ICP. The superiority of icp+s+w over the other methods is clearly illustrated.
Furthermore, we visualize the salient points in Figure 12a. Our selection algorithm selects the features and the edges of the objects from the raw input point cloud. The number of salient points is consistent (Figure 12b) even if the size of the input cloud varies. Together with the good accuracy of the icp+s and icp+s+w methods, it is believed that the salient point selection criteria are efficient and robust.
Figure 13 demonstrates that the mean error $\|p - p'\|$ in each frame (different lines) decreases as the number of ICP loops increases. In general, three (no motion) to 15 ICP loops were performed, and the motion prediction of the IMU provided the initial guess of the ICP algorithm. After about ten loops, the mean $\|p - p'\|$ almost reaches its minimum value. The refinement process continues until the corresponding transformation change ($\Delta T$) is relatively small or the maximum number of loops (15 in our experiment) is reached.

5.2. UAV Test in LAB Environment

In the UAV test, we mounted the VIO system on our UAV platform and flew the UAV in a circular trajectory (13.019 m) in a laboratory, generating the data in Figure 14 and Figure 15, with an ATE of 0.04 m and an RPE of 0.021 m/s. Note that we turned off the ambient light during the test, which induced the failure of the mono camera; however, the ToF camera based VIO system continued to function in this dark environment (Figure 16). The full video of this test can be found in the Supplementary Materials.

5.3. Field Test in Corridor

We mounted the ToF–VIO sensor and a Realsense T265 sensor on the UAV platform and conducted a field test in a corridor. The UAV took off, flew straight along the corridor, and landed 5 m in front of the takeoff position. The trajectory length, including takeoff and landing, is 8.23 m. Figure 17 presents the comparison between the estimated trajectory generated by the current ToF–VIO sensor and that by the Realsense T265 sensor. In general, the agreement is good, especially in the early period. The ATE and RPE between the ToF–VIO and the T265 are 0.12 m and 0.029 m/s, respectively. As the UAV took off and landed at the same ground level, our ToF–VIO has a better estimation in the z direction.

5.4. Exploration Test Using a Ground Moving Platform

To further demonstrate the performance of our system over a longer range, we mounted the ToF–VIO sensor and a Realsense T265 sensor on a ground moving platform to explore the indoor lab environment. The Realsense T265 sensor was used as the benchmark. The trajectory length of this experiment sequence is 25.675 m (captured by the T265 sensor). Figure 18 shows the reconstructed images of the lab and the corresponding trajectories of the ToF–VIO and the T265. The colors of the points represent the z values (height). As seen, the map shows many detailed features, including a flat ground and straight walls. Figure 19 presents the comparison between the estimated trajectories generated by these two sensors. Good agreement between the two sensors can also be observed. The ATE and RPE between our ToF–VIO and the T265 are 0.78 m and 0.025 m/s, respectively. Compared with the UAV field test in Section 5.3, the ATE over the trajectory length increases from 1.4% to 3.0%. As expected, in an unknown environment without any reference map, the ATE accumulates because the drift is not compensated. However, the RPE of these two tests remains at the same level (<0.03 m/s), even when the range increases. The pose estimations from these two sensors agree with each other. The full video of this test can be found in the Supplementary Materials.

5.5. Analysis

As shown in Figure 20, the number of salient points is only about 5% of the number of points in the original point clouds. Aside from small variations, the number of salient points remains consistent across different frames. We can therefore efficiently obtain a pose estimation by aligning these points.
We tested our algorithm on a TX2-embedded computer and an Intel i5 PC, and the resulting calculation times are listed in Table 6. The results show that our algorithm can estimate the pose at a rate of at least 15 Hz, which means it can be used in realtime applications.

6. Conclusions

In this paper, we have described the development of a ToF camera based VIO system. This system was demonstrably superior to the conventional ICP-based workflow, as the computational time was reduced by the salient point selection criteria and the robustness of the frame-to-frame alignment was ensured by the statistics-based weighting function. The IMU data were loosely coupled in the proposed system with an ESKF to provide the ego-motion pose estimation. We then assembled our experimental platform and conducted field tests. The results showed that our proposed approach achieved similar accuracy to a state-of-the-art VIO system. Experimental data also showed that our system exhibited excellent performance in an environment of varying ambient light intensity and in a totally dark environment. The limitation of this study is the range of the ToF camera. The depth detection range of the current ToF camera is 4 m, which is short and limits the vehicle speed and the operation time of the mission. With the development of ToF sensor technology, this limitation will be relieved, and the current algorithm will still find good applications.

Supplementary Materials

The following are available online. Video: ToF–VIO demonstration, https://www.youtube.com/watch?v=IqfIqArsWXA. Video: ToF–VIO in exploration, https://www.youtube.com/watch?v=lsl-8b6PmMI. Source code and project page: https://github.com/HKPolyU-UAV/ToF--VIO.

Author Contributions

Conceptualization, C.-Y.W. and S.C.; methodology, S.C.; software, S.C. and C.-W.C.; validation, C.-W.C. and S.C.; resources, S.C.; writing–original draft preparation, S.C.; writing–review and editing, C.-Y.W. and S.C.; visualization, C.-W.C.; supervision, C.-Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

This research is supported by EMSD Hong Kong under Grant DTD/M&V/W0084/S0016/0523, Indoor and Outdoor Inspection of E&M Installations using Unmanned Aerial Vehicles.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Derivation of Error State Integration Model

Appendix A.1. Linear Velocity Error

Substituting the error state of the linear velocity into its true state yields
$$ \dot{v}_t = R_t(a_m - a_b - \delta a_b - a_n) + g \tag{A1} $$
As the error state is relatively small, the rotation matrix $R_t$ can be approximated by $R_t = (I + [\delta\theta]_\times)R$. According to the error state definition, $\dot{v}_t = \dot{v} + \dot{\delta v}$; thus we have
$$ \dot{v} + \dot{\delta v} = (I + [\delta\theta]_\times)\,R\,(a_m - a_b - \delta a_b - a_n) + g \tag{A2} $$
Then deducting the nominal state Equation (26) from Equation (A2) results in
$$ \dot{\delta v} = R(-\delta a_b - a_n) + [\delta\theta]_\times R(a_m - a_b) \tag{A3} $$
Assuming the acceleration noise is white and isotropic ($R\,a_n \equiv a_n$), the velocity error can be solved as
$$ \dot{\delta v} = -[R(a_m - a_b)]_\times\,\delta\theta - R\,\delta a_b - R\,a_n \tag{A4} $$

Appendix A.2. Orientation Error

The orientation error kinematics can be obtained from the time derivative of the general composition form (Equation (23)) together with the system kinematics (Equation (24)):
$$ \dot{q}_t = \dot{\delta q} \otimes q + \delta q \otimes \dot{q} = \tfrac{1}{2}\, q_t \otimes (\omega_m - \omega_{bt} - \omega_n) \tag{A5} $$
Matching the nominal state kinematics (Equation (26)), we have:
$$ \dot{\delta q} \otimes q = \tfrac{1}{2}\, q_t \otimes (\omega_m - \omega_{bt} - \omega_n) - \tfrac{1}{2}\, \delta q \otimes q \otimes (\omega_m - \omega_b) \tag{A6} $$
Considering the composition rule of $q_t$ and $\omega_{bt}$ in Equation (23), Equation (A6) simplifies to:
$$ \dot{\delta q} \otimes q = \tfrac{1}{2}\, \delta q \otimes q \otimes (-\delta\omega_b - \omega_n) \tag{A7} $$
By multiplying both sides by the conjugate $q^*$ on the right, we get
$$ \dot{\delta q} = \tfrac{1}{2}\, \delta q \otimes \big( R\,(-\delta\omega_b - \omega_n) \big) \tag{A8} $$
with the vector rotation rule $q \otimes \omega \otimes q^* = R\,\omega$. Further expanding Equation (A8) and considering $\delta q = [1, \tfrac{1}{2}\delta\theta]^T$, it is solved as:
$$ \begin{bmatrix} 0 \\ \dot{\delta\theta} \end{bmatrix} = \begin{bmatrix} 0 & -\big( R(-\delta\omega_b - \omega_n) \big)^T \\ R(-\delta\omega_b - \omega_n) & -\big[ R(-\delta\omega_b - \omega_n) \big]_\times \end{bmatrix} \begin{bmatrix} 1 \\ \tfrac{\delta\theta}{2} \end{bmatrix} \tag{A9} $$
Considering the second row, because the error term $(\delta\omega_b + \omega_n)$ is small, the $\tfrac{1}{2}\,(\delta\omega_b + \omega_n)\,\delta\theta$ term can be neglected. Finally, the orientation error is obtained:
$$ \dot{\delta\theta} = -R(\delta\omega_b + \omega_n) \tag{A10} $$

References

  1. Klein, G.; Murray, D. Parallel Tracking and Mapping for Small AR Workspaces. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 13–16 November 2007; pp. 225–234. [Google Scholar]
  2. Forster, C.; Pizzoli, M.; Scaramuzza, D. SVO: Fast semi-direct monocular visual odometry. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 15–22. [Google Scholar]
  3. Forster, C.; Zhang, Z.; Gassner, M.; Werlberger, M.; Scaramuzza, D. SVO: Semidirect visual odometry for monocular and multicamera systems. IEEE Trans. Robot. 2016, 33, 249–265. [Google Scholar] [CrossRef] [Green Version]
  4. Lynen, S.; Achtelik, M.W.; Weiss, S.; Chli, M.; Siegwart, R. A robust and modular multi-sensor fusion approach applied to MAV navigation. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 3923–3929. [Google Scholar]
  5. Mourikis, A.I.; Roumeliotis, S.I. A multi-state constraint Kalman filter for vision-aided inertial navigation. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 3565–3572. [Google Scholar]
  6. Leutenegger, S.; Furgale, P.; Rabaud, V.; Chli, M.; Konolige, K.; Siegwart, R. Keyframe-based visual–inertial slam using nonlinear optimization. In Proceedings of the Robotis Science and Systems (RSS), Berlin, Germany, 24–28 June 2013. [Google Scholar] [CrossRef]
  7. Qin, T.; Li, P.; Shen, S. Vins-mono: A robust and versatile monocular visual–inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef] [Green Version]
  8. Shan, Z.; Li, R.; Schwertfeger, S. RGBD-Inertial Trajectory Estimation and Mapping for Ground Robots. Sensors 2019, 19, 2251. [Google Scholar] [CrossRef] [Green Version]
  9. Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef] [Green Version]
  10. Von Stumberg, L.; Usenko, V.; Cremers, D. Direct sparse visual–inertial odometry using dynamic marginalization. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 2510–2517. [Google Scholar]
  11. Newcombe, R.A.; Izadi, S.; Hilliges, O.; Molyneaux, D.; Kim, D.; Davison, A.J.; Kohli, P.; Shotton, J.; Hodges, S.; Fitzgibbon, A.W. Kinectfusion: Real-time dense surface mapping and tracking. ISMAR 2011, 11, 127–136. [Google Scholar]
  12. Mur-Artal, R.; Tardós, J.D. Visual–inertial monocular SLAM with map reuse. IEEE Robot. Autom. Lett. 2017, 2, 796–803. [Google Scholar] [CrossRef] [Green Version]
  13. Magnusson, M.; Lilienthal, A.; Duckett, T. Scan registration for autonomous mining vehicles using 3D-NDT. J. Field Robot. 2007, 24, 803–827. [Google Scholar] [CrossRef] [Green Version]
  14. Zhao, S.; Fang, Z. Direct depth SLAM: Sparse geometric feature enhanced direct depth SLAM system for low-texture environments. Sensors 2018, 18, 3339. [Google Scholar] [CrossRef] [Green Version]
  15. Kerl, C.; Sturm, J.; Cremers, D. Robust odometry estimation for RGB-D cameras. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 3748–3754. [Google Scholar]
  16. Meilland, M.; Comport, A.I.; Rives, P. A spherical robot-centered representation for urban navigation. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 5196–5201. [Google Scholar]
  17. Delmerico, J.; Scaramuzza, D. A benchmark comparison of monocular visual–inertial odometry algorithms for flying robots. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 2502–2509. [Google Scholar]
  18. Chen, Y.; Medioni, G. Object modelling by registration of multiple range images. Image Vis. Comput. 1992, 10, 145–155. [Google Scholar] [CrossRef]
  19. Stoyanov, T.; Magnusson, M.; Andreasson, H.; Lilienthal, A.J. Fast and accurate scan registration through minimization of the distance between compact 3D NDT representations. Int. J. Robot. Res. 2012, 31, 1377–1393. [Google Scholar] [CrossRef]
  20. Burri, M.; Nikolic, J.; Gohl, P.; Schneider, T.; Rehder, J.; Omari, S.; Achtelik, M.W.; Siegwart, R. The EuRoC micro aerial vehicle datasets. Int. J. Robot. Res. 2016, 35, 1157–1163. [Google Scholar] [CrossRef]
  21. Li, L. Time-of-Flight Camera–An Introduction; Technical White Paper; Texas Instruments: Dallas, TX, USA, 2014. [Google Scholar]
  22. Sarbolandi, H.; Lefloch, D.; Kolb, A. Kinect range sensing: Structured-light versus Time-of-Flight Kinect. Comput. Vis. Image Underst. 2015, 139, 1–20. [Google Scholar] [CrossRef] [Green Version]
  23. Rusinkiewicz, S.; Levoy, M. Efficient variants of the ICP algorithm. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, Quebec City, QC, Canada, 28 May–1 June 2001; pp. 145–152. [Google Scholar]
  24. Li, S.; Lee, D. Fast visual odometry using intensity-assisted iterative closest point. IEEE Robot. Autom. Lett. 2016, 1, 992–999. [Google Scholar] [CrossRef]
  25. Tomono, M. Robust 3D SLAM with a stereo camera based on an edge-point ICP algorithm. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 4306–4311. [Google Scholar]
  26. Santamaria-Navarro, A.; Solà, J.; Andrade-Cetto, J. Visual Guidance of Unmanned Aerial Manipulators; Springer: New York, NY, USA, 2019. [Google Scholar]
  27. Grupp, M. evo: Python package for the evaluation of odometry and SLAM. Available online: https://github.com/MichaelGrupp/evo (accessed on 25 February 2020).
  28. Sturm, J.; Engelhard, N.; Endres, F.; Burgard, W.; Cremers, D. A Benchmark for the Evaluation of RGB-D SLAM Systems. In Proceedings of the International Conference on Intelligent Robot Systems (IROS), Vilamoura, Portugal, 7–12 October 2012; pp. 573–580. [Google Scholar]
  29. Holz, D.; Ichim, A.E.; Tombari, F.; Rusu, R.B.; Behnke, S. Registration with the point cloud library: A modular framework for aligning in 3-D. IEEE Robot. Autom. Mag. 2015, 22, 110–124. [Google Scholar] [CrossRef]
Figure 1. PMD Flexx ToF camera output (224 × 171 pixels). (a) Depth image; white indicates that depth information is not available. (b) NIR intensity image.
Figure 2. ToF–VIO testing platform and its implementation on a UAV.
Figure 3. The camera pose in the world frame and the point cloud alignment procedure in the camera frame.
Figure 4. Workflow of salient point selection.
Figure 5. Background points and depth extreme points.
Figure 6. Distribution and approximate distribution curves of $\|P_i - P_i'\|$ in different cases. Upper figures plot the probability density function (PDF) and lower figures plot the cumulative distribution function (CDF).
Figure 7. Workflow of the Error State Kalman Filter estimator by using the ToF camera alignment results and the IMU data.
Figure 8. Time synchronization of IMU and ICP.
Figure 9. Estimated trajectory and ground truth in the xy plane of the handheld test.
Figure 10. Comparison of estimated trajectory and ground truth of the handheld test.
Figure 11. Trajectories of different ICP methods.
Figure 12. Visualization of salient points.
Figure 13. The mean value of $\|p - p'\|$ over the ICP loops.
Figure 14. Estimated trajectory and ground truth in the xy plane of the lab environment test.
Figure 15. Comparison of estimated trajectory and ground truth in the lab environment test.
Figure 16. The VIO results obtained in an environment of varying ambient light intensity. (a,c,e,g,i) are recorded when light is on, (b,d,f,h,i) are recorded when light is turned off.
Figure 17. Comparison of the estimated trajectory by the current ToF–VIO sensor with that by the Realsense T265 sensor in the corridor environment.
Figure 18. Exploration and reconstruction of the indoor environment using ToF–VIO; blue path (ToF–VIO), red path (T265).
Figure 19. Comparison of the estimated trajectory by the current ToF–VIO sensor with that by the Realsense T265 sensor in the exploration test.
Figure 20. Box plots of the numbers of input points and salient points in three tests.
Table 1. Different VIO frameworks.

Estimation Method and Input | IMU Fusion: Loose-Coupled | IMU Fusion: Tight-Coupled
Direct method (RGB/Grey) | SVO+MSF [3,4] | VINS [7,8], VI-DSO [10]
Image-based method, feature based (RGB/Grey) | MSCKF [5] | VI-ORB [12], OKVIS [6]
ICP (Depth + RGB/Grey) | KinectFusion [11] (no IMU support) |
ICP (Depth + Grey) | (current work) | n.a.
Others: NDT (RGB/Grey + Lidar) | 3D-NDT [13], Direct Depth SLAM [14] |
Table 2. Pros and cons of using a passive camera and an active ToF camera in motion estimation.

 | Passive Camera | Active ToF Camera
Pros | Less expensive; studied for years, with many CV algorithms available; relatively high resolution | Strong resilience to ambient light; senses depth information without post-processing
Cons | Fails in poor/changing lighting conditions; needs an initialization process to recover depth | Limited sensing distance (less than 10 m); current CV algorithms cannot be used on the NIR image; relatively low resolution
Table 3. Device-related parameter selection.

Symbol | Value | Description
π_sh,bg | 0.01 | Background outlier threshold
π_sh,i | 100 | Intensity gradient threshold
π_sh,z | 0.07 | Depth gradient threshold
Table 4. Details of the sensor information.

Topic Name | Content | Frequency (Hz)
/image_depth | Depth image | 15
/image_nir | 8-bit NIR intensity image | 15
/points | Organized point cloud | 15
/camera_info | Camera information (f_x, f_y, c_x, c_y) | 15
/imu | IMU data | 250
/gt | Ground truth captured by Vicon | 50
Table 5. Accuracy and processing time comparison of different methods. Translation error (RMSE) is reported as ATE (m) and RPE (m/s); the improvements and the relative processing time are given with respect to the conventional icp (= 100%).

Method | ATE (m) | ATE Improvement | RPE (m/s) | RPE Improvement | Average Processing Time (ms) | Relative Processing Time
icp | 0.048 | 0 | 0.023 | 0 | 148 | 100%
icp+s | 0.051 | –6% | 0.019 | 17% | 28.0 | 18%
icp+s+w | 0.047 | 2% | 0.017 | 26% | 36.2 | 25%
icp+rd | 0.063 | –31% | 0.021 | 9% | 20.4 | 14%
icp+r | 0.070 | –45% | 0.015 | 28% | 83.8 | 56%
Table 6. Calculation times of the different processes of our algorithm on an Intel i5 PC and a TX2 embedded computer.

Process | Time (TX2) | Time (i5 PC)
Salient Point Selection | 6 ms | 4 ms
ICP alignment | 33 ms | 30 ms
ESKF | 1 ms | 1 ms
