Article

LIDAR-Inertial Real-Time State Estimator with Rod-Shaped and Planar Feature

1 School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China
2 Key Laboratory of Micro-Inertial Instrument and Advanced Navigation Technology, Southeast University, Nanjing 210096, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(16), 4031; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14164031
Submission received: 8 June 2022 / Revised: 9 August 2022 / Accepted: 16 August 2022 / Published: 18 August 2022

Abstract

State estimation and mapping based on Light Detection and Ranging (LIDAR) are important for autonomous systems. Point cloud registration is a crucial module affecting the accuracy and real-time performance of LIDAR simultaneous localization and mapping (SLAM). In this paper, a novel point cloud feature selection method for tightly coupled LIDAR-inertial systems is proposed. In the front-end, point cloud registration is carried out after marking rod-shaped and planar feature information, which differs from existing LIDAR and inertial measurement unit (IMU) integration schemes; this preprocessing subsequently reduces outliers. IMU pre-integration outputs high-frequency results and provides the initial value for the LIDAR solution. In the scan-to-map module, a computationally efficient graph optimization framework is applied, and the LIDAR odometry further constrains the IMU states. In the back-end, a sliding-window-based optimization incorporates the LIDAR-inertial measurements and loop closure global constraints to reduce the cumulative error. Combining the front-end and back-end, we propose a low-drift, real-time LIDAR-inertial positioning system. Furthermore, we conducted exhaustive comparisons on open data sequences and in real-world experiments. The proposed system achieves much higher positioning accuracy than state-of-the-art methods in various scenarios. Compared with LIO-SAM, the average absolute trajectory error (ATE) RMSE (Root Mean Square Error) is reduced by 64.45% on the M2DGR street datasets (street_01, 04, 07, 10) and by 24.85% on our own scene datasets. In the mapping module, the most time-consuming part of each system, our runtime is also significantly reduced thanks to the front-end preprocessing and the back-end graph model.

1. Introduction

Accurate and reliable state estimation is a fundamental requirement of mobile robots and autonomous driving. In urban environments, indoor environments and other complex scenes, it is difficult to meet high-precision positioning requirements with traditional GNSS/INS integration.
In recent years, visual and LIDAR simultaneous localization and mapping have advanced considerably. On the one hand, visual SLAM can achieve six degrees-of-freedom state estimation with cameras alone, but it is severely affected by illumination changes and low-texture scenes [1]. On the other hand, a laser sensor directly obtains depth information at high resolution, works at night and achieves accurate pose estimation. Therefore, this research focuses on LIDAR simultaneous localization and mapping.
LIDAR odometry and mapping (LOAM) [2] is one of the earliest LIDAR SLAM algorithms. The iterative closest point (ICP) algorithm is a common method for point cloud matching, but it is time-consuming and easily falls into a local minimum [3]. LOAM replaces ICP with point-to-line and point-to-plane matching and consists of two subsystems. The odometry subsystem performs point-to-line/plane feature matching to estimate the pose between scans, with line and plane features judged according to point curvature. At the same time, LOAM effectively eliminates unreliable parallel points and occlusion points and performs distortion compensation by motion interpolation. The mapping subsystem performs scan-to-map matching at a lower frequency, which yields higher-accuracy state estimation. By combining these two subsystems, LOAM achieves low drift and low computational complexity, and it has ranked at the top of the LIDAR-based methods on the KITTI odometry benchmark [4]. However, LOAM still has some flaws: its point cloud is stored in a global voxel map, and without keyframe selection it is difficult to integrate observations from other sensors or perform global optimization.
F-LOAM adopts a two-stage distortion compensation method to reduce the computational cost and improve real-time performance [5], but it still lacks global optimization methods such as loop closure, resulting in large cumulative errors over long runs. Liu et al. propose a deep-learning method for extracting feature points and their descriptors in LIDAR odometry; it also adopts two-step state estimation for long-distance experiments and performs well for LIDARs of various resolutions [6]. V-LOAM introduces visual odometry as the front-end of the laser odometry, further improving the accuracy of SLAM [7]. HDL_GRAPH_SLAM [8] can fuse LIDAR, IMU and GNSS sensors, but its NDT-based [9] scan registration accuracy is low, and it is prone to drift on non-planar terrain because of its flat-ground constraint. LeGo-LOAM implements point cloud segmentation to reduce the number of features, and its two-step registration provides the initial value for the LIDAR mapping module; it was also the first of these methods to incorporate keyframe selection and loop closure detection [10]. However, it drifts noticeably in large-scene tests, and the IMU is only used to remove distortion. LIO-mapping [11] formulates a joint state estimation problem based on the ideas of LOAM and VINS-Mono [12], replacing the visual front-end with a LIDAR front-end for feature extraction and state estimation. However, the optimization problem is too large to run in real time, which makes it hard to deploy on a mobile device. LINS is a tightly coupled LIDAR-inertial odometry (LIO) system based on filtering [13]: an iterated error-state Kalman filter corrects the state estimate of the robot, but without global constraints the robot still drifts over long runs. LiLi-OM puts forward an adaptive keyframe selection for both solid-state and traditional LIDARs and introduces a metric weighting function for sensor fusion [14]; however, lacking point cloud preprocessing, its stability is inadequate. LIO-SAM [15] is also a tightly coupled LIO system, built on the incremental smoothing and mapping framework iSAM2 [16]; loop closure and GPS factors can be added to its global factor graph. In spite of this, its IMU constraints do not enter the factor graph optimization, which may lose constraint information between IMU and LIDAR measurements. In actual scene tests, LIO-SAM can exhibit unstable states such as point cloud matching errors, especially when the carrier moves through a large scene. Zhang et al. proposed a loosely coupled LIDAR-inertial odometry with an adaptive covariance estimation algorithm, which achieves better results than a comparable tightly coupled method [17].
In short, existing LIDAR SLAM algorithms are designed mainly for small scenes. In complex scenes or under large motion changes, they are prone to cumulative error and poor robustness.
Meanwhile, the processing of point cloud data affects the accuracy of point cloud registration in LIDAR SLAM. Douillard et al. introduced a method that jointly determines the ground and individual objects on the ground in three-dimensional space, including overhanging structures, but it requires a large amount of computation time, limiting online applications [18]. B. Douillard et al. also proposed a prior-based ground extraction method in which the segmentation of dense 3D data is optimized via a simple yet efficient voxelization of the space; this approach provides near-real-time performance but is not sufficient for real-time positioning scenarios [18]. M. Himmelsbach et al. project 3D point clouds onto 2D grids on the ground plane and then segment the point clouds on the occupied grids [19]. This algorithm is fast and suitable for online segmentation, but it tends toward under-segmentation: when two objects are close to each other, it is prone to misrecognition, especially in the z-axis direction. In 2019, Seungcheol Park et al. proposed Curved-Voxel Clustering: point cloud coordinates are converted from Cartesian to spherical coordinates, each point is assigned to the voxel of the corresponding spherical coordinate system, and hash tables associate indexes with points so that lookups during clustering use the hash tables [20]. Chen et al. use an IMU to assist point cloud registration and introduce an inertial error model for mobile laser scanning, which effectively reduces the error at low time cost [21].
This paper mainly aims to improve the accuracy of LIDAR point cloud registration under real-time constraints, so as to ensure the robustness of the system. The contributions of this paper are summarized as follows:
  • A quick and effective feature extraction method is proposed. Using rod-shaped and planar feature information, edge points and surface points are extracted reasonably from the curvature calculation at low computational cost.
  • IMU pre-integration is used to provide the initial value for LIDAR odometry, and the LIDAR odometry further constrains the pre-integrated IMU states.
  • A graph optimization model is used to solve the scan-to-map module, which greatly improves the speed compared with the traditional algorithm. Another graph optimization model globally optimizes the pre-integrated IMU measurement residuals, inter-frame matching residuals and loop residuals, which effectively improves the accuracy and stability of the LIO system.

2. System Overview

The overall framework of this system is shown in Figure 1. LIDAR and IMU measurements are the inputs to the system.
The system can be divided into three parts.
First, in the preprocessing module, the raw point clouds are de-skewed using gyroscope data and the IMU pre-integration value. The current scan's point cloud is projected onto a 2D image, and the depth value of each grid cell is used to remove outlier points. The image is then used to segment the planar features and cluster the rod-shaped features.
Then, in the LIO odometry module, the IMU pre-integration results are used to predict the motion pose, and scan-to-map matching between the current frame and the local map is performed. In the scan-to-map module, we introduce a graph optimization model that enhances the speed and accuracy of the solution, and a sliding-window-based method is applied to update and maintain the local map.
Finally, in the global optimization module, if the current frame is judged to be a keyframe, the LIDAR scan-to-scan residuals, pre-integrated IMU residuals and loop residuals are optimized via sliding-window optimization. Marginalization information is used as a prior constraint, and loop closure is detected and applied efficiently, which helps reduce the cumulative error.
With this system, we obtain a 6-DOF pose estimate and a real-time updated global map. Exhaustive comparisons have been conducted to demonstrate the superiority of our system.
We define the notation and frames used throughout the article. $(\cdot)^W$ denotes the world frame; in the LIO system, the origin of the world frame coincides with the first LIDAR frame. $(\cdot)^B$ is the body frame and $(\cdot)^L$ is the LIDAR frame. Rotation is represented by rotation matrices $R$ and quaternions $q$, so $R_{WB}$ and $q_{WB}$ denote the rotation from the world frame to the body frame, and $p_{WB}$ is the translation. $\otimes$ denotes quaternion multiplication.

3. The Preprocessing Module

In this study, the current LIDAR point cloud is projected onto a 2D image grid represented by a matrix. The horizontal index unit is the horizontal resolution of each frame, and the vertical unit is the vertical resolution; for example, the projected image matrix of a 16-line LIDAR has a size of 16 × 1800. Each image grid cell stores the depth of its point, and points with outlier depth values are removed. Operating on the point cloud through this two-dimensional image significantly improves computing speed.
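As a concrete illustration of this projection (a minimal sketch, not the authors' code; the beam geometry values are assumptions based on a typical 16-line LIDAR):

```cpp
// Sketch: map one LIDAR point to a cell of the 16 x 1800 range image.
// Assumed VLP-16-style geometry: lowest beam at -15 deg, 2 deg vertical
// and 0.2 deg horizontal resolution.
#include <cmath>

struct Pixel { int row; int col; };

Pixel projectToImage(double x, double y, double z) {
    constexpr int    kRows = 16, kCols = 1800;
    constexpr double kVertBottomDeg = -15.0;          // lowest beam elevation
    constexpr double kVertResDeg    = 2.0;            // vertical resolution
    constexpr double kHorizResDeg   = 360.0 / kCols;  // 0.2 deg per column

    const double horizDist = std::sqrt(x * x + y * y);
    const double vertDeg   = std::atan2(z, horizDist) * 180.0 / M_PI;
    const double horizDeg  = std::atan2(y, x) * 180.0 / M_PI;  // (-180, 180]

    int row = static_cast<int>(std::round((vertDeg - kVertBottomDeg) / kVertResDeg));
    int col = static_cast<int>(std::round((horizDeg + 180.0) / kHorizResDeg)) % kCols;
    if (row < 0) row = 0;
    if (row >= kRows) row = kRows - 1;
    // The cell (row, col) stores the point depth; cells whose depth is an
    // outlier relative to their neighbors are removed afterwards.
    return {row, col};
}
```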
After this process, a reliable motion estimate for each LIDAR scan is a necessary prerequisite. In this paper, IMU pre-integration is used to obtain the relative translational motion between the beginning and the end of each scan, which allows the point cloud distortion to be eliminated. In the meantime, the raw point clouds of each scan are rotationally de-skewed using gyroscope data.

3.1. Label Planar Feature Information

The sensor carrier moves on the ground and the LIDAR is mounted horizontally, so the ground is observed by the lower beams. A rough but fast estimate of the ground plane can be obtained from the lower rows of the image matrix. Within this estimated plane, accurate ground points are marked by judging the angle of each point relative to the ground:
$$a = \tan^{-1}\frac{dz}{\sqrt{(dx)^2 + (dy)^2}} \qquad (1)$$
As shown in Figure 2, P1 and P2 are the reflection points of two laser beams. The angle a corresponding to points of adjacent laser beams has a small value if there is no barrier, so points on the ground can be marked according to the size of this included angle. Here dx, dy and dz are the differences between the two reflection points in the three directions, respectively.
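The ground test of Equation (1) can be sketched as follows (our illustration; the paper does not state the ground-angle threshold, so the 10° value here is an assumption):

```cpp
// Sketch of the ground check: mark two adjacent-beam points as ground
// if the elevation angle of Eq. (1) between them is small.
#include <cmath>

bool isGroundPair(double dx, double dy, double dz, double maxAngleDeg = 10.0) {
    // Eq. (1): a = atan(dz / sqrt(dx^2 + dy^2))
    double a = std::atan2(std::fabs(dz), std::sqrt(dx * dx + dy * dy)) * 180.0 / M_PI;
    return a < maxAngleDeg;  // a small angle means both points lie on the ground
}
```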

3.2. Label Rod-Shape Feature Information

Figure 3 shows the top view; OC is the range measurement of the first beam and OD is that of the second.
$$\beta = \tan^{-1}\frac{r_2 \sin\alpha}{r_1 - r_2 \cos\alpha} = \tan^{-1}\frac{|MD|}{|OM|} \qquad (2)$$
In this formula, $r_1$ and $r_2$ are the depth values measured by the two beams, and $\alpha$ is the angle between the two laser beams; for the Velodyne VLP-16, for example, $\alpha$ is 0.2° in the horizontal direction and 2° in the vertical direction.
If C and D are on the same object, the angle β will be greater than a certain threshold; if they are on different objects, it falls below it. In this work, we set this threshold at 20°. According to this method, we can cluster points of the same object quickly and accurately.
The point cloud in our two-dimensional image model can be traversed quickly. We traverse each pixel in the 2D image and calculate the included angle with the four points around it, visiting pixels with the BFS algorithm [22].
As shown in Figure 3c, the surrounding points are the left, right, lower and upper pixels. If the angle to a surrounding point is greater than the threshold, we decide the two points belong to the same object. If a point is already marked, it is skipped, so the algorithm complexity is Θ(N), where N is the number of image pixels.
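A minimal sketch of this BFS labeling pass in the spirit of [22] is given below (the 20° threshold and the 2°/0.2° beam resolutions are from the text; everything else, such as treating a zero range as "no return", is an assumption):

```cpp
// Sketch: BFS clustering over the 16 x 1800 range image using the angle
// beta of Eq. (2). Two neighboring pixels join the same cluster when beta
// exceeds the 20 deg threshold.
#include <cmath>
#include <queue>
#include <vector>

constexpr int kRows = 16, kCols = 1800;

double beta(double r1, double r2, double alphaRad) {
    // Eq. (2), with r1 >= r2 the two measured depths
    return std::atan2(r2 * std::sin(alphaRad), r1 - r2 * std::cos(alphaRad));
}

// range: row-major depth image, 0 = no return; labels: output cluster ids
void bfsCluster(const std::vector<double>& range, std::vector<int>& labels) {
    const double kThresh = 20.0 * M_PI / 180.0;   // threshold from Section 3.2
    const double alpha[2] = {2.0 * M_PI / 180.0,  // vertical resolution
                             0.2 * M_PI / 180.0}; // horizontal resolution
    labels.assign(kRows * kCols, -1);
    int next = 0;
    for (int s = 0; s < kRows * kCols; ++s) {
        if (labels[s] != -1 || range[s] <= 0.0) continue;
        std::queue<int> q;
        q.push(s);
        labels[s] = next;
        while (!q.empty()) {
            const int idx = q.front(); q.pop();
            const int r = idx / kCols, c = idx % kCols;
            const int dr[4] = {-1, 1, 0, 0}, dc[4] = {0, 0, -1, 1};
            for (int k = 0; k < 4; ++k) {
                const int nr = r + dr[k], nc = (c + dc[k] + kCols) % kCols;
                if (nr < 0 || nr >= kRows) continue;
                const int nidx = nr * kCols + nc;
                if (labels[nidx] != -1 || range[nidx] <= 0.0) continue;
                const double r1 = std::max(range[idx], range[nidx]);
                const double r2 = std::min(range[idx], range[nidx]);
                if (beta(r1, r2, alpha[k < 2 ? 0 : 1]) > kThresh) {
                    labels[nidx] = next;  // same object: continue the search
                    q.push(nidx);
                }
            }
        }
        ++next;
    }
}
```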
This removes disorderly and unreliable points from subsequent processing. For example, while the carrier is driving, leaves, small objects, grass and weeds can be removed; these are difficult to observe consistently across two consecutive scans and are a main source of error in the pose solution.

3.3. Feature Extraction

Through the segmentation and clustering of planar and rod-shaped information in the previous steps, edge points and planar points are extracted from the rod-shaped and planar feature information, respectively. Curvature is defined as follows:
$$c = \frac{1}{|S| \cdot \left\| \mathrm{Pr}_i^L \right\|} \left\| \sum_{j \in S,\, j \neq i} \left( \mathrm{Pr}_j^L - \mathrm{Pr}_i^L \right) \right\| \qquad (3)$$
In this formula, $S$ is the set of continuous points from the same row of the 2D image, $\mathrm{Pr}$ is the point range, and $c$ is the curvature value. In this work, $|S|$ is set to 10.
Similar to LOAM, the depth information of each point is used to eliminate parallel points and occluded points, which would otherwise affect the subsequent solution. In the feature extraction process, each frame is divided into six sub-images, each with a resolution of 16 × 300. Edge points and planar points are extracted from each sub-image according to the thresholds $c_{th}$ and $p_{th}$, chosen here as 1 and 0.1. The edge point set and planar point set extracted from frame k are $\varepsilon_k$ and $s_k$.
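For one image row, the curvature computation and thresholding can be sketched as follows (a simplified illustration using the $|S| = 10$ neighborhood and the thresholds from the text; the boundary handling is an assumption):

```cpp
// Sketch of Eq. (3): curvature of each point from 5 neighbors on each side
// within the same image row; then c > 1 marks edge points, c < 0.1 planar ones.
#include <cmath>
#include <vector>

std::vector<double> rowCurvature(const std::vector<double>& ranges) {
    const int half = 5;  // |S| = 10 neighbors in total
    std::vector<double> c(ranges.size(), 0.0);
    for (std::size_t i = half; i + half < ranges.size(); ++i) {
        double sum = 0.0;
        for (int j = -half; j <= half; ++j)
            if (j != 0) sum += ranges[i + j] - ranges[i];  // sum of (Pr_j - Pr_i)
        c[i] = std::fabs(sum) / (10.0 * ranges[i]);        // 1/(|S| * ||Pr_i||)
    }
    return c;
}
// classification per point: edge if c > c_th = 1, planar if c < p_th = 0.1
```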
After this process, we selectively obtain stable features and reduce the computational load on the back-end. At the same time, this step improves the reliability of front-end scan registration.

4. The LIO Odometry Module

4.1. IMU Pre-Integration

LIDAR and IMU work at different frequencies: typically, the LIDAR runs at 10 to 30 Hz and the IMU at 100 to 500 Hz. Pre-integration integrates the IMU measurements between each pair of adjacent LIDAR frames and expresses them as a single relative-motion term. Through this step, we obtain the outputs of the two sensors at the same frequency.
The IMU measures the angular velocity $\tilde{\omega}_B(t)$ and acceleration $\tilde{a}_B(t)$, both expressed in the B frame. The measurement equations can be modeled as:
$$\tilde{\omega}_B(t) = \omega_B(t) + b_\omega(t) + \eta_\omega(t) \qquad (4)$$
$$\tilde{a}_B(t) = R_{WB}(t)\left(a_W(t) - g_W\right) + b_a(t) + \eta_a(t) \qquad (5)$$
The measured values are affected by the slowly varying biases $b(t)$ and white noise $\eta(t)$. The gravity vector in the world frame, $g_W = [0, 0, g]^T$, also affects the acceleration measurement, so it must be subtracted.
In our work, noise is ignored and the biases are considered constant during the pre-integration period. The current state can then be obtained from the pre-integration derivation. Assume that j is the current frame and i is the previous frame. The attitude rotation matrix $R_{WB}^j$, velocity $v_{WB}^j$ and position $p_{WB}^j$ can be expressed as:
$$R_{WB}^j = R_{WB}^i\, \Delta R_{ij}\, \operatorname{Exp}\!\left(J_{\Delta R_{ij}}^{g}\, \delta b_\omega^i\right) \qquad (6)$$
$$v_{WB}^j = v_{WB}^i + g_W \Delta t_{ij} + R_{WB}^i\left(\Delta v_{ij} + J_{\Delta v_{ij}}^{\omega}\, \delta b_\omega^i + J_{\Delta v_{ij}}^{a}\, \delta b_a^i\right) \qquad (7)$$
$$p_{WB}^j = p_{WB}^i + v_{WB}^i \Delta t_{ij} + \tfrac{1}{2} g_W \Delta t_{ij}^2 + R_{WB}^i\left(\Delta p_{ij} + J_{\Delta p_{ij}}^{\omega}\, \delta b_\omega^i + J_{\Delta p_{ij}}^{a}\, \delta b_a^i\right) \qquad (8)$$
Here $J$ denotes the Jacobians; details can be found in [23]. The terms $J^{a}_{(\cdot)}$ and $J^{\omega}_{(\cdot)}$ give a first-order approximation of the effect of changing the biases, avoiding repeated integration. Meanwhile, the pre-integration terms $\Delta R_{ij}$, $\Delta v_{ij}$ and $\Delta p_{ij}$ can be computed between frames i and j:
$$\Delta R_{ij} = \prod_{k=i}^{j-1} \operatorname{Exp}\!\left((\omega_B^k - b_\omega^i)\,\Delta t\right) \qquad (9)$$
$$\Delta v_{ij} = \sum_{k=i}^{j-1} \Delta R_{ik}\,(a_B^k - b_a^i)\,\Delta t \qquad (10)$$
$$\Delta p_{ij} = \sum_{k=i}^{j-1} \left(\Delta v_{ik}\,\Delta t + \tfrac{1}{2}\Delta R_{ik}\,(a_B^k - b_a^i)\,\Delta t^2\right) \qquad (11)$$
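For illustration, a minimal sketch of Equations (9)–(11) accumulating raw IMU samples between two LIDAR frames is given below (Eigen is assumed available; noise terms and bias-Jacobian propagation are omitted, and the biases are held constant as in the text):

```cpp
// Sketch: IMU pre-integration between LIDAR frames i and j, Eqs. (9)-(11).
#include <Eigen/Dense>
#include <cmath>
#include <vector>

struct ImuSample { Eigen::Vector3d w, a; double dt; };  // gyro, accel, interval

struct Preintegrated {
    Eigen::Matrix3d dR = Eigen::Matrix3d::Identity();   // Delta R_ij
    Eigen::Vector3d dv = Eigen::Vector3d::Zero();       // Delta v_ij
    Eigen::Vector3d dp = Eigen::Vector3d::Zero();       // Delta p_ij
};

// Exponential map of so(3) (Rodrigues' formula)
Eigen::Matrix3d Exp(const Eigen::Vector3d& phi) {
    const double t = phi.norm();
    if (t < 1e-8) return Eigen::Matrix3d::Identity();
    const Eigen::Vector3d a = phi / t;
    Eigen::Matrix3d ax;
    ax <<      0, -a.z(),  a.y(),
           a.z(),      0, -a.x(),
          -a.y(),  a.x(),      0;
    return Eigen::Matrix3d::Identity() + std::sin(t) * ax + (1 - std::cos(t)) * ax * ax;
}

Preintegrated preintegrate(const std::vector<ImuSample>& samples,
                           const Eigen::Vector3d& bw, const Eigen::Vector3d& ba) {
    Preintegrated P;
    for (const auto& s : samples) {
        const Eigen::Vector3d acc = s.a - ba;
        P.dp += P.dv * s.dt + 0.5 * P.dR * acc * s.dt * s.dt;  // Eq. (11)
        P.dv += P.dR * acc * s.dt;                             // Eq. (10)
        P.dR  = P.dR * Exp((s.w - bw) * s.dt);                 // Eq. (9)
    }
    return P;
}
```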

4.2. Build Local Map

In point cloud registration, the iterative closest point (ICP) algorithm is the most commonly used scan registration method. However, since urban scenes contain many moving targets, ICP, which operates directly on raw point data, has a high registration failure rate, and its large computational cost makes it unsuitable for real-time localization and mapping. In LOAM, pose estimation relies on scan-to-scan matching for quick estimation, but this method is prone to cumulative error. In our work, the current scan and the local map are matched using the predicted value from IMU pre-integration, and the scan-to-map result is in turn used to correct the accumulated IMU error.
A local map associated with the current LIDAR frame is constructed: a fixed number of keyframes within a certain range are maintained by a sliding-window method, and the local map is converted to the W frame. The edge points and planar points of the local map form a voxel map, and the points in the local map are down-sampled to eliminate duplicated features. To improve point cloud matching speed, the feature information of the local map is stored in a KD-tree [24] for efficient subsequent search.
Therefore, this paper adopts a registration method based on feature points. After feature points of the same type are obtained through preprocessing, a graph optimization model is used to iteratively register the current scan against the local map.
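The duplicate-removal step above can be sketched with a simple voxel hash (our illustration; the 0.2 m leaf size is an assumption, and hash collisions are tolerated for brevity):

```cpp
// Sketch: voxel down-sampling of local-map features, keeping one point per voxel.
#include <cmath>
#include <unordered_map>
#include <vector>

struct Pt { double x, y, z; };

std::vector<Pt> voxelDownsample(const std::vector<Pt>& pts, double leaf = 0.2) {
    auto key = [leaf](const Pt& p) {
        const long ix = static_cast<long>(std::floor(p.x / leaf));
        const long iy = static_cast<long>(std::floor(p.y / leaf));
        const long iz = static_cast<long>(std::floor(p.z / leaf));
        return (ix * 73856093L) ^ (iy * 19349663L) ^ (iz * 83492791L);
    };
    std::unordered_map<long, Pt> grid;          // first point per voxel wins
    for (const auto& p : pts) grid.emplace(key(p), p);
    std::vector<Pt> out;
    out.reserve(grid.size());
    for (const auto& kv : grid) out.push_back(kv.second);
    return out;
}
```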

4.3. Pose Estimation

For each edge point $p_\varepsilon \in \varepsilon_k$, we search for the nearest five points in the local map and compute the mean and covariance matrix of these five points. When the distribution of the points approximates a straight line, one eigenvalue of the covariance matrix is significantly larger than the others. The eigenvector $u_\varepsilon^{sm}$ corresponding to this eigenvalue is the main direction of the line, and $p_\varepsilon^{sm}$ is the geometric center of the five points (Figure 4). If the line-feature condition is satisfied, the distance between the current edge point and the line is calculated, and the best pose estimate of the current point in the local map is obtained by minimizing this distance:
$$f_\varepsilon(p_\varepsilon) = p_n \cdot \left(\left(T_k\, p_\varepsilon - p_\varepsilon^{sm}\right) \times u_\varepsilon^{sm}\right) \qquad (12)$$
where $\cdot$ is the dot product, $\times$ is the cross product and $p_n$ is the unit normal vector:
$$p_n = \frac{\left(T_k\, p_\varepsilon - p_\varepsilon^{sm}\right) \times u_\varepsilon^{sm}}{\left\|\left(T_k\, p_\varepsilon - p_\varepsilon^{sm}\right) \times u_\varepsilon^{sm}\right\|} \qquad (13)$$
In the same way, for each planar point $p_s \in s_k$ in the current scan, we search for five points in the local map to form a plane. The difference is that the eigenvector corresponding to the minimum eigenvalue of the five-point covariance matrix is the normal vector of this plane. As shown in Figure 5, $u_s^{sm}$ is the direction of the normal vector and $p_s^{sm}$ is the geometric center of the five planar points. The planar residual is:
$$f_s(p_s) = \left(T_k\, p_s - p_s^{sm}\right) \cdot u_s^{sm} \qquad (14)$$
Therefore, this optimization problem can be constructed as:
$$\min_{T_k} \left\{ \sum f_\varepsilon(p_\varepsilon) + \sum f_s(p_s) \right\} \qquad (15)$$
The graph optimization algorithm is used to solve this nonlinear optimization problem, yielding an accurate state estimate. The Jacobians can be derived using the left perturbation model with $\delta\varphi \in \mathfrak{se}(3)$ [25]:
$$J_p = \frac{\partial (TP)}{\partial \delta\varphi} = \lim_{\delta\varphi \to 0} \frac{\exp(\delta\varphi^{\wedge})(TP) - (TP)}{\delta\varphi} = \begin{bmatrix} I_{3\times3} & -[TP]_\times \\ 0_{1\times3} & 0_{1\times3} \end{bmatrix} \qquad (16)$$
where $[TP]_\times$ converts the homogeneous point $\{x, y, z, 1\}$ to the 3D point $\{x, y, z\}$ and forms its skew-symmetric matrix. The Jacobian of the edge residual can be derived as:
$$J_\varepsilon = \frac{\partial f_\varepsilon(p_\varepsilon)}{\partial (TP)} \frac{\partial (TP)}{\partial \delta\varphi} = p_n \cdot \left(u_\varepsilon^{sm} \times J_p\right) \qquad (17)$$
In the same way, we can derive:
$$J_s = \frac{\partial f_s(p_s)}{\partial (TP)} \frac{\partial (TP)}{\partial \delta\varphi} = u_s^{sm} \cdot J_p \qquad (18)$$
According to the above formulas, the estimate is refined by iterative optimization until convergence. In this work, the local map is limited to a 50 m radius. We propose a new optimization model, derive the corresponding residuals and Jacobians, and improve the solving speed significantly compared with other algorithms (see Section 6).
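To make the edge residual of Equations (12) and (13) concrete, the sketch below fits the line to the five nearest map points by eigen-decomposition and returns the point-to-line distance (Eigen is assumed available; the factor-of-three eigenvalue dominance check is our assumption for rejecting degenerate fits):

```cpp
// Sketch: point-to-line distance for an edge point, Eqs. (12)-(13).
#include <Eigen/Dense>
#include <array>

double edgeResidual(const Eigen::Vector3d& pTransformed,           // T_k * p_eps
                    const std::array<Eigen::Vector3d, 5>& nbrs) {  // 5 nearest map points
    Eigen::Vector3d mean = Eigen::Vector3d::Zero();
    for (const auto& q : nbrs) mean += q;
    mean /= 5.0;                                   // p_eps^sm, geometric center
    Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
    for (const auto& q : nbrs) cov += (q - mean) * (q - mean).transpose();
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(cov);
    // Eigenvalues come sorted ascending: column 2 holds the dominant direction u_eps^sm.
    const Eigen::Vector3d u = es.eigenvectors().col(2);
    // Reject the fit if the points do not clearly form a line (factor assumed).
    if (es.eigenvalues()(2) < 3.0 * es.eigenvalues()(1)) return 0.0;
    return (pTransformed - mean).cross(u).norm();  // distance from point to line
}
```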

5. The Global Optimization Module

If the motion change of the current scan relative to the previous scan is greater than a certain threshold (10° in rotation or 0.5 m in translation), the current frame is judged to be a keyframe and enters the sliding-window-based global optimization.
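A sketch of this keyframe test (the thresholds are from the text; the per-axis rotation check is our reading):

```cpp
// Sketch: keyframe decision with 10 deg rotation / 0.5 m translation thresholds.
#include <cmath>

bool isKeyframe(double dRollRad, double dPitchRad, double dYawRad,
                double dx, double dy, double dz) {
    const double kRotThresh   = 10.0 * M_PI / 180.0;
    const double kTransThresh = 0.5;  // meters
    const double trans = std::sqrt(dx * dx + dy * dy + dz * dz);
    return std::fabs(dRollRad)  > kRotThresh ||
           std::fabs(dPitchRad) > kRotThresh ||
           std::fabs(dYawRad)   > kRotThresh ||
           trans > kTransThresh;
}
```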
In this paper, the state vector in the sliding window is defined as $\chi = [x_0, x_1, x_2, \ldots, x_n]$, where $x_i = [p_{b_i}^W, q_{b_i}^W, v_{b_i}^W, b_a, b_g]$. For a window of n keyframes, these states are obtained by minimizing:
$$\min_{\chi} \left\{ \left\| R_p(\chi) \right\|^2 + \sum_{k=1}^{n} L\!\left(\hat{z}_{b_j}^{b_i}, \chi\right) + \sum_{k=1}^{n} \varkappa\!\left(\hat{z}_{b_j}^{b_i}, \chi\right) + \sum_{k=1}^{n} F\!\left(\hat{z}_{b_j}^{b_{loop}}, \chi\right) \right\} \qquad (19)$$
In this formula, $R_p(\chi)$ is the prior residual from the measurements marginalized out of the sliding window, and $L(\cdot)$, $\varkappa(\cdot)$ and $F(\cdot)$ denote the LIDAR, IMU and loop closure error terms. Figure 6 shows the optimization process.

5.1. Prior Term

The purpose of marginalization is to bound the computational complexity. States that leave the sliding window cannot simply be thrown away, because doing so would destroy the original constraint relationships and lose constraint information. This work selectively marginalizes $x_i$ out of the sliding window via the Schur complement [26] and converts the measurements corresponding to the marginalized states into a prior.
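The Schur complement step can be sketched as follows (our illustration on a dense information matrix; real systems add damping or a pseudo-inverse for a near-singular marginalized block):

```cpp
// Sketch: marginalize the first m states of an information system H * dx = b.
// With H = [Hmm Hmr; Hrm Hrr], the prior on the remaining states is
// Hp = Hrr - Hrm * Hmm^{-1} * Hmr and bp = br - Hrm * Hmm^{-1} * bm.
#include <Eigen/Dense>

void marginalize(const Eigen::MatrixXd& H, const Eigen::VectorXd& b, int m,
                 Eigen::MatrixXd& Hp, Eigen::VectorXd& bp) {
    const int r = static_cast<int>(H.rows()) - m;
    const Eigen::MatrixXd Hmm = H.topLeftCorner(m, m);
    const Eigen::MatrixXd Hmr = H.topRightCorner(m, r);
    const Eigen::MatrixXd Hrm = H.bottomLeftCorner(r, m);
    const Eigen::MatrixXd Hrr = H.bottomRightCorner(r, r);
    const Eigen::MatrixXd HmmInv =
        Hmm.ldlt().solve(Eigen::MatrixXd::Identity(m, m));
    Hp = Hrr - Hrm * HmmInv * Hmr;              // prior information matrix
    bp = b.tail(r) - Hrm * HmmInv * b.head(m);  // prior information vector
}
```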

5.2. LIDAR Term

Through the scan-to-map calculation of each scan (see Section 4.3), the LIDAR state variation between two adjacent frames is added to the graph optimization model as a scan-to-scan constraint:
$$L\!\left(\hat{z}_{b_j}^{b_i}, \chi\right) = \Delta T_{ij} = T_i^{-1} T_j \qquad (20)$$
Here j and i denote the current and previous frame, respectively. This term inhibits the accumulation of error over long runs.

5.3. IMU Term

When IMU measurements are available, the residual between two consecutive frames can be calculated as:
$$\varkappa\!\left(\hat{z}_{b_j}^{b_i}, \chi\right) = \begin{bmatrix} \delta\alpha_{b_j}^{b_i} \\ \delta\beta_{b_j}^{b_i} \\ \delta\theta_{b_j}^{b_i} \\ \delta b_a \\ \delta b_\omega \end{bmatrix} = \begin{bmatrix} R_W^{b_i}\left(p_{b_j}^W - p_{b_i}^W - v_{b_i}^W \Delta t + \tfrac{1}{2} g^W \Delta t^2\right) - \hat{\alpha}_{b_j}^{b_i} \\ R_W^{b_i}\left(v_{b_j}^W - v_{b_i}^W + g^W \Delta t\right) - \hat{\beta}_{b_j}^{b_i} \\ 2\left[\left(\hat{q}_{b_j}^{b_i}\right)^{-1} \otimes \left(q_{b_i}^W\right)^{-1} \otimes q_{b_j}^W\right]_{xyz} \\ b_a^{b_j} - b_a^{b_i} \\ b_\omega^{b_j} - b_\omega^{b_i} \end{bmatrix} \qquad (21)$$
where $[\cdot]_{xyz}$ is the imaginary part of a quaternion. $\hat{\alpha}_{b_j}^{b_i}$, $\hat{\beta}_{b_j}^{b_i}$ and $\hat{q}_{b_j}^{b_i}$ are the pre-integrated position, velocity and rotation between j and i under the assumption that $b_a$ and $b_\omega$ are stable.

5.4. Loop Term

Loop closure is an important step for correcting the accumulated error in a SLAM system. In this study, it is realized by distance detection: we search for keyframes near the current frame's position, and frames within a 15 m radius are marked as candidate loop closure frames.
We select the nearest candidate as the matched previous keyframe $T_{loop}^W$. A number of point clouds near this keyframe are then assembled into a small local map. Since the number of points involved in this step is small, ICP is used to compute the relative transformation $T_{W'}^{W}$ between the similar scenes. The residual between the previous keyframe and the current frame is then:
$$F\!\left(\hat{z}_{b_j}^{b_{loop}}, \chi\right) = \Delta T_{loop,j} = \left(T_{loop}^W\right)^{-1} T_{W'}^{W}\, T_j^W \qquad (22)$$
In order not to affect the real-time performance, loop closure detection and the ICP mapping run in a separate thread.
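The distance-based candidate search can be sketched as a linear scan over keyframe positions (our illustration; the minimum index gap preventing matches against immediately preceding frames is an assumed parameter):

```cpp
// Sketch: find the nearest loop closure candidate within the 15 m radius.
#include <cmath>
#include <vector>

struct Keyframe { int id; double x, y, z; };

int findLoopCandidate(const std::vector<Keyframe>& keyframes,
                      const Keyframe& current, int minIdGap = 50) {
    const double kRadius = 15.0;   // search radius from Section 5.4
    int best = -1;
    double bestDist = kRadius;
    for (const auto& kf : keyframes) {
        if (current.id - kf.id < minIdGap) continue;  // skip recent frames
        const double d = std::sqrt((kf.x - current.x) * (kf.x - current.x) +
                                   (kf.y - current.y) * (kf.y - current.y) +
                                   (kf.z - current.z) * (kf.z - current.z));
        if (d < bestDist) { bestDist = d; best = kf.id; }
    }
    return best;  // -1 if no candidate lies within 15 m
}
```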

6. Evaluation

To verify our algorithm, we conducted public dataset experiments and real-world experiments. The proposed algorithm runs on a laptop with an Intel i7 CPU and 16 GB of memory, using Ubuntu 18.04 and ROS Melodic [27]. We use evo [28] to evaluate accuracy. The optimization libraries used are GTSAM [29] and Ceres [30].

6.1. Validation of M2DGR Datasets

The M2DGR [31] datasets were recorded using ground robots. As shown in Figure 7, an HDL-32E Velodyne LIDAR (labeled 3 in the figure) scans the surrounding environment to obtain the 3D point cloud. The IMU is a Handsfree A9 (labeled 5 in the figure), a 9-axis sensor. Outdoors, satellite visibility is good, so the GNSS-RTK suite (labeled 4 in the figure) provides high-precision ground truth; for indoor environments, the ground truth trajectories are recorded with a motion-capture system consisting of twelve high-speed tracking cameras. The spatial relationships among the different sensors have been calibrated.
To test the robustness of our algorithm, we adopt tests in different scenarios; the data information is shown in Table 1. Street_01, 04, 07 and 10 are collected on the street, where buildings appear discontinuously. The structured environment has rich geometric feature information, but switching to unstructured scenes has an unpredictable influence on the LIDAR odometry. In Street_04, collected around a lawn, various weeds, leaves and other environmental factors affect the positioning accuracy. Loops are included for loop closure detection, which is important for validating the back-end graph optimization, and the zigzag motion challenges inter-frame motion estimation. We regard runs longer than 500 s as long-term tests of stability and robustness. Gate_02 is collected around a large circular gate, where the loop closure condition is easily satisfied. In Circle_02 the ground robot rotates continuously, which is difficult for feature matching, and Hall_05 is collected in an indoor environment with a large amount of overlap and structured features.
We compare our system with ALOAM, LeGo-LOAM and LIO-SAM. ALOAM depends only on LIDAR; the core of the algorithm is the same as LOAM, but it is implemented with Ceres for code readability. LeGo-LOAM uses IMU data to help remove motion distortion from point clouds. LIO-SAM is a tightly coupled LIDAR-IMU approach, but it has no front-end point cloud processing, and its traditional scan-to-map solver is more time-consuming.
We choose typical scenarios such as zigzag motion, rotation and loops, in which point cloud mismatching often occurs due to violent carrier motion. The long-term runs verify the elimination of accumulated error in the LIO system.
In Table 2, the bold values indicate the minimum error. The seven groups of experiments show that our algorithm improves accuracy in most scenarios, especially street scenes. In the sequence "Circle_02", our LIO system has a higher error than LeGo-LOAM because "Circle_02" is collected in a fixed scene in which the ground robot rotates continuously; this motion state slightly degrades IMU pre-integration. In all other sequences, our system benefits from the tight coupling of inertial and LIDAR information. In "Gate_02" and "Hall_05" the four algorithms perform equally well, as these two scenes are simple and rich in structural features. In the street sequences, however, our algorithm greatly improves performance.

6.1.1. Positioning Performance Analysis

Street_01 is chosen for our analysis. Figure 8 shows the trajectories of the four algorithms on street_01 in the X-Y plane; the accumulated error of each algorithm can be seen in the detail view. Our system makes reasonable use of feature information, which effectively improves the accuracy of point cloud matching: planar points are extracted from ground-surface feature information and edge points from rod-shaped feature information. We notice that ALOAM, LeGo-LOAM and LIO-SAM drift and show large deviations when turning, whereas in our system the point cloud registration is greatly improved.
More detailed evaluations have been conducted. Figure 9 shows the absolute trajectory error over time for the four algorithms on street_01. ALOAM has the largest error without IMU measurements, and its scan-to-scan method easily provides inaccurate initial values to the scan-to-map module. LeGo-LOAM applies a two-step scan-to-scan method, which improves efficiency but still introduces considerable error; moreover, without IMU constraints, a loosely coupled LIO system such as LeGo-LOAM cannot make full use of the sensor observations. LIO-SAM has no point cloud preprocessing stage, so many unstable observations degrade its point cloud registration.
This work reports more than the RMSE of Table 2: Figure 10 displays each evaluation metric, and our system performs well on all of them. Figure 11 is a box plot showing the dispersion of the error data; our system also has the lowest deviation.
Having compared the four algorithms, we now analyze the difference between our system and the ground truth. Figure 12 shows the trajectory and the ground truth in the X-Y plane.
Figure 13 shows the error in the three directions over the whole period. The system shows an obvious deviation at the start, because the optimization process takes time to converge and correct, and the local map takes time to build. After a long run, the error remains low, verifying the robustness and high precision of this work.

6.1.2. Runtime Performance Analysis

Meanwhile, Table 3 shows that the time consumption of our scan-to-map module is significantly reduced. This module is the most time-consuming in all four algorithms, so we compare its cost. The bold values indicate the minimum time consumption; the advantage of our algorithm is obvious.
LOAM and LeGo-LOAM use scan-to-scan matching to provide odometry: the current and previous scans are matched, and the result offers an initial guess for mapping. LIO-SAM and our system instead use high-frequency IMU pre-integration, and we use the back-end result to suppress IMU drift. Moreover, thanks to the edge points extracted from rod-shaped information and the planar points extracted from ground-surface information, many outliers are excluded from the computation, so accuracy and speed are both greatly improved.
We again choose street_01 for analysis. Table 4 shows the number of frames each system processed.
Figure 14 shows the processing time of each frame; our system clearly has the lowest cost. In ALOAM, the mapping module uses the global map, and map maintenance is time-consuming. LeGo-LOAM and LIO-SAM both apply the Levenberg–Marquardt algorithm [32] with 30 iterations for optimization, whereas our algorithm uses a faster and more accurate graph optimization model to solve the scan-to-map module (see Section 4.2 and Section 4.3).
This computational efficiency is meaningful for mobile terminals and other platforms with limited computing resources.

6.2. Validation of Our Datasets

To further test our system with a LIDAR that has fewer beams and an IMU of different quality, we set up a sensor suite composed of a Velodyne VLP-16 and an ADIS16488 IMU (see Figure 15). The sensors have hardware time synchronization via the GPS pulse per second (PPS). RTK/IMU combined navigation results, post-processed with NovAtel Inertial Explorer software, are used as the ground truth. Our aim is to prove the versatility of our algorithm. We pick two typical scenarios: one on a campus (dataset_01) and one on a city road (dataset_02).
In dataset_01 (see Figure 16), the speed of our car is about 6 m/s. Features are rich, but error accumulates over the long run, and various weeds, leaves and other environmental factors affect the positioning accuracy.
Dataset_02 was collected on a wide urban road (see Figure 17) with a car speed of about 14 m/s. It contains a large number of buildings, and dynamic objects affect the point cloud matching accuracy. The point cloud density in open space is low, which makes good localization performance difficult.
From Table 5 we can conclude that LIDAR performs better in feature-rich areas (dataset_01) than in wide-open spaces (dataset_02). The bold values indicate the minimum error. Compared with LIO-SAM, the RMSE in this study is reduced by 24.2% in dataset_01 and 25.0% in dataset_02.
Figure 16a and Figure 17a show the trajectories for the two datasets, where the colors represent error values. Figure 16b and Figure 17b show the global reconstructions of the two scenes. Thanks to the multiple constraints of the back-end optimization, we obtain a globally consistent point cloud map, and according to Equation (19) the robustness and reliability of the map can be guaranteed. The map shows structural details in the bottom panel of Figure 16c: we can clearly see cars and tree trunks in the bird's-eye view. In Figure 17c, the wide roads strongly affect LIDAR SLAM (see Table 5); nevertheless, our algorithm still reduces the error.
The time performance is consistent with the M2DGR analysis (see Section 6.1.2). In dataset_01, because of the feature richness of the campus scene, the scan-to-map data association takes considerable time; however, the average time consumption is 64.379 ms, which satisfies the real-time requirement (the LIDAR samples at 10 Hz). In the road test of dataset_02, the average time consumption is 25.874 ms, which is computationally efficient. Our system achieves good estimation results at a low time cost.

7. Conclusions and Future Perspectives

According to the public dataset and our own data experiments, compared to LIDAR-only positioning (ALOAM), the positioning accuracy and robustness are significantly improved. Compared to a system in which IMU data only helps remove point cloud distortion (LeGo-LOAM), the tightly coupled LIO has lower drift. Compared to LIO-SAM, segmentation and clustering are used to mark feature information, so the point cloud matching is more accurate and the runtime of the scan-to-map module is much lower.
In this paper, we propose an improved LIO system. Firstly, it makes reasonable use of the feature information of the point cloud and effectively improves the accuracy of point cloud matching: point cloud registration is carried out after marking rod-shaped and planar feature information, which differs from existing LIDAR-inertial integration schemes, and the optimized extraction of edge points and planar points reduces the computation of scan-to-map and improves real-time performance. Secondly, prediction by IMU odometry and correction by LIDAR odometry improve the accuracy and frequency of the mapping module, inspired by LIO-SAM; compared with front-end odometry in the traditional scan-to-scan mode, the tightly coupled design greatly improves the performance of the LIO. Thirdly, the scan-to-map solution based on the graph optimization model is of great significance for speeding up the solution and decreasing error; the system therefore does not apply the Levenberg–Marquardt algorithm adopted in LeGo-LOAM and LIO-SAM. Fourthly, the robust back-end optimization, including an effective loop closure, suppresses the cumulative drift of the LIO odometry, and the IMU measurement residuals add more constraint information between IMU and LIDAR measurements compared to LIO-SAM. The sliding-window-based optimization ensures full use of the sensor information under real-time conditions. Experiments show that the real-time performance and accuracy of our algorithm exceed those of most state-of-the-art systems in various typical environments.
As seen in Table 2, the positioning accuracy (RMSE) is improved by 25–78% (the average RMSE is reduced by 64.45%) on the M2DGR street datasets compared with the current tightly coupled LIDAR SLAM algorithm (LIO-SAM). After optimizing the extraction of edge points and planar points, our system processes more frames and takes less time on average, effectively improving real-time performance. On our own scene datasets, the RMSE is reduced by 24.2% in dataset_01 and 25.0% in dataset_02.
We conclude that we have proposed a low-drift, real-time LIDAR-inertial positioning and mapping system, which is of great importance for indoor localization and other GNSS-denied areas. At the same time, it can provide high-precision point cloud maps for scene understanding in autonomous systems. The back-end optimization framework can also easily incorporate other measurements, such as GNSS, as global constraints.
In the future, we plan to improve the initialization process to reduce the initial error. Judging the rod-shaped and planar feature information is very important in this work, and the thresholds are currently chosen from experience; we will therefore focus on online threshold estimation and adaptive threshold selection. It is also worth mentioning that the LIO system is prone to drift in the Z direction in large scenes, so more constraints will be introduced to suppress this drift in our next step. Furthermore, according to recent studies [33,34,35,36], positioning and mapping systems based on solid-state LIDAR can significantly reduce hardware cost, so solid-state LIDAR-inertial systems are worth exploring.

Author Contributions

Conceptualization, H.L., S.P., W.G. and C.M.; methodology, H.L.; software, F.J. and C.M.; validation, S.P. and F.J.; formal analysis, H.L. and C.M.; investigation, W.G. and F.J.; resources, S.P., W.G. and H.L.; writing—original draft preparation, H.L. and X.L.; writing—review and editing, W.G. and X.L.; supervision, S.P. and W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research study was funded by the National Key Research and Development Program of China (No. 2021YFB3900804), the Research Fund of Ministry of Education of China and China Mobile (No. MCM20200J01) and the Fundamental Research Funds for the Central Universities (No. 2242021R41134).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age. IEEE Trans. Robot. 2016, 32, 1309–1332. [Google Scholar] [CrossRef]
  2. Zhang, J.; Singh, S. Low-drift and real-time lidar odometry and mapping. Auton. Robot. 2017, 41, 401–416. [Google Scholar] [CrossRef]
  3. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  4. Geiger, A.; Lenz, P.; Urtasun, R. Are We Ready for Autonomous Driving? The KITTI Vision Benchmark Suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012. [Google Scholar]
  5. Wang, H.; Wang, C.; Chen, C.L.; Xie, L. F-LOAM: Fast LiDAR Odometry and Mapping. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 4390–4396. [Google Scholar]
  6. Liu, T.; Wang, Y.; Niu, X.; Chang, L.; Zhang, T.; Liu, J. LiDAR Odometry by Deep Learning-Based Feature Points with Two-Step Pose Estimation. Remote Sens. 2022, 14, 2764. [Google Scholar] [CrossRef]
  7. Zhang, J.; Singh, S. Visual-Lidar Odometry and Mapping: Low-Drift, Robust, and Fast. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015. [Google Scholar]
  8. Koide, K.; Miura, J.; Menegatti, E. A portable three-dimensional LIDAR-based system for long-term and wide-area people behavior measurement. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419841532. [Google Scholar] [CrossRef]
  9. Biber, P.; Strasser, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No.03CH37453), Las Vegas, NV, USA, 27–31 October 2003. [Google Scholar]
  10. Shan, T.; Englot, B. LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018. [Google Scholar]
  11. Ye, H.; Chen, Y.; Liu, M. Tightly Coupled 3D Lidar Inertial Odometry and Mapping. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019. [Google Scholar]
  12. Qin, T.; Li, P.; Shen, S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef]
  13. Qin, C.; Ye, H.; Pranata, C.E.; Han, J.; Zhang, S.; Liu, M. LINS: A Lidar-Inertial State Estimator for Robust and Efficient Navigation. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020. [Google Scholar]
  14. Li, K.; Li, M.; Hanebeck, U.D. Towards High-Performance Solid-State-LiDAR-Inertial Odometry and Mapping. IEEE Robot. Autom. Lett. 2021, 6, 5167–5174. [Google Scholar] [CrossRef]
  15. Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October–24 January 2020. [Google Scholar]
  16. Kaess, M.; Ranganathan, A.; Dellaert, F. iSAM: Incremental Smoothing and Mapping. IEEE Trans. Robot. 2008, 24, 1365–1378. [Google Scholar] [CrossRef]
  17. Zhang, J.; Wen, W.; Huang, F.; Chen, X.; Hsu, L.-T. Coarse-to-Fine Loosely-Coupled LiDAR-Inertial Odometry for Urban Positioning and Mapping. Remote Sens. 2021, 13, 2371. [Google Scholar] [CrossRef]
  18. Douillard, B.; Underwood, J.; Vlaskine, V.; Quadros, A.; Singh, S. A Pipeline for the Segmentation and Classification of 3D Point Clouds. In Experimental Robotics: The 12th International Symposium on Experimental Robotics; Khatib, O., Kumar, V., Sukhatme, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; pp. 585–600. [Google Scholar]
  19. Himmelsbach, M.; Hundelshausen, F.v.; Wuensche, H.-J. Fast segmentation of 3D point clouds for ground vehicles. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010. [Google Scholar]
  20. Park, S.; Wang, S.; Lim, H.; Kang, U. Curved-Voxel Clustering for Accurate Segmentation of 3D LiDAR Point Clouds with Real-Time Performance. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 6459–6464. [Google Scholar]
  21. Chen, Z.; Li, Q.; Li, J.; Zhang, D.; Yu, J.; Yin, Y.; Lv, S.; Liang, A. IMU-Aided Registration of MLS Point Clouds Using Inertial Trajectory Error Model and Least Squares Optimization. Remote Sens. 2022, 14, 1365. [Google Scholar] [CrossRef]
  22. Bogoslavskyi, I.; Stachniss, C. Fast range image-based segmentation of sparse 3D laser scans for online operation. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016. [Google Scholar]
  23. Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. On-Manifold Preintegration for Real-Time Visual—Inertial Odometry. IEEE Trans. Robot. 2017, 33, 1–21. [Google Scholar] [CrossRef]
  24. Bentley, J.L. Multidimensional binary search trees used for associative searching. Commun. ACM 1975, 18, 509–517. [Google Scholar] [CrossRef]
  25. Barfoot, T.D. State Estimation for Robotics: A matrix Lie Group Approach; Cambridge University Press: Cambridge, UK, 2016. [Google Scholar]
  26. Sibley, G.; Matthies, L.; Sukhatme, G. Sliding window filter with application to planetary landing. J. Field Robot. 2010, 27, 587–608. [Google Scholar] [CrossRef]
  27. Będkowski, J.; Pełka, M.; Majek, K.; Fitri, T.; Naruniec, J. Open source robotic 3D mapping framework with ROS—Robot Operating System, PCL—Point Cloud Library and Cloud Compare. In Proceedings of the 2015 International Conference on Electrical Engineering and Informatics (ICEEI), Denpasar, Indonesia, 10–11 August 2015. [Google Scholar]
  28. Sanfourche, M.; Vittori, V.; le Besnerais, G. evo: A realtime embedded stereo odometry for MAV applications. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; Available online: https://github.com/MichaelGrupp/evo (accessed on 25 May 2022).
  29. Kaess, M.; Johannsson, H.; Roberts, R.; Ila, V.; Leonard, J.; Dellaert, F. iSAM2: Incremental smoothing and mapping with fluid relinearization and incremental variable reordering. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011. [Google Scholar]
  30. Agarwal, S.; Mierle, K. Ceres Solver. Available online: http://ceres-solver.org (accessed on 25 May 2022).
  31. Yin, J.; Li, A.; Li, T.; Yu, W.; Zou, D. M2DGR: A Multi-Sensor and Multi-Scenario SLAM Dataset for Ground Robots. IEEE Robot. Autom. Lett. 2022, 7, 2266–2273. Available online: https://github.com/SJTU-ViSYS/M2DGR (accessed on 25 May 2022). [CrossRef]
  32. Moré, J.J. The Levenberg-Marquardt Algorithm: Implementation and Theory; Springer: Berlin/Heidelberg, Germany, 1977. [Google Scholar]
  33. Bai, C.; Xiao, T.; Chen, Y.; Wang, H.; Zhang, F.; Gao, X. Faster-LIO: Lightweight Tightly Coupled Lidar-Inertial Odometry Using Parallel Sparse Incremental Voxels. IEEE Robot. Autom. Lett. 2022, 7, 4861–4868. [Google Scholar] [CrossRef]
  34. Xu, W.; Zhang, F. FAST-LIO: A Fast, Robust LiDAR-Inertial Odometry Package by Tightly-Coupled Iterated Kalman Filter. IEEE Robot. Autom. Lett. 2021, 6, 3317–3324. [Google Scholar] [CrossRef]
  35. Xu, W.; Cai, Y.; He, D.; Lin, J.; Zhang, F. FAST-LIO2: Fast Direct LiDAR-Inertial Odometry. IEEE Trans. Robot. 2022, 38, 2053–2073. [Google Scholar] [CrossRef]
  36. Lin, J.; Zhang, F. Loam livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020. [Google Scholar]
Figure 1. Overall framework of our LIO system.
Figure 2. Illustration of the planar feature information. (a) The angle between the lines for judging the surface; (b) red points are the planar features in the scan.
Figure 3. Illustration of the rod-shaped feature clustering. (a) The fast and accurate method for segmentation; (b) green points are the edge features in the scan; (c) the BFS search method in the current scan.
Figure 4. Illustration of the edge point-to-line residual. $u_\varepsilon^{sm}$ is the main direction of the line, and $p_\varepsilon^{sm}$ is the geometric center of the five nearest points in the local map.
Figure 5. Illustration of the planar point-to-plane residual. $p_s^{sm}$ is the geometric center of the five planar points; $u_s^{sm}$ is the direction of the normal vector.
Figure 6. The optimization contains prior, LIDAR, IMU and loop terms. The prior term is generated by marginalization. LIDAR observations provide scan-to-scan constraints and IMU pre-integration forms constraints between keyframes. Loop closure reduces drift during long-term running.
Figure 7. The sensor suite of M2DGR. Sensors are strictly calibrated and share the same time stamps.
Figure 8. The trajectories of the four algorithms on street_01. Our trajectory is closest to the ground truth. The detail view is near the end of this test.
Figure 9. Illustration of the absolute trajectory error variation. Our system's error is kept at a low level.
Figure 10. Illustration of a variety of indicators. The superiority of our algorithm can be seen from the statistical graphs.
Figure 11. Box plot reflecting the distribution of the error data.
Figure 12. The detailed trajectories of the ground truth and our system in street_01. They align well.
Figure 13. Error changes in three directions. The initial error is slightly larger; after that the error stays small.
Figure 14. Processing time of each scan. The red curve is clearly lower than the other three.
Figure 15. Our sensor suite; all components have hardware time synchronization. (a) The IMU is mounted below the LIDAR, and the integrated navigation device is used to obtain the ground truth. (b) The device is mounted on top of the vehicle.
Figure 16. Trajectory and mapping generated by our system. (a) Our trajectory and the ground truth; colors represent error values. (b) The mapping result rendered with LIDAR intensity values from the top view during the positioning process. (c) The top panel shows the specific real-world environment picked out of the whole trajectory; the bottom panel shows the corresponding detail from the LIDAR map.
Figure 17. Trajectory and mapping in dataset_02. (a) Our trajectory and the ground truth on the urban road. (b) The mapping result rendered with LIDAR intensity values from a bird's-eye view. (c) The road scene.
Table 1. The dataset contains all kinds of scenes.

| | Street_01 | Street_04 | Street_07 | Street_10 | Gate_02 | Circle_02 | Hall_05 |
|---|---|---|---|---|---|---|---|
| Durations/s | 1028 | 858 | 929 | 910 | 327 | 244 | 402 |
| Description of features | Street and buildings, zigzag, long-term | Around lawn, loop back, long-term | Zigzag, long-term | Zigzag, long-term | Loop back, around gate | Circle, rotation | Indoor, large overlap |
Table 2. Absolute trajectory error (ATE) RMSE (m) of the four algorithms in seven datasets.

| | Street_01 | Street_04 | Street_07 | Street_10 | Gate_02 | Circle_02 | Hall_05 |
|---|---|---|---|---|---|---|---|
| ALOAM | 7.661 | 3.582 | 27.590 | 22.075 | 0.361 | 1.391 | 1.029 |
| LeGo-LOAM | 3.269 | 1.193 | 14.583 | 31.024 | 0.485 | **0.288** | 1.034 |
| LIO-SAM | 6.390 | 1.133 | 4.693 | 2.569 | 0.326 | 0.618 | 1.053 |
| OURS | **1.362** | **0.836** | **1.579** | **1.479** | **0.313** | 0.409 | **0.980** |
Table 3. The time consumption (ms) in seven datasets. All values are recorded on the same platform.

| | Street_01 | Street_04 | Street_07 | Street_10 | Gate_02 | Circle_02 | Hall_05 |
|---|---|---|---|---|---|---|---|
| ALOAM | 295.875 | 250.093 | 309.164 | 251.751 | 230.516 | 294.688 | 106.374 |
| LeGo-LOAM | 132.986 | 93.184 | 149.694 | 124.700 | 112.126 | 122.932 | 57.938 |
| LIO-SAM | 61.900 | 43.461 | 83.957 | 76.541 | 57.518 | 90.450 | 21.308 |
| OURS | **37.600** | **27.427** | **66.093** | **50.888** | **35.215** | **34.628** | **14.296** |
Table 4. The scan-to-map frames in street_01.

| | ALOAM | LeGo-LOAM | LIO-SAM | OURS |
|---|---|---|---|---|
| Scan-to-map frames | 2788 | 2566 | 5128 | 5133 |
Table 5. Absolute trajectory error (ATE) RMSE (m) in our real-world experiments.

| | Durations/s | Description of Features | A-LOAM | LeGo-LOAM | LIO-SAM | OURS |
|---|---|---|---|---|---|---|
| dataset_01 | 678 | campus | 4.040 | 1.264 | 1.272 | **0.964** |
| dataset_02 | 240 | wide road | 6.391 | 5.933 | 4.205 | **3.152** |