Article

Hierarchical Registration Method for Airborne and Vehicle LiDAR Point Cloud

1
Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, Nanjing University, Nanjing 210093, China
2
Collaborative Innovation Center for the South Sea Studies, Nanjing University, Nanjing 210093, China
3
Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing University, Nanjing 210093, China
4
Department of Geographic Information Science, Nanjing University, Nanjing 210093, China
5
Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing 210023, China
*
Authors to whom correspondence should be addressed.
Remote Sens. 2015, 7(10), 13921-13944; https://0-doi-org.brum.beds.ac.uk/10.3390/rs71013921
Submission received: 13 July 2015 / Revised: 6 October 2015 / Accepted: 15 October 2015 / Published: 23 October 2015
(This article belongs to the Special Issue Lidar/Laser Scanning in Urban Environments)

Abstract

A new hierarchical method for the automatic registration of airborne and vehicle light detection and ranging (LiDAR) data is proposed, using three-dimensional (3D) road networks and 3D building contours. Firstly, 3D road networks are extracted from airborne LiDAR data and registered with vehicle trajectory lines. During the registration of airborne road networks and vehicle trajectory lines, a network matching rate is introduced for the determination of a reliable transformation matrix. Then, the RIMM (reversed iterative mathematic morphological) method and a height value accumulation method are employed to extract 3D building contours from airborne and vehicle LiDAR data, respectively. The Rodrigues matrix and the collinearity equation are used for the determination of conjugate building contours. On this basis, a rule is defined to determine reliable conjugate contours, which are finally used for the fine registration of airborne and vehicle LiDAR data. The experiments show that the coarse registration method with 3D road networks contributes a reliable initial registration result, and the fine registration using 3D building contours obtains a final registration result with high reliability and geometric accuracy.


1. Introduction

The advent of the Light Detection and Ranging (LiDAR) system represents a technological breakthrough in acquiring three-dimensional (3D) data of surface objects in a rapid and cost-effective manner [1]. By now, a variety of LiDAR techniques have emerged, including satellite-based laser scanning (SLS), airborne laser scanning (ALS), vehicle laser scanning (VLS), and terrestrial laser scanning (TLS). The airborne LiDAR system, owing to its extraordinary capability of gathering highly accurate and dense elevation measurements at large scales, has become an important data source for ground filtering [2], 3D reconstruction [3,4], and object identification [5,6]. The vehicle LiDAR system, mounted on a moving vehicle, can quickly obtain detailed facade information of surface objects with an extremely high point density [7,8]. On one hand, both airborne and vehicle LiDAR cover large scanning areas, which makes their integration practical. On the other hand, vehicle LiDAR captures massive facade information but little top information, whereas airborne LiDAR captures rich top information but little facade information [9]. Considering the similarity and complementarity of airborne and vehicle LiDAR data, their integration can lead to new applications, such as a new representation of a city, named “point city” (shown in Figure 1). Point cloud registration is a prerequisite for integrating airborne and vehicle LiDAR data.
In general, point cloud registration methods can be divided into two types: auxiliary methods and data-based methods. Auxiliary data (e.g., spectral imagery and global positioning system (GPS) data) and artificial targets have been widely used in auxiliary methods to assist the registration of point clouds. In [10], a method is proposed for the registration of terrestrial LiDAR point clouds based on local features extracted from images. Firstly, characteristic two-dimensional (2D) points based on the scale invariant feature transform (SIFT) are extracted from the reflectance images of two scans. These 2D points are then projected into 3D space using interpolated range information. Finally, the transformation parameters of the point clouds are estimated from the 3D-2D correspondences. Some studies use GPS data, which can achieve centimeter accuracy, to directly georeference LiDAR point clouds [11,12]. However, in complex urban and forest environments, GPS receivers frequently lose lock, which reduces the positioning accuracy of the GPS system. For this reason, after using GPS data for registration, a multi-station registration method that minimizes the surface error of the point cloud data is applied [13,14]. As auxiliary data is required in the above work, these methods are limited in their application scope.
The data-based method conducts registration based on the point clouds themselves, without using auxiliary data. The iterative closest point (ICP) algorithm is a classic data-based method [15], and many variants of it have appeared since. A flaw of these ICP-based approaches is that they require a good initial alignment of the point clouds, which is not always easy to obtain; otherwise, the iteration may converge to an incorrect local minimum. Another branch of data-based registration lies in the use of local normal and curvature features [16,17,18,19]. The basic idea of these methods is that conjugate features are determined based on the changes in geometric curvature and the approximate normal vector of the local surface around each point.
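To make the initial-alignment issue concrete, the following is a minimal sketch of the classic point-to-point ICP loop of [15], written in Python with NumPy and SciPy. The function names, iteration cap, and convergence test are illustrative assumptions rather than the implementation of any cited work.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst via SVD."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=50, tol=1e-6):
    """Point-to-point ICP; it converges to a local minimum, so the result
    depends strongly on the initial alignment of the two point clouds."""
    tree = cKDTree(target)
    pts, prev_err = source.copy(), np.inf
    for _ in range(iters):
        dist, idx = tree.query(pts)                  # closest-point matches
        R, t = best_rigid_transform(pts, target[idx])
        pts = pts @ R.T + t
        if abs(prev_err - dist.mean()) < tol:        # error stopped improving
            break
        prev_err = dist.mean()
    return pts
```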
Figure 1. An example of “point city” with registered airborne and vehicle LiDAR points. (a) Airborne LiDAR points; (b) Vehicle LiDAR points; (c) Building X in “point city” (blue points: airborne data, green points: vehicle data); (d) Building Y in “point city” (blue points: airborne data, green points: vehicle data).
In addition, some data-based methods conduct registration using points, lines, planes, and other geometric features extracted from the point clouds. (1) Point features are the basic control features and have been widely used in point cloud registration. Barnea and Filin [20] conducted the registration of terrestrial LiDAR data by extracting key points from the panoramic range image. Cheng et al. [21] extracted 3D building corners from the intersection of 3D building boundaries in airborne and terrestrial LiDAR data, and matched the corners through an automatic iterative process. In [22], a shiftable leading point method is proposed for the high-accuracy registration of 3D corners. (2) Lines can serve as essential features for the registration of point clouds in areas with abundant man-made structures, and better registration results can be obtained with line features than with point features [23]. Jaw and Chuang [24] provided a line-based registration approach, in which 3D line features are matched with angle and distance as constraints. Lee et al. [25] extracted 3D line features through the intersection of neighboring planar patches and used them to adjust the discrepancies between overlapping data strips. Recently, Bucksch and Khoshelham [26] extracted skeletons from tree branches and utilized them for the precise registration of point clouds. As line features capture the geometric evidence of edges, which are prominent and distinctive, the matching of conjugate lines can be performed precisely. (3) Plane features are also used for the registration of point clouds. Teo and Huang [27] proposed a scheme for the registration of airborne and terrestrial LiDAR points using a least squares 3D surface registration technique to minimize the distances between the surfaces of the two datasets. In von Hansen's work, single plane correspondences are used to find the transformation parameters under some additional assumptions [28]. Zhang et al. [29] extracted plane features using a segmentation method and adopted the Rodrigues matrix for the registration. Brenner et al. [30] showed that three plane matches suffice to determine the transformation parameters between two point clouds, and compared their method with the point-based normal distributions transform method [31]; the comparison showed that the plane-based method tends to be more accurate, while the point-based method is conceptually simple and fast. Furthermore, Wu et al. [32] proposed a registration method for airborne and terrestrial LiDAR point clouds based on building profiles, and achieved registration accuracies of 0.15 to 0.5 m in the horizontal direction and 0.20 m in the vertical direction.
However, some technical challenges remain for the integration of airborne and vehicle data, owing to the different perspectives, the huge difference in point density, the different coverage, and the discrete nature of the points [33]. Carlberg et al. [34] and Hu et al. [35] conducted the registration of airborne and vehicle LiDAR points by using GPS data directly. However, Früh and Zakhor [36] argue that GPS data is not reliable enough in urban street canyons; in their method, an aerial image was therefore selected as a reference and the Monte Carlo localization method was applied for the registration. Studies on the registration of LiDAR data with imagery also provide good inspiration for this topic [37,38]. In the existing literature, auxiliary data is often used to support the registration. In view of the limited availability and precision of such data, it is desirable to dispense with auxiliary data and instead seek conjugate features for automatic registration. It is therefore necessary to find reliable matching primitives (e.g., roads and walls) in the LiDAR point clouds.
In complex urban environments, the GPS receiver of the vehicle LiDAR platform frequently loses lock, which reduces the measurement accuracy of the whole system and causes drift of the local scanning point cloud. The vehicle LiDAR data used in this study has been corrected using a 1:500 topographic map, so its coordinates are in a local coordinate system, while the initial airborne data is in the WGS-84 coordinate frame. In this study, a hierarchical registration method is proposed, consisting of coarse registration with 3D road networks and fine registration with 3D building contours.

2. Method

Based on the above consideration, a hierarchical registration method (Figure 2), including coarse registration with 3D road networks and fine registration with 3D building contours, is used for the registration of airborne and vehicle LiDAR points.
Figure 2. A flowchart of the proposed registration method.

2.1. Coarse Registration with 3D Road Networks

Two sets of 3D road networks, which contain road links and intersections, are used for the coarse registration of the large-range LiDAR data. Three-dimensional airborne road networks are extracted from the airborne LiDAR data using an existing method [39], while the vehicle trajectory lines provided along with the vehicle LiDAR points are used as the 3D vehicle road networks. Both are used for the coarse registration. Since the coarse registration offers only a rough registration relationship, road networks with high positioning precision are not necessary.

2.1.1. 3D Road Networks from Airborne LiDAR

A method proposed in [39] for the automatic detection and vectorization of roads from airborne LiDAR data is used in this section. A digital surface model (DSM) is generated using a morphology algorithm, and intensity values are used for the detection of road points. Due to the homogeneous and consistent nature of roads, a local point density and a minimum bounding rectangle are introduced to finalize the detection. On this basis, the road centerline is extracted by convolving the binary road image with a complex-valued disk named the phase coded disk (PCD). Both the width and the direction of the road at the centerline are obtained from the convolution. The road direction enables the vectorization of the magnitude image using the described tracing algorithm; in fact, any classified road network captured in a binary image can be vectorized with the PCD method.
After the 2D airborne road networks are obtained, a 3D buffer zone along the networks, forming a cylinder, is constructed. The buffer radius is usually several times the average point spacing of the point cloud, and the elevations of the points inside the cylinder are averaged to obtain the 3D road networks.
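As an illustration of this buffering step, the sketch below lifts 2D network nodes to 3D by averaging the elevations of the LiDAR points that fall inside a vertical cylinder around each node; the radius value is a hypothetical choice of a few times the average point spacing.

```python
import numpy as np
from scipy.spatial import cKDTree

def lift_network_to_3d(nodes_2d, points_xyz, radius=1.5):
    """Assign each 2D road-network node the mean elevation of the LiDAR
    points inside a vertical cylinder of the given (hypothetical) radius."""
    tree = cKDTree(points_xyz[:, :2])      # index the points by XY only
    nodes_3d = []
    for x, y in nodes_2d:
        idx = tree.query_ball_point([x, y], radius)
        z = points_xyz[idx, 2].mean() if idx else np.nan  # no points: unknown
        nodes_3d.append((x, y, z))
    return np.asarray(nodes_3d)
```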

2.1.2. Coarse Registration with 3D Road Networks

In this section, an intersection-point registration method, with 3D road networks as constraints, is introduced for the coarse registration. In this method, the 3D intersection points derived from the airborne and vehicle road networks, respectively, are used for calculating the transformation matrices, while the road networks themselves are regarded as constraints to evaluate the reliability of the matrices.
As shown in Figure 3a,b, the 3D intersection points of the airborne and vehicle road networks are PA = {PAi, i = 0, 1, 2, …, μ} and PB = {PBi, i = 0, 1, 2, …, ν}, with their corresponding road networks RA = {RAi, i = 0, 1, 2, …, m} and RB = {RBi, i = 0, 1, 2, …, n}, respectively. The coarse registration of airborne and vehicle LiDAR with road networks is conducted as follows:
(1) Selecting intersections.
Choose three pairs of points from the intersection sets PA and PB randomly and calculate the translation matrix T1 and rotation matrix R1.
(2) Road network transformation.
Use T1 and R1 to convert the road networks RB. The converted road network RC = {RCi, i = 0,1,2,…,n} is obtained.
(3) Calculating the match rate of 3D road networks.
Determine the matching rate of the road networks RA and RC. In Figure 3c, R1 and R2 refer to road segments selected from RA and RC, respectively. Starting from endpoint NA0 of segment R1, 3D section planes are drawn at an interval θ. Each section plane is drawn as a circle whose radius is set according to the positioning precision of the extracted road networks: if precise road networks are provided, a small radius is set, and vice versa. For each node point (e.g., NA0 and NA1), if its section plane intersects the other road segment, the reciprocal of the distance between the foot point and the intersection point is recorded as the matching rate

$$p_{me} = \begin{cases} 1/d_i, & d_i > 0.1 \\ 10, & d_i \le 0.1 \end{cases}$$

where $d_i$ is the distance between nodes $N_{Ai}$ and $N_{Bi}$; otherwise, $p_{me} = 0$. If several roads intersect one section plane, giving $p_{me1}, p_{me2}, \ldots, p_{men}$, the largest value is taken as the matching rate of that node. For road segments R1 and R2, the matching rate is $l_{me} = \sum_{i=1}^{k} p_{mei}$, where $k$ is the number of nodes in R1. The final matching rate between the two road networks is $t_{me} = \sum_{j=1}^{m} l_{mej}$, where $m$ is the number of road segments in RA.
(4) Determination of optimal transformation.
Repeat (1)–(3) until the matching rate reaches a certain threshold or all intersection pairs have been iteratively selected from PA and PB. The transformation matrix with the highest matching rate is selected as the optimal transformation matrix. All points of PB are transformed and recorded as PC = {PCi, i = 0, 1, 2, …, ν}.
(5) Coarse registration.
The successfully matched points from PA and PC are selected as MA = {MAi, i = 0, 1, 2, …, m} and MC = {MCi, i = 0, 1, 2, …, n}, respectively. The least mean square (LMS) method is used to calculate the registration relationship between set MA and set MC. The acquired transformation matrices R and T are used to transform the LiDAR points, which completes the coarse registration (a code sketch of this search follows Figure 3).
Figure 3. Coarse registration of 3D road networks. (a) Airborne road networks and intersections; (b) Vehicle road networks and intersections; (c) Matching rate of two single road segments from 3D road networks.
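The search in steps (1)–(4) can be sketched as follows, reusing `best_rigid_transform` from the ICP sketch in the Introduction. As simplifying assumptions, the matching rate is computed from nearest-node reciprocal distances (with the cap of 10 from the formula above) instead of full 3D section planes, and candidate triplets are drawn at random rather than enumerated exhaustively.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_rate(nodes_a, nodes_c):
    """Simplified network matching rate: reciprocal distance from each node
    of network A to the nearest node of the transformed network C, capped
    at 10 for distances below 0.1 m, summed over all nodes."""
    d, _ = cKDTree(nodes_c).query(nodes_a)
    return np.where(d > 0.1, 1.0 / d, 10.0).sum()

def coarse_search(pa, pb, nodes_a, nodes_b, trials=2000, seed=0):
    """Steps (1)-(4): try triplets of intersection pairs and keep the rigid
    transform (R, t) with the highest road-network matching rate."""
    rng = np.random.default_rng(seed)
    best = (-np.inf, None, None)
    for _ in range(trials):
        ia = rng.choice(len(pa), 3, replace=False)   # three airborne nodes
        ib = rng.choice(len(pb), 3, replace=False)   # three vehicle nodes
        R, t = best_rigid_transform(pb[ib], pa[ia])
        rate = match_rate(nodes_a, nodes_b @ R.T + t)
        if rate > best[0]:
            best = (rate, R, t)
    return best   # (matching rate, rotation, translation)
```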

2.2. Fine Registration with 3D Building Contours

In this section, on the basis of the above coarse registration, a local fine registration is performed with 3D building contours. Coarse registration with road networks provides a rough registration result; nevertheless, owing to the limited positioning precision of the extracted road networks, the coarse registration of the point clouds is not fully satisfactory. In addition, it is difficult to achieve good registration results by applying a single registration relationship to a large-scale dataset.

2.2.1. 2D Building Contours from Vehicle LiDAR

In this section, the 2D building contours are extracted from the vehicle LiDAR data in two filtering steps, elevation difference filtering and height value accumulation, followed by 2D contour extraction.
(1)
Elevation difference filtering.
Compared with other surface features, obvious elevation differences exist along building contours, so the contours can be extracted by setting an elevation difference threshold. Here a 2D regular grid is constructed and the point cloud is projected onto it. The elevation differences of the points inside each grid cell are analyzed, and the cells with elevation differences larger than the preset threshold are retained.
(2)
Height value accumulation.
Disturbed by trees and other objects, the extraction result obtained from the elevation difference alone is not satisfactory. As shown in Figure 4a, the building contours of both stage-style and herringbone roofs lie within a certain elevation range, so exact building contours can be acquired by finding the corresponding elevation range. For this purpose, a further filtering procedure based on height value accumulation is proposed (a code sketch follows at the end of this subsection).
Figure 4. Theory of height value accumulation. (a) A sample of building roofs; (b) A sample of height value accumulation; (c) Height histogram; (d) Extraction of elevation range of building contours.
In Figure 4b, the LiDAR points of a building facade and a tree are presented. As we can see, the building contour points are divided into four grid cells and the tree points into three. The cells corresponding to the building share a consistent highest point (Z3), while those of the tree vary. More importantly, the cells corresponding to buildings contain an obviously large number of points. If the point counts of the cells are accumulated over elevation ranges, the ranges containing the buildings' highest points stand out as peaks, from which the buildings can be extracted. Details are as follows:
Step 1: Projecting points to 2D grids.
Construct 2D regular grids and divide the projected points. Then, select the highest point Zi and calculate the point number Ni of each grid.
Step 2: Calculating the elevation range.
Calculate the elevation range (Zmin, Zmax) from the highest and lowest points in the point cloud. Set a small interval Zs and divide the elevation range, obtaining the set S = {Sj, j = 1, 2, …, n}, where n = (Zmax − Zmin)/Zs.
Step 3: Height value accumulation.
For all the grid cells, if the highest point Zi falls within the interval Sj, the accumulation value Accj is updated as Accj = Accj + Ni; the resulting height histogram is shown in Figure 4c.
Step 4: Obtaining the elevation range interval of buildings.
The median of the height value accumulation in Figure 4c is set as a threshold, and the bins smaller than the threshold are eliminated. The first derivative is then calculated and the peak intervals are obtained. Finally, the elevation range intervals of the buildings (Figure 4d) are used to extract the building contours.
(3)
2D contour extraction.
For the retained grid cells, the points inside them are collected, a random sample consensus (RANSAC) algorithm [40] is used to fit a plane, and the 2D building contours are obtained by projecting the plane.
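The sketch below combines the elevation difference filtering of (1) with the height value accumulation of Steps 1–3 above. The cell size and elevation threshold mirror the values used later in the experiments, while the bin width `z_step` is a hypothetical stand-in for Zs; the function is an illustrative reading of the method, not the authors' implementation.

```python
import numpy as np

def building_grids(points, cell=1.0, dz_thresh=15.0, z_step=0.5):
    """Keep grid cells whose internal elevation range exceeds dz_thresh,
    then accumulate each kept cell's point count into the height-histogram
    bin of its highest point; roof heights stand out as peaks."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    cells = {}                                   # (i, j) -> (zmin, zmax, n)
    for key, z in zip(map(tuple, ij), points[:, 2]):
        zmin, zmax, n = cells.get(key, (np.inf, -np.inf, 0))
        cells[key] = (min(zmin, z), max(zmax, z), n + 1)
    kept = {k: v for k, v in cells.items() if v[1] - v[0] > dz_thresh}
    zs = np.array([v[1] for v in kept.values()])   # highest point per cell
    ns = np.array([v[2] for v in kept.values()])   # point count per cell
    bins = np.arange(zs.min(), zs.max() + z_step, z_step)
    acc, _ = np.histogram(zs, bins=bins, weights=ns)
    return kept, bins, acc
```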

2.2.2. 2D Building Contours from Airborne LiDAR

In this section, the 2D building contours are extracted from airborne LiDAR data by using the RIMM algorithm [9].
Airborne LiDAR points provide rich top information of buildings, from which reliable building areas can be extracted. Here, building areas are extracted using the RIMM algorithm. In this method, an opening operation is first conducted with a window larger than all the buildings in the area. The opening operation is then iterated while gradually decreasing the window size at a fixed step length, so that surface features are extracted when the window fits their size. The elevation differences between two consecutive iterations are compared, and parts with elevation differences exceeding the minimum building height are regarded as buildings. Binary images are generated from the extracted building areas, and a Hough transform is conducted to obtain the 2D airborne building contours.
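A rough sketch of this reversed iteration is given below, using SciPy's grey-scale opening on a gridded DSM. Window sizes are expressed in DSM cells here, and the loop is a simplified reading of the RIMM idea [9], not its reference implementation.

```python
import numpy as np
from scipy import ndimage

def rimm_buildings(dsm, max_win=106, step=10, min_height=3.0):
    """Iterate grey-scale openings with a shrinking window; cells whose
    height rises by more than min_height between consecutive openings are
    flagged as features that fit the current window size (buildings)."""
    building = np.zeros(dsm.shape, dtype=bool)
    prev = ndimage.grey_opening(dsm, size=(max_win, max_win))
    for win in range(max_win - step, 0, -step):
        cur = ndimage.grey_opening(dsm, size=(win, win))
        building |= (cur - prev) > min_height
        prev = cur
    return building
```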

2.2.3. Extraction of 3D Building Contours

The aim of this section is to extract the 3D building contours. As building contours are extracted from 2D grids, several separate 3D contours may be merged into a single 2D contour, as shown in Figure 5a. Thus, a 3D contour extraction method based on point elevation is introduced [21]. Details are as follows.
(1)
Projection and division of points.
A plane perpendicular to the XY plane is constructed, and point clouds inside the contour grids are projected onto this plane (Figure 5b). The projected points are divided into blocks with a small interval, which is usually slightly larger than the average point spacing.
(2)
Points clustering.
The elevation and gradient differences of neighboring blocks are calculated from the highest point in each block. If the differences are small enough, the neighboring blocks are clustered together; otherwise, a new cluster is generated (a sketch follows Figure 5).
(3)
3D contour fitting.
The RANSAC approach is used to fit points of each cluster and 3D building contours are obtained.
Figure 5. Contour segmentation using point elevation. (a) A sample of a projected 2D contour; (b) A sample of 3D contour segments.
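The block-wise clustering of step (2) can be sketched as follows for one projected contour, assuming each point is parameterized by its distance along the contour direction. For simplicity only the elevation difference is used here, and the block size and jump threshold are hypothetical values.

```python
import numpy as np

def split_contour_blocks(proj_d, z, block=0.6, dz=2.0):
    """Cut a projected contour into 3D segments wherever the highest points
    of neighbouring blocks differ by more than dz; returns clusters of
    block indices."""
    order = np.argsort(proj_d)               # sort points along the contour
    d, h = proj_d[order], z[order]
    idx = np.floor(d / block).astype(int)
    tops = [h[idx == i].max() for i in np.unique(idx)]  # highest point/block
    segments, cur = [], [0]
    for i in range(1, len(tops)):
        if abs(tops[i] - tops[i - 1]) > dz:  # elevation jump: new cluster
            segments.append(cur)
            cur = []
        cur.append(i)
    segments.append(cur)
    return segments
```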

2.2.4. Fine Registration with 3D Building Contours

Fine registration with 3D building contours consists of two steps: searching for conjugate contours and fine registration with reliable conjugate contours. Firstly, two pairs of conjugate contours are used to acquire a registration relationship of airborne and vehicle contours, and all conjugate contours can be obtained after that. On this basis, a rule is defined to determine reliable conjugate contours, which are finally used for the fine registration of airborne and vehicle LiDAR.
The airborne and vehicle building contours are recorded as LA = {LAi, i = 1, 2, …, m} and LB = {LBi, i = 1, 2, …, n}, respectively. The fine registration with building contours proceeds as follows:
(A) Searching for conjugate contours
(1) Selection of two pairs of conjugate contours.
After the coarse registration with 3D road networks, a rough registration relationship between the two datasets is available, so the further fine registration with building contours can be conducted within a limited search space. Randomly select two candidate pairs of conjugate contours from LA and LB, recorded as (LA1, LB1) and (LA2, LB2). The following conditions should be met: (a) each candidate pair satisfies

$$\begin{cases} l_{Ang} \le Thre_{ang} \\ l_{Dist} \le Thre_{dist} \\ l_{Dif} \le Thre_{dif} \end{cases}$$

where $l_{Ang} = \arccos\left(\frac{l_1 \cdot l_2}{|l_1||l_2|}\right)$ is the angle between the straight lines on which the two contours lie, $l_{Dist} = \left|\frac{(l_1 \times l_2) \cdot \overrightarrow{P_1P_2}}{|l_1 \times l_2|}\right|$ is the distance between those lines, and $l_{Dif} = \big||l_1| - |l_2|\big|$ is the length difference of the two contours, with $l_1$ and $l_2$ the direction vectors of the two contours; (b) LA1 and LA2 are neither parallel nor coplanar, nor are LB1 and LB2; (c) the angle and distance between LA1 and LA2 equal those between LB1 and LB2.
(2) Calculation of rotation matrix.
Use the two pairs of conjugate segments to calculate the rotation matrix. Here the vector-based transformation model [29] is adopted, in which the elements of the rotation matrix R are determined by three rotation parameters. Suppose there is an antisymmetric matrix

$$S = \begin{bmatrix} 0 & -\gamma & \beta \\ \gamma & 0 & -\alpha \\ -\beta & \alpha & 0 \end{bmatrix}$$

where α, β, γ are independent. The rotation matrix R can be expressed through the elements of S as

$$R = \frac{1}{\Delta}\begin{bmatrix} 1+\alpha^{2}-\beta^{2}-\gamma^{2} & 2\alpha\beta-2\gamma & 2\alpha\gamma+2\beta \\ 2\alpha\beta+2\gamma & 1-\alpha^{2}+\beta^{2}-\gamma^{2} & 2\beta\gamma-2\alpha \\ 2\alpha\gamma-2\beta & 2\beta\gamma+2\alpha & 1-\alpha^{2}-\beta^{2}+\gamma^{2} \end{bmatrix} \tag{1}$$

where $\Delta = 1 + \alpha^{2} + \beta^{2} + \gamma^{2}$; R is the Rodrigues matrix. The antisymmetric matrix and the Rodrigues matrix satisfy

$$\begin{cases} S^{T} = -S \\ R = (I - S)^{-1}(I + S) \end{cases} \tag{2}$$

where I is the 3 × 3 identity matrix and $R R^{T} = I$. In this study, spatial direction vectors are first obtained from the 3D building contours. Assuming that the unit vectors corresponding to a pair of conjugate contours are v and w, we have

$$v = R w \tag{3}$$

Substituting property 2) of Equation (2) into Equation (3) gives

$$L = S b \tag{4}$$

where L = v − w and b = v + w. Writing out S in Equation (4) yields

$$\begin{bmatrix} 0 & b_{z} & -b_{y} \\ -b_{z} & 0 & b_{x} \\ b_{y} & -b_{x} & 0 \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \\ \gamma \end{bmatrix} = \begin{bmatrix} L_{x} \\ L_{y} \\ L_{z} \end{bmatrix} \tag{5}$$

where $(b_{x}, b_{y}, b_{z})$ are the components of b and $(L_{x}, L_{y}, L_{z})$ those of L. The parameter vector x = (α, β, γ) can then be estimated by linear least squares,

$$x = (M^{T} M)^{-1} M^{T} L \tag{6}$$

where M is the coefficient matrix of Equation (5), stacked over both pairs of conjugate contours, and the rotation matrix R is finally obtained from Equations (1) and (6).
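A minimal sketch of this estimation, following Equations (3)–(6), is given below; it assumes the conjugate contours are supplied as pairs of unit direction vectors (NumPy arrays) and stacks the linear system over all pairs.

```python
import numpy as np

def rotation_from_contour_pairs(v_list, w_list):
    """Estimate R with v = R w: stack L = S b for every conjugate pair,
    least-squares the parameters (alpha, beta, gamma), then rebuild R via
    the Cayley transform R = (I - S)^(-1)(I + S) of Equation (2)."""
    M_rows, L_rows = [], []
    for v, w in zip(v_list, w_list):
        b, L = v + w, v - w
        M_rows.append(np.array([[ 0.0,   b[2], -b[1]],
                                [-b[2],  0.0,   b[0]],
                                [ b[1], -b[0],  0.0]]))   # Equation (5)
        L_rows.append(L)
    M, Lv = np.vstack(M_rows), np.concatenate(L_rows)
    (a, bb, g), *_ = np.linalg.lstsq(M, Lv, rcond=None)   # Equation (6)
    S = np.array([[0.0, -g,   bb],
                  [ g,   0.0, -a],
                  [-bb,  a,   0.0]])
    I = np.eye(3)
    return np.linalg.solve(I - S, I + S)   # R = (I - S)^(-1)(I + S)
```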
(3) Calculation of translation matrix.
After obtaining the rotation matrix R, the translation can be obtained through collinearity equations. As shown in Figure 6, ab is a contour in LA and c′d′ is its corresponding contour in LB. Theoretically, ab and c′d′ should be collinear, meaning that the true position of c′d′ is cd, where

$$\begin{pmatrix} X_{c} \\ Y_{c} \\ Z_{c} \end{pmatrix} = \begin{pmatrix} X_{c'} \\ Y_{c'} \\ Z_{c'} \end{pmatrix} + \begin{pmatrix} dx \\ dy \\ dz \end{pmatrix}, \qquad \begin{pmatrix} X_{d} \\ Y_{d} \\ Z_{d} \end{pmatrix} = \begin{pmatrix} X_{d'} \\ Y_{d'} \\ Z_{d'} \end{pmatrix} + \begin{pmatrix} dx \\ dy \\ dz \end{pmatrix}$$

According to the collinearity condition,

$$\frac{X_{c} - X_{a}}{X_{b} - X_{a}} = \frac{Y_{c} - Y_{a}}{Y_{b} - Y_{a}} = \frac{Z_{c} - Z_{a}}{Z_{b} - Z_{a}}, \qquad \frac{X_{d} - X_{a}}{X_{b} - X_{a}} = \frac{Y_{d} - Y_{a}}{Y_{b} - Y_{a}} = \frac{Z_{d} - Z_{a}}{Z_{b} - Z_{a}}$$

for each pair of contours. After derivation, we obtain

$$\begin{cases} (Y_{b} - Y_{a})\,dx + (X_{a} - X_{b})\,dy = (X_{b} - X_{a})(Y_{c'} - Y_{a}) - (Y_{b} - Y_{a})(X_{c'} - X_{a}) \\ (Z_{b} - Z_{a})\,dy + (Y_{a} - Y_{b})\,dz = (Y_{b} - Y_{a})(Z_{c'} - Z_{a}) - (Z_{b} - Z_{a})(Y_{c'} - Y_{a}) \end{cases}$$

together with the analogous pair of equations in which c′ is replaced by d′. With two pairs of conjugate contours, the least squares method can be used to solve these equations for dx, dy, and dz.
Figure 6. A sample of collinear lines.
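The corresponding least squares solution for (dx, dy, dz) can be sketched as follows; each conjugate pair contributes the two equations above for each of the rotated endpoints c′ and d′. This is a direct transcription of the derivation, not the authors' code.

```python
import numpy as np

def translation_from_contours(pairs):
    """pairs: iterable of (a, b, c, d) endpoint arrays, where ab is the
    airborne contour and c, d are the rotated vehicle endpoints that should
    become collinear with ab after translation."""
    A, y = [], []
    for a, b, c, d in pairs:
        for p in (c, d):
            A.append([b[1] - a[1], a[0] - b[0], 0.0])
            y.append((b[0] - a[0]) * (p[1] - a[1])
                     - (b[1] - a[1]) * (p[0] - a[0]))
            A.append([0.0, b[2] - a[2], a[1] - b[1]])
            y.append((b[1] - a[1]) * (p[2] - a[2])
                     - (b[2] - a[2]) * (p[1] - a[1]))
    t, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(y), rcond=None)
    return t   # (dx, dy, dz)
```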
(4) Determination of all conjugate contours.
Transform all contours using the obtained matrices. If the angle, distance, and length-difference conditions in (a) of step (1) are satisfied, a pair of contours is considered conjugate.
(B) Fine registration with reliable conjugate contours
Fine registration with reliable conjugate contours is carried out to obtain a more precise transformation matrix. In the above searching procedure for conjugate contours, only two pairs of conjugate contours are used. Thus, the registration accuracy can be enhanced by using more contours.
(1)
Selection of reliable conjugate contours.
If the building rooftops have eaves, there may be obvious offsets between the airborne and vehicle building contours, so it is inappropriate to use all the contours for the fine registration. Here the angles and distances between conjugate contours are analyzed and the obvious outliers are discarded. Details are as follows:
Step 1: Selecting contours by angle.
For all conjugate contours of LA and LB, calculate their angles α = {αi, i = 0, 1, 2, …, k}. Group the angles in α and count the conjugate contours in each group. If the group with the largest number of contours reaches a certain percentage of the total, the contours in that group are considered reliable; otherwise, further groups of segment pairs are joined until the proportion of matched pairs exceeds the threshold (see the sketch at the end of this subsection). The percentage threshold is set according to the overall precision of the extracted building contours.
Step 2: Selecting contours by distance.
Just as in Step 1, distances between conjugate contours are also used for selecting the remaining contours.
(2)
Fine registration.
For all the reliable conjugate contours, the vector-based transformation model [29] and the collinearity equations are used to calculate the transformation matrix. Candidate transformations are evaluated iteratively until all reliable conjugate contours have been selected, and the fine transformation matrix with the maximum matching rate is retained. Finally, the fine registration of the airborne and vehicle LiDAR data is achieved.
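A sketch of the grouping rule in Steps 1 and 2 is given below, applied to the per-pair angle (or distance) offsets; the bin width and coverage ratio are hypothetical stand-ins for the precision-dependent thresholds described above. Applying the mask twice, first on the angles and then on the distances of the surviving pairs, reproduces the two-step selection.

```python
import numpy as np

def reliable_mask(offsets, bin_width=1.0, min_ratio=0.6):
    """Bin the offsets, keep the largest group, and merge further groups
    (largest first) until the kept pairs cover min_ratio of the total;
    returns a boolean mask over the conjugate pairs."""
    bins = np.floor(np.asarray(offsets) / bin_width).astype(int)
    uniq, counts = np.unique(bins, return_counts=True)
    order = np.argsort(counts)[::-1]           # largest group first
    kept, covered = [], 0
    for k in order:
        kept.append(uniq[k])
        covered += counts[k]
        if covered >= min_ratio * len(offsets):
            break
    return np.isin(bins, kept)
```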

2.3. Summary of Threshold Parameters

As a few parameters are used in this study, a summary of the key threshold settings is given in Table 1. The setting basis of these thresholds is of three types: data source, calculation, and empiric. The term “data source” means that a threshold is set according to the real data, and the term “calculation” indicates that a threshold can be calculated automatically within the method; when the method is applied to other experimental data, these two kinds of thresholds are easy to determine automatically, so they do not limit the applicability of the proposed method. The term “empiric” means that the thresholds are set empirically.
The calculation formulas are described briefly as follows. In the coarse registration with road networks, the radius of the circles is set to 1 m (i.e., Wmin/5, where Wmin = 5 m is the minimum width of road) for obtaining 3D road networks.
In the extraction of 3D building contours, the size of the blocks is one to two times the average point spacing. In this study, we take 100% as the maximum roof slope, so the elevation difference threshold of neighboring blocks is set to 2 m (i.e., 2 × D × i, where D = 1 m is the block size and i = 100% is the maximum roof slope). The distance threshold (Thredist) is set to 5 m according to the width of a lane.
Table 1. Key parameters used in the proposed method.

| Method and Step | Parameter | Scale | Setting Basis |
| --- | --- | --- | --- |
| Coarse registration with road networks: extraction of three-dimensional (3D) road networks | Radius of small circle | 1 m (Wmin/5) | Calculation |
| Coarse registration with road networks: determination of matching rate | Interval of 3D section planes | 1 m | Empiric |
| Coarse registration with road networks: determination of matching rate | Radius of section plane | 60 m | Data source |
| Fine registration with building contours: extraction of two-dimensional (2D) building contours from vehicle LiDAR | 2D regular grid | 1 m × 1 m | Data source |
| Fine registration with building contours: extraction of 2D building contours from vehicle LiDAR | Elevation difference | 15 m | Data source |
| Fine registration with building contours: extraction of 2D building contours from vehicle LiDAR | Elevation interval Zs | 4–5 times the average point spacing | Empiric |
| Fine registration with building contours: extraction of 3D building contours | Elevation difference | 2 × D × i | Calculation |
| Fine registration with building contours: extraction of 3D building contours | Angle difference | 20° | Empiric |
| Fine registration with building contours: fine registration | Angle threshold Threang | — | Empiric |
| Fine registration with building contours: fine registration | Distance threshold Thredist | 5 m (width of a lane) | Calculation |
| Fine registration with building contours: fine registration | Length difference Thredif | 10 m | Empiric |

3. Experiments and Analysis

3.1. Experimental Data

The experimental area is located around the Olympic Sports Center, Nanjing, China (32.0°N, 118.7°E), with a total area of 4000 m × 4000 m. The experimental data includes airborne LiDAR points (Figure 7a), vehicle LiDAR points (Figure 7b), and the trajectory path of the vehicle LiDAR (Figure 7c). The SSW (Shou shi/Si wei) mobile mapping system (360° scanning scope, surveying range 2–300 m at 80% reflectance, point frequency 200,000 points/s), developed by the Chinese Academy of Surveying and Mapping, was used to capture the vehicle LiDAR data, and a 1:500 topographic map was used for its correction. The whole dataset is about 30 GB, and the total number of vehicle points is nearly one billion. The average point spacing of the airborne LiDAR points is about 0.5 m, with a horizontal precision of 30 cm and a vertical precision of 15 cm; the total number of airborne points is about 78 million. As shown in Figure 7a,b, the whole area is used for the coarse registration, Area A is selected for the locally fine registration, and Area B is used for evaluating the registration accuracy.
Figure 7. Experimental data. (a) Airborne LiDAR data; (b) Vehicle LiDAR data; (c) Vehicle trajectory path.

3.2. Coarse Registration with 3D Road Networks

Figure 8a shows the extracted 3D airborne road networks using the method mentioned in Section 2.1.1. As we can see, there are 131 road intersections in the airborne road network (black lines).
Figure 8. Coarse registration with road networks. (a) Airborne road network (black lines); (b) Vehicle road network (red lines); (c) Registered road networks.
In Figure 8b, 37 road intersections are seen in the vehicle trajectory path (red lines), and the actual number of conjugate intersections is 30. The coarse registration with 3D road networks is conducted with the radius of the section circle set to 80 m and the interval θ set to 1 m. The calculated matching rate is 4012.5, and the coarse registration result is shown in Figure 8c, where 30 pairs of intersections are successfully matched. As we can see, the airborne and vehicle road networks are well matched. Little distortion (i.e., geometric scaling) is seen in the matched networks, indicating little residual rotation about the Z axis. This coarse registration offers a rough transformation relationship between the two datasets.

3.3. Fine Registration with 3D Building Contours

3.3.1. Extraction of 3D Building Contours

Figure 9a presents the local vehicle LiDAR points around Area A. The points are projected onto regular grids (1 m × 1 m) and the elevation difference threshold is set to 15 m for the filtering. In Figure 9b, after elevation filtering, most of the building contours are retained and the other, non-building grid cells are eliminated. Nevertheless, along certain building contours (e.g., SA, SB, and SC), the contour cells are of large width, which may make the vectorized contours imprecise. The proposed height value accumulation is therefore used for further filtering. In Figure 9c, 18 peaks are acquired, corresponding to 18 elevation ranges. Using these elevation ranges, more precise contours are obtained (Figure 9d); compared with Figure 9b, the width of the contour cells is obviously reduced, especially in areas SA, SB, and SC. Finally, with the Hough transform, 42 building contours are obtained from the vehicle LiDAR data.
For the airborne LiDAR data in Figure 9e, the RIMM algorithm is used to extract the buildings. The thresholds are set as follows: the size of the original morphological window, 106 m; the window reduction in each step, 10 m; height difference, 3 m; roughness, 1.6. The extracted building areas are shown in Figure 9f. Through the extraction of 3D building contours, 562 building contours are obtained (Figure 9g).
Figure 9. Building contour extraction of Area A. (a) Vehicle LiDAR data; (b) Elevation difference filtering; (c) Height value accumulation; (d) Vehicle building contours; (e) Airborne LiDAR data; (f) RIMM for buildings; (g) Airborne building contours.

3.3.2. Fine Registration with 3D Building Contours

Figure 10 shows the fine registration of the airborne and vehicle LiDAR around Area A. Figure 10a shows the result after coarse registration, in which the black segments refer to airborne contours and the red segments to vehicle contours. After searching for the conjugate building contours, Figure 10b is obtained, where 15 pairs of building contours are successfully matched. A further fine registration is conducted in Figure 10c; compared with Figure 10b, almost all the building contours match well, especially in FA, FB, and FC.
Figure 10. Fine registration with building contours in Area A. (a) Result of coarse registration; (b) Result after using the conjugate contours; (c) Result of fine registration by only using conjugate contours in (b), and all airborne contours (black lines) are retained for visualization.

3.4. Results and Analysis

3.4.1. Visual Evaluation

Figure 11 shows the final registration result around Area A, where the blue points refer to airborne LiDAR points and green ones refer to vehicle LiDAR points.
Figure 11. Registration result of Area A using the proposed method. (a) Registration result in Area A; (b) Details of VA; (c) Details of VB; (d) Details of VC; (e) Details of VD.
The overall registration result of the airborne and vehicle points is presented in Figure 11a. Figure 11b–e show four details, VA, VB, VC, and VD in Figure 11a, in which the green points and blue points match well along the building contours. With these integrated airborne and vehicle points, multi-view, all-round information on surface features is obtained. Further quantitative evaluations are conducted with building contours and ground points for the horizontal and vertical precision, respectively.

3.4.2. Evaluation on Horizontal Accuracy with Building Contours

An evaluation using building contours is conducted to assess the horizontal accuracy of the registration result. Here the building contours are manually digitized from the registered airborne and vehicle LiDAR data, respectively. To assess the registration results quantitatively, the transect distance and the angle between the contours are calculated. The transect distance is computed as follows: one contour is taken as a baseline, and lines perpendicular to this baseline are constructed at a fixed interval; the distance between the baseline and the corresponding contour along each perpendicular is a transect distance, and the average transect distance between two conjugate contours is taken as the distance between them.
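A sketch of this transect-distance measure for one pair of digitized 2D contours is shown below; the sampling interval is an assumption.

```python
import numpy as np

def transect_distance(base_a, base_b, other_a, other_b, interval=1.0):
    """Sample the baseline contour (base_a -> base_b) at a fixed interval
    and average the perpendicular distances from the samples to the line
    supporting the other contour (2D endpoints)."""
    base_a, base_b = np.asarray(base_a, float), np.asarray(base_b, float)
    p, q = np.asarray(other_a, float), np.asarray(other_b, float)
    n = max(int(np.linalg.norm(base_b - base_a) / interval), 2)
    samples = np.linspace(base_a, base_b, n)   # points along the baseline
    d = q - p
    cross = np.abs(d[0] * (samples[:, 1] - p[1])
                   - d[1] * (samples[:, 0] - p[0]))
    return (cross / np.linalg.norm(d)).mean()  # mean point-to-line distance
```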
Figure 12. Evaluation on horizontal accuracy using building contours of Area A (left) and Area B (right). (a) Result of coarse registration; (b) Result by using the conjugate contours; (c) Result of fine registration (all airborne contours are retained for visualization); (d) ICP refined result.
Figure 12a–d show the digitized building contours in Area A (left) and Area B (right), in which the black and red segments are the building contours extracted from the airborne and vehicle LiDAR data, respectively. Figure 12a shows the digitized building contours after coarse registration with 3D road networks, where an obvious offset is seen. As shown in Table 2, the average and maximum distances in Area A are 17.44 m and 21.03 m, respectively, with average and maximum angle offsets of 0.95° and 1.7°; in Area B they are 7.43 m and 12.68 m, with angle offsets of 0.88° and 2.1°. Figure 12b shows the registration result after using two pairs of building contours. The digitized contours match well and only small offsets remain: the average and maximum distances in Area A are 2.53 m and 4.30 m, with angle offsets of 0.82° and 1.6°, and in Area B they are 1.59 m and 3.79 m, with angle offsets of 0.69° and 1.3°. The horizontal registration accuracy is thus greatly improved by using building contours. Figure 12c is the final registration result with reliable conjugate building contours; the contours digitized from the airborne and vehicle data match even better than in Figure 12b. In Table 2, the average and maximum distances in Area A are 0.73 m and 1.90 m, with angle offsets of 0.32° and 1.2°, and in Area B they are 0.63 m and 1.73 m, with angle offsets of 0.48° and 1.1°. Figure 12d is the registration result obtained by directly applying the ICP method after the coarse registration: the average and maximum distances in Area A are 1.52 m and 2.55 m, with angle offsets of 0.47° and 1.5°, whereas in Area B they are 5.27 m and 11.23 m, with angle offsets of 0.72° and 1.9°.
Table 2. Horizontal accuracy of the registration result.

| Method | Transect Distance, Average (m), A/B | Transect Distance, Max (m), A/B | Line Angle, Average (°), A/B | Line Angle, Max (°), A/B |
| --- | --- | --- | --- | --- |
| Coarse registration | 17.44/7.43 | 21.03/12.68 | 0.95/0.88 | 1.7/2.1 |
| Searching result | 2.53/1.59 | 4.30/3.79 | 0.82/0.69 | 1.6/1.3 |
| Fine registration | 0.73/0.63 | 1.90/1.73 | 0.32/0.48 | 1.2/1.1 |
| ICP refined result | 1.52/5.27 | 2.55/11.23 | 0.47/0.72 | 1.5/1.9 |
According to the above, the horizontal accuracy of the fine registration reaches the sub-meter level, which is relatively high for datasets of this scale. Compared with the registration result using only two pairs of building contours, the fine registration with reliable conjugate building contours achieves better results, and its horizontal accuracy is better than that of the ICP method in both Area A and Area B.

3.4.3. Evaluation on Vertical Accuracy with Common Ground Points

In order to evaluate the registration accuracy along the vertical direction, some common ground points are selected. As shown in Figure 13, the ground points (white circles) are evenly distributed in Area A and Area B, respectively. The elevation of each ground point is obtained by averaging the Z values of its neighboring points. The elevation differences between airborne and vehicle ground points are shown in Table 3.
Figure 13. Location of common ground points. (a) Common ground points of Area A; (b) Common ground points of Area B.
After coarse registration with road networks, the average, maximum, and RMSE errors of the common ground points in Area A are 0.92 m, 1.17 m, and 0.97 m, respectively; in Area B they are 1.08 m, 1.33 m, and 0.84 m. In spite of errors of up to about 20 m along the horizontal direction, the errors along the vertical direction are already around the one-meter level after coarse registration. The reason is that the experimental area is flat in general, which leads to similar elevations of the road intersections; in areas with undulating terrain, these errors would be larger. Although relatively good accuracy is achieved with road networks, the registration is further improved by the fine registration using building contours, as shown in Table 3. After searching for the conjugate contours, the average, maximum, and RMSE errors in Area A are 0.46 m, 0.63 m, and 0.50 m, and in Area B 0.59 m, 0.92 m, and 0.68 m. After fine registration, the errors in Area A are 0.39 m, 0.50 m, and 0.42 m, and in Area B 0.43 m, 0.75 m, and 0.36 m. The ICP method is applied to the points of Areas A and B after coarse registration: its average, maximum, and RMSE errors are 0.37 m, 0.61 m, and 0.28 m in Area A, and 0.46 m, 0.72 m, and 0.21 m in Area B.
As we can see, high registration accuracy is obtained using the proposed method. However, the vertical accuracies of ICP registration results are better than the fine registration results in Area A and Area B.
Table 3. Vertical accuracy of the registration result.

| Method | Average Error (m), A/B | Max Error (m), A/B | RMSE (m), A/B |
| --- | --- | --- | --- |
| Coarse registration | 0.92/1.08 | 1.17/1.33 | 0.97/0.84 |
| Searching result | 0.46/0.59 | 0.63/0.92 | 0.50/0.68 |
| Fine registration | 0.39/0.43 | 0.50/0.75 | 0.42/0.36 |
| ICP result | 0.37/0.46 | 0.61/0.72 | 0.28/0.21 |

4. Discussion

The registration accuracy of airborne and vehicle LiDAR point clouds depends not only on the registration algorithm but also on the precision of the laser scanning systems and the scope of the experimental area. The whole experimental area in this study is about 4000 m × 4000 m and the average point spacing of the airborne LiDAR data is about 0.50 m. Under these circumstances, it is difficult to achieve extremely high registration accuracy for a large scene with a single set of parameters. In [32], the registration of airborne and terrestrial LiDAR data reached a horizontal accuracy of 0.15 to 0.5 m and a vertical accuracy of 0.20 m. Thus, a hierarchical registration strategy, which contains coarse and fine registration procedures, is used to meet the needs of different registration accuracies.
The registration result in Figure 11 and the accuracy evaluations in Table 2 and Table 3 demonstrate the relatively good performance of the proposed method in an urban area. For comparison, the ICP method is performed over two small experimental areas (Areas A and B), starting from the coarsely registered point clouds, i.e., from a good initial state; due to the massive data volume (about 30 GB), the ICP method could not be applied directly to the whole area. The average horizontal errors of the fine registration in Areas A and B are 0.73 m and 0.63 m, respectively, whereas those of the ICP results are 1.52 m and 5.27 m, so the horizontal accuracy of the proposed method is better than that of the ICP method. However, the vertical accuracies of the ICP results are slightly better than those of the fine registration in both areas. The ICP method operates mainly on the common ground points of the airborne and vehicle LiDAR data, which is prone to producing a horizontal shift; it therefore tends to achieve good vertical accuracy but unstable horizontal accuracy.
Although the proposed registration scheme obtains good results in the urban area, it has some limitations. Firstly, the proposed method can hardly handle non-urban regions, which contain almost no buildings. Secondly, if the road network of a city follows a regular grid, the coarse registration may fail due to the strong self-similarity of the road network. In addition, if most of the buildings in the experimental area have significant eaves, there may be obvious offsets between the airborne and vehicle building contours; as a result, few conjugate building contours can be extracted, and it is difficult to achieve an ideal fine registration result.

5. Conclusions

This paper proposes a hierarchical registration approach for airborne and vehicle LiDAR data. The keys of this approach are a coarse registration method based on 3D road networks and a fine registration method based on 3D building contours. In the coarse registration procedure, the extracted airborne road networks are registered with the vehicle trajectory lines based on the road network matching rate. In the fine registration procedure, matched conjugate contours are obtained using the Rodrigues matrix and the collinearity equations, and the fine registration is conducted using the reliable conjugate contours. From the experiments, the following conclusions can be drawn:
(1) The coarse registration method with 3D road networks can provide a rough transformation matrix for a large-range registration task. With the coarse registration, the subsequent fine registration can be performed within a small search space, which effectively avoids locally optimal results and greatly reduces the computational cost.
(2) Three-dimensional building contours present high positioning precision and the fine registration method with 3D building contours can achieve a relatively good registration result. The fine registration result achieves an accuracy of 0.73 m in the horizontal direction and 0.39 m in the vertical direction.
In future work, the proposed method will be applied to different experimental areas to test its robustness and effectiveness. It remains difficult to achieve ideal registration results for buildings with eaves; we will therefore study the use of the center points of building roofs, or other auxiliary data, to handle eaves.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant Nos. 41501456 and 41371017). Sincere thanks are given for the comments and contributions of the anonymous reviewers and members of the editorial team.

Author Contributions

Liang Cheng proposed the core idea of the hierarchical registration method and designed the technical framework of the paper. Yang Wu implemented the registration method for airborne and vehicle LiDAR data. Lihua Tong implemented the extraction of road networks from airborne and terrestrial LiDAR data. Yanming Chen implemented the extraction of building contours and was responsible for the comparative experiments. Manchun Li was responsible for the experimental analysis and discussion.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wu, B.; Yu, B.; Yue, W.; Shu, S.; Tan, W.; Hu, C.; Huang, Y.; Wu, J.; Liu, H. A voxel-based method for automated identification and morphological parameters estimation of individual street trees from mobile laser scanning data. Remote Sens. 2013, 5, 584–611.
2. Meng, X.; Currit, N.; Zhao, K. Ground filtering algorithms for airborne LiDAR data: A review of critical issues. Remote Sens. 2010, 2, 833–860.
3. Cheng, L.; Gong, J.; Li, M.; Liu, Y. 3D building model reconstruction from multi-view aerial imagery and LiDAR data. Photogramm. Eng. Remote Sens. 2011, 77, 125–139.
4. Cheng, L.; Wu, Y.; Wang, Y.; Zhong, L.; Chen, Y.; Li, M. Three-dimensional reconstruction of large multilayer interchange bridge using airborne LiDAR data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 8, 691–707.
5. Kent, R.; Lindsell, J.A.; Laurin, G.V.; Valentini, R.; Coomes, D.A. Airborne LiDAR detects selectively logged tropical forest even in an advanced stage of recovery. Remote Sens. 2015, 7, 8348–8367.
6. Cheng, L.; Tong, L.; Wang, Y.; Li, M. Extraction of urban power lines from vehicle-borne LiDAR data. Remote Sens. 2014, 6, 3302–3320.
7. Lehtomäki, M.; Jaakkola, A.; Hyyppä, J.; Kukko, A.; Kaartinen, H. Detection of vertical pole-like objects in a road environment using vehicle-based laser scanning data. Remote Sens. 2010, 2, 641–664.
8. Yang, B.; Wei, Z.; Li, Q.; Li, J. Automated extraction of street-scene objects from mobile LiDAR point clouds. Int. J. Remote Sens. 2012, 33, 5839–5861.
9. Cheng, L.; Zhao, W.; Han, P.; Zhang, W.; Shan, J.; Liu, Y.; Li, M. Building region derivation from LiDAR data using a reversed iterative mathematic morphological algorithm. Opt. Commun. 2012, 286, 244–250.
10. Weinmann, M.; Weinmann, M.; Hinz, S.; Jutzi, B. Fast and automatic image-based registration of TLS data. ISPRS J. Photogramm. Remote Sens. 2011, 66, S62–S70.
11. Böhm, J.; Haala, N. Efficient integration of aerial and terrestrial laser data for virtual city modeling using lasermaps. In Proceedings of the ISPRS WG III/3, III/4, V/3 Workshop on Laser Scanning, Enschede, The Netherlands, 12–14 September 2005; pp. 12–14.
12. Hohenthal, J.; Alho, P.; Hyyppä, J.; Hyyppä, H. Laser scanning applications in fluvial studies. Prog. Phys. Geogr. 2011, 35, 782–809.
13. Bremer, M.; Sass, O. Combining airborne and terrestrial laser scanning for quantifying erosion and deposition by a debris flow event. Geomorphology 2012, 138, 49–60.
14. Heckman, T.; Bimböse, M.; Krautblatter, M.; Haas, F.; Becht, M.; Morche, D. From geotechnical analysis to quantification and modelling using LiDAR data: A study on rockfall in the Reintal catchment, Bavarian Alps, Germany. Earth Surf. Process. Landf. 2012, 37, 119–133.
15. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
16. Chen, H.; Bhanu, B. 3D free-form object recognition in range images using local surface patches. Pattern Recognit. Lett. 2007, 28, 1252–1262.
17. He, B.; Lin, Z.; Li, Y.F. An automatic registration algorithm for the scattered point clouds based on the curvature feature. Opt. Laser Technol. 2012, 46, 53–60.
18. Makadia, A.; Patterson, A.I.; Daniilidis, K. Fully automatic registration of 3D point clouds. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; pp. 1297–1304.
19. Bae, K.; Lichti, D. A method for automated registration of unorganized point clouds. ISPRS J. Photogramm. Remote Sens. 2008, 63, 36–54.
20. Barnea, S.; Filin, S. Keypoint based autonomous registration of terrestrial laser point-clouds. ISPRS J. Photogramm. Remote Sens. 2008, 63, 19–35.
21. Cheng, L.; Tong, L.; Li, M.; Liu, Y. Semi-automatic registration of airborne and terrestrial laser scanning data using building corner matching with boundaries as reliability check. Remote Sens. 2013, 5, 6260–6283.
22. Cheng, L.; Tong, L.; Wu, Y.; Chen, Y.; Li, M. Shiftable leading point method for high accuracy registration of airborne and terrestrial LiDAR data. Remote Sens. 2015, 7, 1915–1936.
23. Eo, Y.D.; Pyeon, M.W.; Kim, S.W.; Kim, J.R. Coregistration of terrestrial LiDAR points by adaptive scale-invariant feature transformation with constrained geometry. Autom. Constr. 2012, 25, 49–58.
24. Jaw, J.J.; Chuang, T.Y. Registration of ground-based LiDAR point clouds by means of 3D line features. J. Chin. Inst. Eng. 2008, 31, 1031–1045.
25. Lee, J.; Yu, K.; Kim, Y.; Habib, A.F. Adjustment of discrepancies between LiDAR data strips using linear features. IEEE Geosci. Remote Sens. Lett. 2007, 4, 475–479.
26. Bucksch, A.; Khoshelham, K. Localized registration of point clouds of botanic trees. IEEE Geosci. Remote Sens. Lett. 2013, 10, 631–635.
27. Teo, T.A.; Huang, S.H. Surface-based registration of airborne and terrestrial mobile LiDAR point clouds. Remote Sens. 2014, 6, 12686–12707.
28. Von Hansen, W. Robust automatic marker-free registration of terrestrial scan data. In Proceedings of the Photogrammetric Computer Vision, Bonn, Germany, 20–22 September 2006; pp. 105–110.
29. Zhang, D.; Huang, T.; Li, G.; Jiang, M. Robust algorithm for registration of building point clouds using planar patches. J. Surv. Eng. 2011, 138, 31–36.
30. Brenner, C.; Dold, C.; Ripperda, N. Coarse orientation of terrestrial laser scans in urban environments. ISPRS J. Photogramm. Remote Sens. 2008, 63, 4–18.
31. Biber, P.; Straßer, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 27–31 October 2003; pp. 2743–2748.
32. Wu, H.; Scaioni, M.; Li, H.; Li, N.; Lu, M.; Liu, C. Feature-constrained registration of building point clouds acquired by terrestrial and airborne laser scanners. Int. J. Appl. Remote Sens. 2014, 8, 083587.
33. Hansen, W.; Gross, H.; Thoennessen, U. Line-based registration of terrestrial and aerial LiDAR data. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 3–11 July 2008; pp. 161–166.
34. Carlberg, M.; Andrews, J.; Gao, P.; Zakhor, A. Fast surface reconstruction and segmentation with ground-based and airborne LiDAR range data. In Proceedings of the Fourth International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT), Atlanta, GA, USA, 18–20 June 2008.
35. Hu, J.; You, S.; Neumann, U. Approaches to large-scale urban modelling. IEEE Comput. Graph. Appl. 2003, 23, 62–69.
36. Früh, C.; Zakhor, A. Constructing 3D city models by merging aerial and ground views. IEEE Comput. Graph. Appl. 2003, 23, 52–61.
37. Li, N.; Huang, X.; Zhang, F.; Wang, L. Registration of aerial imagery and LiDAR data in desert area using the centroids of bushes as control information. Photogramm. Eng. Remote Sens. 2013, 79, 743–752.
38. Parmehr, E.G.; Fraser, C.S.; Zhang, C.; Leach, J. Automatic registration of optical imagery with 3D LiDAR data using statistical similarity. ISPRS J. Photogramm. Remote Sens. 2014, 88, 28–40.
39. Clode, S.; Rottensteiner, F.; Kootsookos, P.J.; Zelniker, E.E. Detection and vectorisation of roads from LiDAR data. Photogramm. Eng. Remote Sens. 2007, 73, 517–535.
40. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.

