
Drift Invariant Metric Quality Control of Construction Sites Using BIM and Point Cloud Data

Department of Civil Engineering, Geomatics Section, KU Leuven—Faculty of Engineering Technology, 9000 Ghent, Belgium
Authors to whom correspondence should be addressed.
All authors contributed equally to this work.
ISPRS Int. J. Geo-Inf. 2020, 9(9), 545;
Received: 24 July 2020 / Revised: 31 August 2020 / Accepted: 7 September 2020 / Published: 14 September 2020
(This article belongs to the Special Issue 3D Indoor Mapping and Modelling)


Construction site monitoring is currently performed through visual inspections and costly selective measurements. Due to the small overhead in construction projects, resources to frequently conduct a metric quality assessment of the constructed objects are scarce. Paradoxically, construction projects are characterised by high failure costs, which are often caused by erroneously constructed structural objects. With the upcoming use of periodic remote sensing during the different phases of the building process, new possibilities arise to advance from a selective quality analysis to an in-depth assessment of the full construction site. In this work, a novel methodology is presented to rapidly evaluate a large number of built objects on a construction site. Given a point cloud and a set of as-design BIM elements, our method evaluates the deviations between both datasets and computes the positioning errors of each object. Unlike the current state of the art, our method computes the error vectors regardless of drift, noise, clutter and (geo)referencing errors, leading to a better detection rate. The main contributions are the efficient matching of both datasets, the drift invariant metric evaluation and the intuitive visualisation of the results. The proposed analysis facilitates the identification of construction errors early in the process, hence significantly lowering the failure costs. The application is embedded in native BIM software and visualises the objects with a simple color code, providing an intuitive indicator of the positioning accuracy of the built objects.
Keywords: building information modeling; quality control; construction site; point cloud

1. Introduction

Failure costs are prevalent in the construction industry and range from 5% to 20% of the total project cost [1,2]. Furthermore, most of the error-induced costs occur during the construction of the building’s structural elements. In the work of Love et al. [3], it is stated that non-conformances during structural steel and concrete works constitute over 50% of the failure costs. It is therefore vital to assess the quality of constructed elements during the execution phase. This includes the evaluation of the constructed elements with respect to their as-design shape in terms of geometry and appearance [4]. In this work, the emphasis is on the metric quality assessment and more specifically on the positioning errors of the objects. The resulting information can also be used for progress monitoring and for quantity take-offs [5]. Overall, the early detection of erroneously placed objects results in a strong cutback in the related repair costs, hence decreasing the overall failure costs of the project.
Despite its importance, the automation of construction site and progress monitoring workflows is lacking [6,7]. The construction industry currently still relies on visual inspections and sparse selective measurements executed by workers on site, such as foremen and construction site managers. However, these methods do not suffice for an accurate and correct assessment of the positioning quality. Alternatively, costly surveyor teams are employed to inspect the objects, but due to budget limitations, only a small subset of the objects is evaluated. As a result, construction errors are not detected in time and cause the failure costs described above.
One approach to systematically evaluate construction sites is through point cloud data produced by remote sensing, that is, laser scanning or photogrammetry, which provides vast amounts of geometric and visual information of the construction site in a more cost-effective manner (Figure 1). For progress monitoring purposes and metric quality assessments, it is paramount that these recorded datasets are properly (geo)referenced [8]. Traditionally, the procedure to align a point cloud with the corresponding as-design BIM model relies on a rigid body transformation of the former based on geometric correspondences [9]. Although this is common practice, significantly erroneous results may be encountered as inaccuracies still persist in the point cloud data. One of the driving factors is sensor drift, caused by propagating registration errors [10]. Especially for larger construction sites with complex layouts, clutter and poor accessibility, significant drift effects can be expected. Moreover, the measurement noise generated on construction site objects is substantial, which further complicates the registration [11]. The presence of clutter and construction materials also inevitably results in (partial) occlusions of numerous construction elements [12]. This complicates their assessment, as analyses can only be executed on the visible parts of the elements.
In this research, we propose a novel method to assess the metric quality of building elements regardless of drift, noise, partial occlusions and suboptimal (geo)referencing. More specifically, we compute the systematic positioning error of each built object on a construction site given its as-design shape and neighboring objects. The framework we present aids the construction industry to assess construction site objects more profoundly in an efficient way and at a reduced cost (Figure 1). In summary, the main contributions are:
  • A robust framework to assess the metric quality of a large number of construction site objects in an efficient manner
  • A novel method that evaluates the discrepancies between a recorded point cloud and the BIM objects considering frequent construction site obstacles
  • An intuitive visualization displaying objects according to the Level of Accuracy (LOA) specification ranges
The remainder of this work is structured as follows. In Section 2, the related work on metric quality assessment is discussed. This is followed by Section 3, which gives an overview of our method and its most important contributions and advantages. In Section 4 the methodology of our application is presented. The results of the application are presented and discussed in Section 5 and Section 6. Finally, the conclusions and future optimisations are presented in Section 7.

2. Background & Related Work

Construction site assessment is a vital aspect of a project’s execution phase. Nevertheless, the automation of this procedure is still very much the topic of ongoing research [13,14]. An exhaustive literature overview of construction monitoring is presented by Alizadehsalehi et al. [15]. In this section, we discuss the works that focus on remote sensing data capture and those related to metric quality assessment on site.

2.1. Construction Site Geometry

The capture of construction sites is problematic. There is a plethora of techniques, both static and mobile, that can capture a construction site environment [16,17,18,19]. However, several inherent problems remain. A first obstacle is the site complexity with its limited accessibility due to construction operations and the presence of building materials, equipment, workers and support structures such as moldings. As a result, the point cloud data is heavily occluded and more drift can accumulate due to suboptimal sensor positioning and reduced fields of view [20,21]. Kalyan et al. [22] report that while construction sites can be captured, the resulting point cloud often lacks the accuracy to check for building tolerances. Zhang et al. [23] recognise this issue and propose an occlusion evaluation during the data acquisition. However, the registration of the datasets is also complicated by the objects themselves. The lack of texture for photogrammetric processes and the lack of geometric data dispersion for lidar approaches cause errors in the registration procedures of both approaches. As a result, the registration errors of mobile sensors or photogrammetric processes can easily reach up to several centimeters on average-sized construction sites [10].
A second obstacle is the 3D point accuracy, influenced by noise, drift and (geo)referencing. Most quality assessments rely on an accurate registration between the as-design model and the observed construction site [15]. However, the (geo)referencing of a construction site environment is error prone due to occluded markers, the site complexity and GPS-denied zones. As such, the GPS (geo)referencing accuracy of photogrammetric and lidar processes is again several centimeters on average-sized construction sites if costly surveyor teams are not employed [8]. Noise is also rampant on construction sites due to the operations, which are accompanied by the presence of temporary objects, workers, dust, vibrations and so on. Additionally, construction materials such as rebar are prone to noise due to their reflectivity characteristics and shape. Overall, the noise, clutter, drift and (geo)referencing errors, along with the occlusions, complicate the proper assessment of an object’s positioning errors and must be taken into consideration.

2.2. Metric Quality Assessment

The assessment of the metric quality of a completed construction site object involves the estimation of the positioning errors of the object with respect to the building tolerances [24]. It is often referred to as scan-vs.-BIM [9,25] and is considered a high accuracy task, which is challenging due to the above-defined errors [26]. For this reason, the spatial component of the analysis is sometimes overlooked [27]. For instance, Tang et al. and Anil et al. [28,29] make an assessment by purely reasoning on the visual information of an object without considering its location. For those that do include the metrics, the most common way is to (geo)reference the dataset and compute the shortest Euclidean distances between the sampled point cloud and the ground truth. For instance, Liu et al. [30] evaluate the dimensional accuracy of structure components given a perfectly aligned point cloud. Similarly, Nguyen et al. [20] perform on-site dimensional inspections given a properly referenced point cloud.
A number of researchers attempt to enhance the (geo)referencing accuracy by computing an additional transformation between both datasets. Schnabel et al. [31] apply RANSAC-based procedures [32] to match as-design shapes to the point cloud. However, they only reach an accuracy of circa 0.15 m in a cloud-to-cloud based approach, which is insufficient for thorough positioning assessments. To improve their target-based registration, Maalek et al. [33] actively mitigate clutter by detecting planes in the point cloud and removing outliers from the dataset. Xu et al. [34] also retrieve geometric primitives from construction sites and show that robust matching can overcome construction site noise for both photogrammetric and lidar datasets. Also from a computer vision approach, several algorithms such as Oriented FAST and Rotated BRIEF (ORB) [35] and Signature of Histograms of Orientations (SHOT) [36] provide a solution for matching 3D shapes. However, the different data types, that is, point clouds and BIM models, with significant levels of noise and occlusions in the former and an abstract representation in the latter, make it challenging to exploit these algorithms [27]. An interesting alternative is to use Iterative Closest Point (ICP)-based [37] algorithms to retrieve the deviations between both datasets. These algorithms have proven to successfully retrieve the proper transformation between already closely aligned datasets, as is the case with coarsely (geo)referenced construction site datasets [7,25,38,39,40]. The common concept of all aforementioned algorithms is that they consider the datasets as rigid bodies. Consequently, the positioning errors are systematically exaggerated, as these methods do not account for sensor noise and registration errors.
Closely related to quality control is damage inspection. This task combines the detection component of progress monitoring with the assessment component of metric quality control. For instance, Valenca et al. [41] detect and assess cracks on concrete bridges using image processing supported by a laser scanning survey. Similar to the evaluation of positioning errors, the proposed approaches assume a perfect alignment between the as-design objects and the observed data. Cabaleiro et al. [42], Zhong et al. [43] and Adhikari et al. [44] all propose crack detection algorithms that build upon this assumption. It is interesting to note that this assumption does in fact hold up in smaller datasets, where drift errors are less rampant.
Another application closely related to positioning errors is the dimensional evaluation of individual elements. This is also typically expressed as the shortest Euclidean distances between the sampled observations and the as-design shape of the objects [45,46]. However, several works also evaluate the translations and rotations of parts of the constructed elements, such as presented by Wang et al. [47]. Dimension evaluations also include the association of the observed data with the parametric information that defines the object, such as the radius of a pipe [7,9,48] or tunnel [49]. These methods show that the percentage and distribution of the observed surface of the object is a key factor in the reliability of the evaluation and needs to be taken into consideration. The reverse operation, that is, scan-to-BIM, such as presented by Tran et al. [50], also requires quality control, but of the modeled objects. In these applications, exact positioning is used since the measured point clouds are assumed to be correct [51].
Given the transformations or deviations between the datasets, the results are communicated to the stakeholders. In its simplest form, this is a point cloud colored based on the deviations and a table showing the deviation parameters [20,29] (Figure 2a). Aside from the above-discussed errors, the visualisation is heavily cluttered with stray points and noise, which is challenging for a user to evaluate. Additionally, only the errors of the observed points are shown, while stakeholders are interested in the positioning error of the entire object. As such, there is a mismatch between this representation and the expectations of the stakeholders. Similar to this study, Anil et al. [29] color the BIM instead of the point cloud. Each element is colored in a single color conforming to accuracy intervals, providing much needed clarity but also making it impossible to assess sub-element deviations. A popular set of accuracy intervals are the Level of Accuracy (LOA) specification ranges, which specify the measured and represented accuracy within a 95% error interval [52]. The most commonly used intervals for construction tolerances are LOA30 [|ê_{w_i}| ≤ 0.015 m] (green), LOA20 [0.015 m < |ê_{w_i}| ≤ 0.05 m] (orange) and LOA10 [|ê_{w_i}| > 0.05 m] (red). Alternatively, Chen and Cho [27] chose to represent both, which is resource intensive. Tabular representations are also considered to show the deviations, such as presented in Wang et al. [53], Kalyan et al. [22] and Guo et al. [48]. However, the errors are still mainly reported for only the point cloud instead of the BIM.

3. Overview

In this section, a general overview of the pipeline is presented.
  • The pipeline has two inputs, that is, a BIM model and a (geo)referenced point cloud (Figure 3a). For the method to operate properly, the majority of constructed elements should be built within tolerances and a minimum portion of the surface area of each target object should be observed in the point cloud. Also, the (geo)referencing accuracy of the point cloud should be equal to or better than half of the average width of the target objects on the construction site.
  • Prior to the metric quality assessment, the data is preprocessed. For every target object in the BIM, a mesh representation is generated and the (geo)referenced point cloud is segmented using these objects. Following, several object characteristics are computed including the theoretical transformation resistance and the percentage of the observed surface of the object, both of which are used to enhance the error assessment.
  • The first step in the error assessment is the individual error estimation. An ICP-based algorithm is applied to compute the best fit transformation between an object’s as-design shape and its observed points. Given an object’s rotation and translation parameters, the 95% error vector is established for the object. However, this error is exaggerated by drift and (geo)referencing errors and must therefore be compensated.
  • The second step in the error assessment is the adjustment of each object’s error vector using the dominant transformation in the vicinity of that object (Figure 3b). To this end, a cost function is defined between every object and its nearest neighbors. Given each object’s characteristics and ICP error, the best fit parameters for the dominant transformation are defined. The adjusted positioning error vector is then established by applying both the individual and the dominant transformation.
  • The resulting error vectors are used to visualise each object’s position errors. The error vectors are assigned to one of the Level of Accuracy (LOA) specification ranges and each BIM object is colored conform its respective interval (Figure 3c). The result is a colored BIM model and tabular error ellipses along the cardinal axes which can be intuitively interpreted by the stakeholders.
The final part of the methodology covers the implementation of the method. The method is tested on a simulated construction site with known ground truth and compared to a traditional metric quality assessment. Subsequently, the method is also applied to a photogrammetric and a lidar dataset to evaluate the differences.

4. Methodology

In this section, a detailed explanation is presented of the metric quality assessment of the constructed objects on the construction site using point cloud data. To achieve this, the following assumptions are made: (1) the point clouds are (geo)referenced with a minimum accuracy of approximately half the object’s width, so alignment procedures will converge to the correct solution in the majority of cases, (2) a significant portion of each target object is observed and (3) the majority of elements are constructed within tolerance. To increase the method’s readability, the following notations are used throughout this section. Parameters and single objects are denoted by lowercase letters, for example, w_i is the boundary representation of a BIM object. Sets are denoted by uppercase letters, for example, P_{w_i} is the segmented point set associated with a certain BIM object. Finally, boldface uppercase letters are used for supersets, for example, the collection of all the point sets P_{w_i} is denoted P (in boldface). To simplify the notations, the preprocessing and local positioning are explained for a single object. In contrast, the adjusted error assessment is explained for the entire dataset due to the interactions between the objects.

4.1. Data Preprocessing

The data preprocessing entails the preparation of the BIM objects and the processing of the point cloud data (Figure 4a). First, the registered point clouds are structured in a voxel octree, which allows for faster neighborhood searches. In parallel, a triangular or quad mesh geometry is extracted from every object w_i ∈ W in the BIM that is to be metrically analysed. Second, the point cloud is segmented so that each segment contains the observations of a target object. As the point cloud is correctly placed save for drift and registration errors, a coarse spatial segmentation is conducted. To this end, the Euclidean distances between the point cloud P and a uniformly sampled BIM mesh Q_{w_i} are evaluated (Equation (1)) (Figure 4b).
$$P_{w_i} = \left\{ p \in P \mid \exists q \in Q_{w_i} : \min_{q} \lVert p - q \rVert < t_d \right\},$$
where Q_{w_i} is the point cloud sampled on the mesh and P_{w_i} the segmented point cloud for w_i based on the threshold distance t_d. Note that t_d should at least equal the maximum expected positioning error plus the registration and referencing errors, since points p ∉ P_{w_i} are not considered in the evaluation of w_i for efficiency purposes.
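The coarse segmentation of Equation (1) can be sketched with a nearest-neighbour query against the sampled mesh points. This is an illustrative Python/SciPy sketch (the paper's implementation uses a voxel octree in Grasshopper/CloudCompare); the function name is ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def segment_object_points(P, Q_wi, t_d):
    """Coarse spatial segmentation (Eq. 1): keep the points of the site
    cloud P that lie within threshold distance t_d of the uniformly
    sampled BIM mesh points Q_wi."""
    tree = cKDTree(Q_wi)          # index the sampled mesh points
    d, _ = tree.query(P)          # nearest mesh-sample distance per point
    return P[d < t_d]             # segmented point set P_wi

# Toy example: a 'wall' sampled on the plane x = 0.
Q_wi = np.array([[0.0, y, z] for y in np.linspace(0, 1, 5)
                             for z in np.linspace(0, 1, 5)])
P = np.array([[0.01, 0.5, 0.5],    # observation close to the wall
              [0.90, 0.5, 0.5]])   # clutter far away
P_wi = segment_object_points(P, Q_wi, t_d=0.1)
print(len(P_wi))  # 1
```

In a full pipeline, the octree/KD-tree would be built once over the whole site cloud and queried per object.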
Once P_{w_i} is established, two shape characteristics of w_i are extracted for the error estimation in a later stage, that is, the theoretical transformation resistance I_{w_i} and the percentage of the observed surface of the object O_{w_i}. The former is a metric that quantifies the proportional impact of a certain transformation on the Euclidean distances between the initial and transformed dataset. It is defined as the percentage of points of an object that is impacted by a specific transformation. For example, an object with a significant surface area oriented perpendicular to the Z-axis will have a high transformation resistance to translations along the Z-axis and rotations around the X(γ)- and Y(β)-axes, as the distance between points of both datasets is significantly impacted by such transformations. The theoretical transformation resistance of a translation, for example t_x, referred to as i_{t_x,w_i}, is defined as follows. First, every q ∈ Q_{w_i} is assigned to the cardinal axis it will impact the most given its normal n(q) (Figure 4b) (Equation (2)).
$$Q_{t_x, w_i} = \left\{ q \in Q_{w_i} \mid |n_x(q)| \geq |n_y(q)| \wedge |n_x(q)| \geq |n_z(q)| \right\}, \qquad i_{t_x, w_i} = \frac{|Q_{t_x, w_i}|}{|Q_{w_i}|},$$
where Q_{t_x,w_i} is the set of points assigned to the X-axis and n_x(q) the x-component of n(q). The theoretical transformation resistance i_{t_x,w_i} is then found by computing the ratio between the points contributing to the resistance of t_x and the samples Q_{w_i}. For rotations, the union of the point sets of both opposing axes is taken, as both sets will be impacted by the transformation, that is, i_{α,w_i} = |Q_{t_x,w_i} ∪ Q_{t_y,w_i}| / |Q_{w_i}|. The resulting set I_{w_i} thus has 3 translation {i_{t_x}, i_{t_y}, i_{t_z}} and 3 rotation {i_γ, i_β, i_α} parameters.
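The resistance computation of Equation (2) reduces to a per-axis count of dominant normals. A minimal Python sketch, assuming (as in the text) that a rotation's resistance combines the point sets of the two axes it impacts:

```python
import numpy as np

def transformation_resistance(normals):
    """Theoretical transformation resistance I_wi (Eq. 2): assign every
    mesh sample to the cardinal axis its normal dominates, then take the
    per-axis point ratios. Rotation resistances combine the two axes the
    rotation impacts, e.g. i_alpha (Z rotation) uses the X and Y sets."""
    dominant = np.argmax(np.abs(normals), axis=1)   # 0 = X, 1 = Y, 2 = Z
    i_tx, i_ty, i_tz = np.bincount(dominant, minlength=3) / len(normals)
    return {'t_x': i_tx, 't_y': i_ty, 't_z': i_tz,
            'gamma': i_ty + i_tz,   # rotation about X
            'beta':  i_tx + i_tz,   # rotation about Y
            'alpha': i_tx + i_ty}   # rotation about Z

# A horizontal slab: all normals along Z -> full resistance to t_z,
# gamma and beta, none to t_x, t_y or alpha (cf. the example in the text).
slab = np.tile([0.0, 0.0, 1.0], (100, 1))
print(transformation_resistance(slab))
```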
The second metric is the percentage of the observed surface of the object, O_{w_i}. Analogous to Q_{w_i}, P_{w_i} is divided over the 3 axes, after which every q ∈ Q_{t_x,w_i} is tested on whether an observation lies within the same threshold distance t_d as used in the initial segmentation. The observed surface of the object along the X-axis, referred to as O_{t_x,w_i}, is defined as follows (Equation (3)).
$$P_{t_x, w_i} = \left\{ p \in P_{w_i} \mid |n_x(p)| \geq |n_y(p)| \wedge |n_x(p)| \geq |n_z(p)| \right\},$$
$$O_{t_x, w_i} = \frac{1}{|Q_{t_x, w_i}|} \left| \left\{ q \in Q_{t_x, w_i} \mid \exists p \in P_{t_x, w_i} : \min_{p} \lVert p - q \rVert \leq t_d \right\} \right|,$$
where the test group for O_{α,w_i} is again composed of Q_{t_x,w_i} ∪ Q_{t_y,w_i}. Similar to I_{w_i}, the resulting set O_{w_i} has 3 translation and 3 rotation parameters.
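The coverage ratio of Equation (3) can be sketched as the share of mesh samples that have at least one observation nearby (an illustrative Python/SciPy sketch; the function name is ours):

```python
import numpy as np
from scipy.spatial import cKDTree

def observed_surface_ratio(Q_ax, P_ax, t_d):
    """Percentage of observed surface along one axis (Eq. 3): the share
    of mesh samples Q_ax that have at least one observed point of P_ax
    within the threshold distance t_d."""
    if len(P_ax) == 0:
        return 0.0
    d, _ = cKDTree(P_ax).query(Q_ax)     # nearest observation per sample
    return float(np.mean(d <= t_d))

# Half of a 1 m wall strip is occluded: only samples with z < 0.5 m
# have (slightly noisy) observations.
Q_ax = np.array([[0.0, 0.5, z] for z in np.linspace(0.0, 0.9, 10)])
P_ax = Q_ax[Q_ax[:, 2] < 0.5] + [0.005, 0.0, 0.0]
print(observed_surface_ratio(Q_ax, P_ax, t_d=0.05))  # 0.5
```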

4.2. Error Estimation

Following the preprocessing, each target object’s positioning errors on the construction site in relation to its as-design shape are determined. For every w_i, the 95% error ellipses are established along the cardinal axes. As discussed in Section 2, a mere shortest Euclidean distance evaluation between the as-design and observed shape yields inaccurate results due to sensor drift and poor (geo)referencing accuracy. We therefore propose a two-step procedure that first evaluates an object’s local positioning errors, after which the errors are adjusted considering the dominant transformation in the vicinity of the object.

4.2.1. Local ICP

First, the Iterative Closest Point (ICP) algorithm [37] is used to establish the local positioning errors between P_{w_i} and Q_{w_i} of each object w_i. Given a number of iterations, the best fit rigid body transformation T_{w_i} is computed based on the correspondences between P_{w_i} and Q_{w_i} (Equation (4)) (Figure 4b).
$$E_{w_i} = \min_{R, t} \sum_{\substack{q \in Q_{w_i} \\ p \in P_{w_i}}} \lVert (R p + t) - q \rVert^2, \qquad R^{T} R = I, \; \det(R) = 1,$$
where E_{w_i} is the mean squared error of the matches between P_{w_i} and Q_{w_i}. $T_{w_i} = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}$ consists of the rotation matrix R ∈ ℝ^{3×3}, which is decomposed into {α, β, γ}, and the translation vector t = {t_x, t_y, t_z} ∈ ℝ³. The result is a rigid body transformation T_{w_i} for every observed object w_i ∈ W in the scene.
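The local ICP step can be sketched as a minimal point-to-point ICP with closed-form (SVD/Kabsch) pose updates enforcing det(R) = 1, as in Equation (4). This is an illustrative Python sketch, not the authors' implementation (which builds on native Matlab code):

```python
import numpy as np
from scipy.spatial import cKDTree

def local_icp(P_wi, Q_wi, iters=30):
    """Minimal point-to-point ICP (Eq. 4): iteratively match each observed
    point to its closest mesh sample and solve the best-fit rigid body
    transform (R, t) in closed form via SVD, rejecting reflections."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(Q_wi)
    for _ in range(iters):
        P_t = P_wi @ R.T + t
        _, idx = tree.query(P_t)                  # closest-point matches
        Qm = Q_wi[idx]
        cP, cQ = P_t.mean(axis=0), Qm.mean(axis=0)
        H = (P_t - cP).T @ (Qm - cQ)              # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_s = Vt.T @ D @ U.T                      # det(R_s) = +1
        R = R_s @ R                               # compose with running pose
        t = R_s @ t + cQ - R_s @ cP
    P_t = P_wi @ R.T + t
    E = np.mean(np.sum((P_t - Q_wi[tree.query(P_t)[1]]) ** 2, axis=1))
    return R, t, E

# Recover a small known offset: mesh samples on a grid, observations
# translated by a few millimetres (well below the sample spacing).
Q_wi = np.array([[x, y, z] for x in np.linspace(0, 1, 6)
                           for y in np.linspace(0, 1, 6)
                           for z in np.linspace(0, 1, 6)])
P_wi = Q_wi + np.array([0.004, -0.003, 0.0])
R, t, E = local_icp(P_wi, Q_wi)
```

Because the offset is small relative to the sample spacing, the closest-point matches are correct from the first iteration and the recovered translation is the exact negative of the applied offset.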

4.2.2. Dominant Transformation

It is our hypothesis that while a single rigid body transformation does not apply to the full construction site due to drift, it does apply to a local cluster of objects W_i around w_i (Figure 4a). As such, we establish a dominant transformation T̂_{W_i} based on the neighbors of w_i. Notably, every translation {t_x, t_y, t_z} and rotation {γ, β, α} parameter of the dominant transformation is computed individually to maximise its reliability. First, the neighbors of w_i are retrieved within a threshold distance t_n by observing the Euclidean distance between the boundaries of the objects (Equation (5)).
$$W_i = \left\{ w_j \mid w_j \in W : d(w_i, w_j) \leq t_n \right\}.$$
Next, the frequency f_{w_i} ∈ F_{W_i} of each translation and rotation in W_i is determined by observing the relative differences between the transformations. For instance, the frequency of all the t_x in W_i, referred to as F_{t_x,W_i}, is defined as follows (Equation (6)).
$$F_{t_x, W_i} = \left\{ f_{w_i} \,\middle|\, t_{x,w_i}, t_{x,w_j} \in t_{x, W_i} : f_{w_i} = \frac{1}{|W_i|} \sum_{j=1}^{|W_i|} \lVert t_{x,w_i} - t_{x,w_j} \rVert \right\},$$
where t_{x,W_i} is the set of t_x parameters in the cluster W_i. Finally, the best fit transformation parameters for W_i are found by optimizing a score function based on the characteristics of each w_i ∈ W_i. The characteristics include the previously determined theoretical transformation resistance I_{w_i}, the object surface area A_{w_i}, the percentage of observed surface area O_{w_i}, the frequency F_{w_i} and the error value of the local transformation E_{w_i}. The values are combined in the feature vectors X_{w_i} ∈ X_{W_i}, which are normalised so that each characteristic sums to 1. Additionally, a set of weights Ω is defined that balances the influence of the parameters. Analogous to the notations above, the dominant translation along the X-axis, referred to as t̂_{x,W_i}, is found as follows (Equation (7)).
$$\hat{t}_{x, W_i} = t_{x, \hat{w}} \mid w_i \in W_i : \hat{w} = \operatorname*{argmax}_{w_i} \sum_{x_{t_x, w_i} \in X_{t_x, w_i}} \omega \, x_{t_x, w_i},$$
where X_{t_x,w_i} is the set of t_x characteristics {i_{t_x,w_i}, a_{t_x,w_i}, o_{t_x,w_i}, f_{t_x,w_i}, E_{w_i}} for w_i ∈ W_i and Ω = {ω_I, ω_A, ω_O, ω_F, ω_E} the weights of each characteristic. Analogously, every dominant translation {t̂_x, t̂_y, t̂_z} and rotation {γ̂, β̂, α̂} parameter is determined and combined into the dominant transformation T̂_{W_i}. As a result, this transformation is composed of the most reliable parameters in the vicinity of w_i.
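The per-parameter selection of Equation (7) amounts to a weighted argmax over normalised feature columns. A minimal Python sketch under our reading of the normalisation (each feature column is scaled to sum to 1); it assumes the frequency feature has already been inverted so that higher values mean more consistent neighbours:

```python
import numpy as np

def dominant_parameter(values, features, weights):
    """Select the most reliable parameter value (e.g. all t_x estimates
    in a cluster W_i): normalise each feature column over the cluster,
    score every object with the weight vector, return the argmax (Eq. 7).
    `features` is (n_objects, n_features)."""
    X = np.asarray(features, dtype=float)
    col = X.sum(axis=0)
    X = np.divide(X, col, out=np.zeros_like(X), where=col > 0)
    scores = X @ np.asarray(weights)
    return values[int(np.argmax(scores))]

# Three neighbours voting on t_x; per-object features: resistance i,
# surface area a, observed ratio o, (inverted) frequency f, ICP error e.
t_x = np.array([0.012, 0.010, 0.045])
features = [[0.9, 4.0, 0.8, 0.9, 0.9],   # well-observed, consistent
            [0.8, 3.5, 0.7, 0.8, 0.8],
            [0.2, 1.0, 0.3, 0.1, 0.2]]   # occluded outlier
weights = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
print(dominant_parameter(t_x, features, weights))  # 0.012
```

The same routine would be called six times per cluster, once per translation and rotation parameter.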

4.2.3. Error Assessment

Given the dominant transformation T̂_{W_i}, the adjusted positioning error of w_i is computed. In conformance with the LOA specification, the 95% confidence error vector e_{w_i} is computed for every w_i. To this end, half of the diagonal vector d_{w_i} is subjected to the inverse local ICP transformation T_{w_i}^{-1} and the dominant transformation T̂_{W_i} (Equation (8)).
$$e_{w_i} = 0.95 \left( \hat{T}_{W_i} T_{w_i}^{-1} \frac{d_{w_i}}{2} - \frac{d_{w_i}}{2} \right),$$
where the reliability of the error estimation is governed by the percentage of observed surface O_{w_i}, the root mean squared error of the local ICP, E_{w_i}/|P_{w_i}|, and the 95% confidence value of the 3D point accuracy σ_p. As such, objects with insufficient coverage are not evaluated and errors within either accuracy margin are not reported (Equation (9)).
$$\hat{e}_{w_i} = \begin{cases} e_{w_i}, & |e_{w_i}| \geq \dfrac{E_{w_i}}{|P_{w_i}|} \;\wedge\; |e_{w_i}| \geq 1.96\,\sigma_p \;\wedge\; O_{w_i} \geq t_o \\ 0, & \text{else,} \end{cases}$$
where ê_{w_i} = {e_x, e_y, e_z} is the final positioning error vector with the 95% error ellipses along X, Y and Z, taking into consideration the errors caused by drift, noise, clutter and registration errors.
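Equations (8) and (9) can be sketched together: push half the object's diagonal through both transforms, then suppress components below the noise floor and reject under-observed objects. An illustrative Python sketch under our reading of the thresholds (the coverage threshold `t_o` and the homogeneous-matrix convention are our assumptions):

```python
import numpy as np

def positioning_error(T_hat, T_wi, d_wi, E_rms, sigma_p, O_wi, t_o=0.5):
    """Adjusted 95% positioning error (Eqs. 8-9): apply the inverse local
    ICP transform and the dominant cluster transform to half the object
    diagonal, then zero out components below the ICP noise floor or the
    1.96*sigma_p point accuracy, and reject under-observed objects.
    T_hat and T_wi are 4x4 homogeneous matrices."""
    h = np.append(d_wi / 2.0, 1.0)                   # homogeneous coords
    e = 0.95 * ((T_hat @ np.linalg.inv(T_wi) @ h)[:3] - d_wi / 2.0)
    if O_wi < t_o:                                   # insufficient coverage
        return np.zeros(3)
    keep = (np.abs(e) >= E_rms) & (np.abs(e) >= 1.96 * sigma_p)
    return np.where(keep, e, 0.0)

# Object whose local ICP found a -0.1 m shift in X; identity dominant
# transform, so the full shift is reported as a positioning error.
T_wi = np.eye(4)
T_wi[0, 3] = -0.1
e_hat = positioning_error(np.eye(4), T_wi, np.array([1.0, 1.0, 1.0]),
                          E_rms=0.002, sigma_p=0.005, O_wi=0.8)
print(e_hat)
```

With these inputs the X component is 0.95 × 0.1 = 0.095 m and the Y/Z components fall below both thresholds and are zeroed.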

4.3. Analysis Visualisation

Given ê_{w_i}, w_i is colored according to an error schema (Figure 3c). In this research, the Level of Accuracy schema is adopted [52]. Each error component {e_x, e_y, e_z} is assigned to a specific 95% error interval of the LOA specification: LOA30 [|ê_{w_i}| ≤ 0.015 m] (green), LOA20 [0.015 m < |ê_{w_i}| ≤ 0.05 m] (orange) and LOA10 [|ê_{w_i}| > 0.05 m] (red). By coupling this error to the LOA ranges, the elements are colored, making the result intuitively interpretable for foremen, construction site managers and other stakeholders. The intervals are exposed to the user so that custom intervals can be defined for other applications, for example, accurate prefab quality assessment.
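The colour assignment can be sketched as a simple banding of the worst error component; the default band limits below follow the construction-tolerance intervals used here and are exposed as parameters, mirroring the customisable intervals described above:

```python
def loa_color(e_hat, loa30=0.015, loa20=0.05):
    """Map an object's 95% error vector to an LOA colour band.
    The largest absolute component drives the colour."""
    m = max(abs(c) for c in e_hat)
    if m <= loa30:
        return 'green'    # LOA30: within tolerance
    if m <= loa20:
        return 'orange'   # LOA20
    return 'red'          # LOA10: large positioning error

print(loa_color([0.004, 0.002, 0.0]))  # green
print(loa_color([0.095, 0.0, 0.0]))    # red
```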

4.4. Implementation

The registration and georeferencing of the point clouds can be performed in any commercial photogrammetric or lidar package (depending on the sensor). The method itself is implemented in McNeel Rhinoceros Grasshopper [54]. As such, the preprocessed point cloud data is directly imported from drive or accessed as native Autodesk Revit [55] or McNeel Rhinoceros objects [54]. The BIM objects are directly accessed in Revit through the Rhino.Inside API [56], which is also used to extract the mesh geometries of the BIM elements from Revit and to visualise the colored objects in their native environment. The local ICP builds upon native Matlab code and is exposed through our Scan-to-BIM Grasshopper plug-in, along with the necessary point cloud handling code. The global quality assessment builds upon the CloudCompare API [57], which is extended to fuel the cost function of the dominant transformation selection.

5. Experiments

The experiments are conducted on data that is obtained from a construction site in Ghent, Belgium. The site is approximately 80 by 80 metres. The considered stage is an intermediate construction state of the parking lot in the basement. An as-design BIM model of that phase is available as well as recorded data via remote sensing techniques (both with terrestrial laser scanning and photogrammetry), as will be discussed later on.

5.1. Data Description

A multitude of datasets is used for the experiments (Table 1). For the vast majority of the experiments, the analyses are performed on synthetic data of the site, since only then exact and hence trustworthy ground truth data is available. If the ground truth data were also measured, even via more precise sensors and/or registration techniques, phenomena such as occlusions and sensor noise would remain, hindering unambiguous conclusions on the achieved results. Synthetic data avoids such registration errors and misinterpretations, for example due to occlusions, and thus allows truly assessing the performance of the proposed method. As the ultimate goal is to apply the proposed framework to actual recorded real-world construction data, the validity of our approach is subsequently also verified on the observed point clouds. However, no highly precise and accurately registered ground truth is available for these datasets, making it debatable whether elements are (in)correctly assigned to a certain error class. For the realistic data, it is hence only meaningful to compare the results obtained with the photogrammetric data versus those obtained with the terrestrial laser scanning data. This again motivates the choice for synthetic data to assess the validity of our approach.
The synthetic data is generated from the sampled point cloud of the as-design BIM model. It consists of 129 sub point clouds, one for each BIM object. A multitude of transformations is applied to these sub point clouds to create a set of realistic point clouds. The synthetic data is subjected to two types of errors that are frequently encountered in point cloud recordings: errors caused by drift and by georeferencing. The magnitude of the errors is determined based on previous experiences and experiments such that realistic values are set for the various errors. The applied georeferencing error is a translation of { 0.03 ; 0.02 ; 0.04 } (m) along the 3 main axes. In contrast to georeferencing, drift is a more predictable but variable phenomenon. These errors are therefore determined based on the distance to the nearest control point: based on the distance from its centroid to the nearest control point, each object receives a varying drift error with a maximum of { 0.015 ; 0.02 ; 0 } (m) and { 0 ; 0 ; 0.017 } (rad) for the translation and rotation, respectively. Finally, actual positioning errors are imposed on an arbitrary number of objects, with a maximum translation of 0.04 m and a maximum rotation of 0.087 rad around the 3 main axes. The result is a set of point cloud observations, each with a slightly different transformation, which represents a realistic dataset save for noise and occlusions.
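The distance-dependent drift model described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear scaling with distance, the function names and the restriction to a z-axis rotation are assumptions for the sketch.

```python
import numpy as np

# Maxima from the experiment description (translation in m, rotation in rad).
MAX_DRIFT_T = np.array([0.015, 0.02, 0.0])
MAX_DRIFT_R = np.array([0.0, 0.0, 0.017])

def drift_for_object(centroid, control_points, max_dist):
    """Scale the maximum drift by the normalised distance from the object
    centroid to the nearest control point (linear model, an assumption)."""
    d = min(np.linalg.norm(centroid - c) for c in control_points)
    f = min(d / max_dist, 1.0)
    return f * MAX_DRIFT_T, f * MAX_DRIFT_R

def rot_z(a):
    """Rotation matrix about the z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def apply_drift(points, centroid, t, r):
    """Rotate the sub point cloud about its own centroid, then translate."""
    R = rot_z(r[2])  # only the z-rotation is non-zero in this sketch
    return (points - centroid) @ R.T + centroid + t
```

An object halfway between the control point and the maximum distance thus receives half of the maximum drift, which mimics the gradual error accumulation away from the control network.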
The transformations described in the previous paragraph deliver a basic, realistic point cloud. To truly assess the proposed method, an extra series of transformations is applied, based on 4 parameters, to reflect the different types of Point Cloud Data (PCD) that can be encountered. The alterations concern the site itself (percentage of displaced elements; amount of displacement) as well as the data acquisition (amount of drift; rotational drift axes). Two sets of displacement and drift parameters (normal and extreme) are applied to several datasets. The normal displacement parameters are a maximum translation of 0.04 m and rotation of 0.087 rad; the extreme displacement parameters are 0.1 m and 0.122 rad, respectively. Likewise, the normal drift parameters are { 0.015 ; 0.02 ; 0 } (m) and { 0 ; 0 ; 0.017 } (rad) for the translations and rotations, while in the extreme case these are { 0.025 ; 0.03 ; 0.02 } (m) and { 0 ; 0 ; 0.061 } (rad), respectively.

5.2. Synthetic Data Assessments

As mentioned earlier, the majority of the experiments are conducted on the more trustworthy synthetic data (both for the tweaked datasets representing measured data and for the ground truth) such that the true performance of our proposed method is evaluated.
A first experiment evaluates the influence of the three separate induced errors (georeferencing, drift and object displacement). When performing an absolute analysis, which is currently the common procedure, a global ICP algorithm is used. It considers the datasets as rigid bodies and calculates the best-fit alignment between them. The relative influence of each error is deduced from the resulting translation and rotation parameters of the ICP optimisation; to correctly assess the positioning error, these parameters should be zero. The experiment shows that drift has the largest negative influence on the assessment: it results in an erroneous optimal alignment that is displaced by approximately 5 mm, whereas the maximum applied drift translation is 3 cm. In contrast, the georeferencing error is correctly filtered out and the influence of incorrectly located elements is negligible compared to the drift (as long as the majority of the elements is placed correctly). These experiments point out that a quality assessment on an absolute project level is defective, especially where large drift effects are to be expected in the recorded dataset. However, applying ICP solely on the object level is also error prone since it assumes a perfect alignment of both datasets. Our proposed framework is therefore situated between these two extremes, as it assesses each element together with its nearest neighbouring elements.
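The rigid-body best-fit alignment at the core of each ICP iteration has a closed-form least-squares solution (Kabsch/SVD). The sketch below shows that single step for given correspondences; a full ICP additionally iterates a nearest-neighbour correspondence search, which is omitted here.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst,
    i.e., the closed-form core of a point-to-point ICP iteration."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

With a drift-free, correctly georeferenced dataset this transform should be the identity; the experiment above measures how far the recovered parameters deviate from zero under each induced error.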
Using the described approach, which subdivides the construction site into smaller parts for a locally more correct alignment, a series of experiments is performed on the 6 discussed datasets. The results are also output visually by coloring each element according to its adjusted error vector, coupled to the LOA ranges. The colored assessment results are presented in Figure 5 and Figure 6. They show the result of the currently conventional absolute analysis as well as the proposed relative analysis, alongside the ground truth data. Furthermore, a numerical representation is included, depicting the number of elements per LOA class. It should be noted that the floor slab is not considered in any of the relative analyses: it spans the entire construction site and is so large that it suffers from internal drift, which cannot be compensated. It is therefore colored gray in all visual results and excluded from the resulting tables.
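The coloring step reduces to mapping the magnitude of each element's adjusted error vector to a LOA band. A minimal sketch follows; the band limits here are the commonly cited USIBD LOA upper ranges and serve as an assumption for illustration, not as the paper's exact coupling.

```python
import numpy as np

# (upper bound of the error magnitude in metres, LOA class, display colour);
# the specific limits are an illustrative assumption.
LOA_BANDS = [
    (0.015, "LOA30", "green"),
    (0.050, "LOA20", "orange"),
    (np.inf, "LOA10", "red"),
]

def classify(error_vector):
    """Map an element's adjusted error vector to a LOA class and colour."""
    e = float(np.linalg.norm(error_vector))
    for upper, loa, colour in LOA_BANDS:
        if e <= upper:
            return loa, colour
```

Each BIM element is then shaded with the returned colour in the native environment, giving the intuitive green/orange/red indicator shown in Figures 5 and 6.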
The validity of our approach is clearly reflected in the figures: all erroneously placed elements are detected as faulty. In almost all cases the assigned LOA class is correct. Furthermore, each of the incorrectly located elements is assigned either to the correct class or to the one above (i.e., the class with worse results). On the downside, several other elements are also flagged as faulty despite their correct placement. Considering the separate analyses, a value of 30% incorrectly placed elements yields slightly worse results compared to 10% and 20%. However, it is rather unlikely to encounter a site where 30% of the elements are severely displaced. In contrast to the absolute method, the analyses point out that the proposed approach remains valid even when a larger drift or element displacement is applied.
As we opted for a colored visualisation of the analysis according to the LOA ranges, the results should be interpreted with care: the LOA ranges are quite broad, so elements with a rather large difference in error can still be assigned to the same class. Therefore, we also assess the adjusted error vectors that lie at the basis of the visualisation. The magnitudes of the error vectors for the 6 datasets are shown in Table 2.
For a realistic dataset (Dataset 1), very decent results are obtained. Moreover, the proposed framework also performs quite well on the more challenging datasets (Datasets 2–6). A median difference of 1 mm between the error vector of our approach and the ground truth is obtained for all datasets, even the highly deformed ones (with rather high standard deviations ranging from 5 to 13 mm, mostly due to a small number of severe outliers). Our method also vastly outperforms the traditional absolute method, whose differences range from 6 to 12 mm (with standard deviations ranging from 11 to 48 mm). Considering the individual elements, elements 23, 28 and 108 are the most representative of the executed assessments. These elements are respectively a medium-sized wall (approximately 4 metres), a column and a small wall (approximately 2 metres), and they are spread over the scene. Relatively decent results are obtained for the absolute analysis and even better results with the proposed relative analysis; roughly 80% of the data has values similar to these elements for both analyses. An excellent example is element 8, a medium-sized wall, where the absolute analysis performs very poorly while our method yields acceptable results. For some elements, both analyses underperform. In general, our method performs well for small to medium-sized objects. However, in some cases anomalies arise, mostly due to either large (erroneous) elements in the close neighbourhood that strongly influence the results of the surrounding smaller elements, a group of neighbouring elements that are all located erroneously in the same direction (i.e., all translated over approximately the same distances along the x- and y-axes), or a relatively small number of points on the element.
Furthermore, in rare cases such as element 84, a medium-sized wall, the relative method scores worse than the absolute analysis. Comparing the different dataset results, the worst results are obtained in dataset 6 (large drift) for the absolute method, and likewise in dataset 6, closely followed by dataset 3 (30% displaced elements), for the relative method.
In conclusion, for all datasets our approach proves beneficial over the conventional method. It is able to detect all incorrectly placed elements, although a small set of correct elements is also flagged as faulty, ranging from approximately 1% in the more realistic datasets to a maximum of 6% in the heavily tweaked datasets. Our method therefore tends to overdetect. However, given the purpose of the analysis, namely metric quality control for effective construction site monitoring, overdetection is preferable to underdetection. When incorrect elements are not detected during the assessment of the site at a specific moment, the consequences of their erroneous placement are likely to be noticed only later in the building process, and the later in the construction process, the higher the repair or modification costs.

5.3. Realistic Data

We also executed experiments on actual recorded data. As our method is sensor independent, it works both for photogrammetric point cloud data and for data recorded with a terrestrial laser scanner. The photogrammetric dataset is constructed from a total of 1453 images spread over the scene. In addition, a terrestrial laser scanning dataset was recorded, in which a total of 19 scans are aligned via the cloud-to-cloud technique. Both datasets are registered in the same coordinate system as the BIM model. Via total station measurements and a set of surveyor points used for the stake-out of the buildings, the coordinates of a new set of Ground Control Points (GCPs) on the surrounding facades are determined. In turn, these GCPs enable the registration of the photogrammetric dataset in the BIM model reference system, ensured via a pre-registered reference module that incorporates the correct coordinate system. The followed method is described in depth in previous work [8]. The laser scanning dataset is registered via the traditional method of indicating ground control points. The subsequent quality assessments of both resulting datasets by the absolute and relative methods are presented in Figure 7. In contrast to the synthetic data experiments, the proposed framework provides results similar to the absolute method, under the assumption that every object was correctly built. Partial or complete occlusions of elements due to the cluttered environment (as displayed in Figure 2) play a major role in this phenomenon: occlusions cause sparser PCD, which in turn makes it harder to correctly align two smaller datasets compared to larger ones, as fewer correspondences are present. Moreover, in the considered remote sensing datasets, some elements are not yet completed (e.g., the molding of an in-situ cast concrete wall). In future work, it is therefore required to first assess each element's state and subsequently perform the analysis only on the finished elements.
No ground truth data is available for the captured datasets, hence the only meaningful assessment is the comparison between the results achieved for both dataset types. Remarkably, despite the larger number of points in the TLS dataset (2.5 million) compared to the photogrammetric dataset (1.7 million), the total number of elements considered for the quality assessment is significantly lower for the TLS dataset (59) than for the photogrammetric dataset (91). Although the photogrammetric dataset consists of fewer points, these lie more on the element surfaces compared to TLS. This shows that state-of-the-art photogrammetric technologies can compete with low-end laser scanners in terms of accuracy and can also be used for quality control tasks. A driving factor in the poor lidar results is the noise caused by mixed-edge pixels and ghosting, which is less of a problem in photogrammetric processes.
A second remark is that a significant portion of the larger elements is considered erroneously located in the relative assessments, and to a lesser extent also in the absolute assessment of the TLS dataset, while the opposite is the case for the vast majority of the smaller elements. It is assumed that for large elements, such as some of the present walls of 30 m and more, internal drift effects start to play a role. Another remark is that some elements are determined as correctly built and thus assigned to LOA30 in one dataset, while contrastingly being assigned to the erroneous LOA10 class in the other dataset. It can be concluded that this says more about the recorded data quality than about the actual (in)correct location of the elements.

6. Discussion

In this section, the pros and cons of the procedure are discussed and compared to the alternative methods presented in the literature. A first major aspect in the error assessment is the data association between the observed and as-design shapes of the objects, that is, which point correspondences are used for the detection and evaluation. As stated in the literature study, there are three strategies relevant to this work: absolute, relative and descriptor-based correspondences. Absolute methods such as the one presented by Bosché et al. [9] reach sub-centimetre accuracy with correctly positioned point cloud data. When this is not the case, however, our method, based on relative positioning, clearly outperforms the absolute evaluation, as it accounts for drift and (geo)referencing errors that can reach up to 15 cm for mobile systems [26]. At its core, however, ICP is unintelligent and can be outperformed by descriptor-based correspondences such as ORB or SHOT. Especially for objects with significant detailing, descriptors have proven very successful, reaching sub-centimetre accuracy in realistic conditions [58]. However, the photogrammetry and lidar datasets reveal that discriminative points near the object's edges or corners are systematically occluded and error prone, as the as-design shape frequently is an abstraction of the constructed state.
A second aspect is the accuracy of the remote sensing inputs. Given perfect data, an absolute comparison is the most straightforward and suitable solution for reliable quality control. One might expect this accuracy to be a given with the ongoing sensor advancements, but current trends beg to differ: construction site monitoring and quality control will remain challenging due to the speed and low budget with which these tasks must be performed. As such, future work will investigate to which extent low-end sensors, enhanced through relative positioning, can be deployed for metric quality control on construction sites. This requires a deeper understanding of the drift effects that currently plague construction sites, which is understudied in the literature. Overall, if fast quality control can be achieved with affordable low-end sensors, the impact on construction validation will be enormous. In future work, we will therefore study the decrease in construction failures and failure costs to fuel the valorisation of this technology.
A third aspect is the validity of the rigid body transformation assumption. In the presented method, it is hypothesized that a single rigid body transformation can be defined for every object, building upon the assumption that the error propagation in the registration of consecutive sensor setups is negligible for a single object. While the experiments prove that this holds in general, it does not apply to extremely large objects. For instance, the floor slab and some of the larger walls in the experiments span the entire project and are thus subject to drift. This internal drift is currently not accounted for in the presented method and is part of future work. A possible solution is to discretize the objects into sections and treat each section individually. This will provide a more nuanced error and progress estimation for large objects, which are typically built in different stages.
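The proposed discretization could be sketched as follows: split an object's point cloud into fixed-length sections along its longest axis, so each section can be evaluated as its own rigid body. This is a hypothetical illustration of the future-work idea; the section length and the longest-axis heuristic are assumptions.

```python
import numpy as np

def discretize(points, section_length=5.0):
    """Split a point cloud into consecutive sections along its longest
    cardinal axis (section_length in metres is an assumed parameter)."""
    extents = points.max(axis=0) - points.min(axis=0)
    axis = int(np.argmax(extents))                 # longest dimension
    coords = points[:, axis]
    bins = np.floor((coords - coords.min()) / section_length).astype(int)
    return [points[bins == b] for b in range(bins.max() + 1)]
```

A 30 m wall would thus be evaluated as six 5 m rigid bodies, each with its own error vector, which also yields a per-section progress estimate for objects built in stages.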
A fourth aspect is the error vector definition. The LOA specification prescribes LOA ranges for the 95% confidence intervals of the errors. However, this applies only to observations that actually belong to the target object, which are challenging to isolate in the cluttered and noisy environments that construction sites are. Moreover, since the objective is to track positioning errors, our method defines a search area in which every observation is considered regardless of its association with the target object. As a result, the LOA inliers are littered with false positives that negatively impact the error assessment. In the literature, the average, the median and even the 70th percentile of the Euclidean distances between the correspondences have been reported, which are incorrect as they include the noise, false positives, registration errors, (geo)referencing errors and even the resolution of the point cloud. In contrast, we base our error vectors on the ICP transformations, which correctly report the positioning errors but are sensitive to occlusions and clutter. This is compensated in the score function for the dominant transformation, which considers the transformation resistance and the object's occlusions, leading to very promising results.
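The bias of distance-based metrics can be demonstrated with a toy experiment, assuming known per-point correspondences and Gaussian sensor noise (both assumptions of this sketch): averaging Euclidean distances overestimates a positioning error because the noise magnitude is always added in, while the translation recovered from a transformation estimate (here simply the mean displacement vector) is unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)
t_true = np.array([0.01, 0.0, 0.0])             # 10 mm positioning error
noise = rng.normal(scale=0.01, size=(5000, 3))  # 10 mm sensor noise
disp = t_true + noise                           # per-point displacements

# Distance-based metric: mean of per-point Euclidean distances.
dist_based = np.linalg.norm(disp, axis=1).mean()        # biased upward

# Transformation-based metric: norm of the estimated translation.
transform_based = np.linalg.norm(disp.mean(axis=0))     # ~ 0.01 m
```

The distance-based value lands well above the true 10 mm error (the noise inflates every distance), while the transformation-based value recovers it closely, which is the rationale for basing the error vectors on the ICP transformations.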
Aside from the pros and cons, there are important limitations of the method to consider. First, there is the scalability of the method. The initial segmentation with a linear computational complexity makes the method very efficient, but each new object still has to be tested against the entire point cloud, rendering larger projects unfeasible. Second, the method currently does not deal with internal drift, which will be solved through discretization in future work. Third, the error vector representation along the cardinal axes is less suited for more complex and larger objects; in future work, we will study a Principal Component Analysis (PCA)-based representation and how object discretization can lead to more intuitive error vectors. Lastly, the method favors the impact of larger objects, which are not necessarily placed more correctly. Furthermore, limitations in the conducted experiments also exist. So far, the proposed method was assessed on synthetic data, for the reasons described earlier. However, more elaborate future assessments are essential to verify the method's validity on actual recorded data. For instance, the influence of (severe) occlusions, absent in the synthetic dataset, is still an understudied subject. Key to such recorded data experiments, however, is a very accurate ground truth dataset, unavailable in our experiments so far, that serves as comparison and evaluation.

7. Conclusions & Future Work

In this paper, a novel metric quality assessment method is proposed to evaluate whether objects on a construction site are built within tolerance. The presented two-step procedure computes the systematic error vector of each entity regardless of sensor drift, noise, clutter and (geo)referencing errors. In the first step, the individual positioning error of each object is determined based on ICP. However, as this is error prone due to occlusions, uneven data distributions and the above errors, the surroundings of each object are also considered: in the second step, we establish a dominant transformation in the vicinity of each object and compensate the object's initial error vector with this transformation, thus eliminating the above systematic errors. The resulting adjusted positioning error is reported with respect to the Level of Accuracy (LOA) specification. In this work, LOA10, 20 and 30 are considered, which form the common range of construction site tolerances and their errors. Notably, the method relies on a spatial data association in the first step, and thus the initial point cloud should be (geo)referenced with a maximum error equal to the search area around each object. This research will lead to lower failure costs as errors are detected at an early stage, where they can be mitigated without expensive building adjustments.
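The compensation idea of the two-step procedure can be sketched in a few lines. This is a deliberately simplified stand-in: the paper selects the dominant transformation with a cost function over full rigid transformations, whereas the sketch below uses the per-axis median of the neighbours' translation errors as an assumed proxy for that dominant systematic part.

```python
import numpy as np

def adjusted_errors(raw_errors, neighbours):
    """raw_errors: (n, 3) per-object ICP translation errors.
    neighbours: for each object, the indices of its nearby objects.
    Subtracting the dominant (here: median) neighbourhood error removes
    the shared systematic drift/(geo)referencing component."""
    adjusted = np.empty_like(raw_errors)
    for i, nb in enumerate(neighbours):
        dominant = np.median(raw_errors[nb], axis=0)  # local systematic part
        adjusted[i] = raw_errors[i] - dominant
    return adjusted
```

If a whole neighbourhood shares a 2 cm drift and one object is additionally misplaced by 5 cm, the drift cancels out for every object and only the genuine 5 cm positioning error survives, which is exactly the drift invariance the method targets.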
The method is tested on a variety of test sites. First, the method's performance is empirically determined and compared to a traditional error assessment. To this end, a simulated dataset is generated from an actual construction site, with a fixed ground truth and systematically introduced errors that represent the drift, (geo)referencing and actual positioning errors. Next, the outputs of our method and the traditional method are compared for actual construction site measurements obtained by photogrammetry and lidar. Overall, the experiments show that the presented method significantly outperforms the traditional method. Its ability to mitigate noise, drift and (geo)referencing errors results in a more realistic error assessment, which is vital to help stakeholders reduce failure costs. However, some errors still remain, such as the internal drift of massive objects: for example, a floor slab that spans the entire construction site does not comply with the rigid body transformation assumption and will therefore be error prone.
Future work includes the preparation for valorisation by increasing the method's robustness and its ability to process different types of sensor data. We will also study to which degree failure costs are lowered by periodic quality control on construction sites. On the scientific front, we are studying the discretization of the input BIM objects to mitigate internal drift, the implementation of a dynamic search radius to establish the dominant transformation, and an automated georeferencing procedure, which is currently frequently overlooked in the state of the art. Additionally, we are investigating opportunities to assess internal object dimensions, regardless of the object's type or shape, given the error vectors.

Author Contributions

Maarten Bassier, Stan Vincke and Heinder De Winter conceptualized the research and Maarten Vergauwen supervised the work. All authors have read and agreed to the published version of the manuscript.


Funding

This project has received funding from the VLAIO COOCK programme (grant agreement HBC.2019.2509), the FWO research foundation (FWO PhD SB fellowship 1S11218N) and the Geomatics research group of the Department of Civil Engineering, TC Construction at the KU Leuven in Belgium.

Conflicts of Interest

The authors declare no conflict of interest.


References

  1. Man, S.S.; Chan, A.H.; Wong, H.M. Risk-taking behaviors of Hong Kong construction workers—A thematic study. Saf. Sci. 2017, 98, 25–36. [Google Scholar] [CrossRef]
  2. Wong, T.K.M.; Man, S.S.; Chan, A.H.S. Critical factors for the use or non-use of personal protective equipment amongst construction workers. Saf. Sci. 2020, 126, 104663. [Google Scholar] [CrossRef]
  3. Love, P.E.; Teo, P.; Morrison, J. Revisiting Quality Failure Costs in Construction. J. Constr. Eng. Manag. 2018, 144, 05017020. [Google Scholar] [CrossRef]
  4. Chen, L.; Luo, H. A BIM-based construction quality management model and its applications. Autom. Constr. 2014, 46, 64–73. [Google Scholar] [CrossRef]
  5. Chen, K.; Lu, W.; Peng, Y.; Rowlinson, S.; Huang, G.Q. Bridging BIM and Building: From a Literature Review to an Integrated Conceptual Framework. Int. J. Proj. Manag. 2015, 33, 1405–1416. [Google Scholar] [CrossRef]
  6. Golparvar-Fard, M.; Peña Mora, F.; Savarese, S. D4AR - A 4-dimensional Augmented Reality Model for Automating Construction Progress Monitoring Data Collection, Processing and Communication. J. Inf. Technol. Constr. 2009, 14, 129–153. [Google Scholar]
  7. Bosché, F. Automated Recognition of 3D CAD Model Objects in Laser Scans and Calculation of As-built Dimensions for Dimensional Compliance Control in Construction. Adv. Eng. Inform. 2010, 24, 107–118. [Google Scholar] [CrossRef]
  8. Vincke, S.; Vergauwen, M. Geo-Registering Consecutive Construction Site Recordings Using a Pre-Registered Reference Module. Remote Sens. 2020, 12, 1928. [Google Scholar] [CrossRef]
  9. Bosché, F.; Guillemet, A.; Turkan, Y.; Haas, C.T.; Haas, R. Tracking the Built Status of MEP Works: Assessing the Value of a Scan-vs-BIM System. J. Comput. Civ. Eng. 2014, 28, 05014004. [Google Scholar] [CrossRef]
  10. Lehtola, V.V.; Kaartinen, H.; Nüchter, A.; Kaijaluoto, R.; Kukko, A.; Litkey, P.; Honkavaara, E.; Rosnell, T.; Vaaja, M.T.; Virtanen, J.P.; et al. Comparison of the selected state-of-the-art 3D indoor scanning and point cloud generation methods. Remote Sens. 2017, 9, 796. [Google Scholar] [CrossRef]
  11. Vincke, S.; Bassier, M.; Vergauwen, M. Image Recording Challenges for Photogrammetric Construction Site Monitoring. ISPRS Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2019, XLII-2/W9, 747–753. [Google Scholar] [CrossRef]
  12. Braun, A.; Borrmann, A. Combining inverse photogrammetry and BIM for automated labeling of construction site images for machine learning. Autom. Constr. 2019, 106, 102879. [Google Scholar] [CrossRef]
  13. Wang, Q.; Kim, M.K. Applications of 3D point cloud data in the construction industry: A fifteen-year review from 2004 to 2018. Adv. Eng. Inform. 2019, 39, 306–319. [Google Scholar] [CrossRef]
  14. Son, H.; Bosché, F.; Kim, C. As-built data acquisition and its use in production monitoring and automated layout of civil infrastructure: A survey. Adv. Eng. Inform. 2015, 29, 172–183. [Google Scholar] [CrossRef]
  15. Alizadehsalehi, S.; Yitmen, I. A Concept for Automated Construction Progress Monitoring: Technologies Adoption for Benchmarking Project Performance Control. Arab. J. Sci. Eng. 2019, 44, 4993–5008. [Google Scholar] [CrossRef]
  16. Omar, T.; Nehdi, M.L. Data acquisition technologies for construction progress tracking. Autom. Constr. 2016, 70, 143–155. [Google Scholar] [CrossRef]
  17. Alizadehsalehi, S.; Yitmen, I. The Impact of Field Data Capturing Technologies on Automated Construction Project Progress Monitoring. Procedia Eng. 2016, 161, 97–103. [Google Scholar] [CrossRef]
  18. Pučko, Z.; Šuman, N.; Rebolj, D. Automated continuous construction progress monitoring using multiple workplace real time 3D scans. Adv. Eng. Inform. 2018, 38, 27–40. [Google Scholar] [CrossRef]
  19. Behnam, A.; Wickramasinghe, D.C.; Ghaffar, M.A.A.; Vu, T.T.; Tang, Y.H.; Isa, H.B.M. Automated progress monitoring system for linear infrastructure projects using satellite remote sensing. Autom. Constr. 2016, 68, 114–127. [Google Scholar] [CrossRef]
  20. Nguyen, C.H.P.; Choi, Y. Comparison of point cloud data and 3D CAD data for on-site dimensional inspection of industrial plant piping systems. Autom. Constr. 2018, 91, 44–52. [Google Scholar] [CrossRef]
  21. Asadi, K.; Asce, S.M.; Ramshankar, H.; Noghabaei, M.; Han, K.; Asce, A.M. No Title. J. Comput. Civ. Eng. 2019, 33, 04019031. [Google Scholar] [CrossRef]
  22. Kalyan, T.S.; Zadeh, P.A.; Staub-French, S.; Froese, T.M. Construction Quality Assessment Using 3D as-built Models Generated with Project Tango. Procedia Eng. 2016, 145, 1416–1423. [Google Scholar] [CrossRef]
  23. Zhang, C.; Kalasapudi, V.S.; Tang, P. Rapid data quality oriented laser scan planning for dynamic construction environments. Adv. Eng. Inform. 2016, 30, 218–232. [Google Scholar] [CrossRef]
  24. Lou, J.; Xu, J.; Wang, K. Study on Construction Quality Control of Urban Complex Project Based on BIM. Procedia Eng. 2017, 174, 668–676. [Google Scholar] [CrossRef]
  25. Bosché, F. Plane-based registration of construction laser scans with 3D/4D building models. Adv. Eng. Inform. 2012, 26, 90–102. [Google Scholar] [CrossRef]
  26. Rebolj, D.; Pučko, Z.; Babič, N.Č.; Bizjak, M.; Mongus, D. Point cloud quality requirements for Scan-vs-BIM based automated construction progress monitoring. Autom. Constr. 2017, 84, 323–334. [Google Scholar] [CrossRef]
  27. Chen, J.; Cho, Y.K. Point-to-point Comparison Method for Automated Scan-vs-BIM Deviation Detection. In Proceedings of the 17th International Conference on Computing in Civil and Building Engineering, Tampere, Finland, 4–7 June 2008. [Google Scholar]
  28. Tang, P.; Anil, E.; Akinci, B.; Huber, D. Efficient and Effective Quality Assessment of As-Is Building Information Models and 3D Laser-Scanned Data. In International Workshop on Computing in Civil Engineering; ASCE: Miami, FL, USA, 2011; pp. 486–493. [Google Scholar] [CrossRef]
  29. Anil, E.B.; Tang, P.; Akinci, B.; Huber, D. Deviation analysis method for the assessment of the quality of the as-is Building Information Models generated from point cloud data. Autom. Constr. 2013, 35, 507–516. [Google Scholar] [CrossRef]
  30. Liu, J.; Zhang, Q.; Wu, J.; Zhao, Y. Dimensional accuracy and structural performance assessment of spatial structure components using 3D laser scanning. Autom. Constr. 2018, 96, 324–336. [Google Scholar] [CrossRef]
  31. Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for point-cloud shape detection. Comput. Graph. Forum 2007, 26, 214–226. [Google Scholar] [CrossRef]
  32. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  33. Maalek, R.; Lichti, D.D.; Ruwanpura, J.Y. Automatic recognition of common structural elements from point clouds for automated progress monitoring and dimensional quality control in reinforced concrete construction. Remote Sens. 2019, 11, 1102. [Google Scholar] [CrossRef]
  34. Xu, Y.; Tuttas, S.; Hoegner, L.; Stilla, U. Geometric Primitive Extraction from Point Clouds of Construction Sites Using VGS. IEEE Geosci. Remote Sens. Lett. 2017, 14, 424–428. [Google Scholar] [CrossRef]
  35. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar] [CrossRef]
  36. Salti, S.; Tombari, F.; Di Stefano, L. SHOT: Unique signatures of histograms for surface and texture description. Comput. Vis. Image Underst. 2014, 125, 251–264. [Google Scholar] [CrossRef]
  37. Besl, P.; McKay, N. A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  38. Kaneko, S.; Kondo, T.; Miyamoto, A. Robust matching of 3D contours using iterative closest point algorithm improved by M-estimation. Pattern Recognit. 2003, 36, 2041–2047. [Google Scholar] [CrossRef]
  39. Gruen, A.; Akca, D. Least squares 3D surface and curve matching. ISPRS J. Photogramm. Remote Sens. 2005, 59, 151–174. [Google Scholar] [CrossRef]
  40. Du, S.; Zheng, N.; Ying, S.; You, Q.; Wu, Y. An Extension of the ICP Algorithm Considering Scale Factor. In Proceedings of the 2007 IEEE International Conference on Image Processing, San Antonio, TX, USA, 16–19 September 2007; Volume 5, pp. 193–196. [Google Scholar] [CrossRef]
  41. Valença, J.; Puente, I.; Júlio, E.; González-Jorge, H.; Arias-Sánchez, P. Assessment of cracks on concrete bridges using image processing supported by laser scanning survey. Constr. Build. Mater. 2017, 146, 668–678. [Google Scholar] [CrossRef]
  42. Cabaleiro, M.; Lindenbergh, R.; Gard, W.F.; Arias, P.; van de Kuilen, J.W. Algorithm for automatic detection and analysis of cracks in timber beams from LiDAR data. Constr. Build. Mater. 2017, 130, 41–53. [Google Scholar] [CrossRef]
  43. Zhong, X.; Peng, X.; Yan, S.; Shen, M.; Zhai, Y. Assessment of the feasibility of detecting concrete cracks in images acquired by unmanned aerial vehicles. Autom. Constr. 2018, 89, 49–57. [Google Scholar] [CrossRef]
  44. Adhikari, R.S.; Moselhi, O.; Bagchi, A. Image-based retrieval of concrete crack properties for bridge inspection. Autom. Constr. 2014, 39, 180–194. [Google Scholar] [CrossRef]
  45. Kim, M.K.K.; Cheng, J.C.P.; Sohn, H.; Chang, C.C.C. A framework for dimensional and surface quality assessment of precast concrete elements using BIM and 3D laser scanning. Autom. Constr. 2015, 49, 225–238. [Google Scholar] [CrossRef]
  46. Kim, M.K.K.; Wang, Q.; Park, J.W.W.; Cheng, J.C.; Sohn, H.; Chang, C.C.C. Automated dimensional quality assurance of full-scale precast concrete elements using laser scanning and BIM. Autom. Constr. 2016, 72, 102–114. [Google Scholar] [CrossRef]
  47. Wang, Q.; Kim, M.K.; Cheng, J.C.; Sohn, H. Automated quality assessment of precast concrete elements with geometry irregularities using terrestrial laser scanning. Autom. Constr. 2016, 68, 170–182. [Google Scholar] [CrossRef]
  48. Guo, J.; Yuan, L.; Wang, Q. Time and Cost Analysis of Geometric Quality Assessment of Structural Columns based on 3D Terrestrial Laser Scanning. Autom. Constr. 2020, 110, 103014. [Google Scholar] [CrossRef]
  49. Han, J.Y.; Guo, J.; Jiang, Y.S. Monitoring tunnel profile by means of multi-epoch dispersed 3-D LiDAR point clouds. Tunn. Undergr. Space Technol. 2013, 33, 186–192. [Google Scholar] [CrossRef]
  50. Tran, H.; Khoshelham, K.; Kealy, A. Geometric comparison and quality evaluation of 3D models of indoor environments. ISPRS J. Photogramm. Remote Sens. 2019, 149, 29–39. [Google Scholar] [CrossRef]
  51. Bassier, M.; Vergauwen, M. Topology Reconstruction of BIM Wall Objects From Point Cloud Data. Remote Sens. 2020, 12, 1800. [Google Scholar] [CrossRef]
  52. U.S. Institute of Building Documentation. USIBD Level of Accuracy (LOA) Specification Guide v3.0-2019; Technical Report; U.S. Institute of Building Documentation: Tustin, CA, USA, 2019. [Google Scholar]
  53. Wang, J.; Sun, W.; Shou, W.; Wang, X.; Wu, C.; Chong, H.Y.; Liu, Y.; Sun, C. Integrating BIM and LiDAR for Real-Time Construction Quality Control. J. Intell. Robot. Syst. Theory Appl. 2015, 79, 417–432. [Google Scholar] [CrossRef]
  54. Robert McNeel & Associates. Rhinoceros 6. Technical Report Last Visited on 06-06-2020. 2019. Available online: (accessed on 11 September 2020).
  55. Autodesk Inc. Revit; Technical Report, Last Visited on 06-06-2020; Autodesk Inc.: Mill Valley, CA, USA, 2020. [Google Scholar]
  56. Robert McNeel & Associates. Rhino.Inside; Technical Report, Last Visited on 06-06-2020; Robert McNeel & Associates: Washington, WA, USA, 2019. [Google Scholar]
  57. Girardeau-Montaut, D. CloudCompare. 2016. Available online: (accessed on 10 September 2020).
  58. Liu, H.; Tan, T.H.; Kuo, T.Y. A Novel Shot Detection Approach Based on ORB Fused with Structural Similarity. IEEE Access 2019, 8, 2472–2481. [Google Scholar] [CrossRef]
Figure 1. Example inputs for automated construction site monitoring, showing the extent of the clutter, noise, occlusions and other obstacles that automated procedures must overcome.
Figure 2. Example overview of the different visualizations of the metric analyses of the objects on the construction site.
Figure 3. Overview of the metric quality assessment procedure with intermediate results.
Figure 4. Overview of the methodology to conduct a metric quality control of a BIM object in the coordinate system of the project regardless of drift, sensor noise, clutter and georeferencing accuracy.
Figure 5. Comparative overview for the first 3 datasets with varying percentages of displaced elements. The BIM elements are coloured according to their locational displacement, coupled to the LOA classes, for both the conventional absolute analysis and the proposed relative analysis, alongside the expected ground-truth results. Corresponding tables give the number of elements per LOA class.
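The colour coding used in the figures follows from binning each element's locational deviation into an LOA class. The following is a minimal sketch of such a binning; the `loa_class` helper and its exact bin edges are assumptions based on the upper-range limits of the USIBD LOA Specification, not the authors' implementation:

```python
def loa_class(deviation_mm: float) -> str:
    """Map an absolute locational deviation (mm) to an LOA class label.
    Bin edges are assumed from the USIBD LOA Specification upper-range
    limits (LOA50 <= 1 mm ... LOA10 > 50 mm)."""
    bins = [(1.0, "LOA50"), (5.0, "LOA40"), (15.0, "LOA30"), (50.0, "LOA20")]
    for upper, label in bins:
        if abs(deviation_mm) <= upper:
            return label
    return "LOA10"  # deviations beyond 50 mm

# Classify a few simulated deviation magnitudes (mm)
print([loa_class(d) for d in (0.5, 4.0, 12.0, 30.0, 80.0)])
```

Each label can then be mapped to a fixed colour when visualising the BIM elements.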
Figure 6. Comparative overview for the second 3 datasets, with extra-large drift and displacements, and with drift around the 3 main axes. The BIM elements are coloured according to their locational displacement, coupled to the LOA classes, for both the conventional absolute analysis and the proposed relative analysis, alongside the expected ground-truth results. Corresponding tables give the number of elements per LOA class.
Figure 7. Comparative overview of the conducted absolute and relative locational quality assessments for 2 real-world datasets captured through photogrammetry and terrestrial laser scanning. Corresponding tables give the number of elements per LOA class. The floor slab is excluded from both assessment methods as it is only observed from a single (top) face.
Table 1. Overview of the various datasets and the applied transformations.
BIM: As-design BIM model
BIM PCD: Sampled PCD of the above BIM model
Dataset 1: Dataset with 10% of the elements with normal (N) displacements and a normal drift applied around the Z-axis only
Dataset 2: Dataset with 20% of the elements with normal displacements and a normal drift applied around the Z-axis only
Dataset 3: Dataset with 30% of the elements with normal displacements and a normal drift applied around the Z-axis only
Dataset 4: Dataset with 10% of the elements with normal displacements and a normal drift applied around the X-, Y- and Z-axis
Dataset 5: Dataset with 10% of the elements with extra large (XL) displacements and a normal drift applied around the Z-axis only
Dataset 6: Dataset with 10% of the elements with normal displacements and an extra large drift applied around the Z-axis only
TLS: Recorded Terrestrial Laser Scanning dataset of the construction site
PHOT: Recorded photogrammetric dataset of the construction site
ErrorTransformation parameters
E g e o r e f T { 0.03 ; 0.02 ; 0.04 } ( m )
E d r i f t (N) T m a x { 0.015 ; 0.020 ; 0.000 } ( m ) and R m a x { 0 ; 0 ; 0.017 } ( r a d )
E d r i f t (XL) T m a x { 0.025 ; 0.030 ; 0.020 } ( m ) and R m a x { 0 ; 0 ; 0.061 } ( r a d )
E w _ d i s p l . (N) T m a x { 0.04 ; 0.04 ; 0.04 } ( m ) and R m a x { 0.087 ; 0.087 ; 0.087 } ( r a d )
E w _ d i s p l . (XL) T m a x { 0.10 ; 0.10 ; 0.10 } ( m ) and R m a x { 0.122 ; 0.122 ; 0.122 } ( r a d )
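Synthetic datasets such as those in Table 1 can be produced by applying a bounded drift to the sampled point cloud. Below is a minimal NumPy sketch using the normal (N) drift bounds above; the `apply_drift` helper is illustrative and not the authors' implementation:

```python
import numpy as np

def apply_drift(points, t_max=(0.015, 0.020, 0.000), rz_max=0.017, seed=0):
    """Apply a synthetic drift to an Nx3 point cloud: a random rotation
    around the Z-axis (up to rz_max rad) plus a random translation
    bounded by t_max metres per axis, mimicking the normal (N) drift."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(-rz_max, rz_max)  # rotation angle around the Z-axis
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    t = rng.uniform(-1.0, 1.0, 3) * np.asarray(t_max)  # bounded translation
    return points @ Rz.T + t

pts = np.random.rand(1000, 3) * 10.0  # dummy 10 m x 10 m x 10 m cloud
drifted = apply_drift(pts)
print(np.abs(drifted - pts).max(axis=0))  # per-axis displacement magnitudes
```

Because the translation bound along Z is zero and the rotation is purely around Z, the Z-coordinates remain unchanged in this configuration.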
Table 2. Overview of the error vectors for a subset (5 out of the 129) of representative elements, alongside the overall median and standard deviation (sd) values. For simplicity, all error vectors were transformed to a single root-mean-square error (RMSE) value.
Error Vector (mm)   Dataset 1   Dataset 2
overall median      6   0       6   1
overall sd          15  5       14  5

Error Vector (mm)   Dataset 3   Dataset 4
overall median      7   1       6   0
overall sd          14  12      11  5

Error Vector (mm)   Dataset 5   Dataset 6
overall median      6   0       12  1
overall sd          14  5       48  13
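Table 2 collapses each 3D error vector into a single RMSE value. The exact aggregation is not specified in this section, so the following sketch assumes one common reading, the root mean square of the vector components:

```python
import math

def rmse(error_vector):
    """Root mean square of the components of a positioning error vector,
    one plausible way to collapse (dx, dy, dz) into a single value."""
    return math.sqrt(sum(e * e for e in error_vector) / len(error_vector))

# e.g. a 6 mm error in X and Y with no height error
print(round(rmse((6.0, 6.0, 0.0)), 2))
```

The overall median and sd in Table 2 would then be computed over these per-element RMSE values.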