Article

Three-Dimensional Digital Documentation of Cultural Heritage Site Based on the Convergence of Terrestrial Laser Scanning and Unmanned Aerial Vehicle Photogrammetry

Department of Cultural Heritage Conservation Sciences, Kongju National University, Gongju 32588, Korea
*
Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2019, 8(2), 53; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi8020053
Submission received: 27 November 2018 / Revised: 3 January 2019 / Accepted: 21 January 2019 / Published: 24 January 2019
(This article belongs to the Special Issue Data Acquisition and Processing in Cultural Heritage)

Abstract
Three-dimensional digital technology is important in the maintenance and monitoring of cultural heritage sites. This study focuses on combining terrestrial laser scanning and unmanned aerial vehicle (UAV) photogrammetry to establish a three-dimensional (3D) model and the associated digital documentation of the Magoksa Temple, Republic of Korea. Terrestrial laser scanning and UAV photogrammetry were used to acquire the perpendicular geometry of the buildings and site, with UAV photogrammetry yielding a higher planar data acquisition rate in upper zones, such as the roof of a building, than terrestrial laser scanning. On comparing the accuracy of the two technologies based on ground control points, laser scanning was observed to provide higher positional accuracy than photogrammetry. The overall discrepancy between the two technologies was found to be small enough for the generation of convergent data. Thus, the terrestrial laser scanning and UAV photogrammetry data were aligned and merged after conversion into compatible file formats. A 3D model with planar and perpendicular geometries was then developed based on the hybrid point cloud. This study demonstrates the potential of integrating terrestrial laser scanning and UAV photogrammetry for the 3D digital documentation and spatial analysis of cultural heritage sites.

1. Introduction

There are many cultural heritage sites in the Republic of Korea. However, these sites undergo constant deformation owing to deterioration and disasters. The acquisition of geospatial information based on numerical data is therefore very important for systematic management that can respond to such deformation. The use of three-dimensional (3D) coordinate data for multi-faceted analysis has recently increased in the conservation and management of cultural heritage sites [1,2,3,4,5,6]. In particular, the precise investigation and monitoring of cultural heritage sites in terms of preventive conservation has gained attention, and digital documentation processes have been recognized as an essential element of conservation rather than of active repair [7,8,9].
In addition, the efficiency of survey methods should be considered when acquiring 3D numerical data because most heritage sites are fairly large. To this end, 3D terrestrial laser scanning and unmanned aerial vehicle (UAV) photogrammetry are considered representative documentation technologies because they create a digital model from recorded data that is nearly identical to the physical geometry [10,11,12,13]. These technologies have relatively low operational costs [14,15] and can rapidly and accurately acquire high-resolution images [16,17]. Recently, multiple studies on making the equipment lighter and increasing its precision have been performed [18]. In particular, these technologies have been applied in conservation assessment [19,20], building information modeling (BIM) [21,22,23], and archaeological documentation [24,25,26].
Terrestrial laser scanning measures the 3D spatial information of an object within a certain distance of the ground using a laser [27,28]. This method can quickly acquire the geometry of a large cultural heritage site owing to its high operation speed, mobility, and accessibility. However, the data acquisition rate of terrestrial laser scanning is highest in the perpendicular direction.
UAV photogrammetry offers complementary advantages over laser scanning [29]. The orthoimage created via UAV photogrammetry allows distances, angles, plane coordinates, and areas to be measured directly because the relationships between different locations are identical to those on a topographic map [30]. Moreover, UAV photogrammetry has a higher planar data acquisition rate in upper zones, such as the roof of a building, than terrestrial laser scanning.
In general, temple sites include multiple buildings and are relatively wide, open, and complex. Therefore, documenting the shape of an entire site exclusively through terrestrial laser scanning is difficult. In particular, it is difficult to acquire data at positions that the scanner cannot access, such as the roof of a building. In addition, the acquired data have a low point density owing to the restricted field of view, even when the scan is performed for a long duration and from a high position. The superior mobility and accessibility of UAV photogrammetry must be actively utilized to overcome these disadvantages. If terrestrial laser scanning and UAV photogrammetry can be appropriately integrated, multidirectional numerical information, together with the arrangement of architectural heritage sites, can be acquired [31,32,33,34]. Accordingly, the fusion of laser scanning and photogrammetry has been widely used for the 3D modeling of buildings [35,36] and cultural heritage [37,38].
This study applied both terrestrial laser scanning and UAV photogrammetry to the 3D digital documentation of the Magoksa Temple in Gongju, a representative temple of the Republic of Korea [39]. Point clouds of the temple site obtained from the two technologies were integrated, after their accuracy had been verified, to produce an orthoimage and a 3D model using specialized software. In particular, the overall workflow of 3D fusion modeling and possible applications of the completed terrain model are discussed. The results of this study are expected to contribute to the development of accurate 3D documentation and spatial analysis of heritage sites.

2. Study Area and Method

2.1. Study Area

The Magoksa Temple, located in Gongju, Republic of Korea, was built in the 7th century. The temple was included in the UNESCO World Heritage list in 2018 as one of the “Traditional Buddhist Mountain Temples of Korea.” It is a good example of how a Korean-style temple layout achieves harmony between the inner and outer spaces and the surrounding scenery, in this case a mountainous area. This feature led to the temple’s recognition as a World Heritage site. The Magoksa Temple comprehensively exhibits the thought and consciousness unique to Korean Buddhism, together with the life and culture associated with a mountain temple.
The Magoksa Temple is divided into northern and southern territories by a central stream, as shown in Figure 1a. The two territories have both horizontal and vertical arrangements. The northern territory primarily contains the Daeungbojeon Hall, the Daegwangbojeon Hall, and a five-story stone pagoda. The main heritage site of the southern territory is the Yeongsanjeon Hall. The five-story stone pagoda in this temple has a Lamaistic upper part, which is unique in the Republic of Korea; this hybrid style evidences active cultural exchanges with the Yuan Dynasty in the 13th century (Figure 1b).

2.2. Method

The terrestrial laser scanner (Leica ScanStation C10) used to create the 3D model of the Magoksa Temple is based on the time of flight of a laser pulse (Figure 2a). The scanner has a maximum scan speed of 50,000 points per second and positional and distance accuracies of 6 mm and 4 mm, respectively, at ranges between 1 and 50 m. A specialized external camera with a fisheye lens (SIGMA 8-mm F3.5 EX DG Circular Fisheye) was used for texture mapping. The scanned point cloud data were post-processed using Cyclone 9.3 software (Leica Geosystems).
UAV photogrammetry was conducted using a rotorcraft drone (Leica Aibot X6). The UAV is an autonomously flying, high-performance hexacopter with stable flight characteristics, specially designed for demanding tasks in surveying, industrial inspection, agriculture, and forestry. It carries various on-board sensors, including a GPS receiver, a gyroscope, an ultrasonic transducer, and a smart camera system. In addition, to create the 3D model, a 24 MP mirrorless digital camera (SONY Alpha 6000) with a 20-mm lens was mounted on the drone for photogrammetry. Generally, camera calibration is performed using the bundle-adjustment option of specialized software [40] or dedicated methods [41]. However, no separate camera calibration was performed in this study because a new camera and lens were used for photogrammetry.
A gimbal is therefore an essential part of the UAV equipment because it smooths the angular movements of the camera and helps to acquire better images [42]. The UAV used here provides high-quality data and has a camera mount with automatic pitch-and-roll compensation of the camera gimbal for smoother pictures and video output (Figure 2b).
PhotoScan 1.3.4 Professional Edition software (Agisoft) was used for the photogrammetric processing. The software allows professional 3D modeling from the captured aerial photographs, automatically processing them to produce a 3D reconstruction. A ground control point (GCP) survey of the Magoksa Temple was conducted at five points (CP1–CP5) using virtual reference station (VRS) GPS (Trimble R6 Model 3) and levelling (Figure 2c). The VRS GPS system used for this survey is accurate to 8 mm + 0.5 ppm horizontally and 15 mm + 0.5 ppm vertically.
Specifically, the survey points were selected at positions that were not obstructed by nearby obstacles and where a clear view could be secured. The coordinates of the survey points were obtained by fixing four unified control stations (UCS) and conducting a GPS baseline analysis. In addition, levelling was used to improve the vertical accuracy of the GPS survey. These GCPs were used to reorient the terrestrial laser scanning and UAV photogrammetry models to true north and to obtain further positional information on the land surface.

3. Results and Discussion

3.1. Terrestrial Laser Scanning

The most important consideration in a field scan is to select suitable scan positions for the object of interest. In this study, the field scan was performed from 82 positions, considering factors such as the measurable range and overlap, as the Magoksa Temple covers a fairly wide area. In addition, the stone pagoda, a key part of the heritage site, was scanned from 24 positions by dividing it into lower and upper sections based on its height. The 3D shape of the temple was generated by registering, merging, and filtering the scan data and inputting the GCP data.
Registration was conducted by selecting at least three corresponding points from each point cloud. All registered point clouds were then merged and filtered to generate a complete object model. Finally, the laser scanning model was converted to an absolute coordinate system based on the five GCPs to increase the accuracy and reliability of the topographical survey. Figure 3 shows the point clouds and texture-mapping images of the Magoksa Temple obtained by merging and filtering the laser scanning data. Notably, the overall 3D shape reflected the perpendicular geometry of the buildings well; however, high-level planar point clouds (e.g., the geometry of the roofs) were not sufficiently dense for geometric reconstruction.
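Both the scan-to-scan registration from three or more corresponding points and the subsequent georeferencing to the five GCPs reduce to estimating a rigid (rotation plus translation) transform between point sets. The following is a minimal sketch of that estimation using the SVD-based (Kabsch) solution; the coordinates are purely illustrative, and this is not the Cyclone workflow itself.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t mapping src -> dst (Kabsch/SVD).

    src, dst: (N, 3) arrays of at least three corresponding points
    (e.g., targets picked in two scans, or model points paired with GCPs).
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                  # proper rotation (det = +1)
    t = c_dst - R @ c_src
    return R, t

# Hypothetical corresponding points picked in two overlapping scans
scan_pts = np.array([[2.1, 0.4, 1.0], [5.3, 1.2, 0.9], [3.7, 4.8, 2.2], [1.0, 3.3, 0.5]])
ref_pts  = np.array([[12.0, 7.1, 1.1], [15.1, 8.0, 1.0], [13.4, 11.5, 2.3], [10.8, 10.0, 0.6]])

R, t = rigid_transform(scan_pts, ref_pts)
residual = ref_pts - (scan_pts @ R.T + t)
print("RMS residual (m):", np.sqrt((residual ** 2).sum(axis=1).mean()))
```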

3.2. UAV Photogrammetry

UAV photogrammetry was used to determine the overall topography of the Magoksa Temple and to acquire 3D numerical data of the upper parts, where using the terrestrial laser scanner was not possible. This was achieved by planning a survey suitable for the Magoksa Temple and then performing overall modeling using specialized software. First, two UAV flight plans were determined for the southern and northern territories of the Magoksa Temple. Next, the flight ranges (approximately 250 × 400 m) and paths were established on a satellite map using specialized software, and the following conditions were set: vertical camera angle; altitude = 80 m; overlap = 80%; sidelap = 60%; flight speed = 3 m/s; and flight time = 20 min. The UAV flight was then performed under autonomous control along the preset path.
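As a rough sanity check of these flight parameters, the ground sample distance, image footprint, and exposure spacing follow directly from the camera geometry. The sketch below assumes nominal Alpha 6000 sensor dimensions (23.5 × 15.6 mm, 6000 × 4000 px, not stated above) and that the forward overlap applies along track and the sidelap across track.

```python
# Illustrative flight-geometry check (nominal APS-C sensor specs assumed).
focal_mm   = 20.0
altitude_m = 80.0
sensor_w_mm, sensor_h_mm = 23.5, 15.6
img_w_px, img_h_px       = 6000, 4000
overlap, sidelap         = 0.80, 0.60

# Ground sample distance (m/px) and image footprint on flat ground (m)
gsd_m  = (sensor_w_mm / 1000.0) * altitude_m / (focal_mm / 1000.0) / img_w_px
foot_w = gsd_m * img_w_px                                           # across track
foot_h = (sensor_h_mm / 1000.0) * altitude_m / (focal_mm / 1000.0)  # along track

base      = foot_h * (1.0 - overlap)   # distance between successive exposures
line_step = foot_w * (1.0 - sidelap)   # spacing between adjacent flight lines

print(f"GSD ~ {gsd_m * 100:.1f} cm/px, footprint ~ {foot_w:.0f} x {foot_h:.0f} m")
print(f"photo base ~ {base:.0f} m, flight-line spacing ~ {line_step:.0f} m")
```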
Image processing involved aligning the images, inputting the GCPs, reconstructing the geometry, building point clouds, and mapping textures. First, the 213 still photographs obtained by the UAV camera were aligned based on the shooting sequences and image feature points. Then, geometric reconstruction using the five GCPs was performed to scale and orient the 3D model. Finally, the orthoimage was completed by texture mapping. The data acquisition rate in the planar direction was higher for UAV photogrammetry than for terrestrial laser scanning (Figure 4).
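The same alignment, GCP input, dense reconstruction, and texture-mapping steps can also be scripted through the PhotoScan (now Metashape) Python API. The sketch below is illustrative only: method names and options differ between versions, the photo folder and output file are hypothetical, and GCP marker placement is assumed to be done interactively.

```python
# Minimal batch sketch of the PhotoScan processing chain (version-dependent API;
# defaults used everywhere, so settings will differ from the actual project).
import glob
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("uav_photos/*.JPG"))   # hypothetical photo folder

chunk.matchPhotos()        # feature detection and matching
chunk.alignCameras()       # sparse reconstruction (image alignment)
# ... place the five GCP markers and import their surveyed coordinates here ...
chunk.optimizeCameras()    # refine the alignment against the GCPs
chunk.buildDenseCloud()    # dense point cloud
chunk.buildModel()         # mesh from the dense cloud
chunk.buildUV()
chunk.buildTexture()       # texture mapping
chunk.buildDem()           # digital elevation model
chunk.buildOrthomosaic()   # planar orthoimage
doc.save("magoksa_uav.psx")
```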

3.3. Accuracy Assessment

The 3D digital documentation of individual architectural heritage buildings and artifacts has been performed in the past. Recent developments in laser scanning technology and UAV photogrammetry have extended the scope of digital documentation to large-scale architectural heritage complexes and sites. In particular, UAV photogrammetry has been used to acquire 3D numerical data across wide areas, which, when combined with terrestrial laser scanning data, provides a large number of point clouds and reduces the time required for data acquisition. In this respect, studies on the digital documentation of cultural heritage sites using different technologies have been conducted in conjunction with the deployment of UAVs [43,44,45,46,47]. In this paper, we focused on evaluating the reliability of the point data obtained via terrestrial laser scanning and UAV photogrammetry, comparing the positional accuracy of the point cloud data to enhance the convergent usability of both datasets.
First, we compared the coordinate errors of the laser scanning and photogrammetry results against the five GCPs to evaluate the overall quality and absolute positional accuracy (Table 1, Figure 5). The X, Y, and Z coordinates of the laser scanning result deviated in the ranges of −0.009–0.005 m (RMS 0.006 m), −0.015–0.001 m (RMS 0.008 m), and −0.011–0.099 m (RMS 0.045 m), respectively. The X, Y, and Z coordinates of the photogrammetry result exhibited discrepancies ranging from −0.005–0.044 m (RMS 0.024 m), −0.028–0.027 m (RMS 0.021 m), and −0.009–0.079 m (RMS 0.044 m), respectively. Overall, the X and Y coordinates of the terrestrial laser scanning and UAV photogrammetry results agree with the GCP values at millimeter-level accuracy, whereas the Z coordinate shows larger, centimeter-level discrepancies. Comparing the two survey methods, the terrestrial laser scanning result is more accurate than that of UAV photogrammetry.
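The per-axis statistics in Table 1 follow directly from the coordinate residuals at the GCPs; a short numpy check using the laser scanning residuals reproduced from Table 1:

```python
import numpy as np

# Model coordinates at the five GCPs minus the surveyed GCP coordinates
# (terrestrial laser scanning values from Table 1, in metres).
tls_err = np.array([
    [ 0.000, -0.004,  0.000],
    [ 0.001,  0.001,  0.000],
    [-0.007,  0.000, -0.001],
    [ 0.005, -0.008, -0.011],
    [-0.009, -0.015,  0.099],
])

mean = tls_err.mean(axis=0)                 # per-axis mean error
rms  = np.sqrt((tls_err ** 2).mean(axis=0)) # per-axis root-mean-square error
print("mean XYZ:", mean)   # ~ [-0.002, -0.005, 0.017]
print("RMS  XYZ:", rms)    # ~ [ 0.006,  0.008, 0.045]
```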
The relative positional accuracy between terrestrial laser scanning and UAV photogrammetry was analyzed using randomly selected points (Figure 6). Coordinate discrepancies were calculated at a total of 40 points on the ground and on buildings (Table 2, Figure 7). The maximum negative and positive differences on the ground were −0.020 m and 0.030 m in the X coordinate (RMS 0.015 m), −0.028 m and 0.025 m in the Y coordinate (RMS 0.015 m), and −0.012 m and 0.030 m in the Z coordinate (RMS 0.010 m). The X, Y, and Z coordinates on the buildings deviated in the ranges of −0.037–0.074 m (RMS 0.043 m), −0.049–0.049 m (RMS 0.028 m), and −0.071–0.057 m (RMS 0.037 m), respectively. Overall, the coordinate differences were less than ±0.030 m on the ground and ±0.080 m on the buildings, although deviations at some building points were significantly higher. The RMS on the buildings was approximately two to three times larger than that on the ground. The point cloud comparison was an effective approach for investigating the correspondence between the terrestrial laser scanning and UAV photogrammetry results. In future, detailed and macroscopic analyses of the overall datasets should be performed using graphical tools.
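A programmatic analogue of this point-wise comparison is to take check points sampled from one cloud, find each point's nearest neighbour in the other cloud, and summarize the per-axis differences. The sketch below uses synthetic data for illustration; the values reported above come from manually selected corresponding points.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_discrepancy(check_pts, other_cloud):
    """Per-axis difference from each check point to its nearest neighbour in
    the other cloud, plus the per-axis RMS of those differences."""
    tree = cKDTree(other_cloud)
    _, idx = tree.query(check_pts)
    diff = check_pts - other_cloud[idx]
    rms = np.sqrt((diff ** 2).mean(axis=0))
    return diff, rms

# Hypothetical data: 40 check points from a "TLS" cloud compared to a "UAV" cloud
rng = np.random.default_rng(0)
uav_cloud  = rng.uniform(0, 100, size=(100_000, 3))
tls_checks = uav_cloud[rng.choice(len(uav_cloud), 40)] + rng.normal(0, 0.02, (40, 3))

diff, rms = cloud_discrepancy(tls_checks, uav_cloud)
print("per-axis RMS (m):", rms)
```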
To determine the shape accuracy of individual buildings, in addition to the overall deviation analysis of the cultural heritage site, the roof section of the Daegwangbojeon Hall, a representative building in the Magoksa Temple, was examined. Figure 8 shows that the photogrammetric point cloud data complement areas of low point density in the scanning point cloud data. Thus, the accuracy of both laser scanning and photogrammetry is high enough for wide use in survey drawings of buildings as well as in BIM. In this respect, a 3D model that combines terrestrial laser scanning and UAV photogrammetry is deemed a useful technology for the digital documentation of large and complex cultural heritage sites such as the Magoksa Temple, because such models jointly overcome the limitations of data acquisition through a single technology.

3.4. Integrated Three-Dimensional Modeling

Considering the properties of terrestrial laser scanning and UAV photogrammetry, it is clear that the two technologies provide complementary information. However, the synergistic characteristics of both systems can be fully utilized only after the successful registration of the laser scanning and photogrammetry data relative to a common reference frame [48]. A popular computational method for this purpose is the Iterative Closest Point (ICP) algorithm [49,50].
This study created a 3D model of the Magoksa Temple, including the terrain and buildings, by combining laser scanning data focused on perpendicular point clouds with UAV photogrammetry data focused on planar point clouds. To achieve this, the scanning and photogrammetry datasets were first brought together through model-based fusion using the five GCPs. Then, the relative position of the laser scanning and photogrammetry results was refined via bundle adjustment and the ICP algorithm such that the root-mean-square (RMS) error given by the sum of squared distances between corresponding points was minimized. The process was continued until either the RMS fell below a given threshold or no further improvement could be achieved. The overlap error statistics between the laser scanning and photogrammetry results show that the RMS gradually decreased over the alignment stages, with a final value of 0.005 m (Table 3).
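The staged alignment in Table 3 corresponds to a coarse-to-fine ICP in which the correspondence search distance is repeatedly tightened until the RMS no longer improves. The sketch below illustrates this using Open3D as an example library (not the software used in this study); the input files are hypothetical, and both clouds are assumed to be already in the common GCP-based frame.

```python
# Coarse-to-fine point-to-point ICP with a shrinking search distance.
# Note: older Open3D releases expose this under o3d.registration instead
# of o3d.pipelines.registration.
import numpy as np
import open3d as o3d

tls = o3d.io.read_point_cloud("magoksa_tls.ply")   # reference (laser scanning)
uav = o3d.io.read_point_cloud("magoksa_uav.ply")   # to be aligned (photogrammetry)

transform = np.eye(4)        # initial guess: clouds share the GCP-based frame
prev_rmse = np.inf
for max_dist in (0.100, 0.050, 0.025, 0.012):      # search distances as in Table 3
    result = o3d.pipelines.registration.registration_icp(
        uav, tls, max_dist, transform,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    print(f"stage {max_dist:.3f} m: RMS {result.inlier_rmse:.4f} m")
    if result.inlier_rmse >= prev_rmse:            # stop when no further improvement
        break
    prev_rmse, transform = result.inlier_rmse, result.transformation

uav.transform(transform)     # photogrammetric cloud aligned to the TLS frame
```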
The completed 3D shape exhibited complete planar and perpendicular geometries, including the wooden buildings and the surrounding environment (Figure 9). As shown in Figure 10, the Magoksa Temple has east–west and south–north extents of approximately 100 and 230 m, respectively, and is located between 100 and 130 m above sea level. Although the Magoksa Temple appears in its layout to be a mountain temple, the survey results indicate that it is located on a site that is virtually flat. This 3D terrain model is expected to serve as basic data for the conservation and management of the Magoksa Temple.
The workflows for acquiring point cloud data of the Magoksa Temple using terrestrial laser scanning and UAV photogrammetry, together with the 3D modeling process that combines the data from these two technologies, can be summarized as follows (Figure 11). First, laser scanning collects a primary set of 3D shape data through field scans, data registration, merging, filtering, and the input of GCPs. UAV photogrammetry, through feature-point matching, GCP input, point cloud creation, and texture mapping, produces a 3D model from the aerial photographs obtained after the setup of a flight plan. If data convergence between the two models is not required, laser scanning produces a 3D model based on the perpendicular geometry, and photogrammetry produces an orthoimage model based on the planar geometry. In contrast, if convergence of the point cloud data from the two approaches is required, a 3D model based on the hybrid point cloud data, including topography and building shape, can be generated through shape registration and merging after converting the data to compatible file formats.

4. Conclusions

This study established a 3D model and accurate digital documentation of the Magoksa Temple using terrestrial laser scanning and UAV photogrammetry. The two technologies were used to acquire multidirectional numerical information on the temple site. Laser scanning showed a high data acquisition rate in the perpendicular direction, whereas photogrammetry generated high-level planar point clouds. Laser scanning and photogrammetry are each useful techniques for the digital documentation of cultural heritage sites to determine layout conditions and topographical features based on an orthoimage; however, these techniques are of limited use individually when precise survey drawings are required. Thus, constructing a 3D model that includes topography as well as building shapes through a hybrid convergence of terrestrial laser scanning and UAV photogrammetry is crucial.
The accuracy of the two technologies was analyzed against the GCPs prior to their convergence: laser scanning has higher positional accuracy than photogrammetry, and the overall discrepancy between the two technologies was small enough to generate convergent data. The photogrammetric point cloud data were aligned and merged based on the laser scanning results. Consequently, photogrammetry could increase the value of the 3D model by complementing the point cloud data for the upper parts of buildings, which are difficult to acquire through laser scanning. Furthermore, the accuracy of the overall topography as well as the shape of individual buildings is increased, which improves the composition of survey drawings and BIM.
Integrated 3D models using both terrestrial laser scanning and UAV photogrammetry are expected to improve the applicability of hybrid point clouds and digital documentation methodologies for cultural heritage sites and individual monuments. However, future work must focus on reducing the positional discrepancies between the two survey technologies and on determining how these discrepancies vary with the scale and geomorphic environment of the cultural heritage site. Moreover, cross-validation with other measurement techniques, such as a total station, is required to increase the reliability of the positional accuracy. Although further studies on data acquisition, fusion workflows, accuracy assessment, and applicability improvement remain to be conducted, the 3D integration of terrestrial laser scanning and UAV photogrammetry appears to be a powerful tool for the digital documentation and conservation management of cultural heritage sites.

Author Contributions

Conceptualization, Y.H.J.; methodology, Y.H.J.; software, S.H.; validation, Y.H.J. and S.H.; formal analysis, Y.H.J.; investigation, Y.H.J. and S.H.; resources, Y.H.J.; data curation, S.H.; writing—original draft preparation, Y.H.J.; writing—review and editing, Y.H.J.; visualization, S.H.; supervision, Y.H.J.; project administration, Y.H.J.; funding acquisition, Y.H.J.

Funding

This research was funded by the Ministry of Science, ICT and Future Planning (NRF-2016R1C1B2010883).

Acknowledgments

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aragón, E.; Munar, S.; Rodríguez, J.; Yamafune, K. Underwater photogrammetric monitoring techniques for mid-depth shipwrecks. J. Cult. Herit. 2018, 34, 255–260. [Google Scholar] [CrossRef]
  2. Discamps, E.; Muth, X.; Gravina, B.; Lacrampe-Cuyaubère, F.; Chadelle, J.; Faivre, J.; Maureille, B. Photogrammetry as a tool for integrating archival data in archaeological fieldwork: Examples from the Middle Palaeolithic sites of Combe-Grenal, Le Moustier, and Regourdou. J. Archaeol. Sci. Rep. 2016, 8, 268–276. [Google Scholar] [CrossRef]
  3. Erenoglu, R.C.; Akcay, O.; Erenoglu, O. An UAS-assisted multi-sensor approach for 3D modeling and reconstruction of cultural heritage site. J. Cult. Herit. 2017, 26, 79–90. [Google Scholar] [CrossRef]
  4. O’Driscoll, J. Landscape applications of photogrammetry using unmanned aerial vehicles. J. Archaeol. Sci. Rep. 2018, 22, 32–44. [Google Scholar] [CrossRef]
  5. Themistocleous, K. Model reconstruction for 3-D vizualization of cultural heritage sites using open data from social media: The case study of Soli, Cyprus. J. Archaeol. Sci. Rep. 2017, 14, 774–781. [Google Scholar] [CrossRef]
  6. Wilson, L.; Rawlinson, A.; Frost, A.; Hepher, J. 3D digital documentation for disaster management in historic buildings: Applications following fire damage at the Mackintosh building, The Glasgow School of Art. J. Cult. Herit. 2018, 31, 24–32. [Google Scholar] [CrossRef]
  7. Messaoudi, T.; Véron, P.; Halin, G.; Luca, L.D. An ontological model for the reality-based 3D annotation of heritage building conservation state. J. Cult. Herit. 2018, 29, 100–112. [Google Scholar] [CrossRef]
  8. Xiao, W.; Mills, J.; Guidi, G.; Rodríguez-Gonzálvez, P.; Barsanti, S.G.; González-Aguilera, D. Geoinformatics for the conservation and promotion of cultural heritage in support of the UN Sustainable Development Goals. ISPRS J. Photogramm. Remote. Sens. 2018, 142, 389–406. [Google Scholar] [CrossRef]
  9. Zimmer, B.; Liutkus-Pierce, C.; Marshall, S.T.; Hatala, K.G.; Metallo, A.; Rossi, V. Using differential structure-from-motion photogrammetry to quantify erosion at the Engare Sero footprint site, Tanzania. Quat. Sci. Rev. 2018, 198, 226–241. [Google Scholar] [CrossRef]
  10. Angelo, L.D.; Stefano, P.D.; Fratocchi, L.; Marzola, A. An AHP-based method for choosing the best 3D scanner for cultural heritage applications. J. Cult. Herit. 2018, 34, 109–115. [Google Scholar] [CrossRef]
  11. Galantucci, L.M.; Pesce, M.; Lavecchia, F. A stereo photogrammetry scanning methodology, for precise and accurate 3D digitization of small parts with sub-millimeter sized features. CIRP Ann-Manuf. Techn. 2015, 64, 507–510. [Google Scholar] [CrossRef]
  12. Herráez, J.; Martínez, J.C.; Coll, E.; Martín, M.T.; Rodríguez, J. 3D modeling by means of videogrammetry and laser scanners for reverse engineering. Measurement 2016, 87, 216–227. [Google Scholar] [CrossRef]
  13. Vacca, G.; Dessì, A.; Sacco, A. The use of nadir and oblique UAV images for building knowledge. ISPRS Int. J. Geo-Inf. 2017, 6, 393. [Google Scholar] [CrossRef]
  14. Galantucci, L.M.; Piperi, E.; Lavecchia, F.; Zhavo, A. Semi-automatic low cost 3D laser scanning system for reverse engineering. Procedia CIRP 2015, 28, 94–99. [Google Scholar] [CrossRef]
  15. Reznicek, J.; Pavelka, K. New low cost 3D scanning techniques for cultural heritage documentation. In Proceedings of the International Archives of Photogrammetry and Remote Sensing, XXXVI-B5, Beijing, China, 3–11 July 2008; pp. 237–240. [Google Scholar]
  16. Tannant, D.D. Review of photogrammetry-based techniques for characterization and hazard assessment of rock faces. Int. J. Georesour. Environ. 2015, 1, 76–87. [Google Scholar] [CrossRef]
  17. Zlot, R.; Bosse, M. Three-dimensional mobile mapping of caves. J. Cave Karst Stud. 2014, 76, 191–206. [Google Scholar] [CrossRef]
  18. Reichert, J.; Schellenberg, J.; Schubert, P.; Wilke, T. 3D scanning as a highly precise, reproducible, and minimally invasive method for surface area volume measurements of scleractinian corals. Limnol Oceanogr. Meth. 2016, 14, 518–526. [Google Scholar] [CrossRef]
  19. Palomar-Vazquez, J.; Baselga, S.; Viñals-Blasco, M.; García-Sales, C.; Sancho-Espinós, I. Application of a combination of digital image processing and 3D visualization of graffiti in heritage conservation. J. Archaeol. Sci. Rep. 2017, 12, 32–42. [Google Scholar] [CrossRef]
  20. Quagliarini, E.; Clini, P.; Ripanti, M. Fast, low cost and safe methodology for the assessment of the state of conservation of historical buildings from 3D laser scanning: The case study of Santa Maria in Portonovo (Italy). J. Cult. Herit. 2017, 24, 175–183. [Google Scholar] [CrossRef]
  21. Biagini, C.; Capone, P.; Donato, V.; Facchini, N. Towards the BIM implementation for historical building restoration sites. Automat. Constr. 2016, 71, 74–86. [Google Scholar] [CrossRef]
  22. Fryskowska, A.; Stachelek, J. A no-reference method of geometric content quality analysis of 3D models generated from laser scanning point clouds for hBIM. J. Cult. Herit. 2018, 34, 95–108. [Google Scholar] [CrossRef]
  23. Simeone, D.; Cursi, S.; Acierno, M. BIM semantic-enrichment for built heritage representation. Automat. Constr. 2019, 97, 122–137. [Google Scholar] [CrossRef]
  24. Galeazzi, F. Towards the definition of best 3D practices in archaeology: Assessing 3D documentation techniques for intra-site data recording. J. Cult. Herit. 2016, 17, 159–169. [Google Scholar] [CrossRef]
  25. Scafuri, M.P.; Rennison, B. Scanning the H.L. Hunley: Employing a structured-light scanning system in the archaeological documentation of a unique maritime artifact. J. Archaeol. Sci. Rep. 2016, 6, 302–309. [Google Scholar] [CrossRef]
  26. Monna, F.; Esin, Y.; Magail, J.; Granjon, L.; Navarro, N.; Wilczek, J.; Saligny, L.; Couette, S.; Dumontet, A.; Chateau, C. Documenting carved stones by 3D modelling—Example of Mongolian deer stones. J. Cult. Herit. 2018, 34, 116–128. [Google Scholar] [CrossRef]
  27. Rüther, H.; Chazan, M.; Schroeder, R.; Neeser, R.; Held, C.; Walker, S.J.; Matmon, A.; Horwitz, L.K. Laser scanning for conservation and research of African cultural heritage sites: The case study of Wonderwerk Cave, South Africa. J. Archaeol. Sci. 2009, 36, 1847–1856. [Google Scholar] [CrossRef]
  28. Fabbri, S.; Sauro, F.; Santagata, T.; Rossi, G.; Waele, J.D. High-resolution 3-D mapping using terrestrial laser scanning as a tool for geomorphological and speleogenetical studies in caves: An example from the Lessini mountains (North Italy). Geomorphology 2017, 280, 16–29. [Google Scholar] [CrossRef]
  29. Fernández-Lozano, J.; Gutiérrez-Alonso, G. Improving archaeological prospection using localized UAVs assisted photogrammetry: An example from the Roman Gold District of the Eria River Valley (NW Spain). J. Archaeol. Sci. Rep. 2016, 5, 509–520. [Google Scholar] [CrossRef]
  30. Watanabe, Y.; Kawahara, Y. UAV photogrammetry for monitoring changes in river topography and vegetation. Procedia Eng. 2016, 154, 317–325. [Google Scholar] [CrossRef]
  31. Assali, P.; Grussenmeyer, P.; Villemin, T.; Pollet, N.; Viguier, F. Surveying and modeling of rock discontinuities by terrestrial laser scanning and photogrammetry: Semi-automatic approaches for linear outcrop inspection. J. Struct. Geol. 2014, 66, 102–114. [Google Scholar] [CrossRef]
  32. Yang, B.; Zang, Y.; Dong, Z.; Huang, R. An automated method to register airborne and terrestrial laser scanning. ISPRS J. Photogramm. 2015, 109, 62–76. [Google Scholar] [CrossRef]
  33. Aicardi, I.; Dabove, P.; Lingua, A.M.; Piras, M. Integration between TLS and UAV photogrammetry techniques for forestry applications. iForest Biogeosci. For. 2016, 10, 41–47. [Google Scholar] [CrossRef]
  34. Liang, H.; Li, W.; Lai, S.; Zhu, L.; Jiang, W.; Zhang, Q. The integration of terrestrial laser scanning and terrestrial and unmanned aerial vehicle digital photogrammetry for the documentation of Chinese classical gardens—A case study of Huanxiu Shanzhuang, Suzhou, China. J. Cult. Herit. 2018, 33, 222–230. [Google Scholar] [CrossRef]
  35. Forkuo, E.; King, B. Automatic Fusion of Photogrammetric Imagery and Laser Scanner Point Clouds. In Proceedings of the International Archives of Photogrammetry and Remote Sensing, XXXV–B4, Istanbul, Turkey, 12–23 July 2004; pp. 921–926. [Google Scholar]
  36. Vosselman, R. Fusion of Laser Scanning Data, Maps, and Aerial Photographs for Building. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Toronto, ON, Canada, 24–28 June 2002; pp. 85–88. [Google Scholar]
  37. Gašparović, M.; Malarić, I. Increase of Readability and Accuracy of 3D Models Using Fusion of Close Range Photogrammetry and Laser Scanning. In Proceedings of the International Archives of Photogrammetry and Remote Sensing, XXXIX–B5, Melbourne, Australia, 25 August–01 September 2012; pp. 93–98. [Google Scholar]
  38. Beraldin, J.-A. Integration of Laser Scanning and Close-Range Photogrammetry—The last decade and beyond. In Proceedings of the International Archives of Photogrammetry and Remote Sensing, XXXV–B4, Istanbul, Turkey, 12–23 July 2004; pp. 12–23. [Google Scholar]
  39. Jo, Y.H.; Kim, J. Three-Dimensional Digital Documentation of Heritage Sites Using Terrestrial Laser Scanning and Unmanned Aerial Vehicle Photogrammetry. In Proceedings of the XXVI International CIPA Symposium, Ottawa, ON, Canada, 28 August–1 September 2017; pp. 395–398. [Google Scholar]
  40. Pérez, M.; Agüera, F.; Carvajal, F. Low Cost Surveying Using an Unmanned Aerial Vehicle. In Proceedings of the International Archives of Photogrammetry and Remote Sensing, XL-1/W2, Rostock, Germany, 4–6 September 2013; pp. 311–315. [Google Scholar]
  41. Gašparović, M.; Gajski, D. Two-Step Camera Calibration Method Developed for Micro UAV’s. In Proceedings of the International Archives of Photogrammetry and Remote Sensing, XLI-B1, Prague, Czech Republic, 12–19 July 2016; pp. 829–833. [Google Scholar]
  42. Gašparović, M.; Jurjević, L. Gimbal influence on the stability of exterior orientation parameters of UAV acquired images. Sensors 2017, 17, 401. [Google Scholar] [CrossRef] [PubMed]
  43. Nikolakopoulos, K.G.; Soura, K.; Koukouvelas, I.K.; Argyropoulos, N.G. UAV vs. classical aerial photogrammetry for archaeological studies. J. Archaeol. Sci. Rep. 2017, 14, 758–773. [Google Scholar] [CrossRef]
  44. Liu, K.; Ding, H.; Tang, G.; Na, J.; Huang, X.; Xue, Z.; Yang, X.; Li, F. Detection of catchment-scale gully-affected areas using unmanned aerial vehicle (UAV) on the Chinese Loess Plateau. ISPRS Int. J. Geo-Inf. 2018, 5, 238. [Google Scholar] [CrossRef]
  45. Liu, Y.; Zheng, X.; Ai, G.; Zhang, Y.; Zuo, Y. Generating a high-precision true digital orthophoto map based on UAV images. ISPRS Int. J. Geo-Inf. 2018, 7, 333. [Google Scholar] [CrossRef]
  46. Salach, A.; Bakuła, K.; Pilarska, M.; Ostrowski, W.; Górski, K.; Kurczyński, Z. Accuracy assessment of point clouds from LiDAR and dense image matching acquired using the UAV platform for DTM creation. ISPRS Int. J. Geo-Inf. 2018, 7, 342. [Google Scholar] [CrossRef]
  47. Moon, D.; Chung, S.; Kwon, S.; Seo, J.; Shin, J. Comparison and utilization of point cloud generated from photogrammetry and laser scanning: 3D world model for smart heavy equipment planning. Automat Constr. 2019, in press. [Google Scholar] [CrossRef]
  48. Habib, A.; Ghanma, M.; Mitishita, E. Co-registration of photogrammetric and LIDAR data: Methodology and case study. Rev. Brasileira Cartogr. 2004, 56, 1–13. [Google Scholar]
  49. Besl, P.; McKay, N. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  50. Chen, Y.; Medioni, G. Object modelling by registration of multiple range images. Int. J. Comput. Vis. Image Underst. 1992, 10, 145–155. [Google Scholar] [CrossRef]
Figure 1. Field existence of the Magoksa Temple, Gongju: (a) location and temple layout; (b) five-story stone pagoda.
Figure 2. Equipment and methods used for 3D digital documentation: (a) terrestrial laser scanning using a specialized external camera; (b) hexacopter unmanned aerial vehicle (UAV); (c) five ground control points.
Figure 3. Orthoimage and detailed view of the Magoksa Temple created via terrestrial laser scanning: (a) reflectance intensity image; (b) RGB texture mapping image.
Figure 4. Orthoimage and detailed view of the Magoksa Temple created via UAV photogrammetry: (a) density image; (b) RGB texture mapping image.
Figure 5. Graphs showing coordinate errors of the terrestrial laser scanning and UAV photogrammetry based on the five ground control points.
Figure 6. A total of 40 randomly selected points on the ground and buildings for analysis of the relative positional accuracy between terrestrial laser scanning and UAV photogrammetry.
Figure 7. Graphs showing relative positional coordinate discrepancies between terrestrial laser scanning and UAV photogrammetry.
Figure 8. Point clouds showing the roof section of the Daegwangbojeon Hall.
Figure 9. Process and result of integrated 3D modeling.
Figure 10. True orthoimage and cross-section of Magoksa Temple’s terrain.
Figure 11. Workflow of integrated 3D modeling using terrestrial laser scanning and UAV photogrammetry.
Table 1. Comparison of coordinate errors of the point data obtained via terrestrial laser scanning and UAV photogrammetry based on the five ground control points (units: m).

| GCPs | Laser X | Laser Y | Laser Z | UAV X | UAV Y | UAV Z |
|------|---------|---------|---------|-------|-------|-------|
| CP1  | 0.000   | −0.004  | 0.000   | 0.030 | 0.005 | −0.009 |
| CP2  | 0.001   | 0.001   | 0.000   | −0.003 | 0.001 | 0.024 |
| CP3  | −0.007  | 0.000   | −0.001  | −0.001 | 0.027 | 0.005 |
| CP4  | 0.005   | −0.008  | −0.011  | 0.044 | −0.028 | 0.052 |
| CP5  | −0.009  | −0.015  | 0.099   | −0.005 | −0.025 | 0.079 |
| Mean | −0.002  | −0.005  | 0.017   | 0.013 | −0.004 | 0.030 |
| RMS  | 0.006   | 0.008   | 0.045   | 0.024 | 0.021 | 0.044 |
Table 2. Relative positional coordinate discrepancies between terrestrial laser scanning and UAV photogrammetry (units: m).

| Point No. | Ground X | Ground Y | Ground Z | Building X | Building Y | Building Z |
|-----------|----------|----------|----------|------------|------------|------------|
| 1  | 0.012  | 0.012  | −0.002 | 0.061  | −0.033 | −0.050 |
| 2  | 0.013  | −0.008 | −0.010 | −0.037 | −0.016 | 0.023  |
| 3  | 0.026  | 0.008  | −0.009 | 0.073  | −0.001 | −0.044 |
| 4  | 0.016  | −0.007 | −0.005 | 0.021  | 0.009  | −0.016 |
| 5  | −0.001 | 0.004  | −0.002 | 0.053  | −0.005 | −0.028 |
| 6  | −0.004 | 0.002  | 0.019  | 0.047  | 0.040  | −0.028 |
| 7  | −0.013 | −0.003 | 0.004  | 0.000  | 0.003  | 0.057  |
| 8  | −0.012 | 0.007  | 0.001  | 0.061  | −0.035 | −0.015 |
| 9  | 0.009  | −0.006 | −0.003 | −0.012 | 0.012  | −0.065 |
| 10 | 0.008  | −0.028 | 0.030  | 0.010  | −0.026 | 0.018  |
| 11 | 0.021  | −0.019 | 0.002  | 0.043  | −0.047 | 0.037  |
| 12 | 0.019  | −0.022 | 0.000  | 0.062  | 0.041  | −0.031 |
| 13 | 0.004  | 0.006  | −0.008 | 0.074  | −0.013 | −0.049 |
| 14 | −0.004 | 0.013  | 0.002  | 0.037  | 0.031  | −0.011 |
| 15 | 0.030  | −0.014 | −0.003 | 0.050  | −0.049 | −0.021 |
| 16 | 0.011  | −0.021 | −0.011 | 0.022  | 0.049  | 0.018  |
| 17 | −0.011 | −0.006 | −0.010 | 0.003  | 0.012  | 0.015  |
| 18 | −0.010 | 0.011  | −0.012 | 0.030  | −0.002 | 0.039  |
| 19 | −0.020 | 0.025  | 0.009  | −0.007 | −0.014 | −0.071 |
| 20 | 0.007  | −0.027 | 0.000  | −0.032 | 0.018  | −0.004 |
| Mean | 0.005 | −0.004 | 0.000 | 0.028 | −0.001 | −0.011 |
| RMS  | 0.015 | 0.015  | 0.010 | 0.043 | 0.028  | 0.037  |
Table 3. Overlap error statistics between the laser scanning and photogrammetry according to alignment stages.

| Alignment Stage | Max Search Distance | Iterations | RMS | Mean |
|-----------------|---------------------|------------|---------|---------|
| First stage  | 0.100 m | 12 | 0.038 m | 0.030 m |
| Second stage | 0.050 m | 10 | 0.020 m | 0.016 m |
| Third stage  | 0.025 m | 18 | 0.012 m | 0.010 m |
| Fourth stage | 0.012 m | 6  | 0.006 m | 0.005 m |
