Article

A New Method for UAV Lidar Precision Testing Used for the Evaluation of an Affordable DJI ZENMUSE L1 Scanner

Department of Special Geodesy, Faculty of Civil Engineering, Czech Technical University in Prague, Thákurova 7, 166 29 Prague, Czech Republic
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(23), 4811; https://doi.org/10.3390/rs13234811
Submission received: 19 October 2021 / Revised: 20 November 2021 / Accepted: 24 November 2021 / Published: 27 November 2021
(This article belongs to the Special Issue Accuracy Assessment of UAS Lidar)

Abstract

Lately, affordable unmanned aerial vehicle (UAV) lidar systems have started to appear on the market, highlighting the need for methods facilitating proper verification of their accuracy. However, the dense point cloud produced by such systems makes the identification of individual points that could be used as reference points difficult. In this paper, we propose such a method utilizing accurately georeferenced targets covered with high-reflectivity foil, which can be easily extracted from the cloud; their centers can be determined and used for the calculation of the systematic shift of the lidar point cloud. Subsequently, the lidar point cloud is cleaned of this systematic shift and compared with a dense SfM point cloud, thus yielding the residual accuracy. We successfully applied this method to the evaluation of an affordable DJI ZENMUSE L1 scanner mounted on the UAV DJI Matrice 300 and found that the accuracy of this system (3.5 cm in all directions after removal of the global georeferencing error) is better than the manufacturer-declared values (10/5 cm horizontal/vertical). However, evaluation of the color information revealed a relatively high (approx. 0.2 m) systematic shift.

1. Introduction

Remote sensing methods of mass data collection are increasingly used in current technical and scientific practice. These techniques, such as UAV or aerial photogrammetry and 3D scanning (terrestrial, mobile, or airborne), provide point clouds that are subsequently used for modeling terrain or above-ground objects. For small-scale applications, UAV photogrammetry combined with SfM (structure from motion) processing has been the most economical and, hence, the most widely used method thus far. The difference between laser scanning and SfM photogrammetry lies mainly in the point cloud generation characteristics: while SfM reconstructs a model from visible imagery and some smoothing of surfaces can occur, laser scanning directly measures individual points on the surfaces of individual objects.
SfM is presently used in many applications including volume determination [1,2], mining [3], forestry [4], determination of solar potential [5], terrain changes [6,7], natural hazards [8,9], etc. It is, however, always necessary to consider the limitations of the method associated with a suitable flight strategy [10,11,12], ground control point (GCP) placement [13,14], processing algorithms [15], etc. Laser scanning (terrestrial, mobile, or airborne) has been used in many fields including heritage conservation [16,17], geology [18,19], river bathymetry [20], flood risk evaluation [21], archeology [22], monitoring of coastal changes [23,24,25], bridge constructions [26], mining areas [27], the energy sector [28,29], etc. It can be reasonably expected that UAV lidar systems will, in the future, find application in at least some of these fields as well.
In view of the generally (almost prohibitively) high price of lidar scanners that can be mounted on UAVs (considering the limited payload of the drone and the resulting need for miniaturization of the scanner and a highly accurate positioning/orientation system consisting of an inertial navigation system and a GNSS RTK receiver, as well as the need for extremely accurate synchronization of all parts), they are not widely used in practice. Where they are used, commercial solutions such as, for example, the YellowScan Mapper [30], Phoenix Scout-16 [31], or RPLiDAR [32] are employed. These are usually more expensive but more user-friendly than “home-made” systems assembled from individual components [33,34,35].
Most UAV lidar systems are based on similar laser scanners (the Velodyne VLP-16, later renamed Puck; the Velodyne HDL-32; the Riegl LMS-Q680i; or the IBEO LUX 4L) and, therefore, have similar characteristics. An overview of laser scanners commonly used for UAV lidar can be found in [34].
Still, the total accuracy, i.e., the accuracy of determining the individual points, is the key characteristic affecting the possible utilization of any 3D measuring system in practice. Some manufacturers provide details only on the accuracy of the individual components, i.e., partial accuracies of the laser scanner, of the GNSS RTK receiver, and of the inertial measurement unit (IMU). Hence, determining the overall accuracy of UAV lidar systems is not straightforward, and to obtain a meaningful result, it is necessary to compare the results with another measurement of (known) better accuracy, such as another 3D scanner [20]. Another option is the use of control points georeferenced using GNSS RTK and/or a total station [36]; nevertheless, the accuracy can typically be reliably determined only in the height component, as the identification of any individual (control) point in the point cloud is unreliable due to the extremely high density of the point cloud. Torresan et al. proposed an unusual method for determining the total accuracy of a UAV lidar system: they used the surfaces of large rectangular boxes and their intersections to determine systematic errors of point cloud coordinates (x, y, z) and standard deviations of elevation [33].
Recently, a new lidar 3D scanner, the DJI Zenmuse L1, appeared on the market at a price of EUR 12,300; together with the DJI Matrice 300 UAV and DJI Terra software, it constitutes a complete system for the acquisition of point clouds, including reflection intensities and true colors. The introduction of such (relatively) affordable UAV lidar systems will likely lead to an increase in their use, which highlights the importance of a method for independent evaluation of their accuracy that could be universally applied to any other (new) system as well. Surprisingly, however, there are not many papers focusing on the full-scale accuracy testing of such systems, and those available rarely delve deeper into the analysis of the errors.
For example, Siwiec [37] evaluated only the relative accuracy of a UAV lidar system (RiCOPTER/Riegl VUX-1) on the flat surfaces of a bridge by fitting a plane through the point cloud and subsequently calculating the deviations from this plane. However, such an approach can identify only one partial component of the global error associated with the real-world use of a UAV lidar system and, for this reason, we believe it to be unsuitable for deeper analysis. Salach et al. [30] investigated the elevation accuracy of UAV lidar-derived DTMs using the UAV Hawk Moth with a YellowScan Surveyor scanner (based on the Velodyne VLP-16 sensor) at a 50 m flight altitude; the RMSEs ranged from 0.11 m to 0.145 m. Torresan et al. [33] evaluated the performance of a proprietary system combining a LUX 4L scanner and a VN-300 GNSS/INS unit, using intersections of the planes forming the sides of rectangular boxes to detect reference points within the cloud. Standard deviations derived from several individual flights at a 10 m altitude ranged between 5 and 10 cm.
Hu et al. [34] tested a system consisting of a DJI Livox MID40 laser scanner, a StarNeto IMU, and a DJI Matrice 210 UAV carrier. Testing focused on forestry applications; the achieved accuracies were 0.5 m and worse, although the authors did not focus on the exact evaluation of the scanning system accuracy but rather on the determination of forest stand heights. The authors also performed tests aiming to evaluate both the global and relative accuracies of their system at a flight altitude of 150 m. The results were partially similar to ours: a systematic shift of approx. 3.7 cm in the horizontal and 4.7 cm in the vertical direction. Relative accuracies were determined by fitting planes through the planar parts of objects, yielding standard deviations of 12 and 22 cm for the horizontal and vertical coordinates, respectively. From the perspective of the comparison with our results, these values are interesting; although both the manufacturer-declared and global accuracies are similar to ours, the local deviations reported in their paper are much higher. Nevertheless, as the authors did not provide details of the methods used, a critical evaluation of these results and a head-to-head comparison with ours are impossible.
Fuad et al. [36] evaluated the elevation accuracy of a digital terrain model acquired using an AL3 S1000 UAV system based on a Velodyne HDL-32 lidar sensor and its dependence on the flight altitude and the shape of the terrain. In sloped terrain, RMSEs exceeded 0.25 m even at the flight altitude of 20 m (which generally provided the best results); in flat terrain, RMSEs were as low as approx. 0.02 m regardless of the flight altitude. Considering the manufacturer-declared distance meter accuracy (0.02 m), it is likely that the high RMSEs in the sloped terrain reflect a systematic shift of the point cloud, which was not removed in their study. Pilarska et al. [38] reviewed the accuracy of many systems; however, only a theoretical analysis of errors was performed, and although such an overview of available solutions is useful, the accuracy evaluation lacks practical verification.
The results of the studies described above make it obvious that a universal testing algorithm allowing a deeper accuracy evaluation, one that would also reveal the individual components of the global error, is needed. In this paper, we aimed to (i) propose a method for the evaluation of the accuracy of UAV lidar systems and (ii) apply this method in practice in the evaluation of the performance of a new and relatively affordable lidar scanner, the DJI Zenmuse L1, mounted on the UAV DJI Matrice 300.

2. Materials and Methods

A rugged area of a small landfill with a variety of materials including vegetation cover was selected for testing. The testing was logically divided into two basic accuracy tests: (a) the overall positioning accuracy and orientation of the point cloud in space and (b) the internal quality of the cloud.
The total positioning accuracy and orientation were determined by comparing the results with those acquired by georeferencing of individual control points using GNSS RTK and total station (expected standard deviations of GNSS RTK measurements of the horizontal and vertical coordinates in the Czech Republic are approx. 0.02 and 0.04 m, respectively).
The unambiguous identification of individual points in the L1 lidar-acquired point cloud represented the first challenge. We used a method similar to that used for the validation of the first terrestrial 3D scanners (such as the Cyrax 2500 or Leica HDS 3000), i.e., targets made of high-reflectivity foil facilitating their identification in the point cloud based on the reflection intensity.
To evaluate the internal quality of the point cloud, we used a comparison with a UAV photogrammetry/SfM-acquired point cloud with significantly better accuracy (standard deviation <0.01 m, see Section 2.4.1) and point density (as will be shown in Table 5) than the lidar point cloud (the manufacturer-declared standard deviations of which for DJI L1 are 0.1 m for horizontal and 0.05 m for vertical coordinates at 50 m flight altitude).

2.1. Used Instruments

2.1.1. Terrestrial Measurements

Terrestrial georeferencing was performed using a robotic total station Trimble S9 (standard deviation of distance measurements of 0.8 mm + 1 ppm and of horizontal directions and zenith angles of 0.3 mgon) and a GNSS RTK receiver Trimble GeoXR.

2.1.2. UAV DJI Matrice 300

The UAV DJI Matrice 300 (Figure 1a), which was used as the platform for mounting the tested laser scanner DJI Zenmuse L1 and DJI Zenmuse P1 camera, is a professional quadcopter equipped with GNSS RTK receiver. The basic characteristics of this UAV are detailed in Table 1.

2.1.3. Laser Scanner DJI Zenmuse L1

The tested laser scanner DJI Zenmuse L1 (Figure 1b) combines a lidar module, an IMU, and an RGB sensor on a stabilized 3-axis gimbal, thus providing a true-color point cloud; the point cloud must be processed in the manufacturer-supplied software DJI Terra. Basic manufacturer-declared characteristics are shown in Table 2. Since the principal aim of the paper is to propose a method for the evaluation of a complex system, a detailed description and discussion of the individual parts of the DJI L1 system beyond the provided characteristics is irrelevant here. More detailed information can be found on the manufacturer’s website (www.dji.com/cz/zenmuse-l1/specs, accessed on 20 November 2021).

2.1.4. DJI Zenmuse P1

The DJI Zenmuse P1 camera was used for the acquisition of the data for photogrammetric processing, i.e., the data used as a reference for comparison with those acquired by the DJI Zenmuse L1 scanner. This camera was developed specifically for photogrammetry; in this experiment, it was fitted with a DL 35 mm F2.8 LS ASPH lens. Basic camera characteristics are shown in Table 3; detailed information can be found on the manufacturer’s website (www.dji.com/cz/zenmuse-p1/specs, accessed on 20 November 2021).

2.2. Testing Area

A rugged part of a building material landfill of approx. 100 × 100 m with materials of varying colors and ruggedness was chosen for testing. The vegetation cover was relatively minor; in the middle of the area of interest, there were two almost vertical concrete walls. Black-and-white photogrammetric targets and reflective targets for lidar were placed in the area, and geodetic network points were stabilized by studs (Figure 2).

2.3. Terrestrial Measurements and Data Acquisition

2.3.1. Stabilization of (Ground) Control Points

GNSS points (see Figure 2b, points 4001–4004) were stabilized by studs. All ground control points and control points for photogrammetry were made of 40 × 40 cm hardboard and painted in a black-and-white checkerboard pattern (Figure 3a; positions marked in red in Figure 2, 14 targets in total). Twenty targets for laser scanning (50 × 50 cm) were coated with high-reflectivity foil in one of four colors, with grey or black diagonal crosses made of 5 cm wide adhesive tape (blue points in Figure 2b; see also Figure 3b–e). We tested four colors of high-reflectivity foil to determine whether they differ in reflectivity and, in effect, in their suitability for this purpose.

2.3.2. Terrestrial Measurements

Terrestrial measurements were performed in two faces using a Trimble S9 HP total station prior to the experiment itself (georeferencing in the local coordinate system). The positions of the centers and all corners of the hardboard targets (i.e., 5 points per target) were measured for all control points and ground control points.
Four GNSS points stabilized by nails were measured for georeferencing using a Trimble GeoXR GNSS RTK receiver with 15 s observations connected to the CZEPOS permanent station network. The accuracy of the GNSS RTK measurements was verified prior to the UAV flights, after the second flight, and after the last flight; these three measurements were used for the calculation of the standard deviation of terrestrial georeferencing (determined values of 0.0055, 0.0063, and 0.0166 m for the x, y, and z coordinates, respectively).
The centers and corners of the individual targets in the S-JTSK coordinate system and the Bpv elevation system were subsequently georeferenced based on these four GNSS points.

2.3.3. DJI Zenmuse L1 Data Acquisition (Lidar Data)

In all, three test flights were performed; two were at the altitude of 50 m with different data acquisition settings, and the third was at the altitude of 70 m. The first flight (designated 50 m_1) was performed with the measurement type set to “normal” and a vertical gimbal pitch of −90°; the two remaining flights (designated 50 m_2 and 70 m) were oblique with a gimbal pitch of −60° (i.e., a 30° deviation from the downward vertical line). The remaining settings were identical: use of calibration flight: yes; flight speed: 5 m/s; side overlap: 50%; echo mode: triple; lidar sample rate: 160 kHz; scan mode: repeat; RGB coloring: yes. After designating the study area, the flight planning software DJI Ground Station Pro determined the optimal path and automatically performed the flight. The individual flight paths are shown in Figure 4.

2.3.4. Reference Data Acquisition with DJI Zenmuse P1 (Photogrammetric Data)

Similar to the lidar flights, the DJI GS Pro software was used for the acquisition of photogrammetric data with the DJI Zenmuse P1 camera mounted on the UAV DJI Matrice 300 RTK. The flight path (see Figure 5) was a single grid with image acquisition set to oblique, i.e., the camera on the gimbal does not maintain a constant angle but alternately acquires images at nadir and tilted by 15° in the four cardinal directions. The oblique imagery acquired in this way greatly improves the quality of the post-flight interior orientation calibration (as shown, e.g., in [10]). In all, 999 images were taken, and the camera positions at the moments of acquisition were registered using a GNSS RTK receiver connected to the CZEPOS permanent station network.

2.4. Data Processing and Calculations

Geodetic measurements were performed in the S-JTSK coordinate system and the Bpv elevation system; all other measurements were transformed into the same coordinate system using an identical algorithm (or directly calculated in this coordinate system). The coordinate system, therefore, had no influence on the results and, in view of the size of the area of interest (only 100 × 100 m), the results are valid for any coordinate system.

2.4.1. Processing of DJI Zenmuse P1 Data (Photogrammetric Data)

The reference point cloud based on DJI Zenmuse P1 imagery was georeferenced using the coordinates of five SfM targets designated as ground control points (GCPs) and the camera coordinates determined by the onboard GNSS RTK receiver, considering the GCP accuracy of 0.03 m (i.e., the accuracy of GNSS RTK measurement in the Czech Republic; http://czepos.cuzk.cz, accessed on 20 November 2021). The remaining nine SfM targets were used as check points. The calculations were performed in Agisoft Metashape ver. 1.7.1, with both the Align and Dense cloud generation parameters set to “high”. RMSEs of the individual coordinates as well as the total error (calculated as the square root of the sum of their squares) are shown in Table 4.
As an additional (fully independent) method of point cloud quality evaluation, the centers of the high-reflectivity targets from the DJI Zenmuse P1 point cloud were compared to those measured using the total station. This comparison yielded standard deviations of sX = 0.006 m, sY = 0.006 m, and sZ = 0.010 m (total error = 0.014 m), indicating high quality. These deviations are slightly higher than the accuracy characteristics determined on the check points; nevertheless, it is necessary to consider that the position was determined in a cloud with an average resolution of 12 mm (see Table 5).
These evaluations confirm a sufficiently high quality of the SfM point cloud allowing its use as a reference in view of the expected quality of the lidar point clouds. The numbers of points and other information are detailed in Table 5.

2.4.2. Processing of Zenmuse L1 Data (Lidar Data)

Processing of the L1 data is simple: the UAV data were opened in the proprietary software DJI Terra (no alternative software can be used). Next, the command New Mission > Lidar Point Cloud Processing with the preset parameters Point cloud density: High; Optimize Point Cloud Accuracy: Yes; Output Coordinate System: WGS84; Reconstruction output: PNTS, LAS triggered the calculation and exported the point cloud in the chosen format. The LAS format used here also records additional attributes of each point, namely the RGB color, signal intensity, measurement time, order of the reflection, and other information.
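For any analysis outside DJI Terra, these LAS attributes can be read with standard tools. The following minimal Python sketch is not part of the original workflow; it assumes the laspy package and a hypothetical file name, and simply shows how the exported attributes could be accessed (the RGB and GPS time fields are present only if the chosen LAS point format stores them):

import numpy as np
import laspy  # generic LAS reader, not part of the DJI toolchain

# Hypothetical file name; the LAS exported from DJI Terra would be used here.
las = laspy.read("flight_50m_1.las")

xyz = np.column_stack((las.x, las.y, las.z))            # point coordinates
intensity = np.asarray(las.intensity)                   # reflection intensity
rgb = np.column_stack((las.red, las.green, las.blue))   # true-color information
return_no = np.asarray(las.return_number)               # order of the reflection
gps_time = np.asarray(las.gps_time)                     # measurement time

print(len(xyz), "points; intensity range:", intensity.min(), "to", intensity.max())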
Table 5 provides the basic information about the resulting point clouds.

2.5. Algorithms of Accuracy Assessment

As mentioned above, the accuracy assessment was divided into two parts—the global systematic positional and orientation error and the local error characterizing the remaining inaccuracies that were determined on well-controllable surfaces inside the testing area.

2.5.1. Global Accuracy of the Point Cloud

The global accuracy was determined using the high-intensity points (i.e., lidar targets) from the point cloud. Figure 6 shows a detail of the point cloud color-coded according to the intensities; red dots indicate high reflectivity foils.
These points were manually verified; strong reflections from, e.g., glass shards or car bodywork were removed. The resulting point cloud was divided into partial clouds representing the individual targets. For each such partial cloud, an intensity cut-off was set to yield a point cloud corresponding in size to the control point (0.5 × 0.5 m). The cut-offs for the yellow, white, and red targets were practically identical (intensity of 160.0), while for the blue foil, the intensities were somewhat lower and the cut-off was set to 150.5. Subsequently, the coordinates of the centers of the targets were calculated as the mean coordinates of the points with reflectivities higher than the cut-off in the individual partial point clouds/targets.
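A minimal sketch of this target-center extraction is given below; it is our illustration of the step described above, not the authors' actual script, with the cut-off values taken from the text:

import numpy as np

def target_center(points_xyz, intensity, cutoff=160.0):
    """Mean coordinates of the points brighter than the intensity cut-off.

    points_xyz : (N, 3) array, partial cloud cropped around one target
    intensity  : (N,) array of reflection intensities
    cutoff     : 160.0 for yellow/white/red foil, 150.5 for blue foil (per the text)
    """
    mask = np.asarray(intensity) >= cutoff
    if not mask.any():
        raise ValueError("no points exceed the intensity cut-off")
    return np.asarray(points_xyz)[mask].mean(axis=0)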
The knowledge of the coordinates of these targets from the terrestrial measurement allowed us to calculate the magnitude of the systematic errors in the individual coordinates, i.e., to calculate the parameters of a linear transformation fitting the target coordinates from the point cloud to those determined by the total station measurement.
The average systematic error was calculated for each coordinate separately; in addition, we also performed a 2.5D transformation (shifts in three coordinates and a rotation about the Z-axis) and a 3D transformation (3 shifts, 3 rotations). Each calculation was performed separately using the least squares method in the CloudCompare 2.12 software. The transformation parameters and the residual errors after transformation describe well the global characteristics of the point cloud. All available points (i.e., 20 points) were used for each evaluation.
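The sketch below illustrates the two simplest of these fits, the per-coordinate mean shift and a full 3D rigid (rotation plus translation) fit computed with the SVD-based Kabsch solution; the actual adjustment in this study was carried out in CloudCompare, so this is only an illustrative stand-in:

import numpy as np

def mean_shift(cloud_centers, ref_centers):
    """Per-coordinate systematic shift mapping cloud target centers onto the reference ones."""
    return (np.asarray(ref_centers) - np.asarray(cloud_centers)).mean(axis=0)

def rigid_fit(cloud_centers, ref_centers):
    """Least-squares rotation R and translation t such that ref is approx. R @ cloud + t."""
    P = np.asarray(cloud_centers, dtype=float)
    Q = np.asarray(ref_centers, dtype=float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)              # 3 x 3 cross-covariance of centered coordinates
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against an improper rotation (reflection)
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t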

2.5.2. Evaluation of the Local Accuracy of the Point Cloud

As the whole point cloud cannot be reliably (i.e., with an accuracy corresponding to the assumed point cloud accuracy) cleaned of vegetation points, and SfM and lidar perform very differently in this respect, only areas free of vegetation were used for the evaluation. Due to the varying character of such subsections (especially the relief), they were further divided into flat (horizontal), rugged (heaps of building material, etc.), and vertical areas (walls). Each of these classes was used for the description of a different type of accuracy. The evaluation was performed using the quadratic mean of the distances of individual points from the reference point cloud (i.e., the SfM point cloud from the DJI Zenmuse P1), calculated with the cloud/cloud distance function in the CloudCompare software set to local triangulation with 12 neighboring points. In all, eight flat, nine rugged, and two vertical areas were evaluated (see Figure 7).
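Outside CloudCompare, a comparable (though simplified) metric can be obtained with a nearest-neighbor search instead of the local triangulation with 12 neighbors; the sketch below assumes SciPy and only approximates the procedure used in the study:

import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_rmse(test_points, reference_points):
    """Quadratic mean of distances from each test point to its nearest reference point."""
    tree = cKDTree(np.asarray(reference_points))
    distances, _ = tree.query(np.asarray(test_points), k=1)
    return float(np.sqrt(np.mean(distances ** 2)))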

2.5.3. Color Information Shift

During the point cloud processing, a significant shift of the color information relative to the reflection intensities was detected on the high-reflectivity targets. The magnitude of the shift was calculated by determining the center of each target in the color-coded cloud as well as in the reflection intensity-based cloud and comparing the two, yielding global RMSEs as well as RMSEs for the individual coordinates.
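Once both sets of target centers are available, the statistics reported later in Table 9 follow directly; the sketch below is our illustration of this comparison (the way the color-based centers were digitized is not reproduced here):

import numpy as np

def color_shift_stats(intensity_centers, color_centers):
    """Mean shift, its standard deviation, and mean 3D distance between matched target centers."""
    diffs = np.asarray(color_centers) - np.asarray(intensity_centers)  # (N, 3) per-target shifts
    mean_shift = diffs.mean(axis=0)                   # systematic component
    std_shift = diffs.std(axis=0, ddof=1)             # variation of the shift within the cloud
    mean_distance = np.linalg.norm(diffs, axis=1).mean()
    return mean_shift, std_shift, mean_distance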

3. Results

3.1. Visual Evaluation of the Point Clouds

Figure 8 shows the character of the lidar (red dots) and photogrammetric (blue) point clouds in the three types of areas. We can see that where the surfaces are relatively even, the point clouds do not differ by much (just random errors can be observed). However, where there are greater changes (higher curvatures) of the surface, we can observe additional deviations of a non-random character, i.e., a systematic shift. This can be best observed in Figure 8c, where the differences are much higher near the right angle of the surface. It is possible that this fluctuation is caused by the size of the lidar spot: the distance is determined as a mean value over the laser footprint, and where the spot is not perfectly round (i.e., when the beam is not perpendicular to the respective surface), this mean differs slightly from the distance of the center of the beam. This would also explain why the largest errors are observed on and around sharp edges.

3.2. Global Accuracy of the Point Cloud

RMSEs of the point clouds before and after the transformations are detailed in Table 6 and the transformation parameters (i.e., the systematic shifts and rotations) in Table 7. RMSE is defined as:
RMSE_X = \sqrt{\frac{\sum \Delta X^2}{n}}, \quad RMSE_Y = \sqrt{\frac{\sum \Delta Y^2}{n}}, \quad RMSE_Z = \sqrt{\frac{\sum \Delta Z^2}{n}}, \quad RMSE = \sqrt{\frac{RMSE_X^2 + RMSE_Y^2 + RMSE_Z^2}{3}}
where ΔX, ΔY, and ΔZ are the differences between the check point coordinates from the reference measurement and from the particular point cloud, and n is the number of check points.
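A direct transcription of these formulas into code (an illustrative helper, not the authors' script) may read as follows:

import numpy as np

def rmse(reference_xyz, measured_xyz):
    """Per-coordinate RMSEs and the combined RMSE over n check points."""
    d = np.asarray(measured_xyz) - np.asarray(reference_xyz)   # (n, 3) coordinate differences
    rmse_xyz = np.sqrt((d ** 2).mean(axis=0))                  # RMSE_X, RMSE_Y, RMSE_Z
    rmse_total = np.sqrt((rmse_xyz ** 2).sum() / 3.0)
    return rmse_xyz, rmse_total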
The difference between the global position of the original point cloud and the terrestrial georeferencing approximately corresponds to the accuracy of the GNSS RTK measurements (i.e., approx. 1 to 3.5 cm), with a maximum value of more than 5 cm for a single coordinate in flight 50 m_1. The mean RMSEs for the individual coordinates are 3.6, 2.5, and 2.9 cm (for flights 50 m_1, 50 m_2, and 70 m, respectively). After the application of the determined systematic shift values to the data (i.e., transformation of the point cloud to clean the data of the systematic shift), we observe a significant improvement of all RMSEs for individual coordinates as well as of the global RMSEs for individual flights, to 1.5 cm or less. The total systematic error of point cloud georeferencing differed between flights; however, after transformation using the systematic shifts determined for the individual flights, the differences between the lidar point clouds and the reference geodetic measurements dropped to very similar values of approx. 1.5 cm in all flights.
The use of additional transformation parameters (i.e., adding a rotation about the vertical axis, or rotations about all three axes) practically did not improve the results further (RMSEs improved by 1 mm or less); this implies that our experiment was not burdened with any significant error in the orientation (rotation) of the entire point cloud. The angles detailed in Table 7 may appear relatively large; nevertheless, considering the flight altitude, they correspond to only a few centimeters (max. 2.5 cm).

3.3. Local Accuracy on the Vegetation-Free Areas

After cleaning the point clouds of the global shift as described above, the point clouds were compared against the SfM cloud; the overall results are shown in Table 8 (detailed results in Appendix A).
RMSEs on the flat surfaces show minor differences between the individual areas; at the flight altitude of 50 m, they are close to the accuracy of the distance meter itself (manufacturer-declared accuracy of approx. 3 cm). At the flight altitude of 70 m, this error is about half again as high (4.4 cm).
The situation is similar for the rugged surfaces; the errors are, however, somewhat higher (an increase from approx. 3 to 3.8 cm and from 4.4 to 4.8 cm at the 50 m and 70 m flight altitudes, respectively).
However, in the case of vertical surfaces, we observed a difference between the nadir (flight 50 m_1) and oblique (50 m_2) data acquisition. The higher error observed for the 50 m_1 flight indicates that for vertical surfaces, the vertical (nadir) acquisition is likely to provide poorer results than the oblique one.

3.4. The Shift of the Color Information

The color information shifts against the signal intensity are shown in Table 9. The shift was relatively high, with the total error exceeding 2 decimeters. The mean shift indicates that the color information is systematically biased and, in addition, the shift changes within the point cloud (the latter is characterized by a mean standard deviation of approx. 4.9 cm).
Figure 9 illustrates the color information shift using the example of flight 50 m_1. Figure 10 shows an overlay of the color information (blue reflective foil) and the point cloud colored by intensities.

4. Discussion

This paper proposed an algorithm for a full evaluation of a UAV lidar system that overcomes the problem of the identification of individual points within the cloud. The principal step lies in the use of portable square targets (0.5 × 0.5 m) covered with a highly reflective foil facilitating both their easy identification in the point cloud and a relatively simple determination of the center of each target. This subsequently allows researchers to calculate the georeferencing error and remove it from the cloud through its spatial transformation. Here, the accuracy testing was performed on vegetation-free surfaces to prevent the effects of additional factors (e.g., vegetation removal). Besides smooth horizontal surfaces, we also used vertical surfaces and rugged areas (with surfaces both sloped and rough). Using this approach, we performed full testing of the DJI Zenmuse L1 system carried by a UAV DJI Matrice 300. Unlike in many of the studies mentioned above, our results correspond to those provided by the manufacturer, especially where the lidar sensor accuracy is concerned: the RMSEs detected in our study were approx. 38 mm at the flight altitude of 50 m and approx. 48 mm at the flight altitude of 70 m (after removal of the global georeferencing error using a transformation of the entire point cloud).
In our case, the determination of a simple systematic shift in all axes and its removal from the cloud was sufficient for the acquisition of optimal results; the highest observed systematic shift was 50 mm in a single coordinate, and typical shifts ranged between 20 and 30 mm, which corresponds to the GNSS RTK measurement accuracy of the UAV onboard receiver. Had the global georeferencing error not been removed, the RMSEs would have been at least twice as high (and even higher on sloped and rugged objects).
Our testing also showed that the global georeferencing error is not negligible (in our case, the georeferencing error was of approx. the same size as that of the lidar scanner). The use of the presented targets can help in the detection and subsequent significant reduction of this component of the error and can be applied both when evaluating a UAV lidar system and when performing routine measurements.
We evaluated our testing method using three data acquisition flights. The basic flight altitude was set to 50 m, as the manufacturer-declared characteristics are valid for this altitude. As the manufacturer does not specify the mode of data acquisition (nadir vs. oblique), we tested both. In addition, to be able to evaluate the effect of the flight altitude on the accuracy, we also opted for a higher flight altitude of 70 m. Our results show that the mode of acquisition has practically no effect on the system performance, while the higher altitude was associated with a slight worsening of the accuracy. This can be expected, as the measured distances are longer and the lidar spot is larger, thus registering means from a greater area.
The fact that the color information provided by the lidar system is systematically positionally shifted is important for potential practical applications utilizing this information, such as the detection of ground points under the canopy based on differences in color. This shift could possibly be caused by the fact that the registration of the color information (i.e., photo acquisition) is performed at certain intervals while the UAV system travels and records points even between any two acquisitions of the color information. We cannot generalize this result of our study as it is valid only for the tested system; nevertheless, it seems logical that the same would apply to other systems as well. The manufacturer of our system does not provide information about the color information accuracy in the specifications, providing only general information about the scanner accuracy, which is, in our case, better than that of the color information; this might be misleading and lead users to believe that the color information accuracy is the same.

5. Conclusions

We proposed a method for evaluating the quality of point clouds acquired using a lidar/UAV system. This method solves the problem of the identification of reference points within a dense point cloud using special targets with high reflectivity. Here, we summarize the key points of our approach:
  • Testing should be carried out in an area with surfaces that are approximately horizontal, vertical, and generally rugged, all without vegetation.
  • Prior to data acquisition, targets covered with a reflective film should be placed in the area and their coordinates determined using a reference method with accuracy superior to the expected accuracy of the tested system.
  • The UAV–lidar system point cloud should be supplemented with a reference point cloud (e.g., SfM as in our case) of the test area with significantly higher accuracy and detail.
  • The coordinates of the centers of targets are determined in the cloud using reflection intensities. Using these data, the systematic georeferencing error is calculated and removed from the cloud by linear transformation.
  • The resulting point cloud accuracy is determined as the RMSE of the distances between the reference (in our case, SfM) and tested (lidar) clouds for individual surfaces.
In addition to accuracy testing, the targets mentioned above can also be used to check or improve the georeferencing of the entire cloud during routine measurements. Admittedly, this requires an additional processing step; on the other hand, it can further improve the accuracy of such routine measurements. Hence, researchers should consider the level of accuracy they need in each individual case, and if high accuracy is needed, this method should be used.
The results of our testing confirm that the achieved standard deviation (cleaned of the georeferencing error) is better than that declared by the manufacturer. At the same time, they have also shown the presence of this systematic georeferencing error, which is not negligible. The proposed method can help eliminate this component of error from the cloud (or, at least, significantly reduce it).
Automatic color coding of the point cloud by the tested lidar–UAV system is burdened with a relatively high (approx. 20 cm) positional inaccuracy, which may pose a problem for applications utilizing this information.

Author Contributions

Conceptualization, M.Š. and R.U.; methodology, M.Š. and R.U.; formal analysis, L.L.; investigation, L.L. and R.U.; data curation, M.Š.; writing—original draft preparation, M.Š., R.U., and L.L.; writing—review and editing, L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Grant Agency of CTU in Prague—grant number SGS21/053/OHK1/1T/11 “Optimization of acquisition and processing of 3D data for purpose of engineering surveying, geodesy in underground spaces and 3D scanning”.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. Detailed Results—RMSE of Lidar Point Clouds vs. SfM Point Cloud

Table A1. RMSE of lidar point clouds vs. SfM point cloud on “flat” surfaces.
Flight L1 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | All
50 m_1 | 0.033 | 0.032 | 0.030 | 0.031 | 0.035 | 0.036 | 0.025 | 0.028 | 0.032
50 m_2 | 0.031 | 0.028 | 0.029 | 0.027 | 0.033 | 0.034 | 0.028 | 0.029 | 0.030
70 m | 0.052 | 0.037 | 0.039 | 0.044 | 0.048 | 0.045 | 0.041 | 0.045 | 0.044
Table A2. RMSE of lidar point clouds vs. SfM point cloud on “rugged” surfaces.
Flight L1 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | All
50 m_1 | 0.036 | 0.028 | 0.036 | 0.027 | 0.038 | 0.039 | 0.038 | 0.068 | 0.031 | 0.038
50 m_2 | 0.037 | 0.029 | 0.035 | 0.028 | 0.034 | 0.044 | 0.038 | 0.051 | 0.033 | 0.038
70 m | 0.052 | 0.037 | 0.045 | 0.036 | 0.049 | 0.052 | 0.051 | 0.064 | 0.039 | 0.048
Table A3. RMSE of lidar point clouds vs. SfM point cloud on “vertical” surfaces.
Flight L1 | 1 | 2 | All
50 m_1 | 0.025 | 0.052 | 0.038
50 m_2 | 0.029 | 0.027 | 0.027
70 m | 0.045 | 0.053 | 0.049

References

  1. Urban, R.; Štroner, M.; Kuric, I. The use of onboard UAV GNSS navigation data for area and volume calculation. Acta Montan. Slovaca 2020, 25, 361–374. [Google Scholar] [CrossRef]
  2. Kovanič, Ľ.; Blistan, P.; Štroner, M.; Urban, R.; Blistanova, M. Suitability of Aerial Photogrammetry for Dump Documentation and Volume Determination in Large Areas. Appl. Sci. 2021, 11, 6564. [Google Scholar] [CrossRef]
  3. Park, S.; Choi, Y. Applications of Unmanned Aerial Vehicles in Mining from Exploration to Reclamation: A Review. Minerals 2020, 10, 663. [Google Scholar] [CrossRef]
  4. Klouček, T.; Komárek, J.; Surový, P.; Hrach, K.; Janata, P.; Vašíček, B. The Use of UAV Mounted Sensors for Precise Detection of Bark Beetle Infestation. Remote Sens. 2019, 11, 1561. [Google Scholar] [CrossRef] [Green Version]
  5. Moudrý, V.; Beková, A.; Lagner, O. Evaluation of a high resolution UAV imagery model for rooftop solar irradiation estimates. Remote Sens. Lett. 2019, 10, 1077–1085. [Google Scholar] [CrossRef]
  6. Kovanič, Ľ.; Blistan, P.; Urban, R.; Štroner, M.; Blišťanová, M.; Bartoš, K.; Pukanská, K. Analysis of the Suitability of High-Resolution DEM Obtained Using ALS and UAS (SfM) for the Identification of Changes and Monitoring the Development of Selected Geohazards in the Alpine Environment—A Case Study in High Tatras, Slovakia. Remote Sens. 2020, 12, 3901. [Google Scholar] [CrossRef]
  7. Jaud, M.; Bertin, S.; Beauverger, M.; Augereau, E.; Delacourt, C. RTK GNSS-Assisted Terrestrial SfM Photogrammetry without GCP: Application to Coastal Morphodynamics Monitoring. Remote Sens. 2020, 12, 1889. [Google Scholar] [CrossRef]
  8. Žabota, B.; Kobal, M. Accuracy Assessment of UAV-Photogrammetric-Derived Products Using PPK and GCPs in Challenging Terrains: In Search of Optimized Rockfall Mapping. Remote Sens. 2021, 13, 3812. [Google Scholar] [CrossRef]
  9. Losè, L.T.; Chiabrando, F.; Tonolo, F.G.; Lingua, A. Uav Photogrammetry and Vhr Satellite Imagery for Emergency Mapping. The October 2020 Flood in Limone Piemonte (Italy). ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, XLIII-B3-2, 727–734. [Google Scholar] [CrossRef]
  10. Štroner, M.; Urban, R.; Seidl, J.; Reindl, T.; Brouček, J. Photogrammetry Using UAV-Mounted GNSS RTK: Georeferencing Strategies without GCPs. Remote Sens. 2021, 13, 1336. [Google Scholar] [CrossRef]
  11. Vacca, G.; Dessì, A.; Sacco, A. The Use of Nadir and Oblique UAV Images for Building Knowledge. ISPRS Int. J. Geo-Inf. 2017, 6, 393. [Google Scholar] [CrossRef] [Green Version]
  12. Losè, L.T.; Chiabrando, F.; Tonolo, F.G. Boosting the Timeliness of UAV Large Scale Mapping. Direct Georeferencing Approaches: Operational Strategies and Best Practices. ISPRS Int. J. Geo-Inf. 2020, 9, 578. [Google Scholar] [CrossRef]
  13. Sanz-Ablanedo, E.; Chandler, J.H.; Rodríguez-Pérez, J.R.; Ordóñez, C. Accuracy of Unmanned Aerial Vehicle (UAV) and SfM Photogrammetry Survey as a Function of the Number and Location of Ground Control Points Used. Remote Sens. 2018, 10, 1606. [Google Scholar] [CrossRef] [Green Version]
  14. McMahon, C.; Mora, O.; Starek, M. Evaluating the Performance of sUAS Photogrammetry with PPK Positioning for Infrastructure Mapping. Drones 2021, 5, 50. [Google Scholar] [CrossRef]
  15. Padró, J.-C.; Muñoz, F.-J.; Planas, J.; Pons, X. Comparison of four UAV georeferencing methods for environmental monitoring purposes focusing on the combined use with airborne and satellite remote sensing platforms. Int. J. Appl. Earth Obs. Geo-Inf. 2019, 75, 130–140. [Google Scholar] [CrossRef]
  16. Koska, B.; Křemen, T. The combination of laser scanning and structure from motion technology for creation of accurate exterior and interior orthophotos of St. Nicholas Baroque church. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-5/W1, 133–138. [Google Scholar] [CrossRef] [Green Version]
  17. Křemen, T. Advances and Trends in Geodesy, Cartography and Geoinformatics II. In Advances and Trends in Geodesy, Cartography and Geoinformatics II; CRC Press: London, UK, 2020; pp. 44–49. [Google Scholar]
  18. Blistan, P.; Jacko, S.; Kovanič, Ľ.; Kondela, J.; Pukanská, K.; Bartoš, K. TLS and SfM Approach for Bulk Density Determination of Excavated Heterogeneous Raw Materials. Minerals 2020, 10, 174. [Google Scholar] [CrossRef] [Green Version]
  19. Pukanská, K.; Bartoš, K.; Bella, P.; Gašinec, J.; Blistan, P.; Kovanič, Ľ. Surveying and High-Resolution Topography of the Ochtiná Aragonite Cave Based on TLS and Digital Photogrammetry. Appl. Sci. 2020, 10, 4633. [Google Scholar] [CrossRef]
  20. Mandlburger, G.; Pfennigbauer, M.; Schwarz, R.; Flöry, S.; Nussbaumer, L. Concept and Performance Evaluation of a Novel UAV-Borne Topo-Bathymetric LiDAR Sensor. Remote Sens. 2020, 12, 986. [Google Scholar] [CrossRef] [Green Version]
  21. Jakovljevic, G.; Govedarica, M.; Alvarez-Taboada, F.; Pajic, V. Accuracy Assessment of Deep Learning Based Classification of LiDAR and UAV Points Clouds for DTM Creation and Flood Risk Mapping. Geosciences 2019, 9, 323. [Google Scholar] [CrossRef] [Green Version]
  22. Balsi, M.; Esposito, S.; Fallavollita, P.; Melis, M.G.; Milanese, M. Preliminary Archeological Site Survey by UAV-Borne Lidar: A Case Study. Remote Sens. 2021, 13, 332. [Google Scholar] [CrossRef]
  23. Lin, Y.-C.; Cheng, Y.-T.; Zhou, T.; Ravi, R.; Hasheminasab, S.M.; Flatt, J.E.; Troy, C.; Habib, A. Evaluation of UAV LiDAR for Mapping Coastal Environments. Remote Sens. 2019, 11, 2893. [Google Scholar] [CrossRef] [Green Version]
  24. Shaw, L.; Helmholz, P.; Belton, D.; Addy, N. Comparison of UAV lidar and imagery for beach monitoring. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2019, XLII-2/W13, 589–596. [Google Scholar] [CrossRef] [Green Version]
  25. Zimmerman, T.; Jansen, K.; Miller, J. Analysis of UAS Flight Altitude and Ground Control Point Parameters on DEM Accuracy along a Complex, Developed Coastline. Remote Sens. 2020, 12, 2305. [Google Scholar] [CrossRef]
  26. Feroz, S.; Abu Dabous, S. UAV-Based Remote Sensing Applications for Bridge Condition Assessment. Remote Sens. 2021, 13, 1809. [Google Scholar] [CrossRef]
  27. Ren, H.; Zhao, Y.; Xiao, W.; Hu, Z. A review of UAV monitoring in mining areas: Current status and future perspectives. Int. J. Coal Sci. Technol. 2019, 6, 320–333. [Google Scholar] [CrossRef] [Green Version]
  28. Tan, J.; Zhao, H.; Yang, R.; Liu, H.; Li, S.; Liu, J. An Entropy-Weighting Method for Efficient Power-Line Feature Evaluation and Extraction from LiDAR Point Clouds. Remote Sens. 2021, 13, 3446. [Google Scholar] [CrossRef]
  29. Dihkan, M.; Mus, E. Automatic detection of power transmission lines and risky object locations using UAV LiDAR data. Arab. J. Geosci. 2021, 14, 567. [Google Scholar] [CrossRef]
  30. Salach, A.; Bakuła, K.; Pilarska, M.; Ostrowski, W.; Górski, K.; Kurczyński, Z. Accuracy Assessment of Point Clouds from LiDAR and Dense Image Matching Acquired Using the UAV Platform for DTM Creation. ISPRS Int. J. Geo-Inf. 2018, 7, 342. [Google Scholar] [CrossRef] [Green Version]
  31. Gomes Pereira, L.; Fernandez, P.; Mourato, S.; Matos, J.; Mayer, C.; Marques, F. Quality Control of Outsourced LiDAR Data Acquired with a UAV: A Case Study. Remote Sens. 2021, 13, 419. [Google Scholar] [CrossRef]
  32. Chen, J.; Zhang, Z.; Zhang, K.; Wang, S.; Han, Y. UAV-Borne LiDAR Crop Point Cloud Enhancement Using Grasshopper Optimization and Point Cloud Up-Sampling Network. Remote Sens. 2020, 12, 3208. [Google Scholar] [CrossRef]
  33. Torresan, C.; Berton, A.; Carotenuto, F.; Chiavetta, U.; Miglietta, F.; Zaldei, A.; Gioli, B. Development and Performance Assessment of a Low-Cost UAV Laser Scanner System (LasUAV). Remote Sens. 2018, 10, 1094. [Google Scholar] [CrossRef] [Green Version]
  34. Hu, T.; Sun, X.; Su, Y.; Guan, H.; Sun, Q.; Kelly, M.; Guo, Q. Development and Performance Evaluation of a Very Low-Cost UAV-Lidar System for Forestry Applications. Remote Sens. 2020, 13, 77. [Google Scholar] [CrossRef]
  35. Jon, J.; Koska, B.; Pospíšil, J. Autonomous airship equipped by multi-sensor mapping platform. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-5/W1, 119–124. [Google Scholar] [CrossRef] [Green Version]
  36. Fuad, N.A.; Ismail, Z.; Majid, Z.; Darwin, N.; Ariff, M.F.M.; Idris, K.M.; Yusoff, A.R. Accuracy evaluation of digital terrain model based on different flying altitudes and conditional of terrain using UAV LiDAR technology. IOP Conf. Series Earth Environ. Sci. 2018, 169, 012100. [Google Scholar] [CrossRef] [Green Version]
  37. Siwiec, J. Comparison of Airborne Laser Scanning of Low and High Above Ground Level for Selected Infrastructure Objects. J. Appl. Eng. Sci. 2018, 8, 89–96. [Google Scholar] [CrossRef] [Green Version]
  38. Pilarska, M.; Ostrowski, W.; Bakuła, K.; Górski, K.; Kurczyński, Z. The potential of light laser scanners developed for unmanned aerial vehicles—The review and accuracy. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLII-2/W2, 87–95. [Google Scholar] [CrossRef] [Green Version]
Figure 1. (a) DJI Matrice 300 with a DJI Zenmuse P1 camera (b) DJI Zenmuse L1 in the transport case.
Figure 2. (a) The entire scanned area; (b) The test area with locations of: Red—photogrammetric black and white targets; blue—reflection targets; 4001–4005—points georeferenced using the GNSS total station, 5001—position of the total station.
Figure 3. Targets used as control points: (a) black-and-white photogrammetric target (0.4 × 0.4 m) (b) blue high reflection foil target (c) yellow high reflection foil target (d) red high reflection foil target (e) white high reflection foil target.
Figure 4. Flight paths (a) L1_50 m_1 flight (b) L1_50 m_2 flight (c) L1_70 m flight.
Figure 5. Flight path for the acquisition of the imagery for SfM processing.
Figure 6. Detail of a point cloud color-coded according to the intensities; red dots indicate high reflectivity foils.
Figure 7. Areas of evaluation: blue—flat areas; red—rugged areas; purple—vertical areas.
Figure 8. Comparison of 20 cm wide point cloud profiles of photogrammetric data (blue) and lidar data (red), flight 50 m_1: (a) flat surface; (b) rugged surface; (c) vertical surface.
Figure 9. Color information shifts for individual targets acquired during the 50 m_1 flight; the placement of all shifts in a single quadrant highlights the systematic character of the error.
Figure 10. An example of the color information shift (blue target) compared to the reflection intensity (brown/yellow scale); the target centers differ by 0.25 m.
Table 1. Basic characteristics of the UAV DJI Matrice 300.
Weight | Approx. 6.3 kg (with one gimbal)
Max. transmitting distance (Europe) | 8 km
Max. flight time | 55 min
Dimensions | 810 × 670 × 430 mm
Max. payload | 2.7 kg
Max. speed | 82 km/h
Table 2. Basic characteristics of the DJI Zenmuse L1 laser scanner.
Dimensions | 152 × 110 × 169 mm
Weight | 930 ± 10 g
Maximum measurement distance | 450 m at 80% reflectivity; 190 m at 10% reflectivity
Recording speed | Single return: max. 240,000 points/s; multiple return: max. 480,000 points/s
System accuracy (1σ) | Horizontal: 10 cm per 50 m; vertical: 5 cm per 50 m
Distance measurement accuracy (1σ) | 3 cm per 100 m
Beam divergence | 0.28° (vertical) × 0.03° (horizontal)
Maximum registered reflections | 3
RGB camera sensor size | 1 in
RGB camera effective pixels | 20 Mpix (5472 × 3078)
Table 3. Basic characteristics of the DJI Zenmuse P1 camera.
Weight | 787 g
Dimensions | 198 × 166 × 129 mm
CMOS sensor size | 35.9 × 24 mm
Number of effective pixels | 45 Mpix
Pixel size | 4 µm
Resolution | 8192 × 5460 pix
Table 4. RMSE residues after calculation—P1 flight.
 | X [m] | Y [m] | Z [m] | Total Error [m]
Camera positions | 0.018 | 0.022 | 0.035 | 0.045
GCPs | 0.003 | 0.002 | 0.006 | 0.007
CPs | 0.002 | 0.002 | 0.011 | 0.011
Table 5. Numbers of points and resolutions of the point clouds.
Flight | Number of Points (Total) | Number of Points (Cropped) | Average Point Density [points/m²] | Resolution [mm]
P1 | 274,363,655 | 77,849,698 | 6551 | 12
L1_50 m_1 | 85,362,025 | 21,940,068 | 1846 | 23
L1_50 m_2 | 129,145,833 | 34,263,928 | 2882 | 19
L1_70 m | 214,350,916 | 40,576,864 | 3413 | 17
Table 6. RMSEs compared to checkpoints without and after transformations of different types.
Flight L1 | Type of Transformation | RMSE [m] | RMSE_X [m] | RMSE_Y [m] | RMSE_Z [m]
50 m_1 | Original cloud | 0.036 | 0.054 | 0.019 | 0.022
 | Translation | 0.013 | 0.016 | 0.013 | 0.007
 | 2.5D transformation | 0.013 | 0.016 | 0.013 | 0.007
 | 3D transformation | 0.012 | 0.016 | 0.013 | 0.005
50 m_2 | Original cloud | 0.025 | 0.024 | 0.020 | 0.030
 | Translation | 0.015 | 0.016 | 0.017 | 0.012
 | 2.5D transformation | 0.015 | 0.016 | 0.016 | 0.012
 | 3D transformation | 0.014 | 0.016 | 0.016 | 0.010
70 m | Original cloud | 0.029 | 0.035 | 0.030 | 0.019
 | Translation | 0.014 | 0.019 | 0.010 | 0.012
 | 2.5D transformation | 0.014 | 0.019 | 0.011 | 0.012
 | 3D transformation | 0.013 | 0.019 | 0.011 | 0.007
Table 7. Transformation parameters.
Flight L1 | Type of Transformation | TX [m] | TY [m] | TZ [m] | Rz [°] | Rx [°] | Ry [°]
50 m_1 | Original cloud
 | Translation | 0.052 | −0.014 | 0.021
 | 2.5D transformation | 0.052 | −0.014 | 0.021 | −0.0001
 | 3D transformation | 0.052 | −0.014 | 0.021 | −0.0001 | −0.0084 | −0.0080
50 m_2 | Original cloud
 | Translation | 0.018 | −0.011 | 0.027
 | 2.5D transformation | 0.018 | −0.011 | 0.027 | 0.0046
 | 3D transformation | 0.018 | −0.011 | 0.027 | 0.0046 | −0.0135 | −0.0053
70 m | Original cloud
 | Translation | 0.030 | −0.028 | 0.015
 | 2.5D transformation | 0.030 | −0.028 | 0.015 | −0.0050
 | 3D transformation | 0.030 | −0.028 | 0.015 | −0.0050 | −0.0126 | −0.0212
Table 8. RMSE of lidar point clouds vs. SfM point cloud in individual types of areas.
Flight L1 | Flat Surfaces | Rugged Surfaces | Vertical Surfaces
50 m_1 | 0.032 | 0.038 | 0.038
50 m_2 | 0.030 | 0.038 | 0.027
70 m | 0.044 | 0.048 | 0.049
Table 9. Color information shifts for individual flights.
Flight L1 | Shift | X (m) | Y (m) | Z (m) | Distance (m)
50 m_1 | Mean | 0.076 | −0.196 | 0.000 | 0.210
 | St. dev | 0.028 | 0.036 | 0.042
50 m_2 | Mean | −0.271 | −0.163 | −0.102 | 0.332
 | St. dev | 0.069 | 0.051 | 0.027
70 m | Mean | −0.221 | 0.335 | −0.139 | 0.425
 | St. dev | 0.054 | 0.071 | 0.043
