
Comparing High Accuracy t-LiDAR and UAV-SfM Derived Point Clouds for Geomorphological Change Detection

Laboratory of Mineralogy-Geology, Department of Natural Resources Management and Agricultural Engineering, Agricultural University of Athens, 75, Iera Odos str., 11855 Athens, Greece
Institute of Neotectonics and Natural Hazards, RWTH Aachen University, 52062 Aachen, Germany
Author to whom correspondence should be addressed.
Academic Editors: Wolfgang Kainz and Josef Strobl
ISPRS Int. J. Geo-Inf. 2021, 10(6), 367;
Received: 18 March 2021 / Revised: 4 May 2021 / Accepted: 26 May 2021 / Published: 29 May 2021
(This article belongs to the Special Issue GIS and Remote Sensing Applications in Geomorphology)


Analysis of two small semi-mountainous catchments in central Evia island, Greece, highlights the advantages of Unmanned Aerial Vehicle (UAV) and Terrestrial Laser Scanning (TLS) based change detection methods. We use point clouds derived from both methods at two sites (S1 and S2) to analyse the effects of a recent wildfire on soil erosion. Results indicate that topsoil movements of the order of a few centimetres, occurring within a few months, can be estimated. Erosion at S2 is precisely delineated by both methods, yielding a mean value of 1.5 cm within four months. At S1, comparison of UAV-derived point clouds quantifies annual soil erosion more accurately, showing a maximum annual erosion rate of 48 cm. UAV-derived point clouds appear to be more accurate for displaying and measuring channel erosion, while slope wash is more precisely estimated using TLS. Analysis of point cloud time series is a reliable and fast process for soil erosion assessment, especially in rapidly changing environments that are difficult to access for direct measurement methods. This study will contribute to proper georesource management by defining the best-suited methodology for soil erosion assessment after a wildfire in Mediterranean environments.
Keywords: Terrestrial Laser Scanning (TLS); Structure from Motion (SfM); drone; point cloud; soil erosion; wildfire; geoenvironment; remote sensing

1. Introduction

Land-use change and wildfire events have been significant environmental issues for many countries over the last decades. Wildfires cause significant soil degradation in the Mediterranean region due to their increasing frequency and the resulting soil erosion [1,2,3,4,5]. If no conservation practices are applied, soil erosion progressively degrades both the soil and the wider environment. During the soil erosion process, a grain-by-grain separation of the topsoil occurs, followed by transportation and deposition in downslope areas [2,3,6,7]. Water erosion is the primary erosional process in Mediterranean catchments, because of the mountainous geomorphology and the climatic conditions characterised by high-intensity precipitation events [3].
Over the past 15 years, terrestrial Light Detection And Ranging (t-LiDAR) techniques [4,8,9,10,11,12], together with the rapidly developed Unmanned Aerial Vehicle (UAV) based photogrammetry techniques of the past five years [13,14], have been widely employed in change detection and even forest canopy structure analysis, by producing high-accuracy digital terrain models. Structure from Motion (SfM) is the most widespread photogrammetric technique used for this purpose. UAV platforms carrying cameras can be utilised for large-scale photogrammetric surveys of otherwise unapproachable sites, while the well-established Terrestrial Laser Scanning (TLS) method produces models with accuracies of the order of millimetres to centimetres.
The SfM method is a passive surface reconstruction approach, whereas TLS is an active one, leading to sharp surface reconstructions that also contain the target's backscattered intensity values. This information is critical in scientific applications that rely on roughness measurements (e.g., fault slip rates, trench wall analysis), as denoted by [15,16,17,18], or on distinguishing materials by the reflectance of the laser beam impacting their surface at a certain incidence angle [19].
LiDAR technology is mostly used to collect elevation data with high accuracy and density using a laser-beam scanning technique. The method results in high-resolution data, which provide excellent surface reconstruction under any light conditions. The LiDAR sensor follows the same principle as a conventional radar: it emits a laser pulse with specified characteristics towards a defined target and calculates the pulse's return time to the source. Owing to the notably short wavelength used by TLS, it is possible to measure small objects/targets. The beam is emitted by a rotating source over a specified angular range, and the scanning process is characterised by a certain pulse frequency, which results in high-accuracy distance calculation from the emitted beam. Geomorphological mapping based on the LiDAR method can provide high-resolution DTM in which small-scale features are denoted [15,16,17,18,20,21,22,23,24,25,26].
Airborne Laser Scanning (ALS) and TLS have had a great impact on Digital Surface Model (DSM) data collection since the early 2000s [27,28,29]. Many researchers agree that LiDAR technology is the most effective approach for data collection and high-accuracy 3D surface reconstruction. Additionally, LiDAR sensors have the advantage of immediately assigning 3D coordinates, in a local or global coordinate system, to every point of the target area [30,31,32]. The TLS technique is a static approach compared to the dynamic form of ALS, and it has a horizontal perspective compared to the nadir direction of ALS. Hence, the TLS method can achieve higher accuracy, while ALS is less time-consuming, resulting in a more cost-effective approach [33,34,35,36]. In particular, a combination of ALS and TLS could potentially raise the measurement accuracy, if overlapping datasets were used to minimise the drawbacks of each separate method.
Following the LiDAR innovation, SfM photogrammetry via UAV imagery has proved to be a valuable tool for topographic data acquisition, especially in geomorphological applications [37,38]. Several researchers have demonstrated the ability of SfM methods to generate high-quality Digital Terrain Models (DTM), point clouds and surface reconstructions from UAV imagery [39,40]. A great number of studies deal with Structure from Motion (SfM) algorithms and DTM comparison and their efficient utilisation in post-flood research [41,42,43], change detection [44,45,46,47], and even grain identification [48,49]. A comparison between SfM-derived and airborne LiDAR point clouds was also made by Mlambo et al. [50], showing a strong correlation between the SfM- and LiDAR-derived DEM (pixel-to-pixel comparison). Sankey et al. [51] demonstrated the combined use of digital cameras and LiDAR, concluding that both are great tools for geomorphological research with highly accurate results. More recently, Mateos et al. [52] described the combination of InSAR and UAV-based photogrammetry for monitoring a landslide near the urban area of the Cármenes del Mar Resort.
The SfM technique utilises common features/targets in successive images taken from different positions. UAVs can be used at sites that are not easily accessible, such as remote, steep and unstable slopes. Successive aerial surveys provide multiple DTM which are compared, yielding measurable topsoil changes. For that reason, software packages that follow a multi-view stereopsis workflow, using image feature identification and feature matching, are usually employed [53].
The study of an area using UAV imagery can be really complex due to several factors such as dense vegetation, untextured surfaces, variation of slopes, water-covered areas, etc. The most critical factor obscuring the clear ground view when applying UAV-based photogrammetry methods is vegetation. It is well established that UAV-derived DTM cannot reconstruct vegetated areas as LiDAR does. Dandois and Ellis [54] showed that DTM produced by SfM techniques are significantly less accurate than LiDAR-derived point clouds under complex canopy structure (low-quality ground points beneath the canopy).
This study presents the combined application of these two methods at the same pilot sites, aiming to introduce the scientific community to a multi-source (TLS and UAV-derived) point cloud comparison from a multitemporal perspective, especially under fast-changing conditions of erosion and vegetation regrowth after a wildfire. We argue that both TLS and UAV photogrammetry provide valuable information on post-wildfire erosion. We first analyse each method separately, pointing out their differences. We then present the erosion rates derived from each method, evaluate their utilisation, and discuss the best-fitted application.

2. Study Area

Two recently burned slopes were chosen as test sites S1 and S2, located on Evia island (Greece), near Psachna town (Figure 1a). Both sites are characterised by the same Mediterranean climatic and geomorphological conditions (Figure 1b), and they were considered suitable for detecting changes over a short timescale, where soil movement was expected to occur. The study areas have relatively steep slopes (mean value: 30°, Figure 1b); they lack vegetation cover due to the recent wildfires and their lithology is susceptible to erosion (Figure 1a). S1 covers an area of about 420 m2 and S2 an area of 85 m2. Before the wildfire, both sites were covered by coniferous forest combined with a transition of woodland and shrubs (codes 312 and 324, according to the CORINE 2018 dataset). The mean elevation above sea level is about 80 m for S1 and 475 m for S2. Regarding the site geology, the wider study area is considered part of the Subpelagonian unit [55,56]. The geological structure mostly comprises thick ophiolite formations overlain by Upper Cretaceous limestone. Under the ophiolites, a mélange of schist, chert and shales with multiple lenses of limestone occurs. S1 is located within the ophiolitic complex of Upper Jurassic-Lower Cretaceous age, consisting of serpentinites, diabase and peridotites, covered by a weathered mantle of significant thickness. The S2 slope is located within a small sub-basin, comprising talus cones and alluvial deposits of great thickness. Bedrock geology at both study sites supports the formation of significant soil thickness, as confirmed by the land cover/vegetation characteristics (Figure 1).

3. Methodology

3.1. Pre-Fieldwork

Preliminary research included a first geomorphological study of the burnt area to select a suitable region of interest (ROI). The central Evia area was devastated by recent fires (August 2019), so we assumed that sub-basins within the burnt area would offer an excellent opportunity for soil loss assessment, given the absence of vegetation. Both sites were chosen after applying the differenced Normalized Burn Ratio (dNBR) index [57,58,59,60] to delineate the burnt area using Landsat 8 pre-fire and post-fire images (Bands 5 and 7), and after using geomorphological and geological data, supported by intensive fieldwork.
The dNBR index was first calculated to define burn severity (Figure 2), indicating areas where vegetation would be almost absent. Two field campaigns were held in October 2019 (13 and 19 October 2019) and one additional campaign took place on 23 February 2020. The last survey was conducted on 11 October 2020, to estimate and locate the annual erosion. It took place only at S1, due to the outburst of new vegetation cover at S2 (following a flooding event on 9 August 2020).
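The NBR/dNBR computation described above can be sketched as follows; the reflectance values in the example are hypothetical, chosen only to illustrate a severely burnt pixel (dNBR values above roughly 0.66 are commonly classed as high severity in USGS burn-severity tables):

```python
def nbr(nir, swir2):
    """Normalized Burn Ratio from NIR (Landsat 8 Band 5) and SWIR2 (Band 7) reflectance."""
    return (nir - swir2) / (nir + swir2)

def dnbr(pre, post):
    """Differenced NBR: dNBR = NBR_pre - NBR_post; higher values indicate more severe burn."""
    return nbr(*pre) - nbr(*post)

# Hypothetical (NIR, SWIR2) reflectances: healthy vegetation before the fire,
# charred surface after it.
severity = dnbr(pre=(0.45, 0.15), post=(0.20, 0.35))
print(round(severity, 3))  # ~0.77, i.e. high severity
```

In practice the same expression is applied per pixel to the calibrated Landsat 8 band rasters rather than to single values.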

3.2. UAV Image Acquisition Technique

A DJI Phantom 4 was used during this research (Figure 3), with a 1/2.3″ 12.4 MP camera sensor with FOV 94°, a 20 mm (35 mm equivalent) lens and 4000 × 3000 image size. For both sites, the operator took photos using a nadir camera, keeping the UAV in a still position while acquiring the images. No automated flight plan was preset, because of the electrical wires at S1 and significant tree obscurance at S2. The minimum image overlap was 80%, and the maximum forward and side overlap reached 95% in specific areas of high interest, at the highest elevations. The minimum flight altitude was 5 m above ground level (AGL) at the highest elevation of the study area. An additional flight was made on 11 October 2020, at 2 m AGL, to increase the accuracy along a channel that had already formed, running through the centre of the study area. As a result, the flight height at S1 varied from 5 m AGL (reaching 2 m AGL for the channel on 11 October 2020, resulting in 0.4 mm/pixel Ground Sampling Distance—GSD) at the highest point, up to 15 m above the lowest point (2.7 mm/pixel GSD), while at S2 the images were captured from 5 m (1 mm/pixel GSD) up to 13 m AGL (2.3 mm/pixel GSD). In general, it was not possible to keep the flight altitude constant, due to obstacles such as the remaining tree stems and canopy, power lines and the slope itself.
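As a first-order check on how flight height drives GSD, the nominal relation GSD = (sensor width × altitude) / (focal length × image width) can be sketched below. The sensor width and real focal length are assumed nominal values for a Phantom 4-class 1/2.3″ camera; the per-flight GSD figures reported above come from the photogrammetric processing and depend on the exact calibration:

```python
def ground_sampling_distance(altitude_m, sensor_width_mm=6.17,
                             focal_length_mm=3.61, image_width_px=4000):
    """First-order GSD (m/pixel) for a nadir camera at a given altitude AGL.

    sensor_width_mm and focal_length_mm are assumed nominal values for a
    1/2.3-inch sensor with a 20 mm (35 mm equivalent) lens.
    """
    return (sensor_width_mm / 1000.0) * altitude_m / \
           ((focal_length_mm / 1000.0) * image_width_px)

for h in (2, 5, 15):
    print(f"{h:>2} m AGL -> {ground_sampling_distance(h) * 1000:.2f} mm/pixel")
```

The linear dependence on altitude explains why the extra 2 m AGL flight along the channel roughly halved the GSD compared to the 5 m flights.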
The flight surveys resulted in 282 photos (at S1) and 298 photos (at S2) captured on 19 October 2019, while 315 photos (at S1) and 217 photos (at S2) were captured on 23 February 2020. Different flight plans were conducted for each area. At site S1 there was no vegetation, so we were able to fly at a nearly constant average height. At S2 we needed to fly most of the time under, or even within, the canopy of the remaining trees, so we acquired more images than one would expect based on the site area. For S1 we used 7 rectangular aluminium plates as Ground Control Points (GCP), while for S2 we used 4 GCP. Since both areas have similar characteristics and a similar image acquisition setup was used, the difference in the number of GCP was based on the area extent alone, with more GCP installed at the larger S1. The aluminium plates were considered the best solution for both techniques: they could be assumed not to deform (deformation would render the GCP unmeasurable), while also providing good target reflectance for the TLS method. The GCP were equally spread across the study areas and were surveyed using an RTK GNSS, with a range accuracy of <1 cm in the horizontal plane and 1.7 cm in the vertical axis.
The flight altitude was selected based on each site's characteristics (soil texture, gravel content, topography, and vegetation), focusing on areas of interest by increasing the image overlap during the flight. The acquired images were photogrammetrically processed using Agisoft Metashape Professional (v. 1.5.5). The complete study workflow is summarised in Figure 4.

3.3. TLS Technique

In this study, we focused on the creation of point clouds derived by the LiDAR method to describe the surface geometry. In our case, the small-scale topography had changed since the wildfire event; as a result, small drainages were created due to surface water overflow and the topsoil (surface mantle) properties. For this purpose, the Optech Ilris 3D LiDAR sensor was used (Figure 3, Table 1). The complete equipment consists of the Ilris sensor, the tripod, the batteries and a portable computer connected to the sensor via Ethernet. Tilting of the sensor was avoided, so that it remained completely parallel to the surface. The measurements were recorded and saved on a portable laptop via the Ilris Controller (and Parser) software. The Ilris laser measurement layout is displayed in Figure 5: φ is the half-angle of the laser beam (2φ = 40° laser beam scanning angle, or field of view, in Table 1), X is the mean distance and Y is the vertical extent of the scanning window. As a result, for a mean distance of 10 m, the Y distance would be 7.28 m.
Figure 5. Scanning window, according to Ilris 3D Optech terrestrial LiDAR system’s specifications and [61].
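The scanning-window geometry above reduces to Y = 2·X·tan(φ), which reproduces the 7.28 m worked example for X = 10 m and 2φ = 40°:

```python
import math

def scan_window_height(mean_distance_m, field_of_view_deg=40.0):
    """Vertical extent Y of the TLS scanning window at mean distance X,
    for a symmetric field of view 2*phi (40 deg for the Ilris 3D)."""
    phi = math.radians(field_of_view_deg / 2.0)
    return 2.0 * mean_distance_m * math.tan(phi)

print(scan_window_height(10.0))  # ~7.28 m, matching the example in the text
```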
The Controller software allowed constant real-time visualisation during the LiDAR measurement. Since there were many obstacles at both study areas, only one scanning position was selected at each site. It should be mentioned that only the last returns (pulse mode) were chosen in all scans, because probable vegetation growth between the different scanning campaigns would change the point cloud parameters and affect the comparison process. By choosing the last-pulse option, it was possible to avoid vegetation reflectance in some areas, because the sensor estimated the distance of the target from the last return of the emitted beam. Each TLS dataset was registered with GNSS-measured points (GCP) and projected in the EGSA ’87 coordinate system (EPSG 2100). The registration error of each dataset is given in Section 4. The defined parameters of each scan are presented in Table 2.
The mechanical error of the TLS Ilris 3D system, which is associated with the operation of the sensor, was calculated by scanning the same indoor area twice in a row, without any change in the scanning parameters. The scanner was first set in a specific position; the first measurement was completed, then the scanner was restarted and operated again at the same position. After obtaining the second scan, we compared the scanned datasets, calculating an error of 2 cm.

3.4. SfM Processing—Agisoft Metashape Pro

The metadata of the acquired images included coordinates derived from the UAV's built-in GPS. These data had to be removed before the alignment process, to avoid complications with the different projection systems during georeferencing. In the next steps, the high-accuracy option was used for the dense cloud derivation and the detailed surfaces (mesh and texture). As soon as the tiled model was developed, we added the GCP for the model's georeferencing, and we extracted the point cloud and the DSM.
Important steps during processing, to be analysed further:
  • TIN (Triangulated Irregular Network) extraction. Agisoft Metashape Pro constructs an intermediate TIN (irregular triangle network) from the high-density ("dense") point cloud. The TIN adapts to varying point density and defines each point's position according to the complex relief. As a result, a better display of the geomorphological characteristics (watersheds, etc.) is achieved. The extracted topographical points (points with altitude information) are then available to be exported as a mesh model.
  • Texture. The texture of the relief is very significant for the model display and the following analysis. The texture algorithm is related to the TIN construction. Agisoft Metashape uses the following methods (mapping mode) for the texture reconstruction: (a) generic, (b) adaptive orthophoto, (c) orthophoto, (d) spherical, (e) single camera, (f) keep uv. According to our test sites’ characteristics, we selected the generic approach.
  • Accuracy. The accuracy of the method is defined by the data source and the model construction method. During data collection, for example, the radial distortion of the UAV's built-in lens should be taken into consideration. Vegetation cover can also reduce the model's accuracy; as a result, it should be removed before initiating the modelling in soil change detection surveys.
The exact workflow comprises the following steps: (a) remove metadata from photos, (b) camera calibration, (c) selection of all required photos, (d) align photos and build dense cloud (high quality and mild filtering), (e) build mesh, texture and tiled model, (f) add and check markers (Ground Control Points, GCP; check that the markers and the GCP have the same name, and define the coordinate system), (g) check the marker errors, (h) build DTM and (i) export point cloud (dense/sparse) and DTM.
The low altitude flight and the significant image overlapping (>80%) resulted in high-resolution end products (see details in Table 3).

3.5. Point Cloud Processing—CloudCompare

CloudCompare is a software package used for a wide range of geoscience applications, displaying and processing 3D point clouds. It was first used as part of a CIFRE project in 2004, funded by Électricité de France (EDF) (CloudCompare user manual v2.6.1), and has been designed for analysing point clouds via comparison between different clouds [62]. Based on our processing and the reference research, we concluded that the best approach for soil change detection is to compare point clouds instead of DTM. A point-to-point comparison performs better in the XYZ dimensions (3D point cloud processing), avoiding interpolation to 2D data that may prove inadequate for a thorough three-dimensional comparison. The open-source CloudCompare software was therefore considered the best solution for comparing the UAV- and TLS-derived point clouds. The point cloud plot displayed by CloudCompare makes processing easier in a user-friendly environment. For our study sites, we used the 3D shape reconstruction of the point clouds. This process requires significant computation time, and some coverage issues on complex scenes needed to be dealt with. One notable characteristic of the software is the reduction of the data volume during processing (subsampling tool). This is necessary in some cases because of the extreme amount of data acquired by the TLS and UAV sensors. CloudCompare allows the analysis of these extremely large 3D datasets by using a specific octree structure.
Some tools that were used, are the following:
Subsampling. This method reduces the total data volume by excluding points from the original point cloud. The points are selected by the user, taking into consideration the sampling points and the percentage of points that will be used. As the number of points increases, the duration of the process also increases; more points result in better point correlation. We selected mm-resolution spacing for the TLS measurements to examine the accuracy of the extracted change detection results, as the mean distance to the targets was restricted by obstacles (e.g., the river bank at S1 and trees at S1 and S2). In order to achieve a high-quality point cloud, segmentation of the original point cloud was used instead of subsampling. In this way, we removed: (a) the noise of the cloud, (b) points outside the study area, (c) vegetation and (d) points placed incorrectly in the point cloud due to TLS sensor or UAV-SfM issues.
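Spacing-based subsampling of the kind described above can be approximated by a voxel-grid thinning pass; this is only an illustrative sketch of the idea (one retained point per cell), not CloudCompare's exact implementation:

```python
def spatial_subsample(points, min_spacing):
    """Keep at most one point per cubic cell of side `min_spacing` (metres),
    approximating a 'space'-mode subsampling of a point cloud.

    `points` is an iterable of (x, y, z) tuples; returns a reduced list.
    """
    seen = {}
    for p in points:
        # Index of the voxel cell containing the point.
        key = tuple(int(c // min_spacing) for c in p)
        # The first point encountered in each cell is retained.
        seen.setdefault(key, p)
    return list(seen.values())

cloud = [(0.001, 0.0, 0.0), (0.002, 0.0, 0.0), (0.5, 0.0, 0.0)]
print(len(spatial_subsample(cloud, 0.01)))  # 2: the first two points share a cell
```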
Registration. This is considered one of the most important steps in point cloud analysis. During registration, the different point clouds are fine-aligned by point-pair picking (align tool) in CloudCompare. This procedure includes the registration of each point cloud (TLS- or SfM-derived) with GNSS-derived GCP points. In addition, common stable points were picked in both clouds so that the clouds were identically registered to one another; outcrops were used only at S1, because of the extended weathered soil cover at S2, where only common tree points were available. At this point, we could achieve cm accuracy based on the GNSS usage, which leads to a high-quality model-to-model comparison. The registration error of each point cloud dataset is analysed in Section 4.
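The cm-level registration quality quoted here is typically summarised as the RMS 3D residual between the GCP coordinates read from the registered cloud and their GNSS references. A minimal sketch, with hypothetical residuals of a few centimetres:

```python
import math

def registration_rms(measured, reference):
    """RMS 3D residual between point-cloud GCP coordinates and their
    GNSS-surveyed references after registration (lists of (x, y, z))."""
    assert len(measured) == len(reference) and measured
    sq = sum((mx - rx) ** 2 + (my - ry) ** 2 + (mz - rz) ** 2
             for (mx, my, mz), (rx, ry, rz) in zip(measured, reference))
    return math.sqrt(sq / len(measured))

# Hypothetical per-GCP residuals (metres) against zeroed reference offsets:
m = [(0.01, 0.00, 0.02), (0.00, 0.01, 0.01), (0.02, 0.01, 0.00)]
r = [(0.0, 0.0, 0.0)] * 3
print(f"{registration_rms(m, r):.3f} m")  # 0.020 m, i.e. ~2 cm
```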
Point cloud coloring. The Ilris 3D sensor used does not include software for assigning RGB values to produce a true-colour point cloud. As a result, we assigned a different colour to each point according to its intensity value.
Vegetation removal. Although the first scans did not include significant vegetation, burnt trees and small shrubs had to be removed. For this purpose, we used the Cloth Simulation Filter (CSF) algorithm, which extracts ground points from discrete-return LiDAR point clouds [63] and UAV-derived clouds. Using CSF, we separated all ground points and extracted a new vegetation-free point cloud. Finally, we manually removed any vegetation-related points remaining in the new cloud.
During the CloudCompare processing, it was noted that the alignment technique of matching features between different point clouds enhances the accuracy and the georeferencing in real-world coordinates. Features such as outcrops at S1, used for this purpose, resulted in an alignment accuracy of less than 1 cm; at S2, due to extended soil cover, only trees could serve as common stable features, and the same accuracy was achieved. As a result, the number of GCP used is considered adequate.
M3C2 (Multiscale Model to Model Cloud Comparison) distance calculation [64]. Using the M3C2 algorithm and computing vertical normals, we compared TLS to TLS and UAV to UAV (2019–2020) point clouds for each site. According to [64], the algorithm combines the local distance between two point clouds with normal surface detection that tracks 3D variations in surface orientation. It has the advantage of operating directly on point clouds and estimating a confidence interval depending on point cloud characteristics and registration error. After experimenting with different values and the “guess params” option provided by the software, we settled on a 0.20 m normal scale diameter and a 0.10 m projection scale diameter (at S1 and S2, for both the SfM and TLS techniques) for the 19 October 2019–23 February 2020 calculations, while 0.30 m and 0.20 m, respectively, were used for the 23 February 2020–11 October 2020 and 19 October 2019–11 October 2020 calculations. All points of the defined “point cloud #1” were used as core points for normal calculations.
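With the normals fixed to the vertical, as in this study, the core of the M3C2 distance at one core point reduces to comparing the mean elevation of each cloud inside a projection cylinder. The sketch below illustrates only that reduced case (no normal estimation, roughness statistics or confidence interval), with hypothetical coordinates:

```python
import math

def m3c2_vertical(core, cloud1, cloud2, projection_diameter=0.10):
    """Simplified M3C2-style distance at one core point, with the normal fixed
    to the vertical: difference of mean z between the points of each cloud
    falling inside a cylinder of the given projection diameter.
    Returns None when either cylinder is empty."""
    r = projection_diameter / 2.0
    def mean_z(cloud):
        zs = [z for (x, y, z) in cloud
              if math.hypot(x - core[0], y - core[1]) <= r]
        return sum(zs) / len(zs) if zs else None
    z1, z2 = mean_z(cloud1), mean_z(cloud2)
    return None if z1 is None or z2 is None else z2 - z1

# Hypothetical before/after surface patches around one core point:
before = [(0.00, 0.00, 10.000), (0.02, 0.01, 10.004)]
after  = [(0.01, 0.00,  9.985), (0.00, 0.02,  9.987)]
d = m3c2_vertical((0.0, 0.0, 10.0), before, after)
print(d)  # ~ -0.016 m, i.e. lowering of the surface (erosion)
```

The full algorithm additionally estimates a local normal at the normal-scale diameter and a spatially variable confidence interval [64].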

4. Results

Both TLS and UAV techniques were compared for assessing and validating soil erosion through 3D modelling. The derived high-quality point clouds appear to accurately simulate the micro-topography and texture. In this study we used a point-to-point comparison analysis. Erosion was assessed using three datasets of two catchments, acquired by two different methods (photogrammetry using images acquired by a UAV, and TLS). Results show a more precise local erosion assessment from the UAV-derived point clouds compared to the TLS method. This can be predominantly explained by the data acquisition setup (i.e., vertical-angle capture of the photos) and the denser point cloud generated by the UAV data analysis, which minimised the shadow effect. The total slope erosion rate, on the other hand, is better represented by the TLS technique, due to small vegetation cover complicating the UAV-SfM analysis.
Annual local erosion at S1 was traced and quantified by the UAV photogrammetry approach with high accuracy along the channel. Soil erosion reached 0–48 cm based on the UAV-SfM, while a range of 0–40 cm was extracted by the TLS. Figure 6 and Figure 7 and Table 4 and Table 5 show significant local erosion for S1, mostly localised along the channel. In Figure 6a, the red-circled area displays the maximum local erosion in each comparison we made. In addition to the total annual erosion at S1 (Figure 6a,b, Table 4), erosion was also measured four months after the first fieldwork (19 October 2019–23 February 2020) at both sites (Figure 7 and Figure 8). During this period, only 3–4 cm of maximum local erosion was measured by the UAV-based photogrammetry and TLS techniques in the red-circled area at S1 (Figure 7), while both methods estimate a mean erosion of about 1.5 cm at S2 (Figure 8, Table 4).
These observations are also similar for the volume change and total slope erosion estimations. At S1 we considered areas where erosion could be estimated and vegetation would not affect our measurements, selecting specific areas (m2) where the TLS and photogrammetry estimations would be comparable. Both techniques yielded an annual channel volume change of about −3.00 m3 (see also Table 4b: volume change of −3.00 m3 for the TLS and −2.90 m3 for the UAV-based SfM technique, measured over approximately 60 m2 along the channel). The measurements of the 4-month and 8-month surveys are also comparable (Table 4). We separated the channel volume erosion (Table 4b) from the slope erosion measurements (Table 4c). Table 4c demonstrates that both techniques yielded similar mean slope erosion. The 1 cm difference in total slope erosion between the SfM and TLS data is due to vegetation obscurance, resulting in a total slope erosion of 0.005–0.01 m for 19 October 2019–23 February 2020 and 0.01 m in terms of total annual erosion rate. In the TLS technique, a slight variance was observed: an erosion pattern was recognised over about three-quarters of the total slope area in the TLS data, also revealing slope wash occurrence (Figure 6 and Figure 7). As a result, a total erosion rate of 0.01–0.015 m was assessed in the 4-month investigation (comparable to the 0.015 m total erosion rate at S2), while a total slope erosion rate of 0.02 m was estimated for the 8-month and annual periods (Table 4).
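As a simple consistency check, the reported channel volume change and channel area imply a mean depth change that sits well below the 48 cm local maximum, as expected for a spatially concentrated incision:

```python
# Consistency check between reported channel volume change and area (Table 4b):
volume_change_m3 = -3.00   # TLS estimate along the channel
channel_area_m2 = 60.0     # approximate area measured along the channel
mean_depth_change_m = volume_change_m3 / channel_area_m2
print(f"{mean_depth_change_m * 100:.1f} cm mean depth change")  # -5.0 cm
```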
It is interesting to note that during the eight-month investigation (23 February 2020–11 October 2020), the major increase in local soil erosion (UAV approach, Figure 9a; TLS approach, Figure 9b) can be attributed to the rainfall of 9 August 2020, when 299.6 mm of precipitation was measured at the National Observatory of Athens weather station of Steni [65], near the Psachna region, corresponding to 14.3% of the area’s total annual precipitation.
Despite the better performance of the UAV-based photogrammetry in the annual local assessment, the intermediate temporal assessment displayed different results. In particular, the soil erosion pattern in the 4-month investigation seems to be better represented at S1 by the TLS technique (greater sensor distance), where, for example, slope wash is more visible in the TLS analysis, in contrast with S2, where the UAV-derived model seems more suitable. Both techniques yield similar results at S2 regarding soil deposition (red-circled areas in Figure 8b) and erosion (blue-circled areas in Figure 8b). In our study, the LiDAR sensor seems to operate more precisely at long distances (>40 m at S1). The UAV-SfM method is considered more suitable for erosion delineation at S2, because the line-of-sight angle of the TLS causes a significant shadow effect. This issue is important in areas where different TLS scan positions are constrained by obstacles, such as tree stems. The final outcome is also related to the slope inclination and the TLS line-of-sight angle, which ideally should be perpendicular to the slope. Flat regions with no vegetation coverage (scanned from 5–10 m AGL) are more accurately reconstructed by the UAV-derived point clouds.
Low accuracy was observed, as expected, towards the boundaries of all UAV-derived point clouds. This is attributed to the lower image overlap, vegetation, or the lack of GCP near the boundaries of the area. We calculated a mean error of 2 cm for each scan (at S1) in all XYZ directions from the GNSS XYZ point differences (Table 5a), while the registration error of each point cloud during its registration with the GCP (CloudCompare processing) ranged from 0.02 m (UAV-derived point clouds) up to 0.03 m (TLS-derived point clouds, Table 5b). This GCP registration error was taken into account when defining the M3C2 distance parameters.
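The way a registration error of this size propagates into a detectable-change threshold can be sketched with a simplified level-of-detection estimate: a 95% roughness-based confidence interval plus the co-registration error (a conservative variant of the M3C2 confidence interval of [64]). The roughness values and point counts below are hypothetical:

```python
import math

def level_of_detection_95(sigma1, n1, sigma2, n2, reg_error):
    """Simplified 95% level of detection for a cloud-to-cloud comparison:
    roughness-based confidence interval plus registration error (metres).

    sigma1/sigma2: local surface roughness of each cloud; n1/n2: points in
    the projection cylinder; reg_error: co-registration error."""
    return 1.96 * math.sqrt(sigma1 ** 2 / n1 + sigma2 ** 2 / n2) + reg_error

# Hypothetical 1 cm roughness, 50 points per cylinder, with the 0.02 m
# GCP registration error reported above:
print(f"{level_of_detection_95(0.01, 50, 0.01, 50, 0.02):.3f} m")
```

Under these assumptions the registration error dominates, which is why distances of a few centimetres sit close to the detection limit while the decimetre-scale channel incision is unambiguous.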
We used 7 GCPs (G1–G7) at S1 and 4 GCPs (P1–P4, used only for the registration analysis) at S2 (Figure 10). The UAV- and TLS-derived point clouds for each scan at S1 were georeferenced using five of the seven GCPs. Two GCPs (G2 and G5) were reserved for XYZ accuracy assessment (Table 5a) by comparing their coordinates extracted from the point cloud with the actual coordinates measured in the field using the RTK GNSS. In October 2019, G7 was used instead of G5 in the UAV-derived point cloud because target G5 fell over during the UAS flight, while G5 was used in the TLS technique. In all other analyses, the G2 and G5 GCPs were used for georeferencing (Figure 10a). According to the XYZ GCP error analysis, errors ranged from 0 to 0.02 m at S1. As a result, an error of 2 cm is considered our measurement error.
At S2, due to the limited area, 4 GCPs were used to align the point clouds (Figure 10b). GNSS was also used at this site but, owing to the small number of GCPs and the restricted study area, it was not used for the error assessment. At S2, the registration error of the alignment between the point clouds and the GNSS, combined with the registration error based on features distributed across the study area (trees), was taken as the total erosion measurement error. This is estimated at 0.02 m for the SfM method and 0.03 m for the TLS method (Table 5a), according to the registration/alignment technique described above. The distributed-feature registration error was assessed at less than 1 cm for both TLS and SfM (at S1 and S2). The 1 cm difference in GNSS registration could be attributed to the shadow effect and the sparser TLS-derived point cloud.
The density of the point clouds (see also Table 5d) is higher in the UAV-derived data because of the large number of points produced by the SfM algorithm, while the LiDAR technique yields a significantly lower number of points in both areas (Table 5c). The differences within the same study area for the same data collection technique are attributed to the parameters set during data collection (e.g., TLS) and to the manual operation of the UAV, imposed by vegetation obscurance, which did not allow for an automated flight plan. Nevertheless, the TLS-derived point cloud is considered sufficient at both sites, with a better surface reconstruction at S1 due to its ability to cope with low grass. The two methods show many similarities in terms of erosion assessment. UAV-SfM seems to perform better in estimating local (i.e., channel) erosion rates, while the TLS method works better for total erosion estimation, because the sward covering the whole area is impossible to remove correctly from the UAV-SfM-derived point cloud. In addition, the CSF algorithm for vegetation removal operates better on UAV-derived point clouds when targeting local maximum erosion rates; because it removes a significant portion of points, the sparser TLS-derived point clouds are left with a much lower point density than the UAV-derived ones.
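Density comparisons of this kind can be reproduced with a simple grid count. The sketch below is generic (the cell size and the synthetic cloud are assumptions, not the study's data):

```python
import numpy as np

def mean_density(points_xy, cell=0.25):
    """Mean surface point density (points per square metre), estimated by
    counting points in regular XY cells and averaging over occupied cells."""
    ij = np.floor(np.asarray(points_xy) / cell).astype(int)
    _, counts = np.unique(ij, axis=0, return_counts=True)
    return counts.mean() / cell ** 2

# Synthetic cloud: 10,000 points scattered over a 10 m x 10 m patch,
# i.e. a nominal density of 100 points per square metre.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(10_000, 2))
print(mean_density(pts))  # close to 100 points/m2 by construction
```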

5. Discussion

Point clouds are considered the baseline for producing 3D models that reconstruct surfaces under a defined coordinate system (local or global). To our knowledge, no detailed soil erosion investigation comparing UAV- and TLS-derived point clouds in fire-affected areas has yet been conducted. In this study, erosion was assessed using three datasets from two catchments. Results show a more precise assessment of local maximum erosion (along the channel) by UAV-based photogrammetry compared to the TLS method. TLS performed better than UAV-based photogrammetry in computing the total erosion rate, due to the presence of grass 1–2 cm in height.
This can be predominantly explained by the vertical capture angle of the photos and the denser point cloud generated by UAV-based photogrammetry, which minimised the shadow effect. Both techniques have advantages and limitations, which should always be considered when selecting the best method for a survey; nonetheless, the derived high-quality point clouds appear to simulate the micro-topography and texture accurately. Each point in the cloud carries a defined XYZ value, accompanied by an intensity value (TLS method) or an RGB value (UAV method). This type of multi-vector analysis (sphere-by-sphere and point-by-point analysis) is currently a cutting-edge technology with significant potential in several geoscientific applications [13,49,66,67]. As Monserrat and Crosetto [68] indicate, a DEM cannot fully represent the complexity of a surface due to the 2.5D data used (there is only one Z-value for each set of X, Y coordinates). Point cloud comparison is considered a more accurate procedure during data processing. For this reason, we decided not to use interpolation in this research, nor to extract TIN data format files, in order to minimise the corresponding error. Many researchers have started utilising differenced maps or the M3C2 algorithm and point cloud comparison in landslide detection, flood events, or even forest analysis [14,69,70,71]. There are applications where high-quality texture reconstruction is needed, such as fault slip rate assessment or signal absorption-based surveys [15,16,17,18,19]. For these applications, the passive sensor of the UAV method is currently considered inadequate, and the TLS method is widely used instead. The surface vector analysis proposed by Day et al. [61] will be considered in our future surveys for comparison with the point-to-point approach.
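The point-to-point comparison preferred here over 2.5D DEM differencing can be illustrated with a brute-force nearest-neighbour cloud-to-cloud distance (a deliberately minimal stand-in for M3C2 [64], which additionally uses surface normals and averaging cylinders). The synthetic surface is an assumption mimicking the 1.5 cm mean erosion measured at S2:

```python
import numpy as np

def c2c_distances(reference, compared):
    """Brute-force nearest-neighbour cloud-to-cloud (C2C) distances.
    O(N^2); real workflows use KD-trees, and M3C2 [64] adds normals."""
    diff = compared[:, None, :] - reference[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

# A flat 1 m x 1 m patch lowered uniformly by 1.5 cm between two epochs.
rng = np.random.default_rng(1)
before = np.column_stack([rng.uniform(0.0, 1.0, (400, 2)), np.zeros(400)])
after = before + np.array([0.0, 0.0, -0.015])
print(c2c_distances(before, after).mean())  # 0.015 m for this synthetic case
```

Because each lowered point's nearest neighbour is its own original position, the mean C2C distance recovers the imposed 1.5 cm change without any gridding or interpolation step.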
The relief inclination and the research target determine the appropriate methodology (photogrammetry or TLS). The UAV platform's advantages include the small size and weight of the equipment, providing an easy-to-use portable system that can reach remote areas. Additionally, the scientific community benefits from the low cost of the equipment compared to TLS, enabling extensive surveys of high accuracy. On the other hand, weather conditions and battery capacity can significantly limit or prevent UAV operation during a survey. TLS sensors, in turn, also offer high accuracy and a shorter point cloud processing time. A potential disadvantage of this technique is line-of-sight occlusion, whereby foreground objects obscure background objects during data collection, resulting in data gaps, as also pointed out by other researchers [37]. This can be solved by repeating the survey from different viewpoints, but that is not always feasible (as in our case) due to terrain visibility constraints. The SfM method uses Multi-View Stereopsis (MVS) techniques to define the camera position and capture angle and derive the 3D point cloud. MVS is affected by untextured features, occlusions, shadows, and differences in lighting conditions and capture angle, restrictions we also had to manage in our regions of interest. Nevertheless, owing to its highly efficient procedure, low cost, and high-quality output data, this method fulfilled our requirements and accuracy goals.
Vegetation removal is challenging during data processing for both methods. Steeper slopes with little vegetation (e.g., our test Site 1) can be accurately reconstructed using the TLS method, because the last-pulse mode of the laser data acquisition system helps remove the vegetation cover faster, delineating the slope precisely. The slope should be as perpendicular as possible to the TLS laser beam's line of sight so that the beam can penetrate and reach the soil surface. Furthermore, automated vegetation removal (e.g., the CSF algorithm [72]) is sometimes of limited use in the TLS approach (e.g., at S1 on 11 October 2020), due to the algorithm's procedure and the TLS survey viewpoints. On the other hand, vegetation cover usually obscures proper surface reconstruction in UAV point clouds. Because of the vertical perspective of data collection, the UAV-SfM approach records the total surface cover, resulting in a complete absence of data (in most cases terrain data) under obscuring vegetation. Point-by-point vegetation removal during post-processing is time-consuming, so the evolution of empirical algorithms would reduce the data processing time. The UAV-based SfM approach provides more accurate results for local erosion analysis in mostly horizontal areas or areas with less vegetation, as the local erosion rates indicate, targeting channels and river banks in regions that are not easily accessible (Site 1, for example). As a result, we managed to delineate the channel erosion at S1 and the erosion-prone areas at S2 with the UAV method, one year after the wildfire, with remarkable accuracy, as Hamshaw et al. [73] did in a river system. The additional analysis of total erosion rates indicates that TLS can evaluate slope erosion better than UAV-SfM, due to the low-height vegetation cover that is difficult to remove completely from the SfM-derived data.
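A minimal grid-minimum ground filter illustrates the idea behind automated vegetation removal; this is a simplified stand-in for the cloth simulation filter (CSF, Zhang et al. [63]) actually applied in this study, with the cell size and tolerance as assumed parameters:

```python
import numpy as np

def grid_minimum_ground(points, cell=0.10, tol=0.02):
    """Keep only points within `tol` metres of the lowest point in each
    XY cell, discarding low grass above the local ground surface."""
    points = np.asarray(points, dtype=float)
    cells = [(int(x // cell), int(y // cell)) for x, y, _ in points]
    ground = {}  # lowest elevation observed per cell
    for key, z in zip(cells, points[:, 2]):
        ground[key] = min(ground.get(key, z), z)
    keep = [i for i, (key, z) in enumerate(zip(cells, points[:, 2]))
            if z <= ground[key] + tol]
    return points[keep]

# Two ground points at z = 0 and one grass return 5 cm above the first.
cloud = [[0.05, 0.05, 0.00], [0.15, 0.05, 0.00], [0.05, 0.05, 0.05]]
print(grid_minimum_ground(cloud))  # the grass return is removed
```

Unlike CSF, which drapes a simulated cloth over the inverted cloud, this grid minimum fails on steep slopes where elevation varies strongly within a cell, which echoes why viewpoint and slope geometry matter for the TLS results above.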
Both methodologies provide similar results regarding the spatial and temporal distribution of soil erosion, lending confidence to our findings.
Finally, we should mention the optimisation of error estimation. Throughout the process, it is essential to take measures to minimise errors. Errors are associated with the mechanical error of the equipment used, GNSS errors, and alignment errors [61]. We estimated a maximum error of 2 cm at S1, associated with the mechanical error of the TLS sensor (also described by [61]) and the XYZ GCP error estimated against the RTK GNSS reference. The error of 2–3 cm at S2 was estimated via the registration procedure between the UAV- and TLS-based point clouds and the GCPs. The additional distributed-feature registration error was estimated at less than 1 cm at both sites, so our error assessment methodology is considered adequate. The low errors of our results were achieved through GNSS RTK measurements of the GCPs, low flight height, optimal overlap, and precise vegetation removal. An error of about 0.1–0.15 m in UAV-derived models is reported in other studies [66]. According to [74], an accuracy of 3 cm was achieved with a UAV flight at 70 m above ground level (AGL) (SfM technique) while monitoring soil erosion in Morocco. As Harwin et al. [75] note, a geometric accuracy of 2.5–4 cm (at a flight height of 40–50 m) can be achieved for point clouds based on Real-Time Kinematic (RTK) DGPS and total station surveys of GCPs [21,75,76]. Additionally, 4 cm accuracy was achieved by [53] in Antarctic moss bed measurements using DGPS.
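One common convention for combining such independent error sources is summation in quadrature; this is sketched below as an assumption, not necessarily the exact budgeting procedure used in the study:

```python
import math

def combined_error(*sources):
    """Combine independent, uncorrelated error sources in quadrature."""
    return math.sqrt(sum(s ** 2 for s in sources))

# The 2 cm GCP/GNSS error combined with a 1 cm feature-registration error:
print(combined_error(0.02, 0.01))  # about 0.022 m, within the 2-3 cm budget
```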
Further improvements in accuracy can be achieved by combining camera datasets, i.e., nadir and oblique images. Oblique images offer improved point matching and accuracy in complex terrain and on steeper slopes [77]. In this paper, we used nadir images for the following reasons. First, since we planned to conduct repeated flights and compare the results, we needed flight and camera parameters that were as simple and consistent as possible between the sequential flights. Second, we had a rather simple relief pattern at both sites. Third, at S2, where trees were present, oblique images would create larger no-data areas behind the tree stems. In addition, Agisoft provides two tools to improve model accuracy. The first is the Update tool, which uses the markers' coordinates and relative locations to update the model via an affine transformation. The second is the Camera Optimization tool, which allows a more complex (non-linear) fitting of the model to the markers. Consequently, we preferred to emphasise flight plan consistency and increased the accuracy using the methods described above.
Another issue we faced was the difference in Ground Sampling Distance (GSD) between the highest and lowest parts of S1. The UAV acquired images at 5 m AGL over the highest part of the slope, but maintaining this altitude would produce a coarser GSD over the lower parts of the area. Therefore, in each flight campaign we acquired additional images across the whole area by flying at lower altitudes near the slope base.
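This trade-off follows from the standard nadir-imaging relation GSD = (sensor width × flight height) / (focal length × image width); the camera parameters below are purely illustrative, not those of the study's UAV:

```python
def gsd_cm(height_m, focal_mm, sensor_w_mm, image_w_px):
    """Ground sampling distance in cm/pixel for a nadir image."""
    return (sensor_w_mm * height_m * 100.0) / (focal_mm * image_w_px)

# Illustrative camera: 6.17 mm sensor width, 4.5 mm lens, 4000 px wide images.
print(gsd_cm(5, 4.5, 6.17, 4000))   # about 0.17 cm/px at 5 m AGL
print(gsd_cm(15, 4.5, 6.17, 4000))  # three times coarser at 15 m AGL
```

Because GSD scales linearly with height above ground, flying at a fixed altitude over sloping terrain triples the pixel footprint wherever the ground drops three flight-heights below the takeoff level, which is why supplementary low-altitude passes over the slope base were needed.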

6. Conclusions

This study introduces to the scientific community a multi-source (TLS- and UAV-derived) point cloud comparison analysis from a multitemporal perspective, under the rapidly changing conditions of erosion and vegetation growth that follow a wildfire in Mediterranean environments. The advantages and disadvantages of two modern technologies for point cloud development and exploitation are outlined and tested in soil erosion analysis following a wildfire event. The analysis is based on TLS and UAV-based photogrammetric data collected after the devastating wildfire of August 2019 in central Evia Island, Greece. Due to the burn severity and the intensity of the subsequent rainfalls, the rapidly changing environment provides a large amount of short-term data. We confirm the hypothesis that UAV-based photogrammetry is the more suitable, cost-effective technique when focusing on local erosion rates (representing sites of maximum channel erosion), whereas both methods provided similar results for the total volume eroded along the entire channel. When the analysis focuses on slope wash, the TLS approach appears to be more accurate. Change detection quantification demands detailed measurements and processing, and the interpretation of the results is also challenging. For this purpose, direct point-to-point comparison and the M3C2 distance algorithm are considered an appropriate methodology for estimating soil erosion in a fraction of the time previously needed. Furthermore, minimal errors, of the order of a few centimetres, can be achieved when (a) a sufficient number of GCPs is used, (b) the mechanical error is minimised, and (c) a fine registration between point cloud datasets is achieved. Vegetation proves to be an issue for both techniques, causing a shadow effect in TLS-derived point clouds and false surface reconstruction in UAV-derived ones.
Research of this kind, together with these innovative tools, broadens our knowledge of change detection and of erosion assessment and quantification. Our workflow can contribute to proper georesource management following a wildfire event in a Mediterranean catchment by selecting the proper tool (UAV-SfM or TLS) for prompt soil erosion estimation.

Author Contributions

Conceptualization, Simoni Alexiou, Ioannis Papanikolaou, Georgios Deligiannakis and Klaus Reicherter; methodology, Simoni Alexiou, Georgios Deligiannakis, Aggelos Pallikarakis and Ioannis Papanikolaou; software, Simoni Alexiou, Georgios Deligiannakis and Emmanouil Psomiadis; validation, Simoni Alexiou, Georgios Deligiannakis, Emmanouil Psomiadis and Ioannis Papanikolaou; field work, Simoni Alexiou, Georgios Deligiannakis and Aggelos Pallikarakis; writing—original draft preparation, Simoni Alexiou, Georgios Deligiannakis, Emmanouil Psomiadis and Ioannis Papanikolaou; supervision, Ioannis Papanikolaou. All authors have read and agreed to the published version of the manuscript.


Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.


Acknowledgments

This research is co-financed by Greece and the European Union (European Social Fund—ESF) through the Operational Programme «Human Resources Development, Education and Lifelong Learning» in the context of the project “Strengthening Human Resources Research Potential via Doctorate Research” (MIS-5000432), implemented by the State Scholarships Foundation (ΙΚΥ).

Conflicts of Interest

The authors declare no conflict of interest.


  1. Dregne, H.E. Land degradation in the drylands. Arid Land Res. Manag. 2002, 16, 99–132. [Google Scholar] [CrossRef]
  2. Larsen, I.J.; MacDonald, L.H. Predicting postfire sediment yields at the hillslope scale: Testing RUSLE and Disturbed WEPP. Water Resour. Res. 2007, 43. [Google Scholar] [CrossRef]
  3. Shakesby, R.A. Post-wildfire soil erosion in the Mediterranean: Review and future research directions. Earth Sci. Rev. 2011, 105, 71–100. [Google Scholar] [CrossRef]
  4. Rengers, F.K.; Tucker, G.E.; Moody, J.A.; Ebel, B.A. Illuminating wildfire erosion and deposition patterns with repeat terrestrial LiDAR. J. Geophys. Res. Earth Surf. 2016, 121, 588–608. [Google Scholar] [CrossRef]
  5. Efthimiou, N.; Psomiadis, E. The Significance of Land Cover Delineation on Soil Erosion Assessment. Environ. Manag. 2018, 62, 383–402. [Google Scholar] [CrossRef]
  6. Morgan, R.P.C. Soil Erosion and Conservation, 3rd ed.; Blackwell Publishing: Oxford, UK, 2005; ISBN 1-4051-1781-8. [Google Scholar]
  7. Karamesouti, M.; Petropoulos, G.P.; Papanikolaou, I.D.; Kairis, O.; Kosmas, K. Erosion rate predictions from PESERA and RUSLE at a Mediterranean site before and after a wildfire: Comparison & implications. Geoderma 2016, 261, 44–58. [Google Scholar] [CrossRef]
  8. Gulyaev, S.A.; Buckeridge, J.S. Terrestrial methods for monitoring cliff erosion in an urban environment. J. Coast. Res. 2004, 20, 871–878. [Google Scholar] [CrossRef]
  9. Wawrzyniec, T.F.; McFadden, L.D.; Ellwein, A.; Meyer, G.; Scuderi, L.; McAuliffe, J.; Fawcett, P. Chronotopographic analysis directly from point-cloud data: A method for detecting small, seasonal hillslope change, Black Mesa Escarpment, NE Arizona. Geosphere 2007, 3, 550–567. [Google Scholar] [CrossRef]
  10. Hodge, R.; Brasington, J.; Richards, K. Analysing laser-scanned digital terrain models of gravel bed surfaces: Linking morphology to sediment transport processes and hydraulics. Sedimentology 2009, 56, 2024–2043. [Google Scholar] [CrossRef]
  11. Resop, J.P.; Hession, W.C. Terrestrial Laser Scanning for Monitoring Streambank Retreat: Comparison with Traditional Surveying Techniques. J. Hydraul. Eng. 2010, 136, 794–798. [Google Scholar] [CrossRef]
  12. O’Neal, M.A.; Pizzuto, J.E. The rates and spatial patterns of annual riverbank erosion revealed through terrestrial laser-scanner surveys of the South River, Virginia. Earth Surf. Process. Landforms 2011, 36, 695–701. [Google Scholar] [CrossRef]
  13. Neugirg, F.; Stark, M.; Kaiser, A.; Vlacilova, M.; Della Seta, M.; Vergari, F.; Schmidt, J.; Becht, M.; Haas, F. Erosion processes in calanchi in the Upper Orcia Valley, Southern Tuscany, Italy based on multitemporal high-resolution terrestrial LiDAR and UAV surveys. Geomorphology 2016, 269, 8–22. [Google Scholar] [CrossRef]
  14. Roşca, S.; Suomalainen, J.; Bartholomeus, H.; Herold, M. Comparing terrestrial laser scanning and unmanned aerial vehicle structure from motion to assess top of canopy structure in tropical forests. Interface Focus 2018, 8. [Google Scholar] [CrossRef]
  15. Wiatr, T.; Reicherter, K.; Papanikolaou, I.; Fernández-Steeger, T.; Mason, J. Slip vector analysis with high resolution t-LiDAR scanning. Tectonophysics 2013, 608, 947–957. [Google Scholar] [CrossRef]
  16. Wiatr, T.; Papanikolaou, I.; Fernández-Steeger, T.; Reicherter, K. Reprint of: Bedrock fault scarp history: Insight from t-LiDAR backscatter behaviour and analysis of structure changes. Geomorphology 2015, 237, 119–129. [Google Scholar] [CrossRef]
  17. Schneiderwind, S.; Mason, J.; Wiatr, T.; Papanikolaou, I.; Reicherter, K. 3-D visualisation of palaeoseismic trench stratigraphy and trench logging using terrestrial remote sensing and GPR – A multiparametric interpretation. Solid Earth 2016, 7, 323–340. [Google Scholar] [CrossRef]
  18. Mason, J.; Schneiderwind, S.; Wiatr, T.; Reicherter, K.; Pallikarakis, A.; Papanikolaou, I.; Mechernich, S.; Wiatr, T. Fault structure and deformation rates at the Lastros-Sfaka Graben, Crete. Tectonophysics 2016, 683, 216–232. [Google Scholar] [CrossRef]
  19. Carrea, D.; Abellan, A.; Humair, F.; Matasci, B.; Derron, M.; Jaboyedoff, M. Correction of terrestrial LiDAR intensity channel using Oren–Nayar reflectance model: An application to lithological differentiation. ISPRS J. Photogramm. Remote Sens. 2016, 113, 17–29. [Google Scholar] [CrossRef]
  20. Baltsavias, E.P. A comparison between photogrammetry and laser scanning. ISPRS J. Photogramm. Remote Sens. 1999, 54, 83–94. [Google Scholar] [CrossRef]
  21. Hodgson, M.E.; Bresnahan, P. Accuracy of airborne LiDAR-derived elevation: Empirical assessment and error budget. Photogramm. Eng. Remote Sens. 2004, 70, 331–339. [Google Scholar] [CrossRef]
  22. Deng, Y.; Wilson, J.P.; Bauer, B.O. DEM resolution dependencies of terrain attributes across a landscape. Int. J. Geogr. Inf. Sci. 2007, 21, 187–213. [Google Scholar] [CrossRef]
  23. Reuter, H.I.; Hengl, T.; Gessler, P.; Soille, P. Preparation of DEMs for Geomorphometric Analysis; Elsevier Ltd.: Amsterdam, The Netherlands, 2009; Volume 33. [Google Scholar]
  24. Tian, B.; Wang, L.; Koike, K. Spatial statistics of surface roughness change derived from multi-scale digital elevation models. In Procedia Environmental Sciences; Elsevier: Amsterdam, The Netherlands, 2011; Volume 7, pp. 252–257. [Google Scholar]
  25. Yang, P.; Ames, D.P.; Fonseca, A.; Anderson, D.; Shrestha, R.; Glenn, N.F.; Cao, Y. What is the effect of LiDAR-derived DEM resolution on large-scale watershed model results? Environ. Model. Softw. 2014, 58, 48–57. [Google Scholar] [CrossRef]
  26. Lindsay, J.B.; Francioni, A.; Cockburn, J.M.H. LiDAR DEM Smoothing and the Preservation of Drainage Features. Remote Sens. 2019, 11, 1926. [Google Scholar] [CrossRef]
  27. Lohani, B.; Mason, D.C. Application of airborne scanning laser altimetry to the study of tidal channel geomorphology. ISPRS J. Photogramm. Remote Sens. 2001, 56, 100–120. [Google Scholar] [CrossRef]
  28. Rosser, N.J.; Petley, D.N.; Lim, M.; Dunning, S.A.; Allison, R.J. Terrestrial laser scanning for monitoring the process of hard rock coastal cliff erosion. Q. J. Eng. Geol. Hydrogeol. 2005, 38, 363–375. [Google Scholar] [CrossRef]
  29. Heritage, G.L.; Hetherington, D. Towards a protocol for laser scanning in fluvial geomorphology. Earth Surf. Process. Landforms 2007, 32, 66–74. [Google Scholar] [CrossRef]
  30. Wehr, A.; Lohr, U. Airborne laser scanning—An introduction and overview. ISPRS J. Photogramm. Remote Sens. 1999, 54, 68–82. [Google Scholar] [CrossRef]
  31. Ventura, G.; Vilardo, G.; Terranova, C.; Sessa, E.B. Tracking and evolution of complex active landslides by multi-temporal airborne LiDAR data: The Montaguto landslide (Southern Italy). Remote Sens. Environ. 2011, 115, 3237–3248. [Google Scholar] [CrossRef]
  32. Daehne, A.; Corsini, A. Kinematics of active earthflows revealed by digital image correlation and DEM subtraction techniques applied to multi-temporal LiDAR data. Earth Surf. Process. Landforms 2013, 38, 640–654. [Google Scholar] [CrossRef]
  33. Mitasova, H.; Overton, M.; Harmon, R.S. Geospatial analysis of a coastal sand dune field evolution: Jockey’s Ridge, North Carolina. Geomorphology 2005, 72, 204–221. [Google Scholar] [CrossRef]
  34. Young, A.P.; Olsen, M.J.; Driscoll, N.; Flick, R.E.; Gutierrez, R.; Guza, R.T.; Johnstone, E.; Kuester, F. Comparison of airborne and terrestrial LiDAR estimates of seacliff erosion in southern California. Photogramm. Eng. Remote Sens. 2010, 76, 421–427. [Google Scholar]
  35. Bremer, M.; Sass, O. Combining airborne and terrestrial laser scanning for quantifying erosion and deposition by a debris flow event. Geomorphology 2012, 138, 49–60. [Google Scholar] [CrossRef]
  36. Baughman, C.A.; Jones, B.M.; Bodony, K.L.; Mann, D.H.; Larsen, C.F.; Himelstoss, E.; Smith, J. Remotely sensing the morphometrics and dynamics of a cold region dune field using historical aerial photography and airborne LiDAR data. Remote Sens. 2018, 10, 792. [Google Scholar] [CrossRef]
  37. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. “Structure-from-Motion” photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef]
  38. Moudrý, V.; Gdulová, K.; Fogl, M.; Klápště, P.; Urban, R.; Komárek, J.; Moudrá, L.; Štroner, M.; Barták, V.; Solský, M. Comparison of leaf-off and leaf-on combined UAV imagery and airborne LiDAR for assessment of a post-mining site terrain and vegetation structure: Prospects for monitoring hazards and restoration success. Appl. Geogr. 2019, 104, 32–41. [Google Scholar] [CrossRef]
  39. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  40. Niethammer, U.; James, M.R.; Rothmund, S.; Travelletti, J.; Joswig, M. UAV-based remote sensing of the Super-Sauze landslide: Evaluation and results. Eng. Geol. 2012, 128, 2–11. [Google Scholar] [CrossRef]
  41. Hervouet, A.; Dunford, R.; Piégay, H.; Belletti, B.; Trémélo, M.-L. Analysis of Post-flood Recruitment Patterns in Braided-Channel Rivers at Multiple Scales Based on an Image Series Collected by Unmanned Aerial Vehicles, Ultra-light Aerial Vehicles, and Satellites. GIScience Remote Sens. 2011, 48, 50–73. [Google Scholar] [CrossRef]
  42. Diakakis, M.; Andreadakis, E.; Nikolopoulos, E.I.; Spyrou, N.I.; Gogou, M.E.; Deligiannakis, G.; Katsetsiadou, N.K.; Antoniadis, Z.; Melaki, M.; Georgakopoulos, A.; et al. An integrated approach of ground and aerial observations in flash flood disaster investigations. The case of the 2017 Mandra flash flood in Greece. Int. J. Disaster Risk Reduct. 2019, 33, 290–309. [Google Scholar] [CrossRef]
  43. Andreadakis, E.; Diakakis, M.; Vassilakis, E.; Deligiannakis, G.; Antoniadis, A.; Andriopoulos, P.; Spyrou, N.I.; Nikolopoulos, E.I. Unmanned Aerial Systems-Aided Post-Flood Peak Discharge Estimation in Ephemeral Streams. Remote Sens. 2020, 12, 4183. [Google Scholar] [CrossRef]
  44. Tamminga, A.D.; Eaton, B.C.; Hugenholtz, C.H. UAS-based remote sensing of fluvial change following an extreme flood event. Earth Surf. Process. Landforms 2015, 40, 1464–1476. [Google Scholar] [CrossRef]
  45. Miřijovský, J.; Langhammer, J. Multitemporal Monitoring of the Morphodynamics of a Mid-Mountain Stream Using UAS Photogrammetry. Remote Sens. 2015, 7, 8586–8609. [Google Scholar] [CrossRef]
  46. Marteau, B.; Vericat, D.; Gibbins, C.; Batalla, R.J.; Green, D.R. Application of Structure-from-Motion Photogrammetry to River Restoration. Earth Surface Processes and Landforms; John Wiley and Sons Ltd.: Chichester, UK, 2017; Volume 42, pp. 503–515. [Google Scholar]
  47. Cook, K.L. An evaluation of the effectiveness of low-cost UAVs and structure from motion for geomorphic change detection. Geomorphology 2017, 278, 195–208. [Google Scholar] [CrossRef]
  48. Langhammer, J.; Lendzioch, T.; Miřijovský, J.; Hartvich, F. UAV-Based Optical Granulometry as Tool for Detecting Changes in Structure of Flood Depositions. Remote Sens. 2017, 9, 240. [Google Scholar] [CrossRef]
  49. Woodget, A.S.; Austrums, R. Subaerial gravel size measurement using topographic data derived from a UAV-SfM approach. Earth Surf. Process. Landforms 2017, 42, 1434–1443. [Google Scholar] [CrossRef]
  50. Mlambo, R.; Woodhouse, I.H.; Gerard, F.; Anderson, K. Structure from motion (SfM) photogrammetry with drone data: A low cost method for monitoring greenhouse gas emissions from forests in developing countries. Forests 2017, 8, 68. [Google Scholar] [CrossRef]
  51. Sankey, T.; Donager, J.; McVay, J.; Sankey, J.B. UAV LiDAR and hyperspectral fusion for forest monitoring in the southwestern USA. Remote Sens. Environ. 2017, 195, 30–43. [Google Scholar] [CrossRef]
  52. Mateos, R.M.; Azañón, J.M.; Roldán, F.J.; Notti, D.; Pérez-Peña, V.; Galve, J.P.; Pérez-García, J.L.; Colomo, C.M.; Gómez-López, J.M.; Montserrat, O.; et al. The combined use of PSInSAR and UAV photogrammetry techniques for the analysis of the kinematics of a coastal landslide affecting an urban area (SE Spain). Landslides 2017, 14, 743–754. [Google Scholar] [CrossRef]
  53. Lucieer, A.; Turner, D.; King, D.H.; Robinson, S.A. Using an unmanned aerial vehicle (UAV) to capture micro-topography of antarctic moss beds. Int. J. Appl. Earth Obs. Geoinf. 2014, 27, 53–62. [Google Scholar] [CrossRef]
  54. Dandois, J.P.; Ellis, E.C. Remote sensing of vegetation structure using computer vision. Remote Sens. 2010, 2, 1157–1176. [Google Scholar] [CrossRef]
  55. Anastopoylos, I.; Kanaris, I. Psachna-Pilion Geological Map, 1:50.000 scale, Pilion Sheet, H.S.G.M.E. 1962. [Google Scholar]
  56. Katsikatsos, G.; Koukis, G.; Fytikas, M. Psachna-Pilion Geological Map, 1:50.000 scale, Pilion Sheet, H.S.G.M.E. 1968. [Google Scholar]
  57. Keeley, J.E. Fire intensity, fire severity and burn severity: A brief review and suggested usage. Int. J. Wildl. Fire 2009, 18, 116–126. [Google Scholar] [CrossRef]
  58. Chen, X.; Vogelmann, J.E.; Rollins, M.; Ohlen, D.; Key, C.H.; Yang, L.; Huang, C.; Shi, H. Detecting post-fire burn severity and vegetation recovery using multitemporal remote sensing spectral indices and field-collected composite burn index data in a ponderosa pine forest. Int. J. Remote Sens. 2011, 32, 7905–7927. [Google Scholar] [CrossRef]
  59. Ireland, G.; Petropoulos, G.P. Exploring the relationships between post-fire vegetation regeneration dynamics, topography and burn severity: A case study from the Montane Cordillera Ecozones of Western Canada. Appl. Geogr. 2015, 56, 232–248. [Google Scholar] [CrossRef]
  60. Mallinis, G.; Mitsopoulos, I.; Chrysafi, I. Evaluating and comparing sentinel 2A and landsat-8 operational land imager (OLI) spectral indices for estimating fire severity in a mediterranean pine ecosystem of Greece. GIScience Remote Sens. 2018, 55, 1–18. [Google Scholar] [CrossRef]
  61. Day, S.S.; Gran, K.B.; Belmont, P.; Wawrzyniec, T. Measuring bluff erosion part 1: Terrestrial laser scanning methods for change detection. Earth Surf. Process. Landforms 2013, 38, 1055–1067. [Google Scholar] [CrossRef]
  62. Girardeau-Montaut, D.; Marc, R.; Bey, A. Documentation CloudCompare Version 2.1.eng. Available online: (accessed on 10 October 2020).
  63. Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation. Remote Sens. 2016, 8, 501. [Google Scholar] [CrossRef]
  64. Lague, D.; Brodu, N.; Leroux, J. Accurate 3D comparison of complex topography with terrestrial laser scanner: Application to the Rangitikei canyon (N-Z). ISPRS J. Photogramm. Remote Sens. 2013, 82, 10–26. [Google Scholar] [CrossRef]
  65. Lagouvardos, K.; Kotroni, V.; Bezes, A.; Koletsis, I.; Kopania, T.; Lykoudis, S.; Mazarakis, N.; Papagiannaki, K.; Vougioukas, S. The automatic weather stations NOANN network of the National Observatory of Athens: Operation and database. Geosci. Data J. 2017, 4, 4–16. [Google Scholar] [CrossRef]
  66. Turner, D.; Lucieer, A.; Watson, C. An automated technique for generating georectified mosaics from ultra-high resolution Unmanned Aerial Vehicle (UAV) imagery, based on Structure from Motion (SFM) point clouds. Remote Sens. 2012, 4, 1392–1410. [Google Scholar] [CrossRef]
  67. Andaru, R.; Cahyono, B.K.; Riyadi, G.; Istarno; Djurdjani; Ramadhan, G.R.; Tuntas, S. The combination of terrestrial LiDAR and UAV photogrammetry for interactive architectural heritage visualization using unity 3D game engine. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. ISPRS Arch. 2019, 42, 39–44. [Google Scholar] [CrossRef]
  68. Monserrat, O.; Crosetto, M. Deformation measurement using terrestrial laser scanning data and least squares 3D surface matching. ISPRS J. Photogramm. Remote Sens. 2008, 63, 142–154. [Google Scholar] [CrossRef]
  69. Stumpf, A.; Malet, J.P.; Allemand, P.; Pierrot-Deseilligny, M.; Skupinski, G. Ground-based multi-view photogrammetry for the monitoring of landslide deformation and erosion. Geomorphology 2015, 231, 130–145. [Google Scholar] [CrossRef]
  70. Kowalski, A.; Wajs, J.; Kasza, D. Monitoring of anthropogenic landslide activity with combined UAV and LiDAR-derived DEMs—A case study of the Czerwony Wąwóz landslide (SW Poland, Western Sudetes). Acta Geodyn. Geomater. 2018, 15, 117–129. [Google Scholar] [CrossRef]
  71. Cao, L.; Liu, H.; Fu, X.; Zhang, Z.; Shen, X.; Ruan, H. Comparison of UAV LiDAR and digital aerial photogrammetry point clouds for estimating forest structural attributes in subtropical planted forests. Forests 2019, 10, 145. [Google Scholar] [CrossRef]
  72. Kociuba, W. Different Paths for Developing Terrestrial LiDAR Data for Comparative Analyses of Topographic Surface Changes. Appl. Sci. 2020, 10, 7409. [Google Scholar] [CrossRef]
  73. Hamshaw, S.D.; Engel, T.; Rizzo, D.M.; O’Neil-Dunne, J.; Dewoolkar, M.M. Application of unmanned aircraft system (UAS) for monitoring bank erosion along river corridors. Geomat. Nat. Hazards Risk 2019, 10, 1285–1305. [Google Scholar] [CrossRef]
  74. D’Oleire-Oltmanns, S.; Marzolff, I.; Peter, K.D.; Ries, J.B. Unmanned aerial vehicle (UAV) for monitoring soil erosion in Morocco. Remote Sens. 2012, 4, 3390–3416. [Google Scholar] [CrossRef]
  75. Harwin, S.; Lucieer, A. Assessing the Accuracy of Georeferenced Point Clouds Produced via Multi-View Stereopsis from Unmanned Aerial Vehicle (UAV) Imagery. Remote Sens. 2012, 4, 1573–1599. [Google Scholar] [CrossRef]
  76. Höhle, J.; Höhle, M. Accuracy assessment of digital elevation models by means of robust statistical methods. ISPRS J. Photogramm. Remote Sens. 2009, 64, 398–406. [Google Scholar] [CrossRef]
  77. Nesbit, P.R.; Hugenholtz, C.H. Enhancing UAV–SfM 3D Model Accuracy in High-Relief Landscapes by Incorporating Oblique Images. Remote Sens. 2019, 11, 239. [Google Scholar] [CrossRef]
Figure 1. (a) Geological map of the study area, including the locations of sites S1 and S2 in Central Evia, Greece (scale 1:50,000 [55,56]; fire event boundaries provided by EFFIS). S2 is located on top of Quaternary deposits, which are not visible at this scale. (b) Topographic map showing the burned area and the two research areas.
Figure 2. Burn severity map after the wildfires of August 2019 in Central Evia (processed with Google Earth Engine). Burnt area boundaries provided by EFFIS.
Figure 3. UAV and LiDAR equipment used during the fieldwork at S1 (a) and S2 (b).
Figure 4. Complete survey workflow.
Figure 6. S1 total annual erosion (m) from (a) UAV- and (b) TLS-derived data.
Figure 7. S1 M3C2-difference (m) point cloud of (a) UAV- and (b) TLS-derived data for the 4-month analysis (19 October 2019–23 February 2020).
Figure 8. S2 M3C2-difference (m) point cloud of (a) UAV- and (b) TLS-derived data for the 4-month analysis (19 October 2019–23 February 2020).
Figure 9. S1 M3C2-difference (m) point cloud of (a) UAV- and (b) TLS-derived data for the 8-month analysis (23 February 2020–11 October 2020).
Figure 10. GCP locations at S1 (a) and S2 (b).
Table 1. Optech Ilris 3D technical specifications.
Optech Ilris 3D
Range (80% reflectivity): 1700 m
Range (10% reflectivity): 650 m
Field of view: 40° × 40°
Raw range accuracy: 7 mm at 100 m
Rotational speed: 0.001 to 20°/s
Beam diameter (1/e): 22 mm at 100 m
Laser wavelength: 1535 nm
Laser class: 1
Integrated camera: 3.1 MP
Size (L × W × H): 320 × 320 × 220 mm
Weight: 13 kg
Operating temperature: 0 to 40 °C
Table 2. LiDAR settings during the 19 October 2019, 23 February 2020 and 11 October 2020 scans.
19 October 2019 (S1 / S2)
Mean distance (m): 45.39 / 8.22
Beam width (mm): 14 / 14
Pulse mode: last / last
Spacing (mm): 16.3 / 7.1
23 February 2020 (S1 / S2)
Mean distance (m): 41.95 / 10
Beam width (mm): 14 / 14
Pulse mode: last / last
Spacing (mm): 12.6 / 10
11 October 2020 (S1 / S2)
Mean distance (m): 44 / -
Beam width (mm): 14 / -
Pulse mode: last / -
Spacing (mm): 10.6 / -
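The spot-spacing values in Table 2 scale roughly linearly with scan distance for a fixed angular step (spacing ≈ range × angular increment in radians). A minimal sketch in Python; the angular-step value below is illustrative and is not taken from the published scanner settings:

```python
import math

def spot_spacing_mm(distance_m: float, angular_step_deg: float) -> float:
    """Approximate TLS spot spacing (mm): the arc length swept by one
    angular step at the given range (spacing = range * step in radians)."""
    return distance_m * math.radians(angular_step_deg) * 1000.0

# Illustrative: at the ~45 m S1 range, an angular step of ~0.0206 deg
# gives a spacing of ~16.3 mm, comparable to the first S1 scan.
print(round(spot_spacing_mm(45.39, 0.0206), 1))
```

This also explains why the closer S2 scans achieve a finer spacing than S1 at the same instrument settings.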
Table 3. SfM-derived model resolution.
Resolution (mm/pix)
Product: 19 October 2019 / 23 February 2020 / 11 October 2020
S1 tiled model: 5.79 / 4.55 / 2.24
S1 DEM: 11.6 / 9.1 / 4.49
S1 orthomosaic: 5.79 / 4.55 / 2.24
S2 tiled model: 3.68 / 2.35 / -
S2 DEM: 7.37 / 4.7 / -
S2 orthomosaic: 3.68 / 2.35 / -
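In every survey of Table 3 the DEM cell size is roughly twice the orthomosaic ground sampling distance, which is typical when the DEM is rasterised from the dense cloud rather than the full image resolution. A minimal consistency check in Python, using the values from Table 3:

```python
# (orthomosaic GSD, DEM resolution) pairs in mm/pix, copied from Table 3
pairs = [(5.79, 11.6), (4.55, 9.1), (2.24, 4.49), (3.68, 7.37), (2.35, 4.7)]

# Each DEM cell spans roughly two ground sampling distances
ratios = [dem / gsd for gsd, dem in pairs]
print([round(r, 2) for r in ratios])
```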
Table 4. (a) Range of local erosion (cm); (b) volume of erosion in the channel (m³); (c) mean slope-derived erosion (cm).
Columns: 19 October 2019–11 October 2020 (annual values) / 19 October 2019–23 February 2020 / 23 February 2020–11 October 2020
(a) Range of local erosion (cm)
S1 (UAV-SfM-derived): 0–48 / 0–4 / 0–44
S1 (TLS-derived): 0–40 / 0–3 / 0–37
S2 (UAV-SfM-derived): - / 0–1.5 / -
S2 (TLS-derived): - / 0–1.5 / -
(b) Volume of erosion in channel (m³)
S1 UAV-SfM channel volume change (m³): 2.90 / 0.40 / 2.50
S1 TLS channel volume change (m³): 3.00 / 0.70 / 2.30
(c) Mean slope-derived erosion (cm)
S1 UAV-SfM mean slope-derived erosion (cm): 1 / 1 / 1
S1 TLS mean slope-derived erosion (cm): 2 / 1 / 2
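The channel volume changes in (b) can be approximated from an M3C2-difference point cloud by averaging the erosion depths and multiplying by the planar area of the region. A minimal sketch, assuming a uniform point distribution over the surface; the sample values below are illustrative, not the study data:

```python
def eroded_volume_m3(m3c2_distances, area_m2):
    """Approximate eroded volume from per-point M3C2 distances (m)
    over a region of planar area area_m2, assuming the points sample
    the surface uniformly. Negative distances are treated as erosion:
    volume = mean(erosion depth) * area."""
    erosion = [-d for d in m3c2_distances if d < 0]
    if not erosion:
        return 0.0
    return sum(erosion) / len(erosion) * area_m2

# Illustrative: a mean lowering of 2.9 cm over a 100 m2 channel strip
# corresponds to ~2.9 m3 of eroded material.
print(round(eroded_volume_m3([-0.029] * 1000, 100.0), 2))
```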
Table 5. (a) Mean XYZ error (m), calculated from the GCPs (GNSS technique); (b) GNSS registration error (m) of each point cloud; (c) total number of points observed in each scan; (d) density of the derived point clouds, given as the mean number of points per m² of each point cloud.
Dates: 19 October 2019 / 23 February 2020 / 11 October 2020
(a) XYZ error (m)
S1 SfM: 0.02 / 0.02 / 0.02
S1 TLS: 0.02 / 0.02 / 0.02
(b) GCP registration error (m)
S1 SfM cloud: 0.02 / 0.02 / 0.02
S1 TLS cloud: 0.03 / 0.03 / 0.025
S2 SfM cloud: 0.02 / 0.02 / -
S2 TLS cloud: 0.03 / 0.03 / -
(c) Number of points
S1 SfM: 3,572,811 / 6,124,336 / 31,535,718
S1 TLS: 730,260 / 1,052,429 / 1,566,387
S2 SfM: 1,472,680 / 3,594,027 / -
S2 TLS: 453,124 / 128,632 / -
(d) Surface density (mean points/m²)
S1 SfM: 7676 / 13,234 / 67,881
S1 TLS: 2037 / 2924 / 4714
S2 SfM: 19,214 / 46,969 / -
S2 TLS: 21,688 / 11,955 / -
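The surface densities in (d) follow directly from the point count and the surveyed area. A minimal sketch in Python; the area value below is a back-calculated assumption for illustration, not a figure reported in the study:

```python
def surface_density(n_points: int, area_m2: float) -> float:
    """Mean surface density of a point cloud in points per square metre."""
    return n_points / area_m2

# Illustrative: the October 2019 S1 SfM cloud (3,572,811 points) over an
# assumed area of ~465 m2 yields a density of ~7676 points/m2.
print(round(surface_density(3_572_811, 465.45)))
```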
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.