Review

Digital Terrain Models Generated with Low-Cost UAV Photogrammetry: Methodology and Accuracy

1
National Institute for Forestry Agriculture and Livestock Research of Mexico—National Center for Disciplinary Research on Water-Soil-Plant-Atmosphere Relationship (CENID-RASPA), Gómez Palacio 35079, Durango, Mexico
2
Agricultural Engineering Graduate Program, University of Chapingo, Chapingo 56230, Texcoco, Mexico
3
Biological and Agricultural Engineering Department, Texas A&M AgriLife Research, Weslaco, TX 78596, USA
*
Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2021, 10(5), 285; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi10050285
Submission received: 15 February 2021 / Revised: 9 April 2021 / Accepted: 26 April 2021 / Published: 29 April 2021
(This article belongs to the Special Issue Unmanned Aerial Systems and Geoinformatics)

Abstract:
Digital terrain model (DTM) generation is essential to recreating terrain morphology once the external elements are removed. Traditional survey methods are still used to collect accurate geographic data on the land surface. Given the emergence of unmanned aerial vehicles (UAVs) equipped with low-cost digital cameras and better photogrammetric methods for digital mapping, efficient approaches are necessary to allow rapid land surveys with high accuracy. This paper provides a review, complemented with the authors’ experience, regarding the UAV photogrammetric process and field survey parameters for DTM generation using popular commercial photogrammetric software to process images obtained with fixed-wing or multicopter UAVs. We analyzed the quality and accuracy of the DTMs based on four categories: (i) the UAV system (UAV platforms and camera); (ii) flight planning and image acquisition (flight altitude, image overlap, UAV speed, orientation of the flight line, camera configuration, and georeferencing); (iii) photogrammetric DTM generation (software, image alignment, dense point cloud generation, and ground filtering); (iv) geomorphology and land use/cover. For flat terrain, UAV photogrammetry provided a horizontal root mean square error (RMSE) between 1 to 3 × the ground sample distance (GSD) and a vertical RMSE between 1 to 4.5 × GSD, and, for complex topography, a horizontal RMSE between 1 to 7 × GSD and a vertical RMSE between 1.5 to 5 × GSD. Finally, we stress that UAV photogrammetry can provide DTMs with high accuracy when the photogrammetric process variables are optimized.

1. Introduction

Many applications require digital terrain models (DTMs), which are generated by interpolating points belonging to the bare land surface [1] from altimetric data produced with conventional or advanced survey methods. Among the methods of suitable quality, those based on total stations (TS) or Global Navigation Satellite Systems (GNSS) help collect accurate geographic data on the land surface. However, collecting high-resolution field data using these methods is often time-consuming and costly [2,3].
With the development and deployment of Laser Imaging Detection and Ranging (LiDAR) systems and terrestrial laser scanners (TLS), field survey data acquisition has been streamlined, as information can be obtained with higher spatial resolutions, and surveyed surfaces are better represented [4]. However, the main disadvantage of LiDAR technology is that it is still not cost-efficient [5].
The cost of terrestrial methodologies is relatively high, and, in general, high-resolution topographic surveying is associated with high capital and logistical costs [6]. Therefore, in the last decade, unmanned aerial vehicles (UAVs) equipped with digital cameras, in conjunction with photogrammetric image processing techniques known as structure-from-motion (SfM), became a viable alternative technology to collect accurate points to generate DTMs with high spatial resolutions [3,7,8]. UAV photogrammetry is a more economical alternative: the platforms are low-cost, and commercial RGB cameras can be used [6,9,10].
Aerial photogrammetry is an appealing field of remote sensing, offering various options and many new applications and requiring rigorous processing to provide controlled results [11]. With digital photogrammetry, 3D topographic products, such as digital elevation models (DEMs), either DTMs or digital surface models (DSMs), contour lines, textured 3D models, vector data, etc., can be produced in a reasonably automated way. UAV photogrammetry’s main advantage is its capacity for direct, rapid, and detailed image capture of a study area with minimum fieldwork. These characteristics lead to cost reductions and much faster project completion, in addition to the possibility to remotely survey complex, inaccessible, and hazardous objects and areas—e.g., [10,12].
UAV photogrammetry has been used for different purposes: to obtain mountain topography [13], geomorphic features of quarries [14], glacial and periglacial geomorphology [15], and digital models of coastal environments [3], for the continuous mapping of surface elevation changes [16], land leveling [17], slope mapping [18], for landslide mapping and characterization [19], foredune mapping [20,21], the quantification of soil erosion based on the elevation comparison of two different dates, monitoring changes in river topography [22], to support the design of terrace drainage networks [7], the observation of ice sheet dynamics [8], for urban flood damage assessment [23], and the mapping of marine litter concentrations in coastal zones [24].
These, and many other studies, show that UAV-based aerial photogrammetry can be competitive in terms of the accuracy, spatial resolution, automation, and costs compared with other techniques, such as LiDAR [25], TLS [2], TS [18], or GNSS [26], for certain applications, as long as specific procedures and survey parameter optimization are followed.
The positional accuracy and quality of the DTM that UAV photogrammetry can provide is of interest in various studies. The positional accuracy represents the nearness of those values to the entity’s “true” position in that system. The positional accuracy requirements for a DTM are directly related to its intended use [27]. The Federal Geographical Data Committee [28] indicates the accuracy threshold value recommendations for various types of projects that can be used as a reference frame. However, governmental agencies can establish limits for their product specifications and applications and contracting purposes.
In this sense, the accuracy of the DTM is one of the more pressing concerns, and it highly depends on the quality of the field survey. Several authors have studied the accuracy of DEMs obtained with UAV photogrammetry—e.g., [29,30]. The spatial resolution can be a few centimeters (<10 cm/pixel). Carrying out well-established flight planning and fieldwork strategies, using proper camera settings, and establishing the appropriate number of ground control points (GCPs) can help obtain a high-accuracy, high-quality DEM [2,31]. Otherwise, the quality of the DEM may deteriorate.
DTM generation appears to be a simple process; however, high DTM accuracy demands the optimization of photogrammetric procedures and survey parameters and adherence to essential UAV flight rules [3,5]. Therefore, best-practice guidelines are essential for professional UAV operators, who may be under pressure to optimize their time and minimize their costs to execute multiple jobs in a row. Under such circumstances, rigorous accuracy checking is likely to be impractical in every situation.
Experience and previous work must be used to obtain high accuracy in the final topographic products [32]. Although there are several papers regarding field survey data quality, it is necessary to integrate the dispersed knowledge available in the literature focusing on low-cost UAV platforms to obtain accurate DTMs.
In this way, the goal of this study is to present recent advances, complemented with the authors’ experience, regarding aerial photogrammetric procedures and the field survey parameters that must be taken into account to improve DTM generation with low-cost UAV platforms. More than 70 studies were analyzed in which the DTM accuracy was reported in relation to variables concerning the terrain, UAV flight, camera sensor, georeferencing, and post-processing.
This work is divided into four sections. In the first section, we discuss the UAV platforms and the camera’s role in the quality and accuracy of DTMs. In the second section, we analyze the role of each of the flight planning variables (flight altitude, image overlap, UAV speed, orientation of the flight lines, and camera configuration), the control points (number and configuration), and the processing of the images (software and ground filtering) in the accuracy and resolution of the DTM. In the third section, we analyze whether UAV photogrammetry can be used on different ground cover types (bare land, vegetation, and water bodies). The remaining sections present the accuracy assessment. In the Conclusion, we present our primary recommendations, derived from this work, for obtaining a high-quality DTM.

2. UAV Data Collection Systems

2.1. UAV Platforms

The selection of the type of UAV platform (fixed-wing or multicopter) depends on the specific application, the necessary resolution in the 3D point cloud, the area and location of the study site, and the weather conditions. The 3D point cloud’s accuracy appears to be independent of the UAV platform—e.g., [33,34]. Ruggles et al. [34] found that the point cloud resolution improved when using multicopter UAVs instead of fixed-wing UAVs. However, this also depends on the camera used to acquire images. Gómez-Gutiérrez and Gonçalves [33] found that a point cloud obtained using the multicopter detected smaller changes than one produced by the fixed-wing. They concluded that the fixed-wing might be a better alternative to the multicopter when exploring vertical features with similar or lower slope gradients (<52°). Multicopters can often carry a greater payload, allowing for the installation of more advanced and complex sensing systems. Fixed-wing UAVs are more suitable for capturing images of larger areas.
Due to their ability to fly at low altitudes, multicopters are more suitable when finer surface details are required, and they are commonly used to capture oblique aerial images. They can also take off and land in a small area. However, their coverage area is limited due to the relatively low flight speed and high battery drain [35], and they tend to be more negatively impacted by environmental factors, such as extreme temperatures. On sites that are not easily accessible, a platform with compact size and weight, preferably suitable to carry in a backpack, is recommended; in this situation, a multicopter is more suitable than a fixed-wing [10].

2.2. Camera Calibration

Camera calibration has traditionally been, and continues to be, the single most significant factor determining the accuracy potential and, to a large extent, the reliability of close-range photogrammetric measurements [36]. UAVs are generally equipped with non-metric RGB digital cameras, which are typically not designed explicitly for photogrammetric surveying.
Non-metric RGB digital cameras are a popular choice due to their light weight and low cost. However, the type of selected camera and image resolution can influence the final product accuracy—e.g., [34]. These cameras have good radiometric quality but low geometric quality, which is caused by lens distortion. Therefore, it is highly recommended to perform calibration to obtain reliable photogrammetric measurements.
Camera calibration can be performed with two strategies: independently of the aerial acquisitions (pre-calibration) or included in the bundle block adjustment (self-calibration). Pre-calibration is often performed in-lab using convergent images and varying scene depth [37], and can be realized using dedicated software (e.g., Agisoft Lens or PhotoModeler) and predetermined calibration sheets; most commercial photogrammetric software includes camera self-calibration.
Self-calibration has greatly simplified the calibration task and is likely to remain the most applied method within different studies. Luhmann et al. [36] described self-calibration rules for minimizing observation errors and providing more accurate calibration parameter estimates. These rules include incorporating oblique images in the project or fixed zoom/focus and aperture settings with no lens change or adjustments during image acquisition [36]. Following these well-proven rules for self-calibration can allow for reliable measurements from almost any camera.

3. Flight Planning and Image Acquisition

Flight planning is likely the most complex and most important part of fieldwork. It involves many considerations that have a significant influence on the quality and accuracy of the DTM. It is also not easy to go back and acquire new data due to planning or logistical problems, such as flight authorization and weather.
Based on the necessary characteristics of the final DTM (the expected resolution and accuracy), certain planning parameters are defined before the flight (Figure 1): altitude, image overlap (front and side overlap), UAV speed, parameters related to the orientation of the flight lines, and the number of ground control points (GCPs) and checkpoints (CPs). To define these parameters, it is also essential to know the platform’s operating restrictions in the country or province where the flight will occur. Despite the importance of these parameters, many works do not provide enough processing details to fully understand the causes of variability.

3.1. Flight Altitude above Ground Level (AGL)

One of the most critical parameters in a UAV flight is the altitude. The altitude determines the spatial resolution of the registered images, the flight duration, the number of images per unit area, and the area covered. The flight altitude is derived from the target ground sample distance (GSD) and the camera sensor’s internal parameters. Equations (1) and (2) are used to calculate the AGL, and the smallest value resulting from both equations is chosen [38].
AGL1 = (f × GSD × HR)/SW    (1)
AGL2 = (f × GSD × VR)/SH    (2)
where AGL is the flight altitude above ground level (m); f is the focal length (mm); GSD is the ground sample distance (m/pixel); HR and VR are the horizontal and vertical resolutions of the sensor (px); SW is the sensor width (mm); and SH is the sensor height (mm).
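As an illustration, Equations (1) and (2) can be implemented in a few lines. The camera parameters in the example below (a 1-inch-class sensor) are hypothetical values chosen by us, not figures from the reviewed studies:

```python
def flight_altitude_agl(gsd_m, focal_mm, hr_px, vr_px, sensor_w_mm, sensor_h_mm):
    """Flight altitude (m) that guarantees the target GSD, per Equations (1) and (2).

    Both candidate altitudes are computed and the smaller one is kept,
    so the requested GSD is met in both image dimensions.
    """
    agl1 = focal_mm * gsd_m * hr_px / sensor_w_mm   # Equation (1)
    agl2 = focal_mm * gsd_m * vr_px / sensor_h_mm   # Equation (2)
    return min(agl1, agl2)

# Hypothetical example: 13.2 x 8.8 mm sensor, 5472 x 3648 px images,
# f = 8.8 mm, and a target GSD of 3 cm/pixel:
agl = flight_altitude_agl(0.03, 8.8, 5472, 3648, 13.2, 8.8)
print(round(agl, 2))  # 109.44 (m)
```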
Most scientific studies capture imagery with GSD values between <0.01 and 0.50 m and altitudes between 5 and 250 m [39]. A low flight altitude indicates high spatial resolutions but covers a limited area on the ground and increases a particular area’s flight duration and processing time. A high flight altitude (>120 m) can cause the GCPs to not be distinguished in the images. In many countries, this is also commonly regulated (Table 1). High spatial resolutions do not necessarily imply high accuracy in the generated DTM. The problem associated with high spatial resolution is related to the computing power needed, since an increase in resolution means a significant increase in the data volume [40].
Table 1. Maximum allowed altitude for UAV flights (Prepared by the authors with information from Reger et al. [41]).
Country           Maximum Allowed UAV Flight Altitude
Mexico            122 m (400 ft)
Germany           100 m
European Union    120 m
U.S.              122 m (400 ft)
Japan             150 m
For mapping, if the terrain is flat or almost flat, the usual method of capturing the terrain with a UAV is to fly horizontally at a constant height above mean sea level (MSL). In abruptly changing terrain, the flight altitude must adapt to the ground’s height in each flight line instead of maintaining a constant height above the MSL.
The previous recommendation arises because, when a UAV flies at a constant height above the MSL, researchers found that vertical root mean square error (RMSE) values were larger in areas with complex morphology than in flat areas [42,43]. In complex morphology, the distance between the sensor and the ground is not constant, and the overlap is reduced and can become critically low in very steep areas, resulting in fewer overlapping images in steeper areas than in lower ones (Figure 2).
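The overlap loss over rising terrain can be quantified with a simple geometric sketch. The function below is our own approximation (it assumes a nadir camera and locally flat ground; the name and example values are ours, not from the cited studies): the image footprint scales with the sensor-to-ground distance, while the spacing between exposures stays fixed, so the overlapped fraction shrinks as the ground rises.

```python
def effective_front_overlap(nominal_overlap, agl_m, terrain_rise_m):
    """Approximate front overlap over terrain that rises toward a UAV
    flying at a constant absolute altitude (nadir camera assumed).

    The footprint shrinks in proportion to the reduced sensor-to-ground
    distance, so the un-overlapped fraction between consecutive images
    grows by the same factor.
    """
    if terrain_rise_m >= agl_m:
        raise ValueError("terrain would intersect the flight line")
    scale = agl_m / (agl_m - terrain_rise_m)
    return 1.0 - (1.0 - nominal_overlap) * scale

# An 80% overlap planned at 100 m AGL drops sharply where the ground
# rises 40 m toward the flight line:
print(round(effective_front_overlap(0.80, 100.0, 40.0), 3))  # 0.667
```

This is why adapting the flight altitude to the terrain (or planning the overlap for the highest ground in the block) is recommended in complex morphology.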
Different authors studied the DTM accuracy with respect to different AGL values (Table 2). Gómez-Candón et al. [44] found that the RMSE of the DEM increased by 1 cm when the flight altitude increased from 30 to 60 m, while at altitudes from 60 to 100 m, the RMSE was almost constant. Agüera-Vega et al. [45] indicated that, when the GSD increased, the vertical accuracy tended to decrease, while the horizontal accuracy was not influenced by the flight altitude; they found that the vertical RMSE increased from 50 to 80 m and was almost constant at altitudes from 80 to 120 m. These studies showed that the higher the flight altitude, the greater the RMSE in the DEM; however, beyond a certain altitude (>60 m), the RMSE was almost constant.
Rock et al. [46] found that the indirect sensor orientation accuracy decreased with increasing flight altitude; however, they reported the best accuracies for intermediate flight altitudes (100 to 150 m). Yurtseven [47] found that the low altitude data were affected by the phenomenon called the “doming effect”, which is considered an imperfection of the 3D reconstruction algorithm for photogrammetric processes. The results reported by Zimmerman et al. [48] showed that flying at higher altitudes (>90 m) produced a more accurate DEM. Yurtseven [47] found similar results at lower altitudes (<50 m), where the error in the DEM increased.
Previous studies indicated that both low and high flight altitudes affected the accuracy of the DEM. The doming effect that occurs at low altitudes could be corrected by increasing the number of GCPs [47]; however, this would increase the capital and logistics costs. Instead of using a large number of GCPs, the problem can also be solved by using a GNSS RTK (Real Time Kinematic)-equipped platform, as shown in [49] and subsequently in [50]. The results shown by Gómez-Candón et al. [44] can be explained by the use of a large number of GCPs.
In this sense, it is necessary, for efficiency and time, to define a minimum altitude at which a GSD value sufficient to detect the desired surface details is guaranteed. In addition, the type of platform must be taken into account. Singh and Frazier [39] considered that the minimum mapping unit (MMU) should factor into decisions regarding spatial resolution; the MMU may help researchers balance the data volume and processing costs to determine the most appropriate GSD for the output products. According to Table 2, the optimal flight altitude should be between 70 and 150 m, at which an average vertical RMSE of 2 × GSD was reported. Regarding the maximum, an altitude must be selected at which the quality of the 3D point cloud is not lost, taking into account the maximum flight altitude allowed in the country.

3.2. Image Overlap

In conventional photogrammetric flights, a front overlap of 55 to 60% and a side overlap of 15 to 25% are typically recommended. However, UAV images must have a high percentage of overlap so that the photogrammetric processing of images can potentially benefit from the resulting redundancy, which would still allow the generation of high-quality 3D point clouds from dense multi-image matching [51].
There is a positive relationship between the image overlap and the accuracy of digital elevation models (DEMs): the accuracy increases with the overlap percentage, and the object’s shape is better reconstructed [52]. Photogrammetric software packages make similar recommendations: Agisoft Metashape recommends acquiring UAV images with at least 80% front overlap and 60% side overlap [53], and Pix4D suggests at least 75% front overlap and 60% side overlap [54]; generally, the front overlap is equal to or greater than the side overlap.
However, with exaggerated overlaps, stereoscopic vision is lost in the photogrammetric reconstruction, and the processing time is increased without improving the quality of the final products. Overlap greater than 90% can generate deformations in the digital model—e.g., [11]. In this sense, it is recommended for topographic surveys and DTM generation to use front overlaps between 70% and 90% and side overlaps of 60% to 80%. The lower the AGL, the closer the overlap should be to the upper limit.
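The recommended percentages translate directly into the camera triggering distance and flight-line spacing. The sketch below is a simplified nadir-imaging model; the function names and the example camera values are our own illustrative assumptions:

```python
def footprint_m(agl_m, sensor_dim_mm, focal_mm):
    """Ground footprint (m) of one image dimension for a nadir camera."""
    return agl_m * sensor_dim_mm / focal_mm

def shooting_base_m(agl_m, sensor_along_mm, focal_mm, front_overlap):
    """Distance between consecutive exposures along a flight line (m)."""
    return footprint_m(agl_m, sensor_along_mm, focal_mm) * (1.0 - front_overlap)

def line_spacing_m(agl_m, sensor_across_mm, focal_mm, side_overlap):
    """Distance between adjacent flight lines (m)."""
    return footprint_m(agl_m, sensor_across_mm, focal_mm) * (1.0 - side_overlap)

# Hypothetical 13.2 x 8.8 mm sensor, f = 8.8 mm, flown at 100 m AGL
# with 80% front and 60% side overlap:
print(round(shooting_base_m(100.0, 8.8, 8.8, 0.80), 1))  # 20.0 m between photos
print(round(line_spacing_m(100.0, 13.2, 8.8, 0.60), 1))  # 60.0 m between lines
```

The design choice here is simply that higher overlap shrinks both spacings, which is why exaggerated overlaps inflate the image count and processing time without a matching accuracy gain.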

3.3. UAV Speed

The UAV flight speed is a crucial user-defined parameter because it affects the image quality and power consumption of the UAV. However, studies have not considered UAV flight speed in sufficient detail, given its importance in determining the DEM accuracy. Some research has been carried out to determine the optimal speed according to the unit distance energy consumption [55], and others added the wind’s effect to select the optimal speed [56].
Several variables define the selection of the UAV flight speed, among them the maximum flight speed recommended by the manufacturer, the wind speed and direction, the camera’s shutter speed, and the operating restrictions of the country. The effect of wind conditions on 3D point clouds and DEMs has not been studied in detail, although it can be substantial: high wind speeds tilt the UAV drastically, leading to large pitch and roll angles, cause the UAV to use more power during flight, and generally reduce the UAV’s stability [57].
Therefore, the flight speed must be programmed, considering the maximum wind speed at which the platform is sensitive. The shutter speed is closely related to the flight speed; Roth et al. [58] determined that the wrong shutter speed settings are a significant cause of motion blur. A long shutter time, combined with a fast flight speed, may force motion blur. An option to reduce the motion blur is if the UAV stops to take the image. However, this would cause greater energy consumption and, therefore, the survey area for each battery use would be reduced.
The flight speed demanded by the user should be connected to the expected quality of the images. Therefore, Roth et al. [58] proposed Equation (3), in which the flight speed is chosen based on the maximum tolerable motion blur; they recommended keeping the motion blur (usually denoted as a percentage of the size of a pixel) as low as possible, and in any case below 50%. For the limits of validity of this equation, the original publication should be consulted. The other variables indicated above must also be considered.
S = (GSD × δ)/lt    (3)
where S is the UAV speed (m/s); δ is the maximum motion blur (px); and lt is the shutter speed (s).
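Equation (3) is straightforward to apply. The shutter speed and blur tolerance in the example below are illustrative values of our own choosing, not recommendations from [58]:

```python
def max_uav_speed(gsd_m, max_blur_px, shutter_s):
    """Maximum UAV ground speed (m/s) that keeps motion blur below
    max_blur_px pixels, per Equation (3): S = GSD * delta / lt."""
    return gsd_m * max_blur_px / shutter_s

# With a 3 cm/pixel GSD, a blur tolerance of half a pixel, and a
# 1/500 s shutter, the platform should not exceed:
print(round(max_uav_speed(0.03, 0.5, 1 / 500), 2))  # 7.5 (m/s)
```

In practice, the result is capped by the manufacturer's maximum speed and the prevailing wind, as discussed above.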

3.4. Orientation of the Flight Lines and Camera Configuration

Generally, flight plans are designed as parallel flight lines (patterns such as back-and-forth and spiral) at a stable altitude with consistent overlap and a nadir-facing camera angle to achieve regular along-flight-line stereoscopic coverage. This configuration has traditionally been considered the most effective for acquisition, particularly in terms of time and simplicity, and it can be automatically generated by specifying a few basic flight parameters in flight planning software. However, single look-direction, gridded image blocks typically do not capture enough detail or geometric information in more complex scenes and cause the resultant point cloud to contain artificial doming due to error accumulation in the SfM process—e.g., [59].
Therefore, various flight configurations have been studied for reducing systematic dome errors and for increasing the accuracy of the DEMs, such as single grid missions supplemented with oblique images or with the arc flight plan, double grid missions (with the acquisition of vertical or oblique images or both) or different flight altitudes in the same flight plan. Ali and Abed [60] used two types of flight configuration (single grid and double grid mission) to acquire vertical images in two different altitudes (100 and 120 m) and found a higher RMSEz in the DEM that was generated with the double grid mission.
James and Robson [61] indicated that augmenting the image block with an additional set of flight lines at a different azimuth heading did not significantly reduce the systematic DEM deformation. Sanz-Ablanedo et al. [62] indicated that only intermediate results are obtained when different flight designs that included only vertical imagery were mixed. The DEM accuracy is better when oblique and vertical imageries are integrated compared with when using only vertical imagery [52], as the estimation of the exterior and interior (according to [50]) orientation parameters of the airborne imagery in self-calibration is improved.
Nesbit and Hugenholtz [63] found that incorporating oblique images with 15–35° tilt angles generally increases the accuracy, and single-angle image sets at higher oblique angles (30–35°) could produce reliable results if combined datasets were not possible. The use of oblique images is particularly appropriate in hilly terrain with rugged topography and overhangs or for surveying subvertical walls [14,42].
Therefore, the images should be acquired in preprogrammed flight using continuous automatic shoot mode [11]. Flight lines must be added to traditional flight plans (patterns, such as back-and-forth and spiral) to capture oblique images. It is true that the combination of vertical (nadir) and oblique images requires even more processing time; however, the result is a 3D point cloud with higher quality.
The orientation of the flight lines must be based on the terrain morphology. On rectangular surfaces, it is most convenient for the flight direction to be parallel to the longest side of the rectangle (Figure 3). The characteristics of the camera are shown in Table 3.

3.5. Georeferencing, GCPs, and CPs

To guarantee a certain degree of accuracy in digital models using UAV photogrammetry, it might be necessary to collect GCPs. These points can be either permanent ground features or reference targets scattered on the ground, which must be surveyed to obtain their precise coordinates and ensure that they are identifiable on the raw images [64]. In addition, the numbers of surveyed GCPs should also include additional check points (CPs), which will be used to assess the resulting data quality. The GCPs are used for georeferencing the 3D point cloud and to improve the estimation of the internal and external orientation parameters in the SfM process. At the same time, the DEM accuracies will be evaluated by comparing the values of the coordinates of the CPs as computed in the aerial triangulation solution to the coordinates of the surveyed CPs.
One of the problems that commonly arises with UAV photogrammetry is the number of GCPs that must be established to achieve the desired accuracy. It is widely recognized that the more GCPs used, the better the resulting accuracy will be. However, when the number of GCPs is increased, the accuracy increases only asymptotically once a specific density of GCPs is reached [46,65]. In addition, establishing large numbers of points is time-consuming and may erode many of the cost advantages of surveying with UAVs [32].
In practice, many more GCPs than the minimum required are usually established, and different recommendations for the number of GCPs are reported in various works (Table 4). Tahar [66] found that it is necessary to establish at least seven GCPs on a given surface: using between 4 and 12 GCPs, the vertical RMSE of the digital models decreased once seven or more GCPs were used. Jiménez-Jiménez et al. [67] found that at least five GCPs distributed throughout the study area are essential, and that one GCP for every 3 ha is needed to obtain vertical RMSE values close to 3 × GSD; the study area in that research was about 37 ha and approximately rectangular.
Coveney and Roberts [68] reported, in a study carried out on 29 ha of urban parkland with flat terrain morphology, that errors (vertical RMSE) of about 2 × GSD could be achieved when using one GCP for every 2 ha of ground area, and that utilizing more GCPs produced identical results. Meanwhile, in a 17 ha study site whose morphology included a wide range of slope values, Martínez-Carricondo et al. [69] found that a vertical RMSE of about 1.6 × GSD could be achieved when using one GCP per hectare with a stratified distribution. Santise et al. [70] found that a vertical RMSE of about 1.3 × GSD could be achieved with approximately 1 GCP/ha (28 GCPs in 25 ha).
These recommendations range from 0.3 to 1.0 GCP/ha to obtain a vertical RMSE between 1 and 3 × GSD. The above data and other studies reported in the literature [39] show that the number of GCPs that must be established per unit area is not yet clear, at least for all types of morphology and area sizes. Therefore, different studies have taken a different approach. Sanz-Ablanedo et al. [30] related the number of GCPs to each 100 images acquired with the UAV and found that vertical RMSE values of 2 × GSD could be achieved with two or more GCPs per 100 images; vertical RMSE values could be further improved toward 1.5 × GSD by using four GCPs per 100 images. They also found horizontal RMSE values of approximately ± one GSD with 2.5 to 3 GCPs per 100 images. This criterion for defining the number of GCPs appears to be a good estimator, as it implicitly accounts for the AGL, image overlap, orientation of the flight lines, and camera configuration.
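The images-based heuristic of Sanz-Ablanedo et al. [30] is easy to apply during planning. The helper below is our own sketch of it; the floor of three GCPs is our assumption (three is the geometric minimum for a 3D similarity transformation), not part of the cited study:

```python
import math

def recommended_gcp_count(num_images, gcps_per_100_images=2.0):
    """GCPs suggested by the per-100-images heuristic of Sanz-Ablanedo
    et al. [30]: ~2 GCPs/100 images for vertical RMSE near 2 x GSD,
    ~4 GCPs/100 images for ~1.5 x GSD. Never returns fewer than 3,
    the geometric minimum for georeferencing (our assumption)."""
    return max(3, math.ceil(num_images * gcps_per_100_images / 100.0))

# A block of 450 images aiming at ~2 x GSD vertical RMSE:
print(recommended_gcp_count(450))        # 9
# The same block aiming at ~1.5 x GSD:
print(recommended_gcp_count(450, 4.0))   # 18
```

Because the image count already reflects the AGL, overlap, and flight-line configuration, this estimate adapts to the survey design rather than to the ground area alone.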
The distribution of the GCPs also influences the DEM accuracy: the accuracy may decrease slightly when the number of GCPs is increased but the GCPs are not well distributed. Different distributions of GCPs have been studied to try to optimize the products obtained by UAV photogrammetry (Table 5). Harwin and Lucieer [72] recommended that the GCPs be distributed throughout the focus area and adapted to the relief, resulting in more GCPs in steeper terrain.
Rangel et al. [9] conducted a study on 270 ha using thirteen different configurations of the number and distribution of GCPs and concluded that the insertion of GCPs in the central part of the block did not significantly contribute to an increase in the horizontal accuracy of the geospatial products. To achieve optimal planimetric results, GCPs must be placed on the edge of the study area with a horizontal separation of 7 to 8 ground base units (the horizontal distance between the centers of two consecutive images); similar results were reported by Martínez-Carricondo et al. [69], who also found that, to increase altimetric accuracy, GCPs need to be added in the central part of the area with a stratified distribution and a horizontal separation of 3 to 4 ground base units.
The referred studies were developed on square-shaped or rectangular-shaped terrain. However, more specific studies may be necessary to obtain the site topography (DTM) where one dimension is much larger than another, such as roads, linear power distribution, pipelines, and channels. Thus, it cannot be guaranteed that the conclusions drawn from the studies cited above can be applied to these sites. For these types of sites, Ferrer-González et al. [64] recommend using 4 to 5 GCPs/km distributed alternatively on both sides of the linear work in an offset or a zigzagging pattern, with a pair of GCPs at each end.
GCPs are not the only option for georeferencing. In recent years, an alternative to georeferencing with GCPs (indirect georeferencing) has emerged: direct georeferencing using a platform with a survey-grade GNSS/RTK receiver (RTK UAV). Hugenholtz et al. [29] compared the two types of georeferencing and achieved a similar horizontal RMSE; however, the vertical RMSE values were two to three times greater with direct georeferencing. They concluded that, in applications requiring a vertical RMSE better than ±0.12 m, GCPs should be used rather than a GNSS/RTK platform. Štroner et al. [49] indicated that, with these platforms, the vertical accuracy can improve to a level of 1–2 × GSD using a small number of GCPs (at least one). Taddia et al. [73] found that, when combining vertical and oblique images in the photogrammetric block (without a GCP), it was possible to obtain accuracies similar to those of DTMs referenced with GCPs. It is unclear whether direct georeferencing will supersede GCPs to become the standard referencing technique for UAV blocks. However, with the emergence of low-cost RTK platforms (e.g., DJI Phantom 4 RTK), their use will increase, especially in large or difficult-to-access areas, or where surveying GCPs is complicated.
In the case of CPs, there is no consensus regarding the sample size. The National Map Accuracy Standard (NMAS) and the National Standard for Spatial Data Accuracy (NSSDA) recommend a minimum of 20 CPs. In contrast, the ASPRS Positional Accuracy Standards for Digital Geospatial Data recommend a number of CPs based on the area and indicate that in no case shall the non-vegetated terrain vertical accuracy be based on fewer than 25 CPs. CPs should be distributed more densely close to essential features and more sparsely in areas of little or no interest.

4. Photogrammetric DTM Generation

SfM algorithms facilitate the production of detailed topographic models from images collected with UAVs. The primary product of the SfM process is a 3D point cloud of identifiable features present in the input images. Later, a DEM (DSM or DTM) and a georeferenced orthomosaic can be generated.

4.1. Software

A range of software packages using the SfM approach are currently powerful and efficient enough to work with large sets of images and automatically provide results in a relatively short time. These include desktop packages, such as Agisoft Metashape (formerly PhotoScan), Pix4D, PhotoModeler, SimActive CORRELATOR3D, Inpho UASMaster, MicMac, VisualSfM, Bundler, and CMVS, as well as online-processing solutions, such as DroneDeploy.
These software packages follow a general workflow with several data-processing phases (Figure 4): (1) importing the images into the software; (2) alignment between overlapping images; (3) georeferencing the images using GCPs to optimize the camera positions and orientations; (4) dense point cloud generation; (5) ground filtering, classifying points into ground points and points belonging to above-ground objects; (6) eliminating or keeping the natural (vegetation) or built (buildings, houses, etc.) above-ground objects in the dense point cloud; (7) if the above-ground objects are eliminated, a DTM is created from the remaining ground points; and (8) if the above-ground objects are kept, a DSM and an orthomosaic are created.
Even if the software can automatically provide results, operator intervention is necessary for certain phases of the data processing, especially to check the alignment accuracy and to remove points belonging to aboveground objects to retrieve ground points for generating DTMs [5].
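The eight workflow phases above can be summarized in a runnable skeleton; all function and key names below are placeholders invented for illustration and do not correspond to any real software API.

```python
# Hypothetical skeleton of the general SfM workflow (Figure 4).
# Every name is a placeholder, not a real photogrammetry API; each
# phase is a stub so the control flow of steps (1)-(8) is visible.

def run_workflow(images, gcps, keep_above_ground_objects):
    project = {"images": list(images)}                        # (1) import images
    project["alignment"] = f"aligned {len(images)} images"    # (2) align overlapping images
    project["georef"] = f"optimized with {len(gcps)} GCPs"    # (3) georeference with GCPs
    project["dense_cloud"] = "dense 3D point cloud"           # (4) densify the sparse cloud
    project["classes"] = ("ground", "above-ground objects")   # (5) ground filtering
    if keep_above_ground_objects:                             # (6) keep or remove objects
        project["products"] = ["DSM", "orthomosaic"]          # (8) objects kept
    else:
        project["products"] = ["DTM"]                         # (7) objects removed
    return project

dtm_run = run_workflow(["img%03d.jpg" % i for i in range(5)], ["GCP1", "GCP2"], False)
print(dtm_run["products"])   # ['DTM']
```

The branch at step (6) is the point where, as the text notes, operator intervention is usually needed to verify the ground/object classification before the DTM is interpolated.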
Different studies have compared the performance of photogrammetric software. Casella et al. [74] compared five software packages and found similar results (RMSE) in the horizontal and vertical components for PhotoScan, Pix4D, UASMaster, and MicMac, while ContextCapture showed similar results only for the horizontal component. Sanz-Ablanedo et al. [62] evaluated the performance of PhotoScan and Pix4D and found similar results for different flight designs. Jaud et al. [75] found that MicMac and PhotoScan (Metashape) provided similar horizontal and vertical errors within a control region (delimited by GCPs), although PhotoScan reconstructed topographic details better than MicMac outside the control region, especially on surfaces with substantial slope changes. Sona et al. [76] found that PhotoScan performed well, especially in flat areas and in the presence of shadows. Professionals most commonly employ desktop software, such as PhotoScan, Pix4D, and PhotoModeler, because they are more straightforward to use; however, most of the processing is done as a black box. Many users prefer open-source software (e.g., MicMac, COLMAP, or AliceVision) because of its flexibility, but these packages are recommended for experienced users.

4.2. Image Alignment and Dense Point Cloud Generation

In the first data-processing step, the images are imported. Alfio et al. [77] found that, to reduce the processing time relative to images in DNG format while preserving the quality of the photogrammetric process, the best option was to use JPEG images with a compression level of 12.
In the next step, SfM aligns the imagery by solving the collinearity equations in an arbitrarily scaled coordinate system without any initial external information (camera location and attitude, or GCPs) [74]. Software packages typically generate key points in each image automatically. The number of key points in an image primarily depends on the image texture and resolution, such that complex images at high resolution return the most key points [6]. Matching key points are then identified, and inconsistent matches are removed.
A bundle-adjustment algorithm is used to simultaneously solve the 3D geometry of the scene, the different camera positions, and the camera parameters [74]. This step's output is a sparse point cloud generated in a relative "image-space" coordinate system. The number of overlapping images after alignment is not constant throughout the area: near the edges, there are fewer overlapping images than in the central area (Figure 5). This causes measurements made near the edges to be less accurate than those made in the central area; therefore, a wider area must be covered than the actual area of interest. This effect is visible in the blurred and overlapping edges of the houses in the upper picture of Figure 3.
Subsequently, the GCP coordinates are imported, and the GCPs are manually identified in the images. Currently, it is also possible to use automatic identification of normal and coded targets. Coded targets are not widely used in DTM generation because they must be very large for the pattern to be recognized (e.g., Metashape rounded coded targets, Pix4D QR codes). The GCP coordinates are used to transform the SfM image-space coordinates into an absolute coordinate system [6].
Later, multi-view stereo image-matching algorithms are applied to increase the sparse point cloud density and generate a dense 3D point cloud (Figure 6). Photogrammetry software generally offers different quality settings for building the dense cloud. This parameter affects the final DEM accuracy (e.g., [65]) and resolution (e.g., [23]): the lower the quality setting, the lower the spatial resolution and accuracy of the DEM. Therefore, if high quality and accuracy are required, a high-quality setting is recommended, although it requires more processing time.
High point densities (points/m2) can be obtained with UAV photogrammetry. The type of platform and camera, the flight-planning parameters, and the quality of image processing influence this density. These point densities may be similar to or lower than those generated by TLS; however, for many applications, the slightly lower point densities of UAV photogrammetry are an acceptable trade-off against the tremendous cost of TLS systems [78]. Point densities comparable to those of UAV photogrammetry could hardly be achieved in similar time with a traditional ground survey using a TS (e.g., [18]).
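As a rough guide, a dense cloud produced at full image resolution contains on the order of one point per ground pixel, so the density scales with 1/GSD². The sketch below rests on that approximation and on the assumption that a lower-quality setting downsamples the images by a per-axis factor; the exact behaviour is software-dependent.

```python
def point_density(gsd_m, downscale=1):
    """Rough dense-cloud density estimate: about one point per matched
    pixel, i.e. ~1/GSD^2 points/m^2 at full resolution. `downscale` is
    the per-axis image downsampling assumed for a lower 'quality'
    setting (software-dependent assumption)."""
    return 1.0 / (gsd_m * downscale) ** 2

print(point_density(0.03))      # ~1111 points/m^2 at full resolution, 3 cm GSD
print(point_density(0.03, 2))   # ~278 points/m^2 at half resolution
```

This makes explicit why halving the processing quality reduces the density by roughly a factor of four, and why such densities far exceed what a point-by-point TS survey can deliver.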

4.3. Ground Filtering and Generation of the DTM

DTM generation is essential in many applications that recreate the shape of the land surface once the external elements, such as vegetation and buildings, are removed. Therefore, to derive the DTM, the point cloud of the digital surface model (DSM) must be filtered to remove non-ground points, a process called ground filtering [5]. Ground filtering is a critical step in the restitution process for an accurate representation of the land surface topographic features and is becoming a standard function in commercial software [79].
Ground filtering is performed after the dense 3D point cloud has been generated: the points are classified into ground points and points belonging to above-ground objects (Figure 7). After that, the DTM (Figure 8) is generated by interpolating the ground points belonging to the bare-earth surface. After the dense point cloud classification, noise points may remain, and they must be removed manually. Typically, noise points are much higher or lower than expected and do not represent actual ground features.
Many commercial and non-commercial software programs have a tool to classify the dense point cloud and perform ground filtering. For example, Agisoft PhotoScan Professional performs ground filtering using the adaptive triangulated irregular network algorithm, and Pix4D software utilizes a variational raster-based approach. However, both can induce errors in the DTM by misclassifying the ground cover vegetation [80] or confusing soil surface for an object surface. In general, filtering approaches tend to commit more errors in terrains with many aboveground objects; therefore, ground filtering must be monitored and often corrected manually.
It has been reported that cloth simulation filtering is one of the most accurate algorithms to automate ground filtering on 3D point clouds obtained from photogrammetry [1]. The efficiency of this and other existing algorithms has been improved, and new algorithms have been proposed that provide more reliable and accurate results.
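To make the idea of ground filtering concrete, the sketch below implements a deliberately minimal grid-minimum filter: it keeps points close to the lowest elevation in each grid cell. This is far simpler than cloth simulation filtering or the adaptive TIN approach and is meant only to illustrate the classification step; the cell size and tolerance are arbitrary illustrative values.

```python
import numpy as np

def grid_min_ground_filter(points, cell=1.0, tol=0.3):
    """Illustrative, much-simplified ground filter (not CSF/ATIN): keep
    points within `tol` metres of the lowest point in their grid cell.
    points: (N, 3) array of x, y, z; returns a boolean ground mask."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    zmin = {}
    for key, z in zip(map(tuple, ij), points[:, 2]):
        if z < zmin.get(key, np.inf):
            zmin[key] = z                       # lowest elevation per cell
    floor = np.array([zmin[tuple(k)] for k in ij])
    return points[:, 2] <= floor + tol          # True = classified as ground

# Toy cloud: flat ground near z = 0 plus a 2 m "shrub" point
pts = np.array([[0.1, 0.1, 0.0], [0.5, 0.5, 0.05], [0.4, 0.2, 2.0]])
mask = grid_min_ground_filter(pts)
print(mask)   # [ True  True False]
```

Real algorithms such as CSF handle sloped terrain far better than this cell-minimum rule, which is precisely why their continued improvement, as noted above, matters for automated DTM generation.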
With technological and knowledge advancement, the efficiency of these algorithms is expected to keep improving, so that higher-quality and more accurate DTMs can be obtained automatically, reducing the time spent monitoring and correcting ground filtering. Automated ground filtering allows DTM generation to be done automatically and makes photogrammetry an alternative that avoids the high costs of technologies such as LiDAR in specific surveying applications [81].
UAV photogrammetry can also produce georeferenced orthomosaics in which terrain details can be observed. These orthomosaics provide information complementary to the topographic survey and offer several advantages that have not yet been analyzed in detail.

5. Geomorphology and Land Use/Cover

Topography and land-use (land-cover) patterns are the main characteristics of the physical environment that define the vertical accuracy and quality of a DTM. For DTM generation, UAV photogrammetry has competitive advantages in survey areas of bare lands or those with isolated or sparse vegetation, in projects of quantification of fill volumes and excavation of earthworks, in estimating the ground slope or monitoring elevation changes, in local area applications, and especially if repetitive data collection is needed [11]. However, the presence of vegetation can decrease the vertical accuracy and quality of the DTM.
This situation is explained by the passive nature of optical sensors: the images cannot penetrate the vegetation, so the vegetation's impact on the created model cannot be removed even with ground filtering algorithms. This consideration is particularly important in areas with complex morphology (Figure 9), where the resulting point cloud will have very few ground points under the vegetation and a high-quality DTM cannot be generated. Salach et al. [79] found a gradual increase in DTM error, observing a decrease in vertical accuracy of 0.10 m for every 20 cm of vegetation height. On surfaces with complex vegetation, it may not be possible to obtain enough ground points for the triangulation needed to generate a suitable DTM.
In addition, the photogrammetric method does not perform properly in areas of homogeneous texture, resulting in voids, artifacts, or sparse areas in the point cloud [2]. Elevation errors may also arise on other types of surfaces, for example, on land with buildings close to hills. In these areas, the elevation at a house's base is interpolated with the hills, causing the intermediate pixels to be falsely assigned a higher elevation value. This situation is mainly due to the error associated with DTM generation.
Regarding bodies of water, channels, or rivers, there are increasing examples of methodologies showing that bathymetry can be successfully extracted from aerial images. These methodologies are applicable under certain conditions, such as clear or shallow water. Westaway et al. [82] proposed a method to obtain bathymetry in clear water using aerial images; this methodology has been adapted to UAV images in different works (e.g., [83]), although larger errors have been observed as depth increases. In the future, UAV photogrammetry may become a viable alternative to bathymetric LiDAR, whose costs are still high and whose resolution is relatively coarse.

6. Accuracy Assessment

The quality and accuracy of the DTM result from many variables that can be grouped into four categories. The first category is related to the size of the area and its morphology [45], the types of ground coverage [79], the lighting conditions (e.g., cloudy), and the color contrast of the objects [84]. The second category is related to the UAV data collection system and its characteristics, the camera and its calibration [36], and the type of platform (multicopter or fixed-wing), which may carry a survey-grade GNSS/RTK receiver [29].
The data acquisition and flight parameters can be grouped into another category, including the flight altitude [47] and its configuration [42,43], the image overlap [51,52], the UAV flight speed [58], the flight path pattern (single or double grid) [60,61], and the acquisition of nadir or oblique images [14,62,63], in addition to the number of GCPs and their distribution [30,69]. The last category is related to the SfM approach and the algorithms used to automate ground filtering of the 3D point cloud [1].
The accuracy of a 3D point cloud can be evaluated in three different ways; in general, the data are compared to a more accurate independent source. The first involves analyzing the residuals from the bundle adjustment once the 3D model is rotated and scaled. The second is to compare the coordinates of the 3D model with CPs. The third is to analyze the residuals of the 3D model against a reference surface obtained with another technique (e.g., TLS).
In the first case, as the method does not require or use independent measurements, the result should be interpreted in terms of internal precision rather than accuracy [30]. The third case is the most expensive and can be used to compare techniques in a given application (e.g., [21,59]). The second case is the most used and is the one considered in this section; CPs must be different from GCPs, since the 3D model adapts to the GCPs and, consequently, the lowest residuals will always be achieved at these points (e.g., [46]).
In UAV photogrammetry, the horizontal accuracy is widely recognized to be slightly better than the vertical accuracy, except in extreme topography such as near-vertical cut slopes (e.g., [12]). Various studies observed that the error, measured in multiples of the GSD, was lower on flat surfaces than in complex topography.
For flat terrain, a horizontal RMSE between 1 × GSD and 3 × GSD and a vertical RMSE between 1 × GSD and 4.5 × GSD have been reported in various studies (Table 6). For complex topography, a horizontal RMSE between 1 × GSD and 7 × GSD and a vertical RMSE between 1.5 × GSD and 5 × GSD have been reported (Table 7). In Tables 6 and 7, the RMSE is indicated as a multiple of the GSD; that is, the last three columns represent the accuracy achieved relative to the GSD, which makes it easier to compare studies with different GSDs.
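The GSD-multiple envelopes above can be turned into absolute error expectations for a planned survey; the function below simply multiplies the reported ranges by a chosen GSD and is an illustrative aid, not part of any cited standard.

```python
def rmse_range_cm(gsd_cm, terrain="flat"):
    """Expected RMSE envelopes (Tables 6 and 7), in cm for a given GSD:
    flat terrain -> horizontal 1-3 x GSD, vertical 1-4.5 x GSD;
    complex topography -> horizontal 1-7 x GSD, vertical 1.5-5 x GSD."""
    mult = {"flat":    {"horizontal": (1.0, 3.0), "vertical": (1.0, 4.5)},
            "complex": {"horizontal": (1.0, 7.0), "vertical": (1.5, 5.0)}}[terrain]
    return {axis: (lo * gsd_cm, hi * gsd_cm) for axis, (lo, hi) in mult.items()}

print(rmse_range_cm(2.5))
# {'horizontal': (2.5, 7.5), 'vertical': (2.5, 11.25)}
```

For example, a flight planned at a 2.5 cm GSD over flat terrain can be expected to yield a vertical RMSE between roughly 2.5 and 11.25 cm, a useful pre-flight check against project tolerances.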
The geometric accuracy of DEMs derived from UAV photogrammetry and evaluated at CPs is commonly reported as RMSE values (Equations (4)–(7)); theoretically, the lower the RMSE, the more accurate the DEM. However, other accuracy indicators have been used in different studies, such as the standard deviation (e.g., [48,67]), the mean error (MA; e.g., [47,52]), the mean absolute error, or linear regression.
James et al. [85] indicated that the spatial variability of the error must be evaluated, since the RMSE alone cannot separate or adequately characterize systematic and random errors. Therefore, the authors recommended including error metrics that describe bias or accuracy (e.g., the mean error, the difference between the average of the measurements and the true value) and metrics that describe precision (e.g., the standard deviation of the error).
$$RMSE_x = \sqrt{\frac{\sum_{i=1}^{n}(x_{ci} - x_{vi})^2}{n}} \quad (4)$$
$$RMSE_y = \sqrt{\frac{\sum_{i=1}^{n}(y_{ci} - y_{vi})^2}{n}} \quad (5)$$
$$RMSE_z = \sqrt{\frac{\sum_{i=1}^{n}(z_{ci} - z_{vi})^2}{n}} \quad (6)$$
$$RMSE_r = \sqrt{RMSE_x^2 + RMSE_y^2} \quad (7)$$
where RMSEx, RMSEy, and RMSEz are the root-mean-square error in x, y and z, respectively; RMSEr is the horizontal root-mean-square error; xci, yci, and zci are the coordinates of the ith CP in the dataset; xvi, yvi, and zvi are the coordinates of the ith CP in the independent source of higher accuracy; n is the number of check points tested; and i is an integer ranging from 1 to n.
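Equations (4)–(7) can be computed directly from paired CP coordinates; the function name and the sample coordinates below are illustrative.

```python
import math

def rmse_components(measured, reference):
    """RMSEx, RMSEy, RMSEz, and horizontal RMSEr (Equations (4)-(7)).
    `measured`: CP coordinates from the evaluated dataset; `reference`:
    the same CPs from the independent, higher-accuracy source; both are
    lists of (x, y, z) tuples of equal length n."""
    n = len(measured)
    sq = [sum((m[k] - v[k]) ** 2 for m, v in zip(measured, reference)) / n
          for k in range(3)]                      # mean squared error per axis
    rx, ry, rz = (math.sqrt(s) for s in sq)       # Equations (4)-(6)
    rr = math.sqrt(rx ** 2 + ry ** 2)             # Equation (7)
    return rx, ry, rz, rr

# Two illustrative CPs with centimetre-level discrepancies
meas = [(10.02, 5.01, 100.05), (20.00, 6.03, 101.00)]
ref  = [(10.00, 5.00, 100.00), (20.02, 6.00, 100.96)]
rx, ry, rz, rr = rmse_components(meas, ref)
```

In practice, many more than two CPs are needed (the ASPRS standard discussed below asks for at least 25 for the vertical component); two are used here only to keep the arithmetic checkable by hand.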
To evaluate DEM accuracy, various accuracy assessment methodologies have been used [27]. The ASPRS standard [86], one of the most recent, has achieved wide diffusion and acceptance and has been applied in various UAV photogrammetry studies [9,32]. This standard defines horizontal accuracy classes (Equation (8)) in terms of the RMSEx and RMSEy values, while the vertical accuracy is computed using the RMSEz statistic in non-vegetated terrain and the 95th percentile statistic in vegetated terrain.
The accuracy is given at a 95% confidence level, assuming that the dataset errors are normally distributed and that any significant systematic errors or biases have been removed. This means that 95% of the positions in the dataset will have an error, relative to the true ground position, that is equal to or smaller than the reported accuracy value, and that 66.7% of the data will have errors no larger than the RMSE. The corresponding accuracy estimates at the 95% confidence level are computed using the NSSDA methodologies (Equations (8)–(10)).
$$Accuracy_r = 1.7308 \times RMSE_r \quad (8)$$
$$Accuracy_z\,[\mathrm{NVA}] = 1.96 \times RMSE_z \quad (9)$$
$$Accuracy_z\,[\mathrm{VVA}] = 3 \times RMSE_z \quad (10)$$
where Accuracyr is the horizontal accuracy at the 95% confidence level; Accuracyz is the vertical accuracy at the 95% confidence level; NVA denotes non-vegetated terrain; and VVA denotes vegetated terrain.
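Equations (8)–(10) reduce to simple scalings of the RMSE values; the function name below is illustrative.

```python
def accuracy_95(rmse_r, rmse_z, vegetated=False):
    """95%-confidence accuracies per Equations (8)-(10): the horizontal
    accuracy uses the 1.7308 NSSDA factor; the vertical accuracy uses
    1.96 x RMSEz in non-vegetated terrain (NVA) and 3 x RMSEz in
    vegetated terrain (VVA)."""
    acc_r = 1.7308 * rmse_r              # Equation (8)
    acc_z = (3.0 if vegetated else 1.96) * rmse_z   # Equations (9)-(10)
    return acc_r, acc_z

print(accuracy_95(0.05, 0.08))   # ≈ (0.0865, 0.1568) m for NVA
```

For example, a survey with RMSEr = 0.05 m and RMSEz = 0.08 m would be reported as approximately 0.087 m horizontal and 0.157 m vertical accuracy at 95% confidence in non-vegetated terrain.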

7. Conclusions

UAV photogrammetry is an appealing method to generate DTMs due to the less stringent requirements regarding the image acquisition geometry and the high level of automation of the geometric solution and camera calibration. UAV photogrammetry allows for obtaining DTMs with high accuracy and spatial resolution at low cost.
The main conclusions and recommendations derived from this work are mentioned below.

7.1. UAV Data Collection Systems

(a) UAV platform: commonly, the platform is acquired first; however, it is advisable to choose the platform based on the intended application. The type of platform does not influence the DEM accuracy but does influence the point cloud quality. To select the platform type, the kind of terrain, accessibility to the site, and weather conditions, among other factors, must be considered.
(b) Camera calibration: it is recommended to use camera self-calibration and follow the specifications described by Luhmann et al. [36] to estimate the calibration parameters more accurately.

7.2. Flight Planning and Image Acquisition

(a) Flight altitude: several studies indicated that both very low and very high flight altitudes affect the accuracy and quality of the DEM. According to the works cited in this document, the optimal flight altitude is between 70 and 150 m. If a smaller GSD is desired, more GCPs may be added, or vertical and oblique images may be combined to counteract the doming effect and obtain the highest DEM accuracy. In addition, it is recommended that the flight altitude adapt to the ground height in each flight line instead of maintaining a constant height above the MSL.
(b) Image overlap: for non-metric RGB digital cameras, it is recommended to use front overlaps between 70% and 90% and side overlaps between 60% and 80%. The lower the AGL, the closer the overlap should be to the upper limit.
(c) Flight speed: this variable influences the quality of the captured images. Therefore, it is necessary to estimate the base speed from the camera configuration and the maximum tolerable motion blur, as presented in Equation (3).
(d) Orientation of the flight lines and camera configuration: the use of only vertical images is not recommended, and the flight lines should not be planned only as parallel lines; other flight patterns (such as back-and-forth and spiral) should be combined, and the vertical images should be complemented with oblique images at 15–35° tilt angles. The orientation must be based on the terrain morphology. A combination of vertical and oblique images improves the accuracy of the DTM.
(e) Georeferencing: for flat terrain with a surface area of less than 50 ha, one GCP can be used for every 3 ha, with a minimum of five GCPs per surveyed surface. For complex topography, or for efficiency reasons, it is recommended to use two GCPs per 100 images. The GCPs must be distributed in a stratified manner, both at the edges and in the central part of the block, with a separation of 3 to 4 ground base units. If RTK UAV platforms are used, a minimum number of GCPs (at least one) should be added, or vertical and oblique images should be combined, to obtain accuracies similar to those of DTMs georeferenced only with GCPs.
(f) CPs: CPs should be at least three times more accurate than the required DTM accuracy [86]. At least 25 CPs should be established, distributed more densely close to essential features.

7.3. Photogrammetric DTM Generation

(a) Software: studies have observed that the choice of photogrammetric software does not influence the DEM accuracy. Therefore, the software should be selected based on its cost and the user's skills.
(b) DTM generation: it is recommended to perform ground filtering automatically. However, when ground filtering is done automatically, it must be monitored and often corrected manually.

7.4. Geomorphology and Land Use/Cover

Using UAV photogrammetry, it is not possible to obtain DTMs from all types of surfaces. Ground points must be observed in the point cloud for triangulation and DTM generation; vegetation and water are the main limitations.

7.5. Accuracy Assessment

Generally, a vertical RMSE in the range of one to five GSDs was reported in different studies. Estimating the accuracy only in terms of the RMSE is not recommended, as the spatial variation of the error cannot be observed; the ASPRS standard could also be used. Governmental agencies can establish accuracy limits for their product specifications, applications, and contracting purposes.
As previously expressed, UAVs complement existing survey methodologies, since several limitations appear with the exclusive use of a UAV for DTM generation. Despite these limitations, UAV photogrammetry has great potential in a wide range of application areas.

Author Contributions

Conceptualization, Sergio Jiménez-Jiménez and Waldo Ojeda-Bustamante; methodology, Sergio Jiménez-Jiménez, Waldo Ojeda-Bustamante, Mariana Marcial-Pablo, and Juan Enciso; software, Sergio Jiménez-Jiménez and Mariana Marcial-Pablo; literature review and investigation, Sergio Jiménez-Jiménez, Waldo Ojeda-Bustamante, Mariana Marcial-Pablo, and Juan Enciso; writing—review and editing, Sergio Jiménez-Jiménez, Waldo Ojeda-Bustamante, Mariana Marcial-Pablo, and Juan Enciso; visualization, Sergio Jiménez-Jiménez; supervision, Waldo Ojeda-Bustamante. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the USDA-NIFA grant “Diversifying the Water Portfolio for Agriculture in the Rio Grande”, award 2017-68007-26318.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Serifoglu Yilmaz, C.; Yilmaz, V.; Güngör, O. Investigating the performances of commercial and non-commercial software for ground filtering of UAV-based point clouds. Int. J. Remote Sens. 2018, 39, 5016–5042. [Google Scholar] [CrossRef]
  2. Mora, O.E.; Suleiman, A.; Chen, J.; Pluta, D.; Okubo, M.H.; Josenhans, R. Comparing sUAS Photogrammetrically-Derived Point Clouds with GNSS Measurements and Terrestrial Laser Scanning for Topographic Mapping. Drones 2019, 3, 64. [Google Scholar] [CrossRef] [Green Version]
  3. Mancini, F.; Dubbini, M.; Gattelli, M.; Stecchi, F.; Fabbri, S.; Gabbianelli, G. Using Unmanned Aerial Vehicles (UAV) for High-Resolution Reconstruction of Topography: The Structure from Motion Approach on Coastal Environments. Remote Sens. 2013, 5, 6880–6898. [Google Scholar] [CrossRef] [Green Version]
  4. Kociuba, W. Assessment of sediment sources throughout the proglacial area of a small Arctic catchment based on high-resolution digital elevation models. Geomorphology 2017, 287, 73–89. [Google Scholar] [CrossRef]
  5. Serifoglu Yilmaz, C.; Gungor, O. Comparison of the performances of ground filtering algorithms and DTM generation from a UAV-based point cloud. Geocarto Int. 2018, 33, 522–537. [Google Scholar] [CrossRef]
  6. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. “Structure-from-Motion” photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef] [Green Version]
  7. Pijl, A.; Tosoni, M.; Roder, G.; Sofia, G.; Tarolli, P. Design of Terrace Drainage Networks Using UAV-Based High-Resolution Topographic Data. Water 2019, 11, 8147. [Google Scholar] [CrossRef] [Green Version]
  8. Chudley, T.R.; Christoffersen, P.; Doyle, S.H.; Abellan, A.; Snooke, N. High-accuracy UAV photogrammetry of ice sheet dynamics with no ground control. Cryosphere 2019, 13, 955–968. [Google Scholar] [CrossRef] [Green Version]
  9. Rangel, J.M.G.; Gonçalves, G.R.; Pérez, J.A. The impact of number and spatial distribution of GCPs on the positional accuracy of geospatial products derived from low-cost UASs. Int. J. Remote Sens. 2018, 39, 7154–7171. [Google Scholar] [CrossRef]
  10. Ewertowski, M.W.; Tomczyk, A.M.; Evans, D.J.A.; Roberts, D.H.; Ewertowski, W. Operational Framework for Rapid, Very-high Resolution Mapping of Glacial Geomorphology Using Low-cost Unmanned Aerial Vehicles and Structure-from-Motion Approach. Remote Sens. 2019, 11, 65. [Google Scholar] [CrossRef] [Green Version]
  11. Rosnell, T.; Honkavaara, E. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera. Sensors 2012, 12, 453–480. [Google Scholar] [CrossRef] [Green Version]
  12. Agüera-Vega, F.; Carvajal-Ramírez, F.; Martínez-Carricondo, P.; Sánchez-Hermosilla López, J.; Mesas-Carrascosa, F.J.; García-Ferrer, A.; Pérez-Porras, F.J. Reconstruction of extreme topography from UAV structure from motion photogrammetry. Meas. J. Int. Meas. Confed. 2018, 121, 127–128. [Google Scholar] [CrossRef]
  13. Watson, C.; Kargel, J.; Tiruwa, B. UAV-Derived Himalayan Topography: Hazard Assessments and Comparison with Global DEM Products. Drones 2019, 3, 18. [Google Scholar] [CrossRef] [Green Version]
  14. Rossi, P.; Mancini, F.; Dubbini, M.; Mazzone, F.; Capra, A. Combining nadir and oblique UAV imagery to reconstruct quarry topography: Methodology and feasibility analysis. Eur. J. Remote Sens. 2017, 50, 211–221. [Google Scholar] [CrossRef] [Green Version]
  15. Śledź, S.; Ewertowski, M.W.; Piekarczyk, J. Applications of unmanned aerial vehicle (UAV) surveys and Structure from Motion photogrammetry in glacial and periglacial geomorphology. Geomorphology 2021, 378, 107620. [Google Scholar] [CrossRef]
  16. Lizarazo, I.; Angulo, V.; Rodríguez, J. Automatic mapping of land surface elevation changes from UAV-based imagery. Int. J. Remote Sens. 2017, 38, 2603–2622. [Google Scholar] [CrossRef]
  17. Enciso, J.; Jung, J.; Chang, A.; Chavez, J.; Yeom, J.; Landivar, J.; Cavazos, G. Assessing land leveling needs and performance with unmanned aerial system. J. Appl. Remote Sens. 2018, 12, 1–8. [Google Scholar] [CrossRef] [Green Version]
  18. Yeh, F.H.; Huang, C.J.; Han, J.Y.; Ge, L. Modeling Slope Topography Using Unmanned Aerial Vehicle Image Technique. In Proceedings of the Third International Conference on Sustainable Infrastructure and Built Environment (SIBE), MATEC Web of Conferences, Bandung, Indonesia, 26–27 September 2018. [Google Scholar] [CrossRef] [Green Version]
  19. Rossi, G.; Tanteri, L.; Tofani, V.; Vannocci, P.; Moretti, S.; Casagli, N. Multitemporal UAV surveys for landslide mapping and characterization. Landslides 2018, 15, 1045–1052. [Google Scholar] [CrossRef] [Green Version]
  20. Rotnicka, J.; Dłużewski, M.; Dąbski, M.; Rodzewicz, M.; Włodarski, W.; Zmarz, A. Accuracy of the UAV-Based DEM of Beach–Foredune Topography in Relation to Selected Morphometric Variables, Land Cover, and Multitemporal Sediment Budget. Estuaries Coasts 2020, 43, 1939–1955. [Google Scholar] [CrossRef]
  21. Gonçalves, G.R.; Pérez, J.A.; Duarte, J. Accuracy and effectiveness of low cost UASs and open source photogrammetric software for foredunes mapping. Int. J. Remote Sens. 2018, 39, 5059–5077. [Google Scholar] [CrossRef]
  22. Watanabe, Y.; Kawahara, Y. UAV Photogrammetry for Monitoring Changes in River Topography and Vegetation. Procedia Eng. 2016, 154, 317–325. [Google Scholar] [CrossRef] [Green Version]
  23. Jiménez-Jiménez, S.I.; Ojeda-Bustamante, W.; Ontiveros-Capurata, R.E.; Marcial-Pablo, M.D.J. Rapid urban flood damage assessment using high resolution remote sensing data and an object-based approach. Geomatics, Nat. Hazards Risk 2020, 11, 906–927. [Google Scholar] [CrossRef]
  24. Papakonstantinou, A.; Batsaris, M.; Spondylidis, S.; Topouzelis, K. A Citizen Science Unmanned Aerial System Data Acquisition Protocol and Deep Learning Techniques for the Automatic Detection and Mapping of Marine Litter Concentrations in the Coastal Zone. Drones 2021, 5, 6. [Google Scholar] [CrossRef]
  25. Polat, N.; Uysal, M. An experimental analysis of digital elevation models generated with lidar data and UAV photogrammetry. J. Indian Soc. Remote Sens. 2018, 46, 1135–1142. [Google Scholar] [CrossRef]
  26. Uysal, M.; Toprak, A.S.; Polat, N. DEM generation with UAV Photogrammetry and accuracy analysis in Sahitler hill. Meas. J. Int. Meas. Confed. 2015, 73, 539–543. [Google Scholar] [CrossRef]
  27. Ariza López, F.J.; Atkinson Gordo, A.D. Analysis of Some Positional Accuracy Assessment Methodologies. J. Surv. Eng. 2008, 134, 45–54. [Google Scholar] [CrossRef]
  28. FGDC (Federal Geographical Data Committee). Geospatial Positioning Accuracy Standards PART 4: Standards for Architecture, Engineering, Construction (A/E/C) and Facility Management; U.S. Geological Survey: Reston, VA, USA, 2002; pp. 15–23.
  29. Hugenholtz, C.; Brown, O.; Walker, J.; Barchyn, T.; Nesbit, P.; Kucharczyk, M.; Myshak, S. Spatial Accuracy of UAV-Derived Orthoimagery and Topography: Comparing Photogrammetric Models Processed with Direct Geo-Referencing and Ground Control Points. Geomatica 2016, 70, 21–30. [Google Scholar] [CrossRef]
  30. Sanz-Ablanedo, E.; Chandler, J.H.; Rodríguez-Pérez, J.R.; Ordóñez, C. Accuracy of Unmanned Aerial Vehicle (UAV) and SfM Photogrammetry Survey as a Function of the Number and Location of Ground Control Points Used. Remote Sens. 2018, 10, 1606. [Google Scholar] [CrossRef] [Green Version]
  31. Eltner, A.; Kaiser, A.; Castillo, C.; Rock, G.; Neugirg, F.; Abellán, A. Image-based surface reconstruction in geomorphometry—Merits, limits and developments. Earth Surf. Dyn. 2016, 4, 359–389. [Google Scholar] [CrossRef] [Green Version]
  32. Whitehead, K.; Hugenholtz, C.H. Applying ASPRS Accuracy Standards to Surveys from Small Unmanned Aircraft Systems (UAS). Photogramm. Eng. Remote Sens. 2015, 81, 787–793. [Google Scholar] [CrossRef]
  33. Gómez-Gutiérrez, Á.; Gonçalves, G.R. Surveying coastal cliffs using two UAV platforms (multirotor and fixed-wing) and three different approaches for the estimation of volumetric changes. Int. J. Remote Sens. 2020, 41, 8143–8175. [Google Scholar] [CrossRef]
  34. Ruggles, S.; Clark, J.; Franke, K.W.; Wolfe, D.; Reimschiissel, B.; Martin, R.A.; Okeson, T.J.; Hedengren, J.D. Comparison of SfM computer vision point clouds of a landslide derived from multiple small UAV platforms and sensors to a TLS-based model. J. Unmanned Veh. Syst. 2016, 4, 246–265. [Google Scholar] [CrossRef]
  35. Anders, N.; Masselink, R.; Keesstra, S.; Suomalainen, J. High-Res Digital Surface Modeling using Fixed-Wing UAV-based Photogrammetry. Geomorphometry 2013, 2013. [Google Scholar] [CrossRef] [Green Version]
  36. Luhmann, T.; Fraser, C.; Maas, H.G. Sensor modelling and camera calibration for close-range photogrammetry. ISPRS J. Photogramm. Remote Sens. 2016, 115, 37–46. [Google Scholar] [CrossRef]
  37. Zhou, Y.; Rupnik, E.; Meynard, C.; Thom, C.; Pierrot-Deseilligny, M. Simulation and Analysis of Photogrammetric UAV Image Blocks—Influence of Camera Calibration Error. Remote Sens. 2020, 12, 22. [Google Scholar] [CrossRef] [Green Version]
  38. SPH Engineering. UgCS User Manual v.3.4; SPH Engineering: Riga, Latvia, 2016. [Google Scholar]
  39. Singh, K.K.; Frazier, A.E. A meta-analysis and review of unmanned aircraft system (UAS) imagery for terrestrial applications. Int. J. Remote Sens. 2018, 39, 5078–5098. [Google Scholar] [CrossRef]
  40. El Meouche, R.; Hijazi, I.; Poncet, P.A.; Abunemeh, M.; Rezoug, M. UAV photogrammetry implementation to enhance land surveying, comparisons and possibilities. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences—ISPRS Archives, Athens, Greece, 20–21 October 2016. [Google Scholar] [CrossRef] [Green Version]
  41. Reger, M.; Bauerdick, J.; Bernhardt, H. Drones in Agriculture: Current and future legal status in Germany, the EU, the USA and Japan. Landtechnik 2018, 73, 62–79. [Google Scholar] [CrossRef]
  42. Kozmus Trajkovski, K.; Grigillo, D.; Petrovič, D. Optimization of UAV Flight Missions in Steep Terrain. Remote Sens. 2020, 12, 1293. [Google Scholar] [CrossRef] [Green Version]
  43. Thomas, A.F.; Frazier, A.E.; Mathews, A.J.; Cordova, C.E. Impacts of Abrupt Terrain Changes and Grass Cover on Vertical Accuracy of UAS-SfM Derived Elevation Models. Pap. Appl. Geogr. 2020, 6, 1–16. [Google Scholar] [CrossRef]
  44. Gómez-Candón, D.; De Castro, A.I.; López-Granados, F. Assessing the accuracy of mosaics from unmanned aerial vehicle (UAV) imagery for precision agriculture purposes in wheat. Precis. Agric. 2014, 15, 44–56. [Google Scholar] [CrossRef] [Green Version]
  45. Agüera-Vega, F.; Carvajal-Ramírez, F.; Martínez-Carricondo, P. Accuracy of Digital Surface Models and Orthophotos Derived from Unmanned Aerial Vehicle Photogrammetry. J. Surv. Eng. 2017, 143, 04016025. [Google Scholar] [CrossRef]
  46. Rock, G.; Ries, J.B.; Udelhoven, T. Sensitivity analysis of UAV-photogrammetry for creating digital elevation models (DEM). In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Zurich, Switzerland, 14–16 September 2011; pp. 1–5. [Google Scholar] [CrossRef] [Green Version]
  47. Yurtseven, H. Comparison of GNSS-, TLS- and Different Altitude UAV-Generated Datasets on The Basis of Spatial Differences. ISPRS Int. J. Geo-Inf. 2019, 8, 175. [Google Scholar] [CrossRef] [Green Version]
  48. Zimmerman, T.; Jansen, K.; Miller, J. Analysis of UAS Flight Altitude and Ground Control Point Parameters on DEM Accuracy along a Complex, Developed Coastline. Remote Sens. 2020, 12, 2305. [Google Scholar] [CrossRef]
  49. Štroner, M.; Urban, R.; Reindl, T.; Seidl, J.; Brouček, J. Evaluation of the Georeferencing Accuracy of a Photogrammetric Model Using a Quadrocopter with Onboard GNSS RTK. Sensors 2020, 20, 2318. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Štroner, M.; Urban, R.; Seidl, J.; Reindl, T.; Brouček, J. Photogrammetry Using UAV-Mounted GNSS RTK: Georeferencing Strategies without GCPs. Remote Sens. 2021, 13, 1336. [Google Scholar] [CrossRef]
  51. Haala, N.; Cramer, M.; Rothermel, M. Quality of 3D point clouds from highly overlapping UAV imagery. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 183–188. [Google Scholar] [CrossRef] [Green Version]
  52. Sadeq, H.A. Accuracy assessment using different UAV image overlaps. J. Unmanned Veh. Syst. 2019, 7, 175–193. [Google Scholar] [CrossRef]
  53. Agisoft LLC. Agisoft Metashape User Manual—Professional Edition, Version 1.5; Agisoft LLC: St. Petersburg, Russia, 2019. [Google Scholar]
  54. Pix4D. Pix4DMapper 4.1 User Manual; Pix4D SA: Lausanne, Switzerland, 2017. [Google Scholar]
  55. Di Franco, C.; Buttazzo, G. Coverage Path Planning for UAVs Photogrammetry with Energy and Resolution Constraints. J. Intell. Robot. Syst. Theory Appl. 2016, 83, 445–462. [Google Scholar] [CrossRef]
  56. Liu, C.; Akbar, A.; Wu, H. Dynamic Model Constrained Optimal Flight Speed Determination of Surveying UAV under Wind Condition. In Proceedings of the International Conference on Geoinformatics, Kunming, China, 28–30 June 2018. [Google Scholar] [CrossRef]
  57. Dandois, J.P.; Olano, M.; Ellis, E.C. Optimal Altitude, Overlap, and Weather Conditions for Computer Vision UAV Estimates of Forest Structure. Remote Sens. 2015, 7, 13895–13920. [Google Scholar] [CrossRef] [Green Version]
  58. Roth, L.; Hund, A.; Aasen, H. PhenoFly Planning Tool: Flight planning for high-resolution optical remote sensing with unmanned areal systems. Plant Methods 2018, 14, 1–21. [Google Scholar] [CrossRef] [Green Version]
  59. Meinen, B.U.; Robinson, D.T. Streambank topography: An accuracy assessment of UAV-based and traditional 3D reconstructions. Int. J. Remote Sens. 2020, 41, 1–18. [Google Scholar] [CrossRef] [Green Version]
  60. Ali, H.H.; Abed, F.M. The impact of UAV flight planning parameters on topographic mapping quality control. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2019; pp. 1–11. [Google Scholar] [CrossRef]
  61. James, M.R.; Robson, S. Mitigating systematic error in topographic models derived from UAV and ground-based image networks. Earth Surf. Process. Landf. 2014, 39, 1413–1420. [Google Scholar] [CrossRef] [Green Version]
62. Sanz-Ablanedo, E.; Chandler, J.H.; Ballesteros-Pérez, P.; Rodríguez-Pérez, J.R. Reducing systematic dome errors in digital elevation models through better UAV flight design. Earth Surf. Process. Landf. 2020, 45, 2134–2147. [Google Scholar] [CrossRef]
  63. Nesbit, P.; Hugenholtz, C. Enhancing UAV–SfM 3D Model Accuracy in High-Relief Landscapes by Incorporating Oblique Images. Remote Sens. 2019, 11, 239. [Google Scholar] [CrossRef] [Green Version]
  64. Ferrer-González, E.; Agüera-Vega, F.; Carvajal-Ramírez, F.; Martínez-Carricondo, P. UAV Photogrammetry Accuracy Assessment for Corridor Mapping Based on the Number and Distribution of Ground Control Points. Remote Sens. 2020, 12, 2447. [Google Scholar] [CrossRef]
  65. Gindraux, S.; Boesch, R.; Farinotti, D. Accuracy Assessment of Digital Surface Models from Unmanned Aerial Vehicles’ Imagery on Glaciers. Remote Sens. 2017, 9, 186. [Google Scholar] [CrossRef] [Green Version]
  66. Tahar, K.N. An evaluation on different number of ground control points in unmanned aerial vehicle photogrammetric block. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences—ISPRS Archives, Istanbul, Turkey, 27–29 November 2013. [Google Scholar]
  67. Jiménez-Jiménez, S.I.; Ojeda-Bustamante, W.; Ontiveros-Capurata, R.E.; Flores-Velázquez, J.; Marcial-Pablo, M.D.J.; Robles-Rubio, B.D. Quantification of the error of digital terrain models derived from images acquired with UAV. Ing. Agrícola Biosist. 2017, 9, 85–100. [Google Scholar] [CrossRef]
  68. Coveney, S.; Roberts, K. Lightweight UAV digital elevation models and orthoimagery for environmental applications: Data accuracy evaluation and potential for river flood risk modelling. Int. J. Remote Sens. 2017, 38, 3159–3180. [Google Scholar] [CrossRef] [Green Version]
  69. Martínez-Carricondo, P.; Agüera-Vega, F.; Carvajal-Ramírez, F.; Mesas-Carrascosa, F.J.; García-Ferrer, A.; Pérez-Porras, F.-J. Assessment of UAV-photogrammetric mapping accuracy based on variation of ground control points. Int. J. Appl. Earth Obs. Geoinf. 2018, 72, 1–10. [Google Scholar] [CrossRef]
  70. Santise, M.; Fornari, M.; Forlani, G.; Roncella, R. Evaluation of DEM generation accuracy from UAS imagery. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, XL-5, 529–536. [Google Scholar] [CrossRef] [Green Version]
  71. Agüera-Vega, F.; Carvajal-Ramírez, F.; Martínez-Carricondo, P. Assessment of photogrammetric mapping accuracy based on variation ground control points number using unmanned aerial vehicle. Meas. J. Int. Meas. Confed. 2017, 98, 221–227. [Google Scholar] [CrossRef]
  72. Harwin, S.; Lucieer, A. Assessing the Accuracy of Georeferenced Point Clouds Produced via Multi-View Stereopsis from Unmanned Aerial Vehicle (UAV) Imagery. Remote Sens. 2012, 4, 1573–1599. [Google Scholar] [CrossRef] [Green Version]
  73. Taddia, Y.; Stecchi, F.; Pellegrinelli, A. Coastal Mapping Using DJI Phantom 4 RTK in Post-Processing Kinematic Mode. Drones 2020, 4, 9. [Google Scholar] [CrossRef] [Green Version]
  74. Casella, V.; Chiabrando, F.; Franzini, M.; Manzino, A.M. Accuracy Assessment of a UAV Block by Different Software Packages, Processing Schemes and Validation Strategies. ISPRS Int. J. Geo-Inf. 2020, 9, 164. [Google Scholar] [CrossRef] [Green Version]
  75. Jaud, M.; Passot, S.; Le Bivic, R.; Delacourt, C.; Grandjean, P.; Le Dantec, N. Assessing the Accuracy of High Resolution Digital Surface Models Computed by PhotoScan® and MicMac® in Sub-Optimal Survey Conditions. Remote Sens. 2016, 8, 465. [Google Scholar] [CrossRef] [Green Version]
  76. Sona, G.; Pinto, L.; Pagliari, D.; Passoni, D.; Gini, R. Experimental analysis of different software packages for orientation and digital surface modelling from UAV images. Earth Sci. Inform. 2014, 7, 97–107. [Google Scholar] [CrossRef]
  77. Alfio, V.S.; Costantino, D.; Pepe, M. Influence of Image TIFF Format and JPEG Compression Level in the Accuracy of the 3D Model and Quality of the Orthophoto in UAV Photogrammetry. J. Imaging 2020, 6, 30. [Google Scholar] [CrossRef]
  78. Fonstad, M.A.; Dietrich, J.T.; Courville, B.C.; Jensen, J.L.; Carbonneau, P.E. Topographic structure from motion: A new development in photogrammetric measurement. Earth Surf. Process. Landf. 2013, 38, 421–430. [Google Scholar] [CrossRef] [Green Version]
  79. Salach, A.; Bakuła, K.; Pilarska, M.; Ostrowski, W.; Górski, K.; Kurczyński, Z. Accuracy Assessment of Point Clouds from LiDAR and Dense Image Matching Acquired Using the UAV Platform for DTM Creation. ISPRS Int. J. Geo-Inf. 2018, 7, 342. [Google Scholar] [CrossRef] [Green Version]
  80. Simpson, J.E.; Smith, T.E.L.; Wooster, M.J. Assessment of Errors Caused by Forest Vegetation Structure in Airborne LiDAR-Derived DTMs. Remote Sens. 2017, 9, 1101. [Google Scholar] [CrossRef] [Green Version]
81. Birdal, A.C.; Avdan, U.; Türk, T. Estimating tree heights with images from an unmanned aerial vehicle. Geomat. Nat. Hazards Risk 2017, 8, 1144–1156. [Google Scholar] [CrossRef] [Green Version]
  82. Westaway, R.M.; Lane, S.N.; Hicks, D.M. Remote sensing of clear-water, shallow, gravel-bed rivers using digital photogrammetry. Photogramm. Eng. Remote Sens. 2001, 67, 1271–1281. [Google Scholar]
  83. Tamminga, A.; Hugenholtz, C.; Eaton, B.; Lapointe, M. Hyperspatial Remote Sensing of Channel Reach Morphology and Hydraulic Fish Habitat Using an Unmanned Aerial Vehicle (UAV): A First Assessment in the Context of River Research and Management. River Res. Appl. 2014, 31, 379–391. [Google Scholar] [CrossRef]
  84. Wierzbicki, D.; Kedzierski, M.; Fryskowska, A. Assesment of the influence of UAV image quality on the orthophoto production. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Toronto, Canada, 30 August–2 September 2015. [Google Scholar] [CrossRef] [Green Version]
85. James, M.R.; Chandler, J.H.; Eltner, A.; Fraser, C.; Miller, P.E.; Mills, J.P.; Noble, T.; Robson, S.; Lane, S.N. Guidelines on the use of structure-from-motion photogrammetry in geomorphic research. Earth Surf. Process. Landf. 2019, 44, 2081–2084. [Google Scholar] [CrossRef]
  86. ASPRS. ASPRS Positional Accuracy Standards for Digital Geospatial Data. Photogramm. Eng. Remote Sens. 2015, 81, 1–26. [Google Scholar] [CrossRef]
Figure 1. Flight planning parameters.
Figure 2. (a) Decreased overlap in hilly terrain for a flight at a constant height above mean sea level (MSL), (b) digital terrain model (DTM) generated from images acquired at a constant height above the MSL, and (c) the number of overlapping images, showing fewer overlaps over the highest areas.
Figure 3. Flight plans with different flight-line orientations: (a) 0°, (b) 45°, and (c) 120°, with an image overlap of 75%, created with UgCS software.
Figure 4. Workflow to generate DTMs.
Figure 5. The location and superposition of images after photogrammetric restitution. Top: misaligned images; bottom: aligned images.
Figure 6. Point cloud: (a) sparse and (b) dense.
Figure 7. Dense point cloud classification.
Figure 8. (a) Digital surface model and (b) digital terrain model [16].
Figure 9. DTM for (a) cleared areas, (b) isolated vegetation, (c) urban areas, and (d) dense vegetation and complex morphology.
Table 2. Studies analyzing flight altitude above ground level (AGL) vs. accuracy.

| Author | Application | Platform | AGL (GSD, cm/pixel) | GCPs | Area (ha) | Vertical Root Mean Square Error (RMSE) (m) |
|---|---|---|---|---|---|---|
| Gómez-Candón et al. [44] | Agriculture, flat terrain | Multicopter | 30 m (0.74); 60 m (1.48); 100 m (2.47) | 45 | 1 ha × 2 plots | 3D RMSE: 0.015; 0.026; 0.025 |
| Agüera-Vega et al. [45] | Five topographic surfaces (from flat to very rugged) | Multicopter | 50 m (1.2); 80 m (1.9); 100 m (2.4); 120 m (2.9) | 10 | Average of 2.5 ha × 5 plots | 0.032–0.06; 0.07–0.080; 0.06–0.075; 0.06–0.08 |
| Rock et al. [46] | Flat areas, several piles of debris, and steep faces | Fixed wing | 70 m; 100 m; 150 m; 200 m; 300 m; 550 m | Different combinations depending on the AGL | N/A | 70 m: 0.17–1.50; 100 m: 0.13–0.72; 150 m: 0.19–0.50; 200 m: 0.20–1.00; 300 m: 0.20–1.20; 550 m: 0.50–1.39 |
| Zimmerman et al. [48] | Flat and complex morphology | Multicopter | 67 m (1.67); 91 m (2.27); 116 m (2.89) | 15 | 25 | 0.038; 0.030; 0.021 |
| Yurtseven [47] | Flat terrain | Multicopter | 25 m (1); 50 m (2); 120 m (4.8) | 15 | 0.6 | 0.74; 0.39; 0.09 |
| Yurtseven [47] | Flat terrain | Fixed wing | 350 m (9.4) | 15 | 0.6 | 0.11 |

Note: N/A: not available.
Table 3. Characteristics of the digital camera used for flight plans.

| Parameter | Value |
|---|---|
| Brand | Sony |
| Focal distance | 16 mm |
| Resolution | 24 megapixels (6000 × 4000 pixels) |
| Sensor size | 23.4 × 15.6 mm |
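The GSD delivered by a camera such as the one in Table 3 at a given flight altitude follows directly from the sensor geometry. A minimal sketch (the function name is illustrative; the formula is the standard nadir-imaging relation between sensor width, focal length, image width, and altitude):

```python
def gsd_cm_per_px(sensor_width_mm: float, focal_length_mm: float,
                  image_width_px: int, altitude_m: float) -> float:
    """Ground sample distance (cm/pixel) of a nadir image."""
    # GSD = (sensor width x flight altitude) / (focal length x image width),
    # with metres converted to centimetres.
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Camera from Table 3: 16 mm focal length, 23.4 mm sensor width, 6000 px image width.
print(round(gsd_cm_per_px(23.4, 16.0, 6000, 100.0), 2))  # 2.44 cm/pixel at 100 m AGL
```

Inverting the same relation gives the flight altitude needed to reach a target GSD, which is how the AGL values in Table 2 pair with their GSDs.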
Table 4. Recommendations regarding the number of GCPs reported in different investigations.

| Author | Area (ha) | Platform | Average GSD (cm/pixel) | Terrain Morphology | GCPs Used | Recommendation |
|---|---|---|---|---|---|---|
| Tahar [66] | N/A | N/A | N/A | N/A | 4–12 | Establish at least seven GCPs |
| Jiménez-Jiménez et al. [67] | 37 | Multicopter | 2.0 | Relatively flat terrain | 4–11 | Establish at least five GCPs, one GCP for every 3 ha, to obtain vertical RMSE values close to 3 × GSD |
| Coveney and Roberts [68] | 29 | Fixed wing | 3.5 | Flat terrain | 0–61 | Use one GCP for every 2 ha to obtain vertical RMSE values close to 2 × GSD |
| Santise et al. [70] | 25 | Multicopter | 4.0 | Relatively flat terrain | 9–28 | Use nine GCPs for 25 ha (about one GCP for every 3 ha) to obtain vertical RMSE values close to 2.5 × GSD, or 28 GCPs (about one GCP/ha) for vertical RMSE values close to 1.3 × GSD |
| Martínez-Carricondo et al. [69] | 17 | Multicopter | 3.3 | Wide range of slope values | 4–36 | With one GCP/ha, a vertical RMSE of about 1.6 × GSD can be achieved |
| Agüera-Vega et al. [71] | 17.64 | Multicopter | 3.3 | Combination of several terrain morphologies | 4–20 | In a 17.64 ha survey, 15 GCPs are necessary to achieve optimal results (vertical RMSE ≈ 2 × GSD) |
| Sanz-Ablanedo et al. [30] | 1225 | Fixed wing | 6.86 | Varying topography, approximately square-shaped | 3–101 | Use four GCPs per 100 images to obtain vertical RMSE values close to 1.5 × GSD |
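The density heuristics in Table 4 can be turned into a quick planning check. A small sketch, taking as defaults the "at least five GCPs, about one per 3 ha" rule of Jiménez-Jiménez et al. [67] (the function name and default values are illustrative, not a prescription from any single study):

```python
import math

def recommended_gcps(area_ha: float, ha_per_gcp: float = 3.0, minimum: int = 5) -> int:
    """GCP count under a simple density heuristic.

    Defaults follow Jiménez-Jiménez et al. [67]: at least five GCPs,
    roughly one per 3 ha. Pass a different ha_per_gcp to apply other
    rules from Table 4 (e.g., one per 2 ha per Coveney and Roberts [68]).
    """
    return max(minimum, math.ceil(area_ha / ha_per_gcp))

print(recommended_gcps(37))       # 13 GCPs for a 37 ha survey
print(recommended_gcps(29, 2.0))  # 15 GCPs under the one-per-2-ha rule
```

Note that these densities trade survey effort against vertical accuracy: the tighter rules in Table 4 (one GCP/ha) roughly halve the vertical RMSE relative to the sparser ones.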
Table 5. Recommendations on the distribution of GCPs as reported in different investigations.

| Author | Area (ha) | Terrain Morphology | GCPs | Configurations Tested | Recommendation |
|---|---|---|---|---|---|
| Zimmerman et al. [48] | 25 | Flat and complex morphology | 15 | 9 | An optimal GCP configuration covers all four corners of the site and the highest and lowest elevations, with sufficient site coverage |
| Harwin and Lucieer [72] | N/A | Low and high slopes | 5–27 | 6 | Distribute the GCPs throughout the focus area and adapt them to the relief, placing more GCPs in steeper terrain |
| Rangel et al. [9] | 270 | Abrupt changes of slope and flat areas | 6–54 | 13 | Place GCPs in the central part of the block with a horizontal separation of 3 to 4 ground base units for the altimetric component, and on the periphery of the block with a separation of 7 to 8 ground base units for the planimetric component |
| Martínez-Carricondo et al. [69] | 17 | Wide range of slope values | 4–36 | 300 | The best accuracies were achieved by placing GCPs around the edge of the study area; however, it was also essential to place GCPs inside the area with a stratified distribution to optimize vertical accuracy |
Table 6. The RMSE reported in different UAV photogrammetry studies for flat terrain.

| Author (Software) | Area (ha) | Terrain Morphology | Image Type | Overlap (Front, Side, %) | Platform | GCP, CP | GSD (cm/pixel) | RMSE xy (× GSD) | RMSE z (× GSD) | RMSE 3D (× GSD) |
|---|---|---|---|---|---|---|---|---|---|---|
| Mora et al. [2] (Pix4D) | 1.7 | Relatively flat terrain | Vertical (nadir) | 85, 75 | Multicopter | 5, 19 | 0.75 | 2.6 | 4.3 | 5.1 |
| Ewertowski et al. [10] (PhotoScan) | 100.0 | Relatively flat terrain, 20 m maximum elevation difference (MED) | Nadir and oblique | 80, 80 | Multicopter | 30, 15 | 1.9 | N/A | N/A | 4.2 |
| Yurtseven [47] (Agisoft PhotoScan) | 0.6 | Flat terrain | Nadir | N/A | Multicopter | 15, 360 | 4.8 | 2.2 | 1.9 | 3.0 |
| Yurtseven [47] (Agisoft PhotoScan) | 0.6 | Flat terrain | Nadir | N/A | Fixed wing | 15, 360 | 9.4 | 1.2 | 1.2 | 1.7 |
| Jiménez-Jiménez et al. [67] (PhotoScan) | 37.4 | Relatively flat terrain | Nadir | 75, 75 | Multicopter | 11, 12 | 2.0 | 1.8 | 3.0 | 3.5 |
| Sadeq [52] (Pix4D) | 9.1 | Flat terrain | Nadir and oblique | 90, 45 | Multicopter | 6, 41 | 1.4 | 0.9 | 2.6 | 2.8 |
| Rossi et al. [14] | 9.0 | Relatively flat terrain, 35 m MED | Nadir | 90, 90 | Multicopter | 18, 5 | 1.0 | 2.1 | 2.0 | 2.9 |
| Rossi et al. [14] | 9.0 | Relatively flat terrain, 35 m MED | Nadir and oblique (60°) | 90, 90 | Multicopter | 18, 5 | 1.0 | 2.0 | 2.0 | 2.9 |
| Ferrer-González et al. [64] | 40.0 | Relatively flat terrain, 30 m MED | Nadir | 80, 60 | Multicopter | 18, 29 | 1.75 | 1.6 | 3.2 | 3.5 |
| Santise et al. [70] | 25 | Relatively flat terrain | Nadir | 80, 40 | Multicopter | 35, 101 | 4.0 | 1.6 | 1.1 | 2.0 |
| Mean | | | | | | | | 1.8 | 2.3 | 3.2 |
| Minimum | | | | | | | | 0.9 | 1.1 | 1.7 |
| Maximum | | | | | | | | 2.6 | 4.3 | 5.1 |

N/A: not available.
Table 7. The RMSE reported in different UAV photogrammetry studies for complex topography.

| Author (Software) | Area (ha) | Terrain Morphology | Image Type | Overlap (Front, Side, %) | Platform | GCP, CP | GSD (cm/pixel) | RMSE xy (× GSD) | RMSE z (× GSD) | RMSE 3D (× GSD) |
|---|---|---|---|---|---|---|---|---|---|---|
| Mancini et al. [3] (Agisoft PhotoScan) | 2.75 | Very complex topography | Nadir | N/A | Multicopter | 18, 126 | 0.6 | N/A | N/A | 18 |
| Agüera-Vega et al. [12] (Pix4D) | 0.65 | Complex topography, 70 m MED | Horizontal | 90, 60 | Multicopter | 5, 18 | 1.86 | 6.3 | 4.3 | 7.6 |
| Agüera-Vega et al. [12] (Pix4D) | 0.65 | Complex topography, 70 m MED | Oblique (45° tilted) | 90, 60 | Multicopter | 5, 18 | 1.86 | 7.3 | 5.4 | 9.1 |
| Agüera-Vega et al. [12] (Pix4D) | 0.65 | Complex topography, 70 m MED | Horizontal and oblique | 90, 60 | Multicopter | 5, 18 | 1.86 | 4.7 | 3.3 | 5.7 |
| Lizarazo et al. [16] | 5.0 | Complex topography, 100 m MED | Nadir | 80, 60 | Multicopter | 5, 30 | 4.0 | N/A | 5.1 | N/A |
| Agüera-Vega et al. [45] (Agisoft PhotoScan) | 2.1 | Average slope of 30%, 25 m MED | Nadir | 90, 80 | Multicopter | 10, 15 | 1.9 | 3.2 | 4.5 | 5.5 |
| Sanz-Ablanedo et al. [30] (Agisoft PhotoScan) | 1225 | Varying topography | Nadir | 75, 60 | Multicopter | 101 | 6.86 | 1.0 | 1.5 | 1.8 |
| Mean | | | | | | | | 3.8 | 3.5 | 6.7 |
| Minimum | | | | | | | | 1.0 | 1.5 | 1.8 |
| Maximum | | | | | | | | 7.3 | 5.4 | 18 |

N/A: not available.
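The RMSE values in Tables 6 and 7 are checkpoint statistics expressed as multiples of the GSD. A short sketch of how such values are computed from checkpoint residuals (function names are illustrative):

```python
import math

def rmse(residuals_m):
    """Root mean square error of checkpoint residuals (metres)."""
    return math.sqrt(sum(r * r for r in residuals_m) / len(residuals_m))

def rmse_in_gsd(residuals_m, gsd_cm_per_px):
    """Express an RMSE (m) as a multiple of the GSD (cm/pixel)."""
    return rmse(residuals_m) * 100.0 / gsd_cm_per_px

def rmse_3d(rmse_xy, rmse_z):
    """Combine horizontal and vertical RMSE into a 3D RMSE."""
    return math.sqrt(rmse_xy ** 2 + rmse_z ** 2)

# Example: the Mora et al. [2] row of Table 6 combines xy and z components.
print(round(rmse_3d(2.6, 4.3), 1))  # 5.0, matching the reported 5.1 up to rounding
```

Normalizing by the GSD in this way is what allows results from surveys flown at different altitudes and with different cameras to be compared in a single table.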
MDPI and ACS Style

Jiménez-Jiménez, S.I.; Ojeda-Bustamante, W.; Marcial-Pablo, M.d.J.; Enciso, J. Digital Terrain Models Generated with Low-Cost UAV Photogrammetry: Methodology and Accuracy. ISPRS Int. J. Geo-Inf. 2021, 10, 285. https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi10050285
