Article

Influence of Agisoft Metashape Parameters on UAS Structure from Motion Individual Tree Detection from Canopy Height Models

Wade T. Tinkham and Neal C. Swayze
Department of Forest and Rangeland Stewardship, Colorado State University, Fort Collins, CO 80523, USA
* Author to whom correspondence should be addressed.
Submission received: 2 February 2021 / Revised: 15 February 2021 / Accepted: 17 February 2021 / Published: 22 February 2021
(This article belongs to the Special Issue Forestry Applications of Unmanned Aerial Vehicles (UAVs) 2020)

Abstract

Applications of unmanned aerial systems for forest monitoring are increasing and drive a need to understand how image processing workflows impact the accuracy of end-user products from tree detection methods. Increasing image overlap and making acquisitions at lower altitudes improve how structure from motion point clouds represent forest canopies. However, only limited testing has evaluated how image resolution and point cloud filtering impact the detection of individual tree locations and heights. We evaluate how Agisoft Metashape’s build dense cloud Quality (image resolution) and depth map filter settings influence tree detection from canopy height models in ponderosa pine forests. Finer resolution imagery with minimal filtering provided the best visual representation of vegetation detail for trees of all sizes. These same settings maximized tree detection F-score at >0.72 for overstory (>7 m tall) and >0.60 for understory trees. Additionally, overstory tree height bias and precision improve as image resolution becomes finer. Overstory and understory tree detection in open-canopy conifer systems might be optimized using the finest resolution imagery that computer hardware enables, while applying minimal point cloud filtering. The extended processing time and data storage demands of high-resolution imagery must be balanced against small reductions in tree detection performance when down-scaling image resolution to allow the processing of greater data extents.

1. Introduction

The monitoring of forest structure through remotely sensed individual tree observations has rapidly expanded through advancements in airborne light detection and ranging (LiDAR) [1,2] and unmanned aerial system (UAS) photogrammetry [3,4]. Modern UAS structure from motion (SfM) algorithms are proving capable of producing higher density point clouds (100s of points m−2) for characterizing forest canopy structure than current airborne LiDAR technology (10s of points m−2) [3]. This increased point cloud density could improve fine resolution detail within canopy height models (CHMs), potentially allowing for more accurate use of individual tree detection (ITD) algorithms that extract tree metrics by searching CHMs for local maxima within a moving search window [5]. These individual tree techniques have been demonstrated across a range of conifer forest types to accurately characterize overstory tree locations and heights [1,6,7]. When combined with tree height-based allometries, these techniques are able to characterize second-order properties such as basal area and timber volume per hectare [8,9]. Such techniques could be maximized in open-canopy systems, such as ponderosa pine (Pinus ponderosa var. scopulorum Dougl. Ex Laws.) dominated forests, as these forests are characterized by conical crowns resulting in a singular treetop and a matrix of tree clumps where neighbors are generally similar in height [10]. Ponderosa pine forests are ideal for ITD monitoring due to their relatively open nature and tendency toward a single vertical stratum [4]. Despite the rapidly expanding use of ITD methods with UAS-SfM derived CHMs, there has been limited testing of how photogrammetric processing parameters influence point clouds and their derived products, such as CHMs and subsequent estimates of tree locations and heights.
There are many decisions in collecting and processing UAS imagery for monitoring forest settings (e.g., UAS platforms, image sensors, photogrammetry software). However, only limited investigation has evaluated how these decisions impact image alignment and point cloud completeness, let alone end-user products such as CHMs or detected tree attributes. Increasing UAS image overlap has been found to improve image alignment and point density in forested environments [11,12]. Additionally, incorporating oblique images into UAS-SfM modeling has been found to improve spatial accuracy and reduce point cloud data gaps [13]. However, these gains must be balanced against efficiency, as greater image density requires increased acquisition and processing time. Comparisons of commercially available UAS-SfM software have pointed to Agisoft Metashape (formerly PhotoScan) as providing higher image alignment rates, finer orthomosaic resolution, and improved SfM ground sample distances over software such as Pix4D [14,15]. However, when it comes to UAS-SfM processing software decisions, less is understood about how processing parameter selection impacts end-user products such as detected tree locations and heights.
Processing workflows should be evaluated for specific end-user products and the forest structures being characterized to ensure the documentation of clear best-use practices and quantitative evidence of factors affecting data quality before transitioning these technologies into operational management tools. Limited testing has evaluated how image resolution and point cloud filtering impact the detection and accuracy of individual trees and their heights. Image resolution is a standard parameter in SfM software, where images can be processed at the original resolution or downscaled to coarser resolutions to save on processing time and data storage requirements. Some studies point to finer image resolutions as providing increased data density and improved vertical representation within point clouds [16], but retaining the original image resolution is known to substantially increase processing time [14]. The evaluation of UAS-SfM data for ITD has also suggested that retaining the original image resolution provides greater detail within the reconstructed forest canopy [17]. However, the same study highlights how too fine an image resolution can generate greater numbers of outlier points, depending on image blurring caused by canopy movement in windy conditions.
Decisions during image acquisition and processing directly impact the reliability of point locations, with the depth of forest vegetation causing unique challenges that are not often considered in other applications. Forest settings often have greater than 10 m of vertical relief, resulting in substantially different ground sampling distances (i.e., pixel size) at the top of a tree and the ground level [18]. At the same time, having different distances from the UAS camera to the top of the canopy and ground surface leads to reduced image overlap at the top of the canopy compared to the ground level. The planned image overlap percentage typically used in flight planning software applies to the distance between the UAS and the ground, and does not consider the vegetation height. Beyond how forest structure complicates flight planning, atmospheric stability plays an important role in image matching and point cloud fidelity, as the movement of the forest canopy by wind can substantially increase the number of noise or outlier points created in SfM point clouds [11,12]. While not all SfM software programs provide point cloud outlier filtering capabilities, SfM processing routines in forested environments are beginning to implement some level of filtering to remove outlier data points that can occur due to poor image alignment or the movement of vegetation between images because of wind [17]. The approaches include a range of depth map agreement strategies [19], SfM point confidence filtering based on the number of images generating a point [20], and more traditional local point density filters [21]. Despite these early insights, there has not been a comprehensive evaluation of how processing decisions related to image resolution and point cloud filtering impact metrics such as omission, commission, and tree height accuracy from ITD methods.
This study evaluates the influence of image resolution and intensity of point cloud filtering on Metashape UAS-SfM point cloud generation and individual tree detection in a ponderosa pine forest. Specifically, we assess (1) how image resolution and point cloud filtering affect overall and ground point return density, along with total processing time and subsequent file storage size; and (2) how image resolution and point cloud filtering influence omission, commission, and tree height accuracy for trees detected from SfM-derived CHMs.

2. Materials and Methods

2.1. Study Area

This study was conducted in a 2 ha area (120 m × 166.7 m) of ponderosa pine-dominated forest at the Manitou Experimental Forest within the Pike-San Isabel National Forest of Colorado (Figure 1). The location has an average elevation of 2500 m and slopes mildly (<5%) to the southeast. The area is an all-aged ponderosa pine forest, with minor regeneration of Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco var. glauca (Beissn.) Franco) and blue spruce (Picea pungens Engelm.). The understory is sparse, consisting of native grasses and low-growing woody shrubs.
The site was stem mapped in July 2018, including an inventory of trees >1.37 m tall for species, diameter at breast height (1.37 m; DBH), and height, hereafter referred to as survey trees (Figure 1). Tree heights were estimated using a Laser Technology TruPulse 200 laser rangefinder; similar instruments are known to provide ~10% precision [22] and to overestimate the heights of taller trees [23]. Based on repeated inventorying of this site, average tree height growth is estimated at 0.2 m year−1, well within the precision of the field measurements. Survey trees were located based on distance and angle from survey points established by closed transect with a total station. The survey points were geolocated with a Trimble GeoXT using differential correction to an accuracy of 0.60 m. The inventory observed 1391 trees, providing a density of 695 trees hectare−1 and 24.86 m2 hectare−1 of basal area. Of the surveyed trees, 458 were >7 m tall, roughly corresponding to a 10 cm DBH, a common division between overstory and sapling monitoring in the region, leaving 933 saplings in the study area (Figure 1).

2.2. UAS Data Collection

UAS imagery was collected using a DJI Phantom 4 Pro multirotor aircraft (Dá-Jiang Innovations Science and Technology Co. Ltd., Shenzhen, China) with a 20-megapixel (5472 × 3648 pixels; 13.2 × 8.8 mm) complementary metal-oxide-semiconductor (CMOS) red-green-blue sensor at a fixed 8.8 mm focal length. The DJI Phantom 4 Pro was used due to its low cost and wide recognition as a high-performance, entry-level UAS for photogrammetry across various disciplines. UAS flight planning and mission execution were conducted using Altizure version 4.6.8.139 (Shenzhen, China) for Apple iOS, which executes pre-programmed automated flight plans at the desired altitude, image overlap, flight speed, and camera capture settings. A single flight in June 2019 over the study area was conducted at 95 m above ground level, with a nadir camera angle, a forward and side photo overlap of 90%, a flight speed of 4 m s−1, and a minimum interval of 2 s between image captures. The flight took 7 min, capturing 154 images in JPEG format with an aperture of f/5.6, an ISO value of 100, and a shutter speed of 1/400th of a second using a manual focus set to infinity and an image aspect ratio of 3:2. The aircraft recorded geolocation (x, y, and z) and camera parameter values for each image to a manufacturer-stated vertical accuracy of ±0.5 m and horizontal accuracy of ±1.5 m. The UAS imagery had a ground sampling distance of 2.6 cm and was inspected in the field to ensure image quality and then saved to an external hard drive. The flight was conducted within three hours of solar noon to maintain a minimum solar angle of 50° from the horizon. The flight was conducted within the remote pilot’s line of sight using a visual observer to comply with Part 107 of the United States Federal Aviation Administration regulations.
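For context, the reported 2.6 cm ground sampling distance is consistent with the standard nadir GSD relationship for the stated sensor and flight parameters (a consistency check using only values given above):

$$\mathrm{GSD} = \frac{H \times s_w}{f \times n_{px}} = \frac{95{,}000\ \mathrm{mm} \times 13.2\ \mathrm{mm}}{8.8\ \mathrm{mm} \times 5472\ \mathrm{px}} \approx 26\ \mathrm{mm} \approx 2.6\ \mathrm{cm},$$

where $H$ is the flight height above ground, $s_w$ the sensor width, $f$ the focal length, and $n_{px}$ the image width in pixels.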

2.3. UAS Data Processing

The UAS imagery was processed using Agisoft Metashape version 1.6.4 (Agisoft LLC, St. Petersburg, Russia) to generate dense SfM point clouds. The Metashape SfM process includes two major components: (1) image alignment by calculating camera position and orientation using key point detection and matching across images, and (2) dense point cloud generation using depth maps calculated from stereo matching. All Metashape processing was performed using the Agisoft Cloud service, utilizing a 2.7 GHz Intel Xeon E5 2686 V4 processor with two NVIDIA Tesla M60 graphics processing units and 240 gigabytes of random-access memory. The first stage of data processing followed the common workflow of (1) importing images into Metashape, (2) aligning images, (3) adding ground control points and disabling photo geolocations, and (4) optimizing camera locations. The authors utilized preconfigured image alignment settings from the Agisoft Metashape Professional User Manual version 1.6.5 (Table 1). Next, the model bounding box was adjusted to ensure the sparse point cloud was fully contained. Then, ten ground control points collected with a Trimble GeoXT GNSS were loaded into Metashape and located in a minimum of 15 photos each. After the georeferencing step, ground control points with high error values were removed, leaving six final ground control points. The geolocation information from the UAS images was then disabled, and the georectified sparse point cloud was optimized using the focal length (f), principal point coordinates (cx, cy), radial distortion coefficients (k1, k2, k3), and tangential distortion coefficients (p1, p2) as calibration parameters, with adaptive camera fitting enabled. After optimization, Metashape reported an x error of 0.203 m, y error of 0.156 m, z error of 0.050 m, and xy error of 0.256 m, resulting in a total error of 0.2611 m.
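The reported total error follows from combining the component errors in quadrature (a consistency check on the stated values, assuming the total is composed as a root sum of squares):

$$\mathrm{Total\ error} = \sqrt{x^2 + y^2 + z^2} = \sqrt{0.203^2 + 0.156^2 + 0.050^2} \approx 0.261\ \mathrm{m}.$$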
To investigate how dense cloud generation parameters (“Quality” and depth maps filtering) impact the resulting UAS point clouds and tree detection rates and accuracy, other processing parameters (image alignment, ground control points/accuracy) were held constant. After optimization, the georectified sparse point cloud was duplicated 20 times to generate separate Metashape projects for processing during the dense cloud generation. The 20 sparse point clouds were processed under all possible combinations of the Quality and depth map filter parameters in the dense cloud generation step.
These combinations consisted of five build dense cloud settings (Lowest, Low, Medium, High, Ultra High) and four depth filter settings (Disabled, Mild, Moderate, and Aggressive). The build dense cloud settings are referred to as “Quality” in Metashape and impact the image resolution, where Ultra High processes the original images (20 MP; 2.6 cm/pixel), and subsequent settings downscale the images by increasing factors of 4. High downscales images by a factor of 4 (2× on each side; 5.2 cm/pixel), Medium by a factor of 16 (4× on each side; 10.4 cm/pixel), Low by a factor of 64 (8× on each side; 20.8 cm/pixel), and Lowest by a factor of 256 (16× on each side; 41.6 cm/pixel). Similar control over image resolution within the Pix4Dmapper SfM software for the dense point cloud generation step is available using the Image Scale parameter.
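For clarity, the following R sketch enumerates the 20 Quality × depth map filter combinations and the image pixel sizes implied by the downscaling factors above; the data frame and column names are illustrative only, not part of the authors' processing scripts.

```r
# The 5 x 4 = 20 dense cloud parameter combinations and their effective image resolutions
grid <- expand.grid(quality = c("Ultra High", "High", "Medium", "Low", "Lowest"),
                    filter  = c("Disabled", "Mild", "Moderate", "Aggressive"))
nrow(grid)  # 20 Metashape projects

# Per-side downscale factors applied to the original 2.6 cm imagery
downscale <- c("Ultra High" = 1, "High" = 2, "Medium" = 4, "Low" = 8, "Lowest" = 16)
grid$pixel_cm <- 2.6 * downscale[as.character(grid$quality)]  # 2.6, 5.2, 10.4, 20.8, 41.6 cm
```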
The depth map filter settings help remove outlier point observations resulting from poor input imagery (primarily alignment and focus issues). Depth map filtering in Metashape evaluates pairwise depth maps for matched images using a connected component filter, which analyzes segmented depth maps based on the distance of a pixel from the camera. The various filter settings control the maximum size of connected components that are discarded in the filter process. The Metashape User Manual suggests the Mild depth filter mode for retaining small details, Moderate filtering as an intermediate approach between Mild and Aggressive, and Aggressive depth filtering for scenes with few small details, and notes that Disabled depth filtering can lead to extremely noisy dense point clouds. However, these suggestions are not well documented with regard to their application or optimization in forested environments. Once dense cloud reconstruction was complete, models were downloaded to local hardware and inspected for processing errors. Following the inspection, point clouds were exported as LAZ files in the North American Datum 1983 UTM zone 13 (NAD 83, UTM 13) coordinate system, along with Metashape processing reports.
The exported point clouds were processed using LAStools version 12 (rapidlasso GmbH, Gilching, Germany). This processing included: tiling the point clouds with buffers, ground classification by fitting a spline through identified block minimum points, height normalization of non-ground points, and generation of CHMs at 10 cm resolution using block maximum normalized point heights. Overall and ground point density, along with file size, were summarized for each model. Additionally, SfM point cloud processing time was extracted from the Metashape processing reports.
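The study used LAStools for this step; purely as an illustration, an approximately equivalent workflow can be sketched with the lidR package referenced below, where the ground classification and rasterization algorithms (pmf(), tin(), p2r()) and file names are stand-ins rather than the spline-based LAStools routines actually used.

```r
# Approximate lidR equivalent of the LAStools post-processing (not the authors' commands)
library(lidR)

las <- readLAS("dense_cloud_high_mild.laz")      # one exported Metashape point cloud (hypothetical name)
las <- lasground(las, pmf(ws = 5, th = 3))       # classify ground points (substitute algorithm/parameters)
las <- lasnormalize(las, tin())                  # height-normalize points above the ground surface
chm <- grid_canopy(las, res = 0.1, p2r())        # 10 cm CHM from the highest point per cell
raster::writeRaster(chm, "chm_high_mild.tif", overwrite = TRUE)
```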
The 20 CHMs were imported into RStudio using the lidR package [24] for conducting ITD with the local maximum variable window function [25] in the ForestTools package [26]. The local maximum function was parameterized based on previous research in the same forest system [27] using Equation (1) and set to detect a minimum tree height of 1.37 m. The variable window function allows for adaptive sizing of the search window based on CHM values, which previous studies have found to improve single tree detection rates in similar forest systems [3]. The detected trees were exported with their tree ID, x and y coordinates, and height estimates for comparison with the survey trees.
$$\text{Variable Window Radius} = 0.3 + \text{CHM Value} \times 0.09 \qquad (1)$$
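To make this detection step concrete, the following R sketch shows how it might look with the ForestTools variable window filter; the CHM and output file names are hypothetical, and the exact script used by the authors is not reproduced here.

```r
# Sketch of the ITD step (assumed file names; not the authors' exact script)
library(raster)       # read the 10 cm CHM as a RasterLayer
library(ForestTools)  # vwf(): variable window local-maximum treetop detection

chm <- raster("chm_high_mild.tif")   # hypothetical CHM from one parameter combination

# Equation (1): search window radius grows with canopy height
win_fun <- function(x) 0.3 + x * 0.09

# Detect treetops taller than breast height (1.37 m)
treetops <- vwf(CHM = chm, winFun = win_fun, minHeight = 1.37)

# Save detected treetop locations and heights for matching against survey trees
raster::shapefile(treetops, "detected_trees.shp", overwrite = TRUE)
```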
Each of the detected tree outputs was matched with survey tree locations through an iterative process. A detected tree was selected, and all survey trees within a 3 m radius and within 10% of the height of the detected tree were identified. If a survey tree met both the location and height precision requirements, it was considered a true positive (TP) detection, and both the survey and detected trees were removed from further matching. If no match was made, the detected tree was considered a commission (Co) and removed from further matching. This process was repeated until all detected trees were classified as true positive or commission, with all unmatched survey trees classified as omissions (Om). Overall tree detection performance was described using the F-score metric (Equation (2)), which combines true positive, commission, and omission rates to determine how well the detected trees represent the survey trees. For successfully matched trees, the detected tree heights were compared to survey tree heights to determine the mean error and root mean square error (RMSE), characterizing how SfM processing parameters impacted detected tree height bias and precision.
$$F\text{-score} = \frac{2 \times \left( \dfrac{TP}{TP + Om} \times \dfrac{TP}{TP + Co} \right)}{\dfrac{TP}{TP + Om} + \dfrac{TP}{TP + Co}} \qquad (2)$$
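A minimal R sketch of this matching and scoring logic is given below; the greedy order in which detected trees are visited, the nearest-distance tie-break, and the interpretation of the 10% tolerance relative to the detected tree's height are assumptions of the sketch rather than details stated above.

```r
# Greedy matching of detected trees to survey trees and F-score (Equation (2)).
# `detected` and `survey` are data frames with x, y, and height columns.
match_trees <- function(detected, survey, max_dist = 3, height_tol = 0.10) {
  tp <- 0
  co <- 0
  unmatched <- survey
  for (i in seq_len(nrow(detected))) {
    d <- detected[i, ]
    cand <- integer(0)
    if (nrow(unmatched) > 0) {
      dist <- sqrt((unmatched$x - d$x)^2 + (unmatched$y - d$y)^2)
      h_ok <- abs(unmatched$height - d$height) <= height_tol * d$height  # assumed reference height
      cand <- which(dist <= max_dist & h_ok)
    }
    if (length(cand) > 0) {
      best <- cand[which.min(dist[cand])]  # nearest qualifying survey tree (assumed tie-break)
      unmatched <- unmatched[-best, ]      # retire the matched survey tree
      tp <- tp + 1                         # true positive
    } else {
      co <- co + 1                         # commission: detected tree without a survey match
    }
  }
  om <- nrow(unmatched)                    # omission: survey trees never matched
  precision <- tp / (tp + co)
  recall    <- tp / (tp + om)
  fscore    <- 2 * precision * recall / (precision + recall)  # equivalent to Equation (2)
  list(TP = tp, Co = co, Om = om, Fscore = fscore)
}
```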

2.4. Data Comparison

To understand the impacts of the build dense cloud and depth map filter settings, 30 m × 15 m subsets of the 20 point clouds are visually presented for qualitative interpretation. This interpretation is reinforced through two-way analysis of variance (ANOVA) with Tukey’s honestly significant difference (HSD) test to determine the sensitivity of point cloud density metrics, processing time, and file size to variations in the processing settings. To evaluate how processing parameters impacted the derived CHMs and subsequent tree detection, subsets of the 20 CHMs are also visually presented for qualitative interpretation, reinforced through a series of two-way ANOVAs with Tukey’s HSD to determine how processing settings impact tree detection F-score and true positive, omission, and commission rates. Additionally, two-way ANOVAs with Tukey’s HSD were used to evaluate how the processing settings impacted the mean error and root mean square error of detected tree heights. All statistical tests of the detected trees were split into overstory (≥7 m tall) and understory (<7 m tall) using a threshold corresponding to a regional breakpoint (~10 cm DBH) commonly used for inventorying and informing management decisions within ponderosa pine forests.
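As an illustration of this comparison, a minimal R sketch of one such test is shown below; the `results` table and its F-score values are placeholders standing in for the 20-model summary, not study data.

```r
# Hypothetical results table: one row per dense cloud model (20 rows) with its
# overstory F-score; the fscore values here are placeholders, not study results.
results <- expand.grid(
  quality = c("Lowest", "Low", "Medium", "High", "Ultra High"),
  filter  = c("Disabled", "Mild", "Moderate", "Aggressive"))
set.seed(1)
results$fscore <- runif(nrow(results), 0.2, 0.7)

# Two-way ANOVA of F-score against the two processing factors, then Tukey's HSD.
# With one point cloud per factor combination, an additive (no-interaction) model
# is assumed so that residual degrees of freedom remain for the test.
fit <- aov(fscore ~ quality + filter, data = results)
summary(fit)   # significance of Quality and depth map filter effects
TukeyHSD(fit)  # pairwise comparisons among factor levels (alpha = 0.05)
```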

3. Results

3.1. Point Cloud Comparison

The inspection of the 20 UAS-SfM-derived point clouds reveals a greater representation of vertical vegetation structural variability for the Disabled and Mild depth map filter settings and the Ultra High and High Quality settings compared to the other parameter combinations (Figure 2). The Moderate and Aggressive depth filters provided fewer points within the lower canopy and understory trees (<7 m tall), while the Disabled and Mild settings appeared visually similar. Additionally, the visual detail of overstory tree crowns and the representation of understory trees improve with increasing Quality/image resolution, although this improvement is less noticeable between High and Ultra High Quality.
For the 20 UAS-SfM point clouds, both overall and ground point density increased with the Quality settings (image resolution) from Lowest to Ultra High, with no significant impact from the depth map filter (Table 2). The only deviation from this trend was an almost indistinguishable difference in ground point density among the Medium, High, and Ultra High Quality settings. Processing time significantly increased from 0.03 to 1.49 h ha−1 going from the Lowest to Ultra High Quality settings, with no difference between the Low and Lowest settings or among any depth map filter settings. A nearly identical trend was seen for the filtered point cloud file size, which increased from 0.002 to 0.856 GB ha−1 along the Quality setting gradient.

3.2. CHM, Tree Detection, and Height Accuracy Comparison

Visual comparison of the 20 SfM CHMs reveals increased detail in crown locations and vertical structure when moving from Aggressive to Disabled depth map filter settings and Lowest to Ultra High Quality settings (Figure 3). A number of visual data gaps and surface model tinning errors apparent for the two lower resolution Quality settings (Lowest and Low) were removed when using the three higher Quality settings (Medium, High, and Ultra High). Additionally, within the individual Quality settings, understory tree representation improved for the Disabled and Mild depth map filter settings compared to the Moderate and Aggressive settings.
All measures of detected overstory tree characteristics compared to the survey trees significantly improved with finer image resolution (Quality; Table 3). Similar improvements in detection performance occurred for all levels of depth map filter compared to the Aggressive setting, with the only exception being Commission rate, which was not significantly impacted by filtering. Specifically, F-score improved with Quality for overstory trees from 0.172 to 0.717, but with no significant difference between the Medium, High, and Ultra High Quality settings (Figure 4). Simultaneously, the two-way ANOVA indicates that the Aggressive depth map filter setting significantly reduced F-score compared to other filter settings when controlling for the Quality levels.
In terms of detected overstory tree height reliability, both the bias (mean error) and precision (RMSE) of detected heights significantly improved along a gradient from coarsest to finest image resolution (Figure 5; Table 3). Height underestimation bias ranged from −0.83 to −0.49 m going from Lowest to Ultra High Quality. All precision estimates were less than 7.5% of the tree height. The depth map filter parameters did not significantly impact detected overstory tree height bias or precision (Table 3).
All understory tree detection measures significantly improved as image resolution became finer (Quality; Table 4), except Commission rate, which showed no significant differences. Similarly, tree detection performance in the understory improved as the depth map filter shifted from Aggressive toward Disabled, although these metrics were less responsive to filtering than to the Quality settings. F-scores for detected understory trees significantly improved from Low to Ultra High Quality settings (from coarser to finer image resolution; Figure 4). Depth map filtering had a more pronounced influence on understory tree detection than on the overstory, with the Disabled and Mild filter settings providing better representation of the understory trees.
The accuracy of detected understory tree heights was not significantly impacted by either build dense cloud processing parameter, with mean underestimation biases of −0.03 to −0.19 m (Table 4). However, understory tree height precision improved from ~33% to 24% with the shift from Lowest (coarsest image resolution) to Ultra High Quality (finest image resolution; Figure 5), although the differences from Low through Ultra High Quality were not significant. Depth map filtering had no impact on detected understory tree height precision (Table 4).

4. Discussion

4.1. Influence of Processing Parameters

Overstory and understory tree detection performance was significantly impacted by the selection of Metashape build dense cloud Quality and depth map filter settings (Table 3 and Table 4). These impacts are logical given the visual differences in how the Quality and depth map filter settings affect the completeness of different sized trees (Figure 2). The reduced completeness with the more aggressive depth filters and coarser-resolution Quality settings is apparent in the subsequent CHMs, which experienced data gaps, reduced or missing detail for shorter vegetation, and tinning errors during CHM generation (Figure 3). This improvement in vegetation representation with finer resolution imagery during processing (Quality) supports previous recommendations of retaining the original image resolution to provide greater detail within reconstructed forest canopies [17]. The High and Ultra High Quality settings’ influence on vegetation representation also translated to substantially improved point cloud data density, numbers of correctly detected trees, and tree height accuracy and precision, as initially suggested by Jayathunga et al. [16]. Across all processing settings, we saw consistently small tree height underestimation biases that are in line with previous UAS studies [4,28]. This consistent negative UAS height bias across studies could be explained by the known overestimation of tree heights in the field by laser rangefinders [23], potentially indicating that the UAS-SfM tree heights have a bias close to zero.
Excluding the Aggressive depth map filter, the Ultra High Quality setting provided overstory F-scores >0.711 and >0.525 in the understory. This performance was mirrored by the High Quality setting’s F-scores of >0.717 and >0.520 for the overstory and understory, respectively. The only exception to this was modeling the understory with High Quality combined with the Moderate depth map filter, which resulted in an F-score of 0.398. Nearly identical overstory and understory F-scores have been produced in similar ponderosa pine forests using High Quality with the Mild depth map filter [3]. This study detected 83–87% of overstory trees using High and Ultra High Quality when the Aggressive depth map filter is excluded. Belmonte et al. [4] successfully detected ~64% of overstory trees in a ponderosa pine forest in Arizona, USA, using Agisoft Metashape’s High Quality and Aggressive depth map filter setting. Since our High and Ultra High Quality models with the Aggressive depth map filter had a 6 to 10% reduction in overstory tree detection rates compared to the other filters, Belmonte et al. [4] may have seen improved results by using one of the other depth map filter settings.
Overall, the Agisoft Metashape depth map filter had limited impact on the detection of individual trees and their heights (Figure 4 and Figure 5), with the only significant decrease in performance being the Aggressive filter setting compared to the other levels. This filter’s use of connected component size within depth maps does not seem to reliably represent the highly irregular structures of tree crowns. Other studies have also found that using anything stronger than the Mild depth map filter removes the vertical representation of trees [29], suggesting it should be only applied at the Disabled or Mild levels. Future work should evaluate other outlier removal tools such as point confidence filters that are specific to SfM processing or point density filters that have been widely applied in the aerial and terrestrial laser scanning literature.

4.2. Implications on Forest Monitoring

Across multiple UAS studies evaluating individual tree detection in open-canopy coniferous forest systems, 80–100% of overstory trees have consistently been correctly detected [3,4,28,30]. At the same time, decreasing tree detection performance for shorter, partially occluded understory trees has been seen in a range of forest systems [3,5,28]. These contrasting performance levels point to different levels of data confidence that need to be considered when these data are applied to forest management decision making. This study attempts to provide some context for operationalizing tree-level UAS monitoring of open canopy forest systems.
While this study only evaluated a 2 ha area of ponderosa pine, it provides insights into the feasibility of scaling UAS-SfM tree detection approaches to operational management levels. To characterize a 40 ha area of ponderosa pine using the flight parameters tested in this study, the Ultra High Quality processing would require ~60 h compared to the High setting needing ~15 h on the Agisoft Cloud or a similar computer system. Additionally, the Ultra High setting’s added data density would require ~34 GB of storage for the SfM point cloud, compared to ~8 GB for the High Quality setting. When considering the potential for repeat monitoring of individual stands to evaluate treatment effects [4] or tree-level growth [31], differences of these magnitudes have serious implications. These differences mean managers will need to decide between the increased processing time and storage demands from the Ultra High Quality setting (or finest image resolution) against small potential reductions in tree detection performance and height precision with the High setting.
This study points to the best overstory tree detection performance coming from combinations of either High or Ultra High Quality with the Mild or Disabled depth map filter for open-canopy conifer systems. However, if the relative location and density of understory trees are needed for management objectives, such as planning fuel reduction treatments, the Ultra High Quality setting seems to provide substantial improvements in representing these smaller trees. This potential gain from the Ultra High setting must be weighed against studies suggesting that too fine an image resolution can increase the number of noise points generated when vegetation moves in windy conditions during UAS data acquisition [11,12].
This study advances our understanding of how processing parameters impact UAS-SfM point clouds and suggests optimal parameters for end-user products such as tree locations and heights in ponderosa pine forests. Future work is needed to explore how these parameters further propagate to estimates of other tree characteristics such as crown diameter/area [3] or DBH [15]. To fully operationalize UAS forest monitoring, other ways of improving data acquisition and processing efficiency need to be explored. Limited work has evaluated how image side overlap between UAS flight lines impacts end-user products such as detected tree locations and heights, yet reducing side overlap is among the most effective ways of decreasing data acquisition times [11]. Additionally, alternative data acquisition designs should be explored, including subsample acquisitions of management units for developing UAS-based height-to-DBH relationships that can then be applied to predict DBH across broader populations, as was achieved by Swayze et al. [28].

5. Conclusions

The detection of overstory and understory trees within open-canopy conifer systems through UAS-SfM local-maximum methods is optimized using the finest resolution imagery that computer hardware will allow, while applying only minimal depth filtering to the point cloud. The extended processing time and data storage demands that come with very high-resolution imagery will need to be balanced against small reductions in tree detection performance when down-scaling image resolution to enable the processing of greater data acquisition extents. Further work is needed to understand whether the recommended processing strategy transfers to more complex forest systems with diverse species compositions or mixtures of conifer and deciduous species.

Author Contributions

Formal analysis, W.T.T.; Funding acquisition, W.T.T.; Investigation, W.T.T. and N.C.S.; Methodology, N.C.S.; Supervision, W.T.T.; Visualization, N.C.S.; Writing—original draft, W.T.T.; Writing—review and editing, N.C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the United States Department of Agriculture McIntire-Stennis Capacity Grant (COL00511).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors would like to thank Mike Battaglia, Wayne Shepperd, and Lance Asherin for establishing and maintaining the stem mapped study sites. Additionally, we would like to thank Alex Weissman, Matthew Creasy, Taylor Richmond, Alexis Conley, Alexa Binkley, Adam Langemeier, Brandon Hoem, Jillian LaRoe, and Steven Filippelli for assistance with field data collection.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Silva, C.A.; Hudak, A.T.; Vierling, L.A.; Loudermilk, E.L.; O’Brien, J.J.; Hiers, J.K.; Jack, S.B.; Gonzalez-Benecke, C.; Lee, H.; Falkowski, J.M.; et al. Imputation of Individual Longleaf Pine (Pinus palustris Mill.) Tree Attributes from Field and LiDAR Data. Can. J. Remote Sens. 2016, 42, 554–573.
2. Tinkham, W.T.; Smith, A.M.S.; Hoffman, C.M.; Hudak, A.T.; Falkowski, M.J.; Swanson, M.E.; Gessler, P.E. Investigating the influence of LiDAR ground surface errors on the utility of derived forest inventories. Can. J. For. Res. 2012, 42, 413–422.
3. Creasy, M.B.; Tinkham, W.T.; Hoffman, C.M.; Vogeler, J.C. Potential of individual tree monitoring in ponderosa pine-dominated forests using unmanned aerial system structure from motion point clouds. Can. J. For. Res. 2021.
4. Belmonte, A.; Sankey, T.; Biederman, J.A.; Bradford, J.; Goetz, S.J.; Kolb, T.; Woolley, T. UAV-derived estimates of forest structure to inform ponderosa pine forest restoration. Remote Sens. Ecol. Conserv. 2020, 6, 181–197.
5. Jeronimo, S.M.A.; Kane, V.; Churchill, D.J.; McGaughey, R.; Franklin, J.F. Applying LiDAR Individual Tree Detection to Management of Structurally Diverse Forest Landscapes. J. For. 2018, 116, 336–346.
6. Falkowski, M.J.; Smith, A.M.S.; Hudak, A.T.; Gessler, P.E.; Vierling, L.A.; Crookston, N.L. Automated estimation of individual conifer tree height and crown diameter via two-dimensional spatial wavelet analysis of lidar data. Can. J. Remote Sens. 2006, 32, 153–161.
7. Sačkov, I.; Kulla, L.; Bucha, T. A Comparison of Two Tree Detection Methods for Estimation of Forest Stand and Ecological Variables from Airborne LiDAR Data in Central European Forests. Remote Sens. 2019, 11, 1431.
8. Nilsson, M. Estimation of tree heights and stand volume using an airborne lidar system. Remote Sens. Environ. 1996, 56, 1–7.
9. Tinkham, W.T.; Smith, A.M.S.; Affleck, D.L.R.; Saralecos, J.D.; Falkowski, M.J.; Hoffman, C.M.; Hudak, A.T.; Wulder, M.A. Development of height-volume relationships in second growth Abies grandis for use with aerial LiDAR. Can. J. Remote Sens. 2016, 42, 400–410.
10. Ziegler, J.P.; Hoffman, C.; Battaglia, M.; Mell, W. Spatially explicit measurements of forest structure and fire behavior following restoration treatments in dry forests. For. Ecol. Manag. 2017, 386, 1–12.
11. Dandois, J.; Olano, M.; Ellis, E. Optimal Altitude, Overlap, and Weather Conditions for Computer Vision UAV Estimates of Forest Structure. Remote Sens. 2015, 7, 13895–13920.
12. Frey, J.; Kovach, K.; Stemmler, S.; Koch, B. UAV Photogrammetry of Forests as a Vulnerable Process. A Sensitivity Analysis for a Structure from Motion RGB-Image Pipeline. Remote Sens. 2018, 10, 912.
13. Nesbit, P.; Hugenholtz, C. Enhancing UAV–SfM 3D Model Accuracy in High-Relief Landscapes by Incorporating Oblique Images. Remote Sens. 2019, 11, 239.
14. Fraser, B.T.; Congalton, R.G. Issues in Unmanned Aerial Systems (UAS) Data Collection of Complex Forest Environments. Remote Sens. 2018, 10, 908.
15. Przybilla, H.-J.; Lindstaedt, M.; Kersten, T. Investigations into the quality of image-based point clouds from UAV imagery. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W13.
16. Jayathunga, S.; Owari, T.; Tsuyuki, S. Digital aerial photogrammetry for uneven-aged forest management: Assessing the potential to reconstruct canopy structure and estimate living biomass. Remote Sens. 2019, 11, 338.
17. Lisein, J.; Pierrot-Deseilligny, M.; Bonnet, S.; Lejeune, P. A photogrammetric workflow for the creation of a forest canopy height model from small unmanned aerial system imagery. Forests 2013, 4, 922.
18. Iglhaut, J.; Cabo, C.; Puliti, S.; Piermattei, L.; O’Connor, J.; Rosette, J. Structure from Motion Photogrammetry in Forestry: A Review. Curr. For. Rep. 2019, 5, 155–168.
19. Li, Z.; Snavely, N. MegaDepth: Learning single-view depth prediction from internet photos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2041–2050.
20. Hu, X.; Mordohai, P. A quantitative evaluation of confidence measures for stereo vision. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2121–2133.
21. Carrilho, A.C.; Galo, M.; dos Santos, R.C. Statistical outlier detection method for airborne lidar data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 87–92.
22. Vastaranta, M.; Melkas, T.; Holopainen, M.; Kaartinen, H.; Hyyppä, J.; Hyyppä, H. Laser-based field measurements in tree-level forest data acquisition. Photogramm. J. Finl. 2009, 21, 51–61.
23. Wang, Y.; Lehtomäki, M.; Liang, X.; Pyörälä, J.; Kukko, A.; Jaakkola, A.; Liu, J.; Feng, Z.; Chen, R.; Hyyppä, J. Is field-measured tree height as reliable as believed–a comparison study of tree height estimates from field measurement, airborne laser scanning and terrestrial laser scanning in boreal forest. ISPRS J. Photogramm. Remote Sens. 2019, 147, 132–145.
24. Roussel, J.R.; Auty, D. lidR: Airborne LiDAR Data Manipulation and Visualization for Forestry Applications. R Package Version 2.0.2. 2019. Available online: https://CRAN.R-project.org/package=lidR (accessed on 12 March 2020).
25. Popescu, S.C.; Wynne, R.H. Seeing the Trees in the Forest. Photogramm. Eng. Remote Sens. 2004, 70, 589–604.
26. Plowright, A. ForestTools: Analyzing Remotely Sensed Forest Data. R Package Version 0.2.0. 2018. Available online: https://CRAN.R-project.org/package=ForestTools (accessed on 12 March 2020).
27. Krause, S.; Sanders, T.G.; Mund, J.-P.; Greve, K. UAV-Based Photogrammetric Tree Height Measurement for Intensive Forest Monitoring. Remote Sens. 2019, 11, 758.
28. Swayze, N.C.; Tinkham, W.T.; Vogeler, J.C.; Hudak, A.T. Influence of flight parameters on UAS-based monitoring of tree height, diameter, and density. Remote Sens. Environ. 2021, 258, 1–23.
29. Fawcett, D.; Azlan, B.; Hill, T.C.; Kho, L.K.; Bennie, J.; Anderson, K. Unmanned aerial vehicle (UAV) derived structure-from-motion photogrammetry point clouds for oil palm (Elaeis guineensis) canopy segmentation and height estimation. Int. J. Remote Sens. 2019, 40, 7538–7560.
30. Mohan, M.; Silva, C.; Klauberg, C.; Jat, P.; Catts, G.; Cardil, A.; Hudak, A.; Dia, M. Individual Tree Detection from Unmanned Aerial Vehicle (UAV) Derived Canopy Height Model in an Open Canopy Mixed Conifer Forest. Forests 2017, 8, 340.
31. Guerra-Hernández, J.; González-Ferreiro, E.; Monleón, V.J.; Faias, S.P.; Tomé, M.; Díaz-Varela, R.A. Use of multi-temporal UAV-derived imagery for estimating individual tree growth in Pinus pinea stands. Forests 2017, 8, 300.
Figure 1. Study area portrayed with (A) unmanned aerial system (UAS) orthophoto at 2.6 cm resolution with inset distribution of survey tree heights and (B) UAS-derived Canopy Height Model at 10 cm resolution from High Quality and Mild depth map filtering with overlaid survey trees colored by height to demonstrate data alignment.
Figure 2. Subsets of filtered UAS-structures from motion (SfM) point clouds (15 m × 30 m) for the 20 combinations of the build dense cloud Quality and depth map filter parameter settings.
Figure 3. Subsets of filtered UAS-SfM point cloud-derived canopy height models (CHMs) (13 m × 30 m) for the 20 combinations of the build dense cloud Quality and depth map filter parameter settings. UAS-derived orthophoto of the subset area provided to aid in CHM inspection.
Figure 4. Influence of Metashape build dense cloud Quality and depth map filter settings on individual tree detection (ITD) F-score for overstory and understory trees presented as violin plots to portray the distribution of data points. Letters represent significant difference (α = 0.05) determined by two-way ANOVA with Tukey’s HSD test.
Figure 5. Influence of Metashape build dense cloud Quality and depth map filter settings on overstory and understory detected tree height root mean square error (RMSE) presented as violin plots to portray the distribution of data points. Letters represent significant differences (α = 0.05) determined by two-way ANOVA with Tukey’s HSD test.
Table 1. Agisoft Metashape processing settings for image alignment and sparse cloud generation.
Parameter | Setting
Align Photos
  Accuracy | High
  Generic Preselection | Yes
  Reference Preselection | Source
  Reset Current Alignment | No
  Key Point Limit | 40,000
  Tie Point Limit | 4,000
  Apply Masks To | None
Optimize Alignment
  Adaptive Camera Model Fitting | Yes
Table 2. Influence of Metashape build dense cloud Quality and depth map filter settings on the mean (standard deviation) of overall and ground point return density, along with total processing time and point cloud file size. Letters represent significant differences (α = 0.05) determined by two-way ANOVA with Tukey’s honestly significant difference (HSD) test.
Setting | # of Point Clouds | Overall (Points m−2) | Ground (Points m−2) | Processing Time (h ha−1) | File Size (GB ha−1)
Quality Setting
Ultra High | 4 | 6041.1 (534.8) c | 10.9 (0.4) cd | 1.49 (0.04) d | 0.856 (0.087) c
High | 4 | 1364.6 (105.7) b | 11.1 (0.3) d | 0.37 (0.01) c | 0.201 (0.019) b
Medium | 4 | 327.4 (23.9) a | 10.5 (0.2) c | 0.11 (0.00) b | 0.048 (0.004) a
Low | 4 | 80.1 (7.6) a | 8.8 (0.1) b | 0.04 (0.00) a | 0.012 (0.001) a
Lowest | 4 | 18.7 (1.7) a | 1.4 (0.2) a | 0.03 (0.00) a | 0.002 (0.0002) a
Depth Map Filter Setting
Aggressive | 5 | 1365.0 (2218.7) | 8.6 (4.3) | 0.41 (0.63) | 0.191 (0.307)
Moderate | 5 | 1634.3 (2687.9) | 8.6 (4.2) | 0.42 (0.64) | 0.233 (0.380)
Mild | 5 | 1628.8 (2654.5) | 8.6 (4.1) | 0.39 (0.60) | 0.233 (0.376)
Disabled | 5 | 1637.4 (2376.7) | 8.3 (3.8) | 0.40 (0.62) | 0.238 (0.386)
Table 3. Influence of Metashape build dense cloud Quality and depth map filter settings on the mean (standard deviation) of overstory tree F-score and true positive, omission, and commission rates. Letters represent significant differences (α = 0.05) determined by two-way ANOVA with Tukey’s HSD test.
Setting | # of Point Clouds | F-Score | True Positive (%) | Omission (%) | Commission (%) | Height Mean Error (m) | Height RMSE (%)
Quality Setting
Ultra High | 4 | 0.717 (0.008) c | 65.8 (1.8) c | 34.2 (1.8) c | 17.8 (2.7) d | −0.49 (0.02) b | 5.0 (0.2) d
High | 4 | 0.706 (0.028) c | 65.2 (3.6) c | 34.8 (3.6) c | 19.4 (0.8) d | −0.60 (0.05) b | 5.6 (0.1) c
Medium | 4 | 0.651 (0.039) c | 61.8 (5.5) c | 38.2 (5.5) c | 27.8 (1.6) c | −0.76 (0.04) a | 6.6 (0.2) b
Low | 4 | 0.472 (0.093) b | 42.6 (10.0) b | 57.4 (10.0) b | 36.5 (2.1) b | −0.85 (0.05) a | 7.1 (0.3) ab
Lowest | 4 | 0.172 (0.045) a | 14.1 (4.1) a | 85.9 (4.1) a | 49.3 (2.0) a | −0.83 (0.09) a | 7.5 (0.4) a
Depth Map Filter Setting
Disabled | 5 | 0.573 (0.211) b | 53.6 (20.8) b | 46.4 (20.8) b | 31.1 (13.1) | −0.68 (0.16) | 6.2 (1.0)
Mild | 5 | 0.570 (0.221) b | 53.1 (21.9) b | 46.9 (21.9) b | 30.2 (12.4) | −0.71 (0.18) | 6.3 (1.1)
Moderate | 5 | 0.544 (0.244) ab | 49.8 (23.3) b | 50.2 (23.3) b | 29.7 (13.9) | −0.69 (0.14) | 6.5 (1.0)
Aggressive | 5 | 0.487 (0.248) a | 43.0 (23.1) a | 57.0 (23.1) a | 29.6 (13.2) | −0.74 (0.16) | 6.5 (1.1)
Table 4. Influence of Metashape build dense cloud Quality and depth map filter settings on the mean (standard deviation) of understory tree F-score and true positive, omission, and commission rates. Letters represent significant differences (α = 0.05) determined by two-way ANOVA with Tukey’s HSD test.
Setting | # of Point Clouds | F-Score | True Positive (%) | Omission (%) | Commission (%) | Height Mean Error (m) | Height RMSE (%)
Quality Setting
Ultra High | 4 | 0.514 (0.128) d | 40.8 (13.1) d | 59.2 (13.1) d | 15.3 (3.7) | −0.14 (0.02) | 23.5 (1.5) b
High | 4 | 0.428 (0.120) cd | 31.4 (10.4) cd | 68.6 (10.4) cd | 13.7 (3.9) | −0.15 (0.06) | 24.1 (1.8) b
Medium | 4 | 0.310 (0.072) bc | 20.9 (5.0) bc | 79.1 (5.0) bc | 13.9 (5.5) | −0.19 (0.06) | 26.4 (1.7) b
Low | 4 | 0.228 (0.036) b | 15.1 (2.3) ab | 84.9 (2.3) ab | 17.7 (3.5) | −0.18 (0.05) | 28.2 (0.4) b
Lowest | 4 | 0.107 (0.011) a | 6.6 (0.6) a | 93.4 (0.6) a | 17.5 (2.6) | −0.03 (0.12) | 33.3 (4.4) a
Depth Map Filter Setting
Disabled | 5 | 0.376 (0.197) b | 28.6 (17.9) b | 71.4 (17.9) b | 15.8 (3.0) ab | −0.14 (0.05) | 27.3 (3.6)
Mild | 5 | 0.368 (0.196) b | 27.1 (16.9) b | 72.9 (16.9) b | 13.0 (2.8) b | −0.12 (0.02) | 25.8 (2.0)
Moderate | 5 | 0.306 (0.163) ab | 21.3 (12.7) ab | 78.7 (12.7) ab | 13.8 (3.4) b | −0.17 (0.09) | 26.7 (5.0)
Aggressive | 5 | 0.219 (0.089) a | 14.9 (6.4) a | 85.1 (6.4) a | 20.0 (2.8) a | −0.12 (0.15) | 28.7 (5.9)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
