Article

Extraction of Sample Plot Parameters from 3D Point Cloud Reconstruction Based on Combined RTK and CCD Continuous Photography

1
Precision Forestry Key Laboratory of Beijing, Beijing Forestry University, Beijing 100083, China
2
Mapping and 3S Technology Center, Beijing Forestry University, Beijing 100083, China
3
School of Land Resources and Urban and Rural Planning, Hebei GEO University, Shijiazhuang 050031, China
4
School of Nature Conservation, Beijing Forestry University, Beijing 100083, China
*
Author to whom correspondence should be addressed.
Submission received: 19 July 2018 / Revised: 13 August 2018 / Accepted: 13 August 2018 / Published: 17 August 2018
(This article belongs to the Special Issue Aerial and Near-Field Remote Sensing Developments in Forestry)

Abstract:
Enriching forest resource inventory is important to ensure the sustainable management of forest ecosystems. Obtaining forest inventory data in the field has always been difficult, laborious, time-consuming, and expensive. Advances in integrating photogrammetry and computer vision have helped researchers develop numeric algorithms and methods that can turn 2D images into 3D point clouds and are highly applicable to forestry. This paper aimed to develop a new, highly accurate methodology that extracts sample plot parameters based on continuous terrestrial photogrammetry. For this purpose, we designed and implemented a terrestrial observation instrument combining real-time kinematic (RTK) positioning and charge-coupled device (CCD) continuous photography. Then, according to the set observation plan, three independent experimental plots were continuously photographed and a 3D point cloud of each plot was generated. From these 3D point clouds, the tree position coordinates, tree DBHs, tree heights, and other plot characteristics of the forest were extracted. The plot characteristics obtained from the 3D point clouds were compared with the measurement data obtained in the field to check the accuracy of our methodology. We obtained the position coordinates of the trees with a positioning accuracy (RMSE) of 0.162 m to 0.201 m. The relative root mean square error (rRMSE) of the trunk diameter measurements was 3.07% to 4.51%, which met the accuracy requirements of traditional forestry surveys. The tree height measurements were affected by occlusion from the forest canopy, and the estimated rRMSE was 11.26% to 11.91%, which still provides good reference data.
Furthermore, this image-based point cloud approach offers portable observation instruments, low data collection costs, high field measurement efficiency, and automatic data processing, and it can directly extract tree geographic location information, which may be interesting and important for applications such as the protection of registered famous trees. For forest inventory, continuous terrestrial photogrammetry, with its unique advantages, is a solution that deserves future attention in the field of tree detection and ecological construction.

Graphical Abstract

1. Introduction

Forest inventory determines a forest’s current situation and trends and ensures the sustainable management of forest ecosystems [1,2]. Tree diameter at breast height (DBH) and tree height are the basic factors of forest inventory; together with the stand-level structure distribution (i.e., tree locations), they determine the present state of tree growth and the relative distances and mutual relations between trees [3]. Obtaining forest inventory factor data with high precision and minimal cost and time is a key objective in forestry research. Tree-by-tree data collection is the traditional method of forestry science, but it is laborious and time-consuming. Therefore, developing cost- and time-efficient alternative methods is currently necessary [4,5].
In the past few decades, many non-contact methods have been proposed to simplify forest resource surveys, such as remote sensing imagery, aerial photogrammetry, 3D laser scanning, and terrestrial photogrammetry [6,7,8,9]. The forest inventory model has gradually improved from two-dimensional (2D) data to a three-dimensional (3D) structure, which has completely changed the global forest inventory model. Remote sensing imagery and aerial photogrammetry promptly obtain forest surface data over large areas. However, both methods are limited because the accuracy of the terrestrial data under the tree crown is compromised [10,11,12,13]. 3D laser scanning technology can efficiently collect high-precision 3D data on a forest’s vertical structure and therefore has useful practical applications in forestry. In Nordic countries such as Denmark and Finland, forest inventory has largely been developed and managed with the Airborne Laser Scanning (ALS) approach [14,15,16].
In China, 3D laser scanning technology has improved remarkably in recent years. Guo [17], Pang [18], and Yang [19] studied the applicability and advantages of 3D laser scanning technology for establishing a forest resource inventory, along with its limitations, including data redundancy and cost. Zhang et al. [20] described the two main point cloud data sources in photogrammetry, optical imagery and laser radar data, and pointed out their advantages and disadvantages. Among the many modern forest survey techniques, terrestrial photogrammetry is attracting foresters’ attention with its high precision and low price [21,22].
Aerial photogrammetry and 3D laser scanning have made breakthroughs in the field of remote sensing in the past two years, especially with the introduction of various image matching and point cloud reconstruction algorithms [23,24]. However, there are only a few studies on forest resource inventory that relate to terrestrial photogrammetry under a canopy and to forest stand monitoring by establishing an image-based three-dimensional point cloud model with a non-metric digital camera. In 2014, García-Gago et al. [25] proposed a new terrestrial measurement tool known as a photogrammetry table, which attempted to generate dense 3D models in a terrestrial mode through an initial computer vision approximation followed by accurate and reliable refinement through photogrammetric bundle adjustment. In the same year, Liang et al. [26] used uncalibrated hand-held consumer cameras for terrestrial photogrammetry with 88% data accuracy. Huang et al. [27] developed a photoelectric scanner consisting of a non-metric camera, a rotating platform, and a controller. With the continuous development of imaging technology, there are many commercial software programs based on image point clouds, such as Smart3D (Acute3D, a Bentley Systems company, Paris, France), Pix4Dmapper (Pix4D company, Lausanne, Switzerland), and Agisoft PhotoScan (AgiSoft LLC, St. Petersburg, Russia) [26,28,29,30]. To select the appropriate tools for image processing, Song et al. [31] compared multiple image mosaic software packages through verification analysis and a precision evaluation and identified free or inexpensive software that meets the requirements of photogrammetry. Surovy et al. [32] proposed a method to reconstruct 3D objects with a realistic shape and appearance by using multi-view stereo (MVS) technology in combination with computer vision and photogrammetry. Their experiments showed that the precision of measuring an object’s spatial point position increases with the number of cameras.
In 2016, Mikita et al. [33] proposed a forest stand inventory working model based on aerial photogrammetry from a UAV system and terrestrial photogrammetry inside the forest stand, which realized the identification and positioning of the trees in the sample plot. They reported that the root mean square error (RMSE) of the DBH measurement was 1 cm and the tree height estimation error was 1 m. Although this method can solve the problem of measuring tree factors of tall trees in an overmature forest stand, the matching and calibration of UAV images and terrestrial images are very complicated.
To reduce the cost of forest surveys, the terrestrial observation model is recommended for small forest stand monitoring. Using aerial photogrammetry as a reference, a variety of sensors (such as GPS, IMU, and CCD) have gradually been applied to terrestrial measurement modes. In 2018, Pierzchała et al. [34] proposed a method of using a simultaneous localization and mapping (SLAM) algorithm to generate a local forest map, and they designed a mobile data acquisition platform that integrates multiple sensors. The resulting map was accurately evaluated in the form of a three-dimensional point cloud. The feasibility of the equipment and the SLAM algorithm was verified; the RMSE of the DBH measurement was reported to be 2.38 cm and the mean error of the tree position measurement was 0.0476 m.
Although a relatively complete technical system for a forest resource inventory has been established, the investigation cost, measurement accuracy, observation efficiency, and complexity of the forest resource inventory are still the most difficult problems.
To minimize the forest resource inventory cost with high precision and efficiency, we proposed in this study a new method based on terrestrial photogrammetry and the sample plot 3D point cloud for the accurate extraction of forest parameters. In this method, a ground observation instrument combining real-time kinematic (RTK) and charge-coupled device (CCD) continuous photography was designed and implemented. Then, sample plot points were continuously photographed, according to the customized observation scheme, and 3D point clouds were successively established. Lastly, based on a point cloud model, a method was developed to identify and extract the geographical position coordinates, tree diameter, tree height, and other stand parameters. This study mainly aimed to obtain the following objectives: (1) to acquire a low-cost and hand-held continuous terrestrial photogrammetry observation instrument for a forest resource inventory; (2) to obtain the precise geographical location of trees including tree measurement factors in the small area of a high-value wood forest; and (3) to determine the spatial structure at the stand level and extract the stand parameters.
The structure of the rest of this paper is as follows. In Section 2, we provide a step-by-step description of the proposed method including a survey of the research area, the composition of the measuring equipment, the data acquisition method, and the corresponding types of survey factors. In Section 3, the proposed method was tested in the study area and the extraction results were verified and compared with the field survey results on site. In Section 4, the results of tree extraction and the importance of this study are discussed. Lastly, in Section 5, the conclusion is drawn.

2. Materials and Methods

The proposed method consists of three key steps: data acquisition, data processing, and data analysis. First, according to the observation scheme, RTK and two CCD lenses were used to obtain the camera point coordinates and the corresponding image information, respectively. Only three sets of images were collected at each site, along with the point coordinate data of several stations, and the data were classified and stored by a purpose-developed personal digital assistant (PDA). Second, a 3D point cloud was constructed from the obtained data, and the tree coordinates, tree diameter, tree height, and other stand parameters were extracted with purpose-designed algorithms. Lastly, the extracted stand parameters were compared with field measurement data to verify the measurement accuracy and the feasibility of the scheme. The framework of the proposed method is summarized in Figure 1. The details are described in the following sections.

2.1. Study Area

The study was conducted in a forest park situated in the Haidian district (40°00′30″N, 116°19′50″E) of the north-central Beijing urban area, China (Figure 2). The study area is located in the warm temperate semi-humid and semi-arid continental monsoon climate zone with four distinct seasons; summers are hot and winters are cold with little precipitation, which normally falls as snow. It is a man-made forest planted with willow (Salix matsudana), Chinese white poplar (Populus tomentosa), Chinese scholar tree (Sophora japonica), elm (Ulmus pumila), and cypress (Juniperus chinensis). There are no bushes and only a small amount of understory herbs.

2.2. Field Survey

Standard data for the field survey were collected on 1 June 2018. First, three test sample plots of 20 m × 20 m were selected in the test area, covering a total area of 0.12 hectares. Under the WGS-84 coordinate system, the HI-TARGET H32 GNSS RTK system (HI-TARGET Corporation, Guangzhou, China) was used to measure and mark four set stations in the open area around the test sample plots. Then, the point coordinates and height of each tree in the test sample plots were observed successively using an NTS-391 total station (SOUTH SURVEY, Guangzhou, China) installed at a set station, with all observations unified into a common coordinate system through backsight orientation, to serve as the standard values of tree position and tree height. The tree position was taken close to the base, the tree height was measured using the total station’s remote height (suspension) mode, and the tree species were also recorded. Lastly, at the position of each measured tree, a 2-m steel tape was used to measure the corresponding DBH to serve as the standard value of tree diameter. The overall feature parameters of the experimental plots are shown in Table 1.

2.3. Continuous Terrestrial Photogrammetry (CTP)

The terrestrial photogrammetry instrument comprises two CCD lenses (SONY ILCE-QX1, Tokyo, Japan), a Global Navigation Satellite System (GNSS) receiver (RTK module), an attitude sensor (MEMS angle sensor), a Personal Digital Assistant (PDA), special joints, and retractable rods (see Figure 3). The PDA is the main operating module and uses a short-range wireless chip (nRF24L01) to communicate with each module. The vertical baseline between the two CCD lenses on the instrument was fixed at a length of 40 cm. The main optical axis of the upper lens was inclined 30° above the horizontal plane, while the main optical axis of the lower lens pointed horizontally forward (see Figure 4). Equipment such as the GNSS modules and the dual-CCD camera modules must be properly tested before work. The relative geometry of the MEMS sensor module is fixed, and the relevant position and offset parameters need to be examined and determined.
The image data acquisition experiment was carried out on the afternoon of 4 June 2018. According to the characteristics of the tree distribution and the instrument itself, we designed a “route method” for continuous terrestrial photogrammetry of the stand (see Figure 5). While working, the operator used a stop-and-go pattern to take photos along the designed route. Meanwhile, the position coordinates of each site and the attitude parameters of the CCD lenses were collected to prepare for later image matching and point cloud construction. When shooting at a single site, it is necessary to keep the instrument level and upright. Three sets of images were obtained by continuous photography in the directions “30° to the left, straight ahead, 30° to the right.” Simultaneously, the lens attitude parameters were recorded three times and the position coordinates of the measuring site were acquired. Adjacent sites were spaced one meter apart, because too much spacing would lead to low image overlap while too little spacing would add extra field and office work. It is worth noting that, for forest areas with a high canopy density and severe signal interference, at least five site location coordinates should be collected. For better results, we collected accurate point coordinates of the four corner points (Figure 5) at the edge of the observation route. In addition, accurate coordinates of at least one point at the center of the observation route were also collected.
Three sample plots were observed successively with the above continuous photogrammetry method. The images were divided into upper and lower images; 804, 822, and 864 images were obtained from the three sample plots, for a total of 2490 images. The commercial software Pix4Dmapper was used to pre-process the images and site coordinates obtained from the three test plots. Prior to image initialization pre-processing, the quality parameter of the matching process was set to high and automated lens calibration was performed simultaneously. Point clouds were generated after image matching with the point density setting set to low. The process of creating image mosaics with Pix4Dmapper is based on fundamental principles of photogrammetry combined with robust algorithms from computer vision. One of the most commonly used and most rigorous methods is the bundle adjustment algorithm based on structure-from-motion techniques. This method extracts features of individual images that can be matched to their corresponding features in other images, and the entire process is calculated with an incremental approach in which the bundle adjustment of an initial image pair is sequentially repeated with more images incorporated at each iteration. After dense reconstruction and point cloud densification, the sparse point clouds of the three sample plots were processed into dense point cloud models. The dense point cloud of plot 1 contained 31.21 million points with a density of 31,799.964 points/m²; that of plot 2 contained 64.50 million points with a density of 64,982.316 points/m²; and that of plot 3 contained 95.24 million points with a density of 83,463.851 points/m². The resulting three-dimensional view of the point cloud is shown in Figure 6.

2.4. Extraction of Tree Position

For the extraction of tree positions, the dense point cloud was imported into the LiDAR360 software (Beijing Digital Green Earth Technology Co., Ltd., Beijing, China). Air noise was removed to enhance the point cloud quality and to prepare for later data processing. Ground points were filtered by setting parameters such as the grid size, the ground-point thickness, the window smoothness, and the adjacent window size. The ground points were separated and a digital elevation model (DEM) was generated. Simultaneously, we normalized the filtered point cloud with the DEM to remove the effect of terrain fluctuation on the point cloud data. Due to the low canopy vegetation in the sample plots, only points more than 0.3 m above the ground were used as high vegetation points in individual tree segmentation. This distinction ensured the accuracy of tall tree point cloud extraction so that valuable information could be concentrated in the later stages of individual tree point cloud data extraction and application. After the segmentation of tall trees, the point cloud of each tree was stored separately in a LiData file and given a random color, forming the individual tree point cloud distribution shown in Figure 7. By extracting according to tree ID, the corresponding point cloud range and data file were selected, and the locations of the trees in the plot were automatically extracted in batches, including the X, Y, and Z coordinates of each tree and the corresponding tree ID. Each point coordinate is an actual geographic coordinate in the WGS-84 coordinate system.
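The normalization and position-extraction steps above can be sketched as follows. This is only an illustrative re-implementation, not the LiDAR360 workflow itself; the point cloud is assumed to be a NumPy array, and `dem_lookup` is a hypothetical ground-elevation interpolator built from the DEM.

```python
import numpy as np

def normalize_heights(points, dem_lookup):
    """Subtract the DEM ground elevation from each point's Z value.

    points: (N, 3) array of X, Y, Z coordinates.
    dem_lookup: hypothetical function mapping (x, y) -> ground elevation.
    """
    ground = np.array([dem_lookup(x, y) for x, y, _ in points])
    normalized = points.copy()
    normalized[:, 2] -= ground
    return normalized

def tree_position(tree_points):
    """Estimate a tree's planimetric position as the centroid of its
    lowest stem slice (points within 0.3 m of the normalized ground)."""
    base = tree_points[tree_points[:, 2] < 0.3]
    if len(base) == 0:          # fall back to the whole segment
        base = tree_points
    return base[:, 0].mean(), base[:, 1].mean()
```

Since the point cloud is normalized before segmentation, the stem base slice sits just above zero height, so its XY centroid is a reasonable stand-in for the tree position in the WGS-84 frame.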

2.5. Extraction of Tree DBH

Based on the pre-processed point cloud data after individual tree segmentation, we only needed to set the extraction position to the reference height on the stem. Afterward, using the TLS editing tool of the LiDAR 360 software, we extracted the relevant slice of an individual tree’s points (point group). Since the point density of the point cloud was sufficiently high, the diameter at any height of the tree could be measured from a very thin slice with a height interval of 20 mm (for example, DBH extraction selected all points between 1.29 m and 1.31 m above the ground, as shown in Figure 8). Due to the irregularity of stem growth, we could not simply fit a circle to extract the DBH [35,36]. To accurately extract the tree’s DBH, we used the least squares method to fit the plane coordinates of the slice’s point cloud to a two-dimensional ellipse. This gave the approximate shape of the stem cross-section and the diameter of the corresponding cross-section at breast height. Accordingly, with high-density data, the entire shape of the stem could be described with very little generalization.
The DBH batch extraction method was adopted to automatically extract the DBH of all trees in the sample plots, as shown in Figure 9. The results could also be exported in CSV format, including the tree ID and the corresponding DBH.
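The least-squares ellipse fit can be sketched as follows (an illustrative implementation, not the LiDAR360 tool): the XY points of a 20-mm slice are fit to a general conic, the ellipse centre and semi-axes are recovered from the conic coefficients, and the mean of the two ellipse diameters is taken as the DBH estimate.

```python
import numpy as np

def fit_ellipse_diameter(xy):
    """Fit A x^2 + B xy + C y^2 + D x + E y = 1 to a stem cross-section
    slice by least squares and return the mean of the two ellipse
    diameters as the DBH estimate."""
    x, y = xy[:, 0], xy[:, 1]
    design = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(design, np.ones(len(xy)), rcond=None)
    A, B, C, D, E = coef
    # Ellipse centre from the gradient of the conic.
    cx, cy = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
    # Constant term after translating the conic to its centre (F = -1).
    G = A * cx * cx + B * cx * cy + C * cy * cy + D * cx + E * cy - 1.0
    # Semi-axes from the eigenvalues of the quadratic-form matrix.
    lam = np.linalg.eigvalsh([[A, B / 2], [B / 2, C]])
    semi_axes = np.sqrt(-G / lam)
    return semi_axes.sum()  # (a + b) = mean of the two diameters 2a and 2b
```

For a circular stem this reduces to the fitted diameter; for an irregular stem it averages the major and minor axes, which is why the ellipse fit generalizes better than a plain circle fit.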

2.6. Extraction of Tree Height

In terrestrial observations, only the part below the canopy can be precisely modelled; it is difficult to obtain information about the canopy or the parts above it. This is why tree height is difficult to measure with terrestrial observation. Therefore, we propose a method to calculate the height of individual trees based on the Kunze M. stem curve equation with point cloud data. First, the longitudinal section of the stem is analyzed for stem taper [37,38]. Then, according to the form factor pattern of the Kunze M. stem taper, the overall outline of the stem can be divided into four parts. From top to bottom, part I is a cone (r = 2), part II is a paraboloid (r = 1), part III is a cylinder (r = 0), and part IV is a concave curve (r = 3), where r stands for the form factor (as shown in Figure 10) [39]. Since part III accounts for the largest share of the entire volume, the stem form index of this part is regarded as constant. In this way, we used the Kunze M. stem curve principle (Equation (1)) together with the form factor pattern to calculate the height value according to the point cloud data model.
y² = p × xʳ  (1)
In the equation above, y is the radius of the stem cross-section, x is the distance from the stem tip to the cross-section, p is a coefficient, and r is the stem form factor.
The expression for the stem form factor r is shown below.
r = 2[ln(yᵢ) − ln(yⱼ)] / [ln(xᵢ) − ln(xⱼ)]  (2)
In the equation above, yᵢ and yⱼ stand for the radii of different stem cross-sections, and xᵢ and xⱼ stand for the distances from the stem tip to each cross-section.
It can be seen from the principle of the Kunze M. stem curve that the shape of any part of the stem can be expressed with a suitable value of the form factor r. Thus, we established a plane coordinate system XₕOYₕ for the stem curve as shown in Figure 10. Based on the point cloud data from individual tree segmentation, the diameters d₁, d₂, and d₃ of the individual trees in the cylinder (r = 0) interval of part III were extracted using the TLS editing tool of the LiDAR 360 software, following the method of DBH extraction. Among these diameters, d₁ is the diameter near breast height within part III, d₃ is the tree diameter at the highest position observed in the terrestrial point cloud data, and d₂ is a tree diameter between d₁ and d₃. In addition, h₁, h₂, and h₃ stand for the corresponding heights from the ground to the positions of d₁, d₂, and d₃, respectively.
By applying Equations (1) and (2) with y = dⱼ/2 and x = H − hⱼ (j = 1, 2, 3), the stem form factor r₁ can be obtained from d₂ and d₃. Similarly, the stem form factor r₂ can be obtained from d₁ and d₂. Since part III is the largest part of the stem, by assuming that r₁ = r₂, the tree height H₁ based on the positions of d₁, d₂, and d₃ can be obtained.
In Section 2.5, we extracted the diameter at breast height, which is considered an important index of tree measurement. Therefore, an analysis was also conducted around the DBH. The TLS editing tool in the LiDAR 360 software was used to extract the tree diameters at 1.2 m (just below breast height) and 1.4 m (just above breast height). Similarly, the stem form factors r₃ and r₄ were obtained, and by assuming that r₃ = r₄, a second tree height H₂ based on the position of the DBH was obtained. Lastly, Equation (3) is used to calculate the tree height H.
H = (H₁ + H₂) / 2  (3)
Of course, when the tree point cloud data capture the complete tree, it is recommended to use the measuring tool in the LiDAR 360 software to directly extract the tree height H.
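The height solution implied by Equations (1) and (2) can be sketched numerically as follows. This is our illustrative solver, not the authors' code: given three (height, diameter) pairs from the part-III interval, it searches for the total height H at which the form factor computed from the (d₁, d₂) pair equals the one from the (d₂, d₃) pair, using a sign-change scan plus bisection.

```python
import math

def stem_form_factor(d_i, d_j, x_i, x_j):
    """Equation (2) with y = d/2: r = 2[ln(y_i) - ln(y_j)] / [ln(x_i) - ln(x_j)]."""
    return 2 * (math.log(d_i / 2) - math.log(d_j / 2)) / (math.log(x_i) - math.log(x_j))

def tree_height(h, d, h_max=60.0, steps=20000):
    """Solve for the total height H such that r1 = r2 (see Section 2.6).

    h: heights above ground (m) of three diameter measurements, lowest first.
    d: the stem diameters (m) measured at those heights.
    """
    (h1, h2, h3), (d1, d2, d3) = h, d

    def mismatch(H):
        # x = H - h_j is the distance from the stem tip to each cross-section.
        r_low = stem_form_factor(d1, d2, H - h1, H - h2)
        r_high = stem_form_factor(d2, d3, H - h2, H - h3)
        return r_low - r_high

    # Scan upward from just above the highest measurement until the mismatch
    # changes sign, then refine that bracket by bisection.
    lo = h3 + 1e-3
    f_lo = mismatch(lo)
    step = (h_max - lo) / steps
    for _ in range(steps):
        hi = lo + step
        f_hi = mismatch(hi)
        if f_lo * f_hi <= 0:
            for _ in range(60):
                mid = (lo + hi) / 2
                if mismatch(lo) * mismatch(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            return (lo + hi) / 2
        lo, f_lo = hi, f_hi
    raise ValueError("no consistent height found below h_max")
```

A quick self-check: diameters generated from a paraboloidal stem y² = p·x (r = 1) with a known total height are recovered by the solver, since both pair-wise form factors equal 1 only at the true H.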

2.7. Extraction of Other Stand Factors

In addition to the location and size (DBH and tree height) of the trees, there are many factors that objectively reflect stand features, such as the tree density, average diameter at breast height, average height of the stand, and stand spatial structure parameters (uniform angle index, dominance, and mingling) [40].
Tree density is the number of trees per unit area and is the most commonly used density index in forestry. For the sample plot point cloud data after individual tree segmentation, we set up a bounding box for the area under test and determined its area S by using the measuring toolbox of the LiDAR 360 software. Meanwhile, the number of trees n in the sample plot was automatically extracted. Then, the tree density N in the area under test was calculated with Equation (4).
N = n / S  (4)
In the equation above, n is the number of trees in the area under test and S is the area under test.
The average diameter at breast height is the diameter corresponding to the mean basal area of the stand and is denoted by Dg. It is also a basic indicator of tree thickness. For the point cloud data after individual tree segmentation, the tree DBH extraction function of the TLS editing tool in the LiDAR 360 software was used to automatically extract the DBH values of all trees in the sample plots and the corresponding total number of trees. Equation (5) was used to calculate the average diameter at breast height Dg.
Dg = √((1/n) ∑ᵢ₌₁ⁿ dᵢ²)  (5)
In the equation above, Dg is the average diameter at breast height and dᵢ (i = 1, 2, 3, …, n) is the diameter at breast height of tree i in the sample plot.
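Equations (4) and (5) are simple enough to state directly in code; the following is an illustrative sketch (the hectare conversion in the density helper is our addition):

```python
import math

def tree_density_per_ha(n_trees, area_m2):
    """Equation (4), N = n / S, expressed as stems per hectare."""
    return n_trees / area_m2 * 10_000

def quadratic_mean_dbh(dbhs):
    """Equation (5): Dg, the diameter of the tree of mean basal area."""
    return math.sqrt(sum(di * di for di in dbhs) / len(dbhs))
```

Note that Dg is a quadratic mean, so it always lies at or above the arithmetic mean DBH; this is what makes it the diameter of the tree with the average cross-sectional area.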
The average height of the stand is a quantitative index of tree growth status and a measurement scale that reflects the average height of the trees. In stand expressions, the tree height corresponding to the average diameter at breast height Dg is selected as the average height of the stand, denoted by HD. For accuracy, the heights of 3 to 5 “average trees” close to the average diameter at breast height are usually measured in a stand inventory. Based on the point cloud data extracted for the DBH, the trees under test were searched and locked according to their DBH. Following the tree height extraction principle in Section 2.6, the height of each locked tree was first calculated separately, and then the arithmetic mean was taken as HD, the final average height of the stand.
The stand spatial structure parameters reflect the competition degree, size difference, and horizontal distribution pattern within a stand. Among these parameters, species mingling reflects the degree of spatial isolation of different species and is represented by M. Dominance reflects the dominance of the tree size distribution and is expressed as U. The uniform angle index reflects the evenness of the distribution of trees in forests and is denoted by W. A forest stand comprises neighbourhood structural units of n trees. We applied nearest-neighbour statistics, which are based on the assumption that the spatial structure of a forest stand is determined by the distribution of specific structural relationships within neighbourhood groups of trees. Based on the definitions and values of the spatial structure parameters in Figure 11 and on the relevant principles and calculation equations proposed by Hui and Hong et al. [41,42,43], the point cloud data were extracted using the LiDAR 360 software. These data were used to calculate the mingling, dominance, and uniform angle index to describe the homogeneity or heterogeneity of trees across species, diameter classes, and spatial arrangements.
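Following the usual definitions in the literature cited above [41,42,43], the three per-tree indices can be sketched for one reference tree and its four nearest neighbours as follows. This is our illustrative implementation under common assumptions: M is the fraction of neighbours of a different species, U is the fraction of neighbours with a larger DBH, and W counts the angles between azimuthally adjacent neighbours that fall below the standard angle α₀ = 72°.

```python
import math

def spatial_structure(ref, neighbors, alpha0=72.0):
    """Per-tree mingling M, dominance U, and uniform angle index W for one
    reference tree; ref/neighbors are dicts with 'x', 'y', 'species', 'dbh'."""
    k = len(neighbors)
    M = sum(n['species'] != ref['species'] for n in neighbors) / k
    U = sum(n['dbh'] > ref['dbh'] for n in neighbors) / k
    # Azimuths from the reference tree to each neighbour, sorted.
    az = sorted(math.degrees(math.atan2(n['y'] - ref['y'],
                                        n['x'] - ref['x'])) % 360
                for n in neighbors)
    # Angles between consecutive neighbours, closing the circle.
    gaps = [az[i + 1] - az[i] for i in range(k - 1)] + [360 - az[-1] + az[0]]
    W = sum(g < alpha0 for g in gaps) / k
    return M, U, W
```

Stand-level values are then obtained by averaging the per-tree indices over all reference trees in the plot, which is how the values in Table 4 would be summarized.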

2.8. Accuracy Assessment

Measured references in the field were used to evaluate the results of tree mapping. Mapping accuracy is based on the plane position errors and elevation measurement errors. Accuracy is the percentage of correct tree position detections [26,44,45]. The accuracy of the DBH, tree height, and position estimates was evaluated using the Bias, root mean squared error (RMSE), relative Bias (rBias), and relative RMSE (rRMSE), which are defined by Equations (6)–(9), respectively.
Bias = (1/n) ∑ᵢ₌₁ⁿ eᵢ = (1/n) ∑ᵢ₌₁ⁿ (yᵢ − yᵣᵢ)  (6)
RMSE = √(∑ᵢ₌₁ⁿ (yᵢ − yᵣᵢ)² / n)  (7)
rBias = Bias / ȳᵣ × 100%  (8)
rRMSE = RMSE / ȳᵣ × 100%  (9)
where yᵢ is the ith estimate, yᵣᵢ is the ith reference, ȳᵣ is the mean of the reference values, and n is the number of estimates. The relative Bias and relative RMSE were also employed to evaluate the mapping result and were calculated by dividing Bias and RMSE by the mean of the reference values.
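Equations (6)–(9) translate directly into code; the following sketch returns all four metrics at once (the function name is ours):

```python
import math

def accuracy_metrics(estimates, references):
    """Equations (6)-(9): Bias, RMSE, and their relative forms (in %)."""
    n = len(estimates)
    errors = [y - yr for y, yr in zip(estimates, references)]
    bias = sum(errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    ref_mean = sum(references) / n
    return bias, rmse, 100 * bias / ref_mean, 100 * rmse / ref_mean
```

Note that errors of opposite sign cancel in the Bias but not in the RMSE, which is why both are reported: Bias reveals systematic over- or underestimation, while RMSE captures the overall error magnitude.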

3. Results

3.1. Estimation and Accuracy Evaluation of the Tree Position

The image-based 3D point cloud of the forest stand is the basis for subsequent parameter extraction and calculation. Based on a comparison with the field measurements, we found that the image-based tree position extraction achieved high positional accuracy (Table 2). For the position extraction results of the three test plots, the maximum Bias was 0.190 m, the minimum Bias was 0.137 m, the maximum RMSE was 0.201 m, and the minimum RMSE was 0.162 m. Our model accuracy was within the range of the standard accuracy requirements of a forest resource inventory [45].

3.2. Estimation and Accuracy Evaluation of the DBH and Tree Height

The results of the automatic processing of the photogrammetric point cloud showed a high precision for the estimation of the DBH and height of the sampled trees (Table 3). The DBH accuracy for the three experimental plots was satisfactory because the maximum value of rBias was 3.55% and the minimum value was 2.05% while the maximum value of rRMSE was 4.51% and the minimum value was 3.07%. This accuracy was within the range of the relevant accuracy requirements of a forest resource inventory.
For the tree heights, we obtained a bias greater than 2.2 m and an RMSE greater than 2.5 m relative to the total station measurements, which indicates that the tree height error of the image-based point cloud slightly exceeds the relevant requirements of a forest resource inventory. Across the three experimental plots, rBias ranged from −10.28% to 11.00% and rRMSE from 11.26% to 11.91%. Even with careful ground measurement, many factors influence tree height estimation and contribute to the large extraction errors. Although the tree height accuracy was not ideal, this image-based estimation method remains a useful reference.

3.3. Calculation of Other Stand Factors

After acquiring the basic parameters of the stand (i.e., tree position, DBH, tree height, and plot area) from the 3D point cloud, we calculated the tree density, average DBH, average height, uniform angle index, dominance, species mingling, and other important stand indicators according to the relevant principles and calculation equations. These indicators are valuable for describing the overall structure of the stand. The results for the three sample plots are shown in Table 4. The tree densities differed little and were low overall, indicating that the forest in the study area is sparse. The average DBH and average height of the three sample plots were very close, indicating that tree growth across the study area is similar. The uniform angle index, dominance, and species mingling were also essentially the same, indicating that the spatial structure of the stands differs little across the study area and the stand characteristics are relatively consistent.
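Given per-tree positions, DBHs, and heights, the aggregate factors reduce to simple statistics, and the uniform angle index W follows the standard definition (for each reference tree, the share of the four angles between successive nearest neighbours that fall below the 72° standard angle). A simplified numpy sketch under that definition, without edge correction (all names are ours):

```python
import numpy as np

def stand_summary(xy, dbh, height, plot_area_ha):
    """Stem density (trees/ha), mean DBH, mean height, and the mean
    uniform angle index W of one plot."""
    xy = np.asarray(xy, dtype=float)
    n = len(xy)
    w_vals = []
    for i in range(n):
        dist = np.hypot(*(xy - xy[i]).T)
        nbrs = np.argsort(dist)[1:5]          # four nearest neighbours
        v = xy[nbrs] - xy[i]
        az = np.sort(np.arctan2(v[:, 1], v[:, 0]))
        # the four angles between successive neighbours sum to 360 deg
        gaps = np.diff(np.append(az, az[0] + 2 * np.pi))
        w_vals.append(np.mean(np.degrees(gaps) < 72.0))
    return {
        "density_per_ha": n / plot_area_ha,
        "mean_dbh": float(np.mean(dbh)),
        "mean_height": float(np.mean(height)),
        "mean_W": float(np.mean(w_vals)),
    }
```

Dominance and species mingling follow the same nearest-neighbour loop, comparing the reference tree's DBH and species against its four neighbours instead of the angles.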

3.4. Comparison of Work Efficiency

For the image-based point cloud model, the data processing of the three plots comprised two parts: field photography and internal (office) processing. The numbers of images collected in sample plots No. 1 and No. 2 were similar, while more images were taken in plot No. 3; as expected, the corresponding processing time for plot No. 3 was longer than for the other two (Table 5). Although plots No. 1 and No. 2 yielded similar numbers of images, the internal processing time was shorter for plot No. 1, owing to differences in image overlap and in the complexity of the photographed stand. In general, building the 3D point cloud of a forest plot required far more time for internal image processing than for field image collection, by a factor of approximately 30.

4. Discussion

The field tests of the three sample plots show the great potential of combining RTK with CCD continuous photogrammetry for the automatic calculation of tree parameters in forest management inventories. This is particularly significant for the fine monitoring and management of rare and high-value tree species. The approach is suited to the intensive monitoring of small plots and could be extended to larger plots when needed. The focus of the study was to develop a new terrestrial photogrammetry device for creating 3D point cloud models for sample plot surveys. As shown in Figures 3 and 4, the device is based on the principle of stereoscopic photogrammetry: the RTK module collects the linear (positional) elements in real time and the MEMS module determines the angular elements in real time. During plot observation, the designed route shown in Figure 5 must be followed strictly to ensure the accuracy of data collection, since these data are the key inputs for the calculations derived from the images. To check the reliability of the system, we compared the model estimates with the field data and found very high accuracy.
The most significant advantage of the proposed method is the combination of RTK real-time positioning with terrestrial photogrammetry based on Structure from Motion (SfM) [33,46,47], which computes the exterior orientation of each image (the motion) and performs feature matching to build a real-coordinate 3D point cloud (the structure). Compared with existing methods that use simple photogrammetry to build a model in a relative coordinate system [26,38], our method can directly extract the real geographical coordinates of points without external auxiliary equipment or ground objects to restore the model scale. The extraction accuracy (RMSE) of the tree positions in the three test plots was 0.201 m, 0.162 m, and 0.162 m, in close agreement with the field data from the same plots (Table 2) and consistent with the accuracy requirements of the forestry survey [48], with slight deviations. These deviations may be caused by the field measurement method: surveyed positions correspond to the closest point at the base of the stem, whereas the automatically detected trees were identified by their stem centers in the point cloud. For assessing the number of trees per hectare or per stand and the overall structure of the forest, these errors are negligible [33]. Foresters also need to consider the accuracy of the GNSS receiver under forest canopies: positioning accuracy is related to canopy characteristics such as tree density, basal area, and stem volume, and can therefore affect the measurements [49,50,51].
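One common way to express the georeferencing that RTK makes unnecessary is a similarity (Helmert) transform from the relative SfM frame to geographic coordinates, solvable in closed form by Umeyama's method from a few matched points. This sketch is not the authors' implementation — their RTK camera stations remove the need for it — but it illustrates the scale-recovery step that scale-free photogrammetry otherwise requires:

```python
import numpy as np

def similarity_transform(model_pts, rtk_pts):
    """Closed-form least-squares scale s, rotation R, translation t
    with rtk ~= s * R @ model + t (Umeyama's method)."""
    A = np.asarray(model_pts, dtype=float)   # points in the SfM model frame
    B = np.asarray(rtk_pts, dtype=float)     # matching geographic points
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
    Ac, Bc = A - mu_a, B - mu_b
    U, S, Vt = np.linalg.svd(Bc.T @ Ac / len(A))
    # reflection guard keeps R a proper rotation
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.ones(A.shape[1]); D[-1] = d
    R = (U * D) @ Vt
    s = (S * D).sum() / A.var(axis=0).sum()
    t = mu_b - s * R @ mu_a
    return s, R, t
```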
Moreover, the more complex the canopy structure, the greater the chance of multipath effects on the positioning signal [52]. The accuracy of the positioning signal under the canopy directly affects the formation of the plot's 3D point cloud and the extraction accuracy of the point cloud parameters [53]. Our method is therefore better suited to low- and medium-density forests than to high-density forests. The DBH values extracted from the image point cloud were compared with the field data for all three sample plots (Figure 12), and the error behavior of the diameter estimates was very similar across plots: the RMSE was 1.13 cm in Figure 12a, 0.93 cm in Figure 12b, and 0.92 cm in Figure 12c, with a good overall fit following the same pattern. Extracting the DBH through nonlinear fitting of the point cloud model improves on the simple circle-fitting method [9,35,36,54] and raises the fitting accuracy. The difficulty and accuracy of DBH extraction also depend on the topography of the observed stand. For plots with simple, flat terrain, DBH extraction was straightforward, the breast-height position was located accurately, and the DBH accuracy was high. For plots with more complex terrain, batch extraction of the DBH is not recommended; individual trees should be extracted separately, since it is otherwise difficult to guarantee the precision of the DBH extraction.
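The simple circle-fitting baseline that the nonlinear point cloud fit improves upon can be written as an algebraic (Kåsa) least-squares fit of the breast-height slice; a minimal sketch (the function name is ours), which also works for the partial stem arcs visible from a terrestrial viewpoint:

```python
import numpy as np

def fit_circle(x, y):
    """Kasa algebraic least-squares circle fit to a breast-height
    (1.3 m) slice of stem points; returns (xc, yc, diameter)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    # (x-xc)^2 + (y-yc)^2 = r^2 rewritten as a linear system in
    # (xc, yc, c) with c = r^2 - xc^2 - yc^2
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + xc ** 2 + yc ** 2)
    return xc, yc, 2.0 * r
```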
Figure 13 compares the tree heights estimated from the image point cloud (following the principle of Section 2.6) with those measured by the total station for the three sample plots. The RMSE was 2.41 m in Figure 13a, 2.46 m in Figure 13b, and 2.51 m in Figure 13c, and the heights were generally underestimated. Although we devoted considerable effort to tree height extraction from terrestrial photogrammetry and attempted a variety of extraction methods, the overall accuracy (RMSE) remained inferior, owing to the limitations of terrestrial photography itself: for upper-stem positions that cannot be photographed from the ground, terrestrial photogrammetry clearly lags behind UAV photogrammetry in both feasibility and measurement accuracy [33,55,56]. In view of this limitation, we introduced the Kunze stem curve equation into the point cloud model to estimate individual tree heights: based on the effective point cloud data in the lower part of the canopy, the height of the occluded part is fitted according to the curve growth law of the trunk section. Tree height extraction from ground-based continuous photogrammetry is thus possible even where the canopy is occluded, but the accuracy and practical effect are not guaranteed. Possible reasons include the distribution of effective ground-photography point clouds, the degree of canopy closure, the mix of coniferous and broadleaved species, and the growth form of the trees in the plot.
To improve the height extraction accuracy, it is necessary to obtain an effective 3D point cloud of the entire tree, including both the root and the top.
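The height completion described above can be sketched by assuming the classical Kunze taper form d² = p·(H − h)ʳ, where h is height on the stem, d the diameter there, and H the unknown total height. This is our illustrative realization of the fitting idea, not the authors' exact Section 2.6 procedure: for each candidate H the model is linear in log space, so a one-dimensional search over H suffices.

```python
import numpy as np

def estimate_height_kunze(h, d, h_max_factor=3.5, steps=2000):
    """Estimate total tree height H from lower-stem diameters d (m)
    at heights h (m), assuming the Kunze stem-curve form
    d^2 = p * (H - h)^r.  For a fixed candidate H the model is
    linear in log space (log d^2 = log p + r * log(H - h));
    keep the H with the smallest least-squares residual."""
    h, d = np.asarray(h, dtype=float), np.asarray(d, dtype=float)
    best_rss, best_H = np.inf, None
    for H in np.linspace(h.max() + 0.01, h_max_factor * h.max(), steps):
        X = np.column_stack([np.ones_like(h), np.log(H - h)])
        y = np.log(d ** 2)
        coef, res, rank, _ = np.linalg.lstsq(X, y, rcond=None)
        rss = res[0] if res.size else float(np.sum((X @ coef - y) ** 2))
        if rss < best_rss:
            best_rss, best_H = rss, H
    return best_H
```

The grid bounds (`h_max_factor`, `steps`) are arbitrary choices for the sketch; in practice the search range would be constrained by stand-level prior knowledge.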
Beyond its accuracy, our continuous terrestrial photogrammetry method for sample plot surveys is efficient and cost-effective in the field. Because there is no need to lay out control points or prepare landmarks in advance, the method saves considerable field time and greatly reduces both personnel requirements and labor costs. The need for additional data acquisition equipment or external expert knowledge is likewise minimized. However, image data processing places high demands on software and hardware; as Table 5 shows for the internal and external processing times, smoother processing software and more capable hardware are needed. With such optimization, this methodology could serve as a good tool for fast forest inventory, even in the commercial sector. Although the method recovers tree positions for forest structure modelling and its DBH measurements closely match the field data, its main limitation is that it applies only to the below-crown layer of the sample plot; this could be addressed by also obtaining point clouds above the crown. In high-density forests, the applicability and positioning accuracy of the method may also suffer: even combined GNSS and Inertial Measurement Unit (IMU) positioning struggles under dense canopies [57,58]. In low- and medium-density forest areas, the positioning signal may be slightly disturbed, but the impact is negligible and can be mitigated by lengthening the acquisition time at each station, adjusting the observation scheme, or placing auxiliary markers.

5. Conclusions

In this work, we presented a novel method for extracting sample plot parameters based on continuous terrestrial photogrammetry. We established 3D point cloud models using a self-developed terrestrial photogrammetry instrument, commercial software, and sample plot parameter extraction methods to provide high-resolution, realistic stand point clouds, to describe individual trees, and to estimate common sample plot survey parameters. Compared with previous studies, the most significant contribution of this methodology is a low-cost, high-precision, and stable model for extracting forest sample plot parameters that obtains the geographical coordinates of trees directly via the RTK module, saving substantial manpower and material investment in field surveys. It may serve as a cheap and fast alternative to terrestrial field measurement, particularly for the quick and precise determination of individual tree and stand attributes in high-value timber stands. Our results show that the methodology can be widely applied to tree position and DBH measurement, and its tree height estimation method can also serve as a reference.
Nevertheless, several aspects still need improvement, such as how to establish a high-precision 3D point cloud from fewer images. Further studies are recommended to assess the accuracy of stem curve estimation from image-based point cloud data and to explore different image collection strategies and setups in the field.

Author Contributions

Conceptualization, J.L. and Z.F.; Methodology, J.L.; Software, L.Y.; Validation, Z.F., J.L. and L.Y.; Formal Analysis, J.L. and L.Y.; Investigation, J.L., Z.Z., Z.C.; Resources, Z.F.; Data Curation, J.L. and L.Y.; Writing-Original Draft Preparation, J.L.; Writing-Review & Editing, A.M. and T.K.; Visualization, L.Y.; Supervision, Z.F. and J.L.; Project Administration, Z.F. and J.L.; Funding Acquisition, Z.F.

Funding

This research was jointly supported by the National Natural Science Foundation of China (No. U1710123), the Natural Science Foundation of Beijing (Key Program, No. 6161001), and the medium long-term project of the “Precision Forestry Key Technology and Equipment Research” (No. 2015ZCQ-LX-01).

Acknowledgements

We are grateful to the staff of the Precision Forestry Key Laboratory of Beijing, Beijing Forestry University. We also thank the journal Remote Sensing and its reviewers for their prompt responses; the reviewers' comments greatly improved the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Corona, P.; Marchetti, M. Outlining multi-purpose forest inventories to assess the ecosystem approach in forestry. Plant Biosyst. 2007, 141, 243–251.
2. Corona, P. Integration of Forest Mapping and Inventory to Support Forest Management; The Italian Society of Silviculture and Forest Ecology: Viterbo, Italy, 2010; pp. 59–64.
3. Kankare, V.; Liang, X.; Vastaranta, M.; Yu, X.; Holopainen, M.; Hyyppa, J. Diameter distribution estimation with laser scanning based multisource single tree inventory. ISPRS J. Photogramm. Remote Sens. 2015, 108, 161–171.
4. Liang, X.; Kankare, V.; Hyyppa, J.; Wang, Y.; Kukko, A.; Haggren, H.; Yu, X.; Kaartinen, H.; Jaakkola, A.; Guan, F.; et al. Terrestrial laser scanning in forest inventories. ISPRS J. Photogramm. Remote Sens. 2016, 115, 63–77.
5. Iizuka, K.; Yonehara, T.; Itoh, M.; Kosugi, Y. Estimating Tree Height and Diameter at Breast Height (DBH) from Digital Surface Models and Orthophotos Obtained with an Unmanned Aerial System for a Japanese Cypress (Chamaecyparis obtusa) Forest. Remote Sens. 2018, 10, 13.
6. Mulla, D.J. Twenty five years of remote sensing in precision agriculture: Key advances and remaining knowledge gaps. Biosyst. Eng. 2013, 114, 358–371.
7. Magnussen, S.; Naesset, E.; Kaendler, G.; Adler, P.; Renaud, J.P.; Gobakken, T. A functional regression model for inventories supported by aerial laser scanner data or photogrammetric point clouds. Remote Sens. Environ. 2016, 184, 496–505.
8. Maselli, F.; Chiesi, M.; Mura, M.; Marchetti, M.; Corona, P.; Chirici, G. Combination of optical and LiDAR satellite imagery with forest inventory data to improve wall-to-wall assessment of growing stock in Italy. Int. J. Appl. Earth Obs. 2014, 26, 377–386.
9. Forsman, M.; Borlin, N.; Holmgren, J. Estimation of Tree Stem Attributes Using Terrestrial Photogrammetry with a Camera Rig. Forests 2016, 7, 61.
10. White, J.C.; Coops, N.C.; Wulder, M.A.; Vastaranta, M.; Hilker, T.; Tompalski, P. Remote Sensing Technologies for Enhancing Forest Inventories: A Review. Can. J. Remote Sens. 2016, 42, 619–641.
11. Cunliffe, A.M.; Brazier, R.E.; Anderson, K. Ultra-fine grain landscape-scale quantification of dryland vegetation structure with drone-acquired structure-from-motion photogrammetry. Remote Sens. Environ. 2016, 183, 129–143.
12. Wallace, L.; Hillman, S.; Reinke, K.; Hally, B. Non-destructive estimation of above-ground surface and near-surface biomass using 3D terrestrial remote sensing techniques. Methods Ecol. Evol. 2017, 8, 1607–1616.
13. Saarinen, N.; Vastaranta, M.; Nasi, R.; Rosnell, T.; Hakala, T.; Honkavaara, E.; Wulder, M.A.; Luoma, V.; Tommaselli, A.M.G.; Imai, N.N.; et al. Assessing Biodiversity in Boreal Forests with UAV-Based Photogrammetric Point Clouds and Hyperspectral Imaging. Remote Sens. 2018, 10.
14. Jaakkola, A.; Hyyppa, J.; Yu, X.; Kukko, A.; Kaartinen, H.; Liang, X.; Hyyppa, H.; Wang, Y. Autonomous Collection of Forest Field Reference—The Outlook and a First Step with UAV Laser Scanning. Remote Sens. 2017, 9.
15. White, J.C.; Tompalski, P.; Coops, N.C.; Wulder, M.A. Comparison of airborne laser scanning and digital stereo imagery for characterizing forest canopy gaps in coastal temperate rainforests. Remote Sens. Environ. 2018, 208, 1–14.
16. Saarinen, N.; Kankare, V.; Vastaranta, M.; Luoma, V.; Pyorala, J.; Tanhuanpaa, T.; Liang, X.; Kaartinen, H.; Kukko, A.; Jaakkola, A.; et al. Feasibility of Terrestrial laser scanning for collecting stem volume information from single trees. ISPRS J. Photogramm. 2017, 123, 140–158.
17. Su, Y.; Guo, Q.; Xue, B.; Hu, T.; Alvarez, O.; Tao, S.; Fang, J. Spatial distribution of forest aboveground biomass in China: Estimation through combination of spaceborne lidar, optical imagery, and forest inventory data. Remote Sens. Environ. 2016, 173, 187–199.
18. Pang, Y.; Li, Z.; Ju, H.; Lu, H.; Jia, W.; Si, L.; Guo, Y.; Liu, Q.; Li, S.; Liu, L.; et al. LiCHy: The CAF's LiDAR, CCD and Hyperspectral Integrated Airborne Observation System. Remote Sens. 2016, 8.
19. Yang, B.; Dai, W.; Dong, Z.; Liu, Y. Automatic Forest Mapping at Individual Tree Levels from Terrestrial Laser Scanning Point Clouds with a Hierarchical Minimum Cut Method. Remote Sens. 2016, 8, 372.
20. Zhang, J.; Lin, X. Advances in fusion of optical imagery and LiDAR point cloud applied to photogrammetry and remote sensing. Int. J. Image Data Fusion 2017, 8, 1–31.
21. Miller, J.; Morgenroth, J.; Gomez, C. 3D modelling of individual trees using a handheld camera: Accuracy of height, diameter and volume estimates. Urban For. Urban Green. 2015, 14, 932–940.
22. Bauwens, S.; Fayolle, A.; Gourlet-Fleury, S.; Ndjele, L.M.; Mengal, C.; Lejeune, P. Terrestrial photogrammetry: A non-destructive method for modelling irregularly shaped tropical tree trunks. Methods Ecol. Evol. 2017, 8, 460–471.
23. Liang, X.; Wang, Y.; Jaakkola, A.; Kukko, A.; Kaartinen, H.; Hyyppa, J.; Honkavaara, E.; Liu, J. Forest Data Collection Using Terrestrial Image-Based Point Clouds from a Handheld Camera Compared to Terrestrial and Personal Laser Scanning. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5117–5132.
24. Hortobagyi, B.; Corenblit, D.; Vautiei, F.; Steiger, J.; Roussel, E.; Burkart, A.; Peiry, J. A multi-scale approach of fluvial biogeomorphic dynamics using photogrammetry. J. Environ. Manag. 2017, 202, 348–362.
25. Garcia-Gago, J.; Gonzalez-Aguilera, D.; Gomez-Lahoz, J.; Ignacio San Jose-Alonso, J. A Photogrammetric and Computer Vision-Based Approach for Automated 3D Architectural Modeling and Its Typological Analysis. Remote Sens. 2014, 6, 5671–5691.
26. Liang, X.; Jaakkola, A.; Wang, Y.; Hyyppa, J.; Honkavaara, E.; Liu, J.; Kaartinen, H. The Use of a Hand-Held Camera for Individual Tree 3D Mapping in Forest Sample Plots. Remote Sens. 2014, 6, 6587–6603.
27. Huang, S.; Zhang, Z.; Ke, T.; Tang, M.; Xu, X. Scanning Photogrammetry for Measuring Large Targets in Close Range. Remote Sens. 2015, 7, 10042–10077.
28. Fei, L.; Yan, L.; Chen, C.; Ye, Z.; Zhou, J. OSSIM: An Object-Based Multiview Stereo Algorithm Using SSIM Index Matching Cost. IEEE Trans. Geosci. Remote Sens. 2017, 55, 6937–6949.
29. Ruzgiene, B.; Berteska, T.; Gecyte, S.; Jakubauskiene, E.; Aksamitauskas, V.C. The surface modelling based on UAV Photogrammetry and qualitative estimation. Measurement 2015, 73, 619–627.
30. James, M.R.; Robson, S.; D'Oleire-Oltmanns, S.; Niethammer, U. Optimising UAV topographic surveys processed with structure-from-motion: Ground control quality, quantity and bundle adjustment. Geomorphology 2017, 280, 51–66.
31. Song, H.; Yang, C.; Zhang, J.; Hoffmann, W.C.; He, D.; Thomasson, J.A. Comparison of mosaicking techniques for airborne images from consumer-grade cameras. J. Appl. Remote Sens. 2016, 10.
32. Surovy, P.; Yoshimoto, A.; Panagiotidis, D. Accuracy of Reconstruction of the Tree Stem Surface Using Terrestrial Close-Range Photogrammetry. Remote Sens. 2016, 8.
33. Mikita, T.; Janata, P.; Surovy, P. Forest Stand Inventory Based on Combined Aerial and Terrestrial Close-Range Photogrammetry. Forests 2016, 7, 165.
34. Pierzchala, M.; Giguere, P.; Astrup, R. Mapping forests using an unmanned ground vehicle with 3D LiDAR and graph-SLAM. Comput. Electron. Agric. 2018, 145, 217–225.
35. Koren, M.; Mokros, M.; Bucha, T. Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods. Int. J. Appl. Earth Obs. 2017, 63, 122–128.
36. Forsman, M.; Borlin, N.; Olofsson, K.; Reese, H.; Holmgren, J. Bias of cylinder diameter estimation from ground-based laser scanners with different beam widths: A simulation study. ISPRS J. Photogramm. Remote Sens. 2018, 135, 84–92.
37. Stankova, T.V.; Dieguez-Aranda, U. A two-component dynamic stand model of natural thinning. For. Ecol. Manag. 2017, 385, 264–280.
38. Yoshimoto, A.; Surovy, P.; Konoshima, M.; Kurth, W. Constructing tree stem form from digitized surface measurements by a programming approach within discrete mathematics. Trees 2014, 28, 1577–1588.
39. Zhao, F.; Feng, Z.; Gao, X.; Zheng, J.; Wang, Z. Measure method of tree height and volume using total station under canopy cover condition. Trans. Chin. Soc. Agric. Eng. 2014, 30, 182–190.
40. Kansanen, K.; Vauhkonen, J.; Lahivaara, T.; Mehtatalo, L. Stand density estimators based on individual tree detection and stochastic geometry. Can. J. For. Res. 2016, 46, 1359–1366.
41. Hong, H.N.; Erfanifard, Y.; Petritan, I.C. Nearest Neighborhood Characteristics of a Tropical Mixed Broadleaved Forest Stand. Forests 2018, 9, 33.
42. Zhao, Z.; Hui, G.; Hu, Y.; Wang, H.; Zhang, G.; von Gadow, K. Testing the significance of different tree spatial distribution patterns based on the uniform angle index. Can. J. For. Res. 2014, 44, 1419–1425.
43. Li, Y.; Hui, G.; Zhao, Z.; Hu, Y. The bivariate distribution characteristics of spatial structure in natural Korean pine broad-leaved forest. J. Veg. Sci. 2012, 23, 1180–1190.
44. Liang, X.; Hyyppa, J.; Kukko, A.; Kaartinen, H.; Jaakkola, A.; Yu, X. The Use of a Mobile Laser Scanning System for Mapping Large Forest Plots. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1504–1508.
45. Berveglieri, A.; Tommaselli, A.; Liang, X.; Honkavaara, E. Photogrammetric measurement of tree stems from vertical fisheye images. Scand. J. For. Res. 2017, 32, 737–747.
46. Mlambo, R.; Woodhouse, I.H.; Gerard, F.; Anderson, K. Structure from Motion (SfM) Photogrammetry with Drone Data: A Low Cost Method for Monitoring Greenhouse Gas Emissions from Forests in Developing Countries. Forests 2017, 8, 68.
47. Ingwer, P.; Gassen, F.; Püst, S.; Duhn, M.; Schälicke, M.; Müller, K.; Ruhm, H.; Rettig, J.; Hasche, E.; Fischer, A.; et al. Practical usefulness of structure from motion (SfM) point clouds obtained from different consumer cameras. Int. Soc. Opt. Photonics 2015, 9411.
48. The State Forestry Administration of China. Technical Regulations on the Continuous Inventory of National Forest Resources. Available online: https://wenku.baidu.com/view/e0b332884bfe04a1b0717fd5360cba1aa9118c35.html (accessed on 17 August 2018).
49. Ordonez, C.; Rodriguez-Perez, J.R.; Moreira, J.J.; Matias, J.M.; Sanz-Ablanedo, E. Machine Learning Techniques Applied to the Assessment of GPS Accuracy under the Forest Canopy. J. Surv. Eng. 2011, 137, 140–149.
50. Simwanda, M.; Wing, M.G.; Sessions, J. Evaluating Global Positioning System Accuracy for Forest Biomass Transportation Tracking within Varying Forest Canopy. West. J. Appl. For. 2011, 26, 165–173.
51. Bakula, M.; Przestrzelski, P.; Kazmierczak, R. Reliable Technology of Centimeter GPS/GLONASS Surveying in Forest Environments. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1029–1038.
52. Danskin, S.; Bettinger, P.; Jordan, T. Multipath Mitigation under Forest Canopies: A Choke Ring Antenna Solution. For. Sci. 2009, 55, 109–116.
53. Monnet, J.; Mermin, E. Cross-Correlation of Diameter Measures for the Co-Registration of Forest Inventory Plots with Airborne Laser Scanning Data. Forests 2014, 5, 2307–2326.
54. Forsman, M.; Holmgren, J.; Olofsson, K. Tree Stem Diameter Estimation from Mobile Laser Scanning Using Line-Wise Intensity-Based Clustering. Forests 2016, 7, 206.
55. Frey, J.; Kovach, K.; Stemmler, S.; Koch, B. UAV Photogrammetry of Forests as a Vulnerable Process. A Sensitivity Analysis for a Structure from Motion RGB-Image Pipeline. Remote Sens. 2018, 10.
56. Wallace, L.; Lucieer, A.; Malenovsky, Z.; Turner, D.; Vopenka, P. Assessment of Forest Structure Using Two UAV Techniques: A Comparison of Airborne Laser Scanning and Structure from Motion (SfM) Point Clouds. Forests 2016, 7, 62.
57. Kaartinen, H.; Hyyppa, J.; Vastaranta, M.; Kukko, A.; Jaakkola, A.; Yu, X.; Pyorala, J.; Liang, X.; Liu, J.; Wang, Y.; et al. Accuracy of Kinematic Positioning Using Global Satellite Navigation Systems under Forest Canopies. Forests 2015, 6, 3218–3236.
58. Kukko, A.; Kaijaluoto, R.; Kaartinen, H.; Lehtola, V.V.; Jaakkola, A.; Hyyppa, J. Graph SLAM correction for single scanner MLS forest data under boreal forest canopy. ISPRS J. Photogramm. 2017, 132, 199–209.
Figure 1. Workflow diagram exhibiting the stepwise processing of the data.
Figure 2. Study area location map.
Figure 3. Surface photogrammetry appearance.
Figure 4. Two CCD lens placement details.
Figure 5. Terrestrial continuous photogrammetry route. The site marked with red indicates two points to be noted: one point is that the coordinates of the site are collected accurately and the other point is that the route to the site ensures the continuity of the captured image.
Figure 6. Detailed 3D view of the point cloud from CTP. (a), (b), and (c) show the 3D point cloud renderings established by CTP of plots 1, 2, and 3, respectively.
Figure 7. Display diagram of the single tree partition effect in the sample plots. (a), (b), and (c) represent the point cloud models of plots 1, 2, and 3, respectively, after the completion of the single-segmentation operation.
Figure 8. Single tree diameter extraction based on point cloud data. The orange region in the red box on the right marks the breast-height position, and the red box on the left shows the point cloud of the intercepted breast-height slice from which the DBH is automatically extracted. Units are meters.
Figure 9. Batch extraction of the DBH in three test plots. (a), (b), and (c) show a schematic diagram of a cross-section of a point cloud in a breast height position on a point cloud model after the single-wood segmentation of samples 1, 2, and 3, respectively, where the red circle on the trunk is the breast height position.
Figure 10. Schematic diagram of the tree height calculated by the Kunze M. stem curve equation.
Figure 11. Definition of the spatial structure parameters of mingling, dominance, and the uniform angle index.
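The three spatial structure parameters of Figure 11 are conventionally computed from each reference tree and its four nearest neighbours (after Gadow and Hui). The sketch below is an assumed implementation: the neighbour count of four, the 72° standard angle, and the convention that dominance counts larger-DBH neighbours come from the general structural-parameter literature, not from this paper.

```python
import numpy as np

STD_ANGLE = 72.0  # degrees; standard angle commonly used with n = 4 neighbours

def structure_indices(i, xy, dbh, species, n=4):
    """Mingling M, dominance U and uniform angle index W for reference
    tree i, from its n nearest neighbours (sign conventions for
    dominance vary between papers; larger-neighbour counting assumed).

    xy      -- (m, 2) array of stem positions
    dbh     -- (m,) array of diameters
    species -- (m,) array of species codes
    """
    d = np.hypot(xy[:, 0] - xy[i, 0], xy[:, 1] - xy[i, 1])
    d[i] = np.inf                      # exclude the reference tree itself
    nb = np.argsort(d)[:n]             # indices of the n nearest neighbours
    # Mingling: share of neighbours belonging to a different species.
    M = np.mean(species[nb] != species[i])
    # Dominance: share of neighbours with a larger DBH than tree i
    # (0 = locally dominant, 1 = suppressed).
    U = np.mean(dbh[nb] > dbh[i])
    # Uniform angle index: share of angles between successive neighbour
    # azimuths (seen from tree i) smaller than the standard angle.
    az = np.sort(np.degrees(np.arctan2(xy[nb, 1] - xy[i, 1],
                                       xy[nb, 0] - xy[i, 0])) % 360.0)
    gaps = np.diff(np.append(az, az[0] + 360.0))
    W = np.mean(gaps < STD_ANGLE)
    return M, U, W
```

With n = 4 each index can only take the values 0, 0.25, 0.5, 0.75 and 1 per tree, which matches the quarter-step plot averages reported in Table 4.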
Figure 12. Comparison of the DBH values extracted from the image point clouds with the field-measured values. (a), (b), and (c) show the comparisons for plots 1, 2, and 3, respectively.
Figure 13. Comparison of the tree height estimates from the image point clouds with the field-measured values. (a), (b), and (c) show the comparisons for plots 1, 2, and 3, respectively.
Table 1. Feature parameters of sample plots.
| Stand Features | Average DBH (cm) | Average Tree Height (m) | Average Height of Crown (m) | Average Tree Density (trees/hectare) | Average Slope | Average Slope Position | Average Slope Orientation |
|---|---|---|---|---|---|---|---|
| Parameter values | 27.7 | 21.00 | 18.37 | 372 | Grade I ¹ | Flat ² | No slope ³ |

¹ The slope is divided into grades I–VI, of which grade I is a flat slope. ² The slope position is divided into seven types: ridge, upper, middle, lower, valley, flat, and whole slope. ³ The slope orientation is divided into nine directions: east, south, west, north, northeast, southeast, northwest, southwest, and no slope.
Table 2. Comparison of the tree position extraction results by utilizing image-based and field measurements.
| Sample Number | Bias (m) | RMSE (m) |
|---|---|---|
| 1 | 0.190 | 0.201 |
| 2 | 0.152 | 0.162 |
| 3 | 0.137 | 0.162 |
Table 3. Comparison of the DBH and height estimates by utilizing image-based estimates and field measurements.
| Sample Number | Category | Bias | rBias (%) | RMSE | rRMSE (%) |
|---|---|---|---|---|---|
| 1 | DBH (cm) | 0.89 | 3.55 | 1.13 | 4.51 |
| 1 | Height (m) | −2.22 | −11.00 | 2.41 | 11.91 |
| 2 | DBH (cm) | 0.58 | 2.05 | 0.93 | 3.30 |
| 2 | Height (m) | −2.21 | −10.28 | 2.46 | 11.47 |
| 3 | DBH (cm) | 0.78 | 2.61 | 0.92 | 3.07 |
| 3 | Height (m) | −2.33 | −10.44 | 2.51 | 11.26 |
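The bias, rBias, RMSE and rRMSE columns above can be reproduced from paired image-based estimates and field measurements. A sketch, assuming (as the paper does not restate explicitly) that the relative values are percentages of the mean field measurement:

```python
import numpy as np

def accuracy_stats(estimated, measured):
    """Bias, rBias (%), RMSE and rRMSE (%) of estimates against
    field measurements; relative values are taken as a percentage
    of the mean field measurement (an assumption here)."""
    e = np.asarray(estimated, dtype=float)
    m = np.asarray(measured, dtype=float)
    diff = e - m
    bias = diff.mean()                      # mean signed error
    rmse = np.sqrt(np.mean(diff ** 2))      # root mean square error
    mean_m = m.mean()
    return bias, 100.0 * bias / mean_m, rmse, 100.0 * rmse / mean_m
```

A negative bias for tree height, as in every plot above, means the point-cloud estimates are systematically lower than the field measurements, consistent with treetops being occluded by the canopy.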
Table 4. Calculation results of other stand factors based on image point cloud data.
| Sample Number | Tree Density (trees/hectare) | Average DBH of Stand (cm) | Average Height of Stand (m) | Uniform Angle Index | Dominance | Species Mingling |
|---|---|---|---|---|---|---|
| 1 | 376 | 25.9 | 18.00 | 0.500 | 0.688 | 0.025 |
| 2 | 330 | 28.7 | 19.30 | 0.250 | 0.563 | 0.075 |
| 3 | 435 | 30.8 | 20.00 | 0.250 | 0.625 | 0.000 |
Table 5. Comparison of the efficiency of constructing 3D point clouds based on images.
| Sample Number | Image Quantity | Field Photography Time (min) | Internal Processing Time (min) | Overall Work Time (min) |
|---|---|---|---|---|
| 1 | 804 | 13 | 417 | 430 |
| 2 | 808 | 13 | 347 | 360 |
| 3 | 862 | 15 | 483 | 498 |

Liu, J.; Feng, Z.; Yang, L.; Mannan, A.; Khan, T.U.; Zhao, Z.; Cheng, Z. Extraction of Sample Plot Parameters from 3D Point Cloud Reconstruction Based on Combined RTK and CCD Continuous Photography. Remote Sens. 2018, 10, 1299. https://doi.org/10.3390/rs10081299