Article

Semantic Segmentation Guided Coarse-to-Fine Detection of Individual Trees from MLS Point Clouds Based on Treetop Points Extraction and Radius Expansion

1 Institute of Computer Science and Engineering, Xi'an University of Technology, No. 5 South Jinhua Road, Xi'an 710048, China
2 Shaanxi Key Laboratory of Network Computing and Security Technology, Xi'an 710048, China
3 School of Artificial Intelligence and Computer Science, Jiangnan University, 1800 Lihu Road, Wuxi 214122, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(19), 4926; https://doi.org/10.3390/rs14194926
Submission received: 27 August 2022 / Revised: 23 September 2022 / Accepted: 27 September 2022 / Published: 1 October 2022
(This article belongs to the Special Issue Applications of Individual Tree Detection (ITD))

Abstract

Urban trees are vital elements of outdoor scenes captured by mobile laser scanning (MLS), and accurate detection of individual trees from disordered, discrete, and high-density MLS point clouds is an important basis for subsequent analysis in city management and planning. However, trees are not easily extracted because they are occluded by other objects in urban scenes. In this work, we propose a coarse-to-fine method for detecting individual trees from MLS point cloud data (PCD) based on treetop points extraction and radius expansion. First, an improved semantic segmentation deep network based on PointNet, which combines spatial and dimensional features, is applied to segment tree points from the scanned urban scene. Next, candidate treetop points are located by calculating local maxima on a filtered projection plane of the tree points, and a distance rule is used to eliminate pseudo treetop points and obtain the optimized treetop points. Finally, after initial clustering around the treetop points and vertical layering of the tree points, a top-down layer-by-layer segmentation based on radius expansion extracts complete individual trees. The effectiveness of the proposed method is tested and evaluated on five street scenes from the Oakland outdoor MLS dataset, and the method is compared with two existing individual tree segmentation methods. Overall, the precision, recall, and F-score of instance segmentation are 98.33%, 98.33%, and 98.33%, respectively. The results indicate that our method can extract individual trees effectively and robustly in different complex environments.

1. Introduction

With the rapid development of MLS, the PCD obtained by MLS is widely used to express the 3D surface information of roadside objects [1,2]. Extracting trees individually and capturing their attributes from MLS point clouds supports various applications, such as urban road planning, street tree 3D modeling [3], street tree monitoring [4], tree species identification [5], and biomass estimation [6]. However, MLS point clouds of outdoor scenes are usually characterized by complex and diverse objects and by uneven point density across tree point clouds. Furthermore, trees often overlap spatially with non-tree objects (e.g., lamps, billboards) and with neighboring tree crowns. These characteristics pose significant challenges to detecting individual trees from scanned outdoor scenes.
In recent years, several automated methods based on MLS have been proposed [7]. Many scientific contributions aim to segment scanned urban scenes into different objects [8,9,10] and to capture tree attributes [11,12,13,14,15] (e.g., tree height, trunk diameter, and diameter at breast height), and outstanding work on 3D object detection from LiDAR data continues to emerge [16]. In this work, we focus on current methods for individual tree detection from MLS data. These methods can be roughly divided into three categories: normalized cut (NCut) methods [17,18,19,20], region growing methods [21,22,23,24], and clustering-based methods [25,26,27,28].
To improve classification accuracy, Xu et al. [18] spatially smoothed the semantic labels obtained by a Random Forest classifier via a regularization process and then extracted individual trees based on NCut. However, NCut considers only distance, resulting in inaccurate segmentation of tree crown boundaries; moreover, since prior knowledge of the number of trees is unavailable, over-segmentation and under-segmentation are prone to occur. Zhong et al. [19] used an improved NCut to segment overlapping regions and obtain individual trees, but when poles stand near a trunk, the height threshold strongly influences trunk detection. Individual tree detection based on NCut requires manually estimating the number of trees in a multi-tree cluster to determine the iteration termination condition. NCut also requires large storage space and is inefficient on dense PCD, so it is mostly used for fine segmentation of under-segmented overlapping objects.
Bonneau et al. [23] divided the PCD into voxels and clustered connected voxel units by region growing; they then judged whether each cluster was correctly segmented by analyzing its spatial extent and eigenvalue ratios, refining under-segmented clusters and merging over-segmented ones. However, this method requires complete tree structure information and fails when the tree data are incomplete. Luo et al. [24] proposed a deep semantic segmentation network to extract tree points from raw point clouds, together with a pointwise direction embedding deep network (PDE-Net) that predicts, for each point, the direction vector pointing to its tree center, improving tree boundary segmentation accuracy. On this basis, tree centers are detected by pointwise direction aggregation, and individual trees are finally extracted using the detected tree centers as seeds for region growing. However, the direction prediction is inaccurate when the classification accuracy of tree points is low, and satisfactory extraction results cannot be obtained. More generally, region growing methods may fail due to improper seed selection or inaccurate feature extraction; in particular, when trees are adjacent to pole-like objects it is difficult to separate them, so such methods have major flaws for extracting individual trees from complex outdoor scenes.
Yang et al. [25] extracted treetop points by 3D spatial distribution analysis and used them as seeds for k-means clustering to segment individual trees; however, k-means requires the number of clusters as an input parameter. Tao et al. [26] intercepted the PCD at a certain height and used DBSCAN clustering to obtain tree trunks, but the trunk extraction results are unsatisfactory when the data density is uneven. Chen et al. [28] extracted individual trees based on Euclidean clustering. Although Euclidean clustering does not require prior knowledge of the number of trees, the Euclidean distance between adjacent points must be compared with a user-defined threshold that is difficult to set: when the threshold is small, tree points may be lost or over-segmented into multiple clusters; when it is large, objects close to a tree cannot be separated. In complex outdoor scenes it is easy to cluster multiple connected trees together, and over-segmentation occurs when tree data are missing. Clustering-based methods thus have inherent limitations: k-means-based tree extraction requires the number of trees and the initial cluster positions in advance, and DBSCAN segmentation suffers when data are missing or parameters are set incorrectly. Prior knowledge and parameter settings are therefore critical factors for clustering-based tree extraction.
NCut has high time complexity when dealing with complex scenes. Compared with NCut, region growing makes full use of the local features of point clouds for segmentation, but the result depends on the growth criteria and seed selection, and correct segmentation is difficult when trees and pole-like objects are close to each other. Clustering-based methods achieve good results in simple scenes, but under-segmentation occurs when trees are densely distributed and over-segmentation occurs when the point cloud data are incomplete. In summary, for outdoor scenes with large tree spacing and little overlap between tree crowns and nearby objects, most existing methods can segment and extract individual trees well. However, in complex scenes where multiple trees are connected or trees are adjacent to other objects, the extraction results are unsatisfactory. Current methods are also affected by point density, which degrades the extraction of individual trees when the point cloud data are missing or incomplete.
In this paper, to overcome the low tree extraction accuracy caused by uneven density and missing or incomplete point clouds, we propose a novel method that combines tree detection with a multi-feature enhanced PointNet, treetop points detection, and radius expansion to achieve coarse-to-fine individual tree extraction from MLS point clouds. The main contributions of the proposed method are as follows.
(1)
A comprehensive framework combining semantic segmentation, treetop points locating, and radius expansion is constructed for individual tree extraction. It accurately extracts individual trees and resolves the over-segmentation caused by incomplete point cloud data and uneven density.
(2)
A tree detection method based on semantic segmentation with a multi-feature enhanced PointNet is proposed to address the classification of multiple object categories in complex outdoor scenes.
(3)
A novel individual tree extraction method is introduced for scanned urban scenes. Candidate treetop points are extracted by calculating local maxima, and, taking the treetop points as centers, a radius-expansion-guided procedure further extracts each individual tree.

2. Materials and Methods

The proposed method mainly contains three steps: (1) tree detection based on semantic segmentation with a multi-feature enhanced PointNet, (2) optimal treetop points location based on projection, and (3) individual tree detection based on radius expansion. An overview of the proposed framework is shown in Figure 1.

2.1. Tree Points Detection Based on Multi-Feature Enhanced PointNet Semantic Segmentation

Generally, an outdoor scene contains various objects such as trees, buildings, ground, poles, and vehicles. It is therefore necessary to remove non-tree objects and extract the tree points before extracting individual trees. With the development of deep learning, Qi et al. [29] proposed PointNet, a network that processes point clouds directly and shows high accuracy and efficiency in semantic segmentation, so trees can be detected from raw outdoor scene point clouds with a PointNet deep neural network. However, PointNet only uses a Multilayer Perceptron (MLP) to increase the feature dimension when extracting local features and does not consider the neighborhood information of the point cloud, resulting in a poor description of local structure. Therefore, we extract local features of the PCD and combine them with the coordinate values to form the feature vectors that are input to the PointNet network for semantic segmentation of complex outdoor point cloud scenes.

2.1.1. 3D Point Cloud Features Extraction

The descriptive power of local features for 3D PCD depends on the local neighborhood information. At present, point cloud neighborhood selection can be roughly divided into two approaches: the k-nearest neighbor (KNN) search algorithm and the spherical local search algorithm. KNN is a density-adaptive search that takes the k points closest to the query point as neighborhood points; it obtains a consistent number of neighbors even when the point density is uneven, which benefits data storage and computational efficiency.
Given scanned scene data $P = \{p_i \mid i = 1, 2, \ldots, N\}$, let the $k$ neighboring points of a point $p_i$ be $q_j = \{(x_j, y_j, z_j) \mid j = 1, 2, \ldots, k\}$. The normal vector is estimated by least-squares plane fitting on the nearest neighbors, based on Principal Component Analysis (PCA). The local covariance matrix $M$ of $p_i$ is constructed as:

$$M = \frac{1}{N}\sum_{i=1}^{N}(p_i - \bar{P})(p_i - \bar{P})^{T} \quad (1)$$
where $N$ is the number of points in the point cloud and $\bar{P}$ is the centroid of the PCD, calculated by $\bar{P} = \frac{1}{N}\sum_{i=1}^{N} p_i$. The eigenvalues of $M$ are non-negative and ordered as $\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge 0$. The normal vector $(n_{ix}, n_{iy}, n_{iz})$ of point $p_i$ is determined by the eigenvector corresponding to $\lambda_3$. Ning et al. [30] applied local features calculated from the covariance matrix to a machine learning classification algorithm for tree extraction and achieved good classification results. On this basis, we selected six features with strong descriptive ability for outdoor scene PCD, namely linearity $L_\lambda$, flatness $F_\lambda$, divergence $D_\lambda$, anisotropy $A_\lambda$, eigenentropy $E_\lambda$, and curvature variation $C_\lambda$ [31], calculated by Equation (2):
$$L_\lambda = \frac{\lambda_1 - \lambda_2}{\lambda_1}, \quad F_\lambda = \frac{\lambda_2 - \lambda_3}{\lambda_1}, \quad D_\lambda = \frac{\lambda_3}{\lambda_1}, \quad A_\lambda = \frac{\lambda_1 - \lambda_3}{\lambda_1}, \quad E_\lambda = -\sum_{i=1}^{3}\lambda_i \ln \lambda_i, \quad C_\lambda = \frac{\lambda_3}{\lambda_1 + \lambda_2 + \lambda_3} \quad (2)$$
In general, the divergence, eigenentropy, and curvature variation of trees are significantly higher than those of ground and buildings, while the linearity, flatness, and anisotropy of trees are lower than those of poles, buildings, and other objects. Multi-feature fusion therefore captures the characteristics of different objects more comprehensively and effectively, improving the discrimination between them.
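To make the feature computation concrete, the following is a minimal Python sketch of the per-point eigenvalue features in Equation (2), assuming NumPy and SciPy are available; it is an illustration, not the authors' C++ implementation, and the function name and parameter defaults are ours. Eigenvalues are normalized before the entropy term, a common convention in the feature literature [31].

```python
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points: np.ndarray, k: int = 20) -> np.ndarray:
    """Per-point eigenvalue features of Equation (2) from k-NN covariances.

    points: (N, 3) array of XYZ coordinates.
    Returns an (N, 6) array of [L, F, D, A, E, C] per point.
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)                  # k nearest neighbors
    feats = np.empty((len(points), 6))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)                  # local 3x3 covariance
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]  # lambda1 >= 2 >= 3
        lam = np.clip(lam, 1e-12, None)               # guard against log(0)
        l1, l2, l3 = lam
        s = lam / lam.sum()                           # normalized eigenvalues
        feats[i] = (
            (l1 - l2) / l1,                           # linearity L
            (l2 - l3) / l1,                           # flatness F
            l3 / l1,                                  # divergence D
            (l1 - l3) / l1,                           # anisotropy A
            -np.sum(s * np.log(s)),                   # eigenentropy E
            l3 / (l1 + l2 + l3),                      # curvature variation C
        )
    return feats
```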

2.1.2. PointNet Enhanced by Multi-Features

The disorder of PCD means that point clouds with different input orders yield different high-dimensional features after the MLP layer, which affects feature extraction in the deep neural network. The rigid-body invariance of the point cloud means that its spatial structure and shape information are unaffected by viewpoint. Therefore, Qi et al. [29] introduced a T-Net module and a symmetric function to reduce the influence of point order on the segmentation results. The specific steps of semantic segmentation are as follows; the PointNet network architecture is shown in Figure 1.
The spatial coordinates of the N points are combined with the local features in the data preparation stage, so each input point is represented by a 9-D vector $\{X, Y, Z, L_\lambda, F_\lambda, D_\lambda, A_\lambda, E_\lambda, C_\lambda\}$. To adapt to the new number of channels, the T-Net (3) of the PointNet network is changed to T-Net (9), and the input PCD is multiplied by the 9 × 9 transformation matrix learned by T-Net (9) to obtain aligned data. After alignment, the information of each point is extracted by a two-layer shared MLP, producing an N × 64 matrix. Finally, a 64 × 64 feature-space transformation matrix is predicted by T-Net (64) and applied to the N × 64 matrix to achieve feature alignment; the aligned features serve as the local features of the point cloud.
The N × 64 matrix is then passed through a shared MLP that maps the data to 64-D, 128-D, and 1024-D in turn, yielding an N × 1024 matrix. A max-pooling operation extracts the maximum of the N values in each dimension to obtain the global feature of the point cloud. The aligned N × 64 local features and the 1 × 1024 global feature are concatenated to form an N × 1088 matrix, which a three-layer MLP maps to an N × m output, where N is the number of points and m is the number of categories, realizing the semantic segmentation of the scene.
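As an illustration of the architecture just described, below is a minimal PyTorch sketch of a PointNet-style segmentation network with the 9-D input. The T-Net alignment modules are omitted for brevity, so this is a simplified stand-in for the full network in Figure 1 rather than the authors' exact model; all class and variable names are ours.

```python
import torch
import torch.nn as nn

class MultiFeaturePointNetSeg(nn.Module):
    """PointNet-style segmentation with 9-D input (XYZ + 6 eigen-features)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.local = nn.Sequential(               # per-point MLP -> N x 64
            nn.Conv1d(9, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
        )
        self.global_mlp = nn.Sequential(          # per-point MLP (64,128,1024)
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        self.head = nn.Sequential(                # segmentation head on 64+1024
            nn.Conv1d(1088, 512, 1), nn.BatchNorm1d(512), nn.ReLU(),
            nn.Conv1d(512, 256, 1), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, num_classes, 1),
        )

    def forward(self, x):                         # x: (B, 9, N)
        local = self.local(x)                     # (B, 64, N) local features
        g = self.global_mlp(local)                # (B, 1024, N)
        g = torch.max(g, dim=2, keepdim=True).values   # max pool -> (B, 1024, 1)
        g = g.expand(-1, -1, x.shape[2])          # tile global feature per point
        return self.head(torch.cat([local, g], dim=1))  # (B, num_classes, N)

# usage: logits = MultiFeaturePointNetSeg()(torch.randn(2, 9, 4096))
```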

2.1.3. Filtering and Optimization

The tree PCD obtained by semantic segmentation contains noisy points, which must be removed by filtering. We use the pass-through filter and the statistical filter from the Point Cloud Library (PCL) [32] to denoise the tree points. The statistical filter targets scattered noise points with small local density: it computes the average distance from each point to its neighboring points and eliminates points whose average distance falls outside a range derived from the given mean and variance. The pass-through filter quickly removes large numbers of outliers beyond a set range by bounding the extent of the PCD along the X, Y, and Z axes. A comparison before and after filtering is shown in Figure 2.
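The following NumPy sketch reproduces the logic of the two filters; the actual implementation uses the corresponding PCL classes, so function names and default thresholds here are illustrative only.

```python
import numpy as np
from scipy.spatial import cKDTree

def pass_through(points, axis=2, lo=-np.inf, hi=np.inf):
    """Keep points whose coordinate on `axis` lies within [lo, hi]."""
    mask = (points[:, axis] >= lo) & (points[:, axis] <= hi)
    return points[mask]

def statistical_filter(points, k=20, std_ratio=1.0):
    """Drop points whose mean k-NN distance exceeds mu + std_ratio * sigma."""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=k + 1)    # column 0 is the point itself
    mean_d = d[:, 1:].mean(axis=1)
    mu, sigma = mean_d.mean(), mean_d.std()
    return points[mean_d <= mu + std_ratio * sigma]
```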

2.2. Treetop Points Extraction

Treetop points are the locally highest points of crowns and determine the number of trees among the tree points in a scene. There are often gaps between trees in urban scenes even when tree canopies overlap, and as elevation increases, the horizontal spacing between the treetop points of different trees grows. For a single row of street trees, the treetop points mostly lie on the vertical plane along the direction of the tree distribution. Based on the distribution of trees in outdoor scenes, we propose a novel method to extract treetop points through local coordinate system (LCS) establishment, projection, and local maxima calculation.

2.2.1. Projection Direction

Trees in an outdoor scene have the characteristic that treetop points are always the highest points. To extract accurate treetops, all tree points must be projected, especially for single-row street trees. The outline of the projected trees approximates an ellipse, and the treetops mainly lie on the long axis of the ellipse. Therefore, the LCS of the trees can be constructed by PCA, with $v_1$, $v_2$, and $v_3$ (corresponding to $\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge 0$) representing the x-, y-, and z-axes, respectively. The plane spanned by the x- and z-axes is selected as the projection plane.
Assume the tree PCD in the outdoor scene is $T = \{t_i \mid i = 1, 2, \ldots, N_t\}$. The centroid of all data in $T$ is $\bar{T} = \frac{1}{N_t}\sum_{i=1}^{N_t} t_i = (\bar{t}_x, \bar{t}_y, \bar{t}_z)$, where $N_t$ is the number of tree points and $t_i = (t_{ix}, t_{iy}, t_{iz}) \in T$. The tree points are then projected onto the XOZ plane, giving the projected point set $T' = \{t'_i \mid i = 1, 2, \ldots, N_t\}$, as shown in Figure 3.
The coordinates of $T' = \{t'_i(t'_{ix}, t'_{iy}, t'_{iz}) \mid i = 1, 2, \ldots, N_t\}$ are calculated by Equation (3):

$$t'_{ix} = t_{ix} - \frac{a\,l}{\|n_2\|}, \quad t'_{iy} = t_{iy} - \frac{b\,l}{\|n_2\|}, \quad t'_{iz} = t_{iz} - \frac{c\,l}{\|n_2\|} \quad (3)$$

where $n_2 = (a, b, c)$ is the normal vector of the XOZ plane and $l = |t_i t'_i| = \frac{(t_{ix} - \bar{t}_x)a + (t_{iy} - \bar{t}_y)b + (t_{iz} - \bar{t}_z)c}{\|n_2\|}$ is the signed distance from $t_i$ to the plane.
The LCS is established for a single row of outdoor tree data (Figure 4a,b). We projected the tree points onto the XOY plane (Figure 4c) and the XOZ plane (Figure 4d) of the LCS, respectively. The treetops are clearly easier to extract from the projection onto the XOZ plane.
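A possible NumPy realization of the LCS construction and Equation (3) is sketched below, projecting the tree points onto the XOZ plane of the PCA frame. It is illustrative only: the y-axis eigenvector $v_2$ is taken as the plane normal $n_2$, and the plane is assumed to pass through the centroid.

```python
import numpy as np

def project_to_xoz(points: np.ndarray) -> np.ndarray:
    """Project tree points onto the XOZ plane of their PCA frame (Eq. (3))."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = np.cov(centered.T)
    eigval, eigvec = np.linalg.eigh(cov)      # ascending eigenvalues
    order = np.argsort(eigval)[::-1]          # v1, v2, v3 for l1 >= l2 >= l3
    v1, v2, v3 = eigvec[:, order].T
    n2 = v2 / np.linalg.norm(v2)              # normal of the XOZ plane (y-axis)
    l = centered @ n2                         # signed distance to the plane
    return points - np.outer(l, n2)           # drop the component along n2
```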

2.2.2. Optimal Treetop Points Extraction

The tree points projected onto the XOZ plane offer an easy way to obtain the most unobstructed treetop points and represent the shape of the tree canopy well. Based on this, we propose an optimal treetop points extraction method with three steps: (1) local maxima calculation, (2) candidate treetop points locating, and (3) optimal treetop points extraction.
(1)
Local maxima calculation
For the projection points on the XOZ plane, local maxima must be extracted to narrow the search range for treetop points and improve computational efficiency. First, redundant data are removed: for points $t'_i(t'_{ix}, t'_{iy}, t'_{iz})$ and $t'_j(t'_{jx}, t'_{jy}, t'_{jz})$ on the projected contour, if $t'_{ix} = t'_{jx}$ and $t'_{iz} = t'_{jz}$, only one of the two points is kept; if $t'_{ix} = t'_{jx}$ and $t'_{iz} < t'_{jz}$, the point $t'_i$ is removed. Then all projected points are sorted in ascending order of x to obtain the point set $TS = \{ts_i \mid i = 1, 2, \ldots, N_{ts}\}$. Next, $ts_i$ is defined as a local maximum when $ts_{i,z} > ts_{i-1,z}$ and $ts_{i,z} > ts_{i+1,z}$, and this test is repeated to extract all local maxima.
Figure 5 compares the local maxima extraction results before and after filtering redundant data. Figure 5a,b show the raw PCD and the local maxima before filtering; Figure 5c shows the data after the redundant points are removed, and the local maxima then extracted are shown in Figure 5d. Notably, the local maxima obtained from the filtered scene lie on the outer contour of the tree crowns, which is more conducive to the subsequent extraction of treetop points.
(2)
Candidate treetop points locating
Based on the local maxima, a critical step in treetop points extraction is to locate candidate treetop points. For a single tree, the crown contour points expand outward from the treetop. Therefore, we locate candidate treetop points according to the variation of the z coordinate of the PCD. First, all local maxima are sorted in ascending order of x, denoted $LM = \{m_i \mid i = 1, 2, \ldots, N_m\}$, where $N_m$ is the number of local maxima. Then, the difference $DM_i$ of point $m_i(m_{ix}, m_{iy}, m_{iz})$ along the z-axis is calculated by Equation (4):
$$DM_i = \begin{cases} m_{i+1,z} - m_{i,z}, & i = 1 \\ (m_{i+1,z} - m_{i-1,z})/2, & i = 2, 3, \ldots, N_m - 1 \\ m_{i,z} - m_{i-1,z}, & i = N_m \end{cases} \quad (4)$$
Theoretically, a treetop point has the maximum z coordinate among its neighboring points. Therefore, we locate candidate treetop points by detecting where DM changes from positive to negative. As the x coordinate increases, randomly distributed noise appears in the z coordinate; to reduce its influence, the difference must be smoothed. A two-step method is thus used to detect candidate treetop points: the noisy points are suppressed by smoothing (step 1), and the sign of the smoothed difference is then judged (step 2). In step 1, for point $m_i$ we search its k nearest neighboring points and average their differences to obtain the smoothed difference $DM'_i$. In step 2, the sign function judges whether the smoothed difference of point $m_i$ is positive or negative:
$$sign(m_i) = \begin{cases} 1, & DM'_i > 0 \\ 0, & DM'_i = 0 \\ -1, & DM'_i < 0 \end{cases} \quad (5)$$
If $sign(m_i) > sign(m_{i+1})$, the smoothed difference changes from positive to negative, so the point $m_{i+1}$ is regarded as a candidate treetop point. Repeating this procedure yields the set of candidate treetop points $S = \{s_i \mid i = 1, 2, \ldots, N_s\}$ (see Figure 6a).
(3)
Optimal treetop points extraction
The candidate treetop points contain not only correct treetop points but also local extrema with low heights and redundant points spaced closely together. It is therefore important to remove the data that do not belong to real treetop points. We introduce two criteria: tree height and the distance between treetop points; a sketch of the full detection and merging procedure is given at the end of this subsection.
First, points that do not conform to the expected tree height are eliminated by checking whether the z coordinate of each candidate treetop point is below a height threshold $z_{th}$. We require treetop points to lie above half the height of the tree scene and compute $z_{th}$ by Equation (6):

$$z_{th} = (z_{max} - z_{min})/2 + z_{min} \quad (6)$$
where z m a x and z m i n are the maximum z-coordinate and the minimum z-coordinate of tree PCD, respectively.
Then the distance between each pair of candidate treetop points is examined, and treetop points that are very close to each other are merged. The Euclidean distances between all candidate treetop points are computed and sorted in ascending order. If the distance between the nearest pair is less than the distance threshold $d_{th}$ (0.5 m), the two treetop points are replaced by their midpoint. The distances between the updated treetop points are then recalculated and re-evaluated, and the process repeats until the distance between every two candidate treetop points exceeds $d_{th}$, giving the optimized treetop points (see Figure 6b).
The front view and top view of the candidate treetop points extracted from Figure 4d are shown in Figure 6a. The optimal treetop points obtained after filtering and merging the candidate treetop points are shown in Figure 6b.
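The sketch below strings together steps (1) to (3) in NumPy under simplifying assumptions: exact duplicate x values are collapsed by keeping the highest z, smoothing uses a moving average over k successive differences rather than a spatial k-NN, and merging is done pairwise; all names and defaults are ours, not the authors' implementation.

```python
import numpy as np

def treetop_candidates(proj, k=3, d_th=0.5):
    """Detect and optimize treetop points on XOZ-projected tree points.

    proj: (N, 3) projected points; only the x and z columns are used.
    """
    # Remove redundant contour points: keep the highest z for each x.
    order = np.lexsort((-proj[:, 2], proj[:, 0]))       # by x, then by -z
    p = proj[order]
    _, first = np.unique(p[:, 0], return_index=True)
    ts = p[first]                                       # sorted by x

    # Local maxima: strictly higher than both x-neighbors.
    z = ts[:, 2]
    keep = (z[1:-1] > z[:-2]) & (z[1:-1] > z[2:])
    lm = ts[1:-1][keep]

    # Equation (4): central differences of z, then moving-average smoothing.
    dm = np.gradient(lm[:, 2])
    dm = np.convolve(dm, np.ones(k) / k, mode="same")

    # Candidates: smoothed difference flips from positive to negative.
    s = np.sign(dm)
    cand = lm[1:][s[:-1] > s[1:]]

    # Equation (6): discard candidates below half the scene height.
    z_th = (proj[:, 2].max() - proj[:, 2].min()) / 2 + proj[:, 2].min()
    cand = cand[cand[:, 2] >= z_th]

    # Merge the closest candidate pair into its midpoint until all pairs
    # are farther apart than d_th.
    merged = True
    while merged and len(cand) > 1:
        merged = False
        d = np.linalg.norm(cand[:, None] - cand[None, :], axis=2)
        np.fill_diagonal(d, np.inf)
        i, j = np.unravel_index(np.argmin(d), d.shape)
        if d[i, j] < d_th:
            mid = (cand[i] + cand[j]) / 2
            cand = np.delete(cand, [i, j], axis=0)
            cand = np.vstack([cand, mid])
            merged = True
    return cand
```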

2.3. Radius Expansion Based Individual Tree Extraction

The challenging task of individual tree extraction is instance-level separation of spatially overlapping tree points [24]. After obtaining all treetop points in the scene, we extract individual trees from the outdoor scene based on radius expansion.
The core steps of the proposed algorithm are initial clustering around the treetop points, determination of the initial bounding boxes and expansion circles, vertical layering of the tree PCD, and individual tree extraction by radius expansion.
Given the optimal treetop points $G = \{g_i \mid i = 1, 2, \ldots, N_g\}$, where $N_g$ is the number of treetop points, initial clustering is carried out with the treetop points as centers. The specific steps are as follows. First, a KD-tree (k-dimensional tree) is built with point $g_i \in G$ as the seed point. Then, taking the seed point as the center of a sphere of radius $IR$ ($IR = 2$ m), the data points within $IR$ of the seed are clustered to form the initial cluster $Clu_i$. This process is executed iteratively until the spherical neighborhoods of all treetop points have been assigned, yielding the initial clusters shown in Figure 7.
The purpose of this clustering is to obtain the initial positions of the bounding boxes and expansion circles. According to the initial clustering results, the maxima $(x^i_{max}, y^i_{max})$ and minima $(x^i_{min}, y^i_{min})$ of all points in cluster $Clu_i$ form the initial boundary set $Bou_i$, $i \in [1, N_g]$. The radius $R_i$ and center $O_i(O_{ix}, O_{iy})$ of the expansion circle of cluster $Clu_i$ are calculated by Equations (7) and (8), respectively.
$$R_i = \frac{(x^i_{max} - x^i_{min}) + (y^i_{max} - y^i_{min})}{4} \quad (7)$$
$$O_{ix} = \frac{x^i_{max} + x^i_{min}}{2}, \quad O_{iy} = \frac{y^i_{max} + y^i_{min}}{2} \quad (8)$$
After that, the boundary sets of all clusters are $Bou = \{Bou_i \mid i = 1, 2, \ldots, N_g\}$, the radii of the expansion circles are $R = \{R_i \mid i = 1, 2, \ldots, N_g\}$, and the centers are $O = \{O_i \mid i = 1, 2, \ldots, N_g\}$.
After obtaining the initial boundaries, the tree PCD in the scene is sliced, as shown in Figure 8. With the number of layers set to $N_l$, the maximum z coordinate $z_{max}$ and minimum z coordinate $z_{min}$ are extracted from the tree points, and the height $H_l$ of each layer is calculated by Equation (9):
$$H_l = (z_{max} - z_{min})/N_l \quad (9)$$
The point sets of the layers from top to bottom are $L = \{L_i \mid i = 1, 2, \ldots, N_l\}$, and the number of points in layer $L_i$ is $N_{L_i}$. There are two cases for segmenting the tree points:
(1) If a point $t_u^{L_i}(t_{ux}^{L_i}, t_{uy}^{L_i})$ in layer $L_i$ is within the boundary $Bou_j$ of cluster $Clu_j$ from layer $L_{i-1}$, and the horizontal distance $d_H(t_u^{L_i}, O_j)$ from the point to the center $O_j(O_{jx}, O_{jy})$ is less than the radius $R_j$, then $t_u^{L_i}$ belongs to the tree of cluster $Clu_j$ and is assigned to that cluster. $d_H$ is calculated by Equation (10):
$$d_H(t_u^{L_i}, O_j) = \sqrt{(t_{ux}^{L_i} - O_{jx})^2 + (t_{uy}^{L_i} - O_{jy})^2} \quad (10)$$
where $t_u^{L_i}$ is the $u$-th ($u \in [1, N_{L_i}]$) point of layer $L_i$ and $O_j$ ($O_j \in O$) is the center of cluster $Clu_j$ ($j \in [1, N_g]$).
(2) If the point $t_u^{L_i}(t_{ux}^{L_i}, t_{uy}^{L_i})$ does not fall within any cluster, the circular distance $Dis(t_u^{L_i}, O_j)$ between the point and each expansion circle is calculated by Equation (11); the distances are sorted in ascending order, and the point is assigned to the cluster with the smallest distance:
$$Dis(t_u^{L_i}, O_j) = |d_H(t_u^{L_i}, O_j) - R_j| \quad (11)$$
where $R_j$ ($R_j \in R$) is the radius of the expansion circle of cluster $Clu_j$, $j \in [1, N_g]$.
After the data of layer $L_i$ are processed according to these two cases, the radii and centers of the circles are updated by Equations (7) and (8), and segmentation continues with layer $L_{i+1}$ until all data are processed, completing the extraction of individual trees. A diagram of the two cases is shown in Figure 9, the individual trees of scene 1 obtained by this layer-by-layer radius expansion are displayed in Figure 10, and a code sketch of the procedure follows below.
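Below is a NumPy sketch of the layer-by-layer radius expansion under simplifying assumptions: the bounding-box test of case (1) is folded into the circle test, and layers are processed strictly top-down with circles refreshed per layer. Function names and defaults are ours, not the authors' C++ code.

```python
import numpy as np

def radius_expansion(tree_pts, treetops, ir=2.0, n_layers=20):
    """Assign each tree point to a treetop cluster (Section 2.3 sketch).

    tree_pts: (N, 3) tree points; treetops: (Ng, 3) optimal treetop points.
    Returns an (N,) array of cluster labels in [0, Ng).
    """
    labels = np.full(len(tree_pts), -1)
    # Initial clusters: spheres of radius `ir` around each treetop point.
    for j, g in enumerate(treetops):
        near = np.linalg.norm(tree_pts - g, axis=1) <= ir
        labels[near & (labels == -1)] = j

    def circles(n):
        """Expansion circles from cluster bounding boxes (Eqs. (7), (8))."""
        c, r = np.zeros((n, 2)), np.zeros(n)
        for j in range(n):
            m = tree_pts[labels == j]
            if len(m):
                mx, mn = m[:, :2].max(axis=0), m[:, :2].min(axis=0)
                c[j], r[j] = (mx + mn) / 2, (mx - mn).sum() / 4
        return c, r

    z = tree_pts[:, 2]
    edges = np.linspace(z.max(), z.min(), n_layers + 1)
    for hi, lo in zip(edges[:-1], edges[1:]):           # top layer first
        centers, radii = circles(len(treetops))
        layer = np.where((z <= hi) & (z >= lo) & (labels == -1))[0]
        for u in layer:
            d = np.linalg.norm(tree_pts[u, :2] - centers, axis=1)
            inside = np.where(d <= radii)[0]
            if len(inside):                             # case (1): inside circle
                labels[u] = inside[np.argmin(d[inside])]
            else:                                       # case (2): Eq. (11)
                labels[u] = np.argmin(np.abs(d - radii))
    return labels
```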

3. Results and Discussion

To verify the effectiveness and robustness of the proposed method, experiments are performed on the Oakland 3D Point Cloud dataset. Our method is implemented in C++ and run on a desktop PC with an Intel Core i5-8500 CPU and an NVIDIA GeForce GTX 1660 Ti graphics card.

3.1. Dataset

The Oakland 3D Point Cloud dataset provided by Munoz et al. [33] is used to verify the effectiveness of the proposed method. It contains 1.6 million 3D points and consists of two subsets, part2 and part3, where each scene contains approximately 100,000 points. The data were acquired with a side-looking SICK LMS LiDAR mounted on an MLS platform and collected around the Carnegie Mellon University campus in Oakland, Pittsburgh, Pennsylvania. The dataset is stored as ASCII files in the format {x, y, z, label, confidence}, i.e., the three-dimensional coordinates, label, and confidence of each point. In addition, a label count file (*.stats) records the number of points of each category per scene. The dataset roughly classifies 3D point clouds into the following categories: facades, ground, trees, wires, and poles, as shown in Figure 11. This paper simplifies the categories into trees and non-trees, transforming the semantic segmentation problem into a binary classification problem.

3.2. Scene Semantic Segmentation Analysis

We use the Intersection over Union (IoU) of each category, the Mean Intersection over Union (mIoU) over all categories, and the Overall Accuracy (OA) to evaluate the semantic segmentation. IoU is the intersection of the network prediction and the ground truth divided by their union, mIoU averages the IoU over all categories, and OA is the ratio of correctly classified samples to the total number of samples. IoU, mIoU, and OA are computed by Equations (12), (13), and (14), respectively:
$$IoU = \frac{TP}{FN + FP + TP} \quad (12)$$

$$mIoU = \frac{1}{k+1}\sum_{i=0}^{k}\frac{TP_i}{FN_i + FP_i + TP_i} \quad (13)$$

$$OA = \frac{TP + TN}{TP + TN + FP + FN} \quad (14)$$
where $TP$ is the number of points correctly detected as trees, $FP = N_{algo} - TP$, where $N_{algo}$ is the number of tree points detected in the scene, and $FN = N_{ref} - TP$, where $N_{ref}$ is the number of tree points labeled as ground truth in the original scene.
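A compact NumPy sketch of Equations (12) to (14), computing per-class IoU, mIoU, and OA from predicted and reference label arrays (the function name is illustrative):

```python
import numpy as np

def segmentation_metrics(pred, ref, num_classes=2):
    """Per-class IoU, mIoU, and OA (Equations (12)-(14)) from label arrays."""
    ious = []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (ref == c))
        fp = np.sum((pred == c) & (ref != c))
        fn = np.sum((pred != c) & (ref == c))
        ious.append(tp / (tp + fp + fn))
    oa = float(np.mean(pred == ref))          # overall accuracy
    return ious, float(np.mean(ious)), oa
```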
Six scenes are selected as the test set for semantic segmentation with the original PointNet and the multi-feature PointNet network. The original network is PointNet with only XYZ information as input, while our method feeds PointNet the XYZ information plus the six local features.
The quantitative evaluation results are displayed in Table 1. Applying multi-feature data to the semantic segmentation network improves the results to different degrees in OA, mIoU, and the IoU of each category. The OA is above 95% in every scene, with an average of 97.8%, about 4.5 percentage points higher than before feature fusion, and the mIoU is improved by 9.5 percentage points. Looking at the per-category IoU, adding local features greatly improves both tree and non-tree results: the tree IoU rises by 13.5 percentage points over the original PointNet network, and the non-tree IoU by 5.5 percentage points. The local information of the point cloud thus effectively enhances the network's semantic segmentation ability.
Figure 12 compares the semantic segmentation results on four scenes. The black boxes in Figure 12 indicate the differences between PointNet and our method. With our method, the four scenes are segmented more finely (e.g., wires and utility poles), whereas PointNet wrongly classifies most of these small objects as trees; in scene 4, part of the ground is even misclassified as trees.

3.3. Analysis of Individual Trees Extraction Results

Figure 13 and Figure 14 show the individual tree extraction process for scene 2 and scene 4. Scene 2 contains multiple trees with different shapes and sizes, including crowns that are connected together. In scene 4, the PCD is incomplete and has uneven density. Local maxima and candidate treetop points are successfully extracted, merged, and optimized in both cases (Figures 13b–e and 14b–e). The extraction results obtained by radius expansion are shown in Figures 13f and 14f. The results show that the method can accurately extract connected individual trees as well as trees with obvious crown differences.
The experimental results demonstrate that the extraction method based on treetop points detection and radius expansion can correctly extract individual trees in outdoor scenes, and the results are not affected by incomplete data or partially missing tree crowns.

3.4. Comparative Analysis

Moreover, the proposed method is compared with a voxel-based clustering method [27] and a horizontal-slice-based method (3D Forest) [34]. Figure 15 illustrates the experimental results of the different methods on five scene datasets. The clustering-based method removes the ground by region growing and then segments the non-ground points with Euclidean clustering. It is simple and easy to implement, but because urban outdoor scenes contain various objects, it is prone to under-segmentation when non-tree elements are adjacent to trees or when multiple trees are connected. 3D Forest [34] divides the scene into slices and then delineates single trees according to the number of points in the slice clusters and the distances and angles between clusters.
Compared with these two methods, ours produces finer and more accurate segmentation. Our method separates trees from non-tree objects in the semantic segmentation stage, and the radius-expansion step makes full use of the characteristics of trees, effectively overcoming missing data through top-down layer-by-layer expansion. Compared with the clustering-based method (Figure 15a) and 3D Forest (Figure 15b), our extraction results are closer to the real trees, as shown in Figure 15c.
To quantify the effectiveness of the proposed algorithm, we analyzed the experimental results through six indicators. TP (True Positive) is the number of correctly extracted individual trees; FN (False Negative) is the number of undetected single trees, i.e., a tree merged with nearby trees into the same tree; FP (False Positive) is the number of non-trees detected as trees, i.e., a point cluster that is not a tree regarded as one. TP, FN, and FP thus represent correct segmentation, under-segmentation, and over-segmentation, respectively. P (Precision) is the proportion of correctly extracted trees among all detected trees, R (Recall) is the proportion of correctly extracted trees among the actual trees, and F (F-score) is a comprehensive index of overall extraction accuracy. P, R, and F are calculated according to Equation (15):
$$P = \frac{TP}{TP + FP}, \quad R = \frac{TP}{TP + FN}, \quad F = \frac{2 \times P \times R}{P + R} \quad (15)$$
The quantitative results of the three methods are listed in Table 2. Among the three methods, the accuracy of the clustering-based method [27] is the worst, because it is prone to under-segmentation when trees are connected with other elements; for example, the three trees connected by electric wires cannot be extracted separately (second row of Figure 15a). The 3D Forest method [34] performs better than the clustering-based method, but over-segmentation and under-segmentation of trees still occur. The comparison demonstrates that the proposed method outperforms the other two: averaged over the five scenes, its precision, recall, and F-score reach 98.33%, 98.33%, and 98.33%, respectively, compared with 62.75%, 62.08%, and 62.39% for the 3D Forest method, which is second only to ours.

4. Conclusions

In this paper, a new method is proposed for detecting individual trees from MLS point clouds, with applications in street tree 3D modeling, street tree monitoring, tree species identification, and biomass estimation. Our method consists of (1) non-tree point removal and tree detection via a multi-feature enhanced PointNet, (2) locating treetop points by filtering the tree point projection plane and optimizing them with a distance rule, and (3) top-down layer-by-layer segmentation based on radius expansion, after initial clustering around the treetop points and vertical layering of the tree points, to extract complete individual trees. Experimental results on the Oakland 3D Point Cloud dataset demonstrate that, benefiting from accurate scene semantic segmentation, the proposed method can effectively extract individual trees. Compared with the other two methods, it effectively avoids the influence of artificial roadside pole-like objects and of crown overlaps. Overall, the precision, recall, and F-score of instance segmentation on the used datasets are 98.33%, 98.33%, and 98.33%, respectively.
In future work, we will improve the robustness of the method to adapt it to forests. Deep learning can be further explored with the goal of improving tree classification accuracy. Meanwhile, fusing orthophoto imagery with LiDAR point clouds would provide a way to greatly improve the efficiency and accuracy of urban tree detection, especially for larger-scale urban scenes.

Author Contributions

Conceptualization, X.N. and Y.H.; Methodology, Y.H.; Software, Y.H.; Validation, X.N., Y.H. and Y.M.; Writing-Original Draft Preparation, X.N., Y.H. and Y.M.; Writing-Review and Editing, X.N., Z.L., H.J. and Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (61871320, 61872291) and the Shaanxi Key Laboratory Project (17JS099).

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Li, J.; Cheng, X.; Xiao, Z. A branch-trunk-constrained hierarchical clustering method for street trees individual extraction from mobile laser scanning point clouds. Measurement 2021, 189, 110440.
2. Wang, J.; Lindenbergh, R.; Menenti, M. SigVox—A 3D feature matching algorithm for automatic street object recognition in mobile laser scanning point clouds. ISPRS J. Photogramm. Remote Sens. 2017, 128, 111–129.
3. Yadav, M.; Lohani, B. Identification of trees and their trunks from mobile laser scanning data of roadway scenes. Int. J. Remote Sens. 2019, 41, 1233–1258.
4. Du, S.; Lindenbergh, R.; Ledoux, H.; Stoter, J.; Nan, L. AdTree: Accurate, Detailed, and Automatic Modelling of Laser-Scanned Trees. Remote Sens. 2019, 11, 2074.
5. Wu, B.; Yu, B.; Yue, W.; Shu, S.; Tan, W.; Hu, C.; Huang, Y.; Wu, J.; Liu, H. A Voxel-Based Method for Automated Identification and Morphological Parameters Estimation of Individual Street Trees from Mobile Laser Scanning Data. Remote Sens. 2013, 5, 584–611.
6. Holopainen, M.; Vastaranta, M.; Kankare, V.; Räty, M.; Vaaja, M.; Liang, X.; Yu, X.; Hyyppä, J.; Hyyppä, H.; Viitala, R.; et al. Biomass estimation of individual trees using stem and crown diameter TLS measurements. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2011, 3812, 91–95.
7. Sparks, A.M.; Corrao, M.V.; Smith, A.M.S. Cross-Comparison of Individual Tree Detection Methods Using Low and High Pulse Density Airborne Laser Scanning Data. Remote Sens. 2022, 14, 3480.
8. Kuželka, K.; Slavík, M.; Surový, P. Very High Density Point Clouds from UAV Laser Scanning for Automatic Tree Stem Detection and Direct Diameter Measurement. Remote Sens. 2020, 12, 1236.
9. Windrim, L.; Bryson, M. Detection, Segmentation, and Model Fitting of Individual Tree Stems from Airborne Laser Scanning of Forests Using Deep Learning. Remote Sens. 2020, 12, 1469.
10. Zhang, W.; Wan, P.; Wang, T.; Cai, S.; Chen, Y.; Jin, X.; Yan, G. A Novel Approach for the Detection of Standing Tree Stems from Plot-Level Terrestrial Laser Scanning Data. Remote Sens. 2019, 11, 211.
11. Brolly, G.; Király, G.; Lehtomäki, M.; Liang, X. Voxel-Based Automatic Tree Detection and Parameter Retrieval from Terrestrial Laser Scans for Plot-Wise Forest Inventory. Remote Sens. 2021, 13, 542.
12. Kolendo, Ł.; Kozniewski, M.; Ksepko, M.; Chmur, S.; Neroj, B. Parameterization of the Individual Tree Detection Method Using Large Dataset from Ground Sample Plots and Airborne Laser Scanning for Stands Inventory in Coniferous Forest. Remote Sens. 2021, 13, 2753.
13. Gollob, C.; Ritter, T.; Wassermann, C.; Nothdurft, A. Influence of Scanner Position and Plot Size on the Accuracy of Tree Detection and Diameter Estimation Using Terrestrial Laser Scanning on Forest Inventory Plots. Remote Sens. 2019, 11, 1602.
14. Cabo, C.; Ordóñez, C.; López-Sánchez, C.A.; Armesto, J. Automatic dendrometry: Tree detection, tree height and diameter estimation using terrestrial laser scanning. Int. J. Appl. Earth Obs. Geoinf. 2018, 69, 164–174.
15. Oveland, I.; Hauglin, M.; Giannetti, F.; Kjørsvik, N.S.; Gobakken, T. Comparing Three Different Ground Based Laser Scanning Methods for Tree Stem Detection. Remote Sens. 2018, 10, 538.
16. Lv, Z.; Li, G.; Jin, Z.; Benediktsson, J.A.; Foody, G.M. Iterative Training Sample Expansion to Increase and Balance the Accuracy of Land Classification From VHR Imagery. IEEE Trans. Geosci. Remote Sens. 2021, 59, 139–150.
17. Shi, J.; Malik, J. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 888–905.
18. Xu, Y.; Sun, Z.; Hoegner, L.; Stilla, U.; Yao, W. Instance Segmentation of Trees in Urban Areas from MLS Point Clouds Using Supervoxel Contexts and Graph-Based Optimization. In Proceedings of the 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS), Beijing, China, 19–20 August 2018; pp. 1–5.
19. Zhong, L.; Cheng, L.; Xu, H.; Wu, Y.; Chen, Y.; Li, M. Segmentation of Individual Trees From TLS and MLS Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 10, 774–787.
20. Yan, W.; Guan, H.; Cao, L.; Yu, Y.; Gao, S.; Lu, J. An Automated Hierarchical Approach for Three-Dimensional Segmentation of Single Trees Using UAV LiDAR Data. Remote Sens. 2018, 10, 1999.
21. Li, L.; Li, D.; Zhu, H.; Li, Y. A dual growing method for the automatic extraction of individual trees from mobile laser scanning data. ISPRS J. Photogramm. Remote Sens. 2016, 120, 37–52.
22. Husain, A.; Vaishya, R.C. An automated approach for street trees detection using mobile laser scanner data. Remote Sens. Appl. Soc. Environ. 2020, 20, 100371.
23. Bonneau, D.A.; DiFrancesco, P.-M.; Hutchinson, D.J. A method for vegetation extraction in mountainous terrain for rockfall simulation. Remote Sens. Environ. 2020, 251, 112098.
24. Luo, H.; Khoshelham, K.; Chen, C.; He, H. Individual tree extraction from urban mobile laser scanning point clouds using deep pointwise direction embedding. ISPRS J. Photogramm. Remote Sens. 2021, 175, 326–339.
25. Yang, J.; Kang, Z.; Cheng, S.; Yang, Z.; Akwensi, P.H. An Individual Tree Segmentation Method Based on Watershed Algorithm and Three-Dimensional Spatial Distribution Analysis From Airborne LiDAR Point Clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1055–1067.
26. Tao, S.; Wu, F.; Guo, Q.; Wang, Y.; Li, W.; Xue, B.; Hu, X.; Li, P.; Tian, D.; Li, C.; et al. Segmenting tree crowns from terrestrial and mobile LiDAR data by exploring ecological theories. ISPRS J. Photogramm. Remote Sens. 2015, 110, 66–76.
27. Xu, S.; Ye, N.; Xu, S.; Zhu, F. A supervoxel approach to the segmentation of individual trees from LiDAR point clouds. Remote Sens. Lett. 2018, 9, 515–523.
28. Chen, Y.; Wang, S.; Li, J.; Ma, L.; Wu, R.; Luo, Z.; Wang, C. Rapid Urban Roadside Tree Inventory Using a Mobile Laser Scanning System. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 3690–3700.
29. Charles, R.Q.; Su, H.; Kaichun, M.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 77–85.
30. Ning, X.; Tian, G.; Wang, Y. Shape classification guided method for automated extraction of urban trees from terrestrial laser scanning point clouds. Multimed. Tools Appl. 2021, 80, 33357–33375.
31. Weinmann, M.; Urban, S.; Hinz, S.; Jutzi, B.; Mallet, C. Distinctive 2D and 3D features for automated large-scale scene analysis in urban areas. Comput. Graph. 2015, 49, 47–57.
32. Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4.
33. Munoz, D.; Bagnell, J.A.; Vandapel, N.; Hebert, M. Contextual classification with functional Max-Margin Markov Networks. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 975–982.
34. Trochta, J.; Krůček, M.; Vrška, T.; Král, K. 3D Forest: An application for descriptions of three-dimensional forest structures using terrestrial LiDAR. PLoS ONE 2017, 12, e0176871.
Figure 1. Overview of our proposed method.
Figure 2. Tree points before and after filtering. (a) Tree points before filtering. (b) Tree points after filtering.
Figure 3. Tree points projection.
Figure 4. Projection of tree PCD. (a) is the point cloud of trees, (b) is the horizontal projection of tree clusters, (c) is the LCS establishment, (d) is the XOZ plane projection.
Figure 5. Comparison of local maximum extraction results. (a) PCD before filtering. (b) Local maxima found before filtering. (c) PCD after filtering. (d) Local maxima found after filtering.
Figure 6. Comparison of treetop points optimization results. (a) Candidate treetop points extraction result. (b) Optimal treetop points extraction result.
Figure 7. Initial clustering based on treetop points.
Figure 8. The schematic diagram of tree layering.
Figure 9. Radius expansion method. (Points located in both the bounding box $Bou_1$ and the expansion circle O1 are marked in red and assigned to the tree of circle O1; points located in the bounding box $Bou_2$ and the expansion circle O2 are marked in green and belong to the tree of circle O2. For the unclassified black points outside these areas, the distances from point $t_1^{L_i}$ to the boundaries of circles O1 and O2 are d1 and d2, respectively; since d1 < d2, $t_1^{L_i}$ is assigned to the cluster of circle O1. Similarly, comparing the distances from point $t_2^{L_i}$ to the boundaries of circles O1 and O2, since d3 > d4, $t_2^{L_i}$ is assigned to the category of circle O2.)
Figure 10. Individual trees extraction result of scene 1.
Figure 11. Part of the Oakland 3D Point Cloud dataset. The dataset roughly classifies 3D point clouds into the following categories with different labels: facades, ground, trees, wires, and poles.
Figure 12. Semantic segmentation results on four scenes. (a) is the result of semantic segmentation with the original PointNet, (b) is the result of semantic segmentation based on multi-features.
Figure 13. Individual trees extraction process of scene 2. (a) The PCD of trees, (b) local maxima extraction results, (c) the side view of the candidate treetop points, (d) the front view of the candidate treetop points, (e) optimal treetop points, (f) individual tree extraction result.
Figure 14. Individual trees extraction process of scene 4. (a) The PCD of trees, (b) local maxima extraction results, (c) the side view of the candidate treetop points, (d) the front view of the candidate treetop points, (e) optimal treetop points, (f) individual tree extraction result.
Figure 15. Comparison of visual results. (a) is the result of the clustering-based method, (b) is the result of the 3D Forest, (c) is the result of our method. The black boxes in (a,b) mark misclassification results.
Table 1. Overall Accuracy and Mean IoU of six scenes in the Oakland 3D Point Cloud dataset.

| Scene | OA (%) PointNet | OA (%) Ours | mIoU (%) PointNet | mIoU (%) Ours | Tree IoU (%) PointNet | Tree IoU (%) Ours | Non-Tree IoU (%) PointNet | Non-Tree IoU (%) Ours |
|---|---|---|---|---|---|---|---|---|
| Scene 1 | 94.56 | 97.23 | 87.93 | 93.69 | 83.34 | 91.30 | 92.53 | 96.08 |
| Scene 2 | 88.17 | 96.15 | 76.19 | 91.05 | 68.25 | 87.35 | 84.13 | 94.76 |
| Scene 3 | 96.49 | 97.30 | 92.50 | 94.21 | 90.17 | 92.48 | 94.83 | 95.95 |
| Scene 4 | 96.90 | 98.70 | 93.31 | 97.08 | 91.18 | 96.07 | 95.45 | 98.09 |
| Scene 5 | 96.53 | 98.81 | 91.22 | 96.81 | 86.94 | 95.18 | 95.49 | 98.44 |
| Scene 6 | 87.39 | 98.60 | 68.99 | 94.60 | 52.66 | 90.83 | 85.33 | 98.38 |
| Average | 93.34 | 97.80 | 85.02 | 94.57 | 78.76 | 92.20 | 91.29 | 96.95 |
Table 2. Quantitative comparison results on five scenes.

| Scene | Method | TP | FP | FN | P | R | F |
|---|---|---|---|---|---|---|---|
| Scene 1 | Clustering method [27] | 2 | 4 | 4 | 0.3333 | 0.3333 | 0.3333 |
| Scene 1 | 3D Forest [34] | 1 | 4 | 5 | 0.2000 | 0.1667 | 0.1818 |
| Scene 1 | Ours | 6 | 0 | 0 | 1 | 1 | 1 |
| Scene 2 | Clustering method [27] | 2 | 3 | 3 | 0.4000 | 0.2000 | 0.2667 |
| Scene 2 | 3D Forest [34] | 2 | 4 | 4 | 0.3333 | 0.3333 | 0.3333 |
| Scene 2 | Ours | 6 | 0 | 0 | 1 | 1 | 1 |
| Scene 3 | Clustering method [27] | 4 | 5 | 5 | 0.4444 | 0.4444 | 0.4444 |
| Scene 3 | 3D Forest [34] | 4 | 2 | 2 | 0.6667 | 0.6667 | 0.6667 |
| Scene 3 | Ours | 6 | 0 | 0 | 1 | 1 | 1 |
| Scene 4 | Clustering method [27] | 10 | 2 | 1 | 0.8333 | 0.9091 | 0.8696 |
| Scene 4 | 3D Forest [34] | 11 | 0 | 0 | 1 | 1 | 1 |
| Scene 4 | Ours | 11 | 1 | 1 | 0.9167 | 0.9167 | 0.9167 |
| Scene 5 | Clustering method [27] | 15 | 1 | 1 | 0.9375 | 0.9375 | 0.9375 |
| Scene 5 | 3D Forest [34] | 15 | 1 | 1 | 0.9375 | 0.9375 | 0.9375 |
| Scene 5 | Ours | 16 | 0 | 0 | 1 | 1 | 1 |
| Average | Clustering method [27] | – | – | – | 0.5897 | 0.5649 | 0.5703 |
| Average | 3D Forest [34] | – | – | – | 0.6275 | 0.6208 | 0.6239 |
| Average | Ours | – | – | – | 0.9833 | 0.9833 | 0.9833 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
