Article

Vegetation Removal on 3D Point Cloud Reconstruction of Cut-Slopes Using U-Net

College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter EX4 4PY, UK
* Author to whom correspondence should be addressed.
Current address: Harrison Building North Park Road, Exeter EX4 4QF, UK.
Submission received: 18 October 2021 / Revised: 24 December 2021 / Accepted: 28 December 2021 / Published: 31 December 2021
(This article belongs to the Special Issue Artificial Intelligence Technologies for Structural Health Monitoring)

Abstract

The 3D point cloud reconstruction from photos taken by an unmanned aerial vehicle (UAV) is a promising tool for monitoring and managing the risks of cut-slopes. However, surface changes on cut-slopes are likely to be hidden by seasonal vegetation variations. This paper proposes a vegetation removal method for 3D reconstructed point clouds using (1) a 2D image segmentation deep learning model and (2) the projection matrices available from photogrammetry. For a given point cloud, each 3D point is re-projected into the image coordinates by the projection matrices and classified as vegetation or non-vegetation using the 2D image segmentation model. The 3D points belonging to vegetation in the 2D images are deleted from the point cloud. The effort to build the 2D image segmentation model was significantly reduced by using U-Net with a dataset prepared by the colour index method complemented by manual trimming. The proposed method was applied to a cut-slope at Doam Dam in South Korea, and vegetation was successfully removed from the two point clouds of the cut-slope captured in winter and summer. The M3C2 distance between the two vegetation-removed point clouds demonstrated the feasibility of the proposed method as a tool to reveal the actual change of cut-slopes without the effect of vegetation.

1. Introduction

With the advent of unmanned aerial vehicles (UAVs), 3D reconstruction by photogrammetry has become a useful tool for numerous civil engineering applications including structural modelling [1,2], structural assessment [3], historic building preservation [4], road construction [5], road maintenance [6], construction progress measurement [7], monitoring and controlling construction site activities [8], land usage investigation [9,10], etc.
3D reconstruction is also useful for cut-slope applications, as a drone can survey a wide range of steep cut-slopes efficiently and effectively without exposing field investigators to the risk of climbing. Photogrammetry provides a 3D point cloud as a comprehensive and quantitative documentation of the slope at a given time, without any subjective assessment by field investigators. The 3D point cloud can provide further geometric and geological information for slope stability analysis [11].
Ideally, engineers want to monitor long-term changes of cut-slopes. In theory, this can be done by inspecting the difference between two 3D point clouds temporally apart, but it is often the case that the difference is submerged by seasonal changes of vegetation on the cut-slopes. In order to make such technology realisable, it is important to remove vegetation from the 3D point clouds.
There are three different approaches found in the literature for vegetation removal depending on how each point in the point cloud is classified: (1) the colour index approach [12,13]; (2) the planar estimation approach [14,15,16]; or (3) the machine learning based 3D point cloud segmentation approach [17,18].
The colour index approach is simple to use and works reasonably well for a specific scene with tuned parameters, but has limited performance when the colour of non-vegetation objects is close to that of vegetation. The planar estimation approach showed limited performance on slopes with complex surface geometry [14]. Considering the planar estimation approach’s working principle, which separates vegetation from the ground surface by distance, it is expected to work reasonably well for tall vegetation, but not for short vegetation. A comparative study [19] showed that even the best-performing method was not able to remove vegetation completely. The machine learning based 3D point cloud segmentation approach is relatively new but promising, with high performance expected when trained on a large dataset [17,18]. Pix4Dmapper provides a machine learning based point cloud classifier for dense point clouds reconstructed by itself [20]. Active research on 3D segmentation in computer science, medical imaging, etc. [21,22,23,24] is likely to benefit the 3D vegetation removal approach. However, the requirement for a large 3D dataset remains a practical problem for its development [17].
To achieve high-performance vegetation removal with less effort, this study proposes a new approach combining deep learning-based 2D image segmentation and photogrammetry. The colour index method is employed to reduce the effort of preparing a 2D segmented image dataset.
The paper is organised as follows: introduction to the cut-slope studied in this work (Section 2), methodology of the proposed 3D vegetation removal approach (Section 3), performance of the proposed approach on the cut-slope (Section 4), and conclusions and discussion (Section 5).

2. Cut-Slope of Doam Dam

The cut-slope used in this study was a rocky cut-slope located at Doam Dam in the north-east of South Korea, as shown in Figure 1a. Images taken with a UAV at different times (winter and summer) were used to reconstruct 3D point clouds of the cut-slope. A DJI Phantom 3 Professional UAV was used to take 4000 × 3000 pixel images, as shown in Figure 1c. The UAV was manually piloted, and the camera position and orientation of each image are shown as pyramids in Figure 2. The images were processed by the open-source photogrammetry software COLMAP [25] to reconstruct a 3D point cloud of the cut-slope, as shown in Figure 1b. In the reconstruction, the “automatic reconstruction” command of COLMAP was used with the default settings.
The photogrammetry procedure consists of structure from motion (SfM) and multi-view stereo (MVS), as illustrated in Figure 3. Structure from motion estimates the camera position and orientation of each image by triangulation of the feature points appearing on multiple images. Structure from motion also determines the 3D coordinates of the feature points, known as a sparse point cloud. As a sparse point cloud is usually not dense enough to represent the surface of a slope, the multi-view stereo procedure is performed to generate a dense point cloud by triangulating each pixel across adjacent images to determine the corresponding 3D coordinates in the world coordinates.
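A minimal sketch of scripting this reconstruction step from Python is shown below; it calls COLMAP's automatic_reconstructor command with its default settings, and the workspace and image paths are placeholders.

```python
# Minimal sketch: run COLMAP's automatic reconstruction pipeline (SfM + MVS)
# from Python. COLMAP must be installed and on the PATH; paths are placeholders.
import subprocess
from pathlib import Path

workspace = Path("doam_dam/winter")        # hypothetical project folder
image_dir = workspace / "images"           # UAV photos (4000 x 3000 pixels)

# Equivalent to the "automatic reconstruction" command with default settings:
# feature extraction, matching, sparse (SfM) and dense (MVS) reconstruction.
subprocess.run(
    [
        "colmap", "automatic_reconstructor",
        "--workspace_path", str(workspace),
        "--image_path", str(image_dir),
    ],
    check=True,
)
```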
As shown in Figure 1c, there was no clear distinction between the rock and vegetation, which made vegetation removal challenging on both the 2D images and 3D point cloud.

3. Methodology

3.1. Proposed Vegetation Removal Method by 2D Segmentation and Photogrammetry

There are two possible vegetation removal approaches on the 3D point cloud reconstruction using the 2D image segmentation, as shown in Figure 4. Approach number (#) 1 is to perform the 2D image segmentation to generate mask files to be fed to the photogrammetry procedure. A mask file of an input image has the same number of pixels as the input image, but is filled with 0s and 1s instructing the photogrammetry procedure whether to ignore each pixel or not. COLMAP has such functionality for structure from motion, but not for multi-view stereo. Thus, this approach was not implementable using COLMAP.
This paper proposes an alternative, simpler approach, shown as approach #2 in Figure 4. The photogrammetry procedure is performed without any vegetation consideration to produce a point cloud with vegetation (Figure 5a). Each point in the point cloud is re-projected into the image coordinates (Figure 5c) using the transformation matrix identified by structure from motion. The corresponding pixel in the image coordinates is then classified as vegetation or non-vegetation using the 2D vegetation segmentation by U-Net on the image (Figure 5b). If it belongs to vegetation, the point is deleted from the point cloud. Otherwise, it is kept. This process is repeated for every point in the point cloud to create a point cloud without vegetation (Figure 5d).
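The re-projection step can be illustrated with the short numpy sketch below. It assumes a pinhole camera model with an intrinsic matrix K and a world-to-camera pose (R, t) recovered by structure from motion, and a binary vegetation mask from the 2D segmentation; the function and variable names are illustrative and do not correspond to COLMAP's API.

```python
import numpy as np

def flag_vegetation(points_xyz, K, R, t, veg_mask):
    """Mark 3D points whose re-projection falls on a vegetation pixel.

    points_xyz : (N, 3) world coordinates of the point cloud
    K          : (3, 3) camera intrinsic matrix from SfM
    R, t       : (3, 3) rotation and (3,) translation (world -> camera)
    veg_mask   : (H, W) binary mask from the 2D segmentation (1 = vegetation)
    Returns a boolean array, True where the point should be deleted.
    """
    # Transform to camera coordinates and project with the pinhole model
    cam = points_xyz @ R.T + t                     # (N, 3)
    in_front = cam[:, 2] > 0                       # ignore points behind the camera
    uvw = cam @ K.T                                # homogeneous image coordinates
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]

    h, w = veg_mask.shape
    inside = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    flags = np.zeros(len(points_xyz), dtype=bool)
    idx = np.where(inside)[0]
    flags[idx] = veg_mask[v[idx].astype(int), u[idx].astype(int)] == 1
    return flags

# Points flagged in an image would be removed from the cloud, e.g.:
# cloud = cloud[~flag_vegetation(cloud, K, R, t, mask)]
```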
For a robust 2D segmentation, this study used deep learning based 2D vegetation segmentation using U-Net to overcome the limitations of the colour index method, as described in Section 3.2. However, the colour index method is still useful, especially for green vegetation, and it is proposed to use it to minimise the effort of preparing a segmented image dataset, as shown in Figure 6. Original images are segmented by the colour index method first, followed by a segmentation quality check by a human. Images without any apparent incorrect segmentation are added to the segmented image dataset. Images with incorrect segmentation are manually trimmed and corrected by a human before being added to the segmented image dataset.

3.2. Colour Index

The colour index is an efficient way to perform vegetation segmentation in 2D images [19,26]. A widely used colour index is the Excessive Greenness Index (ExG) [27,28,29]. ExG, shown in Equation (1), is a continuous index where a higher value indicates a green surface and a lower value indicates bare ground.
ExG = 2G − R − B    (1)
where R, G, and B are the normalised red, green, and blue values of each pixel in an image, with 0 ≤ R, G, B ≤ 1. ExG performs reasonably well for segmenting vegetation from soil. However, neither the original ExG nor any other widely used colour index showed a good performance in this study.
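As a simple illustration, ExG can be computed with numpy as sketched below for an 8-bit RGB image; the threshold in the usage comment is a placeholder, not a value used in this study.

```python
import numpy as np

def exg(image_rgb):
    """Excessive Greenness index, ExG = 2G - R - B, on an 8-bit RGB image (H, W, 3)."""
    rgb = image_rgb.astype(np.float64) / 255.0       # normalise so 0 <= R, G, B <= 1
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2.0 * g - r - b

# Illustrative thresholding (the cut-off value is a placeholder):
# vegetation_mask = exg(image) > 0.05
```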
A modified ExG (mExG) was developed in this study:
mExG = G − 1.2B    (2)
where the coefficient 1.2 was determined by trial and error for the best performance. Figure 7 shows a performance comparison of the original ExG and mExG, where any vegetation pixel was displayed as it was and any non-vegetation pixel was displayed as white. The yellow traffic line, the light blue guardrail, and some of the slate grey rock were classified as vegetation by ExG as they also contained green. However, mExG successfully segmented the traffic line, guardrail, and rock as non-vegetation.
Figure 8 shows an example where a single colour index was not sufficient for all kinds of vegetation. Two different colour indices, mExG and mExR (= R − 1.2B), were used. The coefficient 1.2 in mExR was also identified by trial and error. Figure 8b shows the green mask generated with mExG, Figure 8c shows the red mask generated with mExR, and the pixels flagged by either mExG or mExR were combined to capture all vegetation, fresh or dried, as shown in Figure 8d.
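Building on the sketch above, the two modified indices and the union of their masks could be computed as follows; the 1.2 coefficients follow the definitions above, while the thresholds are illustrative placeholders.

```python
def modified_indices(image_rgb):
    """mExG = G - 1.2B and mExR = R - 1.2B on an 8-bit RGB image (H, W, 3)."""
    rgb = image_rgb.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return g - 1.2 * b, r - 1.2 * b

# Union of the green (fresh) and red (dried) vegetation masks; the thresholds
# below are placeholders for illustration only.
# m_exg, m_exr = modified_indices(image)
# vegetation_mask = (m_exg > 0.02) | (m_exr > 0.02)
```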
The colour index method may fail when the colour of vegetation is close to that of the background, as shown in Figure 9 and Figure 10. In the red rectangle of Figure 9, both fresh and dry leaves were segmented successfully with the colour index method (vegetation as white, and non-vegetation as black), as their colours differed from the rock. However, in the blue rectangle, the colour of the dry bush branches (grey) was close to the rock behind them, and they were wrongly segmented as rock. In Figure 10, the dry grass in the red rectangle was segmented correctly, but the colour of the weathered rock surface in the blue rectangle was close to that of the yellow leaves, so it was wrongly segmented as vegetation.

3.3. Manual Trimming

To address the limitations of the colour index method, manual trimming by humans was required. However, even with manual trimming, there were cases where segmentation was not straightforward, as shown in the blue rectangle in Figure 9. Figure 11 shows how a mainstream image segmentation dataset (Cityscapes) deals with this problem. Instead of segmenting the branch from the background pixel by pixel, it gives the envelope of the branch. A similar processing was used in this study and, thus, a small part of the background was also segmented as vegetation in areas crowded with vegetation. For this reason, the area of vegetation was slightly overestimated.
Figure 12 shows examples of manual trimming with this strategy. In Figure 12a, the grey dry bush was manually segmented as vegetation. In Figure 12b, the weathered rock was manually segmented as non-vegetation. Some scattered points were erased, since it was hard to tell from the image itself whether they corresponded to the dry grass or not.

3.4. U-Nets

Deep learning methods used for land usage investigation have shown good performance [9,10], demonstrating that many deep learning models are suitable for vegetation recognition. In this study, the classification problem is not complex (vegetation or non-vegetation), and the aim is to find a deep learning model requiring a smaller dataset whilst keeping the details of the images.
U-Net (Figure 13) was first used in medical image segmentation; its name comes from its U-shaped encoder–decoder architecture. It was designed for small datasets and was trained successfully with only 30 images [30]. The skip connections between the encoder and decoder keep the details of the input images. In U-Net, the encoder part is similar to that of many image classification models and, thus, a U-Net can be built on top of an image classification model. The pre-trained weights of the image classification model can then be used for transfer learning, reducing the time needed for model training. Two image classification models, VGG16 and MobileNetV2, were used to build U-Nets and the results were compared. VGG16 uses small convolution kernels, which help extract complex features [31], whilst MobileNetV2 was designed for mobile applications with a smaller model size, resulting in a reduced amount of computation [32].
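The sketch below illustrates how a U-Net can be assembled on top of a pre-trained VGG16 encoder in Keras, along the lines of Figure 13; the decoder widths and layer choices are assumptions for illustration, not necessarily the exact architecture used in this study.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def vgg16_unet(input_shape=(512, 512, 3)):
    """U-Net with a VGG16 encoder pre-trained on ImageNet (a sketch; the
    decoder widths are assumptions, not the paper's exact architecture)."""
    encoder = tf.keras.applications.VGG16(
        include_top=False, weights="imagenet", input_shape=input_shape)

    # Skip connections taken from the end of each VGG16 block
    skips = [encoder.get_layer(name).output for name in
             ("block1_conv2", "block2_conv2", "block3_conv3", "block4_conv3")]
    x = encoder.get_layer("block5_conv3").output       # bottleneck, 32 x 32

    for filters, skip in zip((512, 256, 128, 64), reversed(skips)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])            # U-Net skip connection
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # vegetation probability
    return Model(encoder.input, outputs)
```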
For training and testing, a dataset of 70 images was used: 35 images taken in summer and the other 35 in winter. The winter images were selected to cover the situations in Figure 9 and Figure 10 and were all manually trimmed. The summer images were randomly selected and segmented successfully by mExG only. To make full use of the 70 segmented images, each 4000 × 3000 pixel image was cut into 25 images of 800 × 600 pixels. In addition, horizontal-flip data augmentation was used so that the 70 images were augmented to 3500 (= 70 × 25 × 2) images. Among them, 70% (2450 images) and 30% (1050 images) were used as the training and test datasets, respectively. The augmented images were resized to 512 × 512 pixels to be fed to the U-Net. The performance metrics used in training/testing were (1) the binary accuracy, measuring how often predictions equal the labels, and (2) the binary cross-entropy loss between the true and predicted labels. The batch size was 16, and the initial learning rate was 10^−4 with the Adam optimiser. Figure 14 shows the training history with the two metrics.
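Using the reported hyper-parameters (batch size 16, Adam optimiser with a 10^−4 initial learning rate, binary cross-entropy loss, and binary accuracy), the training could be set up roughly as follows; train_ds and test_ds are placeholder tf.data datasets of (image, mask) pairs, and the sketch reuses the vgg16_unet function from above.

```python
import tensorflow as tf

# train_ds / test_ds: placeholder tf.data.Dataset objects yielding
# (image, mask) pairs resized to 512 x 512 and batched to 16.
model = vgg16_unet()
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=[tf.keras.metrics.BinaryAccuracy()],
)
history = model.fit(train_ds, validation_data=test_ds, epochs=16)
```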
Figure 14 shows that the accuracy on the training dataset was close to the accuracy on the test dataset. The same was true for the binary cross-entropy loss. Both the accuracy and binary cross-entropy loss plots showed converging trends. The U-Nets based on VGG16 and MobileNetV2 reached approximately 95% and 90% binary accuracy after 16 training epochs, respectively. The binary cross-entropy loss of the VGG16 U-Net was much lower than that of the MobileNetV2 U-Net. The MobileNetV2 U-Net took approximately one third of the training time of the VGG16 U-Net, and its model size was less than half. Meanwhile, the binary accuracy difference between the two U-Nets was about 5%.
Figure 15 shows the vegetation segmentation results of mExG and the U-Nets based on VGG16 or MobileNetV2, both thresholded at a 50% confidence level. The first row in Figure 15 shows the original images taken at different locations and seasons. The second row shows the masks obtained with the colour index method, and the third row shows the manually trimmed results. In the third row, the first two images were high-quality results obtained by the colour index method only, whilst the last two images were manually trimmed based on the masks generated with the colour index method. The last two rows show the masks predicted by the two U-Nets. The predictions of the two U-Nets were close to each other, and they both agreed well with the trimmed masks. The VGG16 U-Net kept more detail than the MobileNetV2 U-Net.

4. Result of the Proposed Method on Doam Dam

This section presents (1) the vegetation removal results of the proposed method in comparison with two existing vegetation removal methods on Doam Dam’s point clouds, (2) the negative effect of vegetation on long-term cut-slope monitoring, and (3) the possibility of long-term cut-slope monitoring enabled by the proposed vegetation removal method. The two existing methods used were (1) the Cloth Simulation Filter [14], available in CloudCompare [33], and (2) the dense point cloud classification [20], available in Pix4Dmapper [34].
Doam Dam’s point clouds used in this study were reconstructed from two sets of photos taken at two different times, one in winter (January 2018) and the other in summer (September 2017). Each image set consisted of 1305 images and was processed by COLMAP. The proposed method was applied to the two point clouds and the results are shown in Figure 16, Figure 17 and Figure 18.
Figure 16 shows (a) the original 3D point cloud in winter, (b) the point cloud classification result by Pix4Dmapper, (c) the vegetation-removed point cloud by the Cloth Simulation Filter, and (d) the vegetation-removed point cloud by the proposed method. The point cloud classification by Pix4Dmapper showed a poor performance: many parts of the slope were classified as “building”, and the large dry vegetation patch on the left of the cut-slope was classified as a mixture of “ground” and “high vegetation”, rather than vegetation. This may indicate that the machine learning model of Pix4D was not trained extensively for cut-slopes. The vegetation-removed point cloud by the Cloth Simulation Filter (with a cloth resolution of 0.1 m and a classification threshold of 0.1 m) showed that the trees along the road at the bottom were largely removed. However, a large portion of the point cloud was still brown due to unremoved dry grasses and bushes on the slope. Considering the working principle of the Cloth Simulation Filter, this was an expected result. The result of the proposed method showed that the dry vegetation was successfully removed from the point cloud.
Figure 17 shows (a) the original 3D point cloud in summer, (b) the point cloud classification result by Pix4Dmapper, (c) the vegetation-removed point cloud by the Cloth Simulation Filter, and (d) the vegetation-removed point cloud by the proposed method. The point cloud classification by Pix4Dmapper again showed a poor performance: most parts of the slope were classified as “high vegetation”, and the “ground” class was relatively small compared with “high vegetation”. However, it was interesting to observe that the “road surface” and two utility poles (“human made object”) were properly classified. Again, this may indicate that the machine learning model of Pix4D was not trained extensively for vegetation and non-vegetation on cut-slopes. The vegetation-removed point cloud by the Cloth Simulation Filter showed a similar result to the winter one. It removed the trees along the road at the bottom well. However, a large portion of the point cloud was still green due to unremoved fresh grasses and bushes on the slope. The result of the proposed method showed that the fresh vegetation was successfully removed from the point cloud.
Figure 18 shows zoom-in views of Figure 16 and Figure 17 for the same region on the cut-slope. Excellent vegetation removal performance was found for both fresh (green) and dry (brown) vegetation in comparison with the two existing methods.
For the purpose of long-term cut-slope monitoring, a geometric distance metric is needed to quantify the difference between two point clouds. A promising metric is the multi-scale model to model cloud comparison (M3C2) distance [35], whose physical meaning is a local distance between two point clouds along the surface normal direction. The M3C2 distances between the winter and summer point clouds were calculated and are shown in Figure 19 and Figure 20.
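The principle behind M3C2, a local distance measured along the surface normal, can be illustrated with the simplified numpy sketch below; it omits the multi-scale normal estimation and projection cylinders of the full algorithm in [35], and the search radius is an arbitrary placeholder.

```python
import numpy as np
from scipy.spatial import cKDTree

def m3c2_like_distance(cloud_a, cloud_b, radius=0.5):
    """Simplified M3C2-style distance: signed offset of cloud_b from cloud_a
    along the local surface normal of cloud_a (illustration only, not the
    full multi-scale algorithm of [35])."""
    tree_a, tree_b = cKDTree(cloud_a), cKDTree(cloud_b)
    distances = np.full(len(cloud_a), np.nan)
    for i, p in enumerate(cloud_a):
        nbrs_a = cloud_a[tree_a.query_ball_point(p, radius)]
        nbrs_b = cloud_b[tree_b.query_ball_point(p, radius)]
        if len(nbrs_a) < 3 or len(nbrs_b) == 0:
            continue
        # Surface normal = eigenvector of the smallest eigenvalue of the
        # local covariance (PCA) in cloud_a; its sign is arbitrary here.
        cov = np.cov((nbrs_a - nbrs_a.mean(axis=0)).T)
        normal = np.linalg.eigh(cov)[1][:, 0]
        # Mean offset of the cloud_b neighbourhood projected onto the normal
        distances[i] = (nbrs_b.mean(axis=0) - p) @ normal
    return distances
```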
Figure 19a shows orange regions with large differences. By comparing the orange regions with the summer point cloud (Figure 17), it was found that they corresponded to the vegetation regions. Figure 20a confirms this in comparison with Figure 18a,b. This demonstrates the negative effect of vegetation on long-term cut-slope monitoring using point clouds.
On the other hand, the M3C2 distance between the vegetation-removed point clouds produced by the proposed method is shown in Figure 19b and Figure 20b. In both figures, the orange regions corresponding to vegetation were successfully removed. However, the top and bottom parts of Figure 19b appear green, which may indicate an alignment issue between the winter and summer point clouds. This issue may be a topic of further study to improve the accuracy of the M3C2 distance calculation.

5. Conclusions and Discussion

This study proposed a vegetation removal method for 3D point clouds using photogrammetry and 2D segmentation by U-Net. The study demonstrated that the proposed method effectively removed the seasonal change due to vegetation on the cut-slope at Doam Dam in South Korea. This showed the feasibility of using 3D point clouds to monitor long-term changes in cut-slopes. The findings of the study are summarised as follows.
  • The colour index was found to be successful for vegetation segmentation when the contrast between vegetation and the background was high, and the colour of vegetation was relatively simple. However, it was not successful otherwise.
  • Deep learning-based 2D image segmentation using U-Net was robust enough to deal with such complex situations, showing performance similar to the manually trimmed image segmentation.
  • The effort to build the 2D segmentation deep learning model was significantly reduced by using U-Net with the dataset prepared by both the colour index method and manual trimming.
  • Two base models, VGG16 and MobileNetV2, were used to build the U-Nets. The VGG16 U-Net showed the better performance, but the MobileNetV2 U-Net took much less time for training with a smaller model size. The segmentation accuracies of the VGG16 and MobileNetV2 U-Nets were approximately 95% and 90%, respectively.
  • The proposed vegetation removal method on the 3D point cloud reconstruction was found to remove vegetation successfully on the two 3D point clouds constructed with images taken at two different times.
  • The M3C2 distance between the two vegetation-removed point clouds of the cut-slope showed that the negative effect of seasonal vegetation on the cut-slope was successfully removed.

Author Contributions

Conceptualisation, K.-Y.K.; software, Y.W.; investigation and writing, Y.W. and K.-Y.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Korea Agency for Infrastructure Technology Advancement under the Ministry of Land, Infrastructure and Transport of the Korean government (project number: 19SCIP-C151408-01).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Popescu, C.; Täljsten, B.; Blanksvärd, T.; Elfgren, L. 3D reconstruction of existing concrete bridges using optical methods. Struct. Infrastruct. Eng. 2019, 15, 912–924. [Google Scholar] [CrossRef] [Green Version]
  2. Tang, S.; Zhang, Y.; Li, Y.; Yuan, Z.; Wang, Y.; Zhang, X.; Li, X.; Zhang, Y.; Guo, R.; Wang, W. Fast and automatic reconstruction of semantically rich 3D indoor maps from low-quality RGB-D sequences. Sensors 2019, 19, 533. [Google Scholar] [CrossRef] [Green Version]
  3. Valença, J.; Júlio, E.N.B.S.; Araújo, H.J. Applications of photogrammetry to structural assessment. Exp. Tech. 2012, 36, 71–81. [Google Scholar] [CrossRef]
  4. Khaloo, A.; Lattanzi, D.; Cunningham, K.; Dell’Andrea, R.; Riley, M. Unmanned aerial vehicle inspection of the Placer River Trail Bridge through image-based 3D modelling. Struct. Infrastruct. Eng. 2018, 14, 124–136. [Google Scholar] [CrossRef]
  5. Congress, S.S.C.; Puppala, A.J. Novel Methodology of Using Aerial Close Range Photogrammetry Technology for Monitoring the Pavement Construction Projects; American Society of Civil Engineers: Reston, VA, USA, 2019; pp. 121–130. [Google Scholar] [CrossRef]
  6. Inzerillo, L.; Di Mino, G.; Roberts, R. Image-based 3D reconstruction using traditional and UAV datasets for analysis of road pavement distress. Autom. Construct. 2018, 96, 457–469. [Google Scholar] [CrossRef]
  7. El-Omari, S.; Moselhi, O. Integrating 3D laser scanning and photogrammetry for progress measurement of construction work. Autom. Construct. 2008, 18, 1–9. [Google Scholar] [CrossRef]
  8. Omar, H.; Mahdjoubi, L.; Kheder, G. Towards an automated photogrammetry-based approach for monitoring and controlling construction site activities. Comput. Ind. 2018, 98, 172–182. [Google Scholar] [CrossRef]
  9. Liu, T.; Abd-Elrahman, A. Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification. ISPRS J. Photogramm. Remote Sens. 2018, 139, 154–170. [Google Scholar] [CrossRef]
  10. Jiang, Y.; Bai, Y.; Han, S. Determining ground elevations covered by vegetation on construction sites using drone-based orthoimage and convolutional neural network. J. Comput. Civ. Eng. 2020, 34, 04020049. [Google Scholar] [CrossRef]
  11. Menegoni, N.; Giordan, D.; Perotti, C.; Tannant, D.D. Detection and geometric characterization of rock mass discontinuities using a 3D high-resolution digital outcrop model generated from RPAS imagery—Ormea rock slope, Italy. Eng. Geol. 2019, 252, 145–163. [Google Scholar] [CrossRef]
  12. Mesas-Carrascosa, F.J.; de Castro, A.I.; Torres-Sánchez, J.; Triviño-Tarradas, P.; Jiménez-Brenes, F.M.; García-Ferrer, A.; López-Granados, F. Classification of 3D Point Clouds Using Color Vegetation Indices for Precision Viticulture and Digitizing Applications. Remote Sens. 2020, 12, 317. [Google Scholar] [CrossRef] [Green Version]
  13. Bassine, F.Z.; Errami, A.; Khaldoun, M. Vegetation Recognition Based on UAV Image Color Index. In Proceedings of the 2019 IEEE International Conference on Environment and Electrical Engineering and 2019 IEEE Industrial and Commercial Power Systems Europe (EEEIC/I&CPS Europe), Genova, Italy, 10–14 June 2019; pp. 1–4. [Google Scholar] [CrossRef]
  14. Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation. Remote Sens. 2016, 8, 501. [Google Scholar] [CrossRef]
  15. Štroner, M.; Urban, R.; Lidmila, M.; Kolář, V.; Křemen, T. Vegetation Filtering of a Steep Rugged Terrain: The Performance of Standard Algorithms and a Newly Proposed Workflow on an Example of a Railway Ledge. Remote Sens. 2021, 13, 3050. [Google Scholar] [CrossRef]
  16. Bulatov, D.; Stütz, D.; Hacker, J.; Hacker, J.; Weinmann, M. Classification of airborne 3D point clouds regarding separation of vegetation in complex environments. Appl. Opt. 2021, 60, F6–F20. [Google Scholar] [CrossRef] [PubMed]
  17. Weidner, L.M. Generalized Machine-Learning-Based Point Cloud Classification for Natural and Cut Slopes. Ph.D. Thesis, Colorado School of Mines, Denver, CO, USA, 2021. [Google Scholar]
  18. Pinto, M.F.; Melo, A.G.; Honório, L.M.; Marcato, A.L.M.; Conceição, A.G.S.; Timotheo, A.O. Deep Learning Applied to Vegetation Identification and Removal Using Multidimensional Aerial Data. Sensors 2020, 20, 6187. [Google Scholar] [CrossRef]
  19. Anders, N.; Valente, J.; Masselink, R.; Keesstra, S. Comparing filtering techniques for removing vegetation from UAV-based photogrammetric point clouds. Drones 2019, 3, 61. [Google Scholar] [CrossRef] [Green Version]
  20. Becker, C.; Häni, N.; Rosinskaya, E.; d’Angelo, E.; Strecha, C. Classification of Aerial Photogrammetric 3D Point Clouds. arXiv 2017, arXiv:1705.08374. [Google Scholar] [CrossRef] [Green Version]
  21. Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; Xiao, J. 3D ShapeNets: A Deep Representation for Volumetric Shapes. arXiv 2015, arXiv:cs/1406.5670. [Google Scholar]
  22. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. arXiv 2017, arXiv:cs/1612.00593. [Google Scholar]
  23. Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J.D.; Schindler, K.; Pollefeys, M. Semantic3D.Net: A New Large-Scale point cloud Classification Benchmark. arXiv 2017, arXiv:cs/1704.03847. [Google Scholar] [CrossRef] [Green Version]
  24. Van Ginneken, B.; Heimann, T.; Styner, M. 3D segmentation in the clinic: A grand challenge. In Proceedings of the MICCAI Workshop on 3D Segmentation in the Clinic: A Grand Challenge, Brisbane, Australia, 29 October 2007; Volume 1, pp. 7–15. [Google Scholar]
  25. Schönberger, J.L.; Frahm, J.M. Structure-from-Motion Revisited. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  26. Bai, X.; Cao, Z.; Wang, Y.; Yu, Z.; Hu, Z.; Zhang, X.; Li, C. Vegetation segmentation robust to illumination variations based on clustering and morphology modelling. Biosyst. Eng. 2014, 125, 80–97. [Google Scholar] [CrossRef]
  27. Guijarro, M.; Pajares, G.; Riomoros, I.; Herrera, P.; Burgos-Artizzu, X.; Ribeiro, A. Automatic segmentation of relevant textures in agricultural images. Comput. Electron. Agric. 2011, 75, 75–83. [Google Scholar] [CrossRef] [Green Version]
  28. Yang, W.; Wang, S.; Zhao, X.; Zhang, J.; Feng, J. Greenness identification based on HSV decision tree. Inf. Process. Agric. 2015, 2, 149–160. [Google Scholar] [CrossRef] [Green Version]
  29. Hassanein, M.; Lari, Z.; El-Sheimy, N. A new vegetation segmentation approach for cropped fields based on threshold detection from hue histograms. Sensors 2018, 18, 1253. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  31. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  32. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  33. CloudCompare. 2021. Available online: https://www.cloudcompare.org/ (accessed on 24 December 2021).
  34. Pix 4D Pix4Dmapper. 2019. Available online: https://www.pix4d.com/ (accessed on 24 December 2021).
  35. James, M.R.; Robson, S.; Smith, M.W. 3-D uncertainty-based topographic change detection with structure-from-motion photogrammetry: Precision maps for ground control and directly georeferenced surveys. Earth Surf. Process. Landf. 2017, 42, 1769–1788. [Google Scholar] [CrossRef]
Figure 1. Doam Dam: (a) location, (b) 3D point cloud, and (c) typical image taken by UAV.
Figure 2. Camera positions and orientations of the drone survey shown as pyramids.
Figure 3. Photogrammetry procedure.
Figure 4. Proposed vegetation removal approach #2.
Figure 5. Illustration of the proposed vegetation removal approach: (a) 3D Point Cloud with vegetation, (b) Original Image, (c) Vegetation Mask, and (d) 3D Point Cloud without vegetation.
Figure 6. Preparation of a segmented image dataset combining the colour index method and manual trimming.
Figure 7. Comparison of the performance of the original ExG and the modified ExG: (a) original image, (b) vegetation segmented by ExG, and (c) by mExG.
Figure 8. Combination of mExG and mExR: (a) original image, (b) mExG, (c) mExR, and (d) mExG + mExR.
Figure 9. Example of colour index failure on dry bush branch.
Figure 10. Example of colour index failure on weathered surface.
Figure 11. Example of image segmentation strategy in the Cityscapes dataset.
Figure 12. Examples of manually trimmed vegetation: (a) Original images, (b) Colour index mask, and (c) Manually trimmed mask.
Figure 13. Structure of the VGG16-based U-Net architecture.
Figure 14. Model training histories for binary accuracy (top) and binary cross entropy loss (bottom).
Figure 15. Vegetation segmentation by U-Nets: (a) summer image 1, (b) summer image 2, (c) winter image 1, and (d) winter image 2.
Figure 16. Three-dimensional (3D) point clouds of Doam Dam in January 2018: (a) with vegetation, (b) point cloud classification by Pix4Dmapper, (c) vegetation removed by the Cloth Simulation Filter, and (d) vegetation removed by the proposed method.
Figure 17. The 3D point clouds of Doam Dam in September 2017: (a) with vegetation, (b) point cloud classification by Pix4Dmapper, (c) vegetation removed by the Cloth Simulation Filter, and (d) vegetation removed by the proposed method.
Figure 18. Zoom-in view of 3D point clouds of Doam Dam: the first row is the original Point Cloud, the 2nd row denotes (a) winter PC with vegetation, (b) summer PC with vegetation, (c) winter point cloud classification by Pix4Dmapper, (d) summer point cloud classification by Pix4D Mapper, (e) winter vegetation removed by the Cloth Simulation Filter, and (f) summer vegetation removed by the Cloth Simulation Filter, (g) winter vegetation removed by the proposed method, and (h) summer vegetation removed by the proposed method.
Figure 19. Multiscale model to model cloud comparison (M3C2) distances between winter and summer: (a) with vegetation, and (b) without vegetation.
Figure 20. Zoomed view of Multiscale Model to Model Cloud Comparison distances between winter and summer: (a) with vegetation and (b) without vegetation.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
