Point-Cloud Segmentation for 3D Edge Detection and Vectorization
Abstract
1. Introduction
- Accessible, platform-independent information: site work often extends beyond the workstation or laptop, whether for reasons of cost, format compatibility or simple practicality.
- Clarification of complex 3D spaces by use of plan, section or perspective views.
- Simplicity by showing selected information pertinent to a given project.
- Reliable perception of scale: a printed plot, either dimensioned or drawn to scale, allows a consistent, shared experience of the information.
2. Related Work
3. Materials and Methods
- The enrichment of the RGB images with their edge maps to construct four-channel images;
- The use of the four-channel images in image-based 3D point-cloud production workflows;
- The separation of the 3D points into edge and non-edge points;
- The 3D vectorization of 3D edge points.
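The first step of the workflow above, constructing a four-channel image from an RGB image and its 2D edge map, can be sketched as follows. This is a minimal illustration using NumPy; the function name and array layout are assumptions for exposition, not the 3DPlan implementation itself:

```python
import numpy as np

def make_four_channel(rgb: np.ndarray, edge_map: np.ndarray) -> np.ndarray:
    """Stack an RGB image (H, W, 3) with its single-channel 2D edge map (H, W)
    into a four-channel image (H, W, 4), the input format of the workflow."""
    if rgb.shape[:2] != edge_map.shape:
        raise ValueError("RGB image and edge map must share the same height and width")
    return np.dstack([rgb, edge_map]).astype(rgb.dtype)

# Tiny synthetic example: a 2x2 RGB image and a binary (0/255) edge map.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
edges = np.array([[0, 255], [255, 0]], dtype=np.uint8)
four = make_four_channel(rgb, edges)
print(four.shape)  # (2, 2, 4)
```

In practice the edge map would come from a 2D edge detector (e.g., Canny) applied to the same image, so that the fourth channel stays pixel-aligned with the RGB data.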
3.1. Edge Maps and the Creation of Four-Channel Images
3.2. Image-Based 3D Point Cloud Software
3.2.1. Agisoft-Metashape
- Firstly, the “.psx” project file, containing one chunk, is created.
- Then, the four-channel images are added to that chunk.
- Afterwards, the image-matching and camera-aligning algorithms are executed.
- When the sparse point cloud is generated, a depth map of each image is produced.
- Finally, the semantically enriched dense point cloud is created using the depth maps and saved using the “.ply” format.
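The semantically enriched dense cloud stores an extra edge value per point; a minimal sketch of writing such a cloud to an ASCII “.ply” file with one additional scalar property follows. The property name `label` and the exact layout are assumptions for illustration, not Metashape’s actual export format:

```python
def write_labelled_ply(path, points):
    """Write (x, y, z, r, g, b, label) tuples to an ASCII PLY file.
    The extra 'label' property carries the per-point edge information."""
    header = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "property uchar red",
        "property uchar green",
        "property uchar blue",
        "property uchar label",  # semantic enrichment: edge value per point
        "end_header",
    ]
    with open(path, "w") as f:
        f.write("\n".join(header) + "\n")
        for x, y, z, r, g, b, lab in points:
            f.write(f"{x} {y} {z} {r} {g} {b} {lab}\n")

write_labelled_ply("cloud.ply", [(0.0, 0.0, 0.0, 10, 20, 30, 255)])
print(open("cloud.ply").read().splitlines()[2])  # element vertex 1
```

Because PLY allows arbitrary scalar vertex properties, downstream tools can read the cloud without any change to the geometry or color attributes.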
3.2.2. Mapillary-OpenSfM
- To exclude the fourth channel, i.e., the 2D edge map, from the feature-extraction process.
- To consider the fourth channel during the determination of the color values of each point and, thus, associate each point with four values instead of three.
- During the execution of the OpenSfM software, the images’ dimensions are reduced for computational-cost reasons. Thus, the software was modified to scale down the fourth channel in the same way as the other channels.
- To change the format of the undistorted images to “.tiff” instead of “.jpg” using the user-friendly configuration file.
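The scaling modification above, i.e., resizing the fourth channel in lockstep with the RGB channels so the edge map stays pixel-aligned, can be illustrated with a simple nearest-neighbour downscaling applied uniformly to all channels. This is a hedged sketch in NumPy, not the modified OpenSfM code:

```python
import numpy as np

def downscale_all_channels(image: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour downscaling applied uniformly to every channel of an
    (H, W, C) image, so the edge map (fourth channel) stays aligned with RGB."""
    return image[::factor, ::factor, :]

img = np.arange(4 * 4 * 4, dtype=np.uint8).reshape(4, 4, 4)  # H, W, 4 channels
small = downscale_all_channels(img, 2)
print(small.shape)  # (2, 2, 4)
```

The essential point is that one resampling operation acts on the whole (H, W, 4) array at once; resizing RGB and the edge map separately would risk misaligning the semantic channel.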
3.2.3. Edge and Non-Edge Points Classification and 3D Vectorization
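Since every reconstructed 3D point carries a fourth, semantic value, separating edge from non-edge points reduces to thresholding that value. A minimal sketch, where the threshold of 127 and the (N, 4) point layout are assumptions for illustration:

```python
import numpy as np

def split_edge_points(points: np.ndarray, threshold: float = 127.0):
    """Split an (N, 4) array of points [x, y, z, edge_value] into
    edge and non-edge subsets by thresholding the fourth value."""
    mask = points[:, 3] > threshold
    return points[mask], points[~mask]

pts = np.array([[0.0, 0.0, 0.0, 255.0],   # strong edge response
                [1.0, 0.0, 0.0, 0.0],     # no edge response
                [2.0, 1.0, 0.0, 200.0]])  # strong edge response
edge, non_edge = split_edge_points(pts)
print(len(edge), len(non_edge))  # 2 1
```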
4. Results
4.1. Oversimplified Experiments
4.2. Implementation Using Four-Channel Images and the Professional SfM-MVS Software
4.3. Implementation of the 3DPlan Software
3DPlan User Interface
4.4. The Environment Setting and the Execution of the Algorithm
5. Discussion
- The creation of four-channel images using the given RGB images and their label channel.
- The creation of a semantically enriched point cloud exploiting the four-channel images in combination with professional SfM-MVS software.
- The modification of the Mapillary-OpenSfM software in order to create semantically enriched point clouds using four-channel images.
- The detection of the 3D edges with the most feasible accuracy with respect to the quality of the 2D edge semantic information.
- A software package, available on GitHub (https://github.com/thobet/3DPlan (accessed on 15 September 2022)), which could serve as a basis for more sophisticated approaches to automating the production of 2D–3D architectural drawings.
5.1. Creation of Four-Channel Images
5.2. Classification of 3D Points into Edge and Non-Edge Points
5.3. 3D Edge Points of Each Edge and 3D Vectorization
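Grouping the 3D edge points belonging to each physical edge (e.g., with a density-based clustering such as DBSCAN; see Abbreviations) and fitting a line per group yields the vectorized edges. The sketch below fits a line segment to one such cluster using a simple PCA fit via SVD, as a stand-in for a RANSAC-based fit; the function name and cluster layout are assumptions, not the 3DPlan code:

```python
import numpy as np

def fit_line_segment(points: np.ndarray):
    """Fit a 3D line to an (N, 3) cluster of edge points via PCA and
    return the two endpoints of the segment spanning the cluster."""
    centroid = points.mean(axis=0)
    # Principal direction of the cluster = first right-singular vector.
    _, _, vt = np.linalg.svd(points - centroid)
    direction = vt[0]
    t = (points - centroid) @ direction  # scalar position along the line
    return centroid + t.min() * direction, centroid + t.max() * direction

# A nearly collinear cluster of edge points along the x axis.
cluster = np.array([[0.0, 0.0, 0.0], [1.0, 0.01, 0.0], [2.0, -0.01, 0.0]])
p0, p1 = fit_line_segment(cluster)
print(round(float(np.linalg.norm(p1 - p0)), 1))  # 2.0
```

A robust pipeline would run this per cluster and reject outlier points (the role RANSAC plays), but the endpoint extraction along the fitted direction is the same.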
5.4. 3D Point-Cloud Characteristics and Post-Processing
6. Concluding Remarks and Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
- DBSCAN: Density-based spatial clustering of applications with noise
- MVS: Multi-view stereo
- RANSAC: Random sample consensus
- SfM: Structure from motion
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Betsas, T.; Georgopoulos, A. Point-Cloud Segmentation for 3D Edge Detection and Vectorization. Heritage 2022, 5, 4037–4060. https://doi.org/10.3390/heritage5040208