Article

Comparing Machine and Deep Learning Methods for Large 3D Heritage Semantic Segmentation

1
Department of Environment, Land and Infrastructure Engineering (DIATI), Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy
2
3D Optical Metrology (3DOM) Unit, Bruno Kessler Foundation (FBK), Via Sommarive 18, 38123 Trento, Italy
3
Department of Information Engineering (DII), Università Politecnica delle Marche, Via Brecce Bianche 12, 60100 Ancona, Italy
4
Department of Construction, Civil Engineering and Architecture (DICEA), Università Politecnica delle Marche, Via Brecce Bianche, 60100 Ancona, Italy
*
Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2020, 9(9), 535; https://doi.org/10.3390/ijgi9090535
Submission received: 29 July 2020 / Revised: 19 August 2020 / Accepted: 25 August 2020 / Published: 7 September 2020
(This article belongs to the Special Issue Machine Learning and Deep Learning in Cultural Heritage)

Abstract

In recent years, the semantic segmentation of 3D point clouds has been a topic of interest for several fields of application. Cultural heritage scenarios have become the subject of such studies mainly thanks to the development of photogrammetry and laser scanning techniques. Classification algorithms based on machine and deep learning methods make it possible to process huge amounts of data such as 3D point clouds. In this context, the aim of this paper is to compare machine and deep learning methods for the classification of large 3D cultural heritage point clouds. Then, considering the best performances of both techniques, it proposes an architecture named DGCNN-Mod+3Dfeat that combines the positive aspects and advantages of these two methodologies for the semantic segmentation of cultural heritage point clouds. To demonstrate the validity of our idea, several experiments on the ArCH benchmark are reported and commented on.

1. Introduction

Semantic segmentation is one of the most important research topics in computer vision; its task is to classify each pixel or point of a scene into classes with specific features [1,2]. In the past, semantic segmentation concerned two-dimensional images but, due to limitations related to occlusions, illumination, posture and other problems, researchers began to deal with three-dimensional data. This shift also occurred thanks to the growing diffusion of photogrammetric and laser scanning surveys. In the 3D form of semantic segmentation, regular or irregular points are processed in 3D space [3].
The automatic interpretation of 3D point clouds by semantic segmentation in the cultural heritage (CH) context certainly represents a very challenging task. Digital documentation is not easy to obtain, but it is necessary for the dissemination of cultural heritage [4]. Shapes are complex and the objects, even when repeated, are unique, handcrafted and not serialised. Nevertheless, the understanding of 3D scenes in digital CH is crucial, as it enables many applications, such as the identification of similar architectural elements in large datasets, the analysis of the state of conservation of materials, or the subdivision of point clouds into their structural parts as a preliminary step for scan-to-BIM processes [5].
In recent years, research on the semantic segmentation of CH point clouds has made significant breakthroughs thanks to the application of artificial intelligence (AI) methods [6,7]. In the literature, most of the machine learning (ML) and deep learning (DL) approaches employ supervised learning. According to [8], in the era of big data, ML classification approaches are evolving into DL approaches, since the latter deal more efficiently with the large quantities of data derived from modern surveying methods and with the complexity of 3D point clouds, by continuously learning and adjusting their abilities [9,10,11]. However, as their success relies on the availability of large amounts of annotated data, the complete replacement of ML approaches within the heritage field is still not possible. A further drawback of DL methods is that they are not easily interpretable, since these models behave as black boxes and fail to provide explanations for their predictions.
In this context, the aim of this research is to compare two different classification approaches for CH scenarios, based on machine and deep learning techniques. Four state-of-the-art ML algorithms and four DL architectures are tested, highlighting the possibility of combining the positive aspects of each methodology into a new architecture (later called DGCNN-Mod+3Dfeat) for the semantic segmentation of 3D CH architectures.
Among the ML methods, we used K-Nearest Neighbours (kNN) [12], Naive Bayes (NB) [13], Decision Trees (DT) [14] and Random Forest (RF) [15]. They have been trained with geometric features and small annotated patches, selected ad hoc over the different case studies.
Regarding the DL approaches, four different versions of DGCNN [16] are used, trained on several scenes of the newly proposed ArCH heritage benchmark [17], composed of various annotated CH point clouds. Two of the four DGCNN architectures (DGCNN and DGCNN-Mod) have already been tested by the authors in a previous paper [18] where, in a comparison among state-of-the-art networks (PointNet, PointNet++, PCNN, DGCNN), DGCNN proved to be the best architecture for our data. Therefore, in this paper, the previously presented results are compared with those achieved by introducing new features into the networks.
The evaluation of the selected ML and DL methods is performed on three different heritage scenes belonging to the above cited ArCH dataset.

Research Questions and Paper Structure

In the context of CH-related point cloud classification and semantic segmentation methods, four research questions are addressed by this study:
RQ1
Is it possible to provide the research community with guidelines for the automatic segmentation of point clouds in the CH domain?
RQ2
Which ML and DL algorithms perform better for the semantic segmentation of heritage 3D point clouds?
RQ3
Is there a winning solution between ML and DL in the CH domain?
RQ4
Is it correct to compare the performance of ML and DL algorithms within the same pipeline?
The paper is organised as follows. Section 2 provides a description of the approaches adopted for point cloud semantic segmentation. Section 3 describes the dataset and methodology used. Section 4 offers an extensive comparative evaluation and analysis of the ML and DL approaches. A detailed discussion of the results is presented in Section 5. Finally, Section 6 draws conclusions and discusses future directions for this field of research.
Additional experiments have finally been run with the DL methods on the whole ArCH dataset (which includes four new labelled CH scenes compared to the 12 used for the previous tests presented in [18]), in order to check whether a larger training dataset would effectively improve the performances (see Appendix A, Table A4 and Table A5 for detailed metrics). The results shown in the paper do not include these four new scenes, as this would have compromised a fair comparison with the DGCNN-Mod presented in [18]; therefore, the same number of scenes has been kept.

2. Related Works

In the literature, there is a limited number of applications that use machine learning methods to classify 3D point clouds into the different objects belonging to cultural heritage scenes, even though, according to [6], these methods have made great progress in this regard. Indeed, in their study the authors explore the applicability of supervised machine learning approaches to cultural heritage by providing a standardised pipeline for several case studies.
In this domain, the research in [19] has two main objectives: providing a framework that extracts geometric primitives from a masonry image, and extracting and selecting statistical features for the automatic clustering of masonry. The authors combine existing image processing and machine learning tools for the image-based classification of masonry walls and then compare the performance of five different machine learning algorithms on the classification task. The main issue of this method is that each block of the wall is not individually characterised.
The research presented in [20] aims to overcome this limitation by presenting a novel algorithm for the automatic segmentation of masonry blocks from a 3D point cloud acquired with LiDAR technology. The image processing algorithm is based on an optimisation of the watershed algorithm, also used to improve segmentation in other works [21,22], to automatically segment 3D point clouds in 3D space, isolating each single stone block.
In their research, Grilli et al. [23] propose a strategy to classify heritage 3D models by applying supervised machine learning classification algorithms to their UV maps. To verify the reliability of the method, the authors evaluate different classifiers over three heterogeneous case studies.
In [24] the authors explore the relation between covariance features and architectural elements using a supervised machine learning classifier (Random Forest), finding in particular a correlation between the feature search radii and the size of the elements. A more in-depth analysis of this approach [25] demonstrates the capability of the algorithm to generalise across different unseen architectural scenarios. The research conducted by Murtiyoso et al. [26] aims to ease the manual labelling of the large point cloud training sets required by machine learning algorithms. Moreover, the authors introduce a series of functions that automate some segmentation and classification tasks for CH point clouds. Due to the complexity of the problem, the project considers only some important classes. The toolbox uses a multi-scale approach: the point clouds are processed from the historical complex down to the architectural elements, making it suitable for different types of heritage.
Mainly in recent years, deep learning has received increasing attention from researchers and has been successfully applied to semantically segment 3D point clouds in different domains [3,27]. In the context of cultural heritage there are still few studies that use deep learning approaches to classify 3D point clouds. The need for large, well-annotated datasets can limit their development, blocking research in this direction. In some cases this problem can be mitigated using synthetic datasets [8,28]. However, the research conducted so far has yielded encouraging results.
Deep learning approaches are well suited to directly managing the raw point cloud data without an intermediate processing step that converts them into a more regular representation. The first approach of this kind is proposed in [29]. An extended version of that network considers not only each point separately, but also its neighbours, in order to exploit local features and thus obtain more efficient classification results [30].
Malinverni et al. [7] use PointNet++ to semantically segment 3D point clouds of a CH dataset. The aim of the paper is to demonstrate the efficiency of the chosen deep learning approach in processing point clouds of the CH domain. Moreover, the method is evaluated on a purpose-built CH dataset manually annotated by domain experts.
An alternative to these approaches is based on the Point Convolutional Neural Network (PCNN) [31], a novel architecture that uses two operators, extension and restriction. The extension operator maps functions defined over the point cloud to volumetric functions, while the restriction operator does the inverse.
An approach inspired by PointNet is proposed in [16], with the difference that it exploits local geometric structures using a neural network module, EdgeConv, which constructs a local neighbourhood graph and applies convolution-like operations. Moreover, the model, named DGCNN (Dynamic Graph Convolutional Neural Network), dynamically updates the graph, changing the set of k-nearest neighbours of a point from layer to layer of the network.
In the CH context, inspired by this architecture, Pierdicca et al. [18] propose to semantically segment 3D point clouds using an augmented DGCNN that adds features such as normals and the radiometric component. This modified version aims to simplify the management of digital CH assets, which have complex, extremely variable geometries defined with a high level of detail. The authors also propose a novel, publicly available dataset to validate the new architecture, comparing it with other DL methods.
Another study that uses DL to classify CH objects is presented in [5]. The authors compare the performance of machine and deep learning methods in the classification of two different heritage datasets. Using machine learning approaches (Random Forest and One-versus-One) the performances are excellent in almost all the identified classes, but there is no correlation between the characteristics. Using DL approaches (1D CNN, 2D CNN and RNN Bi-LSTM) the 3D point clouds are treated as sequences of points. However, the ML approaches outperform the DL ones because, according to the authors, the DL methods implemented are not very recent, and other architectures will therefore be tested.

3. Materials and Methods

In this section, the workflow of the comparison between the two methodologies is presented, together with the classifiers and scenes used for the three experiments (Figure 1).
As previously mentioned, the goal of this paper is not to compare algorithms, but rather classification approaches. In fact, a fair comparison between classification algorithms would require the use of the same training data. In this context, some initial experiments using the same number of scenes in the training phase for both the DL and ML algorithms have been performed. However, the ML classifiers did not achieve satisfactory results compared with those obtained using reduced annotated portions of the test scenes. Therefore, as the aim of the paper is to discuss the best approaches for heritage classification, a comparison between ML and DL approaches with different training data is presented.
Three different experiments have been performed. In the first experiment both the ML and the DL classifiers have been trained on the same portion of a symmetrical scene: half of the point cloud is used for training and validation, and half for the final test. In the second and third experiments the samples used to train the ML and DL classifiers differ. On the one hand, for the ML approach, a reduced portion of the test scene is annotated and used during the training phase, leaving the remaining part for the prediction phase. On the other hand, for the DL approach, different annotated scenes are used for the training phase, while completely new data are presented to the networks for testing. Further details are given in the following subsections.

3.1. Benchmark for Point Cloud Semantic Segmentation

The scenes used for the following tests are part of the ArCH benchmark [17], a group of architectural point clouds collected by several universities and research bodies with the aim of sharing and labelling an adequate number of point clouds for training and testing artificial intelligence methods.
This benchmark represents the current state of the art in the field of annotated cultural heritage point clouds, with 15 point clouds of architectural scenarios for training and two for testing. Although other benchmarks and datasets for point cloud classification and semantic segmentation already exist [32,33,34,35], the ArCH dataset is the only one specifically focused on the CH domain and with a higher level of detail; therefore, it has been chosen for the tests presented here.
For our experiments, three test scenes are used (Table 1): (i) the symmetrical point cloud of the Trompone Church, (ii) the Palace of Pilato of the Sacred Mount of Varallo (SMV), a two-floor building, neither symmetrical nor linear, and (iii) the portico of the Sacred Mount of Ghiffa (SMG), a simpler and quite linear scene. For the DL approach, the symmetrical point cloud is used for an initial evaluation of the hyperparameters, while the other two scenes allow us to evaluate the generalisation ability of state-of-the-art neural networks by testing them on different cases: a complex one, SMV, and a simpler one, SMG.

3.2. Machine Learning Classifiers for Point Cloud Semantic Segmentation

Over the past ten years, different machine learning approaches have been proposed in the literature for point cloud semantic segmentation, such as k-Nearest Neighbours (kNN) [36], Support Vector Machine (SVM) [37,38], Decision Tree (DT) [39,40], AdaBoost (AB) [41,42], Naive Bayes (NB) [43,44], and Random Forest (RF) [45]. Among them, in this paper the kNN, NB, DT, and RF classifiers have been implemented in Python 3, starting from the Scikit-learn library [46], in order to solve multi-class classification tasks. For each case study the four classifiers have been trained with selected features and small manually annotated portions of the datasets.
With regard to the kNN classifier, since the k value is highly data-dependent, a few preliminary tests with increasing values have been run in order to find the best fitting solution. The best results were achieved with low values of k (k = 5).
The NB classifier used is GaussianNB [47], a variant of Naive Bayes that assumes a Gaussian (normal) distribution and supports continuous data.
For the DT, different maximum depths of the tree have been tested. The results confirmed that the default parameter max_depth=None, by which nodes are expanded until all leaves are pure, leads to higher accuracy.
Within the RF classifier, two parameters have been initially tuned considering the best F1-score computed on the evaluation set: the number of decision trees to be generated (Ntree) and the maximum depth of the trees (Mtry) [45]. The reported results refer to the use of 100 trees with max_depth=None.
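The following snippet is a minimal sketch of how the four classifiers can be configured with Scikit-learn according to the settings described above (k = 5, GaussianNB, unlimited tree depth, 100 trees for RF); the variable names and the helper function are purely illustrative.

```python
# Illustrative setup of the four ML classifiers used in the comparison,
# assuming X_* are (n_points, n_features) arrays and y_train the point labels.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

classifiers = {
    "kNN": KNeighborsClassifier(n_neighbors=5),           # k = 5 gave the best results
    "NB": GaussianNB(),                                    # Gaussian Naive Bayes
    "DT": DecisionTreeClassifier(max_depth=None),          # nodes expanded until leaves are pure
    "RF": RandomForestClassifier(n_estimators=100, max_depth=None),  # 100 trees
}

def train_and_predict(X_train, y_train, X_test):
    """Fit each classifier on the annotated patch and label the remaining points."""
    predictions = {}
    for name, clf in classifiers.items():
        clf.fit(X_train, y_train)
        predictions[name] = clf.predict(X_test)
    return predictions
```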

Features Selection

In order to effectively train the different ML classifiers, a combination of 3D geometric features has been used, including normal-based (Verticality), height-based (Z coordinate), and eigenvalue-based features (also known as covariance features).
The covariance features [48] are shape descriptors obtained as a combination of the eigenvalues (λ1 > λ2 > λ3) extracted from the covariance matrix, a 3D tensor that describes the distribution of points within a certain neighbourhood. Through a statistical analysis, the Principal Component Analysis (PCA), it is possible to extract from this matrix the three eigenvalues representing the local 3D structure. According to Weinmann et al. [49], different strategies can be applied to recover the local neighbourhood of points belonging to a 3D point cloud: it can generally be computed as a sphere or a cylinder with a fixed radius, or be defined by the number of k nearest neighbours. In this paper, considering the studies presented in [24,25], only a few covariance features (Omnivariance, Surface Variation and Planarity) have been calculated, on spherical neighbourhoods at specific radii, in order to highlight the architectural components.
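A sketch of how these covariance features can be derived on a spherical neighbourhood is given below, assuming the point cloud is stored as an (N, 3) NumPy array; the radius value and the function name are illustrative and not those used in the paper.

```python
# Eigenvalue-based (covariance) features on spherical neighbourhoods.
import numpy as np
from scipy.spatial import cKDTree

def covariance_features(points, radius=0.1):
    """Omnivariance, planarity and surface variation for every point."""
    tree = cKDTree(points)
    feats = np.zeros((len(points), 3))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)
        if len(idx) < 3:
            continue  # not enough neighbours to estimate a covariance matrix
        cov = np.cov(points[idx], rowvar=False)               # 3 x 3 covariance matrix
        evals = np.sort(np.linalg.eigvalsh(cov))[::-1]        # lambda1 >= lambda2 >= lambda3
        l1, l2, l3 = np.clip(evals, 0.0, None)                # guard against tiny negative values
        if l1 <= 0:
            continue
        feats[i, 0] = (l1 * l2 * l3) ** (1.0 / 3.0)           # omnivariance
        feats[i, 1] = (l2 - l3) / l1                          # planarity
        feats[i, 2] = l3 / (l1 + l2 + l3)                     # surface variation
    return feats
```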
As can be seen in Figure 2, different features emphasise different elements. Verticality eases the distinction between vertical and horizontal surfaces, allowing the recognition of walls and columns as well as floors, stairs and vaults. Planarity becomes useful for isolating columns and cylindrical elements when extracted at radii close to their diameter. Finally, Surface Variation and Omnivariance, calculated within a short radius, emphasise changes in shape, facilitating, for example, the detection of moldings and windows.

3.3. Deep Learning for Point Cloud Semantic Segmentation

In this paper, the approach presented in [18] is adopted, where a modified version of DGCNN, called DGCNN-Mod, is proposed. This implementation includes several improvements compared to the original version: in the input layer, the kNN phase considers the normalised point coordinates, colour feature transformations such as HSV, and normal vectors. Moreover, the performance of DGCNN-Mod is compared with two new versions of this network, DGCNN-3Dfeat and DGCNN-Mod+3Dfeat, which take into consideration the other important features mentioned above. In particular, DGCNN-3Dfeat adds the 3D features to the kNN phase, while, for a complete ablation study, DGCNN-Mod+3Dfeat comprises all the available features. Figure 3 represents the configurations of the EdgeConv layer with the various feature combinations.
Compared to DGCNN-Mod, two types of pre-processing techniques are tested: Scaler1 and Scaler2. Scaler1 standardises features by removing the mean and scaling to unit variance. The standard score of a sample x is determined as:
z = (x − μ) / σ
where μ is the mean of the training samples and σ is their standard deviation. Instead, Scaler2 scales features using statistics that are robust to outliers: it removes the median and scales the data according to the interquartile range (IQR), i.e., the range between the 1st quartile (25th percentile) and the 3rd quartile (75th percentile). Centering and scaling happen independently for each feature, by computing the relevant statistics on the samples of the training set; the median and interquartile range are then stored to be applied to the validation and test sets. In addition, while the original DGCNN network uses the Cross Entropy Loss, our datasets are highly unbalanced, so we also decided to test the Focal Loss [50], a function specifically designed to address class imbalance.
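As a reference, a hedged sketch of a multi-class focal loss is reported below; the γ and α values shown are common defaults from the focal loss literature [50] and are not necessarily those adopted in our experiments.

```python
# Minimal multi-class focal loss sketch (TensorFlow 2.x style).
import tensorflow as tf

def focal_loss(y_true_onehot, y_pred_logits, gamma=2.0, alpha=0.25):
    """Down-weights well-classified points so that rare classes contribute more."""
    probs = tf.nn.softmax(y_pred_logits, axis=-1)
    cross_entropy = -y_true_onehot * tf.math.log(tf.clip_by_value(probs, 1e-8, 1.0))
    modulating_factor = alpha * tf.pow(1.0 - probs, gamma)
    return tf.reduce_mean(tf.reduce_sum(modulating_factor * cross_entropy, axis=-1))
```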
All the deep learning approaches have been implemented using Python 3 and the TensorFlow framework. The pre-processing techniques on the features, i.e., Scaler1 and Scaler2, have been implemented through the Scikit-learn library [46], also in Python.
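In Scikit-learn terms, Scaler1 and Scaler2 correspond to the standard and robust scalers; a minimal sketch is given below, assuming the per-point features are stacked in an (N, F) array and that the statistics are fitted on the training set only.

```python
# Scaler1 / Scaler2 pre-processing of the per-point features.
from sklearn.preprocessing import StandardScaler, RobustScaler

scaler1 = StandardScaler()                            # removes the mean, scales to unit variance
scaler2 = RobustScaler(quantile_range=(25.0, 75.0))   # removes the median, scales by the IQR

def preprocess(train_feats, eval_feats, scaler):
    """Fit the scaler on the training features and apply the same statistics elsewhere."""
    scaler.fit(train_feats)
    return scaler.transform(train_feats), scaler.transform(eval_feats)
```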

3.4. Performance Evaluation Metrics

In the experimental section (Section 4), the employed state-of-the-art approaches are compared using the most common performance metrics for semantic segmentation. The Overall Accuracy (OA), along with the weighted Precision, Recall and F1-Score, is calculated on the test set, as these are good indicators of whether the approaches are able to generalise properly. Please note that OA and Recall have the same values, since the metrics are weighted. In addition, a comparison is also made between the individual classes of the test set for each experiment performed: Precision, Recall, F1-Score and Intersection over Union (IoU) values are calculated for each type of object (see Appendix A).
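The metrics can be computed as sketched below with Scikit-learn, assuming per-point ground-truth and predicted labels; with weighted averaging, the Overall Accuracy coincides with the weighted Recall.

```python
# Overall Accuracy, weighted Precision/Recall/F1 and per-class IoU.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support, jaccard_score

def evaluate(y_true, y_pred):
    oa = accuracy_score(y_true, y_pred)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted", zero_division=0)
    per_class_iou = jaccard_score(y_true, y_pred, average=None)  # IoU of each class
    return oa, precision, recall, f1, per_class_iou
```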
It is worth noting that, in the scenes to be classified, the number of points varies according to the approach involved. In fact, with ML the total number of points of the scene is used both in input and in output, while with DL the unseen point cloud is subsampled with respect to the original one for computational reasons. The number of subsampled points can be set arbitrarily; the most common value is 4096 points for each analysed block, but higher values can be chosen (e.g., 8192) at the expense of training time. In this paper 4096 points per block have been used as the subsampling parameter.
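An illustrative sketch of this block-wise subsampling is reported below, assuming blocks of 1 × 1 m on the XY plane and 4096 points per block; the function and parameter names are hypothetical.

```python
# Block-wise subsampling of a point cloud before feeding it to the networks.
import numpy as np

def subsample_blocks(points, block_size=1.0, num_points=4096):
    """Split the XY plane into blocks of 'block_size' metres and sample each block."""
    min_xy = points[:, :2].min(axis=0)
    block_idx = np.floor((points[:, :2] - min_xy) / block_size).astype(int)
    blocks = []
    for key in np.unique(block_idx, axis=0):
        mask = np.all(block_idx == key, axis=1)
        block = points[mask]
        # sample with replacement only when the block holds fewer points than requested
        choice = np.random.choice(len(block), num_points, replace=len(block) < num_points)
        blocks.append(block[choice])
    return blocks
```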

4. Results

In this section, several experiments performed with the previously presented ML and DL methods are reported. The experiment proposed in Section 4.1 concerns the segmentation of the symmetrical Trompone scene, starting from the partial annotation of the same scene. In the second and third experiments, the training samples change according to the adopted classification strategy (ML or DL), but the same test scenes are used for both approaches: the SMV scene in Section 4.2 and the SMG scene in Section 4.3.

4.1. First Experiment—Segmentation of a Partially Annotated Scene

In this setting, the Trompone scene is initially split into two parts, choosing one side for the training and the symmetrical one for the test. Then, the side used for the training phase is further split into training set (80%) and validation set (20%). The validation set is used to test the OA at the end of each training epoch while the evaluation is performed on the test set. For this test, nine architectural classes have been considered. Unlike the next experiments (Section 4.2 and Section 4.3), the class “Other” was used during the training as it could be uniquely identified with the furnishing of the church (mainly benches and confessionals). No points from the class "roof" were tested, this being an indoor scene.
The original DGCNN uses its standard hyperparameters: normalised XYZ coordinates for the kNN phase and XYZ + RGB for the feature learning phase, with a 1 × 1 m block size. This latter parameter defines only the size of the block base, since the height is considered "endless"; in this way the whole scene can be analysed and the lowest number of blocks is defined. For the other DGCNN-based approaches we used the Scaler1 pre-processing for the features, as it proved to be the best configuration among all the tests performed. In addition, for the DGCNN-Mod+3Dfeat network, the best result was achieved using the Focal Loss function.
In Table 2, the performances of the state-of-the-art approaches are reported. As can be seen, the best results in terms of accuracy metrics are obtained by the RF approach. In addition, the other approaches exceeding 0.80 of accuracy are DT, DGCNN-3Dfeat, and DGCNN-Mod+3Dfeat, which all have in common the use of the 3D features. We can therefore deduce that this type of feature improves the performance of the original DGCNN, as it is very representative of the classes under investigation.
Table A1 (see Appendix A) reports the accuracy metrics (Precision, Recall, F1-Score and IoU) for each class of the Trompone test set. From the analysis of this table it is possible to understand which classes are best discriminated by the various approaches. Finally, Figure 4 depicts the manually annotated test scene (ground truth) and the automatic segmentation results obtained with the best approaches. From this visual result we can again notice the issues with the classes Stair (in green) and Window-Door (in yellow); for example, none of the approaches was able to identify the door at the centre of the scene.

4.2. Second Experiment—Segmentation of an Unseen Scene, the Sacro Monte Varallo (SMV)

In the second and third experiments, as previously anticipated, the training samples change according to the classification strategy adopted (ML or DL). Moreover, based on the experience of [30], the class "Other" is excluded from the classification, as the objects it includes are too heterogeneous and would confuse the NN. The portion of the scene used to train the different ML classifiers consists of 2,526,393 points out of 16,200,442 (approx. 16%) (Figure 5), while for the NNs 12 scenes of the ArCH dataset have been used, in line with the previous tests performed in [18].
The same state-of-the-art approaches as in the previous section are evaluated.
In Table 3, the overall performances are reported for each tested model, while Table A2 (see Appendix A) reports detailed results on the individual classes of the test scene. Original DGCNN is trained again using its standard hyperparameters. For the other DGCNN-based approaches we achieved the best results using:
  • Focal Loss for DGCNN-Mod;
  • Scaler1 pre-processing for DGCNN-3Dfeat;
  • Focal Loss and Scaler2 pre-processing for DGCNN-Mod+3Dfeat.
Table 3 shows that DGCNN-Mod+3Dfeat is the best approach in terms of OA, reaching 0.8452 on the test scene, followed by the RF with 0.8369. However, studying the results of the individual classes through Table A2, we can see that with the DL approach two classes have not been well recognised (i.e., Arch and Column). The second best approach, on the contrary, obtains better results on these classes, while maintaining a high average accuracy. Figure 6 depicts the manually annotated test scene (ground truth) and the automatic segmentation results obtained with the best approaches. It is possible to notice that most of the classes have been well recognised, except for the Arch class in the DGCNN-based approaches and the Door-Window class for the RF.

4.3. Third Experiment—Segmentation of an Unseen Scene, the Sacro Monte Ghiffa (SMG)

As in the previous experiments, for the ML approaches ad hoc annotations have been distributed along the point cloud (Figure 7), consisting of 3,545,900 points over a total of 17,798,049 points (approx. 20%).
In Table 4, the overall performances are reported for each tested model, while Table A3 (see Appendix A) reports detailed results on the individual classes of the test scene. The best results have been achieved with RF, immediately followed by the DGCNN-Mod+3Dfeat network. However, in this case, given the higher symmetry of the point cloud compared to the SMV scene, the increase in OA when using the 3D features is lower, but still significant. The results are consistent with the previous test and the most problematic class is again Door-Window, probably due to the dataset imbalance.
Finally, Figure 8 depicts the manually annotated test scene (ground truth) and the automatic segmentation results obtained with the best approaches.

4.4. Results Analysis

The recap of the best OA achieved (Figure 9) highlights that the Random Forest method performs slightly better in the two almost symmetrical scenes of Ghiffa and the Trompone church. In these cases, with manual annotation, it is possible to select a number of adequately representative examples of the test scene, ensuring an accurate result. The DL solutions, on the other hand, seem to work better in the non-symmetrical scene, thus showing a good generalisation ability. More generally, the results of DL are satisfactory, as they achieve OA values similar to those of RF, even though the training set is rather limited compared to others present in the state of the art.
Figure 10 shows the F1-Score, a combination of precision and recall, for the single classes. In this case, the ML approaches outperform DL for some classes such as Arch, Column, Molding and Floor, while DL gives better results in the segmentation of Door-Window and Roof. The remaining classes (Vault, Wall and Stair) are equally balanced between the two techniques, with vaults and walls leaning towards the RF and stairs towards the DGCNN-Mod+3Dfeat.

5. Discussions

Answering the first research question (RQ1), it can be said that nowadays it is possible to provide best practices for the semantic segmentation of point clouds in the CH domain. In fact, the tests conducted and the results described above show that the introduction of 3D features has led to an increase in OA compared to the simple use of the radiometric components and normals. This increase is about 10% in the tests on the symmetric scene (Trompone church), while it is lower (approximately 2%) in the tests run with different scenes for training and SMV or SMG as tests. In the latter case, however, the introduction of the 3D features, together with the normals and the RGB values, has improved the recognition of the classes with fewer points, which previously obtained lower metrics (for example Column, Door-Window and Stair). As can be noticed in Table A1, Table A2 and Table A3, for all the approaches the worst recognised classes are Arch, Door-Window and, alternatively, Molding or Stair. This result is likely due to the fact that these are the classes with the lowest number of points within the scenes.
A similar conclusion can be drawn for the introduction of the Focal Loss, which, with the same hyperparameter configuration, has led to an increase in performance for the Molding, Door-Window and Stair classes.
With regard to RQ2, the experimental results show that RF outperformed the other ML classifiers. At the same time, the best DL results have been achieved with the combination of all the selected features, without any increase in computational time. Previous tests, not presented here, highlighted that what actually affects this latter aspect is the block size and the number of subsampled points.
Concerning RQ3, as described in the results section, the authors believe that there is still no winning solution between the ML and DL approaches. The OA of the best ML method and that of the best DL method differ only slightly. However, contrasting results emerge if the classes are analysed individually, and the approach could be chosen according to the needs. Both techniques have strengths and weaknesses. In the case of ML, the training set can be customised according to the scene to be predicted, which is very useful in the CH domain, while DL offers the possibility of avoiding the manual annotation, further automating the process. Another element to take into consideration when comparing machine and deep learning approaches is the processing time. While the ML pipeline is well defined, within the DL framework it is necessary to distinguish between two possible scenarios which differ considerably in time. In the first scenario, when an annotated training set is not available, it is necessary to manually label as many scenes as possible (a very time-consuming task), pre-process the data (e.g., subsampling, normals computation, centering on the origin, block creation, etc.) and then wait for the training phase, which lasts from a few hours to a few days. In the second scenario, it is possible to start from the saved weights of a network pre-trained on a released benchmark (ArCH in this case) and proceed directly to the preparation and testing of the new scene, without any manual annotation. So, depending on whether one compares the RF with the first or the second scenario, the balance can tip in favour of one or the other technique. In Figure 11, a comparison of the times required for the tests carried out in this paper is shown. It must be considered that the ML tests were run on an Nvidia GTX 1050 TI 8 GB, 32 GB RAM, Intel(R) Xeon(R) CPU E5-1650 0 @ 3.20 GHz, while for the DL tests an Nvidia RTX 2080 TI 11 GB, 128 GB RAM, Intel(R) Xeon(R) Silver 4214 CPU @ 2.20 GHz was used.
Finally, regarding RQ4, it is fair to state that the main drawback in the comparison between different algorithms is the limited similarity of their pipelines. In fact, a proper comparison between algorithms would require the same input and/or output. As regards the input, considering the different nature of the algorithms, this would mean either giving the ML classifiers a huge amount of annotated data, which would compromise their performances, or, vice versa, training the neural network with few data compared to what it requires. For this reason, in order to analyse the best classification approaches for heritage scenarios, we preferred to use different training scenes for the ML and DL inputs. Concerning the output, for the DL approach an interpolation with the initial scene should be conducted for a comparison with the same number of points, leading to a likely decrease in OA. However, as the subsampling operation is mainly due to computational reasons, easily solved in the near future by increasingly powerful machines, the usefulness of the interpolation would certainly be reduced and could even become pointless. Moreover, using different interpolation algorithms would introduce a further source of error, making the pipeline less objective and reproducible.

6. Conclusions and Future Works

This study explored semantic segmentation of complex 3D point clouds in the CH domain. To do so, ML techniques and DL techniques have been compared exploiting a novel and previously unexplored benchmark dataset.
Both the ML and DL algorithms proved to be valuable, having great potential for classifying datasets collected with different geomatics techniques (e.g., LiDAR and photogrammetric data). When comparing the performances of the two approaches, it appears that there is no winning solution: the classifiers had similar overall performances, and none of them outperformed the others. Even considering the single classes studied in the experiments, it emerges that the different approaches are alternatively better depending on the class analysed, but none of the methods attained results able to outperform the others over all the classes.
In general terms, the training time of classical ML techniques can be up to one order of magnitude smaller; conversely, a small but noteworthy improvement in performance could be observed for DL techniques over classical ML techniques, considering the whole benchmark dataset (Table A4). In ML, hyperparameter optimisation, or tuning, is the problem of choosing a set of optimal hyperparameters for a learning algorithm, whose values control the learning process. DL techniques, instead, have the advantage of allowing more experimentation with the model setup. Using DL techniques on a dataset of this size and for this type of problem therefore shows promise, especially in performance-critical applications. On the other hand, the DL model is largely influenced by the tuning of its structural parameters, both in computational cost and in operational time. However, given that state-of-the-science large-scale inventories are moving towards deep learning-based classifications, we can expect that in the near future the growing availability of training datasets will overcome such limitations. Feature engineering and feature extraction are key, and time-consuming, parts of the ML workflow, since these phases transform the training data and augment it with additional features in order to make ML algorithms more effective. DL has been changing this process, and deep neural networks have been explored as black-box modelling strategies.
The final legacy of this work, which was aimed at opening a positive debate among the different domain experts involved, is Table 5, where the pros and cons of both ML and DL methods are summarised.

Author Contributions

Conceptualization, Francesca Matrone, Roberto Pierdicca, and Marina Paolanti; methodology, Francesca Matrone, Eleonora Grilli and Massimo Martini; software, Eleonora Grilli and Massimo Martini; validation, Francesca Matrone, Roberto Pierdicca and Marina Paolanti; formal analysis, Francesca Matrone, Eleonora Grilli and Massimo Martini; investigation, Francesca Matrone, Eleonora Grilli and Massimo Martini; data curation, Francesca Matrone and Eleonora Grilli; writing—original draft preparation, Francesca Matrone and Eleonora Grilli; writing—review and editing, Marina Paolanti, Roberto Pierdicca and Fabio Remondino; supervision, Roberto Pierdicca and Fabio Remondino. All authors have read and agreed to the published version of the manuscript.

Funding

This research partially received external funding from the project “Artificial Intelligence for Cultural Heritage” (AI4CH) joint Italy-Israel lab which was funded by the Italian Ministry of Foreign Affairs and International Cooperation (MAECI).

Acknowledgments

The authors would like to thank Justin Solomon and the Geometric Data Processing group of the Massachusetts Institute of Technology (MIT) for the support in conducting most of the tests presented in the DL part.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In this section, the detailed results of the tests performed on the Trompone, SMV and SMG scenes, divided per class, are included. In addition, the results of the DGCNN-based methods trained on the whole ArCH dataset have also been included. In this latter case, the best hyperparameter configuration from the previous DNN training has been chosen. The selected metrics are Precision, Recall, F1-Score and Intersection over Union (IoU) of each class on the test scene.
Table A1. The Trompone scene has been divided into 3 parts: training, validation and test. In this table we can see the metrics for every class, calculated on the test set.
Model | Metrics | Arch | Col | Mold | Floor | Do-Wi | Wall | Stair | Vault | Furnit
kNN | Precision | 0.6813 | 0.6225 | 0.5474 | 0.941 | 0.2874 | 0.7116 | 0.5263 | 0.8255 | 0.7406
kNN | Recall | 0.4734 | 0.6975 | 0.5041 | 0.9698 | 0.197 | 0.7339 | 0.0349 | 0.9168 | 0.8026
kNN | F1-Score | 0.5587 | 0.6579 | 0.5248 | 0.9552 | 0.2338 | 0.7226 | 0.0654 | 0.8688 | 0.7704
kNN | IoU | 0.3876 | 0.4902 | 0.3558 | 0.9142 | 0.1324 | 0.5657 | 0.0338 | 0.768 | 0.6265
NB | Precision | 0.5217 | 0.5914 | 0.416 | 0.8559 | 0.0812 | 0.5836 | 0.6672 | 0.7853 | 0.6891
NB | Recall | 0.3384 | 0.8159 | 0.1898 | 0.9263 | 0.013 | 0.8625 | 0.0963 | 0.7993 | 0.6288
NB | F1-Score | 0.4105 | 0.6857 | 0.2607 | 0.8897 | 0.0224 | 0.6961 | 0.1683 | 0.7922 | 0.6575
NB | IoU | 0.2582 | 0.5218 | 0.1499 | 0.8013 | 0.0113 | 0.5339 | 0.0919 | 0.6559 | 0.4898
DT | Precision | 0.8476 | 0.8924 | 0.7513 | 0.9661 | 0.3544 | 0.8021 | 0.4796 | 0.9113 | 0.7894
DT | Recall | 0.6983 | 0.8696 | 0.7317 | 0.9731 | 0.2843 | 0.8099 | 0.1598 | 0.9422 | 0.8767
DT | F1-Score | 0.7657 | 0.8809 | 0.7414 | 0.9696 | 0.3155 | 0.806 | 0.2397 | 0.9265 | 0.8307
DT | IoU | 0.6204 | 0.7871 | 0.589 | 0.941 | 0.1873 | 0.6751 | 0.1362 | 0.8631 | 0.7105
RF | Precision | 0.9207 | 0.9618 | 0.8562 | 0.9723 | 0.6054 | 0.8332 | 0.9661 | 0.9346 | 0.8259
RF | Recall | 0.7694 | 0.8938 | 0.8066 | 0.9860 | 0.2707 | 0.8776 | 0.1519 | 0.9565 | 0.9321
RF | F1-Score | 0.8383 | 0.9265 | 0.8307 | 0.9791 | 0.3741 | 0.8548 | 0.2626 | 0.9454 | 0.8758
RF | IoU | 0.7216 | 0.8631 | 0.7103 | 0.9590 | 0.2301 | 0.7463 | 0.1511 | 0.8964 | 0.7790
DGCNN | Precision | 0.4295 | 0.5789 | 0.5341 | 0.9604 | 0.4120 | 0.6606 | 0.4627 | 0.9121 | 0.5011
DGCNN | Recall | 0.4793 | 0.6174 | 0.3877 | 0.9743 | 0.1635 | 0.3767 | 0.0483 | 0.7832 | 0.9452
DGCNN | F1-Score | 0.4530 | 0.5975 | 0.4493 | 0.9673 | 0.2341 | 0.4798 | 0.0874 | 0.8428 | 0.6550
DGCNN | IoU | 0.2928 | 0.4260 | 0.2897 | 0.9366 | 0.1325 | 0.3156 | 0.0457 | 0.7282 | 0.4869
DGCNN-Mod | Precision | 0.4448 | 0.1633 | 0.6177 | 0.9662 | 0.4082 | 0.6483 | 0.7121 | 0.8043 | 0.6462
DGCNN-Mod | Recall | 0.5763 | 0.6328 | 0.2484 | 0.9837 | 0.0771 | 0.1860 | 0.1730 | 0.9199 | 0.9602
DGCNN-Mod | F1-Score | 0.5021 | 0.2596 | 0.3543 | 0.9749 | 0.1297 | 0.2891 | 0.2784 | 0.8582 | 0.7725
DGCNN-Mod | IoU | 0.3352 | 0.1491 | 0.2153 | 0.9509 | 0.0693 | 0.1689 | 0.1616 | 0.7516 | 0.6293
DGCNN-3Dfeat | Precision | 0.7380 | 0.9154 | 0.7269 | 0.9847 | 0.4078 | 0.7413 | 0.9660 | 0.9544 | 0.8657
DGCNN-3Dfeat | Recall | 0.7493 | 0.8757 | 0.6207 | 0.9845 | 0.1531 | 0.8620 | 0.3320 | 0.9207 | 0.9251
DGCNN-3Dfeat | F1-Score | 0.7436 | 0.8951 | 0.6696 | 0.9846 | 0.2226 | 0.7971 | 0.4941 | 0.9373 | 0.8944
DGCNN-3Dfeat | IoU | 0.5918 | 0.8101 | 0.5033 | 0.9696 | 0.1252 | 0.6627 | 0.3281 | 0.8819 | 0.8090
DGCNN-Mod+3Dfeat | Precision | 0.5767 | 0.6834 | 0.7042 | 0.9782 | 0.4990 | 0.7492 | 0.9764 | 0.9044 | 0.7791
DGCNN-Mod+3Dfeat | Recall | 0.6257 | 0.9305 | 0.4455 | 0.9870 | 0.1479 | 0.7811 | 0.2844 | 0.8867 | 0.9254
DGCNN-Mod+3Dfeat | F1-Score | 0.6002 | 0.7881 | 0.5458 | 0.9826 | 0.2282 | 0.7648 | 0.4405 | 0.8954 | 0.8460
DGCNN-Mod+3Dfeat | IoU | 0.4287 | 0.6502 | 0.3752 | 0.9657 | 0.1287 | 0.6191 | 0.2824 | 0.8106 | 0.7330
Table A2. Tests performed on the SMV scene. For the DL approach: 10 scenes as training, 1 for validation (5_SMV_chapel_1) and 1 for test.
Model | Metrics | Arch | Col | Mold | Floor | Do-Wi | Wall | Stair | Vault | Roof
kNN | Precision | 0.3113 | 0.8476 | 0.3978 | 0.9522 | 0.0986 | 0.9701 | 0.7496 | 0.8645 | 0.8063
kNN | Recall | 0.5513 | 0.9458 | 0.6424 | 0.9147 | 0.4504 | 0.7632 | 0.8545 | 0.87 | 0.9402
kNN | F1-Score | 0.3979 | 0.894 | 0.4913 | 0.9331 | 0.1618 | 0.8543 | 0.7986 | 0.8673 | 0.8682
kNN | IoU | 0.2484 | 0.8084 | 0.3257 | 0.8746 | 0.088 | 0.7456 | 0.6647 | 0.7656 | 0.767
NB | Precision | 0.0923 | 0.4263 | 0.2584 | 0.7577 | 0.0063 | 0.9515 | 0.7387 | 0.7486 | 0.8492
NB | Recall | 0.233 | 0.8622 | 0.3506 | 0.7923 | 0.0121 | 0.7954 | 0.6896 | 0.744 | 0.764
NB | F1-Score | 0.1322 | 0.5706 | 0.2975 | 0.7746 | 0.0083 | 0.8665 | 0.7133 | 0.7462 | 0.8043
NB | IoU | 0.0708 | 0.3991 | 0.1748 | 0.6321 | 0.0042 | 0.7644 | 0.5544 | 0.5952 | 0.6727
DT | Precision | 0.2618 | 0.8864 | 0.4637 | 0.9141 | 0.1251 | 0.9744 | 0.7784 | 0.8411 | 0.7528
DT | Recall | 0.5676 | 0.9184 | 0.6194 | 0.857 | 0.5875 | 0.7654 | 0.8355 | 0.8549 | 0.9557
DT | F1-Score | 0.3584 | 0.9021 | 0.5303 | 0.8846 | 0.2063 | 0.8574 | 0.8059 | 0.8479 | 0.8422
DT | IoU | 0.2183 | 0.8217 | 0.3609 | 0.7932 | 0.115 | 0.7503 | 0.675 | 0.736 | 0.7275
RF | Precision | 0.3586 | 0.8906 | 0.4738 | 0.9650 | 0.2058 | 0.9774 | 0.7873 | 0.8955 | 0.7795
RF | Recall | 0.6262 | 0.9352 | 0.6557 | 0.9162 | 0.7115 | 0.7897 | 0.8605 | 0.9101 | 0.9747
RF | F1-Score | 0.4560 | 0.9124 | 0.5501 | 0.9399 | 0.3193 | 0.8736 | 0.8223 | 0.9027 | 0.8662
RF | IoU | 0.2953 | 0.8388 | 0.3794 | 0.8867 | 0.1899 | 0.7755 | 0.6982 | 0.8227 | 0.7640
DGCNN | Precision | 0.1406 | 0.0134 | 0.1270 | 0.6641 | 0.2319 | 0.7496 | 0.6302 | 0.5267 | 0.8445
DGCNN | Recall | 0.0877 | 0.0004 | 0.4843 | 0.8050 | 0.2501 | 0.8797 | 0.0983 | 0.8757 | 0.3735
DGCNN | F1-Score | 0.1081 | 0.0007 | 0.2013 | 0.7278 | 0.2406 | 0.8095 | 0.1700 | 0.6578 | 0.5179
DGCNN | IoU | 0.0571 | 0.0003 | 0.1119 | 0.5721 | 0.1367 | 0.6799 | 0.0929 | 0.4900 | 0.3494
DGCNN-Mod | Precision | 0.1145 | 0.7903 | 0.4249 | 0.7775 | 0.4171 | 0.7946 | 0.8271 | 0.8282 | 0.9420
DGCNN-Mod | Recall | 0.0543 | 0.0630 | 0.4138 | 0.8571 | 0.2376 | 0.9203 | 0.5938 | 0.8904 | 0.9238
DGCNN-Mod | F1-Score | 0.0737 | 0.1167 | 0.4193 | 0.8154 | 0.3028 | 0.8528 | 0.6913 | 0.8582 | 0.9328
DGCNN-Mod | IoU | 0.0382 | 0.0619 | 0.2652 | 0.6883 | 0.1784 | 0.7434 | 0.5281 | 0.7516 | 0.8740
DGCNN-3Dfeat | Precision | 0.2581 | 0.8243 | 0.3491 | 0.8052 | 0.1767 | 0.7761 | 0.8837 | 0.5968 | 0.9148
DGCNN-3Dfeat | Recall | 0.1054 | 0.1029 | 0.1473 | 0.7578 | 0.0533 | 0.9074 | 0.7553 | 0.8719 | 0.8735
DGCNN-3Dfeat | F1-Score | 0.1496 | 0.1830 | 0.2072 | 0.7808 | 0.0819 | 0.8367 | 0.8145 | 0.7086 | 0.8937
DGCNN-3Dfeat | IoU | 0.0808 | 0.1007 | 0.1155 | 0.6404 | 0.0427 | 0.7192 | 0.6870 | 0.5487 | 0.8078
DGCNN-Mod+3Dfeat | Precision | 0.1345 | 0.7007 | 0.4678 | 0.8302 | 0.4664 | 0.7950 | 0.8836 | 0.8528 | 0.9271
DGCNN-Mod+3Dfeat | Recall | 0.0608 | 0.4260 | 0.3021 | 0.8311 | 0.3372 | 0.8874 | 0.7928 | 0.8578 | 0.9670
DGCNN-Mod+3Dfeat | F1-Score | 0.0838 | 0.5299 | 0.3671 | 0.8307 | 0.3914 | 0.8386 | 0.8357 | 0.8553 | 0.9466
DGCNN-Mod+3Dfeat | IoU | 0.0437 | 0.3604 | 0.2248 | 0.7104 | 0.2433 | 0.7221 | 0.7177 | 0.7471 | 0.8986
Table A3. Tests performed on the SMG scene. For the DL approach: 10 scenes as training, 1 for validation (5_SMV_chapel_1) and 1 for test.
Model | Metrics | Arch | Col | Mold | Floor | Do-Wi | Wall | Stair | Vault | Roof
kNN | Precision | 0.0797 | 0.1083 | 0.2245 | 0.741 | 0.1122 | 0.6433 | 0.1048 | 0.6796 | 0.8658
kNN | Recall | 0.1515 | 0.2466 | 0.3676 | 0.5441 | 0.0754 | 0.6135 | 0.0522 | 0.7501 | 0.7345
kNN | F1-Score | 0.1044 | 0.1505 | 0.2788 | 0.6275 | 0.0902 | 0.6281 | 0.0696 | 0.7131 | 0.7948
kNN | IoU | 0.0551 | 0.0814 | 0.162 | 0.4572 | 0.0472 | 0.4578 | 0.0361 | 0.5541 | 0.6594
NB | Precision | 0.2961 | 0.6661 | 0.389 | 0.9708 | 0.0518 | 0.8684 | 0.2194 | 0.5621 | 0.9177
NB | Recall | 0.3855 | 0.9163 | 0.3498 | 0.9177 | 0.3581 | 0.7205 | 0.6014 | 0.8871 | 0.6382
NB | F1-Score | 0.335 | 0.7714 | 0.3684 | 0.9435 | 0.0905 | 0.7876 | 0.3215 | 0.6882 | 0.7528
NB | IoU | 0.2012 | 0.6279 | 0.2258 | 0.893 | 0.0474 | 0.6496 | 0.1916 | 0.5246 | 0.6036
DT | Precision | 0.3748 | 0.6723 | 0.2467 | 0.9293 | 0.1100 | 0.7348 | 0.3416 | 0.7929 | 0.9681
DT | Recall | 0.0766 | 0.1736 | 0.1379 | 0.8466 | 0.3084 | 0.8823 | 0.3442 | 0.8737 | 0.9439
DT | F1-Score | 0.1272 | 0.2759 | 0.1769 | 0.8860 | 0.1621 | 0.8018 | 0.3429 | 0.8314 | 0.9559
DT | IoU | 0.0679 | 0.1601 | 0.0970 | 0.7953 | 0.0882 | 0.6692 | 0.2069 | 0.7114 | 0.9154
RF | Precision | 0.6911 | 0.9480 | 0.7750 | 0.9670 | 0.3842 | 0.9210 | 0.7320 | 0.9281 | 0.9834
RF | Recall | 0.8507 | 0.9857 | 0.7304 | 0.9525 | 0.1072 | 0.9424 | 0.7415 | 0.9684 | 0.9605
RF | F1-Score | 0.7620 | 0.9665 | 0.7520 | 0.9597 | 0.1677 | 0.9316 | 0.7367 | 0.9478 | 0.9718
RF | IoU | 0.6155 | 0.9351 | 0.6026 | 0.9225 | 0.0915 | 0.8718 | 0.5831 | 0.9008 | 0.9451
DGCNN | Precision | 0.3748 | 0.6723 | 0.2467 | 0.9293 | 0.1100 | 0.7348 | 0.3416 | 0.7929 | 0.9681
DGCNN | Recall | 0.0766 | 0.1736 | 0.1379 | 0.8466 | 0.3084 | 0.8823 | 0.3442 | 0.8737 | 0.9439
DGCNN | F1-Score | 0.1272 | 0.2759 | 0.1769 | 0.8860 | 0.1621 | 0.8018 | 0.3429 | 0.8314 | 0.9559
DGCNN | IoU | 0.0679 | 0.1601 | 0.0970 | 0.7953 | 0.0882 | 0.6692 | 0.2069 | 0.7114 | 0.9154
DGCNN-Mod | Precision | 0.4581 | 0.7928 | 0.5973 | 0.9196 | 0.1080 | 0.7740 | 0.4392 | 0.8895 | 0.9799
DGCNN-Mod | Recall | 0.1685 | 0.5478 | 0.2241 | 0.8662 | 0.0708 | 0.9417 | 0.4487 | 0.9066 | 0.9851
DGCNN-Mod | F1-Score | 0.2464 | 0.6479 | 0.3260 | 0.8921 | 0.0856 | 0.8497 | 0.4439 | 0.8980 | 0.9825
DGCNN-Mod | IoU | 0.1404 | 0.4791 | 0.1947 | 0.8051 | 0.0446 | 0.7386 | 0.2852 | 0.8148 | 0.9655
DGCNN-3Dfeat | Precision | 0.4986 | 0.8980 | 0.6102 | 0.9425 | 0.1004 | 0.8444 | 0.4884 | 0.6890 | 0.9717
DGCNN-3Dfeat | Recall | 0.2006 | 0.8216 | 0.4907 | 0.8772 | 0.2753 | 0.8326 | 0.6128 | 0.9813 | 0.9314
DGCNN-3Dfeat | F1-Score | 0.2860 | 0.8581 | 0.5440 | 0.9087 | 0.1471 | 0.8384 | 0.5435 | 0.8096 | 0.9511
DGCNN-3Dfeat | IoU | 0.1668 | 0.7514 | 0.3736 | 0.8326 | 0.0794 | 0.7217 | 0.3731 | 0.6800 | 0.9068
DGCNN-Mod+3Dfeat | Precision | 0.6479 | 0.7626 | 0.6659 | 0.9669 | 0.2183 | 0.8377 | 0.4799 | 0.8870 | 0.9839
DGCNN-Mod+3Dfeat | Recall | 0.1840 | 0.9255 | 0.4974 | 0.8937 | 0.3681 | 0.8910 | 0.6317 | 0.9794 | 0.9831
DGCNN-Mod+3Dfeat | F1-Score | 0.2866 | 0.8362 | 0.5695 | 0.9289 | 0.2741 | 0.8635 | 0.5455 | 0.9309 | 0.9835
DGCNN-Mod+3Dfeat | IoU | 0.1672 | 0.7184 | 0.3980 | 0.8672 | 0.1588 | 0.7598 | 0.3750 | 0.8706 | 0.9675
Table A4. Tests performed on the A_SMV scene, with the whole ArCH dataset as training: fourteen scenes as training, 1 for validation (5_SMV_chapel_1) and 1 for test.
Network | Metrics | Mean | Arch | Col | Mold | Floor | Do-Wi | Wall | Stair | Vault | Roof
DGCNN | Overall Accuracy | 0.7516
DGCNN | Precision | 0.7706 | 0.0945 | 0.1783 | 0.2505 | 0.6248 | 0.2625 | 0.7544 | 0.7396 | 0.7058 | 0.9648
DGCNN | Recall | 0.7516 | 0.0517 | 0.0892 | 0.3849 | 0.7819 | 0.1619 | 0.9004 | 0.1039 | 0.8973 | 0.8359
DGCNN | F1-Score | 0.7398 | 0.0669 | 0.1189 | 0.3035 | 0.6946 | 0.2003 | 0.8210 | 0.1823 | 0.7901 | 0.8958
DGCNN | IoU | 0.3534 | 0.0345 | 0.0631 | 0.1788 | 0.5320 | 0.1112 | 0.6963 | 0.1002 | 0.6530 | 0.8111
DGCNN-Mod | Overall Accuracy | 0.8368
DGCNN-Mod | Precision | 0.8285 | 0.2891 | 0.7626 | 0.4143 | 0.8251 | 0.7785 | 0.7870 | 0.8007 | 0.7922 | 0.9496
DGCNN-Mod | Recall | 0.8369 | 0.0804 | 0.1916 | 0.3598 | 0.8739 | 0.1778 | 0.9266 | 0.4979 | 0.9511 | 0.9361
DGCNN-Mod | F1-Score | 0.8223 | 0.1258 | 0.3062 | 0.3851 | 0.8488 | 0.2894 | 0.8511 | 0.6164 | 0.8644 | 0.9428
DGCNN-Mod | IoU | 0.4699 | 0.0671 | 0.1807 | 0.2384 | 0.7372 | 0.1692 | 0.7407 | 0.4429 | 0.7612 | 0.8918
DGCNN-3Dfeat | Overall Accuracy | 0.8282
DGCNN-3Dfeat | Precision | 0.8253 | 0.3499 | 0.6953 | 0.4139 | 0.7220 | 0.3576 | 0.8428 | 0.9290 | 0.6936 | 0.9572
DGCNN-3Dfeat | Recall | 0.8283 | 0.2824 | 0.7732 | 0.3242 | 0.7103 | 0.1375 | 0.8829 | 0.7095 | 0.9170 | 0.9306
DGCNN-3Dfeat | F1-Score | 0.8226 | 0.3126 | 0.7322 | 0.3636 | 0.7161 | 0.1987 | 0.8624 | 0.8045 | 0.7898 | 0.9437
DGCNN-3Dfeat | IoU | 0.5144 | 0.1852 | 0.5775 | 0.2221 | 0.5578 | 0.1102 | 0.7580 | 0.6729 | 0.6525 | 0.8934
DGCNN-Mod+3Dfeat | Overall Accuracy | 0.8645
DGCNN-Mod+3Dfeat | Precision | 0.8532 | 0.2619 | 0.6940 | 0.5217 | 0.7927 | 0.5660 | 0.8447 | 0.8563 | 0.8295 | 0.9611
DGCNN-Mod+3Dfeat | Recall | 0.8646 | 0.0631 | 0.6780 | 0.4418 | 0.8921 | 0.2615 | 0.8999 | 0.7837 | 0.9474 | 0.9464
DGCNN-Mod+3Dfeat | F1-Score | 0.8557 | 0.1017 | 0.6859 | 0.4784 | 0.8394 | 0.3578 | 0.8714 | 0.8184 | 0.8845 | 0.9537
DGCNN-Mod+3Dfeat | IoU | 0.5555 | 0.0535 | 0.5219 | 0.3144 | 0.7233 | 0.2178 | 0.7721 | 0.6926 | 0.7929 | 0.9114
Table A5. Tests performed on the B_SMG scene, with the whole ArCH dataset as training: fourteen scenes as training, 1 for validation (5_SMV_chapel_1) and 1 for test.
Network | Metrics | Mean | Arch | Col | Mold | Floor | Do-Wi | Wall | Stair | Vault | Roof
DGCNN | Overall Accuracy | 0.7836
DGCNN | Precision | 0.8221 | 0.0008 | 0.8858 | 0.1731 | 0.8827 | 0.1862 | 0.7292 | 0.3888 | 0.6246 | 0.9592
DGCNN | Recall | 0.7837 | 0.0021 | 0.2405 | 0.2260 | 0.6684 | 0.1362 | 0.9125 | 0.5093 | 0.8327 | 0.8560
DGCNN | F1-Score | 0.7939 | 0.0012 | 0.3783 | 0.1961 | 0.7608 | 0.1573 | 0.8106 | 0.4410 | 0.7138 | 0.9046
DGCNN | IoU | 0.3763 | 0.0006 | 0.2332 | 0.1086 | 0.6139 | 0.0853 | 0.6815 | 0.2828 | 0.5549 | 0.8259
DGCNN-Mod | Overall Accuracy | 0.8958
DGCNN-Mod | Precision | 0.8926 | 0.4766 | 0.8115 | 0.4809 | 0.9653 | 0.1336 | 0.8338 | 0.3568 | 0.9046 | 0.9545
DGCNN-Mod | Recall | 0.8958 | 0.2325 | 0.7875 | 0.3175 | 0.8539 | 0.1415 | 0.8992 | 0.4853 | 0.9446 | 0.9876
DGCNN-Mod | F1-Score | 0.8920 | 0.3126 | 0.7993 | 0.3825 | 0.9062 | 0.1374 | 0.8653 | 0.4112 | 0.9242 | 0.9708
DGCNN-Mod | IoU | 0.5348 | 0.1852 | 0.6657 | 0.2364 | 0.8284 | 0.0737 | 0.7625 | 0.2588 | 0.8590 | 0.9432
DGCNN-3Dfeat | Overall Accuracy | 0.8318
DGCNN-3Dfeat | Precision | 0.8158 | 0.3956 | 0.7101 | 0.3715 | 0.8150 | 0.3312 | 0.8125 | 0.8818 | 0.7074 | 0.9409
DGCNN-3Dfeat | Recall | 0.8319 | 0.1195 | 0.6900 | 0.1893 | 0.7180 | 0.2046 | 0.8705 | 0.8094 | 0.9361 | 0.9495
DGCNN-3Dfeat | F1-Score | 0.8181 | 0.1836 | 0.6999 | 0.2508 | 0.7634 | 0.2529 | 0.8405 | 0.8440 | 0.8058 | 0.9452
DGCNN-3Dfeat | IoU | 0.5078 | 0.1010 | 0.5383 | 0.1433 | 0.6173 | 0.1447 | 0.7249 | 0.7301 | 0.6747 | 0.8960
DGCNN-Mod+3Dfeat | Overall Accuracy | 0.9144
DGCNN-Mod+3Dfeat | Precision | 0.9173 | 0.5318 | 0.8497 | 0.6502 | 0.9566 | 0.1355 | 0.8797 | 0.4661 | 0.8909 | 0.9753
DGCNN-Mod+3Dfeat | Recall | 0.9145 | 0.2578 | 0.9250 | 0.5959 | 0.9030 | 0.1956 | 0.8551 | 0.7101 | 0.9688 | 0.9880
DGCNN-Mod+3Dfeat | F1-Score | 0.9148 | 0.3472 | 0.8858 | 0.6219 | 0.9290 | 0.1601 | 0.8672 | 0.5628 | 0.9282 | 0.9816
DGCNN-Mod+3Dfeat | IoU | 0.5997 | 0.2100 | 0.7949 | 0.4512 | 0.8673 | 0.0870 | 0.7655 | 0.3915 | 0.8660 | 0.9630

References

  1. Yu, H.; Yang, Z.; Tan, L.; Wang, Y.; Sun, W.; Sun, M.; Tang, Y. Methods and datasets on semantic segmentation: A review. Neurocomputing 2018, 304, 82–103. [Google Scholar] [CrossRef]
  2. Zhang, K.; Hao, M.; Wang, J.; de Silva, C.W.; Fu, C. Linked dynamic graph CNN: Learning on point cloud via linking hierarchical features. arXiv 2019, arXiv:1904.10014. [Google Scholar]
  3. Xie, Y.; Tian, J.; Zhu, X. A Review of Point Cloud Semantic Segmentation. IEEE Geosci. Remote Sens. Mag. (GRSM) 2020. [Google Scholar] [CrossRef] [Green Version]
  4. Llamas, J.; M Lerones, P.; Medina, R.; Zalama, E.; Gómez-García-Bermejo, J. Classification of architectural heritage images using deep learning techniques. Appl. Sci. 2017, 7, 992. [Google Scholar] [CrossRef] [Green Version]
  5. Grilli, E.; Özdemir, E.; Remondino, F. Application of machine and deep learning strategies for the classification of heritage point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-4/W18, 447–454. [Google Scholar] [CrossRef] [Green Version]
  6. Grilli, E.; Remondino, F. Classification of 3D Digital Heritage. Remote Sens. 2019, 11, 847. [Google Scholar] [CrossRef] [Green Version]
  7. Malinverni, E.; Pierdicca, R.; Paolanti, M.; Martini, M.; Morbidoni, C.; Matrone, F.; Lingua, A. Deep learning for semantic segmentation of 3D point cloud. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W15, 735–742. [Google Scholar] [CrossRef] [Green Version]
  8. Pierdicca, R.; Mameli, M.; Malinverni, E.S.; Paolanti, M.; Frontoni, E. Automatic Generation of Point Cloud Synthetic Dataset for Historical Building Representation. In Proceedings of the International Conference on Augmented Reality, Virtual Reality and Computer Graphics, Santa Maria al Bagno, Italy, 24–27 June 2019; pp. 203–219. [Google Scholar]
  9. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  10. Klokov, R.; Lempitsky, V. Escape from cells: Deep kd-networks for the recognition of 3d point cloud models. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 863–872. [Google Scholar]
11. Xie, S.; Liu, S.; Chen, Z.; Tu, Z. Attentional shapecontextnet for point cloud recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4606–4615.
12. Altman, N.S. An introduction to kernel and nearest-neighbor nonparametric regression. Am. Stat. 1992, 46, 175–185.
13. Zhang, H. Exploring conditions for the optimality of naive Bayes. Int. J. Pattern Recognit. Artif. Intell. 2005, 19, 183–198.
14. Breiman, L.; Friedman, J.; Stone, C.J.; Olshen, R.A. Classification and Regression Trees; CRC Press: Boca Raton, FL, USA, 1984.
15. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
16. Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic graph CNN for learning on point clouds. ACM Trans. Graph. (TOG) 2019, 38, 1–12.
17. Matrone, F.; Lingua, A.; Pierdicca, R.; Malinverni, E.S.; Paolanti, M.; Grilli, E.; Remondino, F.; Murtiyoso, A.; Landes, T. A benchmark for large-scale heritage point cloud semantic segmentation. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, XLIII-B2, 1419–1426.
18. Pierdicca, R.; Paolanti, M.; Matrone, F.; Martini, M.; Morbidoni, C.; Malinverni, E.S.; Frontoni, E.; Lingua, A.M. Point Cloud Semantic Segmentation Using a Deep Learning Framework for Cultural Heritage. Remote Sens. 2020, 12, 1005.
19. Oses, N.; Dornaika, F.; Moujahid, A. Image-based delineation and classification of built heritage masonry. Remote Sens. 2014, 6, 1863–1889.
20. Riveiro, B.; Lourenço, P.B.; Oliveira, D.V.; González-Jorge, H.; Arias, P. Automatic morphologic analysis of quasi-periodic masonry walls from LiDAR. Comput. Aided Civ. Infrastruct. Eng. 2016, 31, 305–319.
21. Barsanti, S.G.; Guidi, G.; De Luca, L. Segmentation of 3D models for cultural heritage structural analysis–some critical issues. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 115.
22. Poux, F.; Neuville, R.; Hallot, P.; Billen, R. Point cloud classification of tesserae from terrestrial laser data combined with dense image matching for archaeological information extraction. Int. J. Adv. Life Sci. 2017, 4, 203–211.
23. Grilli, E.; Dininno, D.; Marsicano, L.; Petrucci, G.; Remondino, F. Supervised segmentation of 3D cultural heritage. In Proceedings of the 2018 3rd Digital Heritage International Congress (DigitalHERITAGE), held jointly with the 2018 24th International Conference on Virtual Systems & Multimedia (VSMM 2018), San Francisco, CA, USA, 26–30 October 2018; pp. 1–8.
24. Grilli, E.; Farella, E.; Torresani, A.; Remondino, F. Geometric features analysis for the classification of cultural heritage point clouds. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W15, 541–548.
25. Grilli, E.; Remondino, F. Machine Learning Generalisation across Different 3D Architectural Heritage. ISPRS Int. J. Geo-Inf. 2020, 9, 379.
26. Murtiyoso, A.; Grussenmeyer, P. Virtual Disassembling of Historical Edifices: Experiments and Assessments of an Automatic Approach for Classifying Multi-Scalar Point Clouds into Architectural Elements. Sensors 2020, 20, 2161.
27. Zhang, J.; Zhao, X.; Chen, Z.; Lu, Z. A Review of Deep Learning-based Semantic Segmentation for Point Cloud. IEEE Access 2019, 7, 179118–179133.
28. Griffiths, D.; Boehm, J. SynthCity: A large scale synthetic point cloud. arXiv 2019, arXiv:1907.04758.
29. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660.
30. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; pp. 5099–5108.
31. Atzmon, M.; Maron, H.; Lipman, Y. Point convolutional neural networks by extension operators. arXiv 2018, arXiv:1803.10091.
32. De Deuge, M.; Quadros, A.; Hung, C.; Douillard, B. Unsupervised feature learning for classification of outdoor 3D scans. In Proceedings of the Australasian Conference on Robotics and Automation, Sydney, NSW, Australia, 2–4 December 2013; Volume 2, p. 1.
33. Armeni, I.; Sener, O.; Zamir, A.R.; Jiang, H.; Brilakis, I.; Fischer, M.; Savarese, S. 3D semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1534–1543.
34. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237.
35. Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J.D.; Schindler, K.; Pollefeys, M. Semantic3D.net: A new large-scale point cloud classification benchmark. arXiv 2017, arXiv:1704.03847.
36. Chen, B.; Shi, S.; Gong, W.; Zhang, Q.; Yang, J.; Du, L.; Sun, J.; Zhang, Z.; Song, S. Multispectral LiDAR point cloud classification: A two-step approach. Remote Sens. 2017, 9, 373.
37. Zhang, J.; Lin, X.; Ning, X. SVM-based classification of segmented airborne LiDAR point clouds in urban areas. Remote Sens. 2013, 5, 3749–3775.
38. Laube, P.; Franz, M.O.; Umlauf, G. Evaluation of features for SVM-based classification of geometric primitives in point clouds. In Proceedings of the 2017 Fifteenth IAPR International Conference on Machine Vision Applications (MVA), Nagoya, Japan, 8–12 May 2017; pp. 59–62.
39. Babahajiani, P.; Fan, L.; Gabbouj, M. Object recognition in 3D point cloud of urban street scene. In Proceedings of the Asian Conference on Computer Vision, Singapore, 1–2 November 2014; pp. 177–190.
40. Li, Z.; Zhang, L.; Tong, X.; Du, B.; Wang, Y.; Zhang, L.; Zhang, Z.; Liu, H.; Mei, J.; Xing, X.; et al. A three-step approach for TLS point cloud classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5412–5424.
41. Lodha, S.K.; Fitzpatrick, D.M.; Helmbold, D.P. Aerial LiDAR data classification using AdaBoost. In Proceedings of the IEEE Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007), Montreal, QC, Canada, 21–23 August 2007; pp. 435–442.
42. Liu, Y.; Aleksandrov, M.; Zlatanova, S.; Zhang, J.; Mo, F.; Chen, X. Classification of power facility point clouds from unmanned aerial vehicles based on AdaBoost and topological constraints. Sensors 2019, 19, 4717.
43. Kang, Z.; Yang, J.; Zhong, R. A Bayesian-network-based classification method integrating airborne LiDAR data with optical images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 10, 1651–1661.
44. Thompson, D.R.; Hochberg, E.J.; Asner, G.P.; Green, R.O.; Knapp, D.E.; Gao, B.C.; Garcia, R.; Gierach, M.; Lee, Z.; Maritorena, S.; et al. Airborne mapping of benthic reflectance spectra with Bayesian linear mixtures. Remote Sens. Environ. 2017, 200, 18–30.
45. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31.
46. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
47. John, G.H.; Langley, P. Estimating continuous distributions in Bayesian classifiers. arXiv 2013, arXiv:1302.4964.
48. Chehata, N.; Guo, L.; Mallet, C. Airborne LiDAR feature selection for urban classification using random forests. Laser Scanning 2009, IAPRS 2009, XXXVIII-3/W8, 207–212.
49. Weinmann, M.; Jutzi, B.; Hinz, S.; Mallet, C. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers. ISPRS J. Photogramm. Remote Sens. 2015, 105, 286–304.
50. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
Figure 1. Workflow for the machine learning (ML) and deep learning (DL) framework comparison.
Figure 2. Three-dimensional features used to train the ML and DL classifiers. The colour scale of each plot represents the feature values; the search radii used are reported in brackets.
Figure 3. Modified EdgeConv layer for DGCNN-based approaches.
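As background for Figure 3, the following PyTorch sketch shows a standard EdgeConv block as introduced in DGCNN [16]: each point is connected to its k nearest neighbours in feature space, edge features [x_i, x_j − x_i] are processed by a shared MLP, and a max over the neighbourhood gives the output feature. It is a simplified illustration of the baseline operator, not the modified layer of Figure 3; the number of neighbours k and channel sizes are placeholder values.

```python
# Minimal PyTorch sketch of a standard EdgeConv block (Wang et al. [16]).
import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    def __init__(self, in_dim, out_dim, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Conv2d(2 * in_dim, out_dim, 1),
                                 nn.BatchNorm2d(out_dim), nn.LeakyReLU(0.2))

    def forward(self, x):                                          # x: (B, C, N)
        B, C, N = x.shape
        # pairwise distances in feature space -> k nearest neighbours
        # (the point itself is kept among its neighbours in this simplified version)
        dist = torch.cdist(x.transpose(1, 2), x.transpose(1, 2))   # (B, N, N)
        idx = dist.topk(self.k, largest=False).indices             # (B, N, k)
        neigh = torch.gather(
            x.unsqueeze(2).expand(B, C, N, N), 3,
            idx.unsqueeze(1).expand(B, C, N, self.k))              # (B, C, N, k)
        centre = x.unsqueeze(3).expand_as(neigh)
        edge = torch.cat([centre, neigh - centre], dim=1)          # (B, 2C, N, k)
        return self.mlp(edge).max(dim=3).values                    # (B, out_dim, N)
```

For example, `EdgeConv(6, 64)` would map a batch of point clouds with six input channels (e.g., XYZ plus normals) to 64-dimensional per-point features; stacking several such blocks with dynamically recomputed neighbourhoods yields the DGCNN backbone on which the DGCNN-Mod and 3Dfeat variants build.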
Figure 4. Ground Truth and predicted point clouds, obtained with the best-performing approaches, on the test side of the Trompone church.
Figure 5. Manual annotations used to train the ML algorithms for the Sacro Monte Varallo (SMV) scene.
Figure 6. Section of the Ground Truth (a) and the best predictions (b–d) of the SMV scene. Please note that the point clouds deriving from the DL approach are subsampled.
Figure 7. Manual annotations used to train the ML algorithms for the Sacred Mount of Ghiffa (SMG) scene.
Figure 8. Ground Truth (a) and the best predictions (b–d) of the SMG scene. Please note that the point clouds deriving from the DL approach are subsampled.
Figure 9. Overall Accuracy of all tests carried out.
Figure 10. F1-Score of the different classes for the SMV scene with the different approaches.
Figure 11. Normalised comparison of the times required for the different test scenarios. NN (t0) represents the first scenario, in which the whole dataset has been manually labelled and the DGCNN-based methods have been trained on all the scenes. NN (t1) represents the subsequent scenario, in which the weights of the pre-trained neural network can be reused, so that only the data preparation (feature extraction, scaling, block creation, subsampling, ...) and the final prediction on the test set are required.
Table 1. Experiments performed with the corresponding test and training sets.

Experiment | Test Set | Training Set (ML) | Training Set (DL) | Training Set (DL, whole ArCH dataset)
1 (overall results in Table 2 and Figure 4; detailed results in Table A1) | Trompone Church (symmetrical half part) | Remaining half part | Remaining half part (Training and Validation) | /
2 (overall results in Table 3 and Figure 6; detailed results in Table A2) | SMV scene (Sacred Mount of Varallo) | 16% of the test scene | 10 scenes for Training and 1 for Validation | 14 scenes for Training and 1 for Validation (results in Table A4)
3 (overall results in Table 4 and Figure 8; detailed results in Table A3) | SMG scene (Sacred Mount of Ghiffa) | 20% of the test scene | 10 scenes for Training and 1 for Validation | 14 scenes for Training and 1 for Validation (results in Table A5)
Table 2. Weighted metrics computed for the Test set of the Trompone scene, divided into 3 parts: Training, Validation, Test.

Model | Overall Accuracy | Precision | Recall | F1-Score
kNN | 0.7438 | 0.7337 | 0.7438 | 0.7345
NB | 0.6639 | 0.6406 | 0.6639 | 0.6364
DT | 0.8345 | 0.8313 | 0.8345 | 0.8312
RF | 0.8804 | 0.8796 | 0.8804 | 0.8754
DGCNN | 0.7117 | 0.7400 | 0.7117 | 0.7040
DGCNN-Mod | 0.7313 | 0.7344 | 0.7313 | 0.6963
DGCNN-3Dfeat | 0.8723 | 0.8705 | 0.8723 | 0.8676
DGCNN-Mod+3Dfeat | 0.8290 | 0.8271 | 0.8290 | 0.8215
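The weighted metrics reported in Tables 2–4 can be computed, for instance, with scikit-learn [46]. The sketch below assumes hypothetical per-point label arrays `y_true` and `y_pred` and is given only to make the metric definitions explicit; it is not the evaluation script used in this work.

```python
# Minimal sketch: Overall Accuracy and class-frequency-weighted Precision/Recall/F1.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def weighted_metrics(y_true, y_pred):
    oa = accuracy_score(y_true, y_pred)
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted", zero_division=0)
    return {"Overall Accuracy": oa, "Precision": p, "Recall": r, "F1-Score": f1}
```

With `average="weighted"`, each per-class score is weighted by the number of ground-truth points in that class, which is why Recall coincides with Overall Accuracy in the tables.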
Table 3. Weighted metrics computed for the Test set of the SMV scene.

Model | Overall Accuracy | Precision | Recall | F1-Score
kNN | 0.8102 | 0.8588 | 0.8102 | 0.8248
NB | 0.7331 | 0.7970 | 0.7331 | 0.7584
DT | 0.8041 | 0.8522 | 0.8041 | 0.8180
RF | 0.8369 | 0.8736 | 0.8369 | 0.8467
DGCNN | 0.5608 | 0.6850 | 0.5608 | 0.5602
DGCNN-Mod | 0.8294 | 0.8216 | 0.8295 | 0.8192
DGCNN-3Dfeat | 0.7890 | 0.7776 | 0.7890 | 0.7720
DGCNN-Mod+3Dfeat | 0.8452 | 0.8287 | 0.8452 | 0.8343
Table 4. Weighted metrics computed for the Test set of the SMG scene.

Model | Overall Accuracy | Precision | Recall | F1-Score
kNN | 0.6078 | 0.6565 | 0.6078 | 0.6262
NB | 0.7186 | 0.7967 | 0.7186 | 0.7422
DT | 0.8952 | 0.9014 | 0.8952 | 0.8971
RF | 0.9266 | 0.9239 | 0.9266 | 0.9243
DGCNN | 0.8514 | 0.8528 | 0.8514 | 0.8474
DGCNN-Mod | 0.8951 | 0.8887 | 0.8951 | 0.8860
DGCNN-3Dfeat | 0.8736 | 0.8887 | 0.8737 | 0.8776
DGCNN-Mod+3Dfeat | 0.9135 | 0.9165 | 0.9135 | 0.9125
Table 5. Comparative overview of the key differences between the two proposed frameworks in the CH domain. From low (*) to high (***).

Aspect | Machine Learning | Deep Learning
Training Set Size Dependencies | * | ***
Programming Skills | * | ***
Feature Engineering | *** | *
Algorithm Structure | * | ***
Interpretability | *** | *
Training Time | * | **
Hyperparameter Tuning | *** | ***
Processing Power and Expensive Hardware (GPUs) | ** | ***