Article

Cluster Analysis of Cell Nuclei in H&E-Stained Histological Sections of Prostate Cancer and Classification Based on Traditional and Modern Artificial Intelligence Techniques

1. Department of Computer Engineering, u-AHRC, Inje University, Gimhae 50834, Korea
2. Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae 50834, Korea
3. School of Computing & IT, Sri Lanka Technological Campus, Paduka 10500, Sri Lanka
4. Department of Pathology, Yonsei University Hospital, Seoul 03722, Korea
* Author to whom correspondence should be addressed.
Submission received: 4 November 2021 / Revised: 14 December 2021 / Accepted: 20 December 2021 / Published: 22 December 2021
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in Korea)

Abstract

Biomarker identification is very important for differentiating the grade groups in histopathological sections of prostate cancer (PCa), and assessing the clustering of cell nuclei is essential for pathological investigation. In this study, we present a computer-based method for cluster analysis of cell nuclei and apply traditional (i.e., unsupervised) and modern (i.e., supervised) artificial intelligence (AI) techniques to distinguish the grade groups of PCa. Two PCa datasets were collected to carry out this research. Histopathology samples were obtained from whole slides stained with hematoxylin and eosin (H&E). State-of-the-art approaches were proposed for color normalization, cell nuclei segmentation, feature selection, and classification. A traditional minimum spanning tree (MST) algorithm was employed to identify the clusters and better capture the proliferation and community structure of cell nuclei. K-medoids clustering and stacked ensemble machine learning (ML) approaches were used to perform the traditional and modern AI-based classification, respectively. Binary and multiclass classification tasks were defined to compare model quality and results between the grades of PCa. Furthermore, a comparative analysis was carried out between the traditional and modern AI techniques using different performance metrics (i.e., statistical parameters). Cluster features of the cell nuclei can provide useful information for cancer grading; however, further validation of the cluster analysis is required to achieve stronger classification results.

1. Introduction

Many techniques are used for the analysis, color enhancement, segmentation, and classification of medical images, such as those produced by magnetic resonance (MR) imaging, positron emission tomography (PET), and microscopic biopsy; many internal bodily structures can be imaged non-invasively. Computers can be used for image acquisition, storage, presentation, and communication. Clinical, biochemical, and pathological images are used to diagnose and stage PCa, and computer scientists are very active in this field. However, the sensitivity and specificity of these techniques remain controversial [1]. PCa diagnosis relies on prostate MR and microscopic biopsy images. Traditional cancer diagnosis is subjective: pathologists examine biopsy samples under a microscope, and it is difficult to objectively describe tissue texture, tissue color, and cell morphology.
Despite recent advances, PCa remains a major medical issue among males, being associated with both the overtreatment of inherently benign disease and the inadequate treatment of metastases [2]. The prostate has a pseudostratified epithelium with three types of terminally differentiated epithelial cells: luminal, basal, and neuroendocrine [3]. The surrounding tissue also contains fibroblasts, smooth muscle cells, endothelial cells, immune cells, autonomic nerve fibers, and associated ganglia [4]. Malignant transformation is a multistage process: prostatic intraepithelial neoplasia (PIN) precedes localized PCa, which is followed by adenocarcinoma characterized by local invasion and, finally, metastatic PCa. The most common PCa grading system is the Gleason system, which has been refined since it was first introduced in 1974 [5]; the system is widely used to score PCa aggressiveness. However, it suffers from inter- and intra-observer variation, and most biopsy samples are negative [6,7,8]. Here, we evaluate histopathological images of cancerous tissues; PCa grading was performed by a pathologist based on structural changes in the stained sections.
Computer-based algorithms can perform cluster analyses of cell nuclei; available methods include the traditional MST [9,10,11]. MST cluster analysis, derived from graph theory, explores nuclear distributions. A tree is used to represent binary relationships, and each connected component constitutes a subtree representing an independent cluster. The identification of cancer cell abnormalities is essential for early cancer detection. Today, ML and deep learning (DL) algorithms are widely used for medical image analysis, feature classification, and pattern recognition. ML algorithms are usually accurate, fast, and customizable. Iteration is essential in ML; new data must be received and assimilated. Supervised learning is commonly used for ML training and testing: a model is trained using labeled data in a training set, and the knowledge thus acquired is used to evaluate unseen labeled data in a test set [12]. In contrast, unsupervised learning is less commonly used for disease prediction; it discovers hidden patterns in unlabeled datasets, which is essential in real-world environments. Unsupervised learning is therefore also a trustworthy method, although it is computationally complex.
In this study, four state-of-the-art approaches were proposed for color normalization, cell nuclei segmentation, feature selection, and ML classification. Histopathology samples were collected from two different centers, and two datasets were created for binary (grade 3 vs. grade 5) and multiclass (grade 3 vs. grade 4 vs. grade 5) classification. Before segmentation, stain normalization and deconvolution were carried out as preprocessing steps. After stain deconvolution, the hematoxylin channel of the image was selected for extracting the cell nuclei. Furthermore, we used an advanced method (i.e., the marker-controlled watershed algorithm) to separate overlapping cell nuclei. Next, we used an MST algorithm to perform cluster analysis and extract significant information for AI classification. The cell nuclei clusters were separated, and their features were evaluated heuristically. Cluster analysis was performed to better capture the proliferation and community structure of cell nuclei. Such methods are making their way into pathology via various computer-aided detection (CAD) systems that assist pathologic diagnosis. We then proposed a majority voting method combining filter- and wrapper-based techniques to select the most significant features. Finally, we used state-of-the-art algorithms (i.e., a stacked ML ensemble and k-medoids clustering) to perform supervised and unsupervised PCa classification. The performance metrics used to evaluate the results were accuracy, precision, recall, and F1-score.
The remainder of this paper is organized as follows: Section 2 reviews related work and discusses different state-of-the-art methods for PCa analysis. Section 3 describes the materials and methods of the study, including the data collection process and the techniques used. Section 4 presents the results of the AI models and discusses the overall implications of the study. Finally, the paper is concluded in Section 5.

2. Related Work

Histopathology image analysis of PCa is quite problematic compared to that of other cancer types. Many researchers are still working on it and trying to develop new techniques for detecting and treating PCa. It is very difficult to analyze PCa under a microscope based on the Gleason grading system because the tissue pattern, gland formation, and distribution of cell nuclei are quite similar in some regions (i.e., scores 3 and 4) of the whole slide image (WSI). Most of the existing research performed texture and morphological analyses to differentiate cancer scores using histopathology images. Table 1 summarizes the significant papers that used microscopic biopsy tissue images for the analysis of PCa.
The studies in Table 1 confirm the success of histopathological image analysis for the classification of PCa, such as benign vs. malignant and low- vs. high-grade cancer. These studies show that most authors performed morphological and texture feature analyses for PCa classification. However, morphological analysis of cell nuclei alone is not sufficient for PCa diagnosis, because the shape and size of the cell nucleus are almost identical across the grades (i.e., grade 3, grade 4, and grade 5), and AI models can therefore produce unsatisfactory results. Therefore, in the present study, we performed PCa analysis based only on the cluster features of the cell nuclei. The features extracted from the clusters are described in Section 3.2.4.

3. Materials and Methods

3.1. Data Acquisition

Dataset 1 (grade 3, grade 4, and grade 5 WSIs) was collected from Yonsei University Severance Hospital, Korea. WSIs were scanned into a computer at 40× optical magnification using a 0.3 NA objective, fitted to a C-3000 digital camera (Olympus, Tokyo, Japan) attached to a BX-51 microscope (Olympus). The tissue samples had been sectioned to a thickness of 4 μm; the sections were then deparaffinized, rehydrated, and stained with H&E (staining blue and red, respectively). The WSIs used for this research were acquired from 80 patients.
Dataset 2 (grade 3, grade 4, and grade 5 WSIs) was collected from the Kaggle repository, available at https://www.kaggle.com/c/prostate-cancer-grade-assessment (accessed on 25 March 2021). The WSIs were analyzed and prepared at the Radboud University Medical Center. All slides were scanned using a 3DHistech Panoramic Flash II 250 scanner at 20× magnification (pixel resolution 0.48 μm). All cases were retrieved from the pathology archives of the Radboud University Medical Center; patients with a pathologist's report between 2012 and 2017 were eligible for inclusion. The WSIs used for this research were acquired from 60 patients.
A total of 900 H&E-stained patch images of size 512 × 512 pixels were generated by tiling the pathology-annotated slides. The acquired samples were divided equally among the three cancer grades (300 grade 3, 300 grade 4, and 300 grade 5). For supervised classification, the dataset was divided into two subsets: a training set (80%) and a test set (20%); unsupervised classification was performed using the whole dataset. Examples of histopathological images from datasets 1 and 2 are shown in Figure 1. Binary (grade 3 vs. grade 5) and multiclass (grade 3 vs. grade 4 vs. grade 5) classification tasks were defined. Appendix A, Figure A1, Figure A2 and Figure A3 illustrate the Gleason grading process. Each grade was assigned according to the Gleason grading system as follows:
  • Grade 3: Gleason score 4 + 3 = 7. Distinctly infiltrative margin.
  • Grade 4: Gleason score 4 + 4 = 8. Irregular masses of neoplastic glands. Cancer cells have lost their ability to form glands.
  • Grade 5: Gleason score 4 + 5, 5 + 4, or 5 + 5 = 9 or 10. Only occasional gland formation. Sheets of cancer cells throughout the tissue.
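To make the data partitioning concrete, the brief sketch below performs the stratified 80/20 split of the 900 patch images; the per-grade folder layout and file extension are assumptions of this illustration, not details reported in the paper.

```python
# Hypothetical illustration of the stratified 80/20 split described above.
# The folder layout (patches/grade3, patches/grade4, patches/grade5) is assumed.
from pathlib import Path
from sklearn.model_selection import train_test_split

patches, labels = [], []
for grade in ("grade3", "grade4", "grade5"):
    for path in Path("patches", grade).glob("*.png"):
        patches.append(path)
        labels.append(grade)

# Stratify so each grade keeps its 300-image proportion in both subsets.
train_x, test_x, train_y, test_y = train_test_split(
    patches, labels, test_size=0.2, stratify=labels, random_state=42
)
print(len(train_x), "training patches,", len(test_x), "test patches")
```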

3.2. Research Pipeline

Patch images of size 512 × 512 pixels were extracted to perform AI classification. Figure 2 illustrates the entire methodology used to distinguish between the grades of PCa. The pipeline consists of seven phases: slide tiling, image preprocessing, nuclei segmentation, cluster analysis, feature extraction, feature selection, and AI classification.

3.2.1. Image Preprocessing

Our observations on H&E-stained images show a problem of color constancy, which is a critical issue for segmentation. Therefore, stain normalization is a vital step for balancing the color intensity in the histological sections. We applied stain normalization and stain deconvolution as preprocessing steps. To perform stain normalization, we selected an image from the dataset as a reference image and matched the color intensity of the source images to it. The stain normalization approach transforms both the source and reference images to the LAB color space, and the mean and standard deviation of the reference image are harmonized with those of the source image. Figure 3 shows the source, reference, and normalized images. Each image channel was normalized based on the statistics of the source and reference images. However, to improve image quality, the computation of stain normalization was slightly modified from the original equations and can be expressed as:
$\mathrm{Norm}L_{map} = \big((L_{src} - \bar{L}_{src}) \times (\hat{L}_{tar} / \hat{L}_{src})\big) + (L_{src} + \bar{L}_{tar})/2$

$\mathrm{Norm}A_{map} = \big((A_{src} - \bar{A}_{src}) \times (\hat{A}_{tar} / \hat{A}_{src})\big) + (A_{src} + \bar{A}_{tar})/2$

$\mathrm{Norm}B_{map} = \big((B_{src} - \bar{B}_{src}) \times (\hat{B}_{tar} / \hat{B}_{src})\big) + (B_{src} + \bar{B}_{tar})/2$

$\mathrm{Norm}_{map} = \mathrm{concatenate}(\mathrm{Norm}L_{map}, \mathrm{Norm}A_{map}, \mathrm{Norm}B_{map})$
where $\bar{L}$, $\bar{A}$, and $\bar{B}$ are the channel means; $\hat{L}$, $\hat{A}$, and $\hat{B}$ are the channel standard deviations; $src$ denotes the source image; $tar$ denotes the target (reference) image; and $\mathrm{Norm}_{map}$ is the normalized LAB image, which was further converted to the RGB color space. The final term of Equations (1)–(3) has been modified from the original equations [23].
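To make the normalization step concrete, the sketch below implements a Reinhard-style LAB transfer following Equations (1)–(3) as reconstructed above; the OpenCV color conversions, data types, and clipping are assumptions of this illustration rather than the authors' exact implementation.

```python
# Minimal sketch of the modified Reinhard-style stain normalization (Eqs. (1)-(3)).
# Assumes 8-bit RGB inputs; OpenCV performs the RGB <-> LAB conversions.
import cv2
import numpy as np

def normalize_stain(source_rgb: np.ndarray, reference_rgb: np.ndarray) -> np.ndarray:
    src = cv2.cvtColor(source_rgb, cv2.COLOR_RGB2LAB).astype(np.float32)
    tar = cv2.cvtColor(reference_rgb, cv2.COLOR_RGB2LAB).astype(np.float32)

    norm = np.empty_like(src)
    for c in range(3):  # L, A, B channels
        src_mean, src_std = src[..., c].mean(), src[..., c].std()
        tar_mean, tar_std = tar[..., c].mean(), tar[..., c].std()
        # Scale the centered source channel by the std ratio, then add the
        # modified offset (source channel + reference mean) / 2.
        norm[..., c] = ((src[..., c] - src_mean) * (tar_std / src_std)
                        + (src[..., c] + tar_mean) / 2)

    norm = np.clip(norm, 0, 255).astype(np.uint8)
    return cv2.cvtColor(norm, cv2.COLOR_LAB2RGB)
```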
On the other hand, stain deconvolution [24] was applied to transform the RGB color image into the stain color spaces (i.e., H&E). Examples of separated stain images are shown in Figure 4. All color values of the normalized image $I_N$ are converted to their corresponding optical density (OD) values; the OD computation for each channel (red, green, and blue) can be expressed as follows:
$I_O = 255$

$OD = -\log\left(\dfrac{I_N}{I_O}\right)$
where $I_O$ is the bright-field background (i.e., the intensity of light entering the image).
The stain matrix $M_{H,E} = \begin{bmatrix} 0.587 & 0.754 & 0.294 \\ 0.136 & 0.833 & 0.536 \end{bmatrix}$ (columns: red, green, blue; rows: hematoxylin, eosin) was estimated using the QuPath open-source software based on the reference image used for stain normalization. Here, $M_H$ is the hematoxylin stain vector [0.587, 0.754, 0.294] and $M_E$ is the eosin stain vector [0.136, 0.833, 0.536]. The normalized image is transformed into optical density space to determine the concentration of each individual stain in the RGB channels, and the estimated stain vector channels are then recombined to obtain the stain images. The computation for determining the stain concentrations and recombining the stain vector channels can be expressed as:
$\mathrm{Stain\ Concentration}_{H,E} = OD / M_{H,E}$

$\mathrm{Stain\ Image}_H = I_O \times e^{-(\mathrm{Stain\ Concentration}_H) \times (M_H)}$

$\mathrm{Stain\ Image}_E = I_O \times e^{-(\mathrm{Stain\ Concentration}_E) \times (M_E)}$
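The deconvolution step can be sketched in NumPy as below, using the hematoxylin and eosin vectors quoted above and a least-squares interpretation of the concentration step; the +1 offset that avoids log(0) and the clipping are assumptions of this illustration, not the authors' exact implementation.

```python
# Sketch of optical-density stain deconvolution (Ruifrok & Johnston [24]) using
# the H&E vectors estimated with QuPath, as quoted in the text.
import numpy as np

I_O = 255.0
M_H = np.array([0.587, 0.754, 0.294])  # hematoxylin OD vector (R, G, B)
M_E = np.array([0.136, 0.833, 0.536])  # eosin OD vector (R, G, B)
M = np.stack([M_H, M_E])               # 2 x 3 stain matrix

def deconvolve(normalized_rgb: np.ndarray):
    """Return (hematoxylin_image, eosin_image) reconstructed from one RGB patch."""
    rgb = normalized_rgb.astype(np.float64)
    od = -np.log((rgb + 1.0) / I_O).reshape(-1, 3)      # optical density per pixel
    # Least-squares stain concentrations (the "OD / M" step of the text).
    conc, *_ = np.linalg.lstsq(M.T, od.T, rcond=None)   # 2 x N
    # Re-project each stain back to RGB intensities via the Beer-Lambert law.
    h_img = I_O * np.exp(-np.outer(conc[0], M_H)).reshape(rgb.shape)
    e_img = I_O * np.exp(-np.outer(conc[1], M_E)).reshape(rgb.shape)
    return h_img.clip(0, 255).astype(np.uint8), e_img.clip(0, 255).astype(np.uint8)
```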

3.2.2. Nuclear Segmentation of Cancer Cells

To perform cell nuclear segmentation, image preprocessing was carried out as discussed in the previous section. The hematoxylin-stained image separated from the normalized image was converted to the HSI (i.e., Hue—H, Saturation—S, and Intensity—I) color space. The S-channel image (8 bits/pixel) was selected for segmentation because the cell nuclei are most apparent in it. Next, contrast adjustment (i.e., specifying a contrast limit) was performed to remove inconsistent intensity from the background. A global threshold was then applied to the saturation-adjusted image to convert it into a binary image (1 bit/pixel). Finally, the marker-controlled watershed algorithm was applied to separate overlapping nuclei [18,25,26,27,28,29]. After separating touching nuclei, small artifacts and objects were rejected (considered as noise), and morphological operations (i.e., closing and opening) were applied to remove peripheral brightness and smooth the membrane boundary of each cell nucleus. Figure 5 shows the complete process for nuclear segmentation of cancer cells.
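A condensed sketch of this segmentation chain is shown below using scikit-image and SciPy; the HSV stand-in for HSI, the contrast limits, Otsu's global threshold, and the object-size and structuring-element parameters are illustrative assumptions, since the exact values are not reported.

```python
# Condensed sketch of the nuclei segmentation chain described above.
# Parameter values (contrast limits, h-maxima depth, object sizes) are illustrative.
import numpy as np
from scipy import ndimage as ndi
from skimage import color, exposure, filters, morphology, segmentation

def segment_nuclei(hematoxylin_rgb: np.ndarray) -> np.ndarray:
    hsv = color.rgb2hsv(hematoxylin_rgb)            # HSV stands in for the HSI space
    sat = exposure.rescale_intensity(hsv[..., 1],   # saturation channel
                                     in_range=(0.05, 0.95), out_range=(0.0, 1.0))
    binary = sat > filters.threshold_otsu(sat)      # global threshold
    # Morphological closing/opening smooths boundaries and removes small artifacts.
    binary = morphology.opening(morphology.closing(binary, morphology.disk(2)),
                                morphology.disk(2))
    binary = morphology.remove_small_objects(binary, min_size=30)

    # Marker-controlled watershed: markers from distance-transform maxima
    # separate touching/overlapping nuclei.
    distance = ndi.distance_transform_edt(binary)
    markers, _ = ndi.label(morphology.h_maxima(distance, 2))
    return segmentation.watershed(-distance, markers, mask=binary)
```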

3.2.3. Cluster Analysis

This study performed intra- and inter-cluster analyses using an MST algorithm that identifies inconsistent edges between clusters. This is a graph-based method that creates a network by connecting m points in n dimensions. Here, we used an MST for cluster analysis of cell nuclei in the histological sections. In an MST, the sum of the edge weights is less than or equal to that of every other spanning tree [15,30,31]. An MST subgraph traverses all vertices of the full graph in a cycle-free manner, yielding the minimum sum of weights over all included edges, as shown in Figure 6.
The MST is useful for identifying nuclear clusters: connecting the centroids of all nuclei creates a graph from which different kinds of features can be extracted. Each center point of a cell nucleus, called a "vertex," is connected to at least one other through a line segment called an "edge." We used the Euclidean distance to measure the length of the edge joining two vertices and to construct the MST graph. The edges (distances) are sorted in ascending order and then listed. The edges pass through all vertices; if an edge connects a vertex coordinate that was not linked previously, that edge is included in the tree [32,33]. To separate vertices (nuclei) into clusters, we used a maximum distance/weight threshold of 10 pixels; any longer edge was considered inconsistent and thus removed, as shown in Figure 6a. If there are K vertices, the complete tree has (K − 1) edges. As shown in Figure 6b, the graph contains 10 groups of clusters formed by cutting links longer than the threshold value.
Next, we performed inter- and intra-cluster analyses; we computed the distances between objects in different clusters and between objects in the same cluster. Cluster analysis does not require a specific algorithm; several methods are explored on a case-by-case basis to obtain the desired output, and it is important to locate the clusters efficiently. Inter- and intra-cluster similarity are vital for clustering, as shown in Figure 6b,c, respectively. Cluster analysis identifies nuclear patterns and community structure in the histological sections and identifies similar groups in the datasets; data are clustered based on their similarity [34,35]. The Euclidean distance measure used to compute the distance between two data points can be expressed as:
$dist_e(x_1, x_2) = \sqrt{(x_1 - x_2)^2}$

$dist_{inter}(C_1, C_2) = dist_e\!\left(\dfrac{1}{|C_1|}\sum_{x_1 \in C_1} x_1,\ \dfrac{1}{|C_2|}\sum_{x_2 \in C_2} x_2\right)$

$dist_{intra}(C_1) = \dfrac{1}{|C_1|}\sum_{x_1, x_2 \in C_1} dist_e(x_1, x_2)$
where $dist_e(x_1, x_2)$ is the Euclidean distance, $x_1$ and $x_2$ are centroid points, and $dist_{inter}(C_1, C_2)$ and $dist_{intra}(C_1)$ are the inter- and intra-cluster distances, respectively.
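As a small illustration of these definitions, the sketch below computes the centroid-based inter-cluster distance and an average pairwise intra-cluster distance for nucleus centroids grouped by cluster label; the array layout is an assumption of this example.

```python
# Illustration of the inter- and intra-cluster distances defined above.
# `centroids` is an (N, 2) array of nucleus centers; `labels` holds cluster ids.
import numpy as np
from scipy.spatial.distance import pdist

def inter_cluster_distance(centroids, labels, c1, c2):
    # Euclidean distance between the mean positions of the two clusters.
    m1 = centroids[labels == c1].mean(axis=0)
    m2 = centroids[labels == c2].mean(axis=0)
    return np.linalg.norm(m1 - m2)

def intra_cluster_distance(centroids, labels, c1):
    # Average pairwise distance between all nuclei within the same cluster.
    members = centroids[labels == c1]
    return pdist(members).mean() if len(members) > 1 else 0.0
```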
Figure 7 shows the flowchart of MST construction; the detailed algorithm consists of the following steps (a code sketch of this procedure is given after the list):
  1. Create an adjacency grid matrix from the input image.
  2. Calculate the total number of grid cells in the rows and columns.
  3. Generate a graph from the adjacency matrix, which must contain the minimum and maximum weights of all vertices.
  4. Create an MST-set to track all vertices.
  5. Find the minimum weight over all vertices in the input graph.
  6. Assign that weight to the first vertex.
  7. While the MST-set does not include all vertices:
    • Select a vertex u not present in the MST-set that has the minimum weight;
    • Add u to the MST-set;
    • Update the minimum weights of all vertices adjacent to u by iterating through all adjacent vertices: for every adjacent vertex v, if the weight of edge u-v is less than the previous key value of v, update that minimum weight.
  8. Repeat step 7 until the MST is complete.
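One way to realize this procedure on nucleus centroids is sketched below using SciPy's minimum spanning tree together with the 10-pixel edge threshold mentioned above; it is a simplified stand-in for the grid-based flowchart of Figure 7.

```python
# Sketch: build an MST over nucleus centroids and cut edges longer than a
# threshold (10 pixels in the text) to obtain the intra-nuclear clusters.
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_clusters(centroids: np.ndarray, max_edge: float = 10.0):
    dist = squareform(pdist(centroids))          # dense pairwise Euclidean distances
    mst = minimum_spanning_tree(dist)            # sparse tree with K - 1 edges

    # Remove "inconsistent" edges longer than the threshold.
    pruned = mst.copy()
    pruned.data[pruned.data > max_edge] = 0
    pruned.eliminate_zeros()

    # Each connected component of the pruned tree is one cluster of nuclei.
    n_clusters, labels = connected_components(pruned, directed=False)
    return n_clusters, labels, mst
```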

3.2.4. Feature Extraction and Selection

We now describe the morphological and distance-based features extracted from the histological sections; both types of features were used for supervised and unsupervised classification using traditional and modern AI techniques. The features were computed as numerical values based on cluster area and distance. A total of 26 features were extracted, which include the total intra-cluster total MST distance, total intra-cluster nucleus to nucleus maximum distance, inter-cluster centroid to centroid total distance, inter-cluster total MST distance, number of clusters, total intra-cluster maximum MST distance, average intra-cluster nucleus to nucleus minimum distance, average intra-cluster nucleus to nucleus maximum distance, average intra-cluster maximum MST distance, average cluster area, total intra-cluster nucleus to nucleus total distance, total intra-cluster minimum MST distance, total intra-cluster nucleus to nucleus minimum distance, inter-cluster maximum MST distance, average intra-cluster total MST distance, average intra-cluster minimum MST distance, total cluster area, inter-cluster average MST distance, average intra-cluster nucleus to nucleus average distance, inter-cluster centroid to centroid average distance, minimum area of a cluster, average intra-cluster nucleus to nucleus total distance, inter-cluster centroid to centroid minimum distance, inter-cluster centroid to centroid maximum distance, maximum area of a cluster, and inter-cluster minimum MST distance.
We checked the significance of each feature; this is important because irrelevant features reduce model performance and lead to overfitting. Eliminating irrelevant features reduces model complexity, makes the model easier to interpret, enables faster training, and improves performance. In this study, a combination of filter (Chi-square, ANOVA, information gain, and Fisher score) [36,37,38] and wrapper (recursive feature elimination, permutation importance, and Boruta) [39,40,41] methods was used to select the significant features. Filter methods use statistical techniques to evaluate the relationship between each input variable and the target variable, whereas wrapper methods fit machine learning algorithms to the dataset and select the combination of features that gives the optimal results. The best 16 of the 26 features were selected by majority vote: we set a threshold of "minimum votes = 4," meaning that a feature is selected only if it receives at least 4 votes from the seven feature selection methods; features with fewer than 4 votes are rejected, as shown in Table 2.
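The voting scheme can be sketched as below with scikit-learn; Boruta and the Fisher score are omitted here to keep the illustration dependency-free (they would contribute votes in the same way), so the five selectors shown and the chosen estimators are assumptions of this example, and the vote threshold would be scaled with the number of selectors used.

```python
# Sketch of majority-voting feature selection: each selector votes for its top-k
# features, and features receiving at least `min_votes` votes are retained.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import (RFE, SelectKBest, chi2, f_classif,
                                       mutual_info_classif)
from sklearn.inspection import permutation_importance

def majority_vote_selection(X, y, k=16, min_votes=4):
    votes = np.zeros(X.shape[1], dtype=int)

    # Filter methods: chi-square (features are non-negative distances/areas),
    # ANOVA F-test, and information gain (mutual information).
    for score_fn in (chi2, f_classif, mutual_info_classif):
        votes += SelectKBest(score_fn, k=k).fit(X, y).get_support().astype(int)

    # Wrapper method: recursive feature elimination with a random forest.
    rfe = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
              n_features_to_select=k).fit(X, y)
    votes += rfe.get_support().astype(int)

    # Wrapper method: permutation importance of a fitted random forest.
    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    perm = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
    votes[np.argsort(perm.importances_mean)[-k:]] += 1

    return np.flatnonzero(votes >= min_votes)   # indices of the selected features
```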

3.2.5. AI Classification

After feature extraction and selection, modern and traditional AI techniques were used for supervised and unsupervised classification, respectively. For supervised classification, we used the ML algorithms k-NN [42], RF [43], GBM [44], XGBoost [45], and LR [46]; for unsupervised classification, we used the traditional k-medoids clustering algorithm [47]. Each supervised model was subjected to five-fold cross-validation (CV): the training data were divided into five groups, and the accuracy was recorded over five trials. Testing was likewise performed with a five-fold technique. This approach is useful for assessing model performance and identifying hyperparameters that enhance accuracy and reduce error [48,49]. The histological grades were classified in binary and multiclass settings to compare the performance of the AI techniques.
The data were standardized across the entire dataset before classification. Every feature has its own magnitude and unit, so feature scaling is sometimes required; here, we used standard scaling based on the standard normal distribution:
$x_{standardized} = \dfrac{x^{(i)} - \mathrm{Avg}[x^{(i)}]}{\sqrt{\mathrm{Var}[x^{(i)}]}}$
where $x^{(i)}$ denotes the feature values, $\mathrm{Avg}[x^{(i)}]$ is their mean ($\mu$), and $\sqrt{\mathrm{Var}[x^{(i)}]}$ is their standard deviation ($\sigma$).
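A brief sketch of the scaling step together with the unsupervised arm is given below; it assumes the KMedoids implementation from the scikit-learn-extra package, which may differ from the authors' implementation, and the three-cluster setting corresponds to the multiclass case.

```python
# Sketch: standard scaling followed by k-medoids clustering of the selected
# cluster features (multiclass case, k = 3 grade groups).
from sklearn.preprocessing import StandardScaler
from sklearn_extra.cluster import KMedoids  # provided by scikit-learn-extra

def kmedoids_grades(features, n_grades=3, random_state=0):
    scaled = StandardScaler().fit_transform(features)  # (x - mean) / std per feature
    model = KMedoids(n_clusters=n_grades, metric="euclidean",
                     random_state=random_state)
    return model.fit_predict(scaled)                    # cluster label per patch
```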
We proposed an ensemble model for supervised classification, designed by stacking five different machine learning algorithms. Figure 8 shows how the four base classifiers are trained and tested: their initial predictions are stacked and used as features to train and test the meta-classifier, which makes the final prediction. The meta-classifier provides a smooth interpretation of the initial predictions made by the base classifiers. This ensemble model was developed to achieve higher predictive performance.
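The supervised arm can be sketched with scikit-learn's StackingClassifier as below; the hyperparameters and the choice of logistic regression as the meta-classifier are assumptions of this illustration based on the five algorithms named above.

```python
# Sketch of the stacking-based ensemble: four base learners feed a meta-learner.
# Logistic regression is assumed here as the meta-classifier.
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

base_learners = [
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("gbm", GradientBoostingClassifier(random_state=0)),
    ("xgb", XGBClassifier(eval_metric="mlogloss", random_state=0)),
]

# Base-learner predictions are stacked (via internal cross-validation) and fed
# to the meta-classifier, which makes the final prediction.
stacked_model = make_pipeline(
    StandardScaler(),
    StackingClassifier(estimators=base_learners,
                       final_estimator=LogisticRegression(max_iter=1000),
                       cv=5),
)
# Usage: stacked_model.fit(X_train, y_train); stacked_model.predict(X_test)
```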

4. Experimental Results and Discussion

We performed qualitative and quantitative analyses to extract meaningful features and classify them using AI algorithms. Both multiclass and binary classification were carried out to differentiate PCa grading. We subjected 900 images to preprocessing, segmentation, cluster analysis, feature extraction, and classification. The data were equally distributed among the three grades, and the analyses were separate and independent. For supervised classification using modern AI techniques, we divided the dataset into training and testing sets at an 8:2 ratio; for unsupervised classification using the traditional AI technique, we used the whole dataset. Table 3 shows the comparative analysis between supervised and unsupervised classification, with results based on the test dataset. Furthermore, the test and whole datasets were divided into five splits while testing our supervised ensemble model and performing k-medoids unsupervised classification, respectively, to determine model generalizability. We used MATLAB (ver. R2020b; MathWorks, Natick, MA, USA) and the Python programming language for stain normalization, nuclei segmentation, MST-based cluster analysis, feature extraction, and AI-based classification. The performance metrics/statistical parameters were computed as:
$\mathrm{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN} \times 100$

$\mathrm{Precision} = \dfrac{TP}{TP + FP} \times 100$

$\mathrm{Recall} = \dfrac{TP}{TP + FN} \times 100$

$\mathrm{F1\text{-}score} = \dfrac{2 \times (\mathrm{Precision} \times \mathrm{Recall})}{\mathrm{Precision} + \mathrm{Recall}}$

where $TP$ is a true positive (a positive sample correctly classified), $TN$ is a true negative (a negative sample correctly classified), $FP$ is a false positive (a negative sample incorrectly classified as positive), and $FN$ is a false negative (a positive sample incorrectly classified as negative).
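For completeness, these metrics can be computed directly from the test-set predictions, e.g., with scikit-learn as in the short sketch below; macro averaging over the grade groups is an assumption for the multiclass case.

```python
# Sketch: computing the reported metrics from test-set predictions.
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

def report(y_true, y_pred):
    return {
        "accuracy": 100 * accuracy_score(y_true, y_pred),
        "precision": 100 * precision_score(y_true, y_pred, average="macro"),
        "recall": 100 * recall_score(y_true, y_pred, average="macro"),
        "f1_score": f1_score(y_true, y_pred, average="macro"),
        "confusion_matrix": confusion_matrix(y_true, y_pred),
    }
```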
From the obtained results, we found that supervised ensemble classification using modern AI techniques outperformed unsupervised classification using the traditional AI technique, although both performed well. For multiclass classification with the supervised ensemble, the model performed best on test split 1, achieving an overall accuracy, precision, recall, and F1-score of 97.2%, 97.3%, 97.3%, and 97.3%, respectively. For binary classification with the supervised technique, the model reached 100% on all performance measures on test split 2. In contrast, for unsupervised multiclass classification, the k-medoids algorithm performed best on data split 2, achieving an overall accuracy, precision, recall, and F1-score of 92.5%, 92.7%, 92.0%, and 92.3%, respectively. Likewise, for binary classification, the k-medoids algorithm performed best on data split 2 (accuracy: 96.7%, precision: 96.5%, recall: 96.5%, and F1-score: 97.0%). Figure 9 shows the confusion matrices generated to evaluate the performance of the supervised and unsupervised classification; the results are based on the test dataset. We present the confusion matrices of both multiclass and binary classification, showing the data that were correctly and erroneously classified during testing of the ensemble model and the unsupervised learning. The confusion matrices also show that the high cancer grade (i.e., grade 5) was classified accurately by both the supervised and unsupervised techniques. Figure 10 shows bar graphs of the accuracy score of each grade separately; the scores were obtained from the confusion matrices in Figure 9.
The current study was not designed around clinical data; instead, we used PCa image data. A total of 900 microscopic biopsy samples (i.e., 300 of grade 3, 300 of grade 4, and 300 of grade 5) were used. The samples were distributed equally among the three grade groups of PCa, so our dataset had no class imbalance. For ML-based supervised ensemble classification, the dataset was separated into training (720 samples) and testing (180 samples) parts at an 8:2 ratio; for unsupervised classification, the whole dataset was used without a training/testing split. Regarding feature reduction, after the majority voting approach using statistical and ML techniques, the 16 best features were selected for optimum performance and 10 were rejected, as shown in Table 2. The selected features were then used for AI classification to differentiate between the grades of PCa. Figure 11 shows a bar graph of the best performance scores of the supervised and unsupervised classifications.
There are many feature selection methods, and it is quite difficult to select the best one. In addition, we must be very careful about which features are fed to the model, because ML follows the "garbage in, garbage out" principle: irrelevant features increase computational cost and decrease model performance. Since it is challenging to identify which single method suits our dataset best, and each method selects significant features differently, the majority voting approach was proposed to solve this problem.
The MST cluster analysis method was applied to the PCa tissue samples of datasets 1 and 2, and the visualizations of the intra- and inter-cluster MSTs are shown in Figure 12. From the figure, it can be seen that the structure and shape of the clusters differ between the grades. Compared with other common cancers, it is quite challenging for researchers and doctors to analyze microscopic biopsy images of PCa and identify suitable biomarkers.
The gold standard for the diagnosis of prostate cancer is a pathologist's evaluation of prostate tissue. To assist pathologists, DL-based cancer detection systems have been developed; many state-of-the-art models are patch-based convolutional neural networks. Patch-based systems typically require detailed, pixel-level annotations for effective training, but such annotations are seldom readily available, in contrast to pathologists' clinical reports, which contain slide-level labels. In our study, we tiled the images annotated and graded by the pathologist and used an MST algorithm to perform cluster analysis and extract significant information for AI classification. The proliferation and cluster structure of cell nuclei, as shown in Appendix A, Figure A4 (Gleason pattern 3), Figure A5 (Gleason pattern 4), and Figure A6 (Gleason pattern 5), can help the pathologist to identify, classify, and assign Gleason scores more precisely in light of tumor heterogeneity and variability.
Nowadays, deep learning-based algorithms are mostly used for cancer image analysis and classification. In this paper, however, we used traditional image processing algorithms to analyze PCa biopsy images and performed classification using modern and traditional AI techniques. In addition, we compared the performance of our proposed approach with other state-of-the-art methods, as shown in Table 4.
The limitations of our study are as follows:
  • The size of the image datasets was too small to perform cluster analysis with deep learning-based algorithms, such as graph convolutional neural networks (GCNN) and LSTM networks; the study could be improved by increasing the number of data samples.
  • Cell nuclei segmentation using traditional algorithms remains a major challenge, but this can be improved gradually by performing cell-level analysis with different state-of-the-art methods.
  • Unsupervised classification is very important in real-world environments; the clustering approach used in our study performed well but did not match the results of supervised classification. This can be improved by analyzing the feature dissimilarities between the PCa grades.

5. Conclusions

In this paper, we focused principally on the cluster features of nuclei in tissue images, which facilitate cancer grading. Two-dimensional tissue images stained with H&E were subjected to cluster shape and size analyses; the distribution of cell nuclei and the shape and size of the clusters changed as the cancer grade progressed. We developed multiple methods for histopathological image analysis (i.e., stain normalization, cell nuclei segmentation, cluster analysis, feature selection, and classification), and majority voting and stacking-based ensemble techniques were proposed for feature selection and classification, respectively. All of the methods were executed successfully and achieved promising results. Cell-level analysis in diagnostic cytopathology is important for analyzing and differentiating the clusters of cell nuclei in each cancer grade. Although we carried out several lines of investigation, many challenges remain.
In conclusion, this research contributes useful information about the proliferation and community structure of cell nuclei in histological sections of PCa. Although we used several state-of-the-art methods and achieved strong results, in-depth research is still required on the segmentation and cluster analysis of cell nuclei using other state-of-the-art algorithms; overcoming the challenges of medical image analysis requires thinking beyond conventional approaches. In the future, we will extend this work by performing cluster-based graph convolutional neural network (GCNN) classification and applying our approach to other types of cancer.

Author Contributions

Formal analysis, S.B.; Funding acquisition, H.-K.C.; Investigation, S.B.; Methodology, S.B. and N.M.; Resources, N.-H.C.; Supervision, H.-C.K. and H.-K.C.; Validation, K.I. and Y.-B.H.; Visualization, Y.-B.H. and R.I.S.; Writing—original draft, S.B.; Writing—review and editing, K.S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MIST) (Grant No. 2021R1A2C2008576), and grant from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (Grant No: HI21C0977).

Informed Consent Statement

For dataset 1, the requirement for written informed patient consent was waived by the Institutional Ethics Committee of the College of Medicine, Yonsei University, Korea (IRB number 1-2018-0044). Dataset 2 was anonymized for the PANDA challenge, and the need for informed consent was waived by the local ethics review board of the Radboud University Medical Center, the Netherlands (IRB 2016-2275).

Data Availability Statement

Dataset 1 is not available online and cannot be transferred without an internal permission procedure; it is available from the corresponding author on request. Dataset 2 is openly available in the Kaggle repository at https://www.kaggle.com/c/prostate-cancer-grade-assessment (accessed on 25 March 2021). Code, test data, and pre-trained models for supervised ensemble classification are available in the GitHub repository at https://github.com/subrata001/Prostate-Cancer-Classification-Based-On-Ensemble-Machine-Learning-Techniques (accessed on 7 September 2021).

Acknowledgments

Firstly, we would like to thank Nam-Hoon Cho from the Severance Hospital of Yonsei University for providing the materials for this research. Secondly, we would like to thank JLK Inc., Korea, http://www.jlkgroup.com/ (accessed on 7 September 2021), for cooperating in the project and research work. Special thanks to Heung-Kook Choi for his support and suggestions during the preparation of this paper, and to Hee-Cheol Kim.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The pathology-annotated WSIs used in this research to analyze the pattern and community structure of cell nuclei in grades 3, 4, and 5 are shown in Figure A1, Figure A2 and Figure A3, respectively. The cluster analysis was performed successfully on the histological images of PCa. To visualize the community structure of cell nuclei, we plotted the clusters in the annotated regions of grade 3, grade 4, and grade 5 WSIs, as shown in Figure A4, Figure A5 and Figure A6, respectively.
Figure A1. Prostate adenocarcinoma with Gleason scores 4 and 3 annotated with red and blue color, respectively.
Figure A2. Prostate adenocarcinoma with Gleason score 4 annotated with red color.
Figure A3. Prostate adenocarcinoma with Gleason scores 5 and 4 annotated with orange and red color, respectively.
Figure A4. The proliferation and community structure of cell nuclei in the annotated region of grade 3.
Figure A5. The proliferation and community structure of cell nuclei in the annotated region of grade 4.
Figure A6. The proliferation and community structure of cell nuclei in the annotated region of grade 5.

References

  1. Zhu, Y.; Williams, S.; Zwiggelaar, R. Computer Technology in Detection and Staging of Prostate Carcinoma: A Review. Med. Image Anal. 2006, 10, 179–199.
  2. Wang, G.; Zhao, D.; Spring, D.J.; DePinho, R.A. Genetics and Biology of Prostate Cancer. Genes Dev. 2018, 32, 1105–1140.
  3. Shen, M.M.; Abate-Shen, C. Molecular Genetics of Prostate Cancer: New Prospects for Old Challenges. Genes Dev. 2010, 24, 1967–2000.
  4. Barron, D.A.; Rowley, D.R. The Reactive Stroma Microenvironment and Prostate Cancer Progression. Endocr.-Relat. Cancer 2012, 19, R187–R204.
  5. Gleason, D.F.; Mellinger, G.T.; Veterans Administration Cooperative Urological Research Group. Prediction of Prognosis for Prostatic Adenocarcinoma by Combined Histological Grading and Clinical Staging. J. Urol. 2017, 197, S134–S139.
  6. Cintra, M.L.; Billis, A. Histologic Grading of Prostatic Adenocarcinoma: Intraobserver Reproducibility of the Mostofi, Gleason and Böcking Grading Systems. Int. Urol. Nephrol. 1991, 23, 449–454.
  7. Özdamar, Ş.O.; Sarikaya, Ş.; Yildiz, L.; Atilla, M.K.; Kandemir, B.; Yildiz, S. Intraobserver and Interobserver Reproducibility of WHO and Gleason Histologic Grading Systems in Prostatic Adenocarcinomas. Int. Urol. Nephrol. 1996, 28, 73–77.
  8. Egevad, L.; Ahmad, A.S.; Algaba, F.; Berney, D.M.; Boccon-Gibod, L.; Compérat, E.; Evans, A.J.; Griffiths, D.; Grobholz, R.; Kristiansen, G.; et al. Standardization of Gleason Grading among 337 European Pathologists. Histopathology 2013, 62, 247–256.
  9. Xu, Y.; Olman, V.; Xu, D. Minimum Spanning Trees for Gene Expression Data Clustering. Genome Inform. 2001, 12, 24–33.
  10. Kruskal, J.B. On the Shortest Spanning Subtree of a Graph and the Traveling Salesman Problem. Proc. Am. Math. Soc. USA 1956, 7, 48–50.
  11. Gower, J.C.; Ross, G.J.S. Minimum Spanning Trees and Single Linkage Cluster Analysis. Appl. Stat. 1969, 18, 54–64.
  12. Pliner, H.A.; Shendure, J.; Trapnell, C. Supervised classification enables rapid annotation of cell atlases. Nat. Methods 2019, 16, 983–986.
  13. Poojitha, U.P.; Lal Sharma, S. Hybrid Unified Deep Learning Network for Highly Precise Gleason Grading of Prostate Cancer. In Proceedings of the 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 899–903.
  14. Jafari-Khouzani, K.; Soltanian-Zadeh, H. Multiwavelet Grading of Pathological Images of Prostate. IEEE Trans. Biomed. Eng. 2003, 50, 697–704.
  15. Kwak, J.T.; Hewitt, S.M. Nuclear Architecture Analysis of Prostate Cancer via Convolutional Neural Networks. IEEE Access 2017, 5, 18526–18533.
  16. Linkon, A.H.M.; Labib, M.M.; Hasan, T.; Hossain, M.; Jannat, M.-E. Deep Learning in Prostate Cancer Diagnosis and Gleason Grading in Histopathology Images: An Extensive Study. Inform. Med. Unlocked 2021, 24, 100582.
  17. Wang, J.; Chen, R.J.; Lu, M.Y.; Baras, A.; Mahmood, F. Weakly Supervised Prostate Tma Classification Via Graph Convolutional Networks. In Proceedings of the IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 239–243.
  18. Bhattacharjee, S.; Park, H.G.; Kim, C.H.; Prakash, D.; Madusanka, N.; So, J.H.; Cho, N.H.; Choi, H.K. Quantitative Analysis of Benign and Malignant Tumors in Histopathology: Predicting Prostate Cancer Grading Using SVM. Appl. Sci. 2019, 9, 2969.
  19. Bhattacharjee, S.; Kim, C.H.; Prakash, D.; Park, H.G.; Cho, N.H.; Choi, H.K. An Efficient Lightweight Cnn and Ensemble Machine Learning Classification of Prostate Tissue Using Multilevel Feature Analysis. Appl. Sci. 2020, 10, 8013.
  20. Nir, G.; Hor, S.; Karimi, D.; Fazli, L.; Skinnider, B.F.; Tavassoli, P.; Turbin, D.; Villamil, C.F.; Wang, G.; Wilson, R.S.; et al. Automatic grading of prostate cancer in digitized histopathology images: Learning from multiple experts. Med. Image Anal. 2018, 50, 167–180.
  21. Ali, S.; Veltri, R.; Epstein, J.A.; Christudass, C.; Madabhushi, A. Cell Cluster Graph for Prediction of Biochemical Recurrence in Prostate Cancer Patients from Tissue Microarrays. In Medical Imaging 2013: Digital Pathology; Gurcan, M.N., Madabhushi, A., Eds.; International Society for Optics and Photonics: Bellingham, WA, USA, 2013; p. 86760H.
  22. Kim, C.-H.; Bhattacharjee, S.; Prakash, D.; Kang, S.; Cho, N.-H.; Kim, H.-C.; Choi, H.-K. Artificial Intelligence Techniques for Prostate Cancer Detection through Dual-Channel Tissue Feature Engineering. Cancers 2021, 13, 1524.
  23. Reinhard, E.; Ashikhmin, M.; Gooch, B.; Shirley, P. Color Transfer between Images. IEEE Comput. Graph. Appl. 2001, 21, 34–41.
  24. Ruifrok, A.C.; Johnston, D.A. Quantification of Histochemical Staining by Color Deconvolution. Anal. Quant. Cytol. Histol. 2001, 23, 291–299.
  25. Tan, K.S.; Mat Isa, N.A.; Lim, W.H. Color Image Segmentation Using Adaptive Unsupervised Clustering Approach. Appl. Soft Comput. 2013, 13, 2017–2036.
  26. Azevedo Tosta, T.A.; Neves, L.A.; do Nascimento, M.Z. Segmentation Methods of H&E-Stained Histological Images of Lymphoma: A Review. Inform. Med. Unlocked 2017, 9, 34–43.
  27. Song, J.; Xiao, L.; Lian, Z. Contour-Seed Pairs Learning-Based Framework for Simultaneously Detecting and Segmenting Various Overlapping Cells/Nuclei in Microscopy Images. IEEE Trans. Image Process. 2018, 27, 5759–5774.
  28. Liu, C.; Shang, F.; Ozolek, J.; Rohde, G. Detecting and Segmenting Cell Nuclei in Two-Dimensional Microscopy Images. J. Pathol. Inform. 2016, 7, 42–50.
  29. Xu, H.; Lu, C.; Mandal, M. An Efficient Technique for Nuclei Segmentation Based on Ellipse Descriptor Analysis and Improved Seed Detection Algorithm. IEEE J. Biomed. Health Inform. 2014, 18, 1729–1741.
  30. Guven, M.; Cengizler, C. Data Cluster Analysis-Based Classification of Overlapping Nuclei in Pap Smear Samples. Biomed. Eng. Online 2014, 13, 159–177.
  31. Lv, X.; Ma, Y.; He, X.; Huang, H.; Yang, J. CciMST: A Clustering Algorithm Based on Minimum Spanning Tree and Cluster Centers. Math. Probl. Eng. 2018, 2018, 8451796.
  32. Nithyanandam, G. Graph based image segmentation method for identification of cancer in prostate MRI image. J. Comput. Appl. 2011, 4, 104–108.
  33. Pike, R.; Lu, G.; Wang, D.; Chen, Z.G.; Fei, B. A Minimum Spanning Forest-Based Method for Noninvasive Cancer Detection with Hyperspectral Imaging. IEEE Trans. Biomed. Eng. 2016, 63, 653–663.
  34. Ying, S.; Xu, G.; Li, C.; Mao, Z. Point Cluster Analysis Using a 3D Voronoi Diagram with Applications in Point Cloud Segmentation. ISPRS Int. J. Geo-Inf. 2015, 4, 1480–1499.
  35. Nithya, S.; Bhuvaneswari, S.; Senthil, S. Robust Minimal Spanning Tree Using Intuitionistic Fuzzy C-Means Clustering Algorithm for Breast Cancer Detection. Am. J. Neural Netw. Appl. 2019, 5, 12–22.
  36. Bommert, A.; Sun, X.; Bischl, B.; Rahnenführer, J.; Lang, M. Benchmark for Filter Methods for Feature Selection in High-Dimensional Classification Data. Comput. Stat. Data Anal. 2020, 143, 106839.
  37. Karabulut, E.M.; Özel, S.A.; İbrikçi, T. A Comparative Study on the Effect of Feature Selection on Classification Accuracy. Procedia Technol. 2012, 1, 323–327.
  38. Pirgazi, J.; Alimoradi, M.; Esmaeili Abharian, T.; Olyaee, M.H. An Efficient Hybrid Filter-Wrapper Metaheuristic-Based Gene Selection Method for High Dimensional Datasets. Sci. Rep. 2019, 9, 18580.
  39. Zhao, G.; Wu, Y. Feature Subset Selection for Cancer Classification Using Weight Local Modularity. Sci. Rep. 2016, 6, 34759.
  40. Sun, X.; Liu, Y.; Wei, D.; Xu, M.; Chen, H.; Han, J. Selection of Interdependent Genes via Dynamic Relevance Analysis for Cancer Diagnosis. J. Biomed. Inform. 2013, 46, 252–258.
  41. Guyon, I.; Weston, J.; Barnhill, S.; Vapnik, V. Gene Selection for Cancer Classification Using Support Vector Machines. Mach. Learn. 2002, 46, 389–422.
  42. Zhang, S.; Li, X.; Zong, M.; Zhu, X.; Wang, R. Efficient KNN Classification with Different Numbers of Nearest Neighbors. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 1774–1785.
  43. Toth, R.; Schiffmann, H.; Hube-Magg, C.; Büscheck, F.; Höflmayer, D.; Weidemann, S.; Lebok, P.; Fraune, C.; Minner, S.; Schlomm, T.; et al. Random Forest-Based Modelling to Detect Biomarkers for Prostate Cancer Progression. Clin. Epigenet. 2019, 11, 148.
  44. Natekin, A.; Knoll, A. Gradient Boosting Machines, a Tutorial. Front. Neurorobot. 2013, 7, 21.
  45. Ma, B.; Meng, F.; Yan, G.; Yan, H.; Chai, B.; Song, F. Diagnostic Classification of Cancers Using Extreme Gradient Boosting Algorithm and Multi-Omics Data. Comput. Biol. Med. 2020, 121, 103761.
  46. Zhou, X.; Liu, K.Y.; Wong, S.T.C. Cancer Classification and Prediction Using Logistic Regression with Bayesian Gene Selection. J. Biomed. Inform. 2004, 37, 249–259.
  47. Sahran, S.; Albashish, D.; Abdullah, A.; Shukor, N.A.; Hayati Md Pauzi, S. Absolute Cosine-Based SVM-RFE Feature Selection Method for Prostate Histopathological Grading. Artif. Intell. Med. 2018, 87, 78–90.
  48. García Molina, J.F.; Zheng, L.; Sertdemir, M.; Dinter, D.J.; Schönberg, S.; Rädle, M. Incremental Learning with SVM for Multimodal Classification of Prostatic Adenocarcinoma. PLoS ONE 2014, 9, e93600.
  49. Albashish, D.; Sahran, S.; Abdullah, A.; Shukor, N.A.; Pauzi, S. Ensemble Learning of Tissue Components for Prostate Histopathology Image Grading. Int. J. Adv. Sci. Eng. Inf. Technol. 2016, 6, 1134–1140.
Figure 1. Histologic findings for each grade of prostate cancer. (ac) Dataset 1: grade 3, grade 4, and grade 5, respectively. (df) Dataset 2: grade 3, grade 4, and grade 5, respectively.
Figure 2. Analytical pipeline for the cluster analysis and AI classification of cancer grades observed in histological sections.
Figure 3. Stain normalization. (a) Raw image. (b) Reference image. (c) Normalized image.
Figure 4. Stain deconvolution. (a) Normalized image. (b) Hematoxylin channel. (c) Eosin channel.
Figure 5. The complete process for nuclear segmentation of cancer cells. (a) Hematoxylin channel extracted after performing stain deconvolution. (b) HSI color space converted from (a). (c) Saturation channel extracted from (b). (d) Contrast adjusted image extracted from (c). (e) Binary image after applying global thresholding on (d). (f) Nuclei segmentation after applying the watershed algorithm on (e). Some small objects and artifacts were removed before and after applying the watershed algorithm.
Figure 6. Examples of MST cluster analysis. (a) An MST is based on the minimum distances between vertex coordinates. The red dashed lines indicate the removal of inconsistent edges. (b) An intra-cluster MST was obtained after removal of the nine longest edges from (a); the red circles indicate inter- and intra-cluster similarity. (c) The inter-cluster MST was obtained from (b).
Figure 7. Flow chart of MST construction.
Figure 8. Machine learning stacking-based ensemble classification. The data were scaled before training and testing. The classification was carried out in two steps: initial and final predictions using base and meta classifiers, respectively.
Figure 9. Confusion matrices of the supervised and unsupervised classification using test and whole datasets, respectively. (a,b) Confusion matrices of multiclass and binary classification using supervised ensemble technique based upon the test split 1 and 2 in Table 3A, respectively. (c,d) Confusion matrices of multiclass and binary classification using an unsupervised technique based upon the data split 2, respectively.
Figure 10. Bar charts of the accuracy scores of unsupervised and supervised classifications. (a) Multiclass classification. (b) Binary classification. The performance of each PCa grade was obtained from the confusion matrices.
Figure 11. Bar chart of the overall performance scores of supervised and unsupervised classifications.
Figure 12. Visualization of the intra- and inter-cluster MST graphs. (a–c) The intra-cluster MSTs of grade 3, grade 4, and grade 5, respectively. (d–f) The inter-cluster MSTs generated from (a–c), respectively. The dotted red circle indicates a cluster of cell nuclei; the different colored lines in (a–c) and (d–f) indicate intra- and inter-clusters, respectively.
Table 1. Summary of some existing papers that performed PCa analysis using histopathology images.
Table 1. Summary of some existing papers that performed PCa analysis using histopathology images.
Author | Techniques | Classification Types | Description and Performance
Uthappa et al., 2019 [13] | CNN-based texture analysis | Multiclass (grade 2, 3, 4, and 5) | Developed a hybrid unified deep learning network to grade PCa and achieved an accuracy of 98.0%.
Khouzani et al., 2003 [14] | Handcrafted texture analysis | Multiclass (grade 2, 3, 4, and 5) | Calculated energy and entropy features of the multiwavelet coefficients of each image and used an ML classifier to assign each image to the appropriate grade; achieved an accuracy of 97.0%.
Kwak et al., 2017 [15] | CNN-based texture and nuclear architectural analysis | Binary (benign vs. cancer) | Presented a CNN approach to identify PCa; additionally extracted handcrafted nuclear architecture features and performed ML classification. The CNN (AUC of 0.95) performed significantly better than the other ML algorithms.
Linkon et al., 2021 [16] | Review of techniques for PCa detection and histopathology image analysis | N/A | Discussed recent advances in CAD systems using DL for automatic detection and recognition, summarized the current state of existing techniques, and described research findings, current limitations, and the future scope for research.
Wang et al., 2020 [17] | Morphological, texture, and contrastive predictive coding feature analysis | Binary (score 3 + 3 vs. 3 + 4) | Proposed a weakly supervised approach for grade classification in tissue microarrays using a graph CNN; achieved an accuracy of 88.6% and an AUC of 0.96.
Bhattacharjee et al., 2019 [18] | Morphological analysis | Binary (benign vs. malignant; grade 3 vs. grades 4, 5; grade 4 vs. grade 5) and multiclass (benign, grade 3, grade 4, and grade 5) | Performed morphological analysis of cell nuclei and lumina in histopathology images and carried out multiclass and binary classification; the best accuracy of 92.5% was achieved for binary classification (grade 4 vs. grade 5) using a support vector machine classifier.
Bhattacharjee et al., 2020 [19] | Handcrafted and non-handcrafted feature analysis using AI techniques | Binary (benign vs. malignant) | Introduced two lightweight CNN models for histopathology image classification and compared them with other state-of-the-art models; achieved an accuracy of 94.0% with the proposed DL model.
Nir et al., 2018 [20] | Glandular-, nuclear-, and image-based feature analysis | Binary (benign vs. all grades; grade 3 vs. grades 4, 5) | Proposed novel features based on intra- and inter-nucleus properties for classification using ML and DL algorithms; the best accuracy of 91.6% was achieved for benign vs. all grades using linear discriminant analysis.
Ali et al., 2013 [21] | Morphological and architectural feature analysis from a cell cluster graph | Binary (no recurrence vs. recurrence) | Defined cell clusters as nodes and constructed a novel Cell Cluster Graph (CCG); extracted global and local CCG features that best capture tumor morphology. Randomized three-fold cross-validation with a support vector machine classifier achieved an accuracy of 83.1%.
Kim et al., 2021 [22] | Texture analysis using DL and ML techniques | Binary (benign vs. malignant; low- vs. high-grade) | Used DL (long short-term memory network) and ML (logistic regression, bagging tree, boosting tree, and support vector machine) techniques to classify dual-channel tissue features extracted from hematoxylin and eosin tissue images.
Table 2. Feature selection based on majority voting. The most significant features were selected based on a majority of "True" votes. True: selected; False: not selected; χ2: chi-square test; FS: Fisher score; IG: information gain; RFE: recursive feature elimination; PI: permutation importance.
Feature | χ2 | FS | IG | ANOVA | RFE | PI | Boruta | Votes | Select/Reject
total intra-cluster total MST distance | True | True | True | True | True | True | True | 7 | Select
total intra-cluster nucleus to nucleus maximum distance | True | True | True | True | True | True | True | 7 | Select
inter-cluster centroid to centroid total distance | True | False | True | True | True | True | True | 6 | Select
inter-cluster total MST distance | True | True | True | True | True | False | True | 6 | Select
number of clusters | True | True | True | True | True | False | True | 6 | Select
total intra-cluster maximum MST distance | True | True | True | True | True | False | True | 6 | Select
average intra-cluster nucleus to nucleus minimum distance | False | True | True | True | True | False | True | 5 | Select
average intra-cluster nucleus to nucleus maximum distance | False | True | True | True | True | False | True | 5 | Select
average intra-cluster maximum MST distance | False | True | True | True | True | False | True | 5 | Select
average cluster area | True | True | False | False | True | True | True | 5 | Select
total intra-cluster nucleus to nucleus total distance | True | False | False | True | True | True | True | 5 | Select
total intra-cluster minimum MST distance | True | True | True | True | False | False | True | 5 | Select
total intra-cluster nucleus to nucleus minimum distance | True | True | True | True | False | False | True | 5 | Select
inter-cluster maximum MST distance | True | True | False | False | True | False | True | 4 | Select
average intra-cluster total MST distance | False | True | True | False | True | False | True | 4 | Select
average intra-cluster minimum MST distance | False | True | True | True | False | False | True | 4 | Select
total cluster area | True | False | False | False | False | True | True | 3 | Reject
inter-cluster average MST distance | False | False | True | True | False | False | True | 3 | Reject
average intra-cluster nucleus to nucleus average distance | False | False | True | True | False | False | True | 3 | Reject
inter-cluster centroid to centroid average distance | False | True | False | False | True | False | False | 2 | Reject
minimum area of cluster | True | False | False | False | True | False | False | 2 | Reject
average intra-cluster nucleus to nucleus total distance | True | False | False | False | False | False | True | 2 | Reject
inter-cluster centroid to centroid minimum distance | False | False | False | False | False | False | True | 1 | Reject
inter-cluster centroid to centroid maximum distance | False | False | False | False | False | False | True | 1 | Reject
maximum area of cluster | True | False | False | False | False | False | False | 1 | Reject
inter-cluster minimum MST distance | False | False | False | False | False | False | True | 1 | Reject
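A minimal sketch of majority-vote feature selection in the spirit of Table 2 is given below, implemented with scikit-learn. It includes the chi-square, ANOVA, information-gain (mutual information), RFE, and permutation-importance selectors; the Fisher score and Boruta selectors from Table 2 require third-party packages and are omitted here. The number of features retained per selector (k = 16) and the vote threshold (at least 4, matching the Select/Reject boundary in Table 2) are illustrative assumptions, not the study's exact settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectKBest, chi2, f_classif, mutual_info_classif
from sklearn.inspection import permutation_importance
from sklearn.preprocessing import MinMaxScaler

def vote_features(X, y, feature_names, k=16, min_votes=4):
    """Majority voting across selectors; a feature is kept when at least
    `min_votes` selectors retain it (the Select/Reject rule of Table 2)."""
    X = np.asarray(X, dtype=float)
    votes = np.zeros(len(feature_names), dtype=int)

    # Chi-square requires non-negative inputs, so rescale to [0, 1] for that test only.
    X_pos = MinMaxScaler().fit_transform(X)
    votes += SelectKBest(chi2, k=k).fit(X_pos, y).get_support().astype(int)

    # ANOVA F-test and information gain (mutual information) on the raw features.
    votes += SelectKBest(f_classif, k=k).fit(X, y).get_support().astype(int)
    votes += SelectKBest(mutual_info_classif, k=k).fit(X, y).get_support().astype(int)

    # Recursive feature elimination wrapped around a random forest.
    forest = RandomForestClassifier(n_estimators=200, random_state=0)
    votes += RFE(forest, n_features_to_select=k).fit(X, y).get_support().astype(int)

    # Permutation importance: vote for the k features with the largest mean importance.
    forest.fit(X, y)
    importances = permutation_importance(forest, X, y, n_repeats=10, random_state=0).importances_mean
    top_k = np.zeros(len(feature_names), dtype=bool)
    top_k[np.argsort(importances)[::-1][:k]] = True
    votes += top_k.astype(int)

    selected = [name for name, v in zip(feature_names, votes) if v >= min_votes]
    return selected, dict(zip(feature_names, votes.tolist()))
```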
Table 3. Comparative analysis of the performance of supervised and unsupervised classification using the test and whole datasets, respectively. A five-fold technique was used for both supervised and unsupervised classification. Splits 1 and 2 of the supervised classification and split 2 of the unsupervised classification show the best results (marked in bold).
(A) Supervised Ensemble Classification: Modern AI Technique

Multiclass Classification (Grade 3 vs. Grade 4 vs. Grade 5)
Test Split | Accuracy | Precision | Recall | F1-Score
Split 1 | 97.2% | 97.3% | 97.3% | 97.3%
Split 2 | 91.7% | 92.0% | 91.7% | 91.7%
Split 3 | 97.2% | 97.3% | 97.3% | 97.3%
Split 4 | 94.4% | 94.7% | 94.7% | 94.7%
Split 5 | 91.7% | 91.7% | 91.7% | 91.7%
Average | 94.4% | 94.7% | 94.3% | 94.7%

Binary Classification (Grade 3 vs. Grade 5)
Test Split | Accuracy | Precision | Recall | F1-Score
Split 1 | 91.7% | 91.6% | 91.6% | 91.6%
Split 2 | 100% | 100% | 100% | 100%
Split 3 | 95.8% | 96.2% | 95.8% | 95.9%
Split 4 | 95.8% | 96.2% | 95.8% | 95.9%
Split 5 | 91.7% | 92.8% | 91.6% | 92.2%
Average | 95.0% | 95.0% | 95.0% | 95.0%

(B) K-Medoids Unsupervised Classification: Traditional AI Technique

Multiclass Classification (Grade 3 vs. Grade 4 vs. Grade 5)
Data Split | Accuracy | Precision | Recall | F1-Score
Split 1 | 86.1% | 87.0% | 86.0% | 86.3%
Split 2 | 92.3% | 92.7% | 92.0% | 92.3%
Split 3 | 86.7% | 88.3% | 86.7% | 87.0%
Split 4 | 88.3% | 88.3% | 88.3% | 88.0%
Split 5 | 91.6% | 91.7% | 91.7% | 91.7%
Average | 88.5% | 89.7% | 88.3% | 88.7%

Binary Classification (Grade 3 vs. Grade 5)
Data Split | Accuracy | Precision | Recall | F1-Score
Split 1 | 81.7% | 82.0% | 81.5% | 81.5%
Split 2 | 96.7% | 96.5% | 96.5% | 97.0%
Split 3 | 89.2% | 89.5% | 89.0% | 89.0%
Split 4 | 86.7% | 87.5% | 86.5% | 86.5%
Split 5 | 93.3% | 93.5% | 93.5% | 93.5%
Average | 88.3% | 88.5% | 88.5% | 88.5%
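The unsupervised results in Table 3B come from k-medoids clustering of the cluster features. The sketch below is a minimal illustration assuming the KMedoids implementation from the scikit-learn-extra package and a majority-vote mapping from clusters to grade labels for scoring; both the library choice and the mapping rule are assumptions for illustration rather than the exact evaluation protocol of this study.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import StandardScaler
from sklearn_extra.cluster import KMedoids   # pip install scikit-learn-extra

def kmedoids_grade_accuracy(X, y_true, n_clusters=3, seed=0):
    """Cluster the scaled cluster features with k-medoids, map every cluster
    to the majority grade among its members, then score against the labels."""
    X_scaled = StandardScaler().fit_transform(X)
    cluster_ids = KMedoids(n_clusters=n_clusters, random_state=seed).fit_predict(X_scaled)

    y_true = np.asarray(y_true)
    y_pred = np.empty_like(y_true)
    for c in range(n_clusters):
        members = cluster_ids == c
        if members.any():
            grades, counts = np.unique(y_true[members], return_counts=True)
            y_pred[members] = grades[np.argmax(counts)]   # majority grade in the cluster
    return accuracy_score(y_true, y_pred)

# Usage: acc = kmedoids_grade_accuracy(features, grade_labels, n_clusters=3)
```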
Table 4. Comparison with other state-of-the-art approaches. AUC: Area under the curve, DL: Deep learning, ML: Machine learning.
Authors | Methods | Classification Type | Performance
Uthappa et al., 2019 [13] | Hybrid DL | Multiclass (grade 2, 3, 4, and 5) | 98.0% (Accuracy)
Khouzani et al., 2003 [14] | ML | Multiclass (grade 2, 3, 4, and 5) | 97.0% (Accuracy)
Kwak et al., 2017 [15] | CNN | Binary (benign vs. cancer) | 0.95 (AUC)
Wang et al., 2020 [17] | Graph CNN | Binary (score 3 + 3 vs. 3 + 4) | 88.6% (Accuracy)
Bhattacharjee et al., 2019 [18] | ML | Binary (benign vs. malignant) | 88.7% (Accuracy)
 | | Binary (grade 3 vs. grades 4, 5) | 85.0% (Accuracy)
 | | Binary (grade 4 vs. grade 5) | 92.5% (Accuracy)
Bhattacharjee et al., 2020 [19] | DL | Binary (benign vs. malignant) | 94.0% (Accuracy)
Nir et al., 2018 [20] | ML | Binary (benign vs. all grades) | 88.5% (Accuracy)
 | | Binary (grade 3 vs. grades 4, 5) | 73.8% (Accuracy)
Ali et al., 2013 [21] | ML | Binary (no recurrence vs. recurrence) | 83.1% (Accuracy)
Kim et al., 2021 [22] | DL | Binary (benign vs. malignant) | 98.6% (Accuracy)
 | | Binary (low- vs. high-grade) | 93.6% (Accuracy)
Proposed | ML | Binary, split 2 (grade 3 vs. grade 5) | 100% (Accuracy)
 | | Multiclass, split 1 (grade 3 vs. grade 4 vs. grade 5) | 97.2% (Accuracy)
 | K-medoids clustering | Binary, split 2 (grade 3 vs. grade 5) | 96.7% (Accuracy)
 | | Multiclass, split 2 (grade 3 vs. grade 4 vs. grade 5) | 92.3% (Accuracy)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
