Article

Hybrid Techniques for Diagnosis with WSIs for Early Detection of Cervical Cancer Based on Fusion Features

by Badiea Abdulkarem Mohammed 1,*, Ebrahim Mohammed Senan 2,3, Zeyad Ghaleb Al-Mekhlafi 4, Meshari Alazmi 4, Abdulaziz M. Alayba 4, Adwan Alownie Alanazi 4, Abdulrahman Alreshidi 4 and Mona Alshahrani 5
1 Department of Computer Engineering, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
2 Department of Computer Science & Information Technology, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad 431004, India
3 Department of Computing and Artificial Intelligence, Modern Specialized College for Medical and Technical Sciences, Sana’a, Yemen
4 Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
5 National Center for Artificial Intelligence (NCAI), Saudi Data and Artificial Intelligence Authority (SDAIA), Riyadh 12391, Saudi Arabia
* Author to whom correspondence should be addressed.
Submission received: 13 August 2022 / Revised: 30 August 2022 / Accepted: 31 August 2022 / Published: 2 September 2022
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract:
Cervical cancer is a global health problem that threatens the lives of women. Liquid-based cytology (LBC) is one of the most widely used techniques for diagnosing cervical cancer; converting glass slides to whole-slide images (WSIs) allows the images to be evaluated by artificial intelligence techniques. Because of the shortage of cytologists and cytology devices, it is essential to develop automated systems that can receive and diagnose large numbers of images quickly and accurately, which would be useful in hospitals and clinical laboratories. This study aims to extract features by a hybrid method to obtain representative features and achieve promising results. Three proposed approaches were applied with different methods and materials, as follows: the first approach is a hybrid method called VGG-16 with SVM and GoogLeNet with SVM. The second approach classifies images of abnormal cervical cells with an ANN classifier using hybrid features extracted by VGG-16 and GoogLeNet. The third approach classifies images of abnormal cervical cells with an ANN classifier using features extracted by VGG-16 and GoogLeNet combined with hand-crafted features, which are extracted by the Fuzzy Color Histogram (FCH), Gray Level Co-occurrence Matrix (GLCM) and Local Binary Pattern (LBP) algorithms. Based on the fusion of CNN features with the FCH, GLCM and LBP (hand-crafted) features, the ANN classifier reached the best results for diagnosing abnormal cervical cells. With the hybrid VGG-16 and hand-crafted features, the ANN achieved an accuracy of 99.4%, specificity of 100%, sensitivity of 99.35%, AUC of 99.89% and precision of 99.42%.

1. Introduction

Cancer is one of the deadliest diseases of the 21st century; it arises from the development of abnormal, uncontrolled cells and kills about 9.6 million people annually [1]. Cervical cancer is a common cancer among women, ranking fourth among cancers that affect them. Human papillomavirus (HPV) is the main cause of cervical cancer, accounting for about 95% of infections. HPV is a common pathogen of the reproductive system; therefore, cervical cancer is associated with HPV infection. In addition, the women most at risk of cervical cancer are those infected with HIV [2]. Early diagnosis of cervical cancer is very important for a cure, especially in the first stage. The stages of its development differ from one woman to another, depending on the immune system, how quickly diagnosis and treatment are received, and age. In the second stage, cervical cancer spreads to areas surrounding the uterus, such as the vagina, and there is still a high chance of successful treatment because the disease remains confined to the pelvis. In the third stage, it spreads to the pelvic lymph nodes and the lower vagina and causes swelling of the kidneys, and the chance of survival is reduced. If cervical cancer is not treated, it spreads outside the pelvis to the bladder and other parts of the body; this is the fourth stage, which is difficult to treat and leads to death [3]. Many modern techniques exist for conducting examinations, such as LBC, computer microscopy, automated examination devices, digital colposcopy and HPV testing. When smears are taken from the cervix and placed on glass slides, the cells of interest mix with blood, secretions and debris, so analysis and diagnosis are poor. LBC technology solves this challenge: it preserves the cells of interest, removes blood, secretions and debris, and is one of the most widely used techniques for cervical cancer screening [4].
LBC plays an essential role in detecting abnormal cells in the cervix and controlling the development of cervical cancer. When an LBC slide is magnified 400× under a microscope, each slide contains thousands of cells that cytologists must examine [5]. Because of the lack of health care resources, examination is limited to a specific number of slides per day. Moreover, LBC analysis requires highly experienced doctors to diagnose abnormal cervical cells accurately. Examination of slides is a tedious, time-consuming task, and inconsistent diagnoses and differing opinions among doctors can worsen a patient’s condition and lead to death [6,7]. Thus, the use of artificial intelligence to diagnose abnormal cervical cancer cells addresses the shortcomings of manual diagnosis. The glass slides are converted into whole-slide images (WSIs) by digitally scanning the slides. Automated analysis of WSIs with artificial intelligence techniques is thus an efficient way to help pathologists identify abnormal cells [8]. Because of the similar and complex features of cervical cells and the need to achieve satisfactory accuracy, deep learning models have great potential to improve the analysis and diagnosis of cervical cancer. Deep learning is distinguished by its high ability to identify patterns, extract high-level representative features from a large data set, capture interrelationships between complex features, and identify ambiguous features that are difficult for experts and specialists to perceive [9]. Because of the similarity of features between cervical cancer types, this study aims to extract features using deep learning, fuse them with features from other algorithms, and diagnose them with a neural network.
The main contributions to this study are as follows:
  • Application of the PCA algorithm to reduce the features produced by the CNN models in all the proposed methods.
  • Application of the hybrid techniques VGG-16 + SVM and GoogLeNet + SVM to diagnose WSIs of abnormal cervical cells.
  • Combining the features of VGG-16 with those of GoogLeNet, then classifying them with an ANN classifier to diagnose WSIs of abnormal cervical cells.
  • Diagnosis of cervical abnormal cell images by an ANN with a fusion-feature technique that takes the features of VGG-16 and GoogLeNet separately and fuses each with the hand-crafted features.
The rest of this study is organized as follows: Section 2 discusses related work. Section 3 presents the methods for analyzing WSIs of abnormal cervical cells. Section 4 summarizes the results of the systems. Section 5 discusses the performance of the systems. Section 6 concludes the study.

2. Related Work

This section reviews a set of research papers on the diagnosis of WSIs to detect abnormal cells of the cervix.
Ziquan et al. designed an effective framework with a lightweight INCNET network. The network extracts features inside the cell through multi-range connections to collect features, and achieved an AUC of 87.2% [10]. Rohit et al. presented a Sugeno fuzzy-integral system that aggregates the results of three CNN models. The system observes the confidence of each classifier, adaptively adjusts each classifier’s importance and extracts features from each of them, which leads to superior classification [11]. Fahdi et al. presented a deep learning model for classifying WSI images quickly and efficiently. The model achieved an AUC between 89% and 96% [12]. Xia et al. designed a new framework based on Faster R-CNN-FPN to diagnose abnormal cervical cells. They added convolutional layers to improve feature extraction, and the spatial connection between foreground and background was enhanced by equipping the Region Proposal Network with a Global Context-Aware unit [13]. Tingzhen et al. presented a network that combines a self-supervised method with a multiple-instance method to handle annotations as labels for the WSI data set. The network reached good performance in diagnosing cervical cancer [14]. Manish et al. designed multiple CNN-model classifiers to show regions of interest in WSIs containing abnormal cervical cells. The systems reached the best rating with an accuracy of 94% [15]. Hua et al. reported the development of the CytoBrain system for cervical cancer diagnosis. The system works in three stages: first, segmentation of abnormal cells to properly extract them from WSIs; second, building cell classifiers using a CompactVGG network; third, a diagnostic unit that automatically diagnoses WSIs. They concluded that the CompactVGG model achieved better results than the VGG11 model [16]. Shenghua et al. developed a network for lesion-cell diagnosis by combining low- and high-resolution WSIs for feature extraction and classification by a recurrent neural network. The network reached a sensitivity of 95.1% and a specificity of 93.5% [17]. Ching-Wei et al. developed a framework based on deep learning to detect cervical lesions at high speed compared to the SegNet and U-Net baselines. They demonstrated that the system was able to accurately segment regions of interest, achieving an accuracy of 93%, a Jaccard index of 84% and an F-measure of 88% [18]. Antoine et al. proposed a classifier for interpreting WSIs that could be used to assist pathologists. The classifier was generalized to perform localization, detecting at least one malignant cell in a WSI, and achieved an accuracy of 80.4% and a kappa of 87% [19]. Xiaohong et al. applied a hybrid DCNN-RF technique to predict the risk of cervical malignant cells: a DCNN extracted features and a random forest performed the WSI diagnosis. The method achieved an accuracy of 90.32% and an AUC of 83% [20]. Xiaoli et al. proposed taking paired samples from the image to extract representative negative samples for each malignant tumor and construct sample-pair images. Mixed samples were extracted to balance soft and hard samples. The image-level sampling method extracts negative samples and creates a pair image to finally build a cervical-cell detector. The proposal achieved a mean accuracy of 57.1% [21]. Ziquan et al. proposed PolarNet to learn the morphology of cervical stromal cells and resolve false-positive samples from large-scale, gigapixel-size WSIs. PolarNet extracts deep and reliable features of object models [22]. Hritam et al. provided a framework for deep feature extraction using CNN models and high-dimensional feature reduction. The optimal features were selected by an evolutionary optimization method and fed into an SVM classifier for the final classification of the cervical cancer data set [23]. Ankur et al. presented three pre-trained CNN models, in which the framework merged the classification layers according to the rank of two nonlinear functions. The method provides predictions on test samples with good accuracy [24]. Rishav et al. presented three CNN models and added layers to learn the ideal features. They proposed a clustering approach to minimize the error between expected and actual values based on three distance measures [25]. Mamunur et al. proposed DeepCervix, a hybrid technique based on feature fusion. This cervical cancer classification technique combines different deep learning models to obtain more information and improve classification [26].
From the above, we conclude that results for detecting abnormal cells and discriminating between their different types remain unsatisfactory, due to the morphological similarity among abnormal cell types. Therefore, this study aims to extract features using deep learning and fuse them with features extracted by several other methods to reveal and discriminate the morphological characteristics of each type of abnormal cell. Moreover, this study targets five types of cervical cancer cells; to our knowledge, no previous research covers all the types of abnormal cells targeted in this study.

3. Methods and Materials

This section describes the methods proposed in this study for analyzing WSIs of abnormal cells of the cervix. First, images of the cervical cancer data set were enhanced before being fed into the proposed approaches. The PCA algorithm was applied after each of the VGG-16 and GoogLeNet models to reduce the high-dimensional features. The first proposed approach is a hybrid technique. The second approach diagnoses the cervical cancer data set with an ANN classifier fed with the combined deep features of the VGG-16 and GoogLeNet models. The third approach diagnoses the cervical cancer data set with an ANN classifier fed with deep features integrated with hand-crafted features, as shown in Figure 1.

3.1. Description of the Data Set

This section describes the cervical cancer data set used to evaluate the systems of this study. The data set was obtained from Kaggle and consists of 25,000 WSIs of LBC glass slide samples. It is divided into five types of cervical cancer cells, evenly distributed with 5000 WSIs per type, as follows: cervix_dyk (Dyskeratotic), cervix_koc (Koilocytotic), cervix_mep (Metaplastic), cervix_pab (Parabasal) and cervix_sfi (Superficial-Intermediate). Thus, the systems in this study make early diagnoses of five types of cervical cancer [27]. All WSIs of the LBC samples are 512 × 512 pixels. Figure 2 (Before) shows a set of images representing all cervical malignancies targeted in this study.

3.2. Enhancing WSIs of the Cervix Cancer

The glass slides of LBC samples contain artifacts unrelated to the abnormal cervical cells, along with solutions added to the slide during analysis. When WSIs are acquired from LBC slides, these artifacts lead to improper diagnostic results [28]. This study therefore enhanced the WSI data set to remove noise and reveal cell edges. The RGB color space was averaged to adjust each color channel, and a 5 × 5 pixel average filter was applied to remove noise [29]. The filter replaces each central pixel with the average over it and its 24 adjacent pixels, as in Equation (1), and continues to move over the image until all pixels have been processed.
z(x) = (1/p) ∑_{i=0}^{p−1} s(x − i)  (1)
where z(x) is the output of the filter, s(x − i) are the prior inputs, and p is the number of pixels of the average filter.
To increase the contrast of the abnormal cervical cells, the contrast-limited adaptive histogram equalization (CLAHE) method was used. The method redistributes the luminous pixels to brighten dark areas, thus improving the illumination of the edges of the abnormal cells [30]. Each target pixel is compared with its neighboring pixels, and the enhancement proceeds as follows: when the value of the neighboring pixels is greater than the target pixel, the contrast decreases; when it is less, the contrast increases. Thus, the contrast of the edges of the abnormal cells was increased. Finally, all WSIs of the cervical cancer data set were enhanced. Figure 2 (After) shows samples from the cervical cancer data set representing the five types of abnormal cells.
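As an illustration of the averaging step in Equation (1), the 5 × 5 mean filter can be sketched in a few lines of NumPy. This is a minimal stand-in, not the study’s actual implementation; the helper name `mean_filter_5x5` and the edge-replication border policy are assumptions.

```python
import numpy as np

def mean_filter_5x5(img: np.ndarray) -> np.ndarray:
    """Replace each pixel with the mean of its 5 x 5 neighbourhood.

    Edges are handled by replicating border pixels (an assumption;
    the paper does not state its border policy).
    """
    padded = np.pad(img.astype(float), 2, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(5):            # accumulate the 25 shifted copies
        for dx in range(5):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 25.0

# A constant image is unchanged by averaging.
flat = np.full((8, 8), 7.0)
print(np.allclose(mean_filter_5x5(flat), 7.0))
```

A CLAHE pass (e.g., OpenCV’s `createCLAHE`) would then follow this filter in the enhancement pipeline described above.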

3.3. CNN Models with SVM

When pre-trained CNN models are applied directly to WSIs of abnormal cervical cells, the diagnostic results are unsatisfactory, and training is time-consuming and requires expensive hardware. Thus, a hybrid approach was applied to address these challenges [31]. The main idea of this technique is to replace the final classification layers of the CNN with an SVM. The technique extracts features with the CNN and stores them in a feature matrix, which is fed to the SVM to obtain high accuracy at high speed. In this study, the VGG-16 and GoogLeNet models were applied to extract features, which were then classified by the SVM. Thus, the hybrid techniques are called VGG-16 + SVM and GoogLeNet + SVM.

3.3.1. Extracting Feature Maps

In recent years, CNN techniques have been applied, which have greatly served humanity in various fields, including medical fields, such as early diagnosis of tumors and various diseases or prediction of the spread of diseases and epidemics before their outbreak. CNN technologies are distinguished from traditional networks by their superior ability to extract feature maps with promising accuracy. CNN models contain many layers interconnected by millions of weights and connections between neurons [32]. CNN models receive the optimized images and pass them to the following layers in which complex computations are performed to perform specific tasks. The most critical layers in which feature maps are extracted are convolutional layers and auxiliary layers that follow some convolutional layers.
Convolutional layers are the essential layers of CNNs, and their task varies from layer to layer. Their behavior is governed by three primary parameters: the filter size, which differs per convolutional layer; zero-padding (P), which preserves the original image size; and the stride (S), which sets the number of steps the filter moves across the image. Once the filter size f(t) is determined, a corresponding set of pixels is selected on the image x(t) [33]. The filter moves over the image, selecting a group of pixels each time, and the operation is repeated until all pixels are processed, as in Equation (2). Each convolutional layer’s output is the input to the next layer; the output size is controlled by the input image size L × W × D, the zero-padding P and the stride S, as in Equation (3).
W(t) = (x ∗ f)(t) = ∫ x(a) f(t − a) da  (2)
where W(t) denotes the output, x(t) the input and f(t) the filter.
O(N) = ((W − K + 2P)/S) + 1  (3)
where O(N) denotes the size of the output neurons, W the input size and K the filter size.
The pooling layers reduce high-dimensional features. CNN models produce millions of weights and parameters that require complex calculations; the pooling layer eases these calculations. Pooling layers come in two types, average and max, each with a specific method for reducing high dimensions [34]. Average pooling selects a group of image pixels matching the filter size, calculates their average, and replaces all selected values with that average, as shown in Equation (4). Max pooling selects a group of image pixels matching the filter size, takes the maximum value, and replaces all selected values with it, as in Equation (5).
z(i, j) = (1/k²) ∑_{m,n=1}^{k} f((i − 1)p + m, (j − 1)p + n)  (4)
z(i, j) = max_{m,n=1,…,k} f((i − 1)p + m, (j − 1)p + n)  (5)
where m, n denote the position in the matrix, p the stride, f the filter and k the window size.
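Equations (4) and (5) can be illustrated with a short NumPy sketch of non-overlapping average and max pooling (a simplified toy with stride equal to the window size; the helper `pool2d` is an assumption, not code from the study):

```python
import numpy as np

def pool2d(x: np.ndarray, k: int = 2, mode: str = "max") -> np.ndarray:
    """Non-overlapping k x k pooling with stride k (Equations (4) and (5))."""
    h, w = (x.shape[0] // k) * k, (x.shape[1] // k) * k
    blocks = x[:h, :w].reshape(h // k, k, w // k, k)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

x = np.array([[1., 2., 5., 6.],
              [3., 4., 7., 8.],
              [0., 0., 1., 1.],
              [0., 4., 1., 1.]])
print(pool2d(x, 2, "max"))   # each 2 x 2 block collapses to its maximum
print(pool2d(x, 2, "avg"))   # ... or to its average
```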
In this section, feature maps of WSIs of LBC slide samples of abnormal cervical cells are extracted using the VGG-16 [35] and GoogLeNet [36] models. The CNN models produce high-dimensional feature maps, so PCA was used to reduce them and select the most important representative features for each class of abnormal cervical cell.

3.3.2. SVM Algorithm

SVM is an algorithm for solving regression and classification tasks. SVM aims to find a line or hyperplane in N dimensions that clearly separates the feature classes of the WSIs [37]. The best hyperplane is the one with the maximum margin between classes; in other words, the hyperplane with the maximum distance to the data points closest to each class, which is called the hard margin or maximum-margin hyperplane [38]. SVM can tolerate outliers by finding the best hyperplane that maximizes the margin; this margin is called the soft margin. If the data are not linearly separable, the algorithm creates a new variable through a kernel, a function of the distance to the original variables; this nonlinear function is known as the kernel [39]. The kernel maps low-dimensional spaces into higher-dimensional ones, transforming nonlinearly separable data into separable data [40]. Figure 3 illustrates the framework of the hybrid technique for diagnosing WSIs of abnormal cell types in the cervical cancer data set. Note that the VGG-16 and GoogLeNet models receive the enhanced WSIs and extract high-dimensional feature maps. The PCA method [41] was used to reduce the high-dimensional features and store them in feature vectors, which are then classified by feeding them to the SVM classifier.
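The hybrid pipeline in Figure 3 (deep features → PCA → SVM) can be sketched with scikit-learn. The random vectors below merely stand in for VGG-16/GoogLeNet feature maps, and the sample count and dimensions are scaled down; this is an illustrative assumption, not the study’s code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))     # stand-in for deep feature maps
y = rng.integers(0, 5, size=200)    # five cervical-cell classes
X += y[:, None]                     # shift per class so the toy data are separable

# PCA reduces the high-dimensional features before the SVM classifies them.
clf = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
clf.fit(X[:160], y[:160])
print("held-out accuracy:", clf.score(X[160:], y[160:]))
```

In the paper the SVM replaces the CNN’s classification layers; here the `SVC` simply consumes whatever feature matrix it is given.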

3.4. Fusion Approach of Deep Features Maps

Implementing pre-trained CNN models still faces challenges such as unsatisfactory accuracy, costly hardware and time consumption [42]. In addition, the features extracted by either the VGG-16 or GoogLeNet model alone are insufficient to reach promising accuracy. Therefore, to address these challenges, we applied a methodology that integrates the features extracted by the VGG-16 and GoogLeNet models into the same feature vectors and diagnoses them with the ANN algorithm. Figure 4 describes the basic idea of this approach as follows: first, feed the enhanced cervical cancer data set to the VGG-16 and GoogLeNet models. Second, extract 2048 features from the VGG-16 model and store them in feature vectors, so the data set becomes 25,000 × 2048. Third, extract 2048 features from the GoogLeNet model and store them in feature vectors, so the data set becomes 25,000 × 2048. Fourth, merge the feature vectors of the VGG-16 and GoogLeNet models into new feature vectors of size 25,000 × 4096. Fifth, use PCA to reduce the features to a data set of size 25,000 × 720. Sixth, feed the low-dimensional features into the ANN, which classifies them with better accuracy and speed than the CNN models.
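The shape bookkeeping of the fusion steps above can be sketched as follows. Synthetic features and a smaller sample size stand in for the 25,000 WSIs, and the paper’s 720 PCA components are scaled down accordingly:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n = 100                               # stand-in for the 25,000 WSIs
f_vgg  = rng.normal(size=(n, 2048))   # VGG-16 features (synthetic here)
f_goog = rng.normal(size=(n, 2048))   # GoogLeNet features (synthetic here)

fused = np.concatenate([f_vgg, f_goog], axis=1)      # n x 4096 joint vectors
reduced = PCA(n_components=80).fit_transform(fused)  # paper: 720 components
print(fused.shape, reduced.shape)
```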
Figure 5 describes the basic architecture of the ANN: an input layer with 720 input units, matching the number of features after dimensionality reduction by the PCA algorithm; hidden layers that perform the complex computations needed to solve the required problems (the network is set to 15 hidden layers, which gave the best performance for diagnosing WSIs of cervical cancer); and an output layer whose activation function classifies the features of each input image into the five classes (types of abnormal cells) of cervical cancer.

3.5. Approach to Merging Features of CNN with Hand-Crafted

In this section, a novel approach is applied to diagnose WSIs of abnormal cells of the cervical data set using an ANN algorithm, based on fusing the features extracted by VGG-16 and GoogLeNet with texture, color and shape features extracted by traditional algorithms (FCH, GLCM and LBP) [43].
The steps to implement this approach are as follows: first, feed the enhanced cervical cancer data set to the VGG-16 and GoogLeNet models. Second, extract the features from the VGG-16 model and store them in feature vectors, so the data set becomes 25,000 × 2048; similarly, extract the features from the GoogLeNet model and store them in feature vectors, so the data set again becomes 25,000 × 2048. Third, because these features are high-dimensional, PCA was used to reduce the features of both CNN models, so that the feature size of the data set became 25,000 × 512 for each of the VGG-16 and GoogLeNet models. Fourth, extract texture, color and shape features with the FCH, GLCM and LBP algorithms and combine them into feature vectors.
Texture, color and shape are the essential features for representing image data. The FCH algorithm is used to extract color features, which are among the most powerful features for diagnosing WSIs of LBC glass slide samples for cervical cancer [38]. Each local color has a histogram bin, and all the colors of the histogram bins together represent the colors of the region of interest (ROI). All colors in one histogram bin are considered similar, and colors in different histogram bins are considered different even if they look alike; the FCH method checks color similarity through the membership value of each pixel. This algorithm extracts 16 color features from each image, so the method produces feature vectors of size 25,000 × 16.
The GLCM represents the ROI as a gray-level matrix and extracts texture features through the pixel values of a given region. A region is smooth when it contains equal or close pixel values, and coarse when its pixel values vary. Statistical metrics are computed from the spatial information of the GLCM, which is determined by a distance d and an angle θ between pixels [44]. Four angles measure the relationship between pixels, 0°, 45°, 90° and 135°, each combined with the distance d. The GLCM extracts 13 texture features, so the method produces feature vectors of size 25,000 × 13.
The LBP describes the texture of surfaces by measuring local anisotropy. The image is converted to a gray-level array, and the target (central) pixel g_c is encoded according to its neighboring pixels g_p. In this work, the algorithm was set to a 5 × 5 window: each time a new central pixel was selected, it was analyzed against its 24 neighboring pixels as in Equation (6). The method was repeated until all pixels were replaced with new values according to the LBP algorithm [45]. This algorithm extracts 203 features from each image, so the method produces feature vectors of size 25,000 × 203.
LBP_{R,P} = ∑_{p=0}^{P−1} s(g_p − g_c) 2^p  (6)
where g_c is the grey value of the center pixel, g_p the grey value of the neighboring pixels, R the neighborhood radius, and P the number of neighboring pixels.
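Equation (6) can be demonstrated on a single pixel with the common 3 × 3 (R = 1, P = 8) variant; the study itself uses a 5 × 5 window with 24 neighbours, and the neighbour ordering below is one conventional choice, not necessarily the paper’s:

```python
import numpy as np

def lbp_code(patch: np.ndarray) -> int:
    """LBP code of the centre pixel of a 3 x 3 patch (Equation (6), R = 1, P = 8).

    s(g_p - g_c) is 1 when the neighbour is >= the centre, else 0.
    """
    gc = patch[1, 1]
    # Clockwise from the top-left corner -- one common ordering convention.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    return sum(int(patch[r, c] >= gc) << p for p, (r, c) in enumerate(order))

patch = np.array([[9, 1, 9],
                  [1, 5, 1],
                  [9, 1, 9]])
print(lbp_code(patch))   # corners set bits 0, 2, 4, 6 -> 1 + 4 + 16 + 64 = 85
```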
The features of the FCH, GLCM and LBP methods were fused into the same feature vectors so that the feature size of the data set became 25,000 × 232.
Finally, as shown in Figure 6, the features of the VGG-16 were fused with the hand-crafted features to a size of 25,000 × 744 and then fed into the ANN network for classification. Similarly, the features of the GoogLeNet were fused with the hand-crafted features to a size of 25,000 × 744 and then fed into the ANN network for classification.
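The dimension bookkeeping of this fusion (16 FCH + 13 GLCM + 203 LBP = 232 hand-crafted features, joined with 512 PCA-reduced CNN features to give 744) can be checked with a short sketch; random vectors stand in for the real features and the sample count is reduced:

```python
import numpy as np

n = 50                                     # stand-in for the 25,000 WSIs
deep = np.random.rand(n, 512)              # PCA-reduced CNN features (synthetic)
fch  = np.random.rand(n, 16)               # Fuzzy Color Histogram features
glcm = np.random.rand(n, 13)               # GLCM texture features
lbp  = np.random.rand(n, 203)              # LBP texture features

handcrafted = np.hstack([fch, glcm, lbp])  # n x 232
fused = np.hstack([deep, handcrafted])     # n x 744, as in Figure 6
print(handcrafted.shape, fused.shape)
```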

4. Results of System Performance

4.1. Split of Data Set

This study included several methodological systems combining CNN models, machine learning and hybrid features extracted by multiple methods, all aiming to reach satisfactory results for diagnosing abnormal cell images of cervical cancer. The data set contains 25,000 WSIs of LBC glass slide specimens equally divided among five classes (abnormal cell types) of cervical cancer: 5000 images each for the Dyskeratotic, Koilocytotic, Metaplastic, Parabasal and Superficial-Intermediate classes. We set 80% of the data set for training and validation, and the remaining 20% for testing system performance. Table 1 summarizes the splitting of the cervical cancer data set across all stages of implementing the systems.
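A stratified 80/20 split like the one in Table 1 can be reproduced with scikit-learn; the labels below are a scaled-down stand-in (100 images per class rather than 5000):

```python
import numpy as np
from sklearn.model_selection import train_test_split

classes = ["cervix_dyk", "cervix_koc", "cervix_mep", "cervix_pab", "cervix_sfi"]
y = np.repeat(np.arange(len(classes)), 100)   # 100 stand-in images per class
X = np.arange(len(y)).reshape(-1, 1)          # dummy image indices

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=0)

# Stratification keeps all five classes balanced in both splits.
print(len(X_tr), len(X_te))   # 400 100
```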

4.2. Evaluation Metrics

All systems for diagnosing abnormal cell images of the cervical cancer data set were evaluated using accuracy, specificity, sensitivity, AUC and precision, as in Equations (7) to (11). Each system yields a confusion matrix covering all samples of the data set in terms of TP, TN, FP and FN: TP and TN represent images that the systems diagnosed correctly, while FP and FN represent images that the systems diagnosed incorrectly [46].
Accuracy = (TN + TP)/(TN + TP + FN + FP) × 100%  (7)
Specificity = TN/(TN + FP) × 100%  (8)
Sensitivity = TP/(TP + FN) × 100%  (9)
AUC = True Positive Rate/False Positive Rate  (10)
Precision = TP/(TP + FP) × 100%  (11)
where TP indicates WSIs of abnormal cells properly classified as cervical cancer; TN indicates WSIs of normal cells properly classified as normal; FP indicates WSIs of normal cells incorrectly classified as abnormal cervical cancer cells; and FN indicates WSIs of abnormal cells incorrectly classified as normal cells.
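Equations (7)–(9) and (11) reduce to simple ratios of the confusion-matrix counts (AUC, Equation (10), is instead derived from the ROC curve). The helper `metrics` below is hypothetical and uses illustrative counts:

```python
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Percentage metrics from confusion-matrix counts, Equations (7)-(9) and (11)."""
    return {
        "accuracy":    100 * (tp + tn) / (tp + tn + fp + fn),
        "specificity": 100 * tn / (tn + fp),
        "sensitivity": 100 * tp / (tp + fn),
        "precision":   100 * tp / (tp + fp),
    }

m = metrics(tp=90, tn=95, fp=5, fn=10)
print(m["accuracy"], m["specificity"], m["sensitivity"])   # 92.5 95.0 90.0
```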

4.3. Results of CNN Models

The VGG-16 and GoogLeNet models extract deep features from convolutional layers and feed them to fully connected layers. The fully connected layers then diagnose the deep features and send them to a SoftMax activation function that sorts the features into five classes according to the number of classes in the cervical cancer data set.
When diagnosing WSIs of the cervical cancer data set using the VGG-16 and GoogLeNet models alone, the models yield less efficient diagnostic results than the hybrid and combined-feature methods. Table 2 shows the results of VGG-16 and GoogLeNet for the classification of the cervical cancer data set. The VGG-16 model reached an accuracy of 91.22%, specificity of 97.6%, sensitivity of 91.41%, an AUC of 93.45% and precision of 91.68%. In contrast, the GoogLeNet model yielded an accuracy of 95.96%, specificity of 98.64%, sensitivity of 95.45%, an AUC of 96.32% and precision of 95.81%.

4.4. Results of CNN Models with SVM

This subsection presents the performance of the hybrid VGG-16 + SVM and GoogLeNet + SVM systems for diagnosing WSIs of LBC slide samples of abnormal cervical cancer cells. Each system consists of a CNN model (VGG-16 or GoogLeNet) tasked with extracting features and an SVM tasked with classifying them. It is worth noting that the SVM classifier replaces the last classification layers of the VGG-16 and GoogLeNet models. This approach was used because of its high performance, its speed in training on the data set, and its ability to run on a medium-cost computer.
Table 3 shows the results of the two-hybrid networks for the classification of the cervical cancer data set. The VGG-16 + SVM system is slightly better than the GoogLeNet + SVM system. The VGG-16 + SVM system yielded an accuracy of 97.5%, specificity of 99.4%, sensitivity of 97.6%, an AUC of 98.43% and precision of 97.72%. In contrast, the GoogLeNet + SVM system yielded an accuracy of 96.8%, specificity of 99.2%, sensitivity of 96.8%, an AUC of 98.21% and precision of 96.9%.
Figure 7 presents the diagnostic performance of the VGG-16 + SVM and GoogLeNet + SVM networks for classifying abnormal cells of cervical cancer.
Figure 8 shows the results of the hybrid method through the confusion matrix for diagnosing five types of abnormal cervical cells. The VGG-16 + SVM reached a per-class diagnostic accuracy of 97.3%, 98%, 97.1%, 97.1% and 98.2% for cervix_dyk, cervix_koc, cervix_mep, cervix_pab and cervix_sfi, respectively. In contrast, GoogLeNet + SVM achieved a per-class diagnostic accuracy of 97.2%, 98.2%, 96%, 96% and 96.6% for cervix_dyk, cervix_koc, cervix_mep, cervix_pab and cervix_sfi, respectively.
Regarding the samples that the VGG-16 + SVM system failed to diagnose correctly: it misclassified 27 images of the cervix_dyk class, 20 of cervix_koc, 29 of cervix_mep, 29 of cervix_pab and 18 of cervix_sfi.
The GoogLeNet + SVM system misclassified 28 images of the cervix_dyk class, 18 of cervix_koc, 40 of cervix_mep, 40 of cervix_pab and 34 of cervix_sfi.

4.5. Results of the Fusion Approach for Deep Feature Maps

This subsection reports the diagnosis of WSIs of cervical abnormal cells by an ANN fed with the combined features extracted by the VGG-16 and GoogLeNet models. Because the individual CNN models did not reach satisfactory results, features were extracted by both GoogLeNet and VGG-16 and merged into the same feature vectors. The combined deep feature vectors were high-dimensional, so the PCA method was applied to reduce the fused features. Finally, the ANN received the low-dimensional feature vectors for classification.
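The fusion-then-reduction step can be sketched as follows, again with synthetic stand-ins for the two deep feature maps (the dimensions and class structure are illustrative assumptions, not the paper's configuration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, n_classes = 200, 5
y = rng.integers(0, n_classes, size=n)
# Stand-ins for the VGG-16 and GoogLeNet feature maps of the same images.
vgg_feats = rng.normal(size=(n, 128)) + y[:, None]
googlenet_feats = rng.normal(size=(n, 96)) + y[:, None]

# Fuse the two feature maps into one vector per image, then reduce with PCA.
fused = np.hstack([vgg_feats, googlenet_feats])
reduced = PCA(n_components=20, random_state=0).fit_transform(fused)

X_tr, X_te, y_tr, y_te = train_test_split(reduced, y, test_size=0.25,
                                          stratify=y, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)
print(f"fused-feature test accuracy: {ann.score(X_te, y_te):.2f}")
```

The key design point is that PCA is fitted on the fused vectors, so the ANN only ever sees the low-dimensional representation.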
Table 4 shows the results of the ANN with the hybrid features of both VGG-16 and GoogLeNet for diagnosing WSIs of abnormal cells for the cervical cancer data set. This technique reached an accuracy of 98.5%, specificity of 99.8%, sensitivity of 98.4%, an AUC of 98.92% and precision of 98.2%.
Figure 9 shows the confusion matrix generated by the ANN with the hybrid features extracted by the VGG-16 and GoogLeNet models for classifying five types of abnormal cervical cancer cells. Class 1 refers to cervix_dyk, class 2 to cervix_koc, class 3 to cervix_mep, class 4 to cervix_pab, and class 5 to cervix_sfi. The ANN based on the fused features achieved per-class accuracies of 98.1%, 99.1%, 98.4%, 98.2% and 98.6% for cervix_dyk, cervix_koc, cervix_mep, cervix_pab and cervix_sfi, respectively.
Regarding samples that the ANN based on the fused CNN features failed to diagnose correctly: it misclassified 19 images of the cervix_dyk class, 9 of cervix_koc, 16 of cervix_mep, 18 of cervix_pab and 14 of cervix_sfi.

4.6. Results of the Approach Merging CNN Features with Hand-Crafted Features

This subsection reports the diagnosis of WSIs of abnormal cells of the cervical data set using an ANN fed with mixed features obtained by two methods: first, features extracted by VGG-16 fused with the hand-crafted features and stored in feature vectors; second, features extracted by GoogLeNet fused with the hand-crafted features and stored in feature vectors. PCA was applied to reduce the features before sending them to the ANN for classification.
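The hand-crafted side of this fusion can be illustrated with compact NumPy implementations of an LBP histogram and two GLCM statistics (FCH is omitted here for brevity, and a random vector stands in for the CNN features); this is a sketch of the idea, not the paper's exact descriptors:

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour Local Binary Pattern histogram of a grayscale image."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()

def glcm_features(img, levels=8):
    """Contrast and homogeneity from a horizontal co-occurrence matrix."""
    q = (img.astype(float) * levels / (img.max() + 1)).astype(int)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = (glcm * (i - j) ** 2).sum()
    homogeneity = (glcm / (1 + np.abs(i - j))).sum()
    return np.array([contrast, homogeneity])

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
deep = rng.normal(size=512)                  # stand-in CNN feature vector
handcrafted = np.concatenate([lbp_histogram(img), glcm_features(img)])
fused = np.concatenate([deep, handcrafted])  # CNN + hand-crafted fusion
print(fused.shape)                           # (512 + 256 + 2,) = (770,)
```

Production systems would typically use library implementations (e.g., scikit-image's `local_binary_pattern` and `graycomatrix`) and extract a fuller GLCM statistic set.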

4.6.1. Best Validation Performance

Cross-entropy is a tool for evaluating the ANN when diagnosing WSIs of abnormal cells of the cervix. It measures the divergence between the network's predicted outputs and the target outputs, and it yields the best validation performance for the cervical cancer data set across the training, validation and testing phases [47]. During evaluation the data set passes through many epochs, and each stage has a specific color: red for testing, blue for training, green for validation, and the best network performance is marked by the intersection of the dashed lines on the x- and y-axes. Figure 10 shows the cross-entropy of the network on the cervical cancer data set. The ANN with VGG-16 and hand-crafted features obtained the best evaluation, reaching the lowest error of 0.017533 at epoch 78. In contrast, the same network with GoogLeNet and hand-crafted features obtained the best evaluation with the lowest error of 0.013565 at epoch 39.
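For reference, the categorical cross-entropy underlying these curves can be computed directly; the toy 3-class predictions below are invented for illustration:

```python
import numpy as np

def cross_entropy(probs, targets):
    """Mean categorical cross-entropy between predicted class
    probabilities and one-hot targets."""
    eps = 1e-12                       # guard against log(0)
    return -np.mean(np.sum(targets * np.log(probs + eps), axis=1))

# Two toy predictions for a 3-class problem: one confident and correct,
# one diffuse and mostly wrong.
targets = np.array([[1, 0, 0], [0, 1, 0]])
good = np.array([[0.9, 0.05, 0.05], [0.1, 0.8, 0.1]])
bad = np.array([[0.3, 0.4, 0.3], [0.5, 0.3, 0.2]])
print(cross_entropy(good, targets) < cross_entropy(bad, targets))  # True
```

Lower values therefore indicate predictions that concentrate probability on the correct class, which is why the minimum of the validation curve marks the best epoch.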

4.6.2. Error Histogram

The error histogram is a tool for evaluating the ANN classifier when diagnosing WSIs of abnormal cells of the cervix. When the ANN is trained, it produces an error histogram recording the difference between the output and target values across all phases of the data set. Each phase has a specific color: red for the testing data, blue for the training data, green for the validation data, and the orange line marks the best performance, where the error between output and target is zero [48]. The performance of the ANN on the abnormal cell images of the cervical cancer data set is shown in Figure 11. The ANN with VGG-16 and hand-crafted features achieved its best evaluation with an error histogram of 20 bins spanning −0.9445 to 0.945. In contrast, the ANN with GoogLeNet and hand-crafted features achieved its best evaluation with an error histogram of 20 bins spanning −0.9088 to 0.9108.
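Such a histogram is straightforward to reproduce: collect the per-sample errors (target minus output) and bin them. The sketch below uses simulated outputs, not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(3)
targets = rng.integers(0, 2, size=500).astype(float)
# Simulated network outputs: targets plus small noise, clipped to [0, 1].
outputs = np.clip(targets + rng.normal(scale=0.1, size=500), 0, 1)
errors = targets - outputs              # error = target - output, per sample

counts, edges = np.histogram(errors, bins=20)   # 20 bins, as in Figure 11
zero_bin = np.searchsorted(edges, 0.0) - 1      # bin containing zero error
print(f"bin edges span [{edges[0]:.3f}, {edges[-1]:.3f}]; "
      f"bin nearest zero holds {counts[zero_bin]} samples")
```

A well-trained network shows most mass piled into the bins around zero error, exactly as reported for both feature combinations.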

4.6.3. Validation Checks and Gradient

The gradient is a tool for evaluating the ANN classifier when diagnosing WSIs of abnormal cells of the cervix. During ANN training, the gradient and validation checks are computed at every epoch: the network records the minimum squared error between the expected and actual values, checks all saved values, and reports the smallest one along with the gradient and validation checks at the corresponding epoch. Figure 12 shows the network performance on images of abnormal cells of the cervical cancer data set. The ANN with VGG-16 and hand-crafted features yielded its best evaluation with a gradient of 0.010455 and 6 validation checks at epoch 84. In contrast, the ANN with GoogLeNet and hand-crafted features yielded its best evaluation with a gradient of 0.0021672 and 6 validation checks at epoch 45.
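The validation-check mechanism is essentially early stopping with a patience counter: training halts once the validation error fails to improve for a fixed number of consecutive epochs (6 in both runs above). A minimal sketch of that logic, with an invented validation curve:

```python
import numpy as np

def train_with_validation_checks(val_errors, max_fail=6):
    """Stop when the validation error fails to improve for max_fail epochs
    in a row; returns (stop epoch, best error, epoch of best error)."""
    best, best_epoch, fails = np.inf, 0, 0
    for epoch, err in enumerate(val_errors, start=1):
        if err < best:
            best, best_epoch, fails = err, epoch, 0
        else:
            fails += 1
            if fails >= max_fail:
                return epoch, best, best_epoch
    return len(val_errors), best, best_epoch

# Synthetic validation curve: improves for four epochs, then stagnates.
curve = [0.5, 0.3, 0.2, 0.15, 0.21, 0.22, 0.2, 0.19, 0.18, 0.17, 0.16]
stop_epoch, best_err, best_epoch = train_with_validation_checks(curve)
print(stop_epoch, best_err, best_epoch)   # 10 0.15 4
```

The reported "validation check of 6" corresponds to the `fails` counter reaching `max_fail` at the stopping epoch.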

4.6.4. Confusion Matrix

The confusion matrix is the gold standard for evaluating all systems. In this paper, the performance of the ANN with features mixing deep learning and hand-crafted descriptors was evaluated. This technique is highly efficient at representing WSIs of cervical abnormal cells, fusing the VGG-16, FCH, GLCM and LBP features into one set of feature vectors and the GoogLeNet, FCH, GLCM and LBP features into another. These vectors were fed to the ANN classifier to classify the cervical cancer images into five classes (five types of abnormal cells). The ANN produced the confusion matrix as a 5 × 5 matrix in which the main diagonal represents the correctly classified images and the remaining cells represent the misclassified images, as in Figure 13. The ANN reached an accuracy of 99.4% when using the hybrid features of VGG-16 and hand-crafted. In contrast, the same network using GoogLeNet and hand-crafted features achieved 99.3% accuracy.
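The per-class misclassification counts quoted below fall directly out of such a matrix: each class's errors are its row total minus its diagonal entry. A minimal sketch with invented labels:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=5):
    """n_classes x n_classes matrix: rows are true classes, columns are
    predictions; the main diagonal holds correctly classified images."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (y_true, y_pred), 1)
    return cm

# Toy labels for five classes (two samples each, two of them mislabeled).
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])
y_pred = np.array([0, 1, 1, 1, 2, 2, 3, 0, 4, 4])
cm = confusion_matrix(y_true, y_pred)
misclassified = cm.sum(axis=1) - np.diag(cm)   # per-class error counts
print(misclassified)   # [1 0 0 1 0]
```

Applied to the matrices in Figure 13, this yields exactly the per-class "failed images" counts reported next.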
Regarding samples that the ANN based on the features of VGG-16, FCH, GLCM and LBP failed to diagnose correctly: it misclassified 7 images of the cervix_dyk class, 10 of cervix_koc, 4 of cervix_mep, 5 of cervix_pab and 4 of cervix_sfi.
The ANN based on the features of GoogLeNet, FCH, GLCM and LBP misclassified 6 images of the cervix_dyk class, 5 of cervix_koc, 8 of cervix_mep, 7 of cervix_pab and 7 of cervix_sfi.
These approaches are novel and have shown promising results in diagnosing abnormal cell images of the cervical cancer data set. Table 5 summarizes the performance of the ANN classifier with the mixed features extracted by the CNN models and fused with the hand-crafted features. The ANN classifier using the features of VGG-16 and hand-crafted yielded an accuracy of 99.4%, specificity of 100%, sensitivity of 99.35%, AUC of 99.89% and precision of 99.42%. In contrast, the same classifier with GoogLeNet and hand-crafted features reached an accuracy of 99.3%, specificity of 100%, sensitivity of 99.12%, AUC of 99.56% and precision of 99.23%.
Figure 14 displays the performance evaluation of the ANN with features fused between CNN and hand-crafted features for diagnosing WSIs of abnormal cells of the cervical cancer data set.

5. Discussion of the Execution of the Systems

This study developed several novel and efficient hybrid systems for diagnosing WSIs of LBC glass slide samples of five types of abnormal cells in the cervical cancer data set. The data set images were enhanced, artifacts were removed, and the edge contrast of abnormal cells was increased. The study comprised three approaches, each with two systems using different techniques and methodologies. The first approach was a hybrid consisting of two parts: deep feature extraction using GoogLeNet and VGG-16, and classification of the extracted features by an SVM. Because the deep features extracted by GoogLeNet and VGG-16 are high-dimensional, PCA was applied after both models in all the approaches of this study to reduce the features; the SVM then classified the low-dimensional deep features. These hybrid techniques, VGG-16 + SVM and GoogLeNet + SVM, yielded overall accuracies of 97.5% and 96.8%, respectively, for diagnosing the cervical cancer data set. The second approach diagnosed the cervical cancer images using an ANN fed with the features extracted by VGG-16 combined with the features of the GoogLeNet model; this feature combination achieved an accuracy of 98.5%. The third approach diagnosed the cervical cancer images using an ANN fed with the features of the VGG-16 model fused with the hand-crafted features and, separately, the features of the GoogLeNet model fused with the hand-crafted features. The ANN classifier achieved an accuracy of 99.4% with the mixed VGG-16 and hand-crafted features and 99.3% with the mixed GoogLeNet and hand-crafted features. Notably, the diagnostic results improved when the deep features of the VGG-16 and GoogLeNet models were integrated rather than using the features of either model alone.
We also noticed that the diagnostic results improved when combining the deep features with hand-crafted features. Thus, the best performance of the classifier occurred when the deep learning features were fused with the features of the FCH, GLCM and LBP methods.
Table 6 shows the results of all the approaches in this study for diagnosing abnormal cells of the cervical data set, reporting the overall diagnostic accuracy and the accuracy for each type of abnormal cervical cell. The best overall accuracy, 99.4%, was achieved by the ANN with the hybrid features of VGG-16, FCH, GLCM and LBP. As for per-class accuracy, the cervix_dyk and cervix_koc classes were diagnosed with accuracies of 99.4% and 99.5%, respectively, by the ANN with the hybrid features of GoogLeNet and hand-crafted. In contrast, the best accuracies for the cervix_mep, cervix_pab and cervix_sfi classes were 99.5%, 99.6% and 99.4%, respectively, achieved by the ANN with the hybrid features of VGG-16 and hand-crafted.
Figure 15 displays the performance of all systems in this study to show the diagnostic accuracy of each type of abnormal cell of the cervical cancer data set.
Table 7 summarizes the results of the proposed methods and compares them with previous methods for diagnosing WSIs for early detection of cervical cancer. The proposed method performs better than the previous methods: they obtained accuracies between 83.1% and 98.27%, whereas the proposed method reached 99.4%; they achieved specificities between 95.78% and 96%, whereas the proposed method reached 100%; they obtained sensitivities between 82% and 98.52%, whereas the proposed method reached 99.35%; and they obtained precisions between 82% and 97.4%, whereas the proposed method obtained 99.42%.

6. Conclusions

This study proposed several novel approaches to diagnosing WSIs of LBC vitreous slide samples to detect cervical abnormal cells. All WSIs were enhanced before being fed to the CNN models. The study aimed to represent WSIs by extracting features with several methods and merging them to reach promising results. Three approaches were implemented: the first was a hybrid method using VGG-16 and GoogLeNet for feature extraction and an SVM for classifying the extracted features. The second was to extract features with the VGG-16 model, combine them with the features of the GoogLeNet model, and classify the hybrid features with an ANN. The third was to classify WSIs of cervical abnormal cells based on hybrid features combining the VGG-16 and GoogLeNet models, each separately, with hand-crafted features. The performance results of the proposed approaches demonstrated a significant improvement in detecting abnormal cells of the cervix and their superiority over previous methods. The ANN classifier based on the features of VGG-16 and hand-crafted reached an accuracy of 99.4%, specificity of 100%, sensitivity of 99.35%, AUC of 99.89% and precision of 99.42%.

Author Contributions

Conceptualization, B.A.M., A.A.A. and Z.G.A.-M.; methodology, B.A.M. and E.M.S.; software, B.A.M., M.A. (Meshari Alazmi) and A.A.A.; validation, A.M.A., A.A. and Z.G.A.-M.; formal analysis, B.A.M.; investigation, M.A. (Meshari Alazmi) and M.A. (Mona Alshahrani); resources, Z.G.A.-M. and B.A.M.; data curation, E.M.S.; writing—original draft preparation, E.M.S.; writing—review and editing, B.A.M., A.A.A. and Z.G.A.-M.; visualization, E.M.S., M.A. (Mona Alshahrani) and Z.G.A.-M.; supervision, B.A.M.; project administration, B.A.M., A.M.A., A.A. and M.A. (Meshari Alazmi); funding acquisition, B.A.M. and Z.G.A.-M.; All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data supporting the performance results of the systems in this work were collected from a publicly available data set at this link: https://www.kaggle.com/datasets/obulisainaren/multi-cancer accessed on 13 February 2022.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ferlay, J.; Colombet, M.; Soerjomataram, I.; Mathers, C.; Parkin, D.M.; Piñeros, M.; Bray, F. Estimating the global cancer incidence and mortality in 2018: GLOBOCAN sources and methods. Int. J. Cancer 2019, 144, 1941–1953. [Google Scholar] [CrossRef] [PubMed]
  2. Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA A Cancer J. Clin. 2021, 71, 209–249. [Google Scholar] [CrossRef] [PubMed]
  3. Bhatla, N.; Aoki, D.; Sharma, D.N.; Sankaranarayanan, R. Cancer of the cervix uteri. Int. J. Gynecol. Obstet. 2018, 143, 22–36. [Google Scholar] [CrossRef]
  4. Silva-López, M.S.; Ilizaliturri Hernández, C.A.; Navarro Contreras, H.R.; Rodríguez Vázquez, Á.G.; Ortiz-Dosal, A.; Kolosovas-Machuca, E.S. Raman Spectroscopy of Individual Cervical Exfoliated Cells in Premalignant and Malignant Lesions. Appl. Sci. 2022, 12, 2419. [Google Scholar] [CrossRef]
  5. Lee, A.L.S.; To, C.C.K.; Lee, A.L.H.; Li, J.J.X.; Chan, R.C.K. Model architecture and tile size selection for convolutional neural network training for non-small cell lung cancer detection on whole slide images. Inform. Med. Unlocked 2022, 28, 100850. [Google Scholar] [CrossRef]
  6. Chen, C.P.; Kung, P.T.; Wang, Y.H.; Tsai, W.C. Effect of time interval from diagnosis to treatment for cervical cancer on survival: A nationwide cohort study. PLoS ONE 2019, 14, e0221946. [Google Scholar] [CrossRef]
  7. WHO. Guideline for Screening and Treatment of Cervical Pre-Cancer Lesions for Cervical Cancer Prevention; World Health Organization: Geneva, Switzerland, 2021. [Google Scholar]
  8. Davey, E.; Barratt, A.; Irwig, L.; Chan, S.F.; Macaskill, P.; Mannes, P.; Saville, A.M. Effect of study design and quality on unsatisfactory rates, cytology classifications, and accuracy in liquid-based versus conventional cervical cytology: A systematic review. Lancet 2006, 367, 122–132. Available online: https://0-www-sciencedirect-com.brum.beds.ac.uk/science/article/pii/S0140673606679610 (accessed on 13 February 2022). [CrossRef]
  9. Charoenkwan, P.; Shoombuatong, W.; Nantasupha, C.; Muangmool, T.; Suprasert, P.; Charoenkwan, K. iPMI: Machine Learning-Aided Identification of Parametrial Invasion in Women with Early-Stage Cervical Cancer. Diagnostics 2021, 11, 1454. [Google Scholar] [CrossRef]
  10. Wei, Z.; Cheng, S.; Liu, X.; Zeng, S. An efficient cervical whole slide image analysis framework based on multi-scale semantic and spatial deep features. arXiv 2021, arXiv:2106.15113. [Google Scholar]
  11. Kundu, R.; Basak, H.; Koilada, A.; Chattopadhyay, S.; Chakraborty, S.; Das, N. Ensemble of CNN classifiers using Sugeno Fuzzy Integral Technique for Cervical Cytology Image Classification. arXiv 2021, arXiv:2108.09460. [Google Scholar]
  12. Kanavati, F.; Hirose, N.; Ishii, T.; Fukuda, A.; Ichihara, S.; Tsuneki, M. A Deep Learning Model for Cervical Cancer Screening on Liquid-Based Cytology Specimens in Whole Slide Images. Cancers 2022, 14, 1159. [Google Scholar] [CrossRef] [PubMed]
  13. Li, X.; Xu, Z.; Shen, X.; Zhou, Y.; Xiao, B.; Li, T.-Q. Detection of Cervical Cancer Cells in Whole Slide Images Using Deformable and Global Context Aware Faster RCNN-FPN. Curr. Oncol. 2021, 28, 3585–3601. [Google Scholar] [CrossRef] [PubMed]
  14. Li, T.; Feng, M.; Wang, Y.; Xu, K. Whole Slide Images Based Cervical Cancer Classification Using Self-supervised Learning and Multiple Instance Learning. In Proceedings of the 2021 IEEE 2nd International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE), Nanchang, China, 26–28 March 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 192–195. Available online: https://0-ieeexplore-ieee-org.brum.beds.ac.uk/abstract/document/9389824/ (accessed on 13 February 2022).
  15. Gupta, M.; Das, C.; Roy, A.; Gupta, P.; Pillai, G.R.; Patole, K. Region of interest identification for cervical cancer images. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1293–1296. Available online: https://0-ieeexplore-ieee-org.brum.beds.ac.uk/abstract/document/9098587/ (accessed on 13 February 2022).
  16. Chen, H.; Liu, J.; Wen, Q.M.; Zuo, Z.Q.; Liu, J.S.; Feng, J.; Xiao, D. CytoBrain: Cervical cancer screening system based on deep learning technology. J. Comput. Sci. Technol. 2021, 36, 347–360. [Google Scholar] [CrossRef]
  17. Cheng, S.; Liu, S.; Yu, J.; Rao, G.; Xiao, Y.; Han, W.; Liu, X. Robust whole slide image analysis for cervical cancer screening using deep learning. Nat. Commun. 2021, 12, 1–10. Available online: https://0-www-nature-com.brum.beds.ac.uk/articles/s41467-021-25296-x (accessed on 13 February 2022).
  18. Wang, C.W.; Liou, Y.A.; Lin, Y.J.; Chang, C.C.; Chu, P.H.; Lee, Y.C.; Chao, T.K. Artificial intelligence-assisted fast screening cervical high grade squamous intraepithelial lesion and squamous cell carcinoma diagnosis and treatment planning. Sci. Rep. 2021, 11, 1–14. Available online: https://0-www-nature-com.brum.beds.ac.uk/articles/s41598-021-95545-y (accessed on 13 February 2022).
  19. Pirovano, A.; Almeida, L.G.; Ladjal, S.; Bloch, I.; Berlemont, S. Computer-aided diagnosis tool for cervical cancer screening with weakly supervised localization and detection of abnormalities using adaptable and explainable classifier. Med. Image Anal. 2021, 73, 102167. Available online: https://0-www-sciencedirect-com.brum.beds.ac.uk/science/article/pii/S1361841521002139 (accessed on 13 February 2022). [CrossRef]
  20. Zhou, X.; Li, W.; Wen, Z. Data Analysis for Risk Prediction of Cervical Cancer Metastasis and Recurrence Based on DCNN-RF. J. Phys. Conf. Ser. 2021, 1813, 012033. Available online: https://0-iopscience-iop-org.brum.beds.ac.uk/article/10.1088/1742-6596/1813/1/012033/meta (accessed on 13 February 2022). [CrossRef]
  21. Yan, X.; Zhang, Z. HSDet: A Representative Sampling Based Object Detector in Cervical Cancer Cell Images. In Proceedings of the International Conference on Bio-Inspired Computing: Theories and Applications, Taiyuan, China, 12–14 November 2021; Springer: Singapore, 2020; pp. 406–418. [Google Scholar] [CrossRef]
  22. Wei, Z.; Cheng, S.; Liu, X.; Zeng, S. Cervical Glandular Cell Detection from Whole Slide Image with Out-Of-Distribution Data. arXiv 2022, arXiv:2205.14625. [Google Scholar]
  23. Basak, H.; Kundu, R.; Chakraborty, S.; Das, N. Cervical cytology classification using PCA and GWO enhanced deep features selection. SN Comput. Sci. 2021, 2, 1–17. Available online: https://paperswithcode.com/paper/cervical-cytology-classification-using-pca (accessed on 27 August 2022).
  24. Manna, A.; Kundu, R.; Kaplun, D.; Sinitca, A.; Sarkar, R. A fuzzy rank-based ensemble of CNN models for classification of cervical cytology. Sci. Rep. 2021, 11, 1–18. Available online: https://paperswithcode.com/paper/a-fuzzy-rank-based-ensemble-of-cnn-models-for (accessed on 27 August 2022).
  25. Pramanik, R.; Biswas, M.; Sen, S.; de Souza Júnior, L.A.; Papa, J.P.; Sarkar, R. A fuzzy distance-based ensemble of deep models for cervical cancer detection. Comput. Methods Programs Biomed. 2022, 219, 106776. [Google Scholar] [CrossRef] [PubMed]
  26. Rahaman, M.M.; Li, C.; Yao, Y.; Kulwa, F.; Wu, X.; Li, X.; Wang, Q. DeepCervix: A deep learning-based framework for the classification of cervical cells using hybrid deep feature fusion techniques. Comput. Biol. Med. 2021, 136, 104649. Available online: https://paperswithcode.com/paper/deepcervix-a-deep-learning-based-framework (accessed on 27 August 2022). [CrossRef]
  27. Multi Cancer Dataset|Kaggle. Available online: https://www.kaggle.com/datasets/obulisainaren/multi-cancer (accessed on 27 June 2022).
  28. Prezja, F.; Pölönen, I.; Äyrämö, S.; Ruusuvuori, P.; Kuopio, T. H&E Multi-Laboratory Staining Variance Exploration with Machine Learning. Appl. Sci. 2022, 12, 7511. [Google Scholar] [CrossRef]
  29. Al-Mekhlafi, Z.G.; Senan, E.M.; Rassem, T.H.; Mohammed, B.A.; Makbol, N.M.; Alanazi, A.A.; Ghaleb, F.A. Deep Learning and Machine Learning for Early Detection of Stroke and Haemorrhage. Comput. Mater. Contin. 2022, 72, 775–796. Available online: http://eprints.bournemouth.ac.uk/36721/ (accessed on 13 February 2022). [CrossRef]
  30. Kim, Y.J.; Park, Y.R.; Kim, Y.J.; Ju, W.; Nam, K.; Kim, K.G. A performance comparison of histogram equalization algorithms for cervical cancer classification model. J. Biomed. Eng. Res. 2021, 42, 80–85. [Google Scholar] [CrossRef]
  31. Mohammed, B.A.; Senan, E.M.; Rassem, T.H.; Makbol, N.M.; Alanazi, A.A.; Al-Mekhlafi, Z.G.; Ghaleb, F.A. Multi-method analysis of medical records and MRI images for early diagnosis of dementia and Alzheimer’s disease based on deep learning and hybrid methods. Electronics 2021, 10, 2860. [Google Scholar] [CrossRef]
  32. Shrivastav, K.D.; Arambam, P.; Batra, S.; Bhatia, V.; Singh, H.; Jaggi, V.K.; Ranjan, P.; Abed, E.H.; Janardhanan, R. Earth Mover’s Distance-Based Tool for Rapid Screening of Cervical Cancer Using Cervigrams. Appl. Sci. 2022, 12, 4661. [Google Scholar] [CrossRef]
  33. Abunadi, I.; Senan, E.M. Multi-Method Diagnosis of Blood Microscopic Sample for Early Detection of Acute Lymphoblastic Leukemia Based on Deep Learning and Hybrid Techniques. Sensors 2022, 22, 1629. [Google Scholar] [CrossRef]
  34. Yuan, Z.; Wang, Y.; Hu, P.; Zhang, D.; Yan, B.; Lu, H.M.; Yang, Y. Accelerate treatment planning process using deep learning generated fluence maps for cervical cancer radiation therapy. Med. Phys. 2022, 49, 2631–2641. [Google Scholar] [CrossRef]
  35. Fu, L.; Xia, W.; Shi, W.; Cao, G.X.; Ruan, Y.T.; Zhao, X.Y.; Gao, X. Deep learning based cervical screening by the cross-modal integration of colposcopy, cytology, and HPV test. Int. J. Med. Inform. 2022, 159, 104675. [Google Scholar] [CrossRef]
  36. Ahmed, I.A.; Senan, E.M.; Rassem, T.H.; Ali, M.A.; Shatnawi, H.S.A.; Alwazer, S.M.; Alshahrani, M. Eye Tracking-Based Diagnosis and Early Detection of Autism Spectrum Disorder Using Machine Learning and Deep Learning Techniques. Electronics 2022, 11, 530. [Google Scholar] [CrossRef]
  37. Arezzo, F.; La Forgia, D.; Venerito, V.; Moschetta, M.; Tagliafico, A.S.; Lombardi, C.; Loizzi, V.; Cicinelli, E.; Cormio, G. A Machine Learning Tool to Predict the Response to Neoadjuvant Chemotherapy in Patients with Locally Advanced Cervical Cancer. Appl. Sci. 2021, 11, 823. [Google Scholar] [CrossRef]
  38. Senan, E.M.; Jadhav, M.E.; Kadam, A. Classification of PH2 images for early detection of skin diseases. In Proceedings of the 2021 6th International Conference for Convergence in Technology, Maharashtra, India, 2–4 April 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–7. [Google Scholar] [CrossRef]
  39. Fati, S.M.; Senan, E.M.; ElHakim, N. Deep and Hybrid Learning Technique for Early Detection of Tuberculosis Based on X-ray Images Using Feature Fusion. Appl. Sci. 2022, 12, 7092. [Google Scholar] [CrossRef]
  40. López-Cortés, X.A.; Matamala, F.; Venegas, B.; Rivera, C. Machine-Learning Applications in Oral Cancer: A Systematic Review. Appl. Sci. 2022, 12, 5715. [Google Scholar] [CrossRef]
  41. Maqsood, S.; Damaševičius, R.; Maskeliūnas, R. TTCNN: A Breast Cancer Detection and Classification towards Computer-Aided Diagnosis Using Digital Mammography in Early Stages. Appl. Sci. 2022, 12, 3273. [Google Scholar] [CrossRef]
  42. Fang, R.; Zeng, L.; Yi, F. Analysis of Compound Stained Cervical Cell Images Using Multi-Spectral Imaging. Appl. Sci. 2021, 11, 5628. [Google Scholar] [CrossRef]
  43. Fati, S.M.; Senan, E.M.; Azar, A.T. Hybrid and Deep Learning Approach for Early Diagnosis of Lower Gastrointestinal Diseases. Sensors 2022, 22, 4079. [Google Scholar] [CrossRef]
  44. Senan, E.M.; Jadhav, M.E. Diagnosis of dermoscopy images for the detection of skin lesions using SVM and KNN. In Proceedings of Third International Conference on Sustainable Computing; Springer: Singapore, 2022; pp. 125–134. [Google Scholar] [CrossRef]
  45. Senan, E.M.; Jadhav, M.E. Techniques for the Detection of Skin Lesions in PH 2 Dermoscopy Images Using Local Binary Pattern (LBP). In International Conference on Recent Trends in Image Processing and Pattern Recognition; Springer: Singapore, 2020; pp. 14–25. [Google Scholar] [CrossRef]
  46. Senan, E.M.; Abunadi, I.; Jadhav, M.E.; Fati, S.M. Score and Correlation Coefficient-Based Feature Selection for Predicting Heart Failure Diagnosis by Using Machine Learning Algorithms. Comput. Math. Methods Med. 2021, 2021, 1–16. [Google Scholar] [CrossRef]
  47. Mohammed, B.A.; Senan, E.M.; Al-Mekhlafi, Z.G.; Rassem, T.H.; Makbol, N.M.; Alanazi, A.A.; Almurayziq, T.S.; Ghaleb, F.A.; Sallam, A.A. Multi-Method Diagnosis of CT Images for Rapid Detection of Intracranial Hemorrhages Based on Deep and Hybrid Learning. Electronics 2022, 11, 2460. [Google Scholar] [CrossRef]
  48. Senan, E.M.; Mohammed Jadhav, M.E.; Rassem, T.H.; Aljaloud, A.S.; Mohammed, B.A.; Al-Mekhlafi, Z.G. Early Diagnosis of Brain Tumour MRI Images Using Hybrid Techniques between Deep and Machine Learning. Electronics 2022, 11, 2460. Available online: https://www.hindawi.com/journals/cmmm/2022/8330833/ (accessed on 13 February 2022). [CrossRef]
  49. Diniz, D.N.; Rezende, M.T.; Bianchi, A.G.C.; Carneiro, C.M.; Ushizima, D.M.; de Medeiros, F.N.S.; Souza, M.J.F. A Hierarchical Feature-Based Methodology to Perform Cervical Cancer Classification. Appl. Sci. 2021, 11, 4091. [Google Scholar] [CrossRef]
  50. Win, K.P.; Kitjaidure, Y.; Hamamoto, K.; Myo Aung, T. Computer-Assisted Screening for Cervical Cancer Using Digital Image Processing of Pap Smear Images. Appl. Sci. 2020, 10, 1800. [Google Scholar] [CrossRef]
  51. Chen, W.; Shen, W.; Gao, L.; Li, X. Hybrid Loss-Constrained Lightweight Convolutional Neural Networks for Cervical Cell Classification. Sensors 2022, 22, 3272. [Google Scholar] [CrossRef] [PubMed]
  52. Mohammed, M.A.; Abdurahman, F.; Ayalew, Y.A. Single-cell conventional pap smear image classification using pre-trained deep neural network architectures. BMC Biomed. Eng. 2021, 3, 11. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Structure framework for proposed approaches to the diagnosis of WSIs for the cervical cancer data set.
Figure 2. Samples of the abnormal cells of cervical malignancies before and after enhancement.
Figure 3. Framework of hybrid technique for diagnosing abnormal cell images for the cervical cancer data set.
Figure 4. Framework for a feature fusion approach of the VGG-16 and GoogLeNet models for cervical cancer data set diagnosis.
Figure 5. The architecture of the ANN algorithm for classifying cervical cancer data set.
Figure 6. Framework for diagnosing abnormal cells of the cervical cancer data set based on fused features.
Figure 7. Display of the performance of CNN with SVM for classifying abnormal cervical cells.
Figure 8. Results of the diagnosis of WSIs of abnormal cells of the cervix by (a). VGG-16 + SVM and (b). GoogLeNet + SVM.
Figure 9. Results of ANN performance based on fused CNN features for diagnosing WSIs of cervical abnormal cells.
Figure 10. Cross-entropy display for diagnosing abnormal cell images of the cervix by ANN based on the features of (a) VGG-16, FCH, GLCM and LBP and (b) GoogLeNet, FCH, GLCM and LBP.
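Figure 10 tracks categorical cross-entropy as the ANN trains. As a minimal sketch of the quantity being plotted (the probabilities below are illustrative, not values from the paper):

```python
import math

def cross_entropy(p_true_class):
    """Cross-entropy loss for a single sample, given the probability
    the network assigns to the sample's true class."""
    return -math.log(p_true_class)

# As training progresses the network assigns the true class higher
# probability, so the loss curves in Figure 10 fall toward zero:
for p in (0.2, 0.5, 0.9, 0.99):
    print(round(cross_entropy(p), 4))  # 1.6094, 0.6931, 0.1054, 0.0101
```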
Figure 11. Error histogram display for diagnosing abnormal cell images of the cervix by ANN based on the features of (a) VGG-16, FCH, GLCM and LBP and (b) GoogLeNet, FCH, GLCM and LBP.
Figure 12. Gradient display for diagnosing abnormal cell images of the cervix by ANN based on the features of (a) VGG-16, FCH, GLCM and LBP and (b) GoogLeNet, FCH, GLCM and LBP.
Figure 13. Confusion matrix for diagnosing abnormal cell images of the cervix by ANN based on the features of (a) VGG-16, FCH, GLCM and LBP and (b) GoogLeNet, FCH, GLCM and LBP.
Figure 14. Presentation of the performance of the ANN with fusion features for diagnosing abnormal cervical cells.
Figure 15. Presentation of the performance of all proposed approaches for diagnosing abnormal cervical cells.
Table 1. Division of the data set of abnormal cells of cervical cancer.
| Classes | Training Phase (80%) | Validation Phase (20%) | Testing Phase (20%) |
|---|---|---|---|
| cervix_dyk | 3800 | 800 | 1000 |
| cervix_koc | 3800 | 800 | 1000 |
| cervix_mep | 3800 | 800 | 1000 |
| cervix_pab | 3800 | 800 | 1000 |
| cervix_sfi | 3800 | 800 | 1000 |
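The per-class counts in Table 1 can be reproduced with a simple shuffled split. A minimal sketch (the file names are hypothetical placeholders, not the data set's actual names):

```python
import random

# Per-class counts from Table 1: 3800 training, 800 validation, 1000 testing.
samples = [f"img_{i}.png" for i in range(5600)]  # hypothetical file names
rng = random.Random(0)
rng.shuffle(samples)

test_set = samples[:1000]      # testing phase, held out entirely
val_set = samples[1000:1800]   # validation phase
train_set = samples[1800:]     # training phase

print(len(train_set), len(val_set), len(test_set))  # 3800 800 1000
```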
Table 2. Results of the CNN models for cervical cancer data set diagnosis.
| Measure | VGG-16 | GoogLeNet |
|---|---|---|
| Accuracy % | 91.22 | 95.96 |
| Specificity % | 97.6 | 98.64 |
| Sensitivity % | 91.41 | 95.45 |
| Precision % | 91.68 | 95.81 |
| AUC % | 93.45 | 96.32 |
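The measures in Tables 2–5 follow their standard per-class definitions from confusion-matrix counts; a minimal sketch (the counts below are illustrative, not from the paper, and AUC is omitted because it additionally requires the classifier's scores):

```python
def class_metrics(tp, fp, tn, fn):
    """Standard per-class measures, returned as percentages."""
    accuracy = 100 * (tp + tn) / (tp + fp + tn + fn)
    specificity = 100 * tn / (tn + fp)   # true-negative rate
    sensitivity = 100 * tp / (tp + fn)   # recall / true-positive rate
    precision = 100 * tp / (tp + fp)     # positive predictive value
    return accuracy, specificity, sensitivity, precision

# Illustrative counts for one class:
print(class_metrics(tp=95, fp=5, tn=395, fn=5))  # (98.0, 98.75, 95.0, 95.0)
```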
Table 3. Results of the CNN with SVM for cervical cancer data set diagnosis.
| Measure | VGG-16 + SVM | GoogLeNet + SVM |
|---|---|---|
| Accuracy % | 97.5 | 96.8 |
| Specificity % | 99.4 | 99.2 |
| Sensitivity % | 97.6 | 96.8 |
| Precision % | 97.72 | 96.9 |
| AUC % | 98.43 | 98.21 |
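In the hybrid approach of Table 3, each image's deep features from VGG-16 or GoogLeNet are classified by an SVM in place of the CNN's softmax layer. A minimal sketch assuming scikit-learn, with random class-shifted vectors standing in for the CNN features (the dimensions are illustrative):

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in for deep features: in the hybrid approach each image is passed
# through VGG-16 or GoogLeNet and the activations of the last fully
# connected layer become its feature vector.
rng = np.random.default_rng(0)
n_per_class, n_features = 40, 512
X = np.vstack([rng.normal(loc=c, size=(n_per_class, n_features)) for c in range(5)])
y = np.repeat(np.arange(5), n_per_class)  # five abnormal-cell classes

# The SVM replaces the CNN's softmax classification layer.
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```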
Table 4. Performance of the ANN according to fused features between the VGG-16 and GoogLeNet models.
| Measure | ANN Based on Fused CNN Features |
|---|---|
| Accuracy % | 98.5 |
| Specificity % | 99.8 |
| Sensitivity % | 98.4 |
| Precision % | 98.2 |
| AUC % | 98.92 |
Table 5. Results for the ANN with fusion of CNN with hand-crafted features.
| Measure | VGG-16, FCH, GLCM and LBP | GoogLeNet, FCH, GLCM and LBP |
|---|---|---|
| Accuracy % | 99.4 | 99.3 |
| Specificity % | 100 | 100 |
| Sensitivity % | 99.35 | 99.12 |
| Precision % | 99.42 | 99.23 |
| AUC % | 99.89 | 99.56 |
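The fusion behind Table 5 amounts to concatenating each image's CNN feature vector with its hand-crafted FCH, GLCM, and LBP vectors before classification. A minimal sketch assuming scikit-learn; the vectors are random stand-ins and the dimensions are illustrative, not the paper's exact sizes:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_images = 100

# Stand-ins for the four feature sets fused in this approach:
cnn_feats = rng.normal(size=(n_images, 512))   # VGG-16 deep features
fch_feats = rng.normal(size=(n_images, 64))    # Fuzzy Color Histogram
glcm_feats = rng.normal(size=(n_images, 24))   # GLCM texture statistics
lbp_feats = rng.normal(size=(n_images, 59))    # LBP histogram

# Fusion = concatenating the vectors into one feature row per image.
fused = np.hstack([cnn_feats, fch_feats, glcm_feats, lbp_feats])
print(fused.shape)  # (100, 659)

# The fused rows are then classified by an ANN (here a small MLP).
labels = rng.integers(0, 5, size=n_images)
ann = MLPClassifier(hidden_layer_sizes=(50,), max_iter=30,
                    random_state=0).fit(fused, labels)
```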
Table 6. Performance of all systems for the diagnosis of five types of abnormal cells of cervical cancer.
| Approach | Method | cervix_dyk | cervix_koc | cervix_mep | cervix_pab | cervix_sfi | Accuracy % |
|---|---|---|---|---|---|---|---|
| Hybrid method | VGG-16 + SVM | 97.3 | 98 | 97.1 | 97.1 | 98.2 | 97.5 |
| Hybrid method | GoogLeNet + SVM | 97.2 | 98.2 | 96 | 96 | 96.6 | 96.8 |
| ANN classifier | VGG-16 + GoogLeNet | 98.1 | 99.1 | 98.4 | 98.2 | 98.6 | 98.5 |
| Fusion features (ANN classifier) | VGG-16, FCH, GLCM and LBP | 99.3 | 99 | 99.6 | 99.5 | 99.6 | 99.4 |
| Fusion features (ANN classifier) | GoogLeNet, FCH, GLCM and LBP | 99.4 | 99.5 | 99.2 | 99.3 | 99.3 | 99.3 |
Table 7. Comparison of the results of the proposed system with the results of previous related systems.
| Previous Studies | Accuracy (%) | Specificity (%) | Sensitivity (%) | Precision (%) | AUC (%) |
|---|---|---|---|---|---|
| Ankur et al. [24] | 95.43 | - | 98.52 | - | - |
| Mamunur et al. [25] | 98.27 | - | - | - | - |
| Débora et al. [49] | 94 | 96 | 82 | 82 | - |
| Kyi et al. [50] | 94.09 | - | - | - | - |
| Wen et al. [51] | 83.1 | 95.78 | 83.22 | 83.41 | - |
| Mohammed et al. [52] | 99 | - | 97.4 | 97.4 | - |
| Proposed model | 99.4 | 100 | 99.35 | 99.42 | 99.89 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Mohammed, B.A.; Senan, E.M.; Al-Mekhlafi, Z.G.; Alazmi, M.; Alayba, A.M.; Alanazi, A.A.; Alreshidi, A.; Alshahrani, M. Hybrid Techniques for Diagnosis with WSIs for Early Detection of Cervical Cancer Based on Fusion Features. Appl. Sci. 2022, 12, 8836. https://0-doi-org.brum.beds.ac.uk/10.3390/app12178836
