Article

Deep Learning Assisted Automated Assessment of Thalassaemia from Haemoglobin Electrophoresis Images

by Muhammad Salman Khan 1,*, Azmat Ullah 2,3, Kaleem Nawaz Khan 4, Huma Riaz 5,6, Yasar Mehmood Yousafzai 7, Tawsifur Rahman 1, Muhammad E. H. Chowdhury 1 and Saad Bin Abul Kashem 8

1 Department of Electrical Engineering, College of Engineering, Qatar University, Doha P.O. Box 2713, Qatar
2 Artificial Intelligence in Healthcare, Intelligent Information Processing Lab, National Center of Artificial Intelligence, University of Engineering and Technology Peshawar, Peshawar 25000, Pakistan
3 Department of Biomedical Sciences and Engineering, Koc University Istanbul, Istanbul 34450, Turkey
4 Department of Computer Science, University of Engineering and Technology Mardan, Mardan 23200, Pakistan
5 Department of Pathology and Laboratory Medicine, Lady Reading Hospital, Peshawar 25000, Pakistan
6 Department of Pathology and Blood Bank, Hayatabad Medical Complex/Khyber Girls Medical College, Peshawar 25000, Pakistan
7 Institute of Pathology and Diagnostic Medicine, Khyber Medical University, Peshawar 25100, Pakistan
8 Department of Computer Science, AFG College with the University of Aberdeen, Doha P.O. Box 33199, Qatar
* Author to whom correspondence should be addressed.
Submission received: 21 May 2022 / Revised: 18 July 2022 / Accepted: 23 July 2022 / Published: 3 October 2022
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging Analysis)

Abstract:
Haemoglobin (Hb) electrophoresis is a method of blood testing used to detect thalassaemia. However, the interpretation of the result of the electrophoresis test itself is a complex task. Expert haematologists, specifically in developing countries, are relatively few in number and are usually overburdened. To assist them with their workload, in this paper we present a novel method for the automated assessment of thalassaemia using Hb electrophoresis images. Moreover, in this study we compile a large Hb electrophoresis image dataset obtained from 824 subjects, consisting of 103 strips, from which 524 electrophoresis images with a clear consensus on the quality of electrophoresis were selected. The proposed methodology is split into two parts: (1) single-patient electrophoresis image segmentation by means of the lane extraction technique, and (2) binary classification (normal or abnormal) of the electrophoresis images using state-of-the-art deep convolutional neural networks (CNNs) and the concept of transfer learning. Image processing techniques including filtering and morphological operations are applied for object detection and lane extraction to automatically separate the lanes and classify them using CNN models. Seven different CNN models (ResNet18, ResNet50, ResNet101, InceptionV3, DenseNet201, SqueezeNet and MobileNetV2) were investigated in this study. InceptionV3 outperformed the other CNNs in detecting thalassaemia using Hb electrophoresis images. The accuracy, precision, recall, F1-score and specificity in the detection of thalassaemia obtained with the InceptionV3 model were 95.8%, 95.84%, 95.8%, 95.8% and 95.8%, respectively. MobileNetV2 demonstrated an accuracy, precision, recall, F1-score and specificity of 95.72%, 95.73%, 95.72%, 95.7% and 95.72%, respectively. Its performance was comparable with that of the best-performing model, InceptionV3.
Since it is a very shallow network, MobileNetV2 also provides the least latency in processing a single-patient image and it can be suitably used for mobile applications. The proposed approach, which has shown very high classification accuracy, will assist in the rapid and robust detection of thalassaemia using Hb electrophoresis images.

1. Introduction

Haemoglobin, an essential component of red blood cells, carries oxygen from the lungs to various tissues of the body. The adult haemoglobin molecule (haemoglobin A or HbA) is composed of two alpha and two beta chains. A variant of haemoglobin, HbA2 (composed of two alpha and two delta chains), is present in minor proportions in the blood (less than 3.5% of total Hb in normal individuals). Genetic mutations in beta-globin genes result in reduced production or the lack of production of adult haemoglobin. Depending on the type and severity of mutations, symptoms vary from mild subclinical anaemia (beta thalassaemia trait) to severe life-threatening anaemia requiring repeated blood transfusions (beta thalassaemia major). In Pakistan, an estimated 80–90 thousand patients are registered in various thalassaemia treatment centers, managed by various public and private sector organizations [1]. It is important to screen for silent carriers of thalassaemia (termed beta thalassaemia trait) as the offspring of two silent carrier parents have a 25% chance of inheriting the severe transfusion-dependent form of thalassaemia [2]. The diagnosis of thalassaemia requires careful evaluation of patient history and clinical examination, microscopic examination of the peripheral blood film, and the assessment of various haemoglobin variants using haemoglobin electrophoresis. Electrophoresis testing is a common practice for thalassaemia screening; it separates molecules that acquire an electric charge and migrate in an applied electric field. The process is conducted in a solution of suspended ions, called a buffer solution, in which the charged molecules travel towards the oppositely charged electrodes. In a normal adult, HbA is the most dominant form of haemoglobin (at 96–98%), whereas HbA2 occurs at a level of about 2–3.5%.
In beta thalassaemia trait, a compensatory increase in the HbA2 level is seen, which may rise to up to 7%. In beta thalassaemia major, almost all of the red blood cells consist of the fetal variant of haemoglobin (HbF) [3]. The interpretation of Hb electrophoresis remains a challenge in most low- and middle-income countries (LMICs) due to a lack of standardized protocols and expertise. To tackle these issues, computer-aided detection based on Hb electrophoresis test images is required. The automated system should be able to accurately interpret the electrophoresis test images for the diagnosis of thalassaemia traits in patients.
Thalassaemia is the most common single-gene disorder globally. A high prevalence of thalassaemia and other haemoglobin disorders appears to be an evolutionary mechanism against malaria infection. The prevalence of thalassaemia is high in sub-Saharan Africa, the Mediterranean region, the Middle East, South East Asia and South Asia. However, with increased global migration, a higher number of cases are being seen in Europe and the Americas. At a genetic and molecular level, beta-globin chain production is controlled by beta chain genes located on each copy of chromosome 11. More than 200 mutations have been reported in beta-globin genes. Mutation in one of the two genes results in mild, often subclinical reductions in cell volume and haemoglobin levels. Such persons are labeled ‘carriers’ or ‘heterozygous’. Persons carrying two mutations are labeled ‘homozygous’. A mutation may severely reduce or completely block the production of beta chains (β+ or β0 thalassaemia, respectively). Consequently, based on the type of mutation, and whether both genes are affected, the disease’s clinical phenotype ranges from asymptomatic to mild, moderate and severe [4].
In patients with homozygous beta thalassaemia, symptoms begin to appear after the first trimester of life, when HbF (fetal) production ceases and is not replaced by normal HbA (adult). If untreated, infants develop severe anaemia, enlargement of the liver and spleen, and failure to thrive. With appropriate treatment, patients have been shown to survive until the fifth or sixth decades of life. However, in low–middle-income countries, the average life expectancy is still not above 20 years. Patients are dependent on blood transfusions for their entire lives. Repeated blood transfusions cause the accumulation of excessive iron, causing organ damage. Furthermore, transfusion-transmitted infections (TTIs), such as viral hepatitis B and C and human immunodeficiency virus (HIV), are quite common. The only long-term sustainable solution is the prevention of thalassaemia major through the identification of carriers of beta thalassaemia [4].
Thalassaemia is a genetic disorder and a chronic disease. The body of the thalassaemic patient makes an inadequate or an abnormal form of haemoglobin. This leads to a life-threatening form of anaemia, and for survival the patients require blood transfusions on a regular basis [5]. The World Health Organization (WHO) has declared the control of haemoglobin disorders, particularly β-thalassaemia, in developing countries a national and international health priority [6]. In Pakistan, every year an estimated 5000–9000 children are born with β-thalassaemia and there are almost 9.8 million carriers at a national level [7]. However, its spread can be controlled through the early detection of thalassaemia carriers, i.e., by avoiding marriages between people who are carriers of thalassaemia traits [8]. In order to prevent β-thalassaemia, collective measures can be carried out such as carrier identification, prenatal diagnosis and, more importantly, genetic counseling. For the screening and assessment of thalassaemia patients, different techniques are discussed in the following sections.
The severity and high impact of thalassaemia on human life demand accurate and efficient solutions for thalassaemia diagnosis. Researchers and medical experts are equally involved in the process of experimentation to devise a computer-aided and automated solution for the detection of this deadly disease. Generally, for the detection of thalassaemia, complete blood count (CBC) and haemoglobin tests are performed [9]. There are numerous approaches and techniques [10,11] in which different CBC parameters, including white blood cells (WBC), red blood cells (RBC), haemoglobin (HB), haematocrit (HCT), mean corpuscular volume (MCV), mean corpuscular haemoglobin (MCH) and mean corpuscular haemoglobin concentration (MCHC), are used as features with machine learning classifiers, such as K-nearest neighbor (KNN), multi-layer perceptron (MLP), decision tree, support vector machine (SVM) and the latest deep learning paradigm, i.e., convolutional neural networks (CNNs), to devise an automated solution for thalassaemia detection.
For several decades, Hb electrophoresis has been used for the screening and assessment of thalassaemia in clinical practice [12]. Although high-performance liquid chromatography (HPLC) [13] and polymerase chain reaction (PCR) analysis can also be used for thalassaemia diagnosis [14], Hb electrophoresis is the cheapest method and remains the most commonly used in routine clinical diagnostics in developing countries. In more developed countries, it is being replaced by HPLC and, in some cases, by PCR for thalassaemia-specific mutation analysis.
The test procedure of Hb electrophoresis is simple; however, multiple steps are involved. Briefly, red cells are lysed to liberate the haemoglobin into a solution. This lysate is then applied on a suitable electrophoresis medium (e.g., a cellulose acetate strip) to perform electrophoresis. Due to variations in ionic charge, the various Hb variants travel different distances. The strip is then dipped into a Ponceau staining solution to highlight the bands. Hb electrophoresis test images contain the blood samples of more than one patient (up to eight patients) on a single cellulose sheet, as shown in Figure 1A,B. Figure 1A shows the raw Hb electrophoresis slide images of eight subjects, where normal patients are denoted by N and thalassaemia patients are denoted by T, and Figure 1B shows standard normal and beta thalassaemia patterns of Hb electrophoresis images. In the figure, haemoglobin A (HbA), also known as adult haemoglobin or haemoglobin A1, is the most common human haemoglobin tetramer, accounting for over 97% of the total red blood cell haemoglobin. Haemoglobin is an oxygen-binding protein, found in erythrocytes, which transports oxygen from the lungs to the tissues. HbA2, composed of two α chains and two δ chains, is a minor component of the haemoglobin present in normal adult red blood cells, accounting for about 2.5% of the total haemoglobin in healthy individuals. In thalassaemia patients, its content rises above 3.5%. Moreover, HbF is the predominant form of haemoglobin in red cells during fetal life. Just after birth, the level of HbF decreases gradually to <1%, and it is replaced mainly by adult haemoglobin (HbA) (97%). To automate the process, each patient’s sample lane has to be extracted for further processing, which can be achieved using lane extraction techniques. In the following paragraph, recent works on lane extraction are discussed.
Ivan Bajla et al. [15] proposed a gel image analysis method for band detection in which lane selection is performed manually, i.e., users have to select or reject the extracted lane images themselves. In [16], Helena et al. proposed a method for the removal of distortion from gel images, which resolves the problem of non-uniformity in the electrophoresis process via contrast adjustment and median filtering; peaks were then obtained based on relative intensity. Abeykoon et al. [17] and Samira et al. [18] presented lane extraction methods based on molecular weight and the distance traveled in the respective lane. In [19], another lane extraction method was presented, which estimates the average lane width of gel electrophoresis images via k-means clustering. Local maxima are calculated on a small portion of the image as potential lane centers, and lanes are then segmented using local minima. Yang et al. [20] performed lane extraction on gel electrophoresis images, in which the image was preprocessed by removing the grid texture from the background, then a watershed algorithm was applied to calculate the centerlines of the bands, and finally the shortest path algorithm was used to recover the shape. All these existing techniques perform well, with reasonable accuracy, but they require input from the user to adjust the region of interest and other parameters manually.
It is apparent from the above discussion that challenges exist with the current electrophoresis technique when used for the detection of thalassaemia patients by computer-aided diagnostic systems with minimal human intervention. One challenge is to automatically extract the lane image for a single patient and classify it with a reasonable degree of accuracy based on the electrophoresis image. At present, thalassaemia diagnosis with electrophoresis images still requires human experts to interpret the images by comparing them with standard normal images. Several novel contributions are reported in this work:
  • A relatively large electrophoresis image dataset was created from 824 patients (normal and thalassaemia);
  • An automatic lane extraction technique has been proposed, which efficiently detects and extracts lanes from the given cellulose sheet, yielding a single patient’s electrophoresis image;
  • Seven different pre-trained CNN models were investigated for the classification of electrophoresis images; and
  • Image visualization techniques have been applied to investigate which image regions the CNN model learns from.
The article is organized into the following sections: Section 1 discusses the motivation, background and novelty of this work. Section 2 discusses the background of popular CNN models and Section 3 describes the visualization techniques. Section 4 illustrates the methodology of the work. Section 5 illustrates the results and provides a discussion of the experiments carried out in this study, whereas Section 6 concludes the work.

2. Deep Convolutional Neural Networks

Deep CNNs have been popularly used in image classification due to their superior performance compared to other machine learning paradigms. The network structure automatically extracts the spatial and temporal features of an image. The approach of transfer learning has been successfully incorporated in many applications [21,22], especially where large datasets can be hard to find. Thus, this has created the opportunity to utilize smaller datasets and has also reduced the time required to develop a deep learning algorithm from scratch [23,24]. For COVID-19 detection, nine pre-trained deep learning CNNs, such as ResNet18, ResNet50, ResNet101 [25], DenseNet201 [26], and InceptionV3 [27,28], were predominantly used in the literature. Residual networks (ResNets for short), which have several variants, solve vanishing gradient and degradation problems [25] and learn from residuals instead of features [29]. A dense convolutional network (DenseNet for short) requires a smaller number of parameters than a conventional CNN, as it does not learn redundant feature maps. DenseNet has layers with direct access to the original input image and loss function gradients. InceptionV3 is a CNN architecture from the Inception family that makes several improvements, including using label smoothing, factorized 7 × 7 convolutions, and the use of an auxiliary classifier to propagate label information lower down the network (along with the use of batch normalization for layers in the side head). This network scales in ways that strive to use the added computation as effectively as possible through correctly factorized convolutions and aggressive regularization [28]. SqueezeNet involves three main strategies: (1) replacing 3 × 3 filters with 1 × 1 filters, (2) decreasing the number of input channels to 3 × 3 filters and (3) down-sampling late in the network so that convolution layers have large activation maps.
Strategies 1 and 2 are based on judiciously decreasing the quantity of parameters in a CNN while attempting to preserve accuracy. Strategy 3 entails maximizing accuracy with a limited parameter budget. In MobileNetV2, each block contains a 1 × 1 expansion layer, in addition to a depthwise and a pointwise convolutional layer. Unlike V1, the pointwise convolutional layer of V2, known as the projection layer, projects data with a high number of channels into a tensor with a much lower number of channels. In the bottleneck residual block, the output of each block is a bottleneck tensor. A 1 × 1 expansion convolutional layer expands the number of channels, depending on the expansion factor, before the data undergo the depthwise convolution process. The second novel component in MobileNetV2 is the residual connection, which exists to help the flow of gradients through the network. Each layer of MobileNetV2 involves batch normalization and a rectified linear unit (ReLU) as the activation function. However, the output of the projection layer does not have an activation function. The full MobileNetV2 architecture consists of 17 bottleneck residual blocks in a row, followed by a regular 1 × 1 convolution and a global average pooling layer. In this work, seven different pre-trained CNN models (three variants of ResNet, InceptionV3, DenseNet201, SqueezeNet and MobileNetV2) were investigated.
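The efficiency argument behind MobileNetV2’s depthwise and pointwise layers can be made concrete with a parameter count. The following sketch (illustrative channel sizes, not taken from this paper) compares a standard convolution with its depthwise separable counterpart:

```python
def standard_conv_params(in_ch, out_ch, k):
    # A standard k x k convolution mixes all input channels for every
    # output channel (bias terms ignored for simplicity).
    return in_ch * out_ch * k * k

def depthwise_separable_params(in_ch, out_ch, k):
    # Depthwise step: one k x k filter per input channel.
    depthwise = in_ch * k * k
    # Pointwise step: a 1 x 1 convolution mixes the channels.
    pointwise = in_ch * out_ch
    return depthwise + pointwise

# Illustrative layer: 32 input channels, 64 output channels, 3 x 3 kernel.
standard = standard_conv_params(32, 64, 3)         # 18432 parameters
separable = depthwise_separable_params(32, 64, 3)  # 2336 parameters (~8x fewer)
```

The roughly eight-fold reduction at each layer is what keeps the full network small enough for mobile deployment, which is consistent with MobileNetV2 having the lowest parameter count among the models compared later in Table 4.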

3. Visualization Techniques

Increasing interest in the internal mechanisms of CNNs and the reasoning behind the specific decisions made by networks has led to the development of visualization techniques. The visualization techniques help in obtaining a better visual representation in order to interpret the decision-making processes of CNNs. These also increase the model’s transparency by visualizing the logic behind the inference in a way that is easily understandable to humans, thereby increasing confidence in the outcomes of neural networks. Amongst the various visualization techniques, such as SmoothGrad [30], Grad-CAM [31], Grad-CAM++ [32] and Score-CAM [33], the recently proposed Score-CAM was used in this work due to its promising performance. Score-CAM avoids the dependence on gradients by obtaining the weight of each activation map through its forward passing score on the target class. The final result is obtained through a linear combination of weights and activation maps. A sample image visualization obtained using Score-CAM is shown in Figure 2, where the heat map indicates regions that dominantly contributed to the decision-making of the CNN. This can be helpful in understanding how the network is making its decision and also in improving the confidence of the end-user when it can be confirmed that the network is always making decisions using the Hb electrophoresis images. For the specific lane images from the dataset, it is clearly visible that the model was making a decision based on the HbF and HbA bands, as this part is reddish. The blue zones indicate regions that contributed less to the network’s decision.
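Score-CAM’s gradient-free weighting can be sketched as follows. This is a simplified NumPy illustration, not the authors’ implementation: `model_score` is a hypothetical callable returning the target-class score for an input, and the published method additionally normalizes the weights with a softmax across activation maps and operates on CNN tensors rather than plain arrays.

```python
import numpy as np

def nn_resize(a, h, w):
    # Nearest-neighbour resize of a 2-D array to (h, w).
    rows = np.round(np.linspace(0, a.shape[0] - 1, h)).astype(int)
    cols = np.round(np.linspace(0, a.shape[1] - 1, w)).astype(int)
    return a[rows][:, cols]

def score_cam(model_score, image, activation_maps):
    """Weight each activation map by the forward-pass score of the
    correspondingly masked input, then linearly combine the masks."""
    h, w = image.shape
    cam = np.zeros((h, w))
    for amap in activation_maps:
        up = nn_resize(amap, h, w)
        rng = up.max() - up.min()
        mask = (up - up.min()) / rng if rng > 0 else np.zeros_like(up)
        weight = model_score(image * mask)  # score on the masked input
        cam += weight * mask
    return np.maximum(cam, 0)               # ReLU: keep positive evidence only
```

Regions of the final map with high values correspond to the reddish zones in Figure 2; low values correspond to the blue zones.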

4. Methodology

In this section, a deep learning method for the automated assessment of thalassaemia using electrophoresis images is presented. The proposed methodology includes database development, pre-processing of electrophoresis images, object detection, lane extraction and finally the classification of normal and thalassaemia classes. The steps of the proposed methodology are shown in Figure 3, and are detailed in the following subsections.

4.1. Database Development

The dataset used in this research was recorded over the course of two years at Lady Reading Hospital in Peshawar, Pakistan. The strips were scanned and anonymized. The assessment and labeling of Hb electrophoresis images were performed by three independent haematologists. These were selected based on their post-graduate qualifications as haematologists. All three were responsible for diagnostic clinical work in their respective hospitals. Figure 4 shows sample raw electrophoresis images from the dataset. The database consisted of 103 strips containing electrophoresis images from 824 subjects. Each strip contained 8 patients’ images. A total of 824 images were analyzed. Images with a clear consensus on the quality of electrophoresis were included. Cases with poor images or diagnoses other than beta thalassaemia trait, as well as some normal images causing a data imbalance, were excluded. Finally, a total of 524 images were included in the analysis. A selection process was performed in order to have 50% in each class (thalassaemia or normal). After some preparations, each subject’s sample was applied on the test strip at the point of application, as shown in Figure 5. The bands were formed by Hb variants according to their weights. HbA moves faster and makes the first band, due to its lighter weight. Similarly, HbA2 moves slowly and is left behind, thus making the last band in the image due to its heavier weight. After the separation of haemoglobin, the diagnosis can be made on the basis of the percentage of each variant in the image.

4.2. Pre-Processing

Several pre-processing steps were applied to the raw images to extract the slides for each patient. The most common steps were (i) filtering and thresholding, (ii) object detection, (iii) erosion and dilation and (iv) boundary detection and lane extraction. Figure 6 shows different filtering and thresholding steps, depicting raw images (A), RGB to grayscale conversion (B), complementing the image (C) and image thresholding (D). Each step is separately discussed in the following subsections.

4.2.1. Filtering and Thresholding

In electrophoresis images, most of the information lies in the low frequencies, whereas noise is present in high-frequency pixel values. A Gaussian filter, also known as a blur filter, with a kernel size of 5 × 5 was applied to the complemented images. Gaussian filters are good for noise removal, but the filtered image becomes blurred [34]. The noise removal capability and the associated blurring of the image can be adjusted via the selection of appropriate parameters.
Thresholding is commonly used for the purpose of segmentation. In [35], a gradient analysis approach for the calculation of the threshold value was adopted. Otsu thresholding was used in this work; it selects the threshold that minimizes the intra-class variance (equivalently, maximizes the inter-class variance) between the background and foreground. Afterwards, the optimum threshold value was used for the conversion of grayscale images into binary images.
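Otsu’s method can be sketched in a few lines of NumPy. This is a minimal illustration of the technique, not the paper’s implementation (a library routine such as OpenCV’s `cv2.threshold` with the Otsu flag would serve equally well in practice):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold of an 8-bit grayscale image: the value
    that maximizes the between-class (foreground/background) variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)                   # cumulative class-0 weight
    cum_m = np.cumsum(hist * np.arange(256))  # cumulative class-0 mean mass
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 = cum_w[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue                          # one class empty: skip
        m0 = cum_m[t] / w0
        m1 = (cum_m[-1] - cum_m[t]) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Binarize: foreground where intensity exceeds the Otsu threshold.
# binary = gray > otsu_threshold(gray)
```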

4.2.2. Object Detection

There were data from eight subjects in every scanned image of a strip. To separate each of them from one another, an object detection technique was used. This process involves two further steps, erosion and dilation followed by boundary detection, to better separate the lanes.

4.2.3. Erosion and Dilation

Erosion and dilation are two fundamental nonlinear operations related to the shape of the features in an image [36]. Dilation adds pixels to the boundary of the object in an image, whereas erosion removes pixels on the object boundaries. In the proposed method, an erosion operator was employed twice, using two different scales to erode the image. This was carried out because some of the blood molecules in consecutive lanes combined with one another and it became difficult to detect them separately. In order to address this problem, we suggested first applying the erosion operator, followed by dilation to compensate for the erosion. For this process, a structuring element of 5 × 5 was selected.
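The erosion-then-dilation step can be sketched with plain NumPy min/max filtering over a square structuring element. This is a simplified stand-in for a standard morphology library, shown here with the paper’s 5 × 5 element as the default:

```python
import numpy as np

def _morph(binary, size, op):
    # Apply min (erosion) or max (dilation) over a size x size window.
    pad = size // 2
    # Pad with the identity element of the operation so borders behave sensibly.
    fill = 1 if op is np.min else 0
    padded = np.pad(binary, pad, constant_values=fill)
    h, w = binary.shape
    stacked = np.stack([padded[i:i + h, j:j + w]
                        for i in range(size) for j in range(size)])
    return op(stacked, axis=0)

def erode(binary, size=5):
    # Removes pixels on object boundaries, separating touching blobs.
    return _morph(binary, size, np.min)

def dilate(binary, size=5):
    # Adds pixels to object boundaries, restoring lane width after erosion.
    return _morph(binary, size, np.max)

# Erosion first separates merged lanes; dilation then compensates:
# separated = dilate(erode(binary_image))
```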

4.2.4. Boundary Detection

In the proposed methodology, boundary detection was performed using connected components. The main function of connected-component analysis is to distinguish large connected foreground regions of an image from the background regions. The connected pixels were clustered together, as illustrated in Figure 7C. Subsequently, based on the object boundaries, the point of separation for each lane was predicted. The algorithm was used with 8-pixel connectivity.
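A minimal sketch of 8-connectivity connected-component labeling, returning labels together with the bounding boxes the lane-separation step needs. The breadth-first-search formulation here is an illustration; the paper does not specify its exact routine:

```python
from collections import deque

def connected_components(binary):
    """Label 8-connected foreground regions of a 2-D 0/1 grid.
    Returns (labels, boxes), boxes[label] = [min_row, min_col, max_row, max_col]."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    boxes = {}
    next_label = 0
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not labels[r][c]:
                next_label += 1
                labels[r][c] = next_label
                boxes[next_label] = [r, c, r, c]
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    box = boxes[next_label]
                    box[0], box[1] = min(box[0], y), min(box[1], x)
                    box[2], box[3] = max(box[2], y), max(box[3], x)
                    for dy in (-1, 0, 1):        # 8-pixel connectivity:
                        for dx in (-1, 0, 1):    # all diagonal neighbours too
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w \
                                    and binary[ny][nx] and not labels[ny][nx]:
                                labels[ny][nx] = next_label
                                queue.append((ny, nx))
    return labels, boxes
```

Each resulting bounding box corresponds to one candidate lane region on the strip.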

4.2.5. Lane Extraction

For further analysis of electrophoresis test images, each lane had to be extracted from the image. For lane extraction, the first step was to determine the boundary pixels using boundary points obtained from the connected-component analysis, as shown in Figure 7B, and the next step was to crop the original RGB images using the obtained boundary pixels, as depicted in Figure 7C. As a result, the test image was divided into separated lanes, as shown in Figure 7D, where each lane represented the test result of an individual patient. The cropped images were converted into a fixed-size format. The chosen size was a 30 × 150 pixel RGB image (3 channels). These fixed-size cropped lane images were fed as inputs into the classifier.
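The crop-and-resize step can be sketched as follows. This is a minimal illustration assuming (top, left, bottom, right) bounding boxes and taking the 30 × 150 size as 150 rows by 30 columns (the orientation is an assumption, since lanes are tall and narrow); nearest-neighbour resampling is used for simplicity:

```python
import numpy as np

def nn_resize(img, h, w):
    # Nearest-neighbour resize of an H x W x C image to h x w.
    rows = np.round(np.linspace(0, img.shape[0] - 1, h)).astype(int)
    cols = np.round(np.linspace(0, img.shape[1] - 1, w)).astype(int)
    return img[rows][:, cols]

def extract_lanes(rgb, boxes, out_h=150, out_w=30):
    """Crop each lane by its bounding box and rescale to a fixed size."""
    lanes = []
    for top, left, bottom, right in boxes:
        crop = rgb[top:bottom + 1, left:right + 1]
        lanes.append(nn_resize(crop, out_h, out_w))
    return lanes
```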

4.3. Thalassaemia Detection

Seven different CNN models were trained, validated and tested separately using thalassaemia images for the classification of thalassaemia and non-thalassaemia (normal) results. The complete set of images was divided into 80% training and 20% testing subsets for five-fold cross-validation, and 20% of the training data were used for validation. Table 1 shows the number of training, validation and test images used for the experiments. To balance the training image sets for both classes, the training images were augmented 14-fold, giving 2835 images per fold. Three augmentation strategies (rotation, scaling and translation) were utilized. Rotation was performed from 3° to 5° clockwise. The scaling operation involves the magnification or reduction of the frame size of the image; 2.5% to 10% image magnifications were used in this work. Image translation was accomplished by translating images horizontally and vertically by 5% to 10%.
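The three augmentation operations can be sketched in NumPy as below. This is an illustration only: the paper does not specify its augmentation implementation, and nearest-neighbour resampling with zero fill is a simplifying assumption.

```python
import numpy as np

def nn_resize(img, h, w):
    rows = np.round(np.linspace(0, img.shape[0] - 1, h)).astype(int)
    cols = np.round(np.linspace(0, img.shape[1] - 1, w)).astype(int)
    return img[rows][:, cols]

def rotate(img, degrees):
    """Rotate about the image centre (nearest-neighbour, zero fill)."""
    h, w = img.shape[:2]
    theta = np.deg2rad(degrees)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    y0, x0 = ys - cy, xs - cx
    # Inverse mapping: for each output pixel, sample the source location.
    src_y = np.round(cy + y0 * np.cos(theta) - x0 * np.sin(theta)).astype(int)
    src_x = np.round(cx + y0 * np.sin(theta) + x0 * np.cos(theta)).astype(int)
    valid = (src_y >= 0) & (src_y < h) & (src_x >= 0) & (src_x < w)
    out = np.zeros_like(img)
    out[valid] = img[src_y[valid], src_x[valid]]
    return out

def scale(img, factor):
    """Magnify (then centre-crop back) or shrink (then zero-pad) by `factor`."""
    h, w = img.shape[:2]
    rh, rw = max(1, int(round(h * factor))), max(1, int(round(w * factor)))
    resized = nn_resize(img, rh, rw)
    if factor >= 1:
        top, left = (rh - h) // 2, (rw - w) // 2
        return resized[top:top + h, left:left + w]
    out = np.zeros_like(img)
    top, left = (h - rh) // 2, (w - rw) // 2
    out[top:top + rh, left:left + rw] = resized
    return out

def translate(img, dy, dx):
    """Shift by integer pixel offsets, zero-filling the exposed border."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out
```

For the 30 × 150 lane images, a 5% vertical translation would be `translate(img, round(0.05 * 150), 0)`.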
All the CNNs were implemented using the PyTorch library with Python 3.7 on an Intel Xeon CPU E5-2697 v4 @ 2.30 GHz and 64 GB RAM, with a 16 GB NVIDIA GeForce GTX 1080 GPU. Three comparatively shallow networks (MobileNetV2, SqueezeNet and ResNet18) and four deep networks (InceptionV3, ResNet50, ResNet101 and DenseNet201) were evaluated in this study to investigate whether shallow or deep networks were more suitable for this application. Three different variants of ResNet were used to compare specifically the impact of shallow and deep networks with similar structures. The pre-trained CNN models were trained using the same training parameters and stopping criteria, as shown in Table 2. The stopping criteria meant that training would be stopped if the validation loss did not decrease for five consecutive epochs, and epoch patience was used to decrease the learning rate if the validation loss did not decrease for a few consecutive epochs. Fifteen back-propagation epochs were used for the classification problem. Five-fold cross-validation results were averaged to produce the final receiver operating characteristic (ROC) curve, confusion matrix and evaluation metrics. The use of image augmentation and a validation image set helped in avoiding the overfitting of the models [37].
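The stopping criteria described above can be sketched as a training-loop skeleton. Here `run_epoch` is a hypothetical stand-in for one epoch of training plus validation; the 5-epoch stopping patience follows the text, while the learning-rate patience and decay factor are illustrative values (the paper’s exact figures are in Table 2):

```python
def train_with_patience(run_epoch, max_epochs=15, stop_patience=5,
                        lr=1e-3, lr_patience=2, lr_factor=0.1):
    """Early stopping + learning-rate-on-plateau skeleton.
    run_epoch(lr) -> validation loss for one epoch."""
    best = float("inf")
    since_best = since_decay = 0
    history = []
    for epoch in range(max_epochs):
        loss = run_epoch(lr)
        history.append(loss)
        if loss < best:
            best, since_best, since_decay = loss, 0, 0
        else:
            since_best += 1
            since_decay += 1
            if since_decay >= lr_patience:
                lr *= lr_factor        # decay LR after a stagnant stretch
                since_decay = 0
            if since_best >= stop_patience:
                break                  # stop: no improvement for 5 epochs
    return best, history
```

In PyTorch itself, the same behaviour is typically obtained with `torch.optim.lr_scheduler.ReduceLROnPlateau` plus a manual early-stopping counter.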
$$\text{Accuracy} = \frac{TP + TN}{TP + FN + FP + TN} \tag{1}$$
$$\text{Recall} = \frac{TP}{TP + FN} \tag{2}$$
$$\text{Specificity} = \frac{TN}{TN + FP} \tag{3}$$
$$\text{Precision} = \frac{TP}{TP + FP} \tag{4}$$
$$\text{F1-Score} = \frac{2 \times TP}{2 \times TP + FP + FN} \tag{5}$$

4.4. Performance Metrics

The performance of different CNNs on the testing dataset was evaluated after the completion of the training and validation phases and was compared using six performance metrics: accuracy, recall, specificity, precision, area under the curve (AUC) and F1-score. The metrics were calculated using Equations (1)–(5) with 95% confidence intervals (CIs). Accordingly, the CI for each evaluation metric was computed, as shown in Equation (6).
$$\text{Confidence interval: } r = z \times \sqrt{\frac{\text{metric} \times (100 - \text{metric})}{N}} \tag{6}$$
where N is the number of test samples and z is the critical value for the chosen significance level, i.e., 1.96 for a 95% CI.
Here, true positive (TP), true negative (TN), false positive (FP) and false negative (FN) values were used to denote the number of thalassaemia images identified as thalassaemia, the number of normal images identified as normal, the number of normal images incorrectly identified as thalassaemia images and the number of thalassaemia images incorrectly identified as normal images, respectively. In addition, the networks could be compared in terms of the processing time for each test image. The processing time here refers to the time elapsed per image by a network when processing the images as a batch with parallelization. The processing time is represented in Equation (7), where t 1 and t 2 are the start and end time for a network to classify an image I, respectively. All time is measured in seconds. Based on the results, it is evident that the method can work on a low-end computer, even if there is no GPU, on an embedded computer or on mobile technology in negligible time compared to the time of electrophoresis.
$$\delta t_e = t_2 - t_1 \tag{7}$$
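Equations (1)–(6) translate directly into code. The sketch below computes the five classification metrics as percentages and the CI half-width; the confidence-interval form follows the normal-approximation reading of Equation (6):

```python
from math import sqrt

def classification_metrics(tp, tn, fp, fn):
    """Evaluation metrics of Equations (1)-(5), returned as percentages."""
    accuracy = 100 * (tp + tn) / (tp + tn + fp + fn)
    recall = 100 * tp / (tp + fn)           # sensitivity
    specificity = 100 * tn / (tn + fp)
    precision = 100 * tp / (tp + fp)
    f1 = 100 * 2 * tp / (2 * tp + fp + fn)
    return accuracy, recall, specificity, precision, f1

def confidence_interval(metric, n, z=1.96):
    """Half-width of the normal-approximation CI for a metric in percent
    (Equation (6)); z = 1.96 corresponds to a 95% interval."""
    return z * sqrt(metric * (100 - metric) / n)
```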

5. Results and Discussion

Seven different models were trained, validated and tested for the classification of thalassaemia and normal (non-thalassaemia) cases. The comparative performance of different CNNs in two-class classification between thalassaemia and normal images is shown in Table 3. It is apparent from Table 3 that all the evaluated pre-trained models performed very well in classifying thalassaemia and normal images in the two-class problem. Among the networks trained with thalassaemia and non-thalassaemia normal images, InceptionV3 and MobileNetV2 performed equally well in classifying the images. We did not necessarily find that deeper networks performed better; rather, MobileNetV2 is a very good example of transfer learning, performing on par with much deeper networks in this task. ResNet18 showed comparatively better performance than ResNet50 and ResNet101, which are much deeper networks. It can be seen that, apart from SqueezeNet, most of the shallow networks performed well in this task. This could be because electrophoresis-image-based classification was a comparatively straightforward problem for these pre-trained CNN models. The AUCs for InceptionV3, DenseNet201 and ResNet18 were close to each other, with that of InceptionV3 slightly better than the others, as shown in Table 3.
Figure 8 clearly shows that the ROC curve for InceptionV3 was the best among the seven CNN models, although most of the tested models performed very well in distinguishing thalassaemia from normal images. This reflects the fact that both shallow and deep CNNs can distinguish between thalassaemia and normal images with very high reliability. Although all the CNNs performed well, InceptionV3 showed the most outstanding classification performance; however, being the deepest network in the group, it was comparatively slow in processing the test images. Figure 9 shows the training and validation accuracy versus epochs and loss versus epochs for the best-performing network (InceptionV3). It can also be seen that the network reached its lowest loss value relatively early.
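For reference, the AUC values in Table 3 summarise the ROC curves of Figure 8, and the area can be computed directly from per-image class scores via the pairwise (Mann–Whitney) formulation. The sketch below assumes no tied scores; the labels and scores are illustrative, not taken from the study's data:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the pairwise (Mann-Whitney) view:
    the fraction of (positive, negative) pairs in which the positive
    example receives the higher score.  Tied scores are not handled."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1 for p in pos for n in neg if p > n)
    return wins / (len(pos) * len(neg))

# Two thalassaemia (1) and two normal (0) images with illustrative scores:
auc = roc_auc([0, 1, 0, 1], [0.5, 0.4, 0.35, 0.8])  # -> 0.75
```
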
In summary, InceptionV3 demonstrated the highest classification accuracy of 95.8% in discriminating thalassaemia from non-thalassaemia images. Figure 10 shows the confusion matrix for this best-performing model: 15 out of 262 thalassaemia images were misclassified as normal and 7 out of 262 normal images were misclassified as thalassaemia. This is clearly an outstanding performance for a computer-aided classifier and can significantly speed up thalassaemia screening by clinicians immediately after image acquisition. MobileNetV2 was the best-performing network in terms of elapsed time per image (δt_e), as shown in Figure 11, and it had one of the smallest numbers of parameters (second only to SqueezeNet), as shown in Table 4. Finally, MobileNetV2, despite being a shallow network, produced accuracy, precision and recall values of 95.61%, 95.66% and 95.60%, respectively, very close to the 95.80%, 95.84% and 95.80% achieved by InceptionV3, which is a deep network.
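These confusion-matrix counts can be checked against Table 3: taking thalassaemia as the positive class and macro-averaging the per-class metrics over the two classes (the averaging scheme is not stated explicitly in the paper, so this is an inference) reproduces the reported 95.80% accuracy, 95.84% precision and 95.80% recall:

```python
# Confusion matrix of the best model, InceptionV3 (Figure 10):
# 247 of 262 thalassaemia and 255 of 262 normal images classified correctly.
tp, fn = 247, 15          # thalassaemia taken as the positive class
tn, fp = 255, 7

accuracy = (tp + tn) / (tp + tn + fp + fn)

# Per-class precision and recall, then macro-averaged over the two classes.
prec_thal, prec_norm = tp / (tp + fp), tn / (tn + fn)
rec_thal, rec_norm = tp / (tp + fn), tn / (tn + fp)
macro_precision = (prec_thal + prec_norm) / 2   # 0.9584 -> 95.84%
macro_recall = (rec_thal + rec_norm) / 2        # 0.9580 -> 95.80%
```
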
In order to visualize the decision-making processes of the CNNs, visualization methods can be used to obtain a better visual representation. These also improve the transparency of the model by exposing the reasoning behind the inference in a way that can be easily understood by humans, thereby increasing trust in the results of the CNNs. In this study, Score-CAM was used due to its promising performance in the authors' previous work. Figure 12 provides examples of Score-CAM visualizations, highlighting the regions used by the CNNs in making decisions. Such visualization can help to increase confidence in the reliability of CNN models by confirming that decisions were based on relevant regions of the images. Figure 12A shows the raw Hb electrophoresis images and Figure 12B the corresponding Score-CAM heat maps, obtained using the best-performing CNN model. It is clearly visible that the models made their decisions based on the HbF, HbA and HbA2 bands in the Hb electrophoresis images. Note that blue in the Score-CAM images represents a lower contribution, whereas red represents a higher contribution.
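Conceptually, Score-CAM [33] weights each activation map of a chosen convolutional layer by the class score the model produces when the input is masked with that (upsampled, normalised) map, then sums the weighted maps and applies a ReLU. The pure-Python sketch below illustrates the idea under simplifying assumptions: nearest-neighbour upsampling, `score_fn` as a hypothetical stand-in for the model's class-score output, and the softmax over baseline-corrected scores used in the original formulation omitted for brevity:

```python
def upsample(a, H, W):
    """Nearest-neighbour upsampling of a 2-D list to H x W."""
    h, w = len(a), len(a[0])
    return [[a[i * h // H][j * w // W] for j in range(W)] for i in range(H)]

def score_cam(activations, image, score_fn):
    """Simplified Score-CAM: weight each activation map by the class score
    obtained when the input is masked with that (normalised) map."""
    H, W = len(image), len(image[0])
    cam = [[0.0] * W for _ in range(H)]
    for a in activations:
        up = upsample(a, H, W)
        lo = min(min(r) for r in up)
        hi = max(max(r) for r in up)
        rng = (hi - lo) or 1.0
        mask = [[(v - lo) / rng for v in row] for row in up]      # normalise to [0, 1]
        masked = [[image[i][j] * mask[i][j] for j in range(W)] for i in range(H)]
        w_k = score_fn(masked)                                    # weight for this map
        for i in range(H):
            for j in range(W):
                cam[i][j] += w_k * mask[i][j]
    # ReLU, then normalise the heat map to [0, 1]
    peak = max(max(r) for r in cam)
    return [[max(v, 0.0) / peak if peak > 0 else 0.0 for v in row] for row in cam]
```

Red regions in Figure 12 then correspond to heat-map values near 1, blue regions to values near 0.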
The performance of our proposed method was compared with recently published works in the same problem domain. Table 5 summarizes a comparison of the results presented in this paper with those of others in the detection of thalassaemia. In recent studies [10,38,39], detection accuracies of up to 93.7% were reported, using databases consisting of small numbers of samples. In our study, we used a larger dataset and obtained consistent results. We also used lane extraction techniques and evaluated classification performance across seven different CNN models, which makes our method more robust and versatile, with 95.8% accuracy. Moreover, the behaviour of our model was examined using Score-CAM-based visualization to confirm that learning was based on the dominant regions identified in the CNN-based classification tasks.
Despite the promising results obtained in this study, there are a few limitations of the proposed approach that require further investigation and can be addressed in extensions of this study:
  • The presence and concentration of HbA2 and HbF determine the severity of the disease. This classifier only identifies beta thalassaemia minor, which involves a specific concentration of Hb; in Southeast Asia, this accounts for about 15% of cases. Such individuals normally do not have any clinically significant issues as long as they maintain Hb levels between 9 and 12 g/dL, and no therapy is needed. Moreover, since this is the first computer-aided diagnosis technique for electrophoresis images, this study is limited to binary classification.
  • The populations may differ between studies (e.g., different countries with different levels of healthcare). High confidence intervals were obtained, and the results for the proposed approach were the best among the nine other methods, so there may be a slight over-estimation. The ground truth used here was expert evaluation, which may not have been the case for the other methods. The conclusion that processing Hb electrophoresis images with deep learning outperforms other techniques therefore needs to be confirmed through a blind study designed for this purpose, in which all techniques are applied to the same patients.

6. Conclusions

In this study, a novel deep-learning-based approach to thalassaemia screening has been investigated for the automated assessment of thalassaemia. The primary tasks were to automatically extract the lanes from strips of electrophoresis images and to identify subjects as normal or abnormal with respect to thalassaemia. The proposed method was tested on data from 524 patients, showing a thalassaemia classification accuracy of 95.8% with the InceptionV3 network. This is the first attempt to classify electrophoresis test images using a deep learning classification model, and the method achieved reasonable accuracy. The proposed technique avoids manual assessment and introduces automated assessment and detection into the thalassaemia screening pipeline. The results also show that, using electrophoresis images with CNNs, researchers can achieve good results compared to other, more expensive tests and techniques that can be adopted for thalassaemia screening. In the future, we intend to increase the number of electrophoresis images in the developed database and to develop a hardware solution for automated thalassaemia screening.

Author Contributions

Formal analysis, A.U., M.S.K. and T.R.; Investigation, H.R., Y.M.Y., M.S.K. and T.R.; Methodology, A.U., M.S.K. and T.R.; Resources, M.S.K. and S.B.A.K.; Supervision, M.S.K. and M.E.H.C.; Validation, M.E.H.C., M.S.K. and T.R.; Visualization, K.N.K., A.U. and T.R.; Writing—original draft, A.U., K.N.K. and T.R.; Writing—review & editing, M.S.K., M.E.H.C. and T.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Higher Education Commission of Pakistan through the Artificial Intelligence in Healthcare project, Intelligent Information Processing Lab, National Center of Artificial Intelligence.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We gratefully acknowledge the support of Artificial Intelligence in Healthcare, Intelligent Information Processing Lab, National Center of Artificial Intelligence, University of Engineering and Technology, Peshawar, Pakistan.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tanveer, T.; Masud, H.; Butt, Z.A. Are people getting quality thalassemia care in twin cities of Pakistan? A comparison with international standards. Int. J. Qual. Health Care 2018, 30, 200–207.
  2. Galanello, R.; Origa, R. Beta-thalassemia. Orphanet J. Rare Dis. 2010, 5, 11.
  3. Cao, A.; Galanello, R. Beta-thalassemia. Genet. Med. 2010, 12, 61–76.
  4. Taher, A.T.; Weatherall, D.J.; Cappellini, M.D. Thalassaemia. Lancet 2018, 391, 155–167.
  5. Weatherall, D.; Clegg, J.B. Inherited haemoglobin disorders: An increasing global health problem. Bull. World Health Organ. 2001, 79, 704–712.
  6. Angastiniotis, M.; Modell, B. Global epidemiology of hemoglobin disorders. Ann. N. Y. Acad. Sci. 1998, 850, 251–269.
  7. Ahmed, S.; Saleem, M.; Modell, B.; Petrou, M. Screening extended families for genetic hemoglobin disorders in Pakistan. N. Engl. J. Med. 2002, 347, 1162–1168.
  8. Bozkurt, G. Results from the North Cyprus thalassemia prevention program. Hemoglobin 2007, 31, 257–264.
  9. Shaikh, A.; Khurshid, M. Prevalence of thalassemia minor trait in Pakistani population presented at Akuh for complete blood count estimation (CBC). J. Pak. Med. Assoc. 1993, 43, 98.
  10. Elshami, E.H.; Alhalees, A.M. Automated diagnosis of thalassemia based on datamining classifiers. In Proceedings of the International Conference on Informatics and Applications (ICIA2012), Kuala Terengganu, Malaysia, 3–5 June 2012; pp. 440–445.
  11. Purwar, S.; Tripathi, R.K.; Ranjan, R.; Saxena, R. Detection of microcytic hypochromia using CBC and blood film features extracted from convolution neural network by different classifiers. Multimed. Tools Appl. 2020, 79, 4573–4595.
  12. Kan, Y.W.; Nathan, D.G. Mild thalassemia: The result of interactions of alpha and beta thalassemia genes. J. Clin. Investig. 1970, 49, 635–642.
  13. Galanello, R.; Barella, S.; Gasperini, D.; Perseu, L.; Paglietti, E.; Sollaino, C.; Paderi, L.; Pirroni, M.; Maccioni, L.; Mosca, A. Evaluation of an automatic HPLC analyser for thalassemia and haemoglobin variants screening. J. Autom. Chem. 1995, 17, 73–76.
  14. Kazazian, H.H. Use of PCR in the diagnosis of monogenic disease. In PCR Technology; Springer: Berlin/Heidelberg, Germany, 1989; pp. 153–169.
  15. Bajla, I.; Holländer, I.; Burg, K. Improvement of electrophoretic gel image analysis. Meas. Sci. Rev. 2001, 1, 5–10.
  16. Skutkova, H.; Vitek, M.; Krizkova, S.; Kizek, R.; Provaznik, I. Preprocessing and classification of electrophoresis gel images using dynamic time warping. Int. J. Electrochem. Sci. 2013, 8, 1609–1622.
  17. Abeykoon, A.; Dhanapala, M.; Yapa, R.; Sooriyapathirana, S. An automated system for analyzing agarose and polyacrylamide gel images. Ceylon. J. Sci. (Biol. Sci.) 2015, 44, 45–54.
  18. Khodabakhshi, S.; Hassanpour, H. Automatic lane extraction in hemoglobin and serum protein electrophoresis using image processing. J. Adv. Comput. Res. 2012, 3, 25–31.
  19. Park, S.C.; Na, I.S.; Han, T.H.; Kim, S.H.; Lee, G.S. Lane detection and tracking in PCR gel electrophoresis images. Comput. Electron. Agric. 2012, 83, 85–91.
  20. Akay, A.; Dragomir, A.; Yardimci, A.; Canatan, D.; Yesilipek, A.; Pogue, B.W. A data-mining approach for investigating social and economic geographical dynamics of β-thalassemia’s spread. IEEE Trans. Inf. Technol. Biomed. 2009, 13, 774–780.
  21. Christodoulidis, S.; Anthimopoulos, M.; Ebner, L.; Christe, A.; Mougiakakou, S. Multisource transfer learning with convolutional neural networks for lung pattern analysis. IEEE J. Biomed. Health Inform. 2016, 21, 76–84.
  22. Akçay, S.; Kundegorski, M.E.; Devereux, M.; Breckon, T.P. Transfer learning using convolutional neural networks for object classification within x-ray baggage security imagery. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1057–1061.
  23. Tajbakhsh, N.; Shin, J.Y.; Gurudu, S.R.; Hurst, R.T.; Kendall, C.B.; Gotway, M.B.; Liang, J. Convolutional neural networks for medical image analysis: Full training or fine tuning? IEEE Trans. Med. Imaging 2016, 35, 1299–1312.
  24. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359.
  25. Rahman, T.; Chowdhury, M.E.; Khandakar, A.; Islam, K.R.; Islam, K.F.; Mahbub, Z.B.; Kadir, M.A.; Kashem, S. Transfer Learning with Deep Convolutional Neural Network (CNN) for Pneumonia Detection using Chest X-ray. Appl. Sci. 2020, 10, 3233.
  26. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
  27. Rajpurkar, P.; Irvin, J.; Zhu, K.; Yang, B.; Mehta, H.; Duan, T.; Ding, D.; Bagul, A.; Langlotz, C.; Shpanskaya, K.; et al. Chexnet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv 2017, arXiv:1711.05225.
  28. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
  29. LeCun, Y.; Kavukcuoglu, K.; Farabet, C. Convolutional networks and applications in vision. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems, Paris, France, 30 May–2 June 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 253–256.
  30. Smilkov, D.; Thorat, N.; Kim, B.; Viégas, F.; Wattenberg, M. Smoothgrad: Removing noise by adding noise. arXiv 2017, arXiv:1706.03825.
  31. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626.
  32. Chattopadhay, A.; Sarkar, A.; Howlader, P.; Balasubramanian, V.N. Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 839–847.
  33. Wang, H.; Wang, Z.; Du, M.; Yang, F.; Zhang, Z.; Ding, S.; Mardziel, P.; Hu, X. Score-CAM: Score-weighted visual explanations for convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 24–25.
  34. Gedraite, E.S.; Hadad, M. Investigation on the effect of a Gaussian Blur in image filtering and segmentation. In Proceedings of the ELMAR-2011, Zadar, Croatia, 14–16 September 2011; pp. 393–396.
  35. Yildiz, Z. A New Approach for Counting and Sizing the Objects: Image Weight Signal. Doctoral Dissertation, Ankara Yildirim Beyazit Universitesi Fen Bilimleri Enstitusu, Ankara, Turkey, 2016.
  36. Jawas, N.; Suciati, N. Image inpainting using erosion and dilation operation. Int. J. Adv. Sci. Technol. 2013, 51, 127–134.
  37. Pera, M. Explorando redes Neuronales Convolucionales para Reconocimiento de Objetos en Imágenes RGB. 2020. Available online: https://repositorio.unican.es/xmlui/handle/10902/19259 (accessed on 22 July 2022).
  38. Kabootarizadeh, L.; Jamshidnezhad, A.; Koohmareh, Z. Differential diagnosis of iron-deficiency anemia from β-thalassemia trait using an intelligent model in comparison with discriminant indexes. Acta Inform. Med. 2019, 27, 78.
  39. Das, R.; Datta, S.; Kaviraj, A.; Sanyal, S.N.; Nielsen, P.; Nielsen, I.; Sharma, P.; Sanyal, T.; Dey, K.; Saha, S. A decision support scheme for beta thalassemia and HbE carrier screening. J. Adv. Res. 2020, 24, 183–190.
  40. Wongseree, W.; Chaiyaratana, N.; Vichittumaros, K.; Winichagoon, P.; Fucharoen, S. Thalassaemia classification by neural networks and genetic programming. Inf. Sci. 2007, 177, 771–786.
  41. Setsirichok, D.; Piroonratana, T.; Wongseree, W.; Usavanarong, T.; Paulkhaolarn, N.; Kanjanakorn, C.; Sirikong, M.; Limwongse, C.; Chaiyaratana, N. Classification of complete blood count and haemoglobin typing data by a C4.5 decision tree, a naïve Bayes classifier and a multilayer perceptron for thalassaemia screening. Biomed. Signal Process. Control 2012, 7, 202–212.
  42. Paokanta, P.; Ceccarelli, M.; Srichairatanakool, S. The effeciency of data types for classification performance of Machine Learning Techniques for screening β-Thalassemia. In Proceedings of the 2010 3rd International Symposium on Applied Sciences in Biomedical and Communication Technologies (ISABEL 2010), Roma, Italy, 7–10 November 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 1–4.
  43. Amendolia, S.R.; Brunetti, A.; Carta, P.; Cossu, G.; Ganadu, M.; Golosio, B.; Mura, G.M.; Pirastru, M.G. A real-time classification system of thalassemic pathologies based on artificial neural networks. Med. Decis. Mak. 2002, 22, 18–26.
  44. HosseiniEshpala, R.; Langarizadeh, M.; KamkarHaghighi, M.; Banafsheh, T. Designing an expert system for differential diagnosis of β-Thalassemia minor and Iron-Deficiency anemia using neural network. Hormozgan Med. J. 2016, 20, 1–9.
  45. Marzuki, N.I.B.C.; Bin Mahmood, N.H.; Bin Abdul Razak, M.A. Identification of thalassemia disorder using active contour. Indones. J. Electr. Eng. Comput. Sci. 2017, 6, 160–165.
  46. Borah, M.S.; Bhuyan, B.P.; Pathak, M.S.; Bhattacharya, P. Machine learning in predicting hemoglobin variants. Int. J. Mach. Learn. Comput. 2018, 8, 140–143.
Figure 1. (A) Raw Hb electrophoresis images from eight subjects, and (B) standard normal and beta thalassaemia patterns in Hb electrophoresis images.
Figure 2. Score-CAM heat map on Hb electrophoresis test image.
Figure 3. Overview of the complete methodology used in this study.
Figure 4. Sample electrophoresis raw images from the dataset.
Figure 5. Description of a single electrophoresis image from a patient in an eight-patient strip image.
Figure 6. Pre-processing techniques applied to a sample test image: (A) raw image, (B) RGB to grayscale conversion, (C) image complementation, (D) thresholding of electrophoresis images.
Figure 7. Pre-processing techniques of grayscale images for lane extraction: (A) grayscale image, (B) connected object detection, (C) extracted lane images and (D) cropped images.
Figure 8. Comparison of the ROC curves for normal and thalassaemia classification using CNN models.
Figure 9. Training and validation (A) accuracy versus epoch and (B) loss versus epoch for the best-performing trained CNN network.
Figure 10. Confusion matrix for normal and thalassaemia classification for the best-performing model (InceptionV3).
Figure 11. Comparison of accuracy versus the elapsed time per image using seven different networks for thalassaemia classification.
Figure 12. Score-CAM visualization of (A) Hb electrophoresis images, and (B) corresponding Score-CAM heat-map for different subjects.
Table 1. Details of training, validation and test sets for classification problems.
Types | Total No. of Hb Electrophoresis Images/Class | Training Set/Fold (without / with Image Augmentation) | Validation Set/Fold | Test Set/Fold
Normal | 262 | 189 / 189 × 15 = 2835 | 21 | 52
Thalassaemia | 262 | 189 / 189 × 15 = 2835 | 21 | 52
Table 2. Details of training parameters used for classification.
Training Parameter | Value
batch size | 16
learning rate | 0.001
epochs | 15
epoch patience | 4
stopping criteria | 5
optimizer | Adam
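The "epoch patience" and "stopping criteria" entries of Table 2 refer to early stopping on the validation loss. Their exact semantics are not spelled out in the text, so the sketch below shows one common interpretation, stopping once the validation loss has not improved for `patience` consecutive epochs (the role of the "stopping criteria 5" entry is ambiguous and is not modelled here):

```python
def early_stopping_epoch(val_losses, patience=4, max_epochs=15):
    """Return the (1-based) epoch at which training stops, i.e. when the
    validation loss has not improved for `patience` consecutive epochs."""
    best, waited = float("inf"), 0
    for epoch, loss in enumerate(val_losses[:max_epochs], start=1):
        if loss < best:
            best, waited = loss, 0        # improvement: reset the counter
        else:
            waited += 1                   # no improvement this epoch
            if waited >= patience:
                return epoch
    return min(len(val_losses), max_epochs)

# Illustrative validation-loss curve: the loss last improves at epoch 3,
# so with patience = 4 training stops at epoch 7.
losses = [0.9, 0.7, 0.6, 0.65, 0.64, 0.66, 0.61, 0.63]
stop = early_stopping_epoch(losses)  # -> 7
```
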
Table 3. Comparison of the performance of different CNN models in thalassaemia classification.
Network | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | Specificity (%) | Elapsed Time for Testing (per Image) (δt_e) (Seconds) | Area under the Curve (AUC)
InceptionV3 | 95.80 ± 1.72 | 95.84 ± 2.42 | 95.80 ± 2.43 | 95.80 ± 2.43 | 95.80 ± 2.43 | 0.97 | 96.88
MobileNetV2 | 95.61 ± 1.75 | 95.66 ± 2.47 | 95.60 ± 2.48 | 95.63 ± 2.48 | 95.60 ± 2.48 | 0.24 | 95.60
DenseNet201 | 95.42 ± 1.79 | 95.44 ± 2.53 | 95.42 ± 2.53 | 95.42 ± 2.53 | 95.42 ± 2.53 | 0.68 | 96.75
ResNet18 | 95.23 ± 1.82 | 95.26 ± 2.57 | 95.22 ± 2.58 | 95.22 ± 2.58 | 95.22 ± 2.58 | 0.31 | 96.65
ResNet50 | 94.08 ± 2.02 | 94.12 ± 2.85 | 94.08 ± 2.86 | 94.08 ± 2.86 | 94.08 ± 2.86 | 0.44 | 95.22
ResNet101 | 94.85 ± 1.89 | 94.96 ± 2.65 | 94.85 ± 2.68 | 94.84 ± 2.68 | 94.85 ± 2.68 | 0.60 | 93.85
SqueezeNet | 93.89 ± 2.05 | 93.90 ± 2.90 | 93.89 ± 2.90 | 93.89 ± 2.90 | 93.89 ± 2.90 | 0.28 | 93.02
Table 4. Different trained CNN network parameters.
Network | Parameters (Millions)
InceptionV3 | 23.9
MobileNetV2 | 3.5
DenseNet201 | 20
ResNet18 | 11.7
ResNet50 | 25.6
ResNet101 | 44.6
SqueezeNet | 1.24
Table 5. Comparison of the findings of this study with those of other recent similar works.
Reference | Pre-Processing/Features | Classifier/Technique | Accuracy
Waranyu et al. [40] (2007) | red blood cell, a reticulocyte and platelet | ANN and genetic programming | 87.22%
Damrongrit et al. [41] (2007) | HPLC | C4.5 decision tree, a naive Bayes classifier and MLP | 93.23%
Fatemeh Yousefian et al. [42] (2010) | HbA2 | MLP | 87.40%
Paokanta et al. [43] (2011) | HbA2 and Genotype | MLP, KNN and naive Bayes | 86.61%
Elshami et al. [10] (2012) | CBC | decision tree, naive Bayes and ANN | 93.70%
Hosseini et al. [44] (2016) | CBC | MLP | 92%
Marzuki et al. [45] (2017) | CBC and haemoglobin | active contour | 90%
Borah et al. [46] (2018) | MCV | KNN and SVM | 95.71%
Leila et al. [38] (2019) | CBC, MCH | ANN | 92.50%
Reena et al. [39] (2020) | haematological parameters | decision tree, naive Bayes and ANN | 91.74%
Proposed approach | Hb electrophoresis test | CNN (InceptionV3) | 95.8%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Salman Khan, M.; Ullah, A.; Khan, K.N.; Riaz, H.; Yousafzai, Y.M.; Rahman, T.; Chowdhury, M.E.H.; Abul Kashem, S.B. Deep Learning Assisted Automated Assessment of Thalassaemia from Haemoglobin Electrophoresis Images. Diagnostics 2022, 12, 2405. https://0-doi-org.brum.beds.ac.uk/10.3390/diagnostics12102405
