Article

An Intelligent Computer-Aided Scheme for Classifying Multiple Skin Lesions

1 School of Computer Science, University of Nottingham, Jubilee Campus, Nottingham NG8 1BB, UK
2 Computer Science Department, King Khalid University, Abha 61421, Saudi Arabia
3 School of Arts and Communication, Leeds Trinity University, Leeds LS18 5HD, UK
4 Department of Computer Science, Shaheed Zulfikar Ali Bhutto Institute of Science and Technology, Karachi 75600, Pakistan
5 Faculty of Science and Engineering, Anglia Ruskin University, Cambridge CB1 1PT, UK
6 School of Computing and Digital Technology, Teesside University, Middlesbrough TS1 3BX, UK
* Author to whom correspondence should be addressed.
Submission received: 19 July 2019 / Revised: 22 August 2019 / Accepted: 26 August 2019 / Published: 28 August 2019

Abstract

Cases of skin disease are increasing daily and are difficult to manage because of the global imbalance between skin disease patients and dermatologists. Skin diseases are among the top five leading causes of the worldwide disease burden. To reduce this burden, computer-aided diagnosis (CAD) systems are in high demand. A major shortcoming of existing work is that it addresses single-disease classification; because different skin diseases share similar characteristics, classifying multiple skin lesions is very challenging. This research work extends our existing work by proposing a novel multi-class classification scheme. The proposed framework classifies an input skin image into one of six non-overlapping classes, i.e., healthy, acne, eczema, psoriasis, benign and malignant melanoma. It comprises four steps, i.e., pre-processing, segmentation, feature extraction and classification, and different image processing and machine learning techniques are used to accomplish each step. Experiments were performed on 1800 images using 10-fold cross-validation, and an accuracy of 94.74% was achieved with a quadratic support vector machine. The proposed classification scheme can help patients obtain an early classification of skin lesions.

1. Introduction

Cases of skin lesions are increasing day by day and are a major contributor to the global disease burden; skin lesions rank fourth among its major causes [1]. The after-effects of skin lesions are severe. The burden of skin lesions is multi-dimensional and includes social, financial and psychological consequences for the patient's life and for society [2]. People of all ages suffer from skin diseases, but the young and the elderly suffer the most. Unemployment, self-harm, emotional distress, relationship loss, increased alcoholism and suicide are some of the prominent issues found in skin disease patients [3].
A huge gap exists between the number of skin disease patients and the expertise available to cope with them; the required resources include skilled dermatologists, equipment, medicines and researchers. According to the World Health Organization, people living in rural areas suffer the most because of this lack of resources [4]. Owing to this gross imbalance between skin patients and available expertise, automated expert systems for early skin lesion classification are required. Such classification systems can support the early diagnosis of skin lesions and help patients living in resource-limited areas.
Skin lesions can be categorized in several ways, e.g., fatal vs. non-fatal or viral vs. bacterial. Acne, eczema, psoriasis and melanoma are among the five most frequently occurring skin diseases [5]. Therefore, this research work investigates a multi-class method aimed at classifying the above-mentioned diseases; it is an extension of our existing work [6]. Acne is a chronic skin lesion mostly found in adults; it occurs mainly on the face, back and chest [7] and contributes approximately 0.29% of disability-adjusted life years (DALYs) [8]. The contribution of eczema to DALYs is 0.38% [8]. Eczema appears differently in different people: some have small patches, whereas others may have eczema over the full body; it causes the skin to become red, sore, dry and cracked [9]. Malignant melanoma, a kind of skin cancer, is a fatal disease caused by the excessive growth of melanin in melanocytic cells [10]; it is treatable if detected in the early stages. In 2018, approximately 99,550 new cases of malignant melanoma were diagnosed in the USA, and 13,460 of them were incurable, leading to death [11]. Malignant melanoma contributes 0.06% of DALYs [8]. Psoriasis is a non-infectious skin problem that causes red patches with white scaly plaques and clear boundaries around them [12]; its contribution to DALYs is 0.19% [8].
Most of the existing work on skin lesion classification considers a single disease [13,14,15,16,17,18], and little work has been done on multi-class skin lesion classification [19,20,21,22,23,24,25]. Because different skin diseases share similar characteristics, the computational analysis required for multi-class classification is very challenging. The core contribution of this research work is a novel intelligent expert classification scheme that classifies multiple diseases in order to provide dermatological care in resource-limited areas. Another contribution is a bag of features that can be extracted from multiple skin lesions. The proposed classification scheme can benefit people all around the world by classifying skin lesions in their early stages.
The rest of the article is structured as follows. The state of the art is reviewed in Section 2. The images and the sources from which they were collected are described in Section 3. The methodology of the proposed classification scheme is presented in Section 4. Results are presented and discussed in Section 5, and the conclusion and future work are provided in Section 6.

2. Literature Review

Researchers have been working on automated skin lesion classification since the 1990s [13,14,15,16,17,18,19]. The majority of the work in the literature classifies skin tumours [15,26,27,28,29,30,31], and only limited work addresses multi-disease classification [13,14,21,22,23,32,33]. Within the latter, most of the work relies on biopsy-extracted features [13,21,22,23,32,33]. Moreover, researchers who have worked on features extracted automatically from images have considered only single-disease classification [19,20,34].
For classifying erythemato-squamous diseases, an automated classification scheme using three different classifiers was proposed by Guvenir and colleagues [20]. The proposed expert system was trained on biopsy features, and 99.2% classification accuracy was achieved using the voting feature algorithm. Work of a similar nature was proposed by Ubeyli et al. [21] to classify erythemato-squamous diseases using a combined neural network approach; their methodology classifies these diseases with an accuracy of 97.7%.
The work by Chang et al. [22] utilizes a decision tree and an artificial neural network (ANN) for the diagnosis of the same diseases, attaining an accuracy of 92.62%. For classifying erythemato-squamous lesions from features extracted after a painful procedure, i.e., biopsy, Xie et al. [23], Kumar et al. [24] and Nanni et al. [25] proposed classification schemes for multi-class skin lesion classification. The scheme by Xie et al. achieved an accuracy of 98.61%, whereas the classification accuracies of the other two approaches were 97.22% and 95%, respectively. As stated earlier, the above-mentioned work on erythemato-squamous disease classification was performed on features extracted after a painful procedure, i.e., biopsy [35]. Clinical feature extraction is a painful, time-consuming and expensive procedure that requires domain expertise, and it is very difficult to obtain these features for people living in resource-limited areas.
To detect malignancy, Erol et al. [36] extracted texture features of the region within the lesion boundary, which was determined by active-contour segmentation. The extracted texture features comprise the homogeneity, standard deviation and mean of the pixel values. Artificial neural network (ANN) and support vector machine (SVM) classifiers were compared, and the best performance achieved was 78% specificity with the ANN on a dataset consisting of 900 images with 173 malignant lesions. Schnurle et al. [37] provide an automated approach to classify hand eczema. To balance the data, they used an oversampling technique and then extracted colour, texture and histogram features from the provided images. To evaluate the approach, an SVM was applied to features extracted from 48 images, and F-scores of 58.6% and 43.8% were achieved for the front and back of the hands, respectively.
A computer-aided classification system for multiple skin lesions was proposed by Hameed et al. [38] using a hybrid approach in which features are extracted with a convolutional neural network (CNN) and classification is performed with an SVM; because the features are extracted by the CNN, they are not interpretable. The computer-aided classification systems presented by different scholars achieve good accuracy but are limited in the range of diseases they cover. These limitations in the current literature indicate the need for an intelligent classification system that can classify multiple skin lesions with high accuracy.

3. Materials

The dataset plays a vital role in classifying different skin lesions. For the experiments, an image dataset was collected from several sources, including online medical data repositories, research challenges and researchers working in this domain. The online data repositories include DermIS [26], DermQuest [27], DermNZ [28] and the PH2 dataset [29]. The publicly available "11k Hands" repository was used for healthy images. Some of the eczema and healthy images were obtained from researchers [30] working in the field of skin lesion classification. The IEEE International Symposium on Biomedical Imaging (ISBI) skin lesion challenge [31] is an international skin lesion classification challenge organized every year since 2016, and some of the benign and malignant images were taken from the ISBI skin lesion repository. Figure 1 graphically presents the images belonging to the different categories. After collecting all the data from these sources, a uniform dataset was created for this work.
Data imbalance is an important issue that needs to be addressed when training the classification model, as the model may be biased towards the class with more images [1,32]. Considering this, a stratified sampling technique was used to balance the dataset. The data downloaded from the above-mentioned sources were organized according to disease, and a random down-sampling technique was then applied. The psoriasis category has the minimum number of images (N = 300), so the other categories were down-sampled to balance the dataset. After down-sampling, a total of 1800 images of size 227 × 227 × 3 were used to train and test the classification model. The detailed dataset division used in this research work is presented in Table 1.
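As a rough illustration of this down-sampling step, the Python sketch below draws an equal-sized random sample from every class; the `image_paths_by_class` dictionary (mapping each disease label to a list of image paths) is a hypothetical input, not part of the original work:

```python
import random

def balance_by_downsampling(image_paths_by_class, seed=42):
    """Randomly down-sample every class to the size of the smallest one.

    image_paths_by_class: dict mapping class name -> list of image file paths
    (assumed structure). Returns a dict with equally sized samples per class.
    """
    rng = random.Random(seed)
    min_count = min(len(paths) for paths in image_paths_by_class.values())
    return {label: rng.sample(paths, min_count)
            for label, paths in image_paths_by_class.items()}

# With six classes and psoriasis holding the fewest images, every class ends
# up with that many samples (300 per class, 1800 images in total in the paper).
```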

4. Method

Pre-processing, segmentation, feature extraction and classification are the key phases of a CAD system for medical image classification [10]. The proposed multi-class skin lesion classification scheme, which comprises these four phases, is illustrated graphically in Figure 2.

4.1. Pre-Processing and Segmentation

Image capture and digitisation are noisy processes, affected by angle, lighting, camera resolution and dimensional alignment. Because of this, pre-processing is the first step of the proposed classification scheme. In this stage, different kinds of noise are removed through resizing, hair removal and smoothing of the images. The gathered images are of different sizes and contain noise because they were captured with different devices in different environments; the noise present in the images takes the form of hair. Since the images are of different sizes, they are resized to 227 × 227 × 3 for consistency. For removing hair from the images, the well-known "Dull Razor" technique [33] is used. To remove the remaining noise, a Gaussian filter with a 3 × 3 kernel is applied.
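A minimal sketch of this pre-processing stage is given below, assuming OpenCV is available. The black-hat plus inpainting step is only a common stand-in for hair removal; DullRazor itself is not reproduced here, and the kernel sizes and threshold are illustrative assumptions:

```python
import cv2

def preprocess(image_path):
    """Resize to 227x227x3, approximate hair removal, and Gaussian smoothing."""
    img = cv2.imread(image_path)                        # BGR image
    img = cv2.resize(img, (227, 227))                   # uniform size 227x227x3

    # Hair-removal approximation: detect thin dark structures with a
    # morphological black-hat filter, then inpaint them.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    img = cv2.inpaint(img, hair_mask, 3, cv2.INPAINT_TELEA)

    # 3x3 Gaussian filter to suppress remaining noise, as in the paper.
    return cv2.GaussianBlur(img, (3, 3), 0)
```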
Segmentation for multi-disease classification is very difficult because the diseases have different characteristics and appear at different locations on the human body. Malignant melanoma and benign lesions usually have a definite shape and boundary, so shape and geometric features can easily be extracted from them [1]. Diseases such as acne, eczema and psoriasis may cover the full body area and have no definite shape, so extracting geometric and boundary features is very challenging. Owing to this problem, segmentation in this research is performed with respect to human skin: any non-skin area is discarded from the image, and the remaining part is extracted and treated as the region of interest (ROI). The ROI is segmented using the methodology proposed by Phung et al. [34]. The segmentation accuracy achieved is 81.24%, as in some cases the colour of the background matches the skin.
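To give a sense of skin-versus-non-skin segmentation, the sketch below uses a simple explicit RGB thresholding rule. This is only an illustrative stand-in; the paper uses the colour pixel classifier of Phung et al. [34], which is not reproduced here:

```python
import numpy as np

def skin_mask_rgb(img_rgb):
    """Rough skin mask from a HxWx3 uint8 RGB image using fixed RGB rules.

    Illustrative heuristic only, not the Phung et al. classifier used in the paper.
    """
    r = img_rgb[..., 0].astype(int)
    g = img_rgb[..., 1].astype(int)
    b = img_rgb[..., 2].astype(int)
    spread = img_rgb.max(axis=-1).astype(int) - img_rgb.min(axis=-1).astype(int)
    mask = ((r > 95) & (g > 40) & (b > 20) & (spread > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))
    return mask  # True where the pixel is treated as skin (the ROI)
```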

4.2. Feature Extraction

Feature extraction for multi-disease classification is a challenging and difficult task because different diseases may have similar features. It is also challenging due to the diverse nature of skin lesions: for example, shape features are easy to extract from skin cancer images, which have a clear boundary and a definite size, whereas the same features are difficult to extract from acne, eczema and psoriasis images, which may cover the whole body area in the captured image and have no clear shape. In this research work, a bag of features that can be extracted from any skin lesion image is proposed. In the feature extraction step, 35 colour and texture features are extracted from the skin lesion images for multi-class classification.

4.2.1. Colour Features

Colour features play a vital role in multi-disease classification [33,34] and are among the most important features used to distinguish between different skin diseases. This work explores the RGB colour space, and different features are extracted from it: the minimum, maximum, mean, mode, standard deviation, skewness, energy, entropy and kurtosis of the red, green and blue channels are considered. The colour features, along with their descriptions and formulae, are given in Table 2.
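A sketch of how these nine per-channel statistics could be computed with NumPy and SciPy is shown below. The histogram-based energy and entropy follow the formulas in Table 2; together with the four GLCM and four NGTDM texture features described next, these 27 values make up the 35-feature vector:

```python
import numpy as np
from scipy import stats

def colour_features(img_rgb):
    """Nine first-order statistics per R, G, B channel (27 colour features)."""
    feats = []
    for c in range(3):                                     # R, G, B channels
        ch = img_rgb[..., c].astype(np.float64).ravel()
        hist, _ = np.histogram(ch, bins=256, range=(0, 256))
        p = hist / hist.sum()                              # P(g)
        feats += [
            ch.min(), ch.max(), ch.mean(),
            float(np.argmax(hist)),                        # mode (most frequent value)
            ch.std(),
            stats.skew(ch),
            np.sum(p ** 2),                                # energy
            -np.sum(p[p > 0] * np.log2(p[p > 0])),         # entropy
            stats.kurtosis(ch, fisher=False),              # kurtosis (peakedness)
        ]
    return np.array(feats)
```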

4.2.2. Texture Features

In the existing literature, the grey-level co-occurrence matrix (GLCM) is most commonly used to extract texture features [39]. In this research work, the GLCM [39] is first computed, and contrast, correlation, energy and homogeneity are then calculated from it. The extracted GLCM features, along with their descriptions and formulae, are given in Table 3.
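With scikit-image, for instance, these four GLCM properties can be obtained as in the sketch below; the single offset (distance 1, angle 0) is chosen only for brevity, as the paper does not specify the offsets used:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_u8):
    """Contrast, correlation, energy and homogeneity from a GLCM.

    gray_u8 is a 2-D uint8 greyscale image.
    """
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return np.array([
        graycoprops(glcm, 'contrast')[0, 0],
        graycoprops(glcm, 'correlation')[0, 0],
        graycoprops(glcm, 'energy')[0, 0],
        graycoprops(glcm, 'homogeneity')[0, 0],
    ])
```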
Features extracted from the neighbourhood grey-tone difference matrix (NGTDM) are also important and reflect the human perception of texture [40]. These features have not been fully investigated for the classification of multiple skin diseases. In this research, four features are extracted from the NGTDM, which is a column matrix formed from the greyscale image. Let f(k, l) be the grey tone of the pixel at (k, l) with grey-tone value i; the average grey tone over a neighbourhood is calculated using Equation (1):
$$\bar{A}_i = \bar{A}(k, l) = \frac{1}{W - 1}\left[\sum_{m=-d}^{d}\sum_{n=-d}^{d} f(k+m,\, l+n)\right], \quad (m, n) \neq (0, 0) \tag{1}$$
where d specifies the neighbourhood size and $W = (2d + 1)^2$. The ith entry of the NGTDM is then calculated using Equation (2):
$$s(i) = \begin{cases} \sum \left| i - \bar{A}_i \right|, & \text{for } i \in N_i \text{ if } N_i \neq 0 \\ 0, & \text{otherwise} \end{cases} \tag{2}$$
where $N_i$ is the set of all pixels having grey tone i. After the NGTDM is calculated, the busyness, complexity, contrast and strength are extracted; their descriptions and formulae are given in Table 4.
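A compact sketch of the NGTDM construction in Equations (1) and (2), together with the contrast feature of Table 4, is given below. The quantisation to 64 grey tones and the reflective border handling are simplifying assumptions made here for illustration:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ngtdm(gray, levels=64, d=1):
    """Return s(i), p(i) and n(i) for a quantised greyscale image (Eqs. 1-2)."""
    q = np.clip(np.floor(gray.astype(np.float64) / 256.0 * levels).astype(int),
                0, levels - 1)
    W = (2 * d + 1) ** 2
    # Window mean over (2d+1)x(2d+1), then remove the centre pixel (Eq. 1).
    win_mean = uniform_filter(q.astype(np.float64), size=2 * d + 1, mode='reflect')
    a_bar = (win_mean * W - q) / (W - 1)

    s, n = np.zeros(levels), np.zeros(levels)
    for i in range(levels):
        mask = q == i
        n[i] = mask.sum()
        s[i] = np.abs(i - a_bar[mask]).sum() if n[i] > 0 else 0.0   # Eq. 2
    return s, n / n.sum(), n

def ngtdm_contrast(s, p, n):
    """NGTDM contrast feature (Table 4, Amadasun and King [40])."""
    nz = p > 0
    n_p = nz.sum()                          # number of grey tones present
    lv = np.arange(len(p))
    i, j = np.meshgrid(lv[nz], lv[nz], indexing='ij')
    pi, pj = np.meshgrid(p[nz], p[nz], indexing='ij')
    term1 = (pi * pj * (i - j) ** 2).sum() / (n_p * (n_p - 1))
    term2 = s.sum() / n.sum()               # (1/N_v) * sum_i s(i)
    return term1 * term2
```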
All the colour and texture features are stored in a feature vector, which is then passed to the classification step to train the classification model.

4.3. Classification

Classification is the last phase of the computer-aided classification model; in this step, inferences are made to produce a diagnosis for the input image. The classification model is trained on the feature vectors using supervised learning. Experiments are performed with different classification models, and the one with the best performance is selected to develop the computer-aided classification application. The classification models utilized are decision trees, support vector machines (SVM), k-nearest neighbours (KNN) and ensemble methods, each with several configurations. For decision trees, fine, medium and coarse trees are used. Linear, quadratic, cubic, fine Gaussian and coarse Gaussian kernels are used for the SVM. The KNN variants include fine, medium, coarse, cosine, cubic and weighted KNN, and the ensemble classifiers comprise boosted trees, bagged trees, subspace discriminant, subspace KNN and RUSBoosted trees [41]. The different configurations for each classifier are given in Table 5.
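For illustration, rough scikit-learn counterparts of the four best-performing configurations could be set up as below. The configuration names in Table 5 correspond to presets of a different toolbox [41], so the hyper-parameters shown here are indicative assumptions only, not the settings used in the original experiments:

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Approximate analogues of the paper's fine tree, quadratic SVM,
# weighted KNN and bagged trees (hyper-parameters are illustrative).
classifiers = {
    "Fine Tree": DecisionTreeClassifier(max_depth=10),
    "Quadratic SVM": make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2)),
    "Weighted KNN": KNeighborsClassifier(n_neighbors=10, weights="distance"),
    "Bagged Trees": BaggingClassifier(DecisionTreeClassifier(), n_estimators=30),
}
```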
The performance of the classifiers is calculated from the confusion matrix. As the proposed CAD system performs multi-class classification, a multi-class confusion matrix is obtained. First, the performance measures of each class are computed, and then the overall performance is calculated. The accuracy, sensitivity and specificity are used to assess each individual class, after which the overall classification performance is computed using macro averaging [42]. The formulae for the overall performance are given in Table 6.
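The macro-averaged measures of Table 6 could be computed from the multi-class confusion matrix as in the sketch below, where each class is treated one-vs-rest before averaging:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def macro_metrics(y_true, y_pred, labels):
    """Macro-averaged accuracy, sensitivity and specificity (Table 6)."""
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    total = cm.sum()
    acc, sens, spec = [], [], []
    for i in range(len(labels)):
        tp = cm[i, i]
        fn = cm[i, :].sum() - tp
        fp = cm[:, i].sum() - tp
        tn = total - tp - fn - fp
        acc.append((tp + tn) / total)
        sens.append(tp / (tp + fn) if (tp + fn) else 0.0)
        spec.append(tn / (tn + fp) if (tn + fp) else 0.0)
    return np.mean(acc), np.mean(sens), np.mean(spec)
```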

5. Results and Discussion

The experiments were performed on the gathered dataset, and the classification model was trained and tested on 1800 images using k-fold (k = 10) cross-validation. In k-fold cross-validation, the data are divided into k equal subsets and the holdout method is repeated k times: each time, one subset is used for testing and the remaining k − 1 subsets are used for training, and the average performance across all k trials is then calculated. Using the 35 colour and texture features, the SVM with a quadratic kernel performed best among all classifiers. As mentioned above, a multi-class confusion matrix was obtained for each classifier after classification; the confusion matrices for the fine tree, quadratic SVM, weighted KNN and bagged trees are provided in the Supplementary Material. The training time required by the quadratic-kernel SVM was 3.0624 s, and its prediction speed was approximately 8400 observations per second. Among the decision tree classifiers, the fine tree gives the highest accuracy, with an average per-class accuracy of 88.40%, a sensitivity of 70.24% and a specificity of 93.04%; its training time was 3.4608 s, and the maximum number of splits used was 10. Among the SVM configurations, the quadratic kernel performed better than the others, achieving an accuracy, sensitivity and specificity of 94.74%, 84.23% and 96.85%, respectively. For KNN, the weighted KNN performed best, with an average per-class accuracy, sensitivity and specificity of 92.80%, 78.38% and 95.68%, respectively; the weighted KNN experiments used the Euclidean distance and 10 neighbours. The performance of the bagged trees was almost similar to that of the quadratic SVM, attaining 94.16% accuracy, 82.48% sensitivity and 96.49% specificity. The results of the fine tree, quadratic SVM, weighted KNN and bagged trees are given in Table 7.
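The 10-fold evaluation protocol could be reproduced along the lines of the sketch below, assuming a feature matrix X of shape (1800, 35) and a label vector y produced by the preceding steps, and reusing the macro_metrics helper sketched above; the stratified splitting and random seed are assumptions, as the paper does not specify them:

```python
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_quadratic_svm(X, y):
    """10-fold cross-validation of a quadratic-kernel SVM on the 35 features."""
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    model = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2))
    y_pred = cross_val_predict(model, X, y, cv=cv)   # pooled out-of-fold predictions
    # macro_metrics is the helper defined in Section 4.3 above.
    return macro_metrics(y, y_pred, labels=sorted(set(y)))
```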
The dispersion boxplot for fine tree, quadratic SVM, weighted KNN and bagged trees is graphically presented in Figure 3 and a comparison of these classifiers is visually presented in Figure 4.
Based on this performance, the model trained with the quadratic SVM is chosen, and the CAD system is developed around it. Two research works can only be compared fairly if they use the same dataset; the proposed work is compared with the existing research work, and the comparison is illustrated in Table 8.
To classify a new image, the unseen image is sent to the trained model and is classified in a fraction of a second. Currently, the proposed skin lesion classification system can only classify an image into one of the six non-overlapping classes, i.e., healthy, acne, eczema, psoriasis, benign and malignant. If an image of a rarer disease arises, it will still be classified into one of the provided classes, generating false positives and false negatives; this can be considered a limitation of the proposed work, although it can be overcome by adding more disease classes. Factors causing difficulties in segmentation and classification were also identified in this work. One of the main hurdles is noise, present in the form of hairs, black frames, circles, skin lines, etc. Another is the homogeneous appearance of different skin lesions: some lesions can have the same colour and texture, which may adversely affect the classification accuracy.

6. Conclusions

In the literature, most of the work on automated skin lesion classification considers only malignant melanoma, and the area of multi-class skin lesion classification is neglected. A novel multi-class skin lesion classification framework is proposed in this work for the classification of the most frequently occurring and prominent skin lesions. The proposed framework comprises four steps: the first step is pre-processing, where the skin images are pre-processed and noise is removed; the second step is segmentation, where the ROI is extracted from the provided skin lesion image; in the third step, 35 different features are extracted from the ROI; and finally, different classifiers are used to train the classification model. Among the classifiers, the SVM with a quadratic kernel performed best, with an accuracy of 94.74%. The proposed classification scheme performed very well on the images gathered from different sources and, because it was trained on images collected from varied sources, it is expected to generalize well to new, unseen images.
Segmentation for multi-class skin lesion classification needs further research in order to develop a unified scheme that can be applied to different skin lesion images. In this research work, the bag of features was extracted manually, which was time-consuming; future studies on automated yet easily interpretable feature extraction are required. The proposed classification scheme is designed for desktop use, and more research is required to make it compatible with smartphone applications.

Supplementary Materials

The following are available online at https://0-www-mdpi-com.brum.beds.ac.uk/2073-431X/8/3/62/s1.

Author Contributions

Conceptualization, N.H., A.S. and A.H.; data curation, N.H. and A.H.; formal analysis, N.H. and S.K.; funding acquisition, A.H.; investigation, N.H., A.S., F.H. and S.K.; methodology, N.H., A.S., F.H. and A.H.; project administration, A.H.; resources, S.C. and A.H.; software, N.H.; supervision, A.S., S.C. and A.H.; validation, N.H. and S.C.; writing—original draft, N.H.; writing—review & editing, N.H., A.S., F.H. and S.K.

Funding

This research work was funded by the Erasmus Mundus FUSION (Featured eUrope and South asIa mObility Network) project, grant reference number 2013-32541/001001. Without this financial support it would not have been possible to carry out this research work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Seth, D.; Cheldize, K.; Brown, D.; Freeman, E.E. Global burden of skin disease: Inequities and innovations. Curr. Dermatol. Rep. 2017, 6, 204–210.
  2. Hay, R.J.; Augustin, M.; Griffiths, C.E. The global challenge for skin health. Br. J. Dermatol. 2015, 172, 1469–1472.
  3. Picardi, A.; Lega, I.; Tarolla, E. Suicide risk in skin disorders. Clin. Dermatol. 2013, 31, 47–56.
  4. World Health Organization. New Report Shows that 400 Million Do not Have Access to Essential Health Services; World Health Organization: Geneva, Switzerland, 2015.
  5. 5 Most Common Skin Disorders. Available online: http://www.foxnews.com/story/2009/12/15/5-most-common-skin-disorders.html (accessed on 1 June 2018).
  6. Hameed, N.; Shabut, A.; Hossain, M.A. A Computer-aided diagnosis system for classifying prominent skin lesions using machine learning. In Proceedings of the 10th Computer Science and Electronic Engineering (CEEC), Colchester, UK, 19–21 September 2018; pp. 186–191.
  7. Williams, H.C.; Dellavalle, R.P.; Garner, S. Acne vulgaris. Lancet 2012, 379, 361–372.
  8. Karimkhani, C.; Dellavalle, R.P.; Coffeng, L.E.; Flohr, C.; Hay, R.J.; Langan, S.M.; Nsoesie, E.O.; Ferrari, A.J.; Erskine, H.E.; Silverberg, J.I.; et al. Global skin disease morbidity and mortality: An update from the global burden of disease study 2013. JAMA Dermatol. 2017, 153, 406–412.
  9. Atopic Eczema. 2018. Available online: https://www.nhs.uk/conditions/atopic-eczema/ (accessed on 14 May 2018).
  10. Hameed, N.; Ruskin, A.; Hassan, K.A.; Hossain, M.A. A comprehensive survey on image-based computer aided diagnosis systems for skin cancer. In Proceedings of the 10th International Conference on Software, Knowledge, Information Management & Applications (SKIMA), Chengdu, China, 15–17 December 2016; pp. 205–214.
  11. Siegel, R.L.; Miller, K.D.; Jemal, A. Cancer Statistics 2018. CA Cancer J. Clin. 2018, 68, 7–30.
  12. Al Abbadi, N.K.; Dahir, N.S.; Al-Dhalimi, M.A.; Restom, H. Psoriasis Detection Using Skin Color and Texture Features. J. Comput. Sci. 2010, 6, 648–652.
  13. Umbaugh, S.E.; Moss, R.H.; Stoecker, W.V. Applying artificial intelligence to the identification of variegated coloring in skin tumors. IEEE Eng. Med. Biol. Mag. 1991, 10, 57–62.
  14. Ercal, F.; Chawla, A.; Stoecker, W.V.; Lee, H.C.; Moss, R.H. Neural network diagnosis of malignant melanoma from color images. IEEE Trans. Biomed. Eng. 1994, 41, 837–845.
  15. Nischik, M.; Forster, C. Analysis of skin erythema using true-color images. IEEE Trans. Med. Imaging 1997, 16, 711–716.
  16. Vasconcelos, C.N.; Vasconcelos, B.N. Experiments using deep learning for dermoscopy image analysis. Pattern Recognit. Lett. 2017.
  17. Zhang, Z.; Stoecker, W.V.; Moss, R.H. Border detection on digitized skin tumor images. IEEE Trans. Med. Imaging 2000, 19, 1128–1143.
  18. Dorj, U.O.; Lee, K.K.; Choi, J.Y.; Lee, M. The skin cancer classification using deep convolutional neural network. Multimed. Tools Appl. 2018, 77, 9909–9924.
  19. Taufiq, M.A.; Hameed, N.; Anjum, A.; Hameed, F. m-Skin Doctor: A mobile enabled system for early melanoma skin cancer detection using support vector machine. In Health 360°; Springer: Cham, Switzerland, 2017; pp. 468–475.
  20. Güvenir, H.A.; Emeksiz, N. An expert system for the differential diagnosis of erythemato-squamous diseases. Expert Syst. Appl. 2000, 18, 43–49.
  21. Übeyli, E.D. Multiclass support vector machines for diagnosis of erythemato-squamous diseases. Expert Syst. Appl. 2008, 35, 1733–1740.
  22. Chang, C.L.; Chen, C.H. Applying decision tree and neural network to increase quality of dermatologic diagnosis. Expert Syst. Appl. 2009, 36, 4035–4041.
  23. Xie, J.; Wang, C. Using support vector machines with a novel hybrid feature selection method for diagnosis of erythemato-squamous diseases. Expert Syst. Appl. 2011, 38, 5809–5815.
  24. Kumar, V.B.; Kumar, S.S.; Saboo, V. Dermatological disease detection using image processing and machine learning. In Proceedings of the 2016 Third International Conference on Artificial Intelligence and Pattern Recognition (AIPR), Lodz, Poland, 19–21 September 2016.
  25. Nanni, L. An ensemble of classifiers for the diagnosis of erythemato-squamous diseases. Neurocomputing 2006, 69, 842–845.
  26. DermIS. Available online: http://www.dermis.net/dermisroot/en/home/index.htm (accessed on 29 June 2017).
  27. Derm101 Image Library. Available online: https://www.derm101.com/image-library/ (accessed on 12 January 2018).
  28. DermNZ-Image Library. Available online: https://www.dermnetnz.org/image-library/ (accessed on 13 January 2018).
  29. PH2 Database. Available online: http://www.fc.up.pt/addi/ph2 database.html (accessed on 12 January 2018).
  30. Lam, M.N.; Munia, T.T.; Tavakolian, K.; Vasefi, F.; MacKinnon, N.; Fazel-Rezai, R. Automatic detection and severity measurement of eczema using image processing. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016.
  31. Codella, N.C.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.A.; Dusza, S.W.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H.; et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC). In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 August 2018.
  32. Hameed, N.; Shabut, A.; Hameed, F.; Cirstea, S.; Hossain, M.A. An Intelligent Inflammatory Skin Lesions Classification Scheme for Mobile Devices. In Proceedings of the IEEE International Conference on Computing, Electronics & Communications Engineering, London, UK, 22–23 August 2019; pp. 83–88.
  33. Lee, T.; Ng, V.; Gallagher, R.; Coldman, A.; McLean, D. Dullrazor®: A software approach to hair removal from images. Comput. Biol. Med. 1997, 27, 533–543.
  34. Phung, S.L.; Bouzerdoum, A.; Chai, D. Skin segmentation using color pixel classification: Analysis and comparison. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 1, 148–154.
  35. UCI Machine Learning Repository: Dermatology Data Set. Available online: https://archive.ics.uci.edu/ml/datasets/Dermatology (accessed on 27 April 2017).
  36. Erol, R.; Bayraktar, M.; Kockara, S.; Kaya, S.; Halic, T. Texture based skin lesion abruptness quantification to detect malignancy. BMC Bioinform. 2017, 18, 51–60.
  37. Schnürle, S.; Pouly, M.; vor der Brück, T.; Navarini, A.; Koller, T. On using Support Vector Machines for the Detection and Quantification of Hand Eczema. In Proceedings of the 9th International Conference on Agents and Artificial Intelligence (ICAART), Porto, Portugal, 24–26 February 2017; pp. 75–84.
  38. Hameed, N.; Shabut, A.M.; Hossain, M.A. Multi-Class Skin Diseases Classification Using Deep Convolutional Neural Network and Support Vector Machine. In Proceedings of the 12th International Conference on Software, Knowledge, Information Management & Applications (SKIMA), Phnom Penh, Cambodia, 3–5 December 2018.
  39. Texture Analysis Using the Gray-Level Co-Occurrence Matrix (GLCM). Available online: https://uk.mathworks.com/help/images/texture-analysis-using-the-gray-level-co-occurrence-matrix-glcm.html (accessed on 23 June 2018).
  40. Amadasun, M.; King, R. Textural features corresponding to textural properties. IEEE Trans. Syst. Man Cybern. 1989, 19, 1264–1274.
  41. Choose Classifier Options. Available online: https://uk.mathworks.com/help/stats/choose-a-classifier.html (accessed on 12 January 2018).
  42. Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437.
Figure 1. Graphical representation of the dataset used in the research study.
Figure 2. Steps involved in the proposed classification framework (1. Preprocessing, 2. Segmentation, 3. Feature Extraction, 4. Classification).
Figure 3. Dispersion boxplots for (a) Fine Tree, (b) Quadratic SVM, (c) Weighted KNN, (d) Bagged Trees.
Figure 4. Comparison of fine tree, quadratic SVM, weighted KNN and bagged trees.
Table 1. Number of images used in the healthy, acne, eczema, psoriasis, benign and malignant categories.

Category | No. of Images
Healthy | 300
Acne | 300
Eczema | 300
Psoriasis | 300
Benign | 300
Malignant | 300
Total | 1800
Table 2. Colour features extracted from the red, green and blue colour channels, along with their descriptions and formulae. The colour features include the minimum, maximum, mean, mode, standard deviation, skewness, energy, entropy and kurtosis.

Feature Name | Description | Formula
Min | Minimum pixel value of the R, G and B channels | $\min(\text{colour space})$
Max | Maximum pixel value of the R, G and B channels | $\max(\text{colour space})$
Mean | Measures the overall image intensity | $\bar{g} = \frac{\sum_{r}\sum_{c} I(r, c)}{M}$
Mode | Gives the most frequently occurring value | $\operatorname{mode}(\text{colour space})$
Standard Deviation | Presents the spread of the data | $\sigma_g = \sqrt{\sum_{g=0}^{W-1}(g - \bar{g})^2 P(g)}$
Skewness | Measures the asymmetry of the probability distribution | $\frac{1}{\sigma^3}\sum_{g=0}^{W-1}(g - \bar{g})^3 P(g)$
Energy | Gives information about the spread of the pixel values | $\sum_{g=0}^{W-1}[P(g)]^2$
Entropy | Measures the amount of information required to code the image data | $-\sum_{g=0}^{W-1} P(g)\log_2 P(g)$
Kurtosis | Measures the peakedness of the probability distribution of an image | $\frac{1}{\sigma^4}\sum_{g=0}^{W-1}(g - \bar{g})^4 P(g)$
Legend: W is the number of intensity levels, g is the intensity level, r and c index the rows and columns of the image, M is the number of pixels, $\bar{g}$ is the mean and $\sigma_g$ is the standard deviation.
Table 3. GLCM features with their descriptions and formulae. The GLCM features include contrast, correlation, energy and homogeneity.

Name | Description | Formula
Contrast (GLCM) | Measures the local fluctuations in grey level between neighbouring pixels | $\sum_{i,j=0}^{W-1} P_{ij}(i - j)^2$
Correlation (GLCM) | Measures the joint probability of occurrence of the specified pixel pairs | $\sum_{i,j=0}^{W-1} \frac{P_{ij}(i - \mu)(j - \mu)}{\sigma^2}$
Energy (GLCM) | Measures the sum of squared elements in the GLCM | $\sum_{i,j=0}^{W-1} P_{ij}^2$
Homogeneity (GLCM) | Measures the local uniformity | $\sum_{i,j=0}^{W-1} \frac{P_{ij}}{1 + (i - j)^2}$
Table 4. Features extracted from the neighbourhood grey-tone difference matrix, along with their descriptions and formulae.

Name | Description | Formula
Busyness | Measures changes in grey level between neighbouring voxels | $\frac{\sum_{i=1}^{N_g} p(i)\,s(i)}{\sum_{i=1}^{N_g}\sum_{j=1}^{N_g} \left| i\,p(i) - j\,p(j) \right|},\; p(i) \neq 0,\, p(j) \neq 0$
Complexity | Measures the non-uniformity and rapid changes in grey level | $\frac{1}{N_v}\sum_{i=1}^{N_g}\sum_{j=1}^{N_g} \left| i - j \right| \frac{p(i)\,s(i) + p(j)\,s(j)}{p(i) + p(j)},\; p(i) \neq 0,\, p(j) \neq 0$
Contrast | Measures the changes between voxels and their neighbourhood | $\left(\frac{1}{N_p(N_p - 1)}\sum_{i=1}^{N_g}\sum_{j=1}^{N_g} p(i)\,p(j)\,(i - j)^2\right)\left(\frac{1}{N_v}\sum_{i=1}^{N_g} s(i)\right)$
Strength | Measures the primitives in an image | $\frac{\sum_{i=1}^{N_g}\sum_{j=1}^{N_g} [p(i) + p(j)]\,(i - j)^2}{\varepsilon + \sum_{i=1}^{N_g} s(i)},\; p(i) \neq 0,\, p(j) \neq 0$
Table 5. Different classifiers along with the kernels/configurations used in the experiments.

Classifier | Kernel
Tree | Fine Tree, Medium Tree, Coarse Tree
Support Vector Machine | Linear, Quadratic, Cubic, Fine Gaussian, Coarse Gaussian
k-Nearest Neighbors | Fine, Medium, Coarse, Cosine, Cubic, Weighted
Ensemble | Boosted Trees, Bagged Trees, Subspace Discriminant, Subspace KNN, RUSBoosted Trees
Table 6. Performance measures along with their formulae (TP = True Positive, TN = True Negative, FP = False Positive, FN = False Negative).

Measure | Formula | Description
Accuracy | $\frac{1}{l}\sum_{i=1}^{l}\frac{TP_i + TN_i}{TP_i + TN_i + FP_i + FN_i}$ | Measures the number of correct classifications over the total number of examples evaluated
Sensitivity | $\frac{1}{l}\sum_{i=1}^{l}\frac{TP_i}{TP_i + FN_i}$ | Measures the number of actual positive cases that are correctly identified
Specificity | $\frac{1}{l}\sum_{i=1}^{l}\frac{TN_i}{TN_i + FP_i}$ | Measures the number of actual negative cases that are correctly identified
Legend: i = individual class, i.e., healthy, acne, eczema, psoriasis, benign or malignant; l = total number of classes = 6.
Table 7. Performance of the different classifiers using 10-fold cross-validation. Values show the mean score (standard deviation); values in bold indicate the best accuracy, sensitivity and specificity. All scores are in %.

Classifier | Accuracy (SD) | Sensitivity (SD) | Specificity (SD)
Fine Tree | 88.40 (0.27) | 70.24 (0.83) | 93.04 (0.17)
Quadratic SVM | 94.74 (0.11) | 84.23 (0.32) | 96.85 (0.06)
Weighted KNN | 92.80 (0.11) | 78.38 (0.33) | 95.68 (0.06)
Bagged Trees | 94.16 (0.13) | 82.48 (0.39) | 96.49 (0.07)
Table 8. Comparison of the proposed classification framework with existing research work. All results are in %.

Reference | Accuracy | Sensitivity | Specificity
[1] | 83 | NA | NA
Proposed Work | 94.74 | 84.23 | 96.85

Share and Cite

MDPI and ACS Style

Hameed, N.; Hameed, F.; Shabut, A.; Khan, S.; Cirstea, S.; Hossain, A. An Intelligent Computer-Aided Scheme for Classifying Multiple Skin Lesions. Computers 2019, 8, 62. https://0-doi-org.brum.beds.ac.uk/10.3390/computers8030062


