Article

Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction

1 Department of Mathematics, University of Patras, GR 265-00 Patras, Greece
2 Department of Biomedical Engineering, University of West Attica, GR 122-43 Egaleo Athens, Greece
* Author to whom correspondence should be addressed.
Received: 30 April 2020 / Revised: 26 May 2020 / Accepted: 26 May 2020 / Published: 28 May 2020
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis)
Image classification is a very popular machine learning domain in which deep convolutional neural networks are the dominant approach. These networks achieve remarkable prediction accuracy, but they are considered black box models since they offer no insight into their inner working mechanism and no reasoning behind their predictions. In a variety of real-world tasks, such as medical applications, interpretability and explainability play a significant role. When decisions on critical issues such as cancer prediction are made with black box models, high prediction accuracy without any form of explanation cannot be considered sufficient or ethically acceptable. Reasoning and explanation are essential in order to trust these models and support such critical predictions. Nevertheless, defining and validating the quality of a prediction model's explanation is, in general, extremely subjective and unclear. In this work, an accurate and interpretable machine learning framework is proposed for image classification problems, able to produce high-quality explanations. To this end, a feature extraction and an explanation extraction framework are developed, together with three basic general conditions that validate the quality of any model's prediction explanation in any application domain. The feature extraction framework extracts and creates transparent, meaningful high-level features from images, while the explanation extraction framework is responsible for creating good explanations based on these extracted features and the prediction model's inner function, with respect to the proposed conditions. As a case study application, brain tumor magnetic resonance images were utilized for predicting glioma cancer.
Our results demonstrate the efficiency of the proposed model, which achieved sufficient prediction accuracy while remaining interpretable and explainable in simple human terms.
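To illustrate the kind of transparent, high-level image features the abstract describes, the following sketch computes human-meaningful descriptors (mean intensity, contrast, edge density) from a grayscale image, so that a simple white-box rule over them is trivially explainable. This is an illustrative sketch only, not the authors' implementation; the specific feature names and the 0.2 edge threshold are assumptions.

```python
def extract_features(image):
    """Compute interpretable high-level features from a 2D grayscale image,
    given as a list of rows of floats in [0, 1]."""
    pixels = [p for row in image for p in row]
    n = len(pixels)
    mean_intensity = sum(pixels) / n
    # Contrast as the standard deviation of pixel intensities.
    contrast = (sum((p - mean_intensity) ** 2 for p in pixels) / n) ** 0.5
    # Edge density: fraction of horizontally adjacent pixel pairs whose
    # intensity difference exceeds a threshold (0.2 is a hypothetical choice).
    edges = sum(
        1
        for row in image
        for a, b in zip(row, row[1:])
        if abs(a - b) > 0.2
    )
    pairs = sum(len(row) - 1 for row in image)
    return {
        "mean_intensity": mean_intensity,
        "contrast": contrast,
        "edge_density": edges / pairs,
    }
```

A prediction built on such features can be explained in plain language, e.g. "flagged because edge_density and contrast are both high", which is the kind of explanation a black box convolutional network cannot offer directly.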
Keywords: interpretable/explainable machine learning; image classification; image processing; machine learning models; white box; black box; cancer prediction
MDPI and ACS Style

Pintelas, E.; Liaskos, M.; Livieris, I.E.; Kotsiantis, S.; Pintelas, P. Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction. J. Imaging 2020, 6, 37. https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging6060037
