Explainable Artificial Intelligence (XAI)

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 August 2022) | Viewed by 117766

Special Issue Editors


Guest Editor: Dr. Jose Antonio Iglesias Martinez
Computer Science and Engineering Department, Universidad Carlos III de Madrid, Madrid, Spain
Interests: artificial intelligence; RoboCup and soccer robots; software agents; machine learning and robotics

Guest Editor: Prof. Dr. Plamen Angelov
Computing and Communications Department, Lancaster University, Lancaster, UK
Interests: intelligent systems; computational intelligence; fuzzy systems; machine learning

Guest Editor: Dr. María Paz Sesmero Lorente
Computer Science and Engineering Department, Universidad Carlos III de Madrid, Madrid, Spain
Interests: ensemble of classifiers; artificial neural networks; machine learning; pattern recognition and robotics

Special Issue Information

Artificial intelligence (AI) is one of the most promising fields today, with applications across many different research areas. However, while many current AI algorithms achieve high performance, their decisions are difficult to explain. The term “black box” is used in machine learning for those cases in which the final decision of an AI algorithm cannot be explained. AI algorithms are used in many areas, and a wrong output that cannot be analyzed may be fatal (for example, the output of an autonomous car). The different paradigms that address this problem fall under the umbrella of so-called explainable artificial intelligence (XAI). The term XAI refers to algorithms and techniques that apply AI in such a way that the solution can be understood by human users. Thus, the goal of XAI systems is that the decisions they make or suggest can be explained transparently.

This Special Issue aims not only to review the latest research progress in the field of XAI but also its application in different research areas. We encourage submissions of conceptual, empirical, and literature review papers focusing on this field. Different types of and approaches to XAI are welcome in this Special Issue.

Dr. Jose Antonio Iglesias Martinez
Prof. Dr. Plamen Angelov
Dr. María Paz Sesmero Lorente
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Explainable artificial intelligence
  • Human-understandable AI systems

Published Papers (28 papers)

Research

32 pages, 1060 KiB  
Article
BEERL: Both Ends Explanations for Reinforcement Learning
by Ahmad Terra, Rafia Inam and Elena Fersman
Appl. Sci. 2022, 12(21), 10947; https://0-doi-org.brum.beds.ac.uk/10.3390/app122110947 - 28 Oct 2022
Cited by 2 | Viewed by 1711
Abstract
Deep Reinforcement Learning (RL) is a black-box method and is hard to understand because the agent employs a neural network (NN). To explain the behavior and decisions made by the agent, different eXplainable RL (XRL) methods are developed; for example, feature importance methods are applied to analyze the contribution of the input side of the model, and reward decomposition methods are applied to explain the components of the output end of the RL model. In this study, we present a novel method to connect explanations from both input and output ends of a black-box model, which results in fine-grained explanations. Our method exposes the reward prioritization to the user, which in turn generates two different levels of explanation and allows RL agent reconfigurations when unwanted behaviors are observed. The method further summarizes the detailed explanations into a focus value that takes into account all reward components and quantifies the fulfillment of the explanation of desired properties. We evaluated our method by applying it to a remote electrical telecom-antenna-tilt use case and two openAI gym environments: lunar lander and cartpole. The results demonstrated fine-grained explanations by detailing input features’ contributions to certain rewards and revealed biases of the reward components, which are then addressed by adjusting the reward’s weights. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
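
For readers unfamiliar with reward decomposition, the sketch below is a minimal illustration of the idea this paper builds on, not the BEERL implementation itself: a tabular Q-learning agent keeps one Q-value per reward component (the toy environment and component names are assumptions), so the greedy choice can be explained by each component's contribution.

```python
# Minimal sketch of reward decomposition for explainable RL (not the paper's BEERL code).
# A tabular agent keeps a separate Q-value per reward component; the per-component
# Q-values at the chosen action serve as a simple explanation of the decision.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
components = ["progress", "energy_cost"]               # illustrative reward components
Q = np.zeros((n_states, n_actions, len(components)))   # one Q head per component

def step(state, action):
    """Toy environment: random next state, decomposed reward (assumption)."""
    next_state = rng.integers(n_states)
    reward = np.array([1.0 if action == state % n_actions else 0.0,  # progress
                       -0.1 * action])                               # energy cost
    return next_state, reward

alpha, gamma, epsilon = 0.1, 0.9, 0.1
state = 0
for _ in range(5000):
    if rng.random() < epsilon:
        action = rng.integers(n_actions)
    else:
        action = int(Q[state].sum(axis=1).argmax())     # act on the summed Q-value
    next_state, reward = step(state, action)
    best_next = int(Q[next_state].sum(axis=1).argmax())
    # TD update applied component-wise, so the decomposition is preserved
    Q[state, action] += alpha * (reward + gamma * Q[next_state, best_next] - Q[state, action])
    state = next_state

# Explanation: contribution of each reward component to the greedy choice in state 2
s = 2
greedy = int(Q[s].sum(axis=1).argmax())
for name, value in zip(components, Q[s, greedy]):
    print(f"state {s}, action {greedy}: {name} contributes {value:+.3f}")
```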

18 pages, 994 KiB  
Article
Cybersecurity Knowledge Extraction Using XAI
by Ana Šarčević, Damir Pintar, Mihaela Vranić and Agneza Krajna
Appl. Sci. 2022, 12(17), 8669; https://0-doi-org.brum.beds.ac.uk/10.3390/app12178669 - 29 Aug 2022
Cited by 4 | Viewed by 2007
Abstract
Global networking, growing computer infrastructure complexity and the ongoing migration of many private and business aspects to the electronic domain commonly mandate using cutting-edge technologies based on data analysis, machine learning, and artificial intelligence to ensure high levels of network and information system security. Transparency is a major barrier to the deployment of black box intelligent systems in high-risk domains, such as the cybersecurity domain, with the problem getting worse as machine learning models increase in complexity. In this research, explainable machine learning is used to extract information from the CIC-IDS2017 dataset and to critically contrast the knowledge attained by analyzing if–then decision tree rules with the knowledge attained by the SHAP approach. The paper compares the challenges of the knowledge extraction using the SHAP method and the if–then decision tree rules, providing guidelines regarding different approaches suited to specific situations. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
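
As an illustration of the two knowledge-extraction routes the paper contrasts, the hedged sketch below reads if–then rules off a shallow decision tree and computes SHAP values for a tree ensemble; it uses a generic scikit-learn dataset and the shap package rather than CIC-IDS2017.

```python
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# (1) Transparent-by-design route: a shallow tree whose if-then rules are directly readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))

# (2) Post hoc route: SHAP values for a more complex tree ensemble.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(forest)
shap_values = explainer.shap_values(X.iloc[:100])
# Depending on the shap version this is a list with one array per class or a single
# array; in both cases each entry attributes a prediction to the individual features.
print("SHAP output shape:", np.shape(shap_values))
```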

31 pages, 9544 KiB  
Article
Explainability of Predictive Process Monitoring Results: Can You See My Data Issues?
by Ghada Elkhawaga, Mervat Abu-Elkheir and Manfred Reichert
Appl. Sci. 2022, 12(16), 8192; https://0-doi-org.brum.beds.ac.uk/10.3390/app12168192 - 16 Aug 2022
Cited by 3 | Viewed by 1688
Abstract
Predictive process monitoring (PPM) has been discussed as a use case of process mining for several years. PPM enables foreseeing the future of an ongoing business process by predicting, for example, relevant information on the way in which running processes terminate or on related process performance indicators. A large share of PPM approaches adopt Machine Learning (ML), taking advantage of the accuracy and precision of ML models. Consequently, PPM inherits the challenges of traditional ML approaches. One of these challenges concerns the need to gain user trust in the generated predictions. This issue is addressed by explainable artificial intelligence (XAI). However, in addition to ML characteristics, the choices made and the techniques applied in the context of PPM influence the resulting explanations. This necessitates the availability of a study on the effects of different choices made in the context of a PPM task on the explainability of the generated predictions. In order to address this gap, we systemically investigate the effects of different PPM settings on the data fed into an ML model and subsequently into the employed XAI method. We study how differences between the resulting explanations indicate several issues in the underlying data. Example of these issues include collinearity and high dimensionality of the input data. We construct a framework for performing a series of experiments to examine different choices of PPM dimensions (i.e., event logs, preprocessing configurations, and ML models), integrating XAI as a fundamental component. In addition to agreements, the experiments highlight several inconsistencies between data characteristics and important predictors used by the ML model on one hand, and explanations of predictions of the investigated ML model on the other. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))

27 pages, 9266 KiB  
Article
Empirical Perturbation Analysis of Two Adversarial Attacks: Black Box versus White Box
by Raluca Chitic, Ali Osman Topal and Franck Leprévost
Appl. Sci. 2022, 12(14), 7339; https://0-doi-org.brum.beds.ac.uk/10.3390/app12147339 - 21 Jul 2022
Cited by 1 | Viewed by 1284
Abstract
Through the addition of humanly imperceptible noise to an image classified as belonging to a category ca, targeted adversarial attacks can lead convolutional neural networks (CNNs) to classify a modified image as belonging to any predefined target class ct ≠ ca. To achieve a better understanding of the inner workings of adversarial attacks, this study analyzes the adversarial images created by two completely opposite attacks against 10 ImageNet-trained CNNs. A total of 2×437 adversarial images are created by EAtarget,C, a black-box evolutionary algorithm (EA), and by the basic iterative method (BIM), a white-box, gradient-based attack. We inspect and compare these two sets of adversarial images from different perspectives: the behavior of CNNs at smaller image regions, the image noise frequency, the adversarial image transferability, the image texture change, and penultimate CNN layer activations. We find that texture change is a side effect rather than a means for the attacks and that ct-relevant features only build up significantly from image regions of size 56×56 onwards. In the penultimate CNN layers, both attacks increase the activation of units that are positively related to ct and units that are negatively related to ca. In contrast to EAtarget,C’s white noise nature, BIM predominantly introduces low-frequency noise. BIM affects the original ca features more than EAtarget,C, thus producing slightly more transferable adversarial images. However, the transferability with both attacks is low, since the attacks’ ct-related information is specific to the output layers of the targeted CNN. We find that the adversarial images are actually more transferable at regions with sizes of 56×56 than at full scale. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
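
The white-box attack analysed here, the basic iterative method, is standard enough to sketch; the PyTorch snippet below is a generic targeted BIM with illustrative epsilon and step sizes, not the paper's exact configuration, and `model` stands for any classifier with inputs in [0, 1].

```python
# Minimal PyTorch sketch of the Basic Iterative Method (BIM), the white-box attack
# analysed in the paper. `model` is a placeholder classifier; eps/alpha are illustrative.
import torch
import torch.nn.functional as F

def bim_attack(model, x, target_class, eps=8 / 255, alpha=1 / 255, steps=20):
    """Targeted BIM: iteratively step towards `target_class`, clipping to an
    L-infinity ball of radius eps around the original image x."""
    model.eval()
    x_adv = x.clone().detach()
    target = torch.tensor([target_class] * x.shape[0], device=x.device)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Targeted attack: move *against* the gradient of the target-class loss.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # stay in the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                          # stay a valid image
    return x_adv.detach()
```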

15 pages, 7478 KiB  
Article
A Methodology for Controlling Bias and Fairness in Synthetic Data Generation
by Enrico Barbierato, Marco L. Della Vedova, Daniele Tessera, Daniele Toti and Nicola Vanoli
Appl. Sci. 2022, 12(9), 4619; https://0-doi-org.brum.beds.ac.uk/10.3390/app12094619 - 04 May 2022
Cited by 5 | Viewed by 2671
Abstract
The development of algorithms, based on machine learning techniques, supporting (or even replacing) human judgment must take into account concepts such as data bias and fairness. Though scientific literature proposes numerous techniques to detect and evaluate these problems, less attention has been dedicated to methods generating intentionally biased datasets, which could be used by data scientists to develop and validate unbiased and fair decision-making algorithms. To this end, this paper presents a novel method to generate a synthetic dataset, where bias can be modeled by using a probabilistic network exploiting structural equation modeling. The proposed methodology has been validated on a simple dataset to highlight the impact of tuning parameters on bias and fairness, as well as on a more realistic example based on a loan approval status dataset. In particular, this methodology requires a limited number of parameters compared to other techniques for generating datasets with a controlled amount of bias and fairness. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
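
The following numpy sketch conveys the controlled-bias idea in simplified form (it is not the paper's structural-equation network): a single `bias` parameter determines how strongly a protected attribute enters the structural equation for the label, and the resulting approval-rate gap between groups can be measured directly.

```python
# Minimal sketch of controlled-bias synthetic data generation (not the paper's model).
import numpy as np

def generate(n=10_000, bias=0.0, seed=0):
    rng = np.random.default_rng(seed)
    protected = rng.integers(0, 2, n)                 # e.g. group membership
    income = rng.normal(50 + 5 * protected, 10, n)    # mildly correlated covariate
    debt = rng.normal(20, 5, n)
    # Structural equation for the outcome: legitimate terms + bias * protected attribute
    score = 0.08 * income - 0.10 * debt + bias * protected + rng.normal(0, 1, n)
    label = (score > np.median(score)).astype(int)    # loan approved / rejected
    return protected, np.column_stack([income, debt]), label

for b in (0.0, 1.0, 3.0):
    g, X, y = generate(bias=b)
    gap = y[g == 1].mean() - y[g == 0].mean()
    print(f"bias={b:.1f}: approval-rate gap between groups = {gap:+.3f}")
```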

14 pages, 515 KiB  
Article
Explainable Machine Learning for Lung Cancer Screening Models
by Katarzyna Kobylińska, Tadeusz Orłowski, Mariusz Adamek and Przemysław Biecek
Appl. Sci. 2022, 12(4), 1926; https://0-doi-org.brum.beds.ac.uk/10.3390/app12041926 - 12 Feb 2022
Cited by 16 | Viewed by 3259
Abstract
Modern medicine is supported by increasingly sophisticated algorithms. In diagnostics or screening, statistical models are commonly used to assess the risk of disease development, the severity of its course, and expected treatment outcome. The growing availability of very detailed data and increased interest in personalized medicine are leading to the development of effective but complex machine learning models. For these models to be trusted, their predictions must be understandable to both the physician and the patient, hence the growing interest in the area of Explainable Artificial Intelligence (XAI). In this paper, we present selected methods from the XAI field in the example of models applied to assess lung cancer risk in lung cancer screening through low-dose computed tomography. The use of these techniques provides a better understanding of the similarities and differences between three commonly used models in lung cancer screening, i.e., BACH, PLCOm2012, and LCART. For the presentation of the results, we used data from the Domestic Lung Cancer Database. The XAI techniques help to better understand (1) which variables are most important in which model, (2) how they are transformed into model predictions, and facilitate (3) the explanation of model predictions for a particular screenee. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
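
As a generic illustration of one XAI step used in such comparisons, the sketch below contrasts permutation-based variable importance across two fitted models; the clinical risk models (BACH, PLCOm2012, LCART) and the Domestic Lung Cancer Database are not reproduced, so a standard scikit-learn dataset stands in.

```python
# Compare which variables matter most in two different risk models via permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_tr, y_tr),
    "boosting": GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr),
}
for name, model in models.items():
    imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
    top = imp.importances_mean.argsort()[::-1][:5]
    print(name, [(X.columns[i], round(imp.importances_mean[i], 3)) for i in top])
```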

20 pages, 13487 KiB  
Article
TorchEsegeta: Framework for Interpretability and Explainability of Image-Based Deep Learning Models
by Soumick Chatterjee, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, Rajatha Nagaraja Rao, Chompunuch Sarasaen, Oliver Speck and Andreas Nürnberger
Appl. Sci. 2022, 12(4), 1834; https://doi.org/10.3390/app12041834 - 10 Feb 2022
Cited by 8 | Viewed by 3219
Abstract
Clinicians are often very sceptical about applying automatic image processing approaches, especially deep learning-based methods, in practice. One main reason for this is the black-box nature of these approaches and the inherent problem of missing insights of the automatically derived decisions. In order to increase trust in these methods, this paper presents approaches that help to interpret and explain the results of deep learning algorithms by depicting the anatomical areas that influence the decision of the algorithm most. Moreover, this research presents a unified framework, TorchEsegeta, for applying various interpretability and explainability techniques for deep learning models and generates visual interpretations and explanations for clinicians to corroborate their clinical findings. In addition, this will aid in gaining confidence in such methods. The framework builds on existing interpretability and explainability techniques that are currently focusing on classification models, extending them to segmentation tasks. In addition, these methods have been adapted to 3D models for volumetric analysis. The proposed framework provides methods to quantitatively compare visual explanations using infidelity and sensitivity metrics. This framework can be used by data scientists to perform post hoc interpretations and explanations of their models, develop more explainable tools, and present the findings to clinicians to increase their faith in such models. The proposed framework was evaluated based on a use case scenario of vessel segmentation models trained on Time-of-Flight (TOF) Magnetic Resonance Angiogram (MRA) images of the human brain. Quantitative and qualitative results of a comparative study of different models and interpretability methods are presented. Furthermore, this paper provides an extensive overview of several existing interpretability and explainability methods. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
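
The core trick of extending classification-style attribution to segmentation can be sketched independently of the framework: wrap the segmentation network so it returns one scalar per sample (e.g., the summed logit of a class) and apply a standard attribution method. The snippet below uses Captum's Integrated Gradients on a tiny stand-in network; it is an assumption-laden illustration, not TorchEsegeta code.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Tiny stand-in segmentation net (placeholder for a trained vessel-segmentation model).
seg_model = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv3d(8, 2, 1))
seg_model.eval()

def scalar_forward(x, target_class=1):
    logits = seg_model(x)                                  # (B, classes, D, H, W)
    return logits[:, target_class].flatten(1).sum(dim=1)   # one scalar per sample

volume = torch.randn(1, 1, 16, 16, 16)
ig = IntegratedGradients(scalar_forward)
attribution = ig.attribute(volume, n_steps=16)   # voxel-wise relevance for the chosen class
print(attribution.shape)                         # same shape as the input volume
```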

16 pages, 2812 KiB  
Article
Chain Graph Explanation of Neural Network Based on Feature-Level Class Confusion
by Hyekyoung Hwang, Eunbyung Park and Jitae Shin
Appl. Sci. 2022, 12(3), 1523; https://0-doi-org.brum.beds.ac.uk/10.3390/app12031523 - 30 Jan 2022
Viewed by 2344
Abstract
Despite increasing interest in developing interpretable machine learning methods, most recent studies have provided explanations only for single instances, require additional datasets, and are sensitive to hyperparameters. This paper proposes a confusion graph that reveals model weaknesses by constructing a confusion dictionary. Unlike other methods, which focus on the performance variation caused by single-neuron suppression, it defines the role of each neuron in two different perspectives: ‘correction’ and ‘violation’. Furthermore, our method can identify the class relationships in similar positions at the feature level, which can suggest improvements to the model. Finally, the proposed graph construction is model-agnostic and does not require additional data or tedious hyperparameter tuning. Experimental results show that the information loss from omitting the channels guided by the proposed graph can result in huge performance degradation, from 91% to 33%, while the proposed graph only retains 1% of total neurons. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))

15 pages, 659 KiB  
Article
Investigating Explainability Methods in Recurrent Neural Network Architectures for Financial Time Series Data
by Warren Freeborough and Terence van Zyl
Appl. Sci. 2022, 12(3), 1427; https://0-doi-org.brum.beds.ac.uk/10.3390/app12031427 - 28 Jan 2022
Cited by 18 | Viewed by 5496
Abstract
Statistical methods were traditionally primarily used for time series forecasting. However, new hybrid methods demonstrate competitive accuracy, leading to increased machine-learning-based methodologies in the financial sector. However, very little development has been seen in explainable AI (XAI) for financial time series prediction, with a growing mandate for explainable systems. This study aims to determine if the existing XAI methodology is transferable to the context of financial time series prediction. Four popular methods, namely, ablation, permutation, added noise, and integrated gradients, were applied to a recurrent neural network (RNN), long short-term memory (LSTM), and a gated recurrent unit (GRU) network trained on S&P 500 stocks data to determine the importance of features, individual data points, and specific cells in each architecture. The explainability analysis revealed that GRU displayed the most significant ability to retain long-term information, while the LSTM disregarded most of the given input and instead showed the most notable granularity to the considered inputs. Lastly, the RNN displayed features indicative of no long-term memory retention. The applied XAI methods produced complementary results, reinforcing paradigms on significant differences in how different architectures predict. The results show that these methods are transferable in the financial forecasting sector, but a more sophisticated hybrid prediction system requires further confirmation. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
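
One of the four methods examined, feature permutation, is easy to sketch for a recurrent forecaster; the snippet below trains a small GRU on synthetic sequences (standing in for the S&P 500 data) and scores each input feature by the loss increase after shuffling it across samples.

```python
# Permutation importance for a recurrent forecaster on synthetic sequence data.
import torch
import torch.nn as nn

torch.manual_seed(0)
B, T, F_DIM = 256, 30, 4                       # batch, window length, n features
X = torch.randn(B, T, F_DIM)
y = X[:, -5:, 0].mean(dim=1, keepdim=True)     # target driven mostly by feature 0

class GRUForecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(F_DIM, 16, batch_first=True)
        self.head = nn.Linear(16, 1)
    def forward(self, x):
        out, _ = self.rnn(x)
        return self.head(out[:, -1])           # predict from the last hidden state

model, loss_fn = GRUForecaster(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):                            # short illustrative training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    base = loss_fn(model(X), y).item()
    for f in range(F_DIM):
        Xp = X.clone()
        Xp[:, :, f] = Xp[torch.randperm(B), :, f]   # permute feature f across samples
        print(f"feature {f}: importance = {loss_fn(model(Xp), y).item() - base:+.4f}")
```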

15 pages, 342 KiB  
Article
Certifiable AI
by Jobst Landgrebe
Appl. Sci. 2022, 12(3), 1050; https://0-doi-org.brum.beds.ac.uk/10.3390/app12031050 - 20 Jan 2022
Cited by 3 | Viewed by 1849
Abstract
Implicit stochastic models, including both ‘deep neural networks’ (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged a new field called ‘explainable AI’ (XAI). When we examine the XAI literature, however, it becomes apparent that its protagonists have redefined the term ‘explanation’ to mean something else, namely: ‘interpretation’. Interpretations are indeed sometimes possible, but we show that they give at best only a subjective understanding of how a model works. We propose an alternative to XAI, namely certified AI (CAI), and describe how an AI can be specified, realized, and tested in order to become certified. The resulting approach combines ontologies and formal logic with statistical learning to obtain reliable AI systems which can be safely used in technical applications. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
14 pages, 617 KiB  
Article
Explanations for Neural Networks by Neural Networks
by Sascha Marton, Stefan Lüdtke and Christian Bartelt
Appl. Sci. 2022, 12(3), 980; https://0-doi-org.brum.beds.ac.uk/10.3390/app12030980 - 18 Jan 2022
Cited by 12 | Viewed by 2166
Abstract
Understanding the function learned by a neural network is crucial in many domains, e.g., to detect a model’s adaption to concept drift in online learning. Existing global surrogate model approaches generate explanations by maximizing the fidelity between the neural network and a surrogate model on a sample-basis, which can be very time-consuming. Therefore, these approaches are not applicable in scenarios where timely or frequent explanations are required. In this paper, we introduce a real-time approach for generating a symbolic representation of the function learned by a neural network. Our idea is to generate explanations via another neural network (called the Interpretation Network, or I-Net), which maps network parameters to a symbolic representation of the network function. We show that the training of an I-Net for a family of functions can be performed up-front and subsequent generation of an explanation only requires querying the I-Net once, which is computationally very efficient and does not require training data. We empirically evaluate our approach for the case of low-order polynomials as explanations, and show that it achieves competitive results for various data and function complexities. To the best of our knowledge, this is the first approach that attempts to learn mapping from neural networks to symbolic representations. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))

20 pages, 1239 KiB  
Article
Explaining Deep Learning Models for Tabular Data Using Layer-Wise Relevance Propagation
by Ihsan Ullah, Andre Rios, Vaibhav Gala and Susan Mckeever
Appl. Sci. 2022, 12(1), 136; https://0-doi-org.brum.beds.ac.uk/10.3390/app12010136 - 23 Dec 2021
Cited by 10 | Viewed by 5747
Abstract
Trust and credibility in machine learning models are bolstered by the ability of a model to explain its decisions. While explainability of deep learning models is a well-known challenge, a further challenge is clarity of the explanation itself for relevant stakeholders of the model. Layer-wise Relevance Propagation (LRP), an established explainability technique developed for deep models in computer vision, provides intuitive human-readable heat maps of input images. We present the novel application of LRP with tabular datasets containing mixed data (categorical and numerical) using a deep neural network (1D-CNN), for Credit Card Fraud detection and Telecom Customer Churn prediction use cases. We show how LRP is more effective than traditional explainability concepts of Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) for explainability. This effectiveness is both local to a sample level and holistic over the whole testing set. We also discuss the significant computational time advantage of LRP (1–2 s) over LIME (22 s) and SHAP (108 s) on the same laptop, and thus its potential for real time application scenarios. In addition, our validation of LRP has highlighted features for enhancing model performance, thus opening up a new area of research of using XAI as an approach for feature subset selection. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
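
The relevance-propagation computation at the heart of LRP can be shown compactly; the numpy sketch below applies the epsilon rule to a small dense ReLU network with random placeholder weights, whereas the paper applies LRP to a trained 1D-CNN on the fraud and churn datasets.

```python
# Minimal numpy sketch of the LRP epsilon-rule on a small dense ReLU network.
import numpy as np

rng = np.random.default_rng(0)
sizes = [6, 8, 4, 1]                                     # toy architecture
W = [rng.normal(0, 0.5, (sizes[i], sizes[i + 1])) for i in range(3)]
b = [np.zeros(sizes[i + 1]) for i in range(3)]

def forward(x):
    activations = [x]
    for i, (w, bias) in enumerate(zip(W, b)):
        z = activations[-1] @ w + bias
        activations.append(np.maximum(z, 0) if i < len(W) - 1 else z)  # ReLU hidden layers
    return activations

def lrp_epsilon(x, eps=1e-6):
    a = forward(x)
    R = a[-1].copy()                                     # start from the output score
    for i in reversed(range(len(W))):
        z = a[i] @ W[i] + b[i]
        z = z + eps * np.sign(z)                         # stabilised denominator
        s = R / z                                        # distribute relevance over inputs
        R = a[i] * (s @ W[i].T)                          # epsilon rule: R_j = a_j * sum_k w_jk s_k
    return R                                             # one relevance score per input feature

x = rng.normal(size=6)
relevance = lrp_epsilon(x)
print("prediction:", forward(x)[-1], "feature relevances:", np.round(relevance, 3))
print("conservation check:", relevance.sum(), "≈", forward(x)[-1].sum())
```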

19 pages, 1098 KiB  
Article
Towards a More Reliable Interpretation of Machine Learning Outputs for Safety-Critical Systems Using Feature Importance Fusion
by Divish Rengasamy, Benjamin C. Rothwell and Grazziela P. Figueredo
Appl. Sci. 2021, 11(24), 11854; https://0-doi-org.brum.beds.ac.uk/10.3390/app112411854 - 13 Dec 2021
Cited by 13 | Viewed by 2851
Abstract
When machine learning supports decision-making in safety-critical systems, it is important to verify and understand the reasons why a particular output is produced. Although feature importance calculation approaches assist in interpretation, there is a lack of consensus regarding how features’ importance is quantified, which makes the explanations offered for the outcomes mostly unreliable. A possible solution to address the lack of agreement is to combine the results from multiple feature importance quantifiers to reduce the variance in estimates and to improve the quality of explanations. Our hypothesis is that this leads to more robust and trustworthy explanations of the contribution of each feature to machine learning predictions. To test this hypothesis, we propose an extensible model-agnostic framework divided in four main parts: (i) traditional data pre-processing and preparation for predictive machine learning models, (ii) predictive machine learning, (iii) feature importance quantification, and (iv) feature importance decision fusion using an ensemble strategy. Our approach is tested on synthetic data, where the ground truth is known. We compare different fusion approaches and their results for both training and test sets. We also investigate how different characteristics within the datasets affect the quality of the feature importance ensembles studied. The results show that, overall, our feature importance ensemble framework produces 15% less feature importance errors compared with existing methods. Additionally, the results reveal that different levels of noise in the datasets do not affect the feature importance ensembles’ ability to accurately quantify feature importance, whereas the feature importance quantification error increases with the number of features and number of orthogonal informative features. We also discuss the implications of our findings on the quality of explanations provided to safety-critical systems. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
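
A minimal sketch of the fusion step, under the simplifying assumption of plain averaging: compute feature importances with several quantifiers, normalise each to sum to one, and combine them. The paper's full framework (synthetic ground truth, multiple fusion strategies) is not reproduced.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=500, n_features=8, n_informative=3, random_state=0)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
lasso = Lasso(alpha=1.0).fit(X, y)

importances = {
    "impurity": forest.feature_importances_,
    "permutation": permutation_importance(forest, X, y, n_repeats=10,
                                          random_state=0).importances_mean,
    "lasso_coef": np.abs(lasso.coef_),
}

def normalise(v):
    v = np.clip(v, 0, None)                    # drop negative permutation scores
    return v / v.sum() if v.sum() > 0 else v

fused = np.mean([normalise(v) for v in importances.values()], axis=0)  # ensemble fusion
for i in np.argsort(fused)[::-1]:
    print(f"feature {i}: fused importance = {fused[i]:.3f}")
```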

23 pages, 774 KiB  
Article
An Explainable Approach Based on Emotion and Sentiment Features for Detecting People with Mental Disorders on Social Networks
by Leslie Marjorie Gallegos Salazar, Octavio Loyola-González and Miguel Angel Medina-Pérez
Appl. Sci. 2021, 11(22), 10932; https://0-doi-org.brum.beds.ac.uk/10.3390/app112210932 - 19 Nov 2021
Cited by 4 | Viewed by 2204
Abstract
Mental disorders are a global problem that widely affects different segments of the population. Diagnosis and treatment are difficult to obtain, as there are not enough specialists on the matter, and mental health is not yet a common topic among the population. The computer science field has proposed some solutions to detect the risk of depression, based on language use and data obtained through social media. These solutions are mainly focused on objective features, such as n-grams and lexicons, which are complicated to be understood by experts in the application area. Hence, in this paper, we propose a contrast pattern-based classifier to detect depression by using a new data representation based only on emotion and sentiment analysis extracted from posts on social media. Our proposed feature representation contains 28 different features, which are more understandable by specialists than other proposed representations. Our feature representation jointly with a contrast pattern-based classifier has obtained better classification results than five other combinations of features and classifiers reported in the literature. Our proposal statistically outperformed the Random Forest, Naive Bayes, and AdaBoost classifiers using the parser-tree, VAD (Valence, Arousal, and Dominance) and Topics, and Bag of Words (BOW) representations. It obtained similar statistical results to the logistic regression models using the Ensemble of BOWs and Handcrafted features representations. In all cases, our proposal was able to provide an explanation close to the language of experts, due to the mined contrast patterns. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))

27 pages, 880 KiB  
Article
An Explainable Artificial Intelligence Model for Detecting Xenophobic Tweets
by Gabriel Ichcanziho Pérez-Landa, Octavio Loyola-González and Miguel Angel Medina-Pérez
Appl. Sci. 2021, 11(22), 10801; https://0-doi-org.brum.beds.ac.uk/10.3390/app112210801 - 16 Nov 2021
Cited by 6 | Viewed by 4781
Abstract
Xenophobia is a social and political behavior that has been present in our societies since the beginning of humanity. It manifests as hatred, fear, or resentment toward people from communities different from our own. With the rise of social networks like Twitter, hate speech has spread quickly because of the pseudo-anonymity that these platforms provide. Sometimes this violent behavior on social networks, which begins as threats or insults to third parties, breaks the Internet barrier to become an act of real physical violence. Hence, this proposal aims to correctly classify xenophobic posts on social networks, specifically on Twitter. In addition, we collected a database of xenophobic tweets from which we also extracted new features by using a Natural Language Processing (NLP) approach. Then, we provide an Explainable Artificial Intelligence (XAI) model, allowing us to better understand why a post is considered xenophobic. Consequently, we provide a set of contrast patterns describing xenophobic tweets, which could help decision-makers prevent acts of violence caused by xenophobic posts on Twitter. Finally, our interpretable results, based on our new feature representation approach jointly with a contrast pattern-based classifier, obtain classification results similar to those of other feature representations combined with prominent machine learning classifiers, which are not easy to understand by an expert in the application area. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))

23 pages, 1183 KiB  
Article
Explaining Bad Forecasts in Global Time Series Models
by Jože Rožanec, Elena Trajkova, Klemen Kenda, Blaž Fortuna and Dunja Mladenić
Appl. Sci. 2021, 11(19), 9243; https://0-doi-org.brum.beds.ac.uk/10.3390/app11199243 - 04 Oct 2021
Cited by 4 | Viewed by 3414
Abstract
While increasing empirical evidence suggests that global time series forecasting models can achieve better forecasting performance than local ones, there is a research void regarding when and why the global models fail to provide a good forecast. This paper uses anomaly detection algorithms and explainable artificial intelligence (XAI) to answer when and why a forecast should not be trusted. To address this issue, a dashboard was built to inform the user regarding (i) the relevance of the features for that particular forecast, (ii) which training samples most likely influenced the forecast outcome, (iii) why the forecast is considered an outlier, and (iv) provide a range of counterfactual examples to understand how value changes in the feature vector can lead to a different outcome. Moreover, a modular architecture and a methodology were developed to iteratively remove noisy data instances from the train set, to enhance the overall global time series forecasting model performance. Finally, to test the effectiveness of the proposed approach, it was validated on two publicly available real-world datasets. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))

28 pages, 702 KiB  
Article
A Framework and Benchmarking Study for Counterfactual Generating Methods on Tabular Data
by Raphael Mazzine Barbosa de Oliveira and David Martens
Appl. Sci. 2021, 11(16), 7274; https://0-doi-org.brum.beds.ac.uk/10.3390/app11167274 - 07 Aug 2021
Cited by 16 | Viewed by 3346
Abstract
Counterfactual explanations are viewed as an effective way to explain machine learning predictions. This interest is reflected by a relatively young literature with already dozens of algorithms aiming to generate such explanations. These algorithms are focused on finding how features can be modified to change the output classification. However, this rather general objective can be achieved in different ways, which brings about the need for a methodology to test and benchmark these algorithms. The contributions of this work are manifold: First, a large benchmarking study of 10 algorithmic approaches on 22 tabular datasets is performed, using nine relevant evaluation metrics; second, the introduction of a novel, first of its kind, framework to test counterfactual generation algorithms; third, a set of objective metrics to evaluate and compare counterfactual results; and, finally, insight from the benchmarking results that indicate which approaches obtain the best performance on what type of dataset. This benchmarking study and framework can help practitioners in determining which technique and building blocks most suit their context, and can help researchers in the design and evaluation of current and future counterfactual generation algorithms. Our findings show that, overall, there’s no single best algorithm to generate counterfactual explanations as the performance highly depends on properties related to the dataset, model, score, and factual point specificities. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
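
To make the shared objective of the benchmarked algorithms concrete, the sketch below runs a naive greedy counterfactual search that perturbs one feature at a time until the predicted class flips; real methods add the distance, sparsity, and plausibility constraints that this illustration omits.

```python
# Naive greedy counterfactual search on tabular data (illustration only).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def greedy_counterfactual(x, model, step=0.25, max_iter=60):
    x_cf = x.copy()
    original = model.predict([x])[0]
    scale = X.std(axis=0)                        # move in units of each feature's std
    for _ in range(max_iter):
        if model.predict([x_cf])[0] != original:
            return x_cf                          # class flipped: counterfactual found
        best, best_p = None, model.predict_proba([x_cf])[0][original]
        for j in range(len(x)):                  # try a small move on every feature
            for direction in (+1, -1):
                cand = x_cf.copy()
                cand[j] += direction * step * scale[j]
                p = model.predict_proba([cand])[0][original]
                if p < best_p:                   # keep the move that most reduces
                    best, best_p = cand, p       # confidence in the original class
        if best is None:
            break                                # no single move helps: give up
        x_cf = best
    return None

cf = greedy_counterfactual(X[0], model)
print("changed features:",
      np.flatnonzero(~np.isclose(cf, X[0])) if cf is not None else "none found")
```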

23 pages, 3169 KiB  
Article
Explainable Hopfield Neural Networks Using an Automatic Video-Generation System
by Clemente Rubio-Manzano, Alejandra Segura-Navarrete, Claudia Martinez-Araneda and Christian Vidal-Castro
Appl. Sci. 2021, 11(13), 5771; https://0-doi-org.brum.beds.ac.uk/10.3390/app11135771 - 22 Jun 2021
Cited by 5 | Viewed by 2466
Abstract
Hopfield Neural Networks (HNNs) are recurrent neural networks used to implement associative memory. They can be applied to pattern recognition, optimization, or image segmentation. However, sometimes it is not easy to provide the users with good explanations about the results obtained with them due to mainly the large number of changes in the state of neurons (and their weights) produced during a problem of machine learning. There are currently limited techniques to visualize, verbalize, or abstract HNNs. This paper outlines how we can construct automatic video-generation systems to explain its execution. This work constitutes a novel approach to obtain explainable artificial intelligence systems in general and HNNs in particular building on the theory of data-to-text systems and software visualization approaches. We present a complete methodology to build these kinds of systems. Software architecture is also designed, implemented, and tested. Technical details about the implementation are also detailed and explained. We apply our approach to creating a complete explainer video about the execution of HNNs on a small recognition problem. Finally, several aspects of the videos generated are evaluated (quality, content, motivation and design/presentation). Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
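
A toy version of the underlying idea: the Hopfield update loop below records every neuron flip as an event, the kind of trace a data-to-text or video-generation layer such as the one proposed here could verbalise or animate. The pattern set and update schedule are illustrative assumptions.

```python
# Tiny Hopfield network whose update loop logs every neuron flip for later explanation.
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns).astype(float)   # Hebbian learning
np.fill_diagonal(W, 0)

state = np.array([1, -1, 1, -1, -1, -1])                  # noisy cue
trace = []                                                 # event log for explanation
rng = np.random.default_rng(0)
for step in range(30):                                     # asynchronous updates
    i = rng.integers(n)
    new_value = 1 if W[i] @ state >= 0 else -1
    if new_value != state[i]:
        trace.append(f"step {step}: neuron {i} flipped {state[i]:+d} -> {new_value:+d} "
                     f"(local field {W[i] @ state:+.1f})")
        state[i] = new_value

print("final state:", state)
print("\n".join(trace) if trace else "no flips: cue was already a stable pattern")
```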

19 pages, 376 KiB  
Article
Explainable Internet Traffic Classification
by Christian Callegari, Pietro Ducange, Michela Fazzolari and Massimo Vecchio
Appl. Sci. 2021, 11(10), 4697; https://0-doi-org.brum.beds.ac.uk/10.3390/app11104697 - 20 May 2021
Cited by 5 | Viewed by 2283
Abstract
The problem analyzed in this paper deals with the classification of Internet traffic. During the last years, this problem has experienced a new hype, as classification of Internet traffic has become essential to perform advanced network management. As a result, many different methods based on classical Machine Learning and Deep Learning have been proposed. Despite the success achieved by these techniques, existing methods are lacking because they provide a classification output that does not help practitioners with any information regarding the criteria that have been taken to the given classification or what information in the input data makes them arrive at their decisions. To overcome these limitations, in this paper we focus on an “explainable” method for traffic classification able to provide the practitioners with information about the classification output. More specifically, our proposed solution is based on a multi-objective evolutionary fuzzy classifier (MOEFC), which offers a good trade-off between accuracy and explainability of the generated classification models. The experimental results, obtained over two well-known publicly available data sets, namely, UniBS and UPC, demonstrate the effectiveness of our method. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))

28 pages, 88681 KiB  
Article
Explaining Deep Learning-Based Driver Models
by Maria Paz Sesmero Lorente, Elena Magán Lopez, Laura Alvarez Florez, Agapito Ledezma Espino, José Antonio Iglesias Martínez and Araceli Sanchis de Miguel
Appl. Sci. 2021, 11(8), 3321; https://0-doi-org.brum.beds.ac.uk/10.3390/app11083321 - 07 Apr 2021
Cited by 20 | Viewed by 4516
Abstract
Different systems based on Artificial Intelligence (AI) techniques are currently used in relevant areas such as healthcare, cybersecurity, natural language processing, and self-driving cars. However, many of these systems are developed with “black box” AI, which makes it difficult to explain how they work. For this reason, explainability and interpretability are key factors that need to be taken into consideration in the development of AI systems in critical areas. In addition, different contexts produce different explainability needs which must be met. Against this background, Explainable Artificial Intelligence (XAI) appears to be able to address and solve this situation. In the field of automated driving, XAI is particularly needed because the level of automation is constantly increasing according to the development of AI techniques. For this reason, the field of XAI in the context of automated driving is of particular interest. In this paper, we propose the use of an explainable intelligence technique in the understanding of some of the tasks involved in the development of advanced driver-assistance systems (ADAS). Since ADAS assist drivers in driving functions, it is essential to know the reason for the decisions taken. In addition, trusted AI is the cornerstone of the confidence needed in this research area. Thus, due to the complexity and the different variables that are part of the decision-making process, this paper focuses on two specific tasks in this area: the detection of emotions and the distractions of drivers. The results obtained are promising and show the capacity of the explainable artificial techniques in the different tasks of the proposed environments. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))

17 pages, 1781 KiB  
Article
gbt-HIPS: Explaining the Classifications of Gradient Boosted Tree Ensembles
by Julian Hatwell, Mohamed Medhat Gaber and R. Muhammad Atif Azad
Appl. Sci. 2021, 11(6), 2511; https://0-doi-org.brum.beds.ac.uk/10.3390/app11062511 - 11 Mar 2021
Cited by 5 | Viewed by 2465
Abstract
This research presents Gradient Boosted Tree High Importance Path Snippets (gbt-HIPS), a novel, heuristic method for explaining gradient boosted tree (GBT) classification models by extracting a single classification rule (CR) from the ensemble of decision trees that make up the GBT model. This CR contains the most statistically important boundary values of the input space as antecedent terms. The CR represents a hyper-rectangle of the input space inside which the GBT model is, very reliably, classifying all instances with the same class label as the explanandum instance. In a benchmark test using nine data sets and five competing state-of-the-art methods, gbt-HIPS offered the best trade-off between coverage (0.16–0.75) and precision (0.85–0.98). Unlike competing methods, gbt-HIPS is also demonstrably guarded against under- and over-fitting. A further distinguishing feature of our method is that, unlike much prior work, our explanations also provide counterfactual detail in accordance with widely accepted recommendations for what makes a good explanation. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))

16 pages, 3763 KiB  
Article
Neuroscope: An Explainable AI Toolbox for Semantic Segmentation and Image Classification of Convolutional Neural Nets
by Christian Schorr, Payman Goodarzi, Fei Chen and Tim Dahmen
Appl. Sci. 2021, 11(5), 2199; https://0-doi-org.brum.beds.ac.uk/10.3390/app11052199 - 03 Mar 2021
Cited by 14 | Viewed by 5285
Abstract
Trust in artificial intelligence (AI) predictions is a crucial point for a widespread acceptance of new technologies, especially in sensitive areas like autonomous driving. The need for tools explaining AI for deep learning of images is thus eminent. Our proposed toolbox Neuroscope addresses this demand by offering state-of-the-art visualization algorithms for image classification and newly adapted methods for semantic segmentation of convolutional neural nets (CNNs). With its easy to use graphical user interface (GUI), it provides visualization on all layers of a CNN. Due to its open model-view-controller architecture, networks generated and trained with Keras and PyTorch are processable, with an interface allowing extension to additional frameworks. We demonstrate the explanation abilities provided by Neuroscope using the example of traffic scene analysis. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))

18 pages, 3589 KiB  
Article
Minimum Relevant Features to Obtain Explainable Systems for Predicting Cardiovascular Disease Using the Statlog Data Set
by Roberto Porto, José M. Molina, Antonio Berlanga and Miguel A. Patricio
Appl. Sci. 2021, 11(3), 1285; https://0-doi-org.brum.beds.ac.uk/10.3390/app11031285 - 30 Jan 2021
Cited by 7 | Viewed by 2452
Abstract
Learning systems have been focused on creating models capable of obtaining the best results in error metrics. Recently, the focus has shifted to improvement in the interpretation and explanation of the results. The need for interpretation is greater when these models are used to support decision making. In some areas, this becomes an indispensable requirement, such as in medicine. The goal of this study was to define a simple process to construct a system that could be easily interpreted based on two principles: (1) reduction of attributes without degrading the performance of the prediction systems and (2) selecting a technique to interpret the final prediction system. To describe this process, we selected a problem, predicting cardiovascular disease, by analyzing the well-known Statlog (Heart) data set from the University of California’s Automated Learning Repository. We analyzed the cost of making predictions easier to interpret by reducing the number of features that explain the classification of health status versus the cost in accuracy. We performed an analysis on a large set of classification techniques and performance metrics, demonstrating that it is possible to construct explainable and reliable models that provide high quality predictive performance. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
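
The two-step process described, reducing attributes while monitoring performance and then fitting an easily interpreted model, can be sketched as follows; since the Statlog (Heart) data are not bundled with scikit-learn, a generic dataset stands in.

```python
# Sketch of the two-step process: (1) shrink the attribute set while tracking accuracy,
# (2) keep an easily interpreted model (logistic regression) over the retained features.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

for k in (30, 10, 5, 3):                        # shrink the attribute set step by step
    clf = make_pipeline(
        StandardScaler(),
        RFE(LogisticRegression(max_iter=5000), n_features_to_select=k),
        LogisticRegression(max_iter=5000),
    )
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{k:2d} features -> accuracy {acc:.3f}")

# The coefficients of the final logistic model over the few retained features are the
# interpretable output: sign and magnitude indicate each attribute's effect on risk.
```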

Review

31 pages, 411 KiB  
Review
XAI Systems Evaluation: A Review of Human and Computer-Centred Methods
by Pedro Lopes, Eduardo Silva, Cristiana Braga, Tiago Oliveira and Luís Rosado
Appl. Sci. 2022, 12(19), 9423; https://0-doi-org.brum.beds.ac.uk/10.3390/app12199423 - 20 Sep 2022
Cited by 24 | Viewed by 4877
Abstract
The lack of transparency of powerful Machine Learning systems paired with their growth in popularity over the last decade led to the emergence of the eXplainable Artificial Intelligence (XAI) field. Instead of focusing solely on obtaining highly performing models, researchers also develop explanation techniques that help better understand the system’s reasoning for a particular output. An explainable system can be designed, developed, and evaluated from different perspectives, which enables researchers from different disciplines to work together on this topic. However, the multidisciplinary nature of XAI systems creates new challenges for condensing and structuring adequate methodologies to design and evaluate such systems. This paper presents a survey of Human-centred and Computer-centred methods to evaluate XAI systems. We propose a new taxonomy to categorize XAI evaluation methods more clearly and intuitively. This categorization gathers knowledge from different disciplines and organizes the evaluation methods according to a set of categories that represent key properties of XAI systems. Possible ways to use the proposed taxonomy in the design and evaluation of XAI systems are also discussed, alongside with some concluding remarks and future directions of research. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))

47 pages, 1774 KiB  
Review
A Survey on Artificial Intelligence (AI) and eXplainable AI in Air Traffic Management: Current Trends and Development with Future Research Trajectory
by Augustin Degas, Mir Riyanul Islam, Christophe Hurter, Shaibal Barua, Hamidur Rahman, Minesh Poudel, Daniele Ruscio, Mobyen Uddin Ahmed, Shahina Begum, Md Aquif Rahman, Stefano Bonelli, Giulia Cartocci, Gianluca Di Flumeri, Gianluca Borghini, Fabio Babiloni and Pietro Aricó
Appl. Sci. 2022, 12(3), 1295; https://0-doi-org.brum.beds.ac.uk/10.3390/app12031295 - 26 Jan 2022
Cited by 34 | Viewed by 10509
Abstract
Air Traffic Management (ATM) will be more complex in the coming decades due to the growth and increased complexity of aviation and has to be improved in order to maintain aviation safety. It is agreed that without significant improvement in this domain, the safety objectives defined by international organisations cannot be achieved and a risk of more incidents/accidents is envisaged. Nowadays, computer science plays a major role in data management and decisions made in ATM. Nonetheless, despite this, Artificial Intelligence (AI), which is one of the most researched topics in computer science, has not quite reached end users in ATM domain. In this paper, we analyse the state of the art with regards to usefulness of AI within aviation/ATM domain. It includes research work of the last decade of AI in ATM, the extraction of relevant trends and features, and the extraction of representative dimensions. We analysed how the general and ATM eXplainable Artificial Intelligence (XAI) works, analysing where and why XAI is needed, how it is currently provided, and the limitations, then synthesise the findings into a conceptual framework, named the DPP (Descriptive, Predictive, Prescriptive) model, and provide an example of its application in a scenario in 2030. It concludes that AI systems within ATM need further research for their acceptance by end-users. The development of appropriate XAI methods including the validation by appropriate authorities and end-users are key issues that needs to be addressed. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))

23 pages, 5300 KiB  
Review
A Review of Fuzzy and Pattern-Based Approaches for Class Imbalance Problems
by Ismael Lin, Octavio Loyola-González, Raúl Monroy and Miguel Angel Medina-Pérez
Appl. Sci. 2021, 11(14), 6310; https://0-doi-org.brum.beds.ac.uk/10.3390/app11146310 - 08 Jul 2021
Cited by 8 | Viewed by 2631
Abstract
The use of imbalanced databases is a recurrent problem in real-world applications such as medical diagnosis, fraud detection, and pattern recognition. In class imbalance problems, classifiers are commonly biased towards the class with more objects (the majority class) and ignore the class with fewer objects (the minority class). There are different ways to solve the class imbalance problem, and there has been a trend towards the use of pattern-based and fuzzy approaches due to their favorable results. In this paper, we provide an in-depth review of popular methods for imbalanced databases related to pattern-based and fuzzy approaches. The reviewed papers cover classifiers, data preprocessing, and evaluation metrics. We identify different application domains and describe how the methods are used. Finally, we suggest further research directions based on our analysis of the reviewed papers and the trends in the state of the art. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
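To make the majority-class bias described above tangible, the sketch below contrasts an unweighted classifier with one using inverse-frequency class weights, one of the simplest remedies for imbalance (and not one of the fuzzy or pattern-based methods reviewed in the paper). The synthetic 95:5 dataset, the logistic regression model, and the balanced-accuracy metric are illustrative assumptions.

# Minimal sketch (illustrative, not one of the reviewed methods): countering
# majority-class bias with inverse-frequency class weights on a 95:5 dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_train, y_train)
weighted = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_train, y_train)

# Balanced accuracy averages per-class recall, exposing minority-class errors
# that overall accuracy would hide.
print("unweighted:", balanced_accuracy_score(y_test, plain.predict(X_test)))
print("weighted  :", balanced_accuracy_score(y_test, weighted.predict(X_test)))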

21 pages, 646 KiB  
Review
A Review of Explainable Deep Learning Cancer Detection Models in Medical Imaging
by Mehmet A. Gulum, Christopher M. Trombley and Mehmed Kantardzic
Appl. Sci. 2021, 11(10), 4573; https://0-doi-org.brum.beds.ac.uk/10.3390/app11104573 - 17 May 2021
Cited by 57 | Viewed by 8014
Abstract
Deep learning has demonstrated remarkable accuracy in analyzing images for cancer detection tasks in recent years. The accuracy achieved rivals that of radiologists and is suitable for implementation as a clinical tool. However, a significant problem is that these models are black-box algorithms and are therefore intrinsically unexplainable. This creates a barrier to clinical implementation due to the lack of trust and transparency that is characteristic of black-box algorithms. Additionally, recent regulations prevent the deployment of unexplainable models in clinical settings, which further demonstrates the need for explainability. To mitigate these concerns, recent studies attempt to overcome these issues by modifying deep learning architectures or providing post hoc (after-the-fact) explanations. A review of the deep learning explanation literature focused on cancer detection using MR images is presented here. The gap between what clinicians deem explainable and what current methods provide is discussed, and suggestions to close this gap are provided. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
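As a hedged illustration of what a post hoc (after-the-fact) explanation can look like for an imaging model, the sketch below computes an occlusion-sensitivity map: regions whose masking most reduces the predicted score are flagged as important. The scoring function, the 64x64 input, and the random "image" are toy stand-ins for a real clinical model and are assumptions for illustration only.

# Minimal sketch of a post hoc explanation (occlusion sensitivity); the scorer
# and the 64x64 "image" are toy stand-ins for a real cancer-detection model.
import numpy as np

def occlusion_map(image, score_fn, patch=8, baseline=0.0):
    """Heatmap of how much the model score drops when each patch is occluded."""
    h, w = image.shape
    base_score = score_fn(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            # A large drop means this region mattered for the prediction.
            heatmap[i // patch, j // patch] = base_score - score_fn(occluded)
    return heatmap

rng = np.random.default_rng(0)
mri_slice = rng.random((64, 64))                          # toy stand-in image
toy_score = lambda img: float(img[24:40, 24:40].mean())   # "attends" to the centre
print(occlusion_map(mri_slice, toy_score).round(2))

Gradient-based methods such as Grad-CAM serve the same purpose more efficiently, but occlusion keeps the idea fully model-agnostic.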

Other

38 pages, 3189 KiB  
Systematic Review
A Systematic Review of Explainable Artificial Intelligence in Terms of Different Application Domains and Tasks
by Mir Riyanul Islam, Mobyen Uddin Ahmed, Shaibal Barua and Shahina Begum
Appl. Sci. 2022, 12(3), 1353; https://0-doi-org.brum.beds.ac.uk/10.3390/app12031353 - 27 Jan 2022
Cited by 95 | Viewed by 14792
Abstract
Artificial intelligence (AI) and machine learning (ML) have recently been radically improved and are now being employed in almost every application domain to develop automated or semi-automated systems. To facilitate greater human acceptability of these systems, whose highly accurate models often come with a paucity of explainability and interpretability, explainable artificial intelligence (XAI) has experienced significant growth over the last couple of years. The literature shows evidence from numerous studies on the philosophy and methodologies of XAI. Nonetheless, there is an evident scarcity of secondary studies on application domains and tasks, let alone review studies following prescribed guidelines, that could help researchers understand the current trends in XAI and guide future research on domain- and application-specific method development. Therefore, this paper presents a systematic literature review (SLR) of recent developments in XAI methods and evaluation metrics across different application domains and tasks. This study considers 137 articles published in recent years and identified through prominent bibliographic databases. The systematic synthesis of these research articles resulted in several analytical findings: XAI methods are mostly developed for safety-critical domains worldwide; deep learning and ensemble models are exploited more than other types of AI/ML models; visual explanations are more acceptable to end-users; and robust evaluation metrics are being developed to assess the quality of explanations. Research studies have examined the addition of explanations to widely used AI/ML models for expert users. However, more attention is required to generate explanations for general users in sensitive domains such as finance and the judicial system. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
