Next Issue
Volume 3, December
Previous Issue
Volume 3, June

Mach. Learn. Knowl. Extr., Volume 3, Issue 3 (September 2021) – 11 articles

Cover Story: Artificial intelligence can generate very accurate, data-driven machine-learned models, but their non-linear, complex structures are not easily interpreted. Scholars have created many methods to explain their functioning and inferential logic. This manuscript proposes organizing these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension: the explanation format. Explanatory methods provide several solutions to requirements that differ greatly between users, problems and application fields. Identifying the most appropriate explanation can be daunting, hence the need for this hierarchy. The work concludes by identifying the limitations of the explanation formats and by suggesting novel research directions.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Explainable Artificial Intelligence for Human Decision Support System in the Medical Domain
Mach. Learn. Knowl. Extr. 2021, 3(3), 740-770; https://0-doi-org.brum.beds.ac.uk/10.3390/make3030037 - 19 Sep 2021
Cited by 2 | Viewed by 890
Abstract
In this paper, we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. Using three types of explainable methods applied to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN). In vivo gastric images obtained by video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals’ trust in black-box predictions. We implemented two post hoc interpretable machine learning methods, Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), and an alternative explanation approach, the Contextual Importance and Utility (CIU) method. The produced explanations were assessed by human evaluation. We conducted three user studies based on the explanations provided by LIME, SHAP and CIU. Users from different non-medical backgrounds carried out a series of tests in a web-based survey setting and reported their experience and understanding of the given explanations. Three user groups (n = 20, 20, 20), each with a distinct form of explanation, were quantitatively analyzed. We found that, as hypothesized, the CIU method performed better than both LIME and SHAP in terms of improving support for human decision-making and being more transparent, and thus more understandable, to users. Additionally, CIU outperformed LIME and SHAP by generating explanations more rapidly. Our findings suggest that there are notable differences in human decision-making between the various explanation support settings. In line with that, we present three potential explainable methods that, with future improvements in implementation, can be generalized to different medical data sets and can provide effective decision support to medical experts. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI))
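The post hoc explainers compared in this study (LIME and SHAP) share a common core: perturb the input, query the black-box model, and attribute the change in output to the features that were removed. As an illustration only, not the authors' implementation, here is a minimal perturbation-based attribution sketch in plain Python; the `predict` function stands in for any black-box model:

```python
import random

def lime_style_explanation(predict, instance, n_samples=500, seed=0):
    """Toy LIME-style attribution for a numeric feature vector.

    predict: black-box function mapping a feature list to a score.
    Perturbs random subsets of features to zero and averages, per
    feature, the drop in the prediction when that feature is removed.
    """
    rng = random.Random(seed)
    n = len(instance)
    base = predict(instance)
    sums = [0.0] * n
    counts = [0] * n
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in range(n)]
        perturbed = [x if keep else 0 for x, keep in zip(instance, mask)]
        delta = base - predict(perturbed)
        for i, keep in enumerate(mask):
            if not keep:            # feature i was zeroed in this sample
                sums[i] += delta
                counts[i] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

On a toy model whose first feature dominates, the first weight comes out largest; real LIME additionally weights samples by proximity to the instance and fits a sparse linear surrogate rather than averaging raw deltas.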

Article
Artificial Neural Network Analysis of Gene Expression Data Predicted Non-Hodgkin Lymphoma Subtypes with High Accuracy
Mach. Learn. Knowl. Extr. 2021, 3(3), 720-739; https://0-doi-org.brum.beds.ac.uk/10.3390/make3030036 - 10 Sep 2021
Viewed by 576
Abstract
Predictive analytics using artificial intelligence is a useful tool in cancer research. A multilayer perceptron neural network used gene expression data to predict the lymphoma subtypes of 290 cases of non-Hodgkin lymphoma (GSE132929). The input layer included both the whole array of 20,863 genes and a cancer transcriptome panel of 1769 genes. The output layer was lymphoma subtypes, including follicular lymphoma, mantle cell lymphoma, diffuse large B-cell lymphoma, Burkitt lymphoma, and marginal zone lymphoma. The neural networks successfully classified the cases consistent with the lymphoma subtypes, with an area under the curve (AUC) that ranged from 0.87 to 0.99. The most relevant predictive genes were LCE2B, KNG1, IGHV7_81, TG, C6, FGB, ZNF750, CTSV, INGX, and COL4A6 for the whole set; and ARG1, MAGEA3, AKT2, IL1B, S100A7A, CLEC5A, WIF1, TREM1, DEFB1, and GAGE1 for the cancer panel. The characteristic predictive genes for each lymphoma subtype were also identified with high accuracy (AUC = 0.95, incorrect predictions = 6.2%). Finally, the 30 most relevant genes of the whole set, which belong to apoptosis, cell proliferation, metabolism, and antigen presentation pathways, predicted not only the lymphoma subtypes but also the overall survival of diffuse large B-cell lymphoma (series GSE10846, n = 414 cases), as well as the most relevant cancer subtypes of The Cancer Genome Atlas (TCGA) consortium, including melanoma and carcinomas of the breast, colon and rectum, lung, prostate, and stomach (7441 cases). In conclusion, neural networks predicted the non-Hodgkin lymphoma subtypes with high accuracy, and the highlighted genes also predicted the survival of a pan-cancer series. Full article

Article
Benchmarking Studies Aimed at Clustering and Classification Tasks Using K-Means, Fuzzy C-Means and Evolutionary Neural Networks
Mach. Learn. Knowl. Extr. 2021, 3(3), 695-719; https://0-doi-org.brum.beds.ac.uk/10.3390/make3030035 - 31 Aug 2021
Cited by 1 | Viewed by 783
Abstract
Clustering is a widely used unsupervised learning technique across data mining and machine learning applications, with frequent use in fields as diverse as astronomy, medical imaging, search and optimization, geology, geophysics, and sentiment analysis. It is therefore important to verify the effectiveness of the clustering algorithm in question and to make reasonably strong arguments for the acceptance of the end results generated by the validity indices that measure the compactness and separability of clusters. This work explores the successes and limitations of two popular clustering mechanisms by comparing their performance over publicly available benchmarking data sets that capture a variety of data point distributions and numbers of attributes, especially from a computational point of view, incorporating techniques that alleviate some of the issues that plague these algorithms. Sensitivity to initialization conditions and stagnation in local minima are explored. Further, an implementation of a feedforward neural network with a fully connected topology, trained by particle swarm optimization, is introduced; this serves as a guided random search technique for optimizing the neural network weights. The algorithms utilized here are studied and compared, and their applications are explored. The study aims to provide a handy reference for practitioners to both learn about and verify benchmarking results on commonly used real-world data sets, from both a supervised and unsupervised point of view, before application to more tailored, complex problems. Full article
(This article belongs to the Special Issue Recent Advances in Feature Selection)
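For readers benchmarking along, the K-means half of the comparison reduces to Lloyd's two alternating steps, assignment and centroid update. A self-contained sketch (not the paper's code) in plain Python:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm on a list of 2-D tuples.

    Returns (centroids, labels), where labels[i] is the index of the
    centroid closest to points[i] after the final iteration.
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)     # random initialization
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by Euclidean distance
        labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # update step: mean of each cluster (empty clusters keep their centroid)
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(d) / len(members)
                                     for d in zip(*members))
    return centroids, labels
```

The `rng.sample` initialization is exactly the sensitivity the study probes: different seeds can converge to different local minima, which is why multiple restarts (or guided search variants such as the PSO-trained networks studied here) are used in practice.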

Review
A Survey of Machine Learning-Based Solutions for Phishing Website Detection
Mach. Learn. Knowl. Extr. 2021, 3(3), 672-694; https://0-doi-org.brum.beds.ac.uk/10.3390/make3030034 - 20 Aug 2021
Cited by 1 | Viewed by 851
Abstract
With the development of the Internet, network security has drawn increasing attention; a secure network environment is a basis for the rapid and sound development of the Internet. Phishing is a major class of cybercrime: the malicious act of tricking users into clicking on phishing links, stealing user information, and ultimately using that data to log into the victims’ accounts and steal funds. Network security is an iterative issue of attack and defense, and both phishing methods and phishing-detection technology are constantly being updated. Traditional methods for identifying phishing links rely on blacklists and whitelists, which cannot identify new phishing links. The problem is therefore to predict whether a newly emerging link is a phishing website, and to improve the accuracy of that prediction. With the maturity of machine learning technology, prediction has become a viable approach. This paper offers a state-of-the-art survey of methods for phishing website detection. It starts with the life cycle of phishing, introduces common anti-phishing methods, focuses on methods for identifying phishing links, and examines machine learning-based solutions in depth, covering data collection, feature extraction, modeling, and performance evaluation. The paper provides a detailed comparison of various solutions for phishing website detection. Full article
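The feature-extraction stage surveyed here typically starts with lexical features of the URL itself. A small illustrative sketch; this particular feature set is our own minimal choice, not one prescribed by the survey:

```python
import re
from urllib.parse import urlparse

def url_features(url):
    """A few lexical URL features often fed to phishing classifiers."""
    parsed = urlparse(url)
    host = parsed.netloc.rsplit("@", 1)[-1].split(":")[0]  # drop userinfo/port
    return {
        "url_length": len(url),
        "num_dots": host.count("."),                # many subdomains is suspicious
        "num_hyphens": host.count("-"),
        "has_ip_host": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)),
        "has_at_symbol": "@" in url,                # classic redirection trick
        "uses_https": parsed.scheme == "https",
    }
```

Vectors like these are then labeled from blacklist/whitelist data and handed to a classifier; the surveyed solutions differ mainly in which features they add (host-based, content-based) and which model they train.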

Article
Surrogate Object Detection Explainer (SODEx) with YOLOv4 and LIME
Mach. Learn. Knowl. Extr. 2021, 3(3), 662-671; https://0-doi-org.brum.beds.ac.uk/10.3390/make3030033 - 6 Aug 2021
Viewed by 811
Abstract
Due to their impressive performance, deep neural networks have become a prevalent choice for object detection in images. Given the complexity of the neural network models used, users of these algorithms are typically given no hint as to how the objects were found. It remains, for example, unclear whether an object is detected based on what it looks like or based on the context in which it is located. We have developed an algorithm, Surrogate Object Detection Explainer (SODEx), that can explain any object detection algorithm using any classification explainer. We evaluate SODEx qualitatively and quantitatively by detecting objects in the COCO dataset with YOLOv4 and explaining these detections with LIME. This empirical evaluation not only demonstrates the value of explainable object detection, it also provides valuable insights into how YOLOv4 detects objects. Full article
(This article belongs to the Special Issue Explainable Machine Learning)
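To hand a detection to a classification explainer such as LIME, a surrogate needs a way to decide whether a detection in a perturbed image still corresponds to the original box. A standard building block for that matching (shown here as an assumption about the approach, not as SODEx's exact criterion) is intersection over union:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area, 0 if disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

With a score like this, "the detector still finds the object" becomes a binary label for each perturbed image, which is exactly the kind of classifier output LIME knows how to explain.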

Article
Classification of Explainable Artificial Intelligence Methods through Their Output Formats
Mach. Learn. Knowl. Extr. 2021, 3(3), 615-661; https://0-doi-org.brum.beds.ac.uk/10.3390/make3030032 - 4 Aug 2021
Cited by 1 | Viewed by 1155
Abstract
Machine and deep learning have proven their utility for generating data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension: the output format. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords “explainable artificial intelligence”, “explainable machine learning”, and “interpretable machine learning”. A subsequent iterative search was carried out by checking the bibliographies of these articles. The addition of the dimension of the explanation format makes the proposed classification system a practical tool for scholars, supporting them in selecting the most suitable type of explanation format for the problem at hand. Given the wide variety of challenges faced by researchers, the existing XAI methods provide several solutions to meet requirements that differ considerably between the users, problems and application fields of artificial intelligence (AI). The task of identifying the most appropriate explanation can be daunting, thus the need for a classification system that helps with the selection of methods. This work concludes by critically identifying the limitations of the formats of explanations and by providing recommendations and possible future research directions on how to build a more generally applicable XAI method. Future work should be flexible enough to meet the many requirements posed by the widespread use of AI in several fields and by new regulations. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI))

Article
Orientation-Encoding CNN for Point Cloud Classification and Segmentation
Mach. Learn. Knowl. Extr. 2021, 3(3), 601-614; https://0-doi-org.brum.beds.ac.uk/10.3390/make3030031 - 2 Aug 2021
Cited by 1 | Viewed by 545
Abstract
With the introduction of effective and general deep learning network frameworks, deep learning based methods have achieved remarkable success in various visual tasks. However, applying convolutional neural networks to point clouds remains challenging because point clouds lack a regular grid structure. Therefore, taking the original point clouds as input, this paper proposes an orientation-encoding (OE) convolutional module and designs a convolutional neural network for effectively extracting local geometric features of point sets. By searching for the same number of neighboring points in each of 8 directions and arranging them in order, the OE convolution is carried out over the points in each direction, which realizes effective feature learning of the local structure of the point sets. Experiments on diverse datasets show that the proposed method achieves competitive performance on classification and segmentation tasks for point sets. Full article
(This article belongs to the Topic Applied Computer Vision and Pattern Recognition)
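The 8-direction neighbor search described above can be pictured, in 2-D for simplicity, as binning each neighbor by the angle of its offset from the center point. The OE module itself operates on 3-D point clouds; this planar sketch is only illustrative:

```python
import math

def direction_bin(dx, dy, n_bins=8):
    """Assign a neighbor offset (dx, dy) to one of n_bins angular sectors."""
    angle = math.atan2(dy, dx) % (2 * math.pi)   # normalize to [0, 2*pi)
    return int(angle // (2 * math.pi / n_bins))

def group_neighbors(center, neighbors):
    """Group 2-D neighbor points into 8 direction lists around `center`."""
    groups = [[] for _ in range(8)]
    for p in neighbors:
        groups[direction_bin(p[0] - center[0], p[1] - center[1])].append(p)
    return groups
```

Sampling a fixed number of points per sector then gives every center point a neighborhood tensor of identical shape, which is what lets an ordinary convolution run over the otherwise unordered point set.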

Article
Proposing an Ontology Model for Planning Photovoltaic Systems
Mach. Learn. Knowl. Extr. 2021, 3(3), 582-600; https://0-doi-org.brum.beds.ac.uk/10.3390/make3030030 - 31 Jul 2021
Viewed by 560
Abstract
The performance of a photovoltaic (PV) system is negatively affected when operating under shading conditions. Maximum power point tracking (MPPT) systems are used to overcome this hurdle. Designing an efficient MPPT-based controller requires knowledge about power conversion in PV systems. However, it is difficult for nontechnical solar energy consumers to define the controller’s parameters and deal with the distinct sources of data involved in planning. Semantic Web technologies enable us to improve knowledge representation, sharing, and reuse of relevant information generated by various sources. In this work, we propose a knowledge-based model representing key concepts associated with an MPPT-based controller. The model is augmented with Semantic Web Rule Language (SWRL) rules, allowing the system planner to extract information about power reductions caused by snow and several airborne particles. The proposed ontology, named MPPT-On, is validated through a case study designed with the System Advisor Model (SAM). It acts as a decision support system and facilitates the planning of PV projects for non-technical practitioners. Moreover, the presented rule-based system can be reused and shared within the solar energy community to adjust the power estimates reported by PV planning tools, especially for snowy months and polluted environments. Full article

Review
Recent Advances in Deep Reinforcement Learning Applications for Solving Partially Observable Markov Decision Processes (POMDP) Problems: Part 1—Fundamentals and Applications in Games, Robotics and Natural Language Processing
Mach. Learn. Knowl. Extr. 2021, 3(3), 554-581; https://0-doi-org.brum.beds.ac.uk/10.3390/make3030029 - 15 Jul 2021
Cited by 1 | Viewed by 956
Abstract
The first part of this two-part series of papers provides a survey on recent advances in Deep Reinforcement Learning (DRL) applications for solving partially observable Markov decision process (POMDP) problems. Reinforcement Learning (RL) is an approach that simulates the human’s natural learning process, whose key is to let the agent learn by interacting with a stochastic environment. Because the agent has only limited access to information about the environment, these methods can be applied efficiently in most fields that require self-learning. Although efficient algorithms are widely used, an organized investigation remains essential so that we can make good comparisons and choose the best structures or algorithms when applying DRL in various applications. In this overview, we introduce Markov decision process (MDP) problems and Reinforcement Learning, and survey applications of DRL for solving POMDP problems in games, robotics, and natural language processing. A follow-up paper will cover applications in transportation, communications and networking, and industries. Full article
(This article belongs to the Special Issue Advances in Reinforcement Learning)
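As a concrete anchor for the POMDP formalism the survey builds on: partial observability is handled by maintaining a belief (a probability distribution over states) and updating it by Bayes' rule after each action and observation. A minimal sketch, with our own toy dictionary encoding of the transition and observation models rather than anything from the paper:

```python
def belief_update(states, belief, action, obs, T, O):
    """Bayes update of a discrete POMDP belief.

    belief: dict state -> probability.
    T[(s, a, s2)]: probability of moving s -> s2 under action a.
    O[(s2, a, o)]: probability of observing o in s2 after action a.
    """
    new = {
        s2: O[(s2, action, obs)]
            * sum(T[(s, action, s2)] * belief[s] for s in states)
        for s2 in states
    }
    z = sum(new.values())   # normalizer: Pr(obs | belief, action)
    return {s: p / z for s, p in new.items()}
```

In the classic tiger problem, for instance, a uniform belief combined with an 85%-accurate "listen" observation shifts to 0.85 on the heard side; DRL methods for POMDPs effectively learn to act on such beliefs (or on learned recurrent summaries of the history) instead of the hidden state.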

Article
Voting in Transfer Learning System for Ground-Based Cloud Classification
Mach. Learn. Knowl. Extr. 2021, 3(3), 542-553; https://0-doi-org.brum.beds.ac.uk/10.3390/make3030028 - 12 Jul 2021
Cited by 1 | Viewed by 621
Abstract
Cloud classification is a great challenge in meteorological research. The different types of clouds, currently known and present in our skies, produce radiative effects that impact the variation of atmospheric conditions, with consequent strong influence over the earth’s climate and weather. Therefore, identifying their main visual features becomes a crucial aspect. In this paper, the goal is to adopt pretrained deep neural network architectures for cloud image description and, subsequently, classification. The approach is pyramidal: proceeding from the bottom up, it partially extracts previous knowledge of deep neural networks related to the original task and transfers it to the new task. The updated knowledge is integrated in a voting context to provide a classification prediction. The framework trains the neural models on unbalanced sets, a condition that makes the task even more complex, and combines the provided predictions through statistical measures. An experimental phase on different cloud image datasets is performed, and the results achieved show the effectiveness of the proposed approach with respect to state-of-the-art competitors. Full article
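The voting stage can be reduced to its simplest form: a majority vote over the class predicted by each fine-tuned network. The paper combines predictions through statistical measures; plain counting is shown here only as a simplified stand-in:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions for one image by majority vote.

    predictions: list of class labels, one per fine-tuned network.
    Ties are broken in favor of the label seen first in the list.
    """
    counts = Counter(predictions)
    return counts.most_common(1)[0][0]
```

Weighting each model's vote by a per-class reliability estimate, rather than counting every vote equally, is one way such a scheme copes with the unbalanced training sets the abstract mentions.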

Article
Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability
Mach. Learn. Knowl. Extr. 2021, 3(3), 525-541; https://0-doi-org.brum.beds.ac.uk/10.3390/make3030027 - 30 Jun 2021
Cited by 2 | Viewed by 1021
Abstract
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction by any ML model by learning a simpler interpretable model (e.g., a linear classifier) around the prediction, generating simulated data around the instance by random perturbation and obtaining feature importance through some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation methods result in shifts in data and instability in the generated explanations, where different explanations can be generated for the same prediction. These are critical issues that can prevent deployment of LIME in sensitive domains. We propose a deterministic version of LIME. Instead of random perturbation, we utilize Agglomerative Hierarchical Clustering (AHC) to group the training data and K-Nearest Neighbours (KNN) to select the relevant cluster for the new instance being explained. After finding the relevant cluster, a simple model (i.e., a linear model or decision tree) is trained over the selected cluster to generate the explanations. Experimental results on six public (three binary and three multi-class) and six synthetic datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME), where we quantitatively determine the stability and faithfulness of DLIME compared to LIME. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI))
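The deterministic pipeline described in the abstract (cluster the training data, select the relevant cluster via KNN, fit a simple local model over it) can be sketched end to end in one dimension. The cluster labels are assumed precomputed here, whereas DLIME derives them with AHC:

```python
def dlime_style_surrogate(train_x, train_y, clusters, instance, k=3):
    """Deterministic LIME-style step in 1-D: pick the instance's cluster
    via KNN, then fit a least-squares line over that cluster only.

    train_x, train_y: 1-D training features and black-box outputs.
    clusters: precomputed cluster label per training point.
    Returns (slope, intercept) of the local surrogate model.
    """
    # KNN: majority cluster among the k nearest training points
    order = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - instance))
    nearest = [clusters[i] for i in order[:k]]
    label = max(set(nearest), key=nearest.count)
    # deterministic least-squares fit over the selected cluster (no sampling)
    xs = [x for x, c in zip(train_x, clusters) if c == label]
    ys = [y for y, c in zip(train_y, clusters) if c == label]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    var = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    return slope, my - slope * mx
```

Because every step is deterministic, re-explaining the same instance always yields the same surrogate, which is precisely the stability property the paper measures against vanilla LIME.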
