Symbolic Methods of Machine Learning in Knowledge Discovery and Explainable Artificial Intelligence

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: closed (30 September 2023) | Viewed by 4945
Joint Special Issue: you may submit to either journal, Mathematics or Applied Sciences.

Special Issue Editor


Prof. Dr. Marek Sikora
Guest Editor
Department of Computer Networks and Systems, Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
Interests: decision support systems; data mining; rule induction; rough sets

Special Issue Information

Dear Colleagues,

Symbolic methods, also called interpretable or white-box methods, were among the first methods developed in machine learning. They are still being developed and find practical applications, particularly in knowledge-discovery tasks. In predictive analytics, complex AI/ML models such as boosting, bagging, and deep learning usually achieve better results than white-box methods. However, explaining the decision-making process of such complex models is difficult and, without additional assumptions, often impossible; for this reason, they are called black boxes. The recent dynamic growth of explainable artificial intelligence (XAI) has been stimulated by the need to explain decisions made by complex AI/ML systems. Within this domain, the most rapid progress has been observed in local, so-called instance-level explanation, i.e., explaining the reasons for a specific decision made for a given example. Global, dataset-level XAI still requires intensive research. In general, a global explanation method should help the user understand how an AI/ML model makes decisions overall, for example, by revealing the patterns of right and wrong decisions the model makes. In this context, white-box approximations of complex AI/ML models may play an important role; in recent years, research has been carried out on approximating the decisions of black-box models with white-box approaches.
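To make the idea of white-box approximation concrete, the sketch below fits a global surrogate: a shallow decision tree trained to mimic the predictions of a black-box ensemble. This is a minimal illustration assuming scikit-learn is available; the synthetic dataset, model choices, and tree depth are illustrative assumptions, not prescriptions from this Special Issue.

```python
# Minimal sketch of a global, white-box surrogate for a black-box model.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# Black-box model: accurate but hard to explain directly.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# White-box surrogate: a shallow tree trained on the black-box
# predictions rather than on the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate reproduces the black-box decision.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity to black box: {fidelity:.3f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(10)]))
```

The fidelity score measures how faithfully the tree reproduces the black-box decisions, while the printed tree itself is the global, human-readable explanation.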

This Special Issue focuses on new methods for the induction of interpretable AI/ML models (rules, trees, graphs, etc.) in data mining and knowledge discovery. Methods for concept learning, contrast set mining, action mining, regression, and censored data analysis are welcome. The Special Issue also covers all proposals related to white-box-based XAI dedicated to the global explanation of decisions made by complex AI/ML models.

You may choose our Joint Special Issue in Applied Sciences.

Prof. Dr. Marek Sikora
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • knowledge discovery
  • white-box ML
  • explainable artificial intelligence
  • decision tree and rule induction
  • rough sets

Published Papers (2 papers)


Research

25 pages, 6148 KiB  
Article
Analyzing Employee Attrition Using Explainable AI for Strategic HR Decision-Making
by Gabriel Marín Díaz, José Javier Galán Hernández and José Luis Galdón Salvador
Mathematics 2023, 11(22), 4677; https://doi.org/10.3390/math11224677 - 17 Nov 2023
Cited by 1 | Viewed by 2848
Abstract
Employee attrition and high turnover have become critical challenges faced by various sectors in today’s competitive job market. In response to these pressing issues, organizations are increasingly turning to artificial intelligence (AI) to predict employee attrition and implement effective retention strategies. This paper delves into the application of explainable AI (XAI) in identifying potential employee turnover and devising data-driven solutions to address this complex problem. The first part of the paper examines the escalating problem of employee attrition in specific industries, analyzing the detrimental impact on organizational productivity, morale, and financial stability. The second section focuses on the utilization of AI techniques to predict employee attrition. AI can analyze historical data, employee behavior, and various external factors to forecast the likelihood of an employee leaving an organization. By identifying early warning signs, businesses can intervene proactively and implement personalized retention efforts. The third part introduces explainable AI techniques which enhance the transparency and interpretability of AI models. By incorporating these methods into AI-based predictive systems, organizations gain deeper insights into the factors driving employee turnover. This interpretability enables human resources (HR) professionals and decision-makers to understand the model’s predictions and facilitates the development of targeted retention and recruitment strategies that align with individual employee needs.
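As a rough illustration of the general workflow this abstract describes (not the authors' implementation), the sketch below trains a classifier on synthetic, hypothetical HR features and applies permutation importance, one common model-agnostic XAI technique, to surface the features driving predicted attrition. All feature names and data here are invented for the example.

```python
# Sketch: predict attrition on synthetic HR data, then explain globally.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
features = ["overtime_hours", "years_at_company", "salary_percentile", "satisfaction"]
X = rng.normal(size=(n, len(features)))
# Synthetic target: attrition driven mainly by overtime and low satisfaction.
y = ((X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=n)) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Model-agnostic global explanation: permutation importance on held-out data.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>18}: {score:.3f}")
```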

29 pages, 2108 KiB  
Article
Random Maximum 2 Satisfiability Logic in Discrete Hopfield Neural Network Incorporating Improved Election Algorithm
by Vikneswari Someetheram, Muhammad Fadhil Marsani, Mohd Shareduwan Mohd Kasihmuddin, Nur Ezlin Zamri, Siti Syatirah Muhammad Sidik, Siti Zulaikha Mohd Jamaludin and Mohd. Asyraf Mansor
Mathematics 2022, 10(24), 4734; https://doi.org/10.3390/math10244734 - 13 Dec 2022
Cited by 6 | Viewed by 1320
Abstract
Real-life logical rules are not always satisfiable in nature, owing to redundant variables in the logical formulation. Thus, an intelligent system must be optimally governed to ensure that it can behave according to a non-satisfiable structure, which finds practical applications particularly in knowledge-discovery tasks. In this paper, we propose a non-satisfiability logical rule that combines two sub-logical rules, namely Maximum 2 Satisfiability and Random 2 Satisfiability, which play a vital role in creating explainable artificial intelligence. Interestingly, the combination results in a negative logical outcome, where the cost function of the proposed logic is always greater than zero. The proposed logical rule is implemented in a Discrete Hopfield Neural Network (DHNN) by computing the cost function associated with each variable in Random 2 Satisfiability. Since the proposed logical rule is difficult to optimize during the training phase of the DHNN, an Election Algorithm is implemented to find a consistent interpretation that minimizes the cost function of the proposed logical rule. The Election Algorithm has become a popular optimization metaheuristic for solving constrained optimization problems; its fundamental concepts are taken from socio-political phenomena, which use new and efficient processes to produce the best outcome. The behavior of Random Maximum 2 Satisfiability in the Discrete Hopfield Neural Network is investigated based on several performance metrics, and the performance is compared between existing conventional methods, a Genetic Algorithm, and the Election Algorithm. The results demonstrate that the proposed Random Maximum 2 Satisfiability can serve as symbolic instruction in the Discrete Hopfield Neural Network, and that the Election Algorithm provides an effective training process for the network compared with the Genetic Algorithm and Exhaustive Search.
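As a loose illustration of the cost-function idea in this line of work (not the authors' DHNN or Election Algorithm), the sketch below encodes 2 Satisfiability clauses over bipolar states S in {-1, +1}: each clause contributes zero cost exactly when it is satisfied, so minimizing the total cost amounts to searching for a consistent interpretation. The clause encoding and the brute-force minimizer are illustrative assumptions only.

```python
# Sketch of a 2SAT-style cost function over bipolar states.
# The paper trains a Discrete Hopfield Neural Network with an Election
# Algorithm; here a tiny exhaustive search stands in for that training.
from itertools import product

# A clause is a pair of literals; a literal is (variable_index, sign),
# with sign = +1 for x and -1 for NOT x. Clauses are illustrative.
clauses = [((0, +1), (1, -1)), ((1, +1), (2, +1)), ((0, -1), (2, -1))]

def cost(state, clauses):
    """Sum over clauses of the product over literals of (1 - sign*S)/2.

    Each factor is 1 when its literal is false and 0 when it is true,
    so a clause contributes zero cost exactly when it is satisfied.
    """
    total = 0.0
    for clause in clauses:
        term = 1.0
        for var, sign in clause:
            term *= 0.5 * (1 - sign * state[var])
        total += term
    return total

# Exhaustive search over bipolar states (feasible only for tiny n).
best = min(product((-1, 1), repeat=3), key=lambda s: cost(s, clauses))
print("minimum-cost state:", best, "cost:", cost(best, clauses))
```

For a non-satisfiable formula of the kind the paper studies, no state drives this cost to zero, which is why the minimum cost stays strictly positive.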
