Explainable Artificial Intelligence: Efficiency and Sustainability

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 8185

Special Issue Editors


Guest Editor
Research and Innovation Centre for Electrical Engineering, University of West Bohemia, 30100 Pilsen, Czech Republic
Interests: artificial intelligence; machine learning; deep learning; complex systems; computational intelligence

Guest Editor
Department of Electrical Engineering, Kermanshah Branch, Islamic Azad University, Kermanshah 3981838381, Iran
Interests: artificial intelligence; artificial neural networks; machine learning

Guest Editor
Department of Computer Engineering, Faculty of Engineering, University of Kurdistan, Sanandaj 6617715175, Iran
Interests: natural language processing; speech processing; machine learning

Special Issue Information

Dear Colleagues,

In the last few years, with the rapid growth of data generation and the increasing demand for computing power, Artificial Intelligence (AI) has been employed to enhance the performance of systems across a wide variety of tasks and applications in our private and professional lives. Despite this considerable performance, however, the internal mechanisms of machine learning and deep learning techniques remain opaque, as their inner architectures are non-linear and difficult to understand. Many powerful techniques have been used as “black boxes”, providing no information about how and why certain methods of estimation, classification, or prediction are chosen. Although experts can interpret the structure of these models, in many everyday applications this lack of transparency may limit users’ ability to use software or devices equipped with these models. This is clearest in medical applications, where there is a meaningful connection between the success of a diagnosis or treatment and the acceptability of the method used.

In this regard, the development of methods for interpreting, explaining, and visualizing AI-based models has recently attracted great attention. This field of research, referred to as explainable artificial intelligence (XAI), seeks both to shed light on existing AI methods and to design new AI techniques whose learning processes and decisions are understandable to users.
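As a small, hedged illustration of the kind of post-hoc explanation methods referred to above, the sketch below computes permutation feature importance for a toy "black-box" model: a feature matters to the model if shuffling its column degrades predictions. The model, data, and function names here are hypothetical and written in plain Python for self-containment; this is a minimal sketch of one XAI technique, not a reference implementation.

```python
import random

# Toy "black-box" model: relies strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
def black_box(x):
    return 3.0 * x[0] + 0.5 * x[1]

def mse(X, y, predict):
    """Mean squared error of `predict` over the dataset."""
    return sum((predict(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(X, y, predict, feature, seed=0):
    """Increase in error when one feature's column is shuffled:
    the larger the increase, the more the model relies on that feature."""
    baseline = mse(X, y, predict)
    column = [x[feature] for x in X]
    random.Random(seed).shuffle(column)
    X_perm = [list(x) for x in X]          # copy rows, then overwrite
    for row, value in zip(X_perm, column): # the permuted column
        row[feature] = value
    return mse(X_perm, y, predict) - baseline

rng = random.Random(42)
X = [[rng.random() for _ in range(3)] for _ in range(200)]
y = [black_box(x) for x in X]  # labels match the model, so baseline error is 0

scores = [permutation_importance(X, y, black_box, f) for f in range(3)]
```

Because the model ignores feature 2, its score is exactly zero, while features 0 and 1 rank in order of their weights; this ranking is the explanation a user would receive, without any access to the model's internals.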

In this Special Issue, we are particularly interested in showcasing recent advancements in the area of explainable artificial intelligence, as well as in presenting recent results, reviews, and case studies to illustrate and analyze the impacts of such techniques on the future of AI applications. Prospective authors are invited to submit original manuscripts on topics including, but not limited to, the following:

  • Impacts of the transparency and accountability of AI in smart banking systems;
  • Explainable artificial intelligence and healthcare systems;
  • Acceptability and interpretability of XAI in sustainable development;
  • Explainable artificial intelligence and sustainable AI;
  • XAI perspectives in hardware or software design;
  • Effects of XAI on the development and acceptability of the metaverse;
  • Explainable artificial intelligence and gaming;
  • Explainable artificial intelligence and blockchain, cryptocurrency, and cyber security;
  • Sustainable robotics and explainable artificial intelligence.

Dr. Mohammad (Behdad) Jamshidi
Dr. Sobhan Roshani
Dr. Fatemeh Daneshfar 
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • robotics and XAI
  • gaming and XAI
  • blockchain and XAI
  • AI sustainability
  • AI trustworthiness
  • AI acceptability
  • AI interpretability
  • cryptocurrency and XAI
  • healthcare and XAI
  • cyber security and XAI
  • metaverse and XAI

Published Papers (1 paper)


Research

59 pages, 13948 KiB  
Article
An Empirical Survey on Explainable AI Technologies: Recent Trends, Use-Cases, and Categories from Technical and Application Perspectives
by Mohammad Nagahisarchoghaei, Nasheen Nur, Logan Cummins, Nashtarin Nur, Mirhossein Mousavi Karimi, Shreya Nandanwar, Siddhartha Bhattacharyya and Shahram Rahimi
Electronics 2023, 12(5), 1092; https://doi.org/10.3390/electronics12051092 - 22 Feb 2023
Cited by 13 | Viewed by 7293
Abstract
In a wide range of industries and academic fields, artificial intelligence is becoming increasingly prevalent. AI models are taking on more crucial decision-making tasks as they grow in popularity and performance. Although AI models, particularly machine learning models, are successful in research, they have numerous limitations and drawbacks in practice. Furthermore, due to the lack of transparency behind their behavior, users need more understanding of how these models make specific decisions, especially in complex state-of-the-art machine learning algorithms. Complex machine learning systems utilize less transparent algorithms, thereby exacerbating the problem. This survey analyzes the significance and evolution of explainable AI (XAI) research across various domains and applications. Throughout this study, a rich repository of explainability classifications and summaries has been developed, along with their applications and practical use cases. We believe this study will make it easier for researchers to understand all explainability methods and access their applications simultaneously.
(This article belongs to the Special Issue Explainable Artificial Intelligence: Efficiency and Sustainability)
