Emerging Technologies in Explainable Artificial Intelligence (XAI) for Big Data

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 August 2024

Special Issue Editors

Guest Editor
Department of Computer Science, The University of Sunderland, Sunderland NE36 0AS, UK
Interests: artificial intelligence; natural language processing; big data; data science

Guest Editor
Department of Computer Science, The University of Sunderland, Sunderland NE36 0AS, UK
Interests: artificial intelligence; natural language processing; machine learning; data science

Guest Editor
Department of Computer Science, University of Bradford, Bradford BD7 1DP, UK
Interests: cyber security; artificial intelligence; Internet of Things; machine learning; security in cyber-physical systems

Special Issue Information

Dear Colleagues,

Artificial intelligence applications, methods, techniques, and approaches are advancing rapidly. Although they contribute positively to many aspects of our lives, there are major concerns about how they process and learn from our information, how they make critical decisions, and so on. There is increasing demand from individuals, societies, businesses, organizations, and government bodies to better understand how AI applications perform their tasks. For example, ChatGPT has raised concerns among practitioners in many industries (e.g., education) and even prompted certain governments (e.g., Italy) to ban its use by the public. The need for explainable AI has prompted researchers, businesses, and governments to take action and to formulate theories and application designs that allow transparency and explanation of the processes and decisions of AI algorithms and products. This Special Issue therefore invites original theoretical or applied research, as well as survey papers, on methods, approaches, and techniques for explaining AI in relation to big data.

The aim of this Special Issue is to invite, encourage, and disseminate original research contributions that expand the boundaries of our knowledge of explainable AI and its applications to big data.

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  1. Theory and foundations of explainable AI using big data;
  2. Abstraction, validation, and generalization of explainable AI;
  3. Methods and standards for explainable AI;
  4. Explainable AI approaches to natural language processing, computer vision, image processing, robotics, cybersecurity, etc.;
  5. Analyses of AI or machine learning model training;
  6. Explainable machine learning algorithms;
  7. Explainable AI training processes or model representations;
  8. Literature reviews on recent trends, methods, and approaches to explainable AI;
  9. Challenges and solutions in the design and application of explainable AI;
  10. User trust and privacy issues in AI and machine learning models;
  11. Assessment and impact of national and international laws and regulations on explainable AI;
  12. Artificial intelligence and machine learning for the security and privacy of big data;
  13. Artificial intelligence, safety assurance, and reliability of big data;
  14. Artificial intelligence and intrusion detection for big data;
  15. Artificial intelligence and blockchain for big data.

Dr. Sardar Jaf
Dr. Qiang Huang
Dr. Ibrahim Ghafir
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable AI
  • explainable AI methods
  • explainable AI for big data
  • security issues in AI models
  • privacy in AI models
  • safety issues in AI models
  • interpretable AI
  • artificial intelligence
  • machine learning
  • AI application to big data
  • machine learning model analyses
  • machine learning applications
  • machine learning algorithm analyses

Published Papers

This special issue is now open for submission.