
Human-Centric AI: The Symbiosis of Human and Artificial Intelligence

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Signal and Data Analysis".

Deadline for manuscript submissions: closed (30 November 2020) | Viewed by 34194

Special Issue Editors

Prof. Davor Horvatić
Department of Physics, Faculty of Science, University of Zagreb, Bijenička cesta 32, 10000 Zagreb, Croatia
Interests: complex network analysis and modeling; time series analysis; statistical physics; biophysics

Dr. Tomislav Lipić
Ruđer Bošković Institute, Laboratory for Machine Learning and Knowledge Representation, Bijenička cesta 54, 10000 Zagreb, Croatia
Interests: human-centric and explainable artificial intelligence; interpretable and scalable machine learning; trustworthy AI; veridical data science; complex networks; human-machine networks; social good applications; smart healthcare and medicine; biomedical imaging

Special Issue Information

Dear Colleagues,

The well-evidenced advances of data-driven, complex machine learning approaches emerging within the so-called second wave of artificial intelligence (AI) have fostered the exploration of possible AI applications in various domains and aspects of human life, practices, and society. Most of the recent success in AI comes from the use of representation learning with end-to-end trained deep neural network models in tasks such as image, text, and speech recognition or strategic board and video games. By enabling automatic feature engineering, deep learning models significantly reduce the reliance on domain-expert knowledge, outperforming traditional methods based on handcrafted feature engineering and achieving performance that matches or even surpasses that of humans in some respects.

Despite these outstanding advancements and potential benefits, concerns about the black-box nature of these models and the lack of transparency behind their behavior have hampered their wider application in society. In order to fully trust, accept, and adopt newly emerging AI solutions in our everyday lives and practices, we need explainable AI (XAI) that can provide human-understandable interpretations of algorithmic behavior and outcomes, enabling us to control and continuously improve performance, robustness, fairness, accountability, transparency, and explainability throughout the entire lifecycle of an AI application. Following this motivation, a trend recently emerging within diverse and multidisciplinary research communities is the exploration of human-centric AI approaches and the development of contextual explanatory models that propel the symbiosis of human and artificial intelligence, forming the basis of the next (third) wave of AI.

We can explore this human-centered partnership of people and AI from two perspectives. On one end, AI plays an assistive role in advancing human capabilities and cognitive performance, commonly referred to as augmented intelligence. Here, the goal is to improve the efficiency of human decision-making by complementing automation with human reasoning in order to manage the potential risks of automated decisions. On the other end, human intelligence can serve either as feedback, with a human in the loop usefully informing the processes of AI development, deployment, and operation, or as an inspiration for novel design principles based on reverse engineering human intelligence (e.g., human-like concept learning, progressive learning, creativity, general-purpose reasoning). The foundation that makes this symbiosis of human and artificial intelligence feasible is human-centric explainable AI.

This Special Issue aims to collect original, high-quality papers on methodologies, techniques, and tools for achieving explainability in complex machine learning models, their outputs, and their behaviors (e.g., directly through self-explainable, intrinsically interpretable models, or through learning disentangled representations, post hoc local interpretability, or explanations via examples). Contributions should be intended for specific target users (e.g., machine learning experts, domain experts, or general end users) and evaluated on synthetic or real-world datasets and settings from various disciplines. All contributions should include and explain the role of entropy, information theory, or complexity science concepts in human-centric explainable AI, with the aim of understanding the relationships between accuracy, reliability, robustness to adversarial attacks, fairness, accountability, transparency, causality, and explainability in different types of deep learning models. Since current approaches based on information-theoretic principles mostly focus on advancing algorithmic transparency and understanding the learning dynamics of complex machine learning algorithms, we encourage multidisciplinary researchers to go a step further and explore additional aspects of interpretability, as well as novel approaches to contextual explanatory models, through the lens of information theory, statistical physics, and complexity science in general.

Possible topics include but are not limited to the following:

  • Information-theoretic principles on the connections between accuracy, reliability, robustness, fairness, accountability, transparency, and explainability in different types of deep learning models.
  • Methodologies and tools providing global or local human-interpretable explanations for the predictions of general or specific complex machine learning and deep learning models dealing with time series, sequences, graph-structured and heterogeneous relational data, other specifically structured data, or different types of unstructured data.
  • Information-theoretic and complexity-based techniques and metrics for assessing the quality of explanations.
  • Methodologies and tools for understanding learned distributed representations, information flows, and algorithmic transparency in different deep learning network models and architectures with a specific focus on the information theory and statistical physics aspects.
  • Case studies utilizing explainable AI and machine learning approaches for scientific discoveries in various disciplines such as computational biology, biophysics, medicine, neuroscience, social science, or digital art and humanities.
  • Case studies and techniques for providing human-like concept learning in AI systems by exploring and utilizing principles of causality, learning-to-learn (meta-learning), one/few-shot, incremental, and active machine learning.

This Special Issue should serve as a platform for multi-disciplinary researchers interested in sharing their results with other communities using similar techniques. All submitted manuscripts will be subject to peer review, and accepted papers will be available via open access. We welcome the submission of extended conference papers with a clear justification of all extensions with respect to previously published works.

Prof. Davor Horvatić
Dr. Tomislav Lipić
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

- machine learning
- artificial intelligence
- deep neural networks
- interpretability
- explainability
- causality
- fairness, accountability, and transparency

Published Papers (11 papers)


Editorial


5 pages, 175 KiB  
Editorial
Human-Centric AI: The Symbiosis of Human and Artificial Intelligence
by Davor Horvatić and Tomislav Lipić
Entropy 2021, 23(3), 332; https://doi.org/10.3390/e23030332 - 11 Mar 2021
Cited by 7 | Viewed by 3231
Abstract
Well-evidenced advances of data-driven complex machine learning approaches emerging within the so-called second wave of artificial intelligence (AI) fostered the exploration of possible AI applications in various domains and aspects of human life, practices, and society [...] Full article
(This article belongs to the Special Issue Human-Centric AI: The Symbiosis of Human and Artificial Intelligence)

Research


23 pages, 712 KiB  
Article
Benchmarking Attention-Based Interpretability of Deep Learning in Multivariate Time Series Predictions
by Domjan Barić, Petar Fumić, Davor Horvatić and Tomislav Lipić
Entropy 2021, 23(2), 143; https://doi.org/10.3390/e23020143 - 25 Jan 2021
Cited by 9 | Viewed by 5049
Abstract
The adoption of deep learning models within safety-critical systems cannot rely only on good prediction performance; it needs to provide interpretable and robust explanations for their decisions. When modeling complex sequences, attention mechanisms are regarded as the established approach for providing deep neural networks with intrinsic interpretability. This paper focuses on the emerging trend of specifically designing diagnostic datasets for understanding the inner workings of attention-mechanism-based deep learning models for multivariate forecasting tasks. We design a novel benchmark of synthetic datasets with transparent underlying generating processes of multiple time series interactions of increasing complexity. The benchmark enables empirical evaluation of the performance of attention-based deep neural networks in three different aspects: (i) prediction performance score, (ii) interpretability correctness, and (iii) sensitivity analysis. Our analysis shows that although most models have satisfying and stable prediction performance, they often fail to give correct interpretations. The only model with both a satisfying performance score and correct interpretability is IMV-LSTM, which captures both autocorrelations and cross-correlations between multiple time series. Interestingly, when evaluating IMV-LSTM on simulated data from statistical and mechanistic models, the correctness of its interpretability increases with more complex datasets. Full article
(This article belongs to the Special Issue Human-Centric AI: The Symbiosis of Human and Artificial Intelligence)
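
As a hedged illustration of the benchmark idea described above, the sketch below generates a multivariate series whose target is driven by a single known input and scores how often a model's attention lands on that driver. The dataset design, scoring rule, and all names are illustrative assumptions, not the paper's actual benchmark:

```python
import numpy as np

def make_synthetic_mts(n_steps=1000, n_series=5, driver=2, lag=3, noise=0.1, seed=0):
    """Generate a multivariate series in which the target depends only on
    one lagged input series; the generating process is fully transparent."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_steps, n_series))
    y = np.zeros(n_steps)
    y[lag:] = np.sin(X[:-lag, driver]) + noise * rng.standard_normal(n_steps - lag)
    return X, y, driver

def interpretability_correctness(attention, driver):
    """Fraction of time steps where the largest attention weight
    falls on the true driving series (attention: [n_steps, n_series])."""
    return float(np.mean(np.argmax(attention, axis=1) == driver))

X, y, driver = make_synthetic_mts()
# Stand-in for a trained model's attention weights (uniform random here):
fake_attention = np.random.default_rng(1).random((len(y), X.shape[1]))
fake_attention /= fake_attention.sum(axis=1, keepdims=True)
print(f"correctness of random attention: {interpretability_correctness(fake_attention, driver):.2f}")
```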

19 pages, 1148 KiB  
Article
Deep Neural Network Model for Approximating Eigenmodes Localized by a Confining Potential
by Luka Grubišić, Marko Hajba and Domagoj Lacmanović
Entropy 2021, 23(1), 95; https://doi.org/10.3390/e23010095 - 11 Jan 2021
Cited by 8 | Viewed by 3376
Abstract
We study eigenmode localization for a class of elliptic reaction-diffusion operators. As the prototype model problem, we use a family of Schrödinger Hamiltonians parametrized by random potentials and study the associated effective confining potential. The problem is posed on a finite domain, and we compute localized bound states at the lower end of the spectrum. We present several deep network architectures that predict the localization of bound states from a sample of the potential. For tackling higher-dimensional problems, we consider a class of physics-informed deep dense networks. In particular, we focus on the interpretability of the proposed approaches. The deep network is used as a general reduced-order model that describes the nonlinear connection between the potential and the ground state. The performance of the surrogate reduced model is controlled by an error estimator, and the model is updated if necessary. Finally, we present a host of experiments to measure the accuracy and performance of the proposed algorithm. Full article
(This article belongs to the Special Issue Human-Centric AI: The Symbiosis of Human and Artificial Intelligence)
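
For a sense of the ground-truth side of such a surrogate, here is a minimal 1D sketch that computes the lowest eigenmodes of a Schrödinger operator with a random piecewise-constant potential by finite differences. The discretization, potential model, and parameters are assumptions for illustration, not the paper's setup:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lowest_eigenmodes(V, L=1.0, k=4):
    """Lowest k eigenpairs of -u'' + V u = E u on [0, L] with homogeneous
    Dirichlet boundary conditions, via central finite differences."""
    n = len(V)
    h = L / (n + 1)
    diag = 2.0 / h**2 + V           # main diagonal of the discrete operator
    off = -np.ones(n - 1) / h**2    # off-diagonals
    E, U = eigh_tridiagonal(diag, off, select="i", select_range=(0, k - 1))
    return E, U

rng = np.random.default_rng(0)
n = 500
# Random piecewise-constant confining potential (a toy disorder model):
V = np.repeat(rng.uniform(0.0, 8000.0, size=50), n // 50)
E, U = lowest_eigenmodes(V)
print("lowest eigenvalues:", np.round(E, 2))
# U[:, 0] is the ground state; its support marks the localization region.
```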

16 pages, 3334 KiB  
Article
Predicting the Critical Number of Layers for Hierarchical Support Vector Regression
by Ryan Mohr, Maria Fonoberova, Zlatko Drmač, Iva Manojlović and Igor Mezić
Entropy 2021, 23(1), 37; https://doi.org/10.3390/e23010037 - 29 Dec 2020
Cited by 4 | Viewed by 2072
Abstract
Hierarchical support vector regression (HSVR) models a function from data as a linear combination of SVR models at a range of scales, starting at a coarse scale and moving to finer scales as the hierarchy continues. In the original formulation of HSVR, there were no rules for choosing the depth of the model. In this paper, we observe in a number of models a phase transition in the training error—the error remains relatively constant as layers are added, until a critical scale is passed, at which point the training error drops close to zero and remains nearly constant for added layers. We introduce a method to predict this critical scale a priori with the prediction based on the support of either a Fourier transform of the data or the Dynamic Mode Decomposition (DMD) spectrum. This allows us to determine the required number of layers prior to training any models. Full article
(This article belongs to the Special Issue Human-Centric AI: The Symbiosis of Human and Artificial Intelligence)
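
A rough sketch of the spectral-support idea: estimate the highest significant frequency from an FFT of the data and add layers until the finest kernel scale resolves it. The amplitude threshold and the scale-halving rule below are illustrative assumptions; the paper's precise mapping may differ:

```python
import numpy as np

def spectral_support(y, dt=1.0, rel_threshold=0.01):
    """Highest frequency whose FFT amplitude exceeds a relative threshold."""
    amps = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=dt)
    significant = freqs[amps > rel_threshold * amps.max()]
    return significant.max() if significant.size else 0.0

# Two-tone signal: the support should sit near the faster component (5 Hz).
t = np.linspace(0, 10, 2000, endpoint=False)
y = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 5.0 * t)
f_max = spectral_support(y, dt=t[1] - t[0])
print(f"estimated spectral support: {f_max:.2f} Hz")

# If each HSVR layer halves the kernel scale from some coarse scale s0,
# layers are added until the scale resolves 1/f_max (an illustrative rule):
s0, depth = 10.0, 0
while s0 / 2**depth > 1.0 / f_max:
    depth += 1
print("suggested number of layers:", depth)
```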

18 pages, 514 KiB  
Article
Deep Neural Networks for Behavioral Credit Rating
by Andro Merćep, Lovre Mrčela, Matija Birov and Zvonko Kostanjčar
Entropy 2021, 23(1), 27; https://doi.org/10.3390/e23010027 - 27 Dec 2020
Cited by 7 | Viewed by 3461
Abstract
Logistic regression is the industry standard in credit risk modeling. Regulatory requirements for model explainability have halted the implementation of more advanced, non-linear machine learning algorithms, even though more accurate predictions would benefit consumers and banks alike. Deep neural networks are among the most prominent non-linear algorithms. In this paper, we propose a deep neural network model for behavioral credit rating. Behavioral models are used to assess the future performance of a bank's existing portfolio in order to meet the capital requirements introduced by the Basel regulatory framework, which are designed to increase banks' ability to absorb large financial shocks. The proposed deep neural network was trained on two different datasets: the first contains information on loans issued between 2009 and 2013 (during the financial crisis) and the second on loans from 2014 to 2018 (after the financial crisis); combined, they include more than 1.5 million examples. The proposed network outperformed multiple benchmarks and was evenly matched with the XGBoost model. Long-term credit rating performance is also presented, along with a detailed analysis of the impact of reprogrammed facilities on model performance. Full article
(This article belongs to the Special Issue Human-Centric AI: The Symbiosis of Human and Artificial Intelligence)
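
As a generic illustration only (the authors' published architecture is not reproduced here), a behavioral rating model of this kind could be sketched as a plain feedforward classifier over tabular loan features; all layer sizes and names below are assumptions:

```python
import torch
import torch.nn as nn

class BehavioralRatingNet(nn.Module):
    """Generic feedforward classifier for tabular behavioral features;
    an illustrative stand-in, not the authors' architecture."""
    def __init__(self, n_features, n_ratings, hidden=(128, 64)):
        super().__init__()
        layers, width = [], n_features
        for h in hidden:
            layers += [nn.Linear(width, h), nn.ReLU(), nn.Dropout(0.2)]
            width = h
        layers.append(nn.Linear(width, n_ratings))  # logits per rating class
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = BehavioralRatingNet(n_features=40, n_ratings=10)
logits = model(torch.randn(8, 40))   # batch of 8 synthetic loan records
print(logits.shape)                  # torch.Size([8, 10])
```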

24 pages, 2466 KiB  
Article
Visual Speech Recognition with Lightweight Psychologically Motivated Gabor Features
by Xuejie Zhang, Yan Xu, Andrew K. Abel, Leslie S. Smith, Roger Watt, Amir Hussain and Chengxiang Gao
Entropy 2020, 22(12), 1367; https://doi.org/10.3390/e22121367 - 03 Dec 2020
Cited by 4 | Viewed by 2474
Abstract
The extraction of relevant lip features is of continuing interest in the visual speech domain. End-to-end feature extraction can produce good results, but at the cost of results that are difficult for humans to comprehend and relate to. We present a new, lightweight feature extraction approach, motivated by human-centric, glimpse-based psychological research into facial barcodes, and demonstrate that these simple, easy-to-extract 3D geometric features (produced using Gabor-based image patches) can successfully be used for speech recognition with LSTM-based machine learning. This approach extracts low-dimensional lip parameters with a minimum of processing. One key difference between these Gabor-based features and others, such as traditional DCT or the currently fashionable CNN features, is that they are human-centric features that can be visualised and analysed by humans, which makes the results easier to explain and visualise. They can also be used for reliable speech recognition, as demonstrated on the Grid corpus. For overlapping speakers, our lightweight system achieved a recognition rate of over 82%, which compares well with less explainable features in the literature. Full article
(This article belongs to the Special Issue Human-Centric AI: The Symbiosis of Human and Artificial Intelligence)
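
A small sketch of what makes Gabor-based features attractive here: each feature is the response of an oriented, visualisable filter, so a handful of numbers per patch stays human-interpretable. The kernel parameters and pooling below are illustrative, not the authors' exact 3D geometric parameters:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lam=10.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: a Gaussian-windowed oriented cosine grating."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
         * np.cos(2 * np.pi * xr / lam + psi)

def gabor_features(patch, n_orientations=4):
    """Mean absolute response per orientation: a few numbers per patch,
    each traceable to a single visualisable oriented filter."""
    thetas = np.arange(n_orientations) * np.pi / n_orientations
    return np.array([np.abs(convolve2d(patch, gabor_kernel(theta=t),
                                       mode="valid")).mean() for t in thetas])

lip_patch = np.random.default_rng(0).random((48, 96))  # stand-in for a lip ROI
print(gabor_features(lip_patch))  # 4 interpretable orientation responses
```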

26 pages, 5037 KiB  
Article
Personalized Image Classification by Semantic Embedding and Active Learning
by Mofei Song
Entropy 2020, 22(11), 1314; https://doi.org/10.3390/e22111314 - 18 Nov 2020
Cited by 2 | Viewed by 1844
Abstract
Currently, deep learning shows state-of-the-art performance in image classification with a pre-defined taxonomy. However, in a more realistic scenario, different users usually have different classification intents for a given image collection. To satisfy such personalized requirements, we propose an interactive image classification system with an offline representation learning stage and an online classification stage. During the offline stage, we learn a deep model to extract features with high flexibility and scalability for different users' preferences. Instead of training the model only with inter-class discrimination, we also encode the similarity between the semantic-embedding vectors of the category labels into the model. This makes the extracted features adaptable to multiple taxonomies with different granularities. During the online stage, an annotation task iteratively alternates with a high-throughput verification task. When performing the verification task, users are only required to indicate incorrect predictions without giving the exact category label. In each iteration, our system chooses the images to be annotated or verified by optimizing interactive efficiency. To provide a high interaction rate, a unified active learning algorithm searches for the optimal annotation and verification sets by minimizing the expected time cost. After interactive annotation and verification, the newly classified images are used to train a customized classifier online, which reflects the user's categorization intent. The learned classifier is then used for subsequent annotation and verification tasks. Experimental results on several public image datasets show that our method outperforms existing methods. Full article
(This article belongs to the Special Issue Human-Centric AI: The Symbiosis of Human and Artificial Intelligence)

14 pages, 561 KiB  
Article
Environmental Adaptation and Differential Replication in Machine Learning
by Irene Unceta, Jordi Nin and Oriol Pujol
Entropy 2020, 22(10), 1122; https://doi.org/10.3390/e22101122 - 03 Oct 2020
Cited by 6 | Viewed by 2440
Abstract
When deployed in the wild, machine learning models are usually confronted with an environment that imposes severe constraints. As this environment evolves, so do these constraints. As a result, the feasible set of solutions for the considered need is prone to change over time. We refer to this problem as environmental adaptation. In this paper, we formalize environmental adaptation and discuss how it differs from other problems in the literature. We propose solutions based on differential replication, a technique whereby the knowledge acquired by the deployed models is reused in specific ways to train more suitable future generations. We discuss different mechanisms for implementing differential replication in practice, depending on the level of knowledge considered. Finally, we present seven examples where the problem of environmental adaptation can be solved through differential replication in real-life applications. Full article
(This article belongs to the Special Issue Human-Centric AI: The Symbiosis of Human and Artificial Intelligence)
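
One mechanism the paper discusses for reusing a deployed model's knowledge is copying: label fresh inputs with the deployed model and train a successor that satisfies the new constraints. A minimal distillation-style sketch under assumed synthetic data and model choices:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Deployed "parent" model, trained under the old environment.
X_old, y_old = make_classification(n_samples=2000, n_features=10, random_state=0)
parent = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_old, y_old)

# Environment shifts: a new constraint demands an interpretable linear model.
# Differential replication via copying: label fresh (possibly synthetic)
# inputs with the parent and fit the constrained child on them.
rng = np.random.default_rng(1)
X_new = rng.standard_normal((5000, 10))
y_parent = parent.predict(X_new)                 # the parent's knowledge, reused
child = LogisticRegression(max_iter=1000).fit(X_new, y_parent)

agreement = (child.predict(X_new) == y_parent).mean()
print(f"child/parent agreement on the new inputs: {agreement:.2%}")
```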

14 pages, 978 KiB  
Article
Exploring the Possibility of a Recovery of Physics Process Properties from a Neural Network Model
by Marko Jerčić and Nikola Poljak
Entropy 2020, 22(9), 994; https://doi.org/10.3390/e22090994 - 07 Sep 2020
Cited by 3 | Viewed by 2080
Abstract
The application of machine learning methods to particle physics often does not provide enough understanding of the underlying physics. An interpretable model that improves our knowledge of the mechanism governing a physical system directly from the data can be very useful. In this paper, we introduce a simple artificial physical generator based on the quantum chromodynamics (QCD) fragmentation process. The data simulated from the generator are then passed to a neural network model that we base only on partial knowledge of the generator. We aimed to see whether interpreting the generated data can recover the probability distributions of the basic processes of such a physical system. In this way, some of the information we purposely omitted from the network model is recovered. We believe this approach can be beneficial in the analysis of real QCD processes. Full article
(This article belongs to the Special Issue Human-Centric AI: The Symbiosis of Human and Artificial Intelligence)

11 pages, 4651 KiB  
Article
From Knowledge Transmission to Knowledge Construction: A Step towards Human-Like Active Learning
by Ilona Kulikovskikh, Tomislav Lipić and Tomislav Šmuc
Entropy 2020, 22(8), 906; https://doi.org/10.3390/e22080906 - 18 Aug 2020
Cited by 4 | Viewed by 3078
Abstract
Machines usually employ a guess-and-check strategy to analyze data: they take the data, make a guess, check the answer, adjust it with regard to the correct one if necessary, and try again on a new data set. An active learning environment guarantees better performance while training on less, but carefully chosen, data, which reduces the costs of both annotating and analyzing large data sets. This issue becomes even more critical for deep learning applications. Human-like active learning integrates a variety of strategies and instructional models, chosen by a teacher to contribute to learners' knowledge, whereas machine active learning strategies lack versatile tools for shifting the focus of instruction away from knowledge transmission and toward learners' knowledge construction. We approach this gap by considering an active learning environment in an educational setting. We propose a new strategy that measures the information capacity of data using the information function of the four-parameter logistic item response theory (4PL IRT) model. We compared the proposed strategy with the most common active learning strategies: Least Confidence and Entropy Sampling. The results of computational experiments showed that the Information Capacity strategy shares similar behavior but provides a more flexible framework for building transparent knowledge models in deep learning. Full article
(This article belongs to the Special Issue Human-Centric AI: The Symbiosis of Human and Artificial Intelligence)
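
The 4PL information function the abstract refers to has a closed form: with P(theta) = c + (d - c) / (1 + exp(-a(theta - b))), the item information is I(theta) = a^2 (P - c)^2 (d - P)^2 / ((d - c)^2 P (1 - P)). The sketch below uses it to rank samples by information capacity; how the per-sample theta scores are obtained from a network is the paper's contribution and is not reproduced here, and all parameter values are illustrative:

```python
import numpy as np

def p_4pl(theta, a, b, c, d):
    """4PL response probability: guessing floor c, slipping ceiling d."""
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

def info_4pl(theta, a, b, c, d):
    """4PL item information:
    I = a^2 (P - c)^2 (d - P)^2 / ((d - c)^2 P (1 - P))."""
    P = p_4pl(theta, a, b, c, d)
    return a**2 * (P - c)**2 * (d - P)**2 / ((d - c)**2 * P * (1.0 - P))

# Active-learning step: query the unlabeled samples with the highest capacity.
rng = np.random.default_rng(0)
theta = rng.normal(size=1000)                 # per-sample "ability" scores
capacity = info_4pl(theta, a=1.7, b=0.0, c=0.1, d=0.95)
query_idx = np.argsort(capacity)[-10:]        # 10 most informative samples
print(theta[query_idx])
```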

13 pages, 1442 KiB  
Article
Optic Disc Segmentation Using Attention-Based U-Net and the Improved Cross-Entropy Convolutional Neural Network
by Baixin Jin, Pingping Liu, Peng Wang, Lida Shi and Jing Zhao
Entropy 2020, 22(8), 844; https://doi.org/10.3390/e22080844 - 30 Jul 2020
Cited by 23 | Viewed by 3349
Abstract
Medical image segmentation is an important part of medical image analysis. With the rapid development of convolutional neural networks for image processing, deep learning methods have achieved great success in the field of medical image processing, including the auxiliary diagnosis of glaucoma, where effective segmentation of the optic disc area plays an important supporting role in clinical diagnosis. Many U-Net-based optic disc segmentation methods have been proposed; however, they ignore the channel dependence of features at different levels, and their segmentation performance in small areas is unsatisfactory. In this paper, we propose a new aggregation channel attention network to make full use of the influence of context information on semantic segmentation. Unlike existing attention mechanisms, we exploit channel dependencies and integrate information at different scales into the attention mechanism. At the same time, we improve the basic cross-entropy classification framework by combining the Dice coefficient with cross-entropy and balancing their contributions to the segmentation task, which enhances the performance of the network in small-area segmentation. The network retains more image features, restores significant features more accurately, and further improves the segmentation performance on medical images. We apply it to the fundus optic disc segmentation task and evaluate the proposed architecture on the Messidor and RIM-ONE datasets. Experimental results show that our architecture improves the prediction performance of the base architectures on different datasets while maintaining computational efficiency, achieving an overlapping error of 0.0469 on Messidor. Full article
(This article belongs to the Special Issue Human-Centric AI: The Symbiosis of Human and Artificial Intelligence)
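
A hedged sketch of the kind of compound objective the abstract describes, balancing cross-entropy against a soft Dice term; the balancing weight and smoothing constant below are assumptions, not the paper's tuned values:

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, alpha=0.5, eps=1e-6):
    """Weighted combination of binary cross-entropy and soft Dice loss.
    alpha balances the two terms (an assumed value, not the paper's)."""
    probs = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, target)
    inter = (probs * target).sum()
    dice = (2 * inter + eps) / (probs.sum() + target.sum() + eps)
    return alpha * ce + (1 - alpha) * (1 - dice)

# Toy check on a random single-channel segmentation map:
logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.8).float()  # small foreground region
print(dice_ce_loss(logits, target).item())
```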
