Robust and Explainable Neural Intelligence

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 July 2022) | Viewed by 12314

Special Issue Editors

Dr. Danil Prokhorov
Toyota Tech Center, Ann Arbor, MI 48105, USA
Interests: intelligent vehicles; neural networks; machine learning; computational intelligence
Prof. Dr. Alexander N. Gorban
Department of Mathematics, University of Leicester, Leicester LE1 7RH, UK
Interests: multiscale analysis; model reduction; kinetic equations; mathematical chemistry; mathematical neuroscience

Special Issue Information

Dear Colleagues,

The success and mass application of modern machine learning methods, and in particular deep learning of neural networks, have led us to a clear understanding of two main problems: the problem of errors (the reliability problem) and the problem of explaining the decisions made by neural intelligence. These problems are closely related: unexplained artificial intelligence (AI) mistakes can be repeated over and over again. If the problems of errors and explainability are left unsolved, the mistakes and inexplicability of AI decisions could lead to a new AI winter.

These problems have been known for a long time, but new challenges require effective modern solutions. This Special Issue focuses on robust and explainable neural intelligence. We invite papers presenting:

  • New ideas and methods for developing robust and explainable neural intelligence;
  • Theoretical work exploring the limits of solvability of these problems;
  • Implementation and testing of new methods in real applications from various fields of science and industry.

Dr. Danil Prokhorov
Prof. Dr. Alexander N. Gorban
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • neural intelligence
  • deep learning
  • neural networks
  • artificial intelligence
  • machine learning

Published Papers (6 papers)


Research

16 pages, 2961 KiB  
Article
An Account of Models of Molecular Circuits for Associative Learning with Reinforcement Effect and Forced Dissociation
by Zonglun Li, Alya Fattah, Peter Timashev and Alexey Zaikin
Sensors 2022, 22(15), 5907; https://0-doi-org.brum.beds.ac.uk/10.3390/s22155907 - 07 Aug 2022
Cited by 2 | Viewed by 1758
Abstract
The development of synthetic biology has enabled massive progress in biotechnology and in approaching research questions from a brand-new perspective. In particular, the design and study of gene regulatory networks in vitro, in vivo, and in silico have played an increasingly indispensable role in understanding and controlling biological phenomena. Among these questions, it is of great interest to understand how associative learning is formed at the molecular circuit level. Mathematical models are increasingly used to predict the behaviours of molecular circuits. Fernando's model, which is one of the first works in this line of research using the Hill equation, attempted to design a synthetic circuit that mimics Hebbian learning in a neural network architecture. In this article, we carry out an in-depth computational analysis of the model and demonstrate that the reinforcement effect can be achieved by choosing the proper parameter values. We also construct a novel circuit that can demonstrate forced dissociation, which was not observed in Fernando's model. Our work can be readily used as a reference by synthetic biologists who consider implementing circuits of this kind in biological systems. Full article
(This article belongs to the Special Issue Robust and Explainable Neural Intelligence)
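
As an illustration of the Hill-equation formalism mentioned in this abstract, the following Python sketch simulates a hypothetical two-input circuit in which an association "weight" species is produced only when both inputs are active (a Hebbian-style coincidence rule) and otherwise decays. The parameters, species, and update rule are illustrative assumptions, not the circuits analysed in the paper.

def hill(x, K, n):
    """Hill activation: fractional promoter activation at input level x."""
    return x**n / (K**n + x**n)

# Illustrative parameters; these are assumptions, not values from Fernando's model
K, n = 0.5, 2.0            # Hill constant and Hill coefficient
alpha, delta = 0.8, 0.05   # production and degradation rates of the "weight" species
dt, steps = 0.01, 5000

w = 0.0                    # association strength (concentration of a regulator species)
for t in range(steps):
    u1 = 1.0 if t < steps // 2 else 0.0   # conditioned stimulus: on, then withdrawn
    u2 = 1.0                              # unconditioned stimulus: always on
    # Hebbian-style reinforcement: production requires coincident activation of both inputs
    dw = alpha * hill(u1, K, n) * hill(u2, K, n) - delta * w
    w += dt * dw

print(f"association strength after the run: {w:.3f}")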

14 pages, 5587 KiB  
Article
Entropy-Aware Model Initialization for Effective Exploration in Deep Reinforcement Learning
by Sooyoung Jang and Hyung-Il Kim
Sensors 2022, 22(15), 5845; https://0-doi-org.brum.beds.ac.uk/10.3390/s22155845 - 04 Aug 2022
Cited by 1 | Viewed by 1516
Abstract
Effective exploration is one of the critical factors affecting performance in deep reinforcement learning. Agents acquire data to learn the optimal policy through exploration, and if effective exploration is not guaranteed, the data quality deteriorates, which leads to performance degradation. This study investigates the effect of initial entropy, which significantly influences exploration, especially in the early learning stage. The results of this study on tasks with discrete action space show that (1) low initial entropy increases the probability of learning failure, (2) the distributions of initial entropy for various tasks are biased towards low values that inhibit exploration, and (3) the initial entropy for discrete action space varies with both the initial weight and task, making it hard to control. We then devise a simple yet powerful learning strategy to deal with these limitations, namely, entropy-aware model initialization. The proposed algorithm aims to provide a model with high initial entropy to a deep reinforcement learning algorithm for effective exploration. Our experiments showed that the devised learning strategy significantly reduces learning failures and enhances performance, stability, and learning speed. Full article
(This article belongs to the Special Issue Robust and Explainable Neural Intelligence)
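
A minimal PyTorch sketch of the entropy-aware initialization idea described in the abstract above: candidate policy networks are re-initialized until the mean entropy of their action distribution over a batch of observations exceeds a threshold, and the selected model is then handed to the reinforcement learning algorithm. The network architecture, the entropy threshold, and the random stand-in observations are assumptions for illustration.

import torch
import torch.nn as nn

def policy_entropy(net, obs):
    """Mean entropy of the categorical action distribution produced by net."""
    with torch.no_grad():
        probs = torch.softmax(net(obs), dim=-1)
        ent = -(probs * torch.log(probs + 1e-8)).sum(dim=-1)
    return ent.mean().item()

def entropy_aware_init(obs_dim=8, n_actions=4, threshold=1.3, max_tries=50):
    """Re-sample initial weights until the initial policy entropy is high enough.
    The threshold is illustrative; ln(4) ~ 1.386 is the maximum for 4 actions."""
    obs = torch.randn(256, obs_dim)           # stand-in for sampled observations
    net = None
    for _ in range(max_tries):
        net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                            nn.Linear(64, n_actions))
        if policy_entropy(net, obs) >= threshold:
            return net                        # pass this model to the DRL algorithm
    return net                                # fall back to the last candidate

model = entropy_aware_init()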

11 pages, 1217 KiB  
Article
Age-Related Changes in Functional Connectivity during the Sensorimotor Integration Detected by Artificial Neural Network
by Elena N. Pitsik, Nikita S. Frolov, Natalia Shusharina and Alexander E. Hramov
Sensors 2022, 22(7), 2537; https://0-doi-org.brum.beds.ac.uk/10.3390/s22072537 - 25 Mar 2022
Cited by 7 | Viewed by 2394
Abstract
Large-scale functional connectivity is an important indicator of the brain’s normal functioning. Abnormalities in the connectivity pattern can be used as a diagnostic tool to detect various neurological disorders. The present paper describes a functional connectivity assessment based on artificial intelligence to reveal age-related changes in neural response in a simple motor execution task. Twenty subjects from two age groups performed repetitive motor tasks on command, while the whole-scalp EEG was recorded. We applied a model based on the feed-forward multilayer perceptron to detect functional relationships between five groups of sensors located over the frontal, parietal, left, right, and middle motor cortex. Functional dependence was evaluated using the coefficient of determination between the predicted and original time series. Then, we applied statistical analysis to highlight the significant features of the functional connectivity network assessed by our model. Our findings revealed that the connectivity pattern is consistent with modern ideas of the healthy aging effect on neural activation. Elderly adults demonstrate a pronounced activation of the whole-brain theta-band network and decreased activation of frontal–parietal and motor areas of the mu-band. Between-subject analysis revealed a strengthening of inter-areal task-relevant links in elderly adults. These findings can be interpreted as reflecting an increased cognitive demand on elderly adults when performing simple motor tasks with the dominant hand, induced by age-related working memory decline. Full article
(This article belongs to the Special Issue Robust and Explainable Neural Intelligence)
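
Roughly, the connectivity measure described in the abstract above can be sketched by training a feed-forward multilayer perceptron to predict the signals of one sensor group from another and scoring the link with the coefficient of determination (R^2) between the predicted and original time series. The synthetic signals and layer sizes below are assumptions, not the authors' data or architecture.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
T = 2000
source = rng.standard_normal((T, 5))                        # stand-in for one sensor group
target = source @ rng.standard_normal((5, 5)) * 0.5 \
         + 0.3 * rng.standard_normal((T, 5))                # stand-in for a coupled group

# feed-forward multilayer perceptron mapping one sensor group onto another
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(source[:1500], target[:1500])

# functional dependence scored by R^2 between predicted and original series
pred = mlp.predict(source[1500:])
print("connectivity score (R^2):", r2_score(target[1500:], pred))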

16 pages, 9615 KiB  
Article
CNN-Based Spectral Super-Resolution of Panchromatic Night-Time Light Imagery: City-Size-Associated Neighborhood Effects
by Nataliya Rybnikova, Evgeny M. Mirkes and Alexander N. Gorban
Sensors 2021, 21(22), 7662; https://0-doi-org.brum.beds.ac.uk/10.3390/s21227662 - 18 Nov 2021
Cited by 2 | Viewed by 1560
Abstract
Data on artificial night-time light (NTL), emitted from the ground and captured by satellites, are available at a global scale in panchromatic format. At the same time, data on the spectral properties of NTL provide more information for further analysis. Such data, however, are available only locally or on a commercial basis. In our recent work, we examined several machine learning techniques, such as linear regression, kernel regression, random forest, and elastic map models, to convert panchromatic NTL images into colored ones. We compared red, green, and blue light levels for eight geographical areas all over the world with panchromatic light intensities and characteristics of built-up extent from spatially corresponding pixels and their nearest neighbors. However, information from more distant neighboring pixels might improve the predictive power of the models. In the present study, we explore this neighborhood effect using convolutional neural networks (CNN). The main outcome of our analysis is that the neighborhood effect is in line with the geographical extent of the metropolitan areas under analysis: for smaller areas, the optimal input image size is smaller than for larger ones. Moreover, for relatively large cities, the optimal input image size tends to differ for different colors, being on average larger for red and smaller for blue lights. Compared to other machine learning techniques, CNN models proved comparable in terms of Pearson’s correlation but performed better in terms of WMSE, especially on the testing datasets. Full article
(This article belongs to the Special Issue Robust and Explainable Neural Intelligence)
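
A hypothetical PyTorch sketch of how the neighborhood effect could be probed: a small CNN maps a k x k two-channel patch (panchromatic NTL plus built-up extent) onto the RGB values of its central pixel, and the validation loss is compared across patch sizes. The architecture, channel layout, and random stand-in data are assumptions, not the authors' setup.

import torch
import torch.nn as nn

def rgb_from_patch_model():
    """Small CNN: a panchromatic (+ built-up) patch -> RGB of the central pixel."""
    return nn.Sequential(
        nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 3),
    )

# compare input image (patch) sizes to expose the neighborhood effect
for k in (3, 5, 9, 15):
    x = torch.rand(512, 2, k, k)          # stand-in patches: NTL + built-up channel
    y = torch.rand(512, 3)                # stand-in RGB targets
    model = rgb_from_patch_model()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(50):                   # a few quick training steps
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    print("patch size", k, "final loss", float(loss))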

17 pages, 1660 KiB  
Article
Towards ML-Based Diagnostics of Laser–Plasma Interactions
by Yury Rodimkov, Shikha Bhadoria, Valentin Volokitin, Evgeny Efimenko, Alexey Polovinkin, Thomas Blackburn, Mattias Marklund, Arkady Gonoskov and Iosif Meyerov
Sensors 2021, 21(21), 6982; https://0-doi-org.brum.beds.ac.uk/10.3390/s21216982 - 21 Oct 2021
Cited by 3 | Viewed by 2030
Abstract
The power of machine learning (ML) in feature identification can be harnessed for determining quantities in experiments that are difficult to measure directly. However, if an ML model is trained on simulated data, rather than experimental results, the differences between the two can pose an obstacle to reliable data extraction. Here we report on the development of ML-based diagnostics for experiments on high-intensity laser–matter interactions. With the intention of accentuating robust, physics-governed features, whose presence is tolerant of such differences, we test the application of principal component analysis, data augmentation, and training with data that have superimposed noise of gradually increasing amplitude. Using synthetic data from simulated experiments, we find that the approach based on noise of increasing amplitude yields the most accurate ML models and is thus likely to be useful in similar projects on ML-based diagnostics. Full article
(This article belongs to the Special Issue Robust and Explainable Neural Intelligence)
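
The training strategy with noise of gradually increasing amplitude mentioned in the abstract above can be sketched as follows: at each epoch, Gaussian noise whose standard deviation ramps from zero to a maximum is superimposed on the simulated inputs before the gradient step. The toy data, network, and noise schedule in this PyTorch sketch are illustrative assumptions.

import torch
import torch.nn as nn

# stand-in synthetic diagnostics data: simulated signals -> quantity of interest
x = torch.rand(1024, 64)
y = x.sum(dim=1, keepdim=True)                 # toy target derived from the inputs

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

epochs, max_noise = 100, 0.5
for epoch in range(epochs):
    sigma = max_noise * epoch / (epochs - 1)   # noise amplitude ramps up over training
    noisy = x + sigma * torch.randn_like(x)    # noise superimposed on simulated data
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), y)
    loss.backward()
    opt.step()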

16 pages, 2027 KiB  
Article
Model Simplification of Deep Random Forest for Real-Time Applications of Various Sensor Data
by Sangwon Kim, Byoung-Chul Ko and Jaeyeal Nam
Sensors 2021, 21(9), 3004; https://0-doi-org.brum.beds.ac.uk/10.3390/s21093004 - 25 Apr 2021
Cited by 2 | Viewed by 1947
Abstract
The deep random forest (DRF) has recently gained new attention in deep learning because it achieves performance similar to that of a deep neural network (DNN) and does not rely on backpropagation. However, it connects a large number of decision trees across multiple layers, thereby making analysis difficult. This paper proposes a new method for simplifying a black-box DRF model through rule elimination. To this end, we quantify the feature contributions and rule frequencies of the fully trained DRF in the form of a decision rule set. The feature contributions provide a basis for determining how features affect the decision process in a rule set. Model simplification is achieved by eliminating unnecessary rules according to the measured feature contributions. Consequently, the simplified and transparent DRF has fewer parameters and rules than before. The proposed method was successfully applied to various DRF models and benchmark sensor datasets and maintained robust performance despite the elimination of a large number of rules. A comparison with state-of-the-art compressed DNNs also showed that the proposed model simplification achieves higher parameter compression and memory efficiency with similar classification accuracy. Full article
(This article belongs to the Special Issue Robust and Explainable Neural Intelligence)
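
As a loose illustration of rule elimination guided by feature contributions, the sketch below extracts the leaf rules of one tree in an ordinary (not deep) random forest, scores each rule by how often it fires times the importance of the features on its root-to-leaf path, and discards the lower-scoring half. The scoring proxy and the 50% threshold are assumptions and do not reproduce the paper's contribution measure.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

def rule_scores(tree, feature_importance):
    """Score every leaf (decision rule) by how often it fires times the
    importance of the features tested on its root-to-leaf path."""
    t = tree.tree_
    scores = {}
    def walk(node, used):
        if t.children_left[node] == -1:                 # leaf = one decision rule
            freq = t.n_node_samples[node] / t.n_node_samples[0]
            scores[node] = freq * sum(feature_importance[f] for f in used)
        else:
            path = used + [t.feature[node]]
            walk(t.children_left[node], path)
            walk(t.children_right[node], path)
    walk(0, [])
    return scores

scores = rule_scores(forest.estimators_[0], forest.feature_importances_)
threshold = np.percentile(list(scores.values()), 50)
kept = [leaf for leaf, s in scores.items() if s >= threshold]
print(f"kept {len(kept)} of {len(scores)} rules in the first tree")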
