Recent Advances in Computational Intelligence and Its Applications

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 22368

Special Issue Editors


Prof. Dr. Yue-Jiao Gong
Guest Editor
School of Computer Science and Technology, South China University of Technology, Guangzhou 510006, China
Interests: evolutionary computation; deep reinforcement learning

Prof. Dr. Ting Huang
Guest Editor
Guangzhou Institute of Technology, Xidian University, Guangzhou 510555, China
Interests: evolutionary computation; graph neural network

Special Issue Information

Dear Colleagues,

Computational Intelligence (CI) is an evolving field that develops and applies nature-inspired methodologies and algorithms to address complex practical problems. It encompasses different computational paradigms, such as evolutionary computation, swarm intelligence, fuzzy systems, artificial neural networks, and deep learning systems. These methodologies and algorithms are widely used to handle complex engineering problems for which mathematical or traditional modeling methods are ineffective.

The purpose of this Special Issue is to gather a collection of the latest studies in CI and related fields, from either theoretical or practical perspectives. We welcome new or improved methods for solving problems of optimization, learning, and decision support. Particular attention is also paid to applications of CI in practical fields such as intelligent transportation, social networks, and the Internet of Things. We invite authors to submit research articles and/or review articles that fit this purpose.

Prof. Dr. Yue-Jiao Gong
Prof. Dr. Ting Huang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • evolutionary computation
  • swarm intelligence
  • fuzzy systems
  • artificial neural networks
  • deep learning
  • reinforcement learning
  • heuristics and metaheuristics
  • data-driven optimization
  • intelligent transportation systems
  • vehicle routing
  • vehicle dispatching
  • traffic time estimation
  • traffic volume prediction
  • traffic signal control
  • itinerary planning
  • social network analysis
  • community detection
  • network influence maximization
  • task assignment in IoT
  • resource allocation in IoT

Published Papers (11 papers)


Research

18 pages, 3829 KiB  
Article
Memetic Algorithm with Isomorphic Transcoding for UAV Deployment Optimization in Energy-Efficient AIoT Data Collection
by Xin Zhang and Yiyan Cao
Mathematics 2022, 10(24), 4668; https://doi.org/10.3390/math10244668 - 09 Dec 2022
Cited by 3 | Viewed by 1361
Abstract
Unmanned aerial vehicles (UAVs) are among the devices used to collect big data in the artificial intelligence of things (AIoT). To reduce total energy consumption, most researchers focus on optimizing the number and locations of UAVs but ignore how UAVs are distributed in relation to the AIoT devices. This paper therefore proposes a memetic algorithm based on isomorphic transcoding space (MA-IT) to optimize UAV deployment, and in particular the distribution of UAVs, for energy-efficient AIoT data collection. First, a simplified encoding method is designed to reduce the search space: a solution is represented only by the distribution, from which the number and locations of UAVs can be greedily deduced. Afterwards, a pseudo-random initialization is proposed to initialize the population both randomly and greedily. Then, an isomorphic transcoding (isoTcode) method is proposed to identify solutions with isomorphic relations and to represent them in a way that is practical for the UAV deployment problem. Finally, a crossover operator and a local search based on the isoTcode method are proposed to increase solution diversity and improve solution quality. Comparative experiments are conducted on randomly generated instances at three problem scales. The results show that MA-IT outperforms the compared algorithms in solving the UAV deployment optimization problem.
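As a rough illustration of the distribution-only encoding, the sketch below shows how a UAV count and locations might be greedily deduced from a device-to-group assignment. It is a hypothetical reconstruction, not the authors' code: the centroid placement rule and all names are assumptions.

```python
import numpy as np

def decode_distribution(assignment, device_coords):
    """Greedily deduce UAV count and locations from a distribution-only encoding.

    assignment: array of group labels, one per AIoT device (the encoded solution).
    device_coords: (n_devices, 2) array of device positions.
    Returns the number of UAVs and one location per nonempty group.
    """
    locations = []
    for group in np.unique(assignment):
        members = device_coords[assignment == group]
        # Hypothetical greedy rule: place the UAV at the centroid of its devices.
        locations.append(members.mean(axis=0))
    return len(locations), np.vstack(locations)

# Usage: 6 devices assigned to 2 groups.
coords = np.random.rand(6, 2) * 100
n_uavs, uav_locs = decode_distribution(np.array([0, 0, 1, 1, 1, 0]), coords)
```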

14 pages, 2842 KiB  
Article
Research on Path Planning Algorithm for Driverless Vehicles
by Hao Ma, Wenhui Pei and Qi Zhang
Mathematics 2022, 10(15), 2555; https://doi.org/10.3390/math10152555 - 22 Jul 2022
Cited by 7 | Viewed by 2007
Abstract
In a complex environment, although the artificial potential field (APF) method with an improved repulsion function overcomes the local-minimum defect, the planned path exhibits an oscillation phenomenon that is unsuitable for vehicle motion. To improve the efficiency of path planning and eliminate the oscillation present in paths planned by the improved APF method, this paper proposes combining the improved APF method with the rapidly exploring random tree (RRT) algorithm. First, the improved APF method is combined with the RRT algorithm, and the obstacle avoidance mechanism of the RRT algorithm is used to resolve the path oscillation. A vehicle kinematics model is then established and, under the premise of ensuring vehicle safety, a constrained model predictive control (MPC) trajectory-tracking controller is designed to verify whether the path planned by the combined algorithms is optimal and conforms to the vehicle motion. Finally, the feasibility of the method is verified in simulation. The simulation results show that the method can effectively solve the path oscillation problem and can plan an optimal path for different environments and vehicle motions.
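For orientation, the following sketch implements the classic APF force computation (an attractive pull toward the goal plus repulsion inside an obstacle's influence radius); the paper's improved repulsion function and its coupling with RRT are not reproduced, and all gains are illustrative.

```python
import numpy as np

def apf_force(q, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=5.0):
    """Resultant APF force at position q (a hedged textbook sketch,
    not the paper's improved repulsion function)."""
    f = k_att * (goal - q)                      # attractive force toward the goal
    for obs in obstacles:
        diff = q - obs
        rho = np.linalg.norm(diff)
        if 0 < rho < rho0:                      # obstacle within influence range
            f += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (diff / rho)
    return f

# One gradient-descent step along the potential field.
q = np.array([0.0, 0.0])
step = apf_force(q, goal=np.array([10.0, 10.0]), obstacles=[np.array([5.0, 5.0])])
q = q + 0.05 * step / np.linalg.norm(step)
```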

33 pages, 7658 KiB  
Article
Hybridization of Manta-Ray Foraging Optimization Algorithm with Pseudo Parameter-Based Genetic Algorithm for Dealing Optimization Problems and Unit Commitment Problem
by Mohammed A. El-Shorbagy, Hala A. Omar and Tamer Fetouh
Mathematics 2022, 10(13), 2179; https://doi.org/10.3390/math10132179 - 22 Jun 2022
Cited by 6 | Viewed by 1856
Abstract
The manta ray foraging optimization algorithm (MRFO) is one of the promising meta-heuristic optimization algorithms. However, it can become stuck in a local minimum, consuming iterations without reaching the optimum solution. This paper therefore proposes a hybridization of MRFO and the genetic algorithm (GA) based on a pseudo parameter, in which the GA helps MRFO escape from local minima. It is called the pseudo genetic algorithm with manta-ray foraging optimization (PGA-MRFO). The proposed algorithm is not a classical hybridization of MRFO and GA: classical hybridization consumes time in the search process because each algorithm is applied to all system variables, and it results in an extended search algorithm, especially in systems with many variables. Instead, PGA-MRFO hybridizes the pseudo-parameter-based GA and the MRFO algorithm to produce a more efficient algorithm that combines the advantages of both without getting stuck in a local minimum or spending a long time on calculations. The pseudo parameter enables the GA to be applied to a specific number of variables rather than all system variables, reducing the computation time and burden. The proposed algorithm also uses an approximation of the gradient of the objective function, which dispenses with derivative calculations. In addition, PGA-MRFO depends on the pseudo-inverse of non-square matrices, which saves computation time and gives the algorithm the flexibility to deal with both square and non-square systems. The proposed algorithm is tested on test functions for which the original MRFO failed to find the optimum solution, to prove its capability and efficiency. It is also applied to the unit commitment (UC) problem, one of the vital power system problems, to show its validity in practical applications. Finally, several analyses are applied to the proposed algorithm to illustrate its effectiveness and reliability.
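Two of the numerical building blocks the abstract mentions, a derivative-free gradient approximation and the Moore-Penrose pseudo-inverse for non-square systems, can be sketched as follows; this is illustrative NumPy, not the PGA-MRFO implementation.

```python
import numpy as np

def approx_gradient(f, x, eps=1e-6):
    """Forward-difference approximation of the objective gradient,
    dispensing with analytic derivative calculations."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        g[i] = (f(xp) - fx) / eps
    return g

# The pseudo-inverse handles square and non-square systems A x = b alike.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])     # 2x3, non-square
b = np.array([1.0, 2.0])
x_ls = np.linalg.pinv(A) @ b        # minimum-norm least-squares solution
```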

15 pages, 8839 KiB  
Article
EADN: An Efficient Deep Learning Model for Anomaly Detection in Videos
by Sareer Ul Amin, Mohib Ullah, Muhammad Sajjad, Faouzi Alaya Cheikh, Mohammad Hijji, Abdulrahman Hijji and Khan Muhammad
Mathematics 2022, 10(9), 1555; https://doi.org/10.3390/math10091555 - 05 May 2022
Cited by 22 | Viewed by 3147
Abstract
Surveillance systems regularly create massive video data in the modern technological era, making their analysis challenging for security specialists. Finding anomalous activities manually in these enormous video recordings is a tedious task, as they occur infrequently in the real world. We propose a low-complexity deep learning model named EADN for anomaly detection that can operate within a surveillance system. At the model’s input, the video is segmented into salient shots using a shot boundary detection algorithm. Next, the selected sequence of frames is given to a Convolutional Neural Network (CNN) that consists of time-distributed 2D layers for extracting salient spatiotemporal features. The extracted features are enriched with valuable information that is very helpful in capturing abnormal events. Lastly, Long Short-Term Memory (LSTM) cells are employed to learn spatiotemporal features from the sequence of frames per sample of each abnormal event for anomaly detection. Comprehensive experiments are performed on benchmark datasets, and the quantitative results are compared with state-of-the-art methods; a substantial improvement is achieved, showing our model’s effectiveness.
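The general pattern the abstract describes, time-distributed 2D convolutions feeding an LSTM, can be sketched in Keras as below; the layer sizes, frame count, and frame resolution are placeholders rather than the EADN architecture.

```python
from tensorflow.keras import layers, models

# Input: a clip of 16 frames, each 64x64 grayscale (shapes are illustrative).
clip = layers.Input(shape=(16, 64, 64, 1))

# Time-distributed 2D convolutions extract per-frame spatial features.
x = layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu"))(clip)
x = layers.TimeDistributed(layers.MaxPooling2D())(x)
x = layers.TimeDistributed(layers.Flatten())(x)

# LSTM cells learn temporal structure across the frame sequence.
x = layers.LSTM(64)(x)
score = layers.Dense(1, activation="sigmoid")(x)   # anomaly probability

model = models.Model(clip, score)
model.compile(optimizer="adam", loss="binary_crossentropy")
```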

18 pages, 4185 KiB  
Article
Enhancement: SiamFC Tracker Algorithm Performance Based on Convolutional Hyperparameters Optimization and Low Pass Filter
by Rogeany Kanza, Yu Zhao, Zhilin Huang, Chenyu Huang and Zhuoming Li
Mathematics 2022, 10(9), 1527; https://doi.org/10.3390/math10091527 - 03 May 2022
Cited by 3 | Viewed by 1549
Abstract
Over the past few decades, convolutional neural networks (CNNs) have achieved outstanding results in addressing a broad scope of computer vision problems. Despite these improvements, fully convolutional Siamese neural networks (FCSNN) still hardly adapt to complex scenes, such as appearance change, scale change, and interference from similar objects. The present study focuses on an enhanced FCSNN based on convolutional block hyperparameter optimization, a new activation function (ModReLU), and a Gaussian low-pass filter. The optimization of hyperparameters is an important task, as it has a crucial influence on tracking performance, especially when it comes to the initialization of weights and biases, which must work efficiently with the subsequent activation layer; inadequate initialization can result in vanishing or exploding gradients. In the first method, we propose an optimization strategy for initializing the weights and biases in the convolutional block to improve feature learning so that each neuron learns as much as possible, with the activation function normalizing the output. We implement the convolutional block hyperparameter optimization by setting the convolutional weight initialization to constant, the bias initialization to zero, and the Leaky ReLU activation function at the output. In the second method, we propose a new activation function, ModReLU, in the activation layer of the CNN. Additionally, we introduce a Gaussian low-pass filter to minimize image noise and improve the structure of images at distinct scales, and we add a pixel-domain-based color adjustment to enhance the capacity of the proposed strategies. The proposed implementations handle rotation, motion, occlusion, and appearance-change problems better and improve tracking speed. Our experimental results clearly show a significant improvement in overall performance compared to the original SiamFC tracker. The first proposed technique surpasses the original fully convolutional Siamese network (SiamFC) on the VOT 2016 dataset with increases of 15.42% in precision, 16.79% in AUPC, and 15.93% in IOU. The second proposed technique also shows remarkable advances over the original SiamFC, with an 18.07% precision increment, a 17.01% AUPC improvement, and a 15.87% increase in IOU. We evaluate both methods on the Visual Object Tracking (VOT) Challenge 2016 dataset, where they outperform the original SiamFC tracker and many other top performers.
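As a minimal illustration of the Gaussian low-pass filtering step, the snippet below smooths a frame with SciPy; the sigma value is a placeholder rather than the paper's setting, and the ModReLU activation is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A Gaussian low-pass filter attenuates high-frequency noise while
# preserving coarse image structure; sigma controls the cutoff scale.
frame = np.random.rand(255, 255).astype(np.float32)   # placeholder frame
smoothed = gaussian_filter(frame, sigma=1.5)
```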

29 pages, 1834 KiB  
Article
Elite Directed Particle Swarm Optimization with Historical Information for High-Dimensional Problems
by Qiang Yang, Yuanpeng Zhu, Xudong Gao, Dongdong Xu and Zhenyu Lu
Mathematics 2022, 10(9), 1384; https://doi.org/10.3390/math10091384 - 20 Apr 2022
Cited by 16 | Viewed by 1697
Abstract
High-dimensional optimization problems are now ubiquitous in every field and seriously challenge the optimization ability of existing optimizers. To solve problems of this kind effectively, this paper proposes an elite-directed particle swarm optimization (EDPSO) with historical information to explore and exploit the high-dimensional solution space efficiently. Specifically, in EDPSO, the swarm is first separated into two exclusive sets based on the Pareto principle (the 80-20 rule): the elite set containing the top 20% of particles and the non-elite set consisting of the remaining 80%. The non-elite set is then further separated into two equally sized layers ordered from best to worst, so that the swarm is divided into three layers in total. Subsequently, particles in the third layer learn from those in the first two layers, and particles in the second layer learn from those in the first layer, while particles in the first layer remain unchanged. In this way, the learning effectiveness and learning diversity of particles can be largely promoted. To further enhance learning diversity, we maintain an additional archive that stores obsolete elites and use the predominant elites in the archive, along with particles in the first two layers, to direct the update of particles in the third layer. With these two mechanisms, the proposed EDPSO is expected to balance search intensification and diversification well at both the swarm level and the particle level when exploring and exploiting the solution space. Extensive experiments are conducted on the widely used CEC’2010 and CEC’2013 high-dimensional benchmark problem sets to validate the effectiveness of the proposed EDPSO. Compared with several state-of-the-art large-scale algorithms, EDPSO achieves highly competitive or even much better performance in tackling high-dimensional problems.
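The three-layer partition described above can be sketched as follows, assuming a minimization problem; the learning equations and the archive of obsolete elites are not reproduced.

```python
import numpy as np

def split_layers(fitness):
    """Split a swarm into the three layers the abstract describes:
    top 20% elites, then the non-elites halved into layers 2 and 3."""
    order = np.argsort(fitness)             # ascending: best first (minimization)
    n = len(fitness)
    n_elite = max(1, int(0.2 * n))          # Pareto 80-20 split
    elite = order[:n_elite]
    rest = order[n_elite:]
    half = len(rest) // 2
    return elite, rest[:half], rest[half:]  # layer 1, layer 2, layer 3

layer1, layer2, layer3 = split_layers(np.random.rand(100))
```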

32 pages, 1403 KiB  
Article
Differential Elite Learning Particle Swarm Optimization for Global Numerical Optimization
by Qiang Yang, Xu Guo, Xu-Dong Gao, Dong-Dong Xu and Zhen-Yu Lu
Mathematics 2022, 10(8), 1261; https://doi.org/10.3390/math10081261 - 11 Apr 2022
Cited by 16 | Viewed by 1456
Abstract
Although particle swarm optimization (PSO) has been successfully applied to solve optimization problems, its performance still encounters challenges when dealing with complicated optimization problems, especially those with many interacting variables and many wide, flat local basins. To alleviate this issue, this paper proposes a differential elite learning particle swarm optimization (DELPSO) that differentiates the two guiding exemplars of each particle as much as possible. Specifically, particles in the current swarm are divided into two groups, the elite group and the non-elite group, based on their fitness. Particles in the non-elite group are then updated by learning from those in the elite group, while particles in the elite group are not updated and directly enter the next generation. To balance fast convergence and high diversity at the particle level, we let each particle in the non-elite group learn from two different elites in the elite group. In this way, the learning effectiveness and learning diversity of particles are expected to improve to a large extent. To alleviate the sensitivity of DELPSO to the newly introduced parameters, dynamic adjustment strategies for these parameters are further designed. With these two main components, the proposed DELPSO is expected to balance search intensification and diversification well, exploring and exploiting the solution space properly to obtain promising performance. Extensive experiments conducted on the widely used CEC 2017 benchmark set with three different dimension sizes demonstrate that DELPSO achieves highly competitive or even much better performance than state-of-the-art PSO variants.
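A hedged sketch of the "two differential elites" update is given below; the inertia and acceleration coefficients, and the exact update rule, are placeholders rather than the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def update_non_elite(x, v, elites, w=0.7, c1=1.5, c2=1.5):
    """Update one non-elite particle by learning from two distinct elites
    drawn from the elite group (illustrative coefficients)."""
    e1, e2 = elites[rng.choice(len(elites), size=2, replace=False)]
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (e1 - x) + c2 * r2 * (e2 - x)
    return x + v, v

elites = rng.random((20, 10))               # elite positions (20 particles, 10-D)
x, v = rng.random(10), np.zeros(10)
x, v = update_non_elite(x, v, elites)
```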

26 pages, 3624 KiB  
Article
Empirical Study of Data-Driven Evolutionary Algorithms in Noisy Environments
by Dalue Lin, Haogan Huang, Xiaoyan Li and Yuejiao Gong
Mathematics 2022, 10(6), 943; https://doi.org/10.3390/math10060943 - 15 Mar 2022
Viewed by 1480
Abstract
For computationally intensive problems, data-driven evolutionary algorithms (DDEAs) are advantageous under low computational budgets because they build surrogate models from historical data to approximate expensive evaluations. Real-world optimization problems are highly susceptible to noisy data, yet most existing DDEAs are developed and tested in ideal, clean environments, so their performance in practice is uncertain. To discover how DDEAs are affected by noisy data, this paper empirically studies the performance of DDEAs in different noisy environments. To fulfill this research purpose, we implemented four representative DDEAs and tested them systematically on common benchmark problems with simulated noise. Specifically, the simulation of noisy environments considered different levels of noise intensity and noise probability. The experimental analysis revealed the relationships among noisy environments, benchmark problems, and DDEA performance. The analysis showed that noise generally degrades DDEA performance, but the effects vary with the type of problem landscape and the design of the DDEA.
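One plausible way to simulate the two noise factors the study varies, intensity and probability, is sketched below; the paper's exact noise model is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_evaluate(f, x, intensity=0.1, probability=0.5):
    """Corrupt a fitness evaluation with additive Gaussian noise.

    With the given probability, noise of the given intensity (relative
    standard deviation) is added; otherwise the clean value is returned."""
    y = f(x)
    if rng.random() < probability:
        y += rng.normal(0.0, intensity * max(abs(y), 1e-12))
    return y

sphere = lambda x: float(np.sum(x**2))
val = noisy_evaluate(sphere, rng.random(10))
```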

15 pages, 766 KiB  
Article
Hypergraph-Supervised Deep Subspace Clustering
by Yu Hu and Hongmin Cai
Mathematics 2021, 9(24), 3259; https://doi.org/10.3390/math9243259 - 15 Dec 2021
Cited by 1 | Viewed by 1816
Abstract
Auto-encoder (AE)-based deep subspace clustering (DSC) methods aim to partition high-dimensional data into underlying clusters, where each cluster corresponds to a subspace. As a standard module in current AE-based DSC, the self-reconstruction cost plays an essential role in regularizing feature learning. However, self-reconstruction adversely affects the discriminative feature learning of the AE, thereby hampering downstream subspace clustering. To address this issue, we propose a hypergraph-supervised reconstruction to replace self-reconstruction. Specifically, instead of forcing the decoder of the AE to merely reconstruct the samples themselves, hypergraph-supervised reconstruction encourages reconstructing samples according to their high-order neighborhood relations. Through back-propagation training, the hypergraph-supervised reconstruction cost enables the deep AE to capture high-order structural information among samples, facilitating discriminative feature learning and thus alleviating the adverse effect of the self-reconstruction cost. Compared to current DSC methods relying on self-reconstruction, our method achieves consistent performance improvements on benchmark high-dimensional datasets.
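The core idea, replacing the self-reconstruction target with a neighborhood-mixed target, can be sketched in PyTorch as follows; the affinity matrix here is a random placeholder, and the paper's hypergraph construction is not reproduced.

```python
import torch

def hypergraph_reconstruction_loss(x, x_hat, H):
    """Supervise the decoder with neighborhood-mixed targets instead of
    the samples themselves (a hedged sketch).

    x:     (n, d) input samples
    x_hat: (n, d) decoder outputs
    H:     (n, n) row-normalized affinity derived from a hypergraph,
           so H @ x mixes each sample with its high-order neighbors."""
    target = H @ x
    return torch.mean((x_hat - target) ** 2)

n, d = 8, 4
x = torch.randn(n, d)
H = torch.softmax(torch.randn(n, n), dim=1)   # placeholder affinity
loss = hypergraph_reconstruction_loss(x, x, H)
```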

16 pages, 2024 KiB  
Article
Classification of Initial Stages of Alzheimer’s Disease through PET Neuroimaging Modality and Deep Learning: Quantifying the Impact of Image Filtering Approaches
by Ahsan Bin Tufail, Yong-Kui Ma, Mohammed K. A. Kaabar, Ateeq Ur Rehman, Rahim Khan and Omar Cheikhrouhou
Mathematics 2021, 9(23), 3101; https://doi.org/10.3390/math9233101 - 01 Dec 2021
Cited by 14 | Viewed by 2361
Abstract
Alzheimer’s disease (AD) is a leading health concern affecting the elderly population worldwide. It is defined by amyloid plaques, neurofibrillary tangles, and neuronal loss. Neuroimaging modalities such as positron emission tomography (PET) and magnetic resonance imaging are routinely used in clinical settings to monitor the alterations in the brain during the course of progression of AD. Deep learning techniques such as convolutional neural networks (CNNs) have found numerous applications in healthcare and other technologies. Together with neuroimaging modalities, they can be deployed in clinical settings to learn effective representations of data for tasks such as classification, segmentation, and detection. Image filtering methods are instrumental in making images viable for image processing operations and have found numerous applications in image-processing-related tasks. In this work, we deployed 3D CNNs to learn effective representations of PET modality data and to quantify the impact of different image filtering approaches. We used box filtering, median filtering, Gaussian filtering, and modified Gaussian filtering to preprocess the images, which were then classified with a 3D-CNN architecture. Our findings suggest that these approaches are nearly equivalent, with no distinct advantage of one over another. For the multiclass classification task between the normal control (NC), mild cognitive impairment (MCI), and AD classes, the 3D-CNN architecture trained on Gaussian-filtered data performed best. For binary classification between the NC and MCI classes, the architecture trained on median-filtered data performed best, while for binary classification between the AD and MCI classes, the architecture trained on modified Gaussian-filtered data performed best. Finally, for binary classification between the AD and NC classes, the architecture trained on box-filtered data performed best.
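Three of the four filtering variants can be reproduced with standard SciPy calls, as sketched below; the kernel sizes and sigma are illustrative, and the paper's modified Gaussian filter is not included.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter, gaussian_filter

volume = np.random.rand(64, 64, 64).astype(np.float32)  # placeholder PET volume

box_filtered = uniform_filter(volume, size=3)        # box (mean) filtering
median_filtered = median_filter(volume, size=3)      # median filtering
gauss_filtered = gaussian_filter(volume, sigma=1.0)  # Gaussian filtering
```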

20 pages, 5794 KiB  
Article
An Astrocyte-Flow Mapping on a Mesh-Based Communication Infrastructure to Defective Neurons Phagocytosis
by Amir Masoud Rahmani, Rizwan Ali Naqvi, Saqib Ali, Seyedeh Yasaman Hosseini Mirmahaleh, Mohammed Alswaitti, Mehdi Hosseinzadeh and Kamran Siddique
Mathematics 2021, 9(23), 3012; https://doi.org/10.3390/math9233012 - 24 Nov 2021
Cited by 1 | Viewed by 1546
Abstract
In deploying the Internet of Things (IoT) and Internet of Medical Things (IoMT)-based applications and infrastructures, researchers face large numbers of sensors whose output values must be transferred between service requesters and servers. Several case studies have addressed methods and technologies, including machine learning algorithms, deep learning accelerators, processing-in-memory (PIM), and neuromorphic computing (NC) approaches, to support the complexity of data processing and the communication between IoMT nodes. Drawing inspiration from the structure of the human brain, some researchers have tackled the challenges of emerging IoT- and IoMT-based applications and the simulation of neural structures. A defective device has destructive effects on the performance and cost of an application, and detecting such devices is challenging in a communication infrastructure with many devices. Inspired by astrocyte cells, we map the flow (AFM) of the Internet of Medical Things onto mesh-network processing elements (PEs) and detect defective devices based on a phagocytosis model. This study focuses on an astrocyte’s distribution of cholesterol to neurons and presents an algorithm that uses this pattern to distribute the IoMT’s dataflow and detect defective devices. We studied Alzheimer’s symptoms to understand astrocyte and phagocytosis functions against the disease, and we employed a COVID-19 vaccination dataset to define a set of task graphs. Implementing AFM improves total runtime and energy by approximately 60.85% and 52.38%, respectively, compared with no astrocyte-flow mapping, helping IoMT infrastructure developers provide healthcare services to requesters with minimal cost and high accuracy.
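A heavily simplified sketch of mapping task-graph nodes onto a mesh while avoiding defective processing elements is given below; the astrocyte-inspired distribution pattern and phagocytosis-based detection are not reproduced, and all names are assumptions.

```python
def map_tasks_to_mesh(n_tasks, mesh_dim, defective):
    """Assign task-graph nodes to healthy PEs on a 2D mesh,
    skipping PEs flagged as defective (a hypothetical placement rule)."""
    healthy = [(r, c) for r in range(mesh_dim) for c in range(mesh_dim)
               if (r, c) not in defective]
    if n_tasks > len(healthy):
        raise ValueError("not enough healthy PEs for all tasks")
    # Place tasks over healthy PEs in row-major order.
    return {task: healthy[task] for task in range(n_tasks)}

mapping = map_tasks_to_mesh(n_tasks=6, mesh_dim=3, defective={(1, 1)})
```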
