AI Algorithm Design and Application

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: closed (15 December 2023) | Viewed by 22946

Special Issue Editor

1. School of Software Technology, Dalian University of Technology, Dalian 116620, China
2. Fuzzy Logic Systems Institute, Fukuoka 820-0067, Japan
Interests: computational intelligence

Special Issue Information

Dear Colleagues, 

Computational intelligence (CI) usually refers to the ability of a computer to learn a specific task from data or experimental observation. In the last decade, deep learning, especially deep neural networks, has received widespread attention. In fact, some of the most successful AI systems are based on CI. Beyond neural networks, other CI techniques, such as evolutionary computation and fuzzy systems, have also achieved state-of-the-art performance on many difficult tasks and facilitated the creation of new products and services in many different fields.

The purpose of this Special Issue is to provide advanced CI-based methods for solving challenging practical problems, such as few-shot learning, large-scale optimization, and uncertainty. Of particular interest is innovative research that connects mathematics with CI, involving explainable neural networks, neuro-evolution, CI-based meta-heuristics, parallel meta-heuristics with GPU computing, etc. On the application side, we welcome papers introducing novel CI-based methods for computer vision, bioinformatics, electronics, automatics, and industrial engineering. The scope of this Special Issue includes, but is not limited to, the following topics:

  • Neural Network
  • Deep Learning
  • Machine Learning
  • Fuzzy Systems
  • Evolutionary Computation
  • Swarm Intelligence
  • Meta-Heuristic
  • Few-Shot Learning
  • Large-Scale Optimization
  • Uncertainty
  • Explainable Neural Network
  • Neuro-Evolution
  • GPU Computing
  • Computer Vision
  • Bioinformatics
  • Automatics and Electronics
  • Industrial Engineering
  • Logistics
  • Smart Manufacturing

Dr. Lin Lin
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (18 papers)


Research

17 pages, 3985 KiB  
Article
Autonomous Ride-Sharing Service Using Graph Embedding and Dial-a-Ride Problem: Application to the Last-Mile Transit in Lyon City
by Omar Rifki
Mathematics 2024, 12(4), 546; https://0-doi-org.brum.beds.ac.uk/10.3390/math12040546 - 10 Feb 2024
Viewed by 487
Abstract
Autonomous vehicles are anticipated to revolutionize ride-sharing services and subsequently enhance public transportation systems through a first–last-mile transit service. Within this context, a fleet of autonomous vehicles can be modeled as a Dial-a-Ride Problem with certain features. In this study, we propose a holistic solving approach to this problem, which combines a mixed-integer linear programming formulation with a novel graph dimension reduction method based on the graph embedding framework. The latter method is needed because accounting for the heterogeneous travel demands of the covered territory tends to increase the size of the routing graph drastically, rendering exact solving of even small instances computationally infeasible. An application is provided for the real transport demand of the industrial district of “Vallée de la Chimie” in Lyon city, France. Instances involving more than 50 transport requests and 10 vehicles could be easily solved. The results suggest that this method generates routes over fewer nodes, with lower vehicle kilometers traveled than the constrained-K-means-based reduction. Reductions in GHG emissions are estimated to be around 75% relative to the private vehicle mode in our applied service. A sensitivity analysis is also provided. Full article
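The reduction step can be pictured with the constrained-K-means baseline the study compares against: request nodes are clustered, and each centroid becomes a super-node of a smaller routing graph. This is a plain K-means sketch with invented coordinates and function names, not the paper's graph-embedding method:

```python
import math

def reduce_graph(nodes, k, iters=20):
    """Cluster 2-D request nodes into k super-nodes (plain K-means).
    Each centroid replaces its cluster in the reduced Dial-a-Ride
    routing graph, shrinking the instance before exact solving."""
    # Deterministic seeding: spread the initial centroids over the input.
    centroids = [nodes[i * len(nodes) // k] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in nodes:
            j = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            groups[j].append(p)
        for j, g in enumerate(groups):
            if g:
                centroids[j] = (sum(x for x, _ in g) / len(g),
                                sum(y for _, y in g) / len(g))
    return centroids

# Toy demand: two well-separated pockets of transport requests.
nodes = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
supernodes = sorted(reduce_graph(nodes, 2))
```

The reduced graph is then small enough for the exact MILP solver to handle.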
(This article belongs to the Special Issue AI Algorithm Design and Application)

29 pages, 4793 KiB  
Article
A Machine Learning Algorithm That Experiences the Evolutionary Algorithm’s Predictions—An Application to Optimal Control
by Viorel Mînzu and Iulian Arama
Mathematics 2024, 12(2), 187; https://0-doi-org.brum.beds.ac.uk/10.3390/math12020187 - 06 Jan 2024
Viewed by 452
Abstract
Using metaheuristics such as the Evolutionary Algorithm (EA) within control structures is a realistic approach for certain optimal control problems. They often predict the optimal control values over a prediction horizon using a process model (PM). The computational effort sometimes causes the execution time to exceed the sampling period. Our work addresses a new issue: whether a machine learning (ML) algorithm could “learn” the optimal behaviour of the couple (EA and PM). A positive answer is given by proposing datasets capturing this couple’s optimal behaviour and appropriate ML models. Following a design procedure, a number of closed-loop simulations provide the sequences of optimal control and state values, which are collected and aggregated in a data structure. For each sampling period, datasets are extracted from the aggregated data. The ML algorithm trained on these datasets produces a set of regression functions. Replacing the EA predictor with the ML model, new simulations are carried out, proving that the state evolution is almost identical. The execution time decreases drastically because the PM’s numerical integrations are entirely avoided. The performance index equals the best-known value. In different case studies, the ML models succeeded in capturing the optimal behaviour of the couple (EA and PM) and yielded efficient controllers. Full article
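The dataset-then-regression idea can be sketched with a toy one-dimensional surrogate. Here an invented "optimal" control law u* = −2x + 1 stands in for the (state, control) pairs an EA-plus-process-model controller would produce in closed-loop simulation; the paper fits one regression model per sampling period:

```python
def fit_linear(states, controls):
    """Least-squares fit u ≈ a*x + b over collected (state, control) pairs,
    via the normal equations for simple linear regression."""
    n = len(states)
    sx, su = sum(states), sum(controls)
    sxx = sum(x * x for x in states)
    sxu = sum(x * u for x, u in zip(states, controls))
    a = (n * sxu - sx * su) / (n * sxx - sx * sx)
    b = (su - a * sx) / n
    return a, b

# Stand-in for the EA + process-model predictor: pretend closed-loop
# simulations showed the optimal control to be u* = -2x + 1.
states = [0.0, 0.5, 1.0, 1.5, 2.0]
controls = [-2 * x + 1 for x in states]

a, b = fit_linear(states, controls)
surrogate = lambda x: a * x + b   # replaces the expensive EA prediction
```

Once fitted, evaluating the surrogate costs a handful of arithmetic operations, which is why the execution time drops below the sampling period.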
(This article belongs to the Special Issue AI Algorithm Design and Application)

20 pages, 6010 KiB  
Article
An Adaptive Low Computational Cost Alternating Direction Method of Multiplier for RELM Large-Scale Distributed Optimization
by Ke Wang, Shanshan Huo, Banteng Liu, Zhangquan Wang and Tiaojuan Ren
Mathematics 2024, 12(1), 43; https://0-doi-org.brum.beds.ac.uk/10.3390/math12010043 - 22 Dec 2023
Viewed by 448
Abstract
In a class of large-scale distributed optimization problems, the calculation of the RELM based on the Moore–Penrose inverse matrix is prohibitively expensive, which hinders the formulation of a computationally efficient optimization model. To improve the model’s convergence performance, this paper proposes a low-computational-cost Alternating Direction Method of Multipliers (ADMM), in which the original update in ADMM is solved inexactly with approximate curvature information. Based on quasi-Newton techniques, the ADMM approach allows us to solve convex optimization problems with reasonable accuracy and computational effort. By introducing this algorithm into the RELM model, the model-fitting problem can be decomposed into a set of subproblems that can be executed in parallel to achieve efficient classification performance. To avoid storing the expensive Hessian for large problems, a limited-memory BFGS scheme is adopted for computational efficiency, and the optimal step size is obtained through a Wolfe line search strategy. To demonstrate the superiority of our methods, numerical experiments are conducted on eight real-world datasets. Results on problems arising in machine learning suggest that the proposed method is competitive with other similar methods in terms of both computational efficiency and accuracy. Full article
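The decomposition idea can be illustrated on a scalar consensus problem. This sketch uses closed-form quadratic subproblem updates in place of the paper's inexact limited-memory-BFGS solves with Wolfe line search:

```python
def admm_average(a, b, rho=1.0, iters=200):
    """Consensus ADMM for min_x (x-a)^2 + (x-b)^2, split as
    f(x) = (x-a)^2 and g(z) = (z-b)^2 with the constraint x = z.
    Each subproblem is a quadratic with a closed-form minimizer,
    mirroring how a distributed fit decomposes into cheap pieces."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (2 * a + rho * (z - u)) / (2 + rho)   # x-update (f subproblem)
        z = (2 * b + rho * (x + u)) / (2 + rho)   # z-update (g subproblem)
        u += x - z                                # dual update on x = z
    return x, z

x, z = admm_average(1.0, 5.0)   # minimizer of the sum is (a + b) / 2
```

The two subproblem updates could run on different machines; only the scalars x, z, and u are exchanged, which is the appeal of ADMM for large-scale distributed optimization.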
(This article belongs to the Special Issue AI Algorithm Design and Application)

26 pages, 6146 KiB  
Article
Research on Malware Detection Technology for Mobile Terminals Based on API Call Sequence
by Ye Yao, Yian Zhu, Yao Jia, Xianchen Shi, Lixiang Zhang, Dong Zhong and Junhua Duan
Mathematics 2024, 12(1), 20; https://0-doi-org.brum.beds.ac.uk/10.3390/math12010020 - 21 Dec 2023
Viewed by 561
Abstract
With the development of the Internet, the types and quantities of malware have grown rapidly, and how to identify unknown malware is becoming a new challenge. Traditional malware detection based on fixed features is becoming less and less effective. In order to improve detection accuracy and efficiency for mobile terminals, this paper proposes a malware detection method for mobile terminals based on the application programming interface (API) call sequence, which characterizes applications by their API call sequences and applies a series of feature preprocessing techniques to remove redundancy from those sequences. A recurrent neural network (RNN) is then used to build the model and perform detection and verification. Furthermore, this paper constructs a malware detection model based on a bidirectional recurrent neural network and uses the bidirectional long short-term memory (LSTM) network model to train on a dataset containing 5986 malware samples and 5065 benign software samples, obtaining the final detection model and its parameters. Finally, the feature vector of the APK file to be detected is passed into the model to obtain the detection result. The experimental results indicate that the detection accuracy of this method can reach 93.68%. Full article
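A typical shaping step for such a pipeline is to collapse redundant runs in the API call sequence and encode each call as an integer index before feeding it to the LSTM. The sketch below is illustrative (the paper's exact redundancy rules and API names may differ):

```python
def preprocess(api_calls, vocab=None):
    """Collapse consecutive duplicate API calls, then encode each call
    as an integer index - a common shaping step before an RNN/LSTM
    embedding layer consumes the sequence."""
    deduped = [c for i, c in enumerate(api_calls)
               if i == 0 or c != api_calls[i - 1]]
    vocab = vocab if vocab is not None else {}
    encoded = []
    for call in deduped:
        if call not in vocab:
            vocab[call] = len(vocab)   # assign the next free index
        encoded.append(vocab[call])
    return encoded, vocab

# A looped "read" collapses to one token; repeated APIs reuse indices.
seq = ["open", "read", "read", "read", "write", "open"]
encoded, vocab = preprocess(seq)
```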
(This article belongs to the Special Issue AI Algorithm Design and Application)

23 pages, 2777 KiB  
Article
A Smart Contract Vulnerability Detection Method Based on Multimodal Feature Fusion and Deep Learning
by Jinggang Li, Gehao Lu, Yulian Gao and Feng Gao
Mathematics 2023, 11(23), 4823; https://0-doi-org.brum.beds.ac.uk/10.3390/math11234823 - 29 Nov 2023
Viewed by 1116
Abstract
With the proliferation of blockchain technology in decentralized applications such as decentralized finance, supply-chain management, and identity management, smart contracts operating on a blockchain frequently encounter security issues such as reentrancy vulnerabilities, timestamp dependency vulnerabilities, tx.origin vulnerabilities, and integer overflow vulnerabilities. These security concerns pose a significant risk of causing substantial losses to user accounts. Consequently, the detection of vulnerabilities in smart contracts has become a prominent area of research. Existing research exhibits limitations, including low detection accuracy in traditional smart contract vulnerability detection approaches and the tendency of deep learning-based solutions to focus on a single type of vulnerability. To address these constraints, this paper introduces a smart contract vulnerability detection method founded on multimodal feature fusion. This method adopts a multimodal perspective to extract three modal features from the lifecycle of smart contracts, leveraging both static and dynamic features comprehensively. Through deep learning models like Graph Convolutional Networks (GCNs) and bidirectional Long Short-Term Memory networks (bi-LSTMs), effective detection of vulnerabilities in smart contracts is achieved. Experimental results demonstrate that the proposed method attains detection accuracies of 85.73% for reentrancy vulnerabilities, 85.41% for timestamp dependency vulnerabilities, 83.58% for tx.origin vulnerabilities, and 90.96% for integer overflow vulnerabilities. Furthermore, ablation experiments confirm the efficacy of the newly introduced modal features, highlighting the significance of fusing dynamic and static features in enhancing detection accuracy. Full article
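The fusion step itself can be pictured as late fusion by concatenation: per-modality feature vectors are joined and passed to a final scorer. In this sketch the GCN/bi-LSTM encoders are replaced by precomputed feature lists, and the weights are invented for illustration:

```python
def fuse_and_score(static_f, bytecode_f, runtime_f, weights, bias=0.0):
    """Late fusion by concatenation: feature vectors from three
    contract-lifecycle modalities are joined into one vector and fed
    to a toy linear scorer standing in for the final classifier."""
    fused = static_f + bytecode_f + runtime_f
    score = sum(w * x for w, x in zip(weights, fused)) + bias
    return fused, score

# Hypothetical per-modality features for one contract.
fused, score = fuse_and_score([0.1, 0.9], [0.3], [0.7, 0.2, 0.5],
                              weights=[1, 0, 2, 0, 0, -1])
```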
(This article belongs to the Special Issue AI Algorithm Design and Application)

15 pages, 2893 KiB  
Article
A Data-Driven Convolutional Neural Network Approach for Power Quality Disturbance Signal Classification (DeepPQDS-FKTNet)
by Fahman Saeed, Sultan Aldera, Mohammad Alkhatib, Abdullrahman A. Al-Shamma’a and Hassan M. Hussein Farh
Mathematics 2023, 11(23), 4726; https://0-doi-org.brum.beds.ac.uk/10.3390/math11234726 - 22 Nov 2023
Viewed by 652
Abstract
Power quality disturbance (PQD) signal classification is crucial for the real-time monitoring of modern power grids, assuring safe and reliable operation and user safety. Traditional power quality disturbance signal classification approaches are sensitive to noise, feature selection, etc. This study introduces a novel approach utilizing a data-driven convolutional neural network (CNN) to improve the effectiveness of power quality disturbance signal classification. Deep learning has been successfully used in various fields of recognition, yielding promising outcomes. Deep learning is often characterized as a complex system, with its filters and layers being determined through empirical investigations. A deep learning model was developed for the purpose of classifying PQDs, with the aim of narrowing down the search for unidentified PQDs to a specific problem domain. This approach demonstrates a high level of efficiency in accelerating the process of recognizing PQDs among a vast database of PQDs. In order to automatically identify the number of filters and the number of layers in the model in a PQD dataset, the proposed model uses pyramidal clustering, the Fukunaga–Koontz transform, and the ratio of the between-class scatter to the within-class scatter. The suggested model was assessed using the synthetic dataset generated, with and without the presence of noise. The proposed models outperformed both well-known pre-trained models and state-of-the-art PQD classification techniques in terms of classification accuracy. Full article
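The model-sizing criterion named above, the ratio of between-class to within-class scatter, can be computed for one-dimensional features as follows. This is the classical Fisher-style version; the paper applies the criterion together with pyramidal clustering and the Fukunaga–Koontz transform:

```python
def scatter_ratio(groups):
    """Ratio of between-class scatter to within-class scatter for 1-D
    features, one class per inner list. Large values mean the classes
    separate well under the current features."""
    all_x = [x for g in groups for x in g]
    grand = sum(all_x) / len(all_x)
    # Between-class: spread of class means around the grand mean.
    sb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-class: spread of samples around their own class mean.
    sw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return sb / sw

# Well-separated classes give a large ratio; overlapping ones do not.
tight = [[0.0, 0.1], [5.0, 5.1]]
loose = [[0.0, 5.0], [0.1, 5.1]]
r_tight = scatter_ratio(tight)
r_loose = scatter_ratio(loose)
```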
(This article belongs to the Special Issue AI Algorithm Design and Application)

21 pages, 1323 KiB  
Article
Intelligent Classification and Diagnosis of Diabetes and Impaired Glucose Tolerance Using Deep Neural Networks
by Alma Y. Alanis, Oscar D. Sanchez, Alonso Vaca-González and Eduardo Rangel-Heras
Mathematics 2023, 11(19), 4065; https://0-doi-org.brum.beds.ac.uk/10.3390/math11194065 - 25 Sep 2023
Viewed by 1126
Abstract
Time series classification is a challenging and exciting problem in data mining. Some diseases are classified and diagnosed based on time series. Such is the case for diabetes mellitus, which can be analyzed based on data from the oral glucose tolerance test (OGTT). Prompt diagnosis of diabetes mellitus is essential for disease management. Diabetes mellitus does not appear suddenly; instead, the patient presents symptoms of impaired glucose tolerance that can also be diagnosed via glucose tolerance testing. This work presents a classification and diagnosis scheme for diseases, specifically diabetes mellitus and impaired glucose tolerance, using deep neural networks based on time series data. In addition, data from virtual patients were obtained through the Dalla Man and UVA/Padova models; the validation was carried out with data from actual patients. The results show that the deep neural networks have an accuracy of 96%. This indicates that DNNs are a helpful tool that can improve the diagnosis and classification of diseases through early detection. Full article
(This article belongs to the Special Issue AI Algorithm Design and Application)

26 pages, 2228 KiB  
Article
A Novel Integrated Heuristic Optimizer Using a Water Cycle Algorithm and Gravitational Search Algorithm for Optimization Problems
by Mengnan Tian, Junhua Liu, Wei Yue and Jie Zhou
Mathematics 2023, 11(8), 1880; https://0-doi-org.brum.beds.ac.uk/10.3390/math11081880 - 15 Apr 2023
Cited by 1 | Viewed by 727
Abstract
This paper presents a novel composite heuristic algorithm for global optimization that organically integrates the merits of the water cycle algorithm (WCA) and the gravitational search algorithm (GSA). To effectively reinforce the exploration and exploitation of the algorithm and reasonably balance them, a modified WCA is first put forward to strengthen its search performance by introducing the concept of the basin, in which the position of a solution is also taken into account when assigning the sea or rivers and their streams, and the number of guide solutions is adaptively reduced during the search process. Furthermore, the enhanced WCA adaptively cooperates with the gravitational search to generate new solutions based on their historical performance within a certain stage. Moreover, a binomial crossover operation is incorporated after the water cycle search or the gravitational search to further improve the search capability of the algorithm. Finally, the performance of the proposed algorithm is evaluated by comparison with six excellent meta-heuristic algorithms on the IEEE CEC2014 test suite, and the numerical results indicate that the proposed algorithm is very competitive. Full article
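Of the ingredients above, the binomial crossover operation has a standard differential-evolution form, sketched here (parameter names are ours):

```python
import random

def binomial_crossover(target, donor, cr=0.9, rng=None):
    """DE-style binomial crossover, as incorporated after the water-cycle
    or gravitational search step: each component is taken from the donor
    with probability cr, and at least one component (j_rand) always is."""
    rng = rng or random.Random(0)
    n = len(target)
    j_rand = rng.randrange(n)   # guaranteed donor component
    return [donor[j] if (rng.random() < cr or j == j_rand) else target[j]
            for j in range(n)]

# Mixing a current solution (all zeros) with a newly searched one (all ones).
trial = binomial_crossover([0, 0, 0, 0], [1, 1, 1, 1], cr=0.5)
```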
(This article belongs to the Special Issue AI Algorithm Design and Application)

21 pages, 766 KiB  
Article
Impact of Machine Learning and Artificial Intelligence in Business Based on Intuitionistic Fuzzy Soft WASPAS Method
by Majed Albaity, Tahir Mahmood and Zeeshan Ali
Mathematics 2023, 11(6), 1453; https://0-doi-org.brum.beds.ac.uk/10.3390/math11061453 - 16 Mar 2023
Cited by 4 | Viewed by 1456
Abstract
Artificial intelligence (AI) is a well-known and reliable technology that enables a machine to simulate human behavior. While the major theme of AI is to make a smart computer system that thinks like a human to solve awkward problems, machine learning allows a machine to automatically learn from past information without the need for explicit programming. In this analysis, we aim to derive the idea of Aczel–Alsina aggregation operators based on an intuitionistic fuzzy soft set. The initial stage was the discovery of the primary and critical Aczel–Alsina operational laws for intuitionistic fuzzy soft sets. Subsequently, we pioneer a range of applicable theories (set out below) and identify their essential characteristics and key results: intuitionistic fuzzy soft Aczel–Alsina weighted averaging; intuitionistic fuzzy soft Aczel–Alsina ordered weighted averaging; intuitionistic fuzzy soft Aczel–Alsina weighted geometric operators; and intuitionistic fuzzy soft Aczel–Alsina ordered weighted geometric operators. Additionally, by utilizing certain key information, including intuitionistic fuzzy soft Aczel–Alsina weighted averaging and intuitionistic fuzzy soft Aczel–Alsina weighted geometric operators, we also introduce the theory of the weighted aggregates sum product assessment method for intuitionistic fuzzy soft information. This paper also introduces a multi-attribute decision-making method, which is based on derived operators for intuitionistic fuzzy soft numbers and seeks to assess specific industrial problems using artificial intelligence or machine learning. Finally, to underline the value and reasonableness of the information described herein, we compare our obtained results with some pre-existing information in the field. This comparison is supported by a range of numerical examples to demonstrate the practicality of the invented theory. Full article
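The operational laws above are built on the Aczel–Alsina t-norm. A minimal sketch on plain membership grades (the paper applies the laws to intuitionistic fuzzy soft numbers, which carry membership and non-membership parts) is:

```python
import math

def aa_tnorm(a, b, lam=2.0):
    """Aczel-Alsina t-norm:
    T(a, b) = exp(-((-ln a)^lam + (-ln b)^lam)^(1/lam)), lam > 0.
    At lam = 1 it reduces to the product t-norm, and T(a, 1) = a."""
    if a == 0.0 or b == 0.0:
        return 0.0   # boundary case: anything combined with 0 is 0
    return math.exp(-(((-math.log(a)) ** lam
                       + (-math.log(b)) ** lam) ** (1.0 / lam)))
```

The aggregation operators in the paper apply this t-norm (and its dual t-conorm) component-wise when averaging intuitionistic fuzzy soft numbers; the parameter lam tunes how conservative the aggregation is.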
(This article belongs to the Special Issue AI Algorithm Design and Application)

18 pages, 4352 KiB  
Article
A Three Stage Optimal Scheduling Algorithm for AGV Route Planning Considering Collision Avoidance under Speed Control Strategy
by Chengji Liang, Yue Zhang and Liang Dong
Mathematics 2023, 11(1), 138; https://0-doi-org.brum.beds.ac.uk/10.3390/math11010138 - 27 Dec 2022
Cited by 11 | Viewed by 2305
Abstract
With the trend toward terminal automation and the requirement to greatly improve port operation efficiency, it is necessary to optimize the traveling routes of automated guided vehicles (AGVs) with reference to the connection of loading and unloading equipment. In such a complex multi-equipment system, AGVs will inevitably collide due to various accidents in actual operation, which leads to AGV locking and reduces the efficiency of terminal operation. Considering the locking problem of AGVs, we propose a three-stage integrated scheduling algorithm for AGV route planning. Through joint optimization with quay cranes (QCs) and yard blocks, a road network model is established in the front area of the container port to optimize the paths of AGVs in the road network, and a speed control strategy is proposed to solve the AGV collision avoidance problem. In the first stage, we establish the AGV optimal route model with the goal of minimizing the AGV path according to the AGV road network situation. In the second stage, once AGV route planning is determined and the container tasks are known, the AGV task assignment model is established with the goal of minimizing the maximum completion time, and the model is solved by a genetic algorithm (GA). In the third stage, on the basis of AGV task assignment and route determination, the AGV route and task assignment scheme are input into a simulation model: an AGV collision avoidance control model is established for speed control, and an AGV route network simulation model for automated terminals considering collision avoidance is built in plant simulation software. The maximum completion time obtained from the simulation model is compared with the completion time obtained from the genetic algorithm. The proposed three-stage joint scheduling algorithm can improve the loading and unloading efficiency of the port, reduce AGV locking, and contributes to the formulation of actual port operation planning. Full article
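The second-stage model can be sketched as a toy GA that minimizes the maximum completion time over task-to-AGV assignments. Durations, operators, and parameters below are invented for illustration; the paper's encoding additionally accounts for routes and QC connections:

```python
import random

def makespan(assign, durations, n_agvs):
    """Completion time of the busiest AGV under a task-to-AGV assignment."""
    loads = [0.0] * n_agvs
    for task, agv in enumerate(assign):
        loads[agv] += durations[task]
    return max(loads)

def ga_assign(durations, n_agvs, pop=30, gens=60, rng=None):
    """Toy GA for the task assignment model: individuals are lists
    mapping each task to an AGV; fitness is the makespan."""
    rng = rng or random.Random(1)
    n = len(durations)
    popn = [[rng.randrange(n_agvs) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda a: makespan(a, durations, n_agvs))
        elite = popn[: pop // 2]                 # keep the better half
        children = []
        for _ in range(pop - len(elite)):
            p, q = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = p[:cut] + q[cut:]            # one-point crossover
            if rng.random() < 0.3:               # mutation: move one task
                child[rng.randrange(n)] = rng.randrange(n_agvs)
            children.append(child)
        popn = elite + children
    return min(popn, key=lambda a: makespan(a, durations, n_agvs))

# Six container tasks (handling times) shared by two AGVs; the best
# possible makespan is 8 (e.g. {4, 2, 2} vs {3, 3, 2}).
best = ga_assign([4, 3, 3, 2, 2, 2], 2)
```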
(This article belongs to the Special Issue AI Algorithm Design and Application)

17 pages, 1010 KiB  
Article
Zeroing Neural Networks Combined with Gradient for Solving Time-Varying Linear Matrix Equations in Finite Time with Noise Resistance
by Jun Cai, Wenlong Dai, Jingjing Chen and Chenfu Yi
Mathematics 2022, 10(24), 4828; https://0-doi-org.brum.beds.ac.uk/10.3390/math10244828 - 19 Dec 2022
Viewed by 1104
Abstract
Due to time delays and some unavoidable noise factors, obtaining a real-time solution of dynamic time-varying linear matrix equation (LME) problems is of great importance in the scientific and engineering fields. In this paper, based on the philosophy of zeroing neural networks (ZNN), we propose an integration-enhanced combined accelerating zeroing neural network (IEAZNN) model to solve LME problems accurately and efficiently. Unlike most existing ZNN research, the IEAZNN model combines two error functions: the first is designed from the gradient of the energy function to drive the norm-based error to zero, and the second adds an integral term to resist additive noise. On the strength of this novel combination of two error functions, the IEAZNN model is capable of converging in finite time and resisting noise at the same time. Moreover, theoretical proofs and numerical verification results show that the IEAZNN model can achieve higher accuracy and faster convergence in solving time-varying LME problems than the conventional ZNN (CZNN) and integration-enhanced ZNN (IEZNN) models, even in various kinds of noise environments. Full article
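The ZNN design philosophy the model builds on can be sketched for a scalar time-varying equation a(t)x = b(t): define the error e = ax − b and impose the decay law ė = −γe, which yields an explicit ODE for x. This is the conventional-ZNN baseline with invented coefficient functions, not the IEAZNN's combined error functions:

```python
import math

def znn_track(gamma=10.0, dt=1e-3, T=5.0):
    """Zeroing-neural-network sketch for a(t)*x = b(t): from
    e = a*x - b and the imposed dynamics de/dt = -gamma*e, solve
    dx/dt = (db/dt - (da/dt)*x - gamma*e) / a by Euler integration."""
    a  = lambda t: 2.0 + math.sin(t)   # time-varying coefficient (> 0)
    da = lambda t: math.cos(t)
    b  = lambda t: math.cos(t)
    db = lambda t: -math.sin(t)
    x, t = 0.0, 0.0
    while t < T:
        e = a(t) * x - b(t)
        x += dt * (db(t) - da(t) * x - gamma * e) / a(t)
        t += dt
    return x, b(T) / a(T)   # ZNN state vs. exact solution at time T

x_znn, x_exact = znn_track()
```

Because the derivative terms are fed forward, the error decays like e(0)·exp(−γt) even though the target keeps moving, which is the property the IEAZNN then hardens against noise.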
(This article belongs to the Special Issue AI Algorithm Design and Application)

19 pages, 4184 KiB  
Article
Multi-AGV Dynamic Scheduling in an Automated Container Terminal: A Deep Reinforcement Learning Approach
by Xiyan Zheng, Chengji Liang, Yu Wang, Jian Shi and Gino Lim
Mathematics 2022, 10(23), 4575; https://0-doi-org.brum.beds.ac.uk/10.3390/math10234575 - 02 Dec 2022
Cited by 6 | Viewed by 2011
Abstract
With the rapid development of global trade, ports and terminals are playing an increasingly important role, and automatic guided vehicles (AGVs) have been used as the main carriers performing the loading/unloading operations in automated container terminals. In this paper, we investigate a multi-AGV dynamic scheduling problem to improve the terminal operational efficiency, considering the sophisticated complexity and uncertainty involved in the port terminal operation. We propose to model the dynamic scheduling of AGVs as a Markov decision process (MDP) with mixed decision rules. Then, we develop a novel adaptive learning algorithm based on a deep Q-network (DQN) to generate the optimal policy. The proposed algorithm is trained on data obtained from interactions with a simulation environment that reflects the real-world operation of an automated container terminal in Shanghai, China. The simulation studies show that, compared with conventional scheduling methods using a heuristic algorithm, i.e., a genetic algorithm (GA), and rule-based scheduling, the proposed approach performs better in terms of effectiveness and efficiency. Full article
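The Bellman update behind a DQN can be sketched in tabular form on a toy two-state dispatching MDP (states, actions, and rewards are invented; the paper approximates Q with a deep network trained against the terminal simulator):

```python
import random

def q_learning(steps=5000, alpha=0.2, gamma=0.9, rng=None):
    """Tabular Q-learning on a toy dispatching MDP: two states, two
    candidate AGV actions, reward 1 for the choice matching the state.
    A DQN performs the same temporal-difference update, but with the
    Q-table replaced by a network fitted to (s, a, r, s') samples."""
    rng = rng or random.Random(0)
    Q = [[0.0, 0.0], [0.0, 0.0]]
    s = 0
    for _ in range(steps):
        a = rng.randrange(2)          # exploratory behaviour policy
        r = 1.0 if a == s else 0.0    # reward for the matching dispatch
        s2 = a                        # the action determines the next state
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])  # Bellman step
        s = s2
    return Q

Q = q_learning()   # greedy policy: pick the action matching the state
```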
(This article belongs to the Special Issue AI Algorithm Design and Application)

14 pages, 990 KiB  
Article
A Multi-Objective Optimisation Mathematical Model with Constraints Conducive to the Healthy Rhythm for Lighting Control Strategy
by Huiling Cai, Qingcheng Lin, Hanwei Liu, Xuefeng Li and Hui Xiao
Mathematics 2022, 10(19), 3471; https://0-doi-org.brum.beds.ac.uk/10.3390/math10193471 - 23 Sep 2022
Cited by 1 | Viewed by 1119
Abstract
Studies have shown that illuminance and correlated colour temperature (CCT) are strongly correlated with body responses such as circadian rhythm, alertness, and mood. It is worth noting that these responses show a complex and variable coupling, which needs to be solved using accurate mathematical models for the regulation of indoor light parameters. Therefore, in this study, by weighing the evaluations of visual comfort, alertness, valence, and arousal of mood, a multi-objective optimisation mathematical model was developed with constraints conducive to the healthy rhythm. The problem was solved with the multi-objective evolutionary algorithm based on the decomposition differential evolution (MOEA/D-DE) algorithm. Taking educational space as the analysis goal, a dual-parameter setting strategy for illuminance and CCT covering four modes was proposed: focused learning, comfortable learning, soothing learning, and resting state, which could provide a scientific basis for the regulation of the lighting control system. The alertness during class time reached 3.01 compared to 2.34 during break time, showing a good light facilitation effect. The proposed mathematical model and analysis method also have the potential for application in the lighting design and control in other spaces to meet the era of intelligent, highly flexible, and sustainable buildings. Full article
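MOEA/D-style decomposition turns the competing objectives (visual comfort, alertness, valence, arousal) into scalar subproblems, one per weight vector; a common choice is the Tchebycheff scalarisation, sketched here with invented objective values:

```python
def tchebycheff(objectives, weights, ideal):
    """Tchebycheff scalarisation used by MOEA/D-style decomposition:
    g(x) = max_i w_i * |f_i(x) - z*_i|, where z* is the ideal point.
    Each weight vector defines one scalar subproblem to minimise."""
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

# Two candidate (illuminance, CCT) settings scored on two objectives
# (both to be minimised); the ideal point is the component-wise best.
f_a, f_b = [0.2, 0.8], [0.6, 0.3]
ideal = [0.2, 0.3]
w = [0.5, 0.5]
g_a = tchebycheff(f_a, w, ideal)
g_b = tchebycheff(f_b, w, ideal)   # smaller, so b wins for this weight
```

Sweeping the weight vector over a simplex yields the set of trade-off settings from which the four lighting modes can be selected.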
(This article belongs to the Special Issue AI Algorithm Design and Application)
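The MOEA/D-DE algorithm named in this abstract decomposes a multi-objective problem into scalar subproblems and evolves them with differential evolution. The paper's actual objectives (visual comfort, alertness, mood) are not reproduced here; the sketch below uses two toy objectives and a deliberately simplified loop (Tchebycheff scalarisation, no neighbourhood mating) purely to illustrate the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

def tchebycheff(f, weights, z_star):
    """Tchebycheff scalarisation: max_i w_i * |f_i - z_i*|."""
    return np.max(weights * np.abs(f - z_star))

def objectives(x):
    # Two toy conflicting objectives standing in for the paper's
    # comfort/alertness/mood criteria (hypothetical placeholders).
    return np.array([np.sum(x**2), np.sum((x - 1.0) ** 2)])

# Decompose the problem into N scalar subproblems via weight vectors.
N, dim = 20, 2
weights = np.stack([np.array([i / (N - 1), 1 - i / (N - 1)]) for i in range(N)])
pop = rng.uniform(0.0, 1.0, size=(N, dim))
fits = np.array([objectives(x) for x in pop])
z_star = fits.min(axis=0)  # ideal point, updated online

F, CR = 0.5, 0.9  # DE/rand/1 parameters
for _ in range(100):
    for i in range(N):
        a, b, c = rng.choice(N, size=3, replace=False)
        # Differential mutation plus binomial crossover with parent i.
        trial = np.where(rng.random(dim) < CR,
                         pop[a] + F * (pop[b] - pop[c]), pop[i])
        trial = np.clip(trial, 0.0, 1.0)
        f_trial = objectives(trial)
        z_star = np.minimum(z_star, f_trial)
        # Keep the trial if it improves subproblem i's scalarised value.
        if tchebycheff(f_trial, weights[i], z_star) <= tchebycheff(fits[i], weights[i], z_star):
            pop[i], fits[i] = trial, f_trial
```

After the loop, `pop` approximates the Pareto set, with each weight vector pulling one individual toward a different trade-off between the two objectives.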

10 pages, 2325 KiB  
Article
Data-Driven Building Energy Consumption Prediction Model Based on VMD-SA-DBN
by Yongrui Qin, Meng Zhao, Qingcheng Lin, Xuefeng Li and Jing Ji
Mathematics 2022, 10(17), 3058; https://0-doi-org.brum.beds.ac.uk/10.3390/math10173058 - 24 Aug 2022
Cited by 5 | Viewed by 1419
Abstract
Prediction of building energy consumption using mathematical modeling is crucial for improving the efficiency of building energy utilization, assisting in building energy consumption planning and scheduling, and further achieving the goals of energy conservation and emission reduction. In consideration of the non-linear and non-smooth characteristics of building energy consumption time series data, a short-term, hybrid building energy consumption prediction model combining variational mode decomposition (VMD), a simulated annealing (SA) algorithm, and a deep belief network (DBN) is proposed in this study. In the proposed VMD-SA-DBN model, the VMD algorithm decomposes the time series into different modes to reduce the fluctuation of the data. An SA-DBN prediction model is built for each mode separately, with the DBN network structure parameters optimized by the SA algorithm. The prediction results of the individual models are aggregated and reconstructed to obtain the final prediction output. The validity and prediction performance of the proposed model are evaluated on a publicly available dataset, and the results show that the proposed model significantly improves the accuracy and stability of building energy consumption prediction compared with several typical machine learning methods. The mean absolute percent error (MAPE) of the VMD-SA-DBN model is 63.7%, 65.5%, 46.83%, 64.82%, 44.1%, 36.3%, and 28.3% lower than that of the long short-term memory (LSTM), gated recurrent unit (GRU), VMD-LSTM, VMD-GRU, DBN, SA-DBN, and VMD-DBN models, respectively. These results can help managers formulate more favorable low-energy emission reduction plans and improve building energy efficiency. Full article
(This article belongs to the Special Issue AI Algorithm Design and Application)
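The SA component described above searches over DBN structure parameters. As a minimal illustration of that idea, the sketch below anneals over hidden-layer sizes against a stand-in objective; `val_error` is a hypothetical placeholder for training a DBN on one VMD mode and measuring validation error, and the "sweet spot" of (128, 64) is invented for the example.

```python
import math
import random

random.seed(42)

def val_error(hidden_sizes):
    # Stand-in for training a DBN with these layer sizes and returning
    # validation error; (128, 64) is a hypothetical optimum.
    target = (128, 64)
    return sum((h - t) ** 2 for h, t in zip(hidden_sizes, target))

def neighbour(state):
    # Perturb one layer size by a small step, keeping it positive.
    i = random.randrange(len(state))
    s = list(state)
    s[i] = max(8, s[i] + random.choice([-16, -8, 8, 16]))
    return tuple(s)

state = (256, 256)
energy = val_error(state)
best, best_e = state, energy
T = 1000.0
while T > 1e-3:
    cand = neighbour(state)
    e = val_error(cand)
    # Metropolis criterion: always accept improvements, occasionally
    # accept worse candidates to escape local minima.
    if e < energy or random.random() < math.exp((energy - e) / T):
        state, energy = cand, e
        if e < best_e:
            best, best_e = cand, e
    T *= 0.97  # geometric cooling schedule
```

In the paper's pipeline, one such search would run per decomposed mode, and the resulting per-mode predictions would then be aggregated.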

17 pages, 424 KiB  
Article
Unknown Security Attack Detection of Industrial Control System by Deep Learning
by Jie Wang, Pengfei Li, Weiqiang Kong and Ran An
Mathematics 2022, 10(16), 2872; https://0-doi-org.brum.beds.ac.uk/10.3390/math10162872 - 11 Aug 2022
Viewed by 1204
Abstract
With the rapid development of network technologies, the network security of industrial control systems (ICSs) has aroused widespread concern. As a defense mechanism, an ideal intrusion detection system (IDS) can effectively detect abnormal behaviors in a system without affecting the performance of the ICS. Many deep learning methods are used to build an IDS, but they rely on massive numbers of variously labeled samples for model training. However, network traffic is imbalanced, and it is difficult for researchers to obtain sufficient attack samples. In addition, attack variants are rich, and constructing all possible attack types in advance is impossible. To overcome these challenges and improve the performance of an IDS, this paper presents a novel intrusion detection approach that, for the first time, integrates a one-dimensional convolutional autoencoder (1DCAE) and support vector data description (SVDD). In a conventional two-stage training process, the 1DCAE fails to retain the key features needed for intrusion detection and the SVDD has to add restrictions, so a joint optimization solution is introduced, and a three-stage optimization process is proposed to obtain better performance. Experiments on the benchmark intrusion detection dataset NSL-KDD show that the proposed method can effectively detect various unknown attacks while learning from normal traffic alone. Compared with recent state-of-the-art intrusion detection baselines, the proposed method improves on most metrics. Full article
(This article belongs to the Special Issue AI Algorithm Design and Application)
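The SVDD half of this approach scores traffic by its distance to the centre of a hypersphere fitted around normal data. The sketch below shows that idea in its simplest form; the random linear projection is a hypothetical stand-in for the paper's trained 1DCAE encoder, and the data are synthetic, so this is an illustration of the scoring principle rather than the paper's jointly optimized model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in encoder: the paper uses a trained 1DCAE to embed traffic
# records; a fixed random linear projection plays that role here
# (a hypothetical placeholder, not the paper's network).
W = rng.normal(size=(10, 4))
def encode(X):
    return X @ W

# "Normal" training traffic, plus test traffic containing one
# out-of-distribution record standing in for an unknown attack.
normal = rng.normal(0.0, 1.0, size=(200, 10))
test = np.vstack([rng.normal(0.0, 1.0, size=(5, 10)),
                  rng.normal(20.0, 1.0, size=(1, 10))])

# SVDD idea in its simplest form: enclose the normal embeddings in a
# hypersphere; the anomaly score is the distance to the centre, and
# the threshold is a radius covering most of the training data.
Z = encode(normal)
centre = Z.mean(axis=0)
train_dist = np.linalg.norm(Z - centre, axis=1)
radius = np.quantile(train_dist, 0.95)

scores = np.linalg.norm(encode(test) - centre, axis=1)
flags = scores > radius  # True = flagged as a potential attack
```

Because the boundary is fitted from normal traffic only, records far outside it can be flagged without any attack samples at training time, which is what makes the scheme suitable for unknown attacks.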

15 pages, 4710 KiB  
Article
Research on Location Selection Model of 5G Micro Base Station Based on Smart Street Lighting System
by Hanwei Liu, Wenchao Li, Huiling Cai, Qingcheng Lin, Xuefeng Li and Hui Xiao
Mathematics 2022, 10(15), 2627; https://0-doi-org.brum.beds.ac.uk/10.3390/math10152627 - 27 Jul 2022
Cited by 5 | Viewed by 1846
Abstract
To promote the development and construction of smart cities, the massive equipment requirements of sensing terminals have increased the pressure on urban site resource allocation. Light poles are suitable for carrying various urban functional equipment to form a smart street lighting system, which can provide rich site resources for the large-scale construction of urban functional facilities such as 5G micro base stations. However, the selection and combination of equipment mounted in a smart street lighting system typically focus only on functional superposition at the physical level, without considering the relevance of each subsystem in practical application scenarios. Therefore, this study proposes a 5G micro base station location model based on a smart street lighting system. The correlation and cooperativity between 5G micro base stations and mounted devices are fully considered, and a universal system-level location selection index is developed to realize the rational utilization of urban space resources and intelligent linkage between subsystems. The results show that the model is significantly effective for functional areas with different road network characteristics and provides practical, robust, and accurate help for similar location selection problems. Full article
(This article belongs to the Special Issue AI Algorithm Design and Application)
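Location selection of this kind is often approached as a coverage problem over candidate sites. The toy sketch below greedily picks light poles to host micro base stations, maximising newly covered demand and breaking ties with a co-location score; all pole and demand identifiers are invented, and the scalar score is only a stand-in for the paper's system-level index.

```python
# Hypothetical candidate light poles: each maps to the set of demand
# points it covers and a co-location score with already-mounted
# equipment (a toy stand-in for the paper's system-level index).
poles = {
    "P1": ({"d1", "d2", "d3"}, 0.9),
    "P2": ({"d3", "d4"}, 0.6),
    "P3": ({"d4", "d5", "d6"}, 0.8),
    "P4": ({"d1", "d6"}, 0.4),
}

def greedy_select(poles, k):
    """Pick k poles, each round taking the pole that covers the most
    still-uncovered demand, breaking ties by co-location score."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(
            (p for p in poles if p not in chosen),
            key=lambda p: (len(poles[p][0] - covered), poles[p][1]),
        )
        chosen.append(best)
        covered |= poles[best][0]
    return chosen, covered

chosen, covered = greedy_select(poles, 2)  # → ['P1', 'P3'], full coverage
```

The greedy heuristic is a classic baseline for set-cover-style siting; the paper's model additionally accounts for subsystem cooperativity and road network characteristics, which a single scalar score cannot capture.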

10 pages, 1217 KiB  
Article
DenSec: Secreted Protein Prediction in Cerebrospinal Fluid Based on DenseNet and Transformer
by Lan Huang, Yanli Qu, Kai He, Yan Wang and Dan Shao
Mathematics 2022, 10(14), 2490; https://0-doi-org.brum.beds.ac.uk/10.3390/math10142490 - 18 Jul 2022
Cited by 1 | Viewed by 1325
Abstract
Cerebrospinal fluid (CSF) surrounds the mammalian central nervous system (CNS); therefore, CSF contains numerous potential protein biomarkers associated with CNS disease. Currently, approximately 4300 proteins have been identified in CSF by protein profiling. However, due to diverse modifications, as well as existing technical limits, large-scale protein identification in CSF is still considered a challenge. Inspired by computational methods, this paper proposes a deep learning framework, named DenSec, for secreted protein prediction in CSF. In the first phase of DenSec, each input protein is encoded as a matrix with a fixed size of 1000 × 20 by calculating a position-specific score matrix (PSSM) of its sequence. In the second phase, a dense convolutional network (DenseNet) extracts features from these PSSMs automatically. After that, a Transformer with a fully connected dense layer acts as the classifier, performing a binary classification of whether a protein is secreted into CSF. According to the experimental results, DenSec achieves a mean accuracy of 86.00% on the test dataset and outperforms state-of-the-art methods. Full article
(This article belongs to the Special Issue AI Algorithm Design and Application)
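The first phase above requires every protein, regardless of length, to become a 1000 × 20 matrix. A common way to achieve this is to truncate long sequences and zero-pad short ones; the abstract does not state DenSec's exact padding scheme, so the zero-padding below is an assumption for illustration.

```python
import numpy as np

def fix_size_pssm(pssm, length=1000, alphabet=20):
    """Pad (with zero rows) or truncate a per-residue PSSM to a fixed
    length x 20 matrix, the input shape a DenseNet-style CNN expects.
    Zero padding is an assumption, not DenSec's documented scheme."""
    pssm = np.asarray(pssm, dtype=np.float32)
    out = np.zeros((length, alphabet), dtype=np.float32)
    n = min(len(pssm), length)
    out[:n] = pssm[:n]  # rows beyond the sequence stay all-zero
    return out

# Hypothetical PSSMs: a short protein of 120 residues, and an
# over-long one of 1500 residues that gets truncated.
short = np.random.rand(120, 20)
long_ = np.random.rand(1500, 20)
a, b = fix_size_pssm(short), fix_size_pssm(long_)
```

Fixing the input size this way lets a single convolutional network batch-process proteins of arbitrary length at the cost of discarding residues past position 1000.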

15 pages, 961 KiB  
Article
Dynamic Uncertainty Study of Multi-Center Location and Route Optimization for Medicine Logistics Company
by Zhiyuan Yuan and Jie Gao
Mathematics 2022, 10(6), 953; https://0-doi-org.brum.beds.ac.uk/10.3390/math10060953 - 16 Mar 2022
Cited by 3 | Viewed by 1889
Abstract
The multi-center location of pharmaceutical logistics is the focus of pharmaceutical logistics research, and the dynamic uncertainty of multi-center location is a difficult research point. To reduce the risk and cost of multi-enterprise, multi-category, large-volume, high-efficiency, and nationwide centralized medicine distribution, this study explores the best solution for planning medicine delivery for a medicine logistics company. In this paper, based on the idea of big data, comprehensive consideration is given to uncertainties in center location, medicine type, medicine chemical characteristics, the cost of medicine quality control (refrigeration and monitoring costs), delivery timeliness, and other factors. On this basis, a multi-center location and route-optimization model for a medicine logistics company under dynamic uncertainty is constructed. The accuracy of the algorithm is improved by hybridizing the fuzzy C-means algorithm, the sequential quadratic programming algorithm, and the variable neighborhood search algorithm to combine the advantages of each. Finally, the model and the algorithm are verified through multi-enterprise, multi-category, high-volume, high-efficiency, and nationwide centralized medicine distribution cases, and various combinations of the three algorithms and several rival algorithms are compared and analyzed. Compared with the rival algorithms, the hybrid algorithm has higher accuracy in solving the multi-center location and route-optimization problem under dynamic uncertainty in pharmaceutical logistics. Full article
(This article belongs to the Special Issue AI Algorithm Design and Application)
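Fuzzy C-means, the first component of the hybrid algorithm, assigns each demand point a soft membership in every cluster, which is useful for multi-center location because customers near a boundary genuinely belong partly to two centers. Below is a minimal sketch of the plain algorithm on invented 2-D demand coordinates; the paper's hybrid with SQP and variable neighborhood search is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

def fuzzy_c_means(X, c=2, m=2.0, iters=50):
    """Plain fuzzy C-means: alternate membership and centre updates.
    m > 1 is the fuzzifier; m -> 1 approaches hard k-means."""
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)  # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        # Centres are membership-weighted means of the data.
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centres, U

# Two hypothetical demand clusters (e.g. customer coordinates).
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)),
               rng.normal(5.0, 0.3, (30, 2))])
centres, U = fuzzy_c_means(X)
```

In the paper's pipeline, the cluster centres would seed candidate distribution-center locations, which subsequent continuous optimization and neighborhood search then refine under the cost and timeliness constraints.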
