Article

A New Neural Network Training Algorithm Based on Artificial Bee Colony Algorithm for Nonlinear System Identification

Department of Computer Engineering, Engineering Architecture Faculty, Nevsehir Haci Bektas Veli University, 50300 Nevsehir, Turkey
Submission received: 22 August 2022 / Revised: 14 September 2022 / Accepted: 17 September 2022 / Published: 24 September 2022
(This article belongs to the Special Issue Applications of Mathematical Modeling and Neural Networks)

Abstract

Artificial neural networks (ANNs), one of the most important artificial intelligence techniques, are used extensively in modeling many types of problems. Creating effective models with ANN requires a successful training process, and a successful training process in turn requires an effective training algorithm. In this study, a new neural network training algorithm called the hybrid artificial bee colony algorithm based on effective scout bee stage (HABCES) is proposed. The HABCES algorithm includes four fundamental changes. Arithmetic crossover, which utilizes knowledge of the global best solution, is used in the solution generation mechanisms of the employed bee and onlooker bee stages. This solution generation mechanism also has an adaptive step size. Limit is an important control parameter; in the standard ABC algorithm it is constant throughout the optimization, whereas in the HABCES algorithm it is determined dynamically depending on the number of generations. Finally, unlike the standard ABC algorithm, the HABCES algorithm uses a solution generation mechanism based on the global best solution in the scout bee stage. Through these features, the HABCES algorithm has strong local and global convergence ability. First, the performance of the HABCES algorithm was analyzed on the solution of global optimization problems. Then, applications on the training of the ANN were carried out: the ANN was trained using the HABCES algorithm for the identification of nonlinear static and dynamic systems. The performance of the HABCES algorithm was compared with the standard ABC, aABC and ABCES algorithms. The results showed that the HABCES algorithm was better in terms of both solution quality and convergence speed. A performance increase of up to 69.57% was achieved by using the HABCES algorithm in the identification of static systems; this rate was 46.82% for the identification of dynamic systems.

1. Introduction

ANN is one of the most important artificial intelligence techniques and has been used to solve many real-world problems [1]. Its popularity is a result of its strong structure. Security, science, engineering, medical science, finance, education, energy, and manufacturing are some of the areas where ANN is used. One of the important capabilities of ANN is its ability to learn: it can model problems using existing data and can generalize to data it has never seen. The training process of ANN is therefore very important, and successful modeling of a problem is directly related to a successful training process. Meta-heuristic algorithms are among the artificial intelligence approaches used extensively in ANN training [2,3,4].
Meta-heuristic algorithms have been used successfully in solving many real-world problems [5,6,7,8]. The success of meta-heuristic algorithms has increased the interest in this field. Currently, there are more than 200 meta-heuristic algorithms in the literature. The equilibrium optimizer (EO) [9], the marine predators algorithm (MPA) [10], the slime mould algorithm (SMA) [11], the reptile search algorithm (RSA) [12], the dandelion optimizer [13], the Runge Kutta optimizer (RUN) [14], and the weighted mean of vectors (INFO) [15] are some of the meta-heuristic algorithms proposed in 2020 and beyond. Artificial bee colony (ABC) [16,17], cuckoo search (CS) [18], particle swarm optimization (PSO) [19], the differential evolution algorithm (DE) [20] and the flower pollination algorithm (FPA) [21] are some of the most popular meta-heuristic algorithms in the literature.
One of the popular meta-heuristic algorithms used in ANN training is the ABC algorithm. The ABC algorithm is used to solve many real-world problems [22]. The strong global and local convergence ability of the ABC algorithm is one of the important reasons for its popularity. This strong structure of the ABC algorithm has also enabled it to be used extensively in ANN-based studies. Some ANN studies based on the ABC algorithm are as follows: Karaboga and Akay [23] evaluated the performance of the ABC algorithm in ANN training and compared it with different algorithms. They reported that the ABC algorithm was successful in training ANN. Karaboga et al. [24] trained feed-forward neural networks (FFNNs) using the ABC algorithm. Ozturk and Karaboga [25] proposed a hybrid training algorithm based on Levenberg–Marquardt (LM) and ABC algorithm. Karaboga and Ozturk [26] evaluated its performance on classification problems by training FFNN with the ABC algorithm. Kaya [27] compared the performance of several meta-heuristic algorithms in ANN training for nonlinear system identification. It was reported that the ABC algorithm is among the most successful training algorithms. Uzlu et al. [28] used an approach based on the ABC algorithm and ANN for the prediction of hydroelectric generation in Turkey. They stated that their proposed approach was effective. Xu et al. [29] proposed a modified ABC algorithm to train FFNN. Kaya and Baştemur Kaya [30] proposed a novel neural network training algorithm called artificial bee colony algorithm based on effective scout bee stage (ABCES) for the identification of nonlinear static systems. Ghanem and Jantan [31] introduced a hybrid training algorithm based on the ABC algorithm and monarch butterfly optimization for cyberattack classification applications. Shah et al. [32] presented a quick gbest guided artificial bee colony algorithm to train FFNN for Saudi stock market prices prediction. They reported that their proposed approach was more effective than typical computational algorithms.
Many variants have been proposed to obtain more effective solutions with the ABC algorithm. At the same time, hybrid approaches have also been suggested using different meta-heuristic algorithms and some of their features. Yildiz [33] developed a new hybrid algorithm based on the ABC algorithm and the Taguchi method for the structural design optimization of a vehicle component and a multi-tool milling optimization problem. Karaboga and Kaya [34] proposed a hybrid ABC algorithm using arithmetic crossover to train ANFIS. Jadon et al. [35] suggested a novel hybrid algorithm (HABCDE) based on the ABC algorithm and DE. The performance of HABCDE was compared with different variants and meta-heuristic approaches for solving various science and engineering optimization problems. Li et al. [36] presented a hybrid feature selection algorithm (MFABC) based on a discrete ABC algorithm for Parkinson’s disease diagnosis. The performance of MFABC was compared with meta-heuristic approaches such as PSO, GWO, GA, ABC, QPSO, DE, FIREFLY, BAT, FWA and BBO, and it was reported that MFABC was effective. Kefayat et al. [37] suggested a hybrid optimization approach including ant colony optimization and the ABC algorithm for the probabilistic optimal placement and sizing of distributed energy resources. Duan et al. [38] proposed a hybrid algorithm based on the ABC algorithm and the quantum evolutionary algorithm to solve continuous optimization problems. Awadallah et al. [39] used the hill climbing optimizer in the employed bee stage and proposed a hybrid algorithm (HABC) based on the hill climbing optimizer and the ABC algorithm for a nurse rostering problem. Apart from these, there are also different hybrid approaches [40,41,42,43,44,45,46].
In light of the above information, it can be seen that the ABC algorithm has an important area of use in ANN training. To improve the performance of the ABC algorithm in ANN training, variants possessing improved global and local convergence capability are needed. Therefore, in this study, the HABCES algorithm, which is a powerful variant of the ABC algorithm, is proposed. To analyze the performance of the HABCES algorithm in ANN training, it was applied to the modeling of parity problems and the identification of nonlinear static and dynamic systems. This study makes important contributions to the literature and has many innovative aspects. These are summarized below:
  • The study uses one of the most effective variants of the ABC algorithm proposed for ANN training. The HABCES algorithm is proven to be effective by comparing it with different variants.
  • It is one of the most comprehensive studies on the identification of nonlinear static and dynamic systems.
  • The study uses an important variant in which both global convergence and local convergence ability are developed. It can be used to solve many different real-world problems.
  • Unlike other variants of ABC algorithms in the literature, the related algorithm is improved by using adaptive arithmetic crossover, adaptive step size, an adaptive limit parameter, and an effective scout bee solution generation mechanism.

2. Materials and Methods

2.1. Standard Artificial Bee Colony (ABC) Algorithm

The ABC algorithm models the foraging behavior of honey bees [16,17]. Unlike the behavior of real bees, the ABC algorithm makes some simplifying assumptions. It includes three different types of bees: employed bees, onlooker bees and scout bees. Half of the colony consists of employed bees, and the other half consists of onlooker bees. The position of a food source corresponds to a candidate solution of a problem, and the nectar amount of a food source corresponds to the solution quality. The ABC algorithm has three basic control parameters: colony size, limit, and the maximum number of generations.
In the ABC algorithm, the process starts with the random determination of the positions of the food sources using (1), where $x_j^{min}$ and $x_j^{max}$ are the lower and upper bounds of the jth parameter and $x_i$ corresponds to the ith solution. To produce new solutions in the employed bee and onlooker bee phases, the solution generation mechanism given in (2) is used. Here, $\phi_{ij}$ is a random number in the range [−1, 1] and k is an integer number in the range [1, number of employed bees].
$x_{ij} = x_j^{min} + rand(0,1)\,(x_j^{max} - x_j^{min})$ (1)
$v_{ij} = x_{ij} + \phi_{ij}\,(x_{ij} - x_{kj})$ (2)
$P_i = \dfrac{fitness_i}{\sum_{j=1}^{SN} fitness_j}$ (3)
A comparison is made as to whether the nectar amount of the candidate source is better than that of the previous one. If it is, the information of the previous source is deleted from memory and replaced with the information of the candidate source. In the standard ABC algorithm, a selection process is then carried out using the roulette wheel method. The selection probability of a source is computed by (3), where $fitness_i$ represents the quality of the ith source and SN is the number of employed bees.
The ABC algorithm has a failure counter. If the value of the failure counter reaches the limit value, a new solution is randomly generated by the scout bees using (1). This process is repeated until the maximum number of generations is reached.
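To make the flow of the standard ABC algorithm concrete, the following minimal Python sketch implements the cycle described above: random initialization via (1), the neighbor search of (2) in the employed and onlooker bee stages, roulette wheel selection via (3), and the scout bee reset. The function and variable names (abc_minimize, trial, etc.) are illustrative and not taken from the paper.

```python
import numpy as np

def abc_minimize(f, lb, ub, colony_size=20, limit=100, max_cycle=500, seed=0):
    """Minimal sketch of the standard ABC cycle described in Section 2.1."""
    rng = np.random.default_rng(seed)
    sn = colony_size // 2                       # employed bees = number of food sources
    d = len(lb)
    x = lb + rng.random((sn, d)) * (ub - lb)    # Eq. (1): random initialization
    cost = np.array([f(s) for s in x])
    trial = np.zeros(sn, dtype=int)             # failure counters

    def fitness(c):                             # common ABC fitness transform
        return 1.0 / (1.0 + c) if c >= 0 else 1.0 + abs(c)

    def neighbor(i):
        # Eq. (2): perturb one randomly chosen dimension towards a random partner k
        k = rng.choice([j for j in range(sn) if j != i])
        j = rng.integers(d)
        v = x[i].copy()
        v[j] = x[i, j] + rng.uniform(-1.0, 1.0) * (x[i, j] - x[k, j])
        return np.clip(v, lb, ub)

    def greedy(i, v):                           # keep the candidate only if it is better
        cv = f(v)
        if cv < cost[i]:
            x[i], cost[i], trial[i] = v, cv, 0
        else:
            trial[i] += 1

    for _ in range(max_cycle):
        for i in range(sn):                     # employed bee stage
            greedy(i, neighbor(i))
        fit = np.array([fitness(c) for c in cost])
        p = fit / fit.sum()                     # Eq. (3): selection probabilities
        for i in rng.choice(sn, size=sn, p=p):  # onlooker bee stage (roulette wheel)
            greedy(i, neighbor(i))
        worst = int(np.argmax(trial))           # scout bee stage: abandon the most exhausted source
        if trial[worst] > limit:
            x[worst] = lb + rng.random(d) * (ub - lb)
            cost[worst], trial[worst] = f(x[worst]), 0
    best = int(np.argmin(cost))
    return x[best], cost[best]

# Example: 10-dimensional Sphere function (F1 in Table 1)
best_x, best_f = abc_minimize(lambda s: float(np.sum(s ** 2)),
                              lb=np.full(10, -100.0), ub=np.full(10, 100.0))
```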

2.2. Hybrid Artificial Bee Colony Algorithm Based on Effective Scout Bee Stage (HABCES)

The hybrid artificial bee colony algorithm based on effective scout bee stage (HABCES) improves both the global and local search capability of the standard ABC algorithm. The aABC algorithm [34] and ABCES algorithm [30] are two important variants of the ABC algorithm. The HABCES algorithm was created by combining the innovative features of these variants and adding new additional features.
To increase the local search capability of the ABC algorithm, two fundamental innovations were made in the structure of the aABC algorithm. First, arithmetic crossover was used in the solution generation mechanisms of the employed and onlooker bee phases; the crossover is applied between the instantaneous solution and the global best solution. Second, the step size is adjusted adaptively according to the failure counter, denoted $trial_i$ in (5). To strengthen the local search capability of the HABCES algorithm, the solution search mechanism of the aABC algorithm specified in (4) and (5) was used, where γ is the crossover rate and α is the adaptivity coefficient. One of the major drawbacks of the aABC algorithm is the manual setting of these parameters: their optimum values may change according to the problem type, and for the best results it is necessary to try all alternatives. This disadvantage is eliminated in the HABCES algorithm. In each iteration, the values of γ and α are adjusted adaptively according to the number of iterations, as noted in (6) and (7), where iter is the current iteration number and maxCycle is the maximum number of iterations. As can be understood from (6), the effect of the global best solution on the new solution increases as the number of iterations increases; that is, better solutions are sought around the global best solution. At first, the effect of the global best solution is minimal, which prevents new solutions from immediately becoming similar to the global best solution. Equation (7) shows the adaptive adjustment of α, which controls the step size: its value is initially 0, changes with the number of iterations, and reaches 1 when iter equals maxCycle. This approach allows a new solution to be searched for in a remote region of the existing solution in the initial iterations; as the number of iterations increases, the search region narrows.
$v_{ij} = x_{ij}\,\gamma + x_{gj}\,(1-\gamma) + B_i\,\phi_{ij}\,(x_{ij} - x_{kj})$ (4)
$B_i = \dfrac{1}{1 + trial_i \cdot \alpha}$ (5)
$\gamma = rand \cdot \dfrac{maxCycle - iter}{maxCycle}$ (6)
$\alpha = \dfrac{iter}{maxCycle}$ (7)
$Limit = 1 + w \times D \times FoodNumber \times \dfrac{maxCycle - iter}{maxCycle}$ (8)
$v_{ij} = \begin{cases} x_{ij}\,\epsilon + x_{gj}\,(1-\epsilon), & r_1 < r_2 \\ x_{ij} + x_{ij}\,\delta, & \text{otherwise} \end{cases}$ (9)
$r_2 = rand + \dfrac{maxCycle - iter}{maxCycle}$ (10)
Limit is one of the important control parameters of the ABC algorithm. When the failure counter reaches the limit value, a random new solution is generated in the scout bee stage. Random generation of the new solution means abandoning previously achieved gains, which negatively affects the global convergence ability of the algorithm. In addition, in the standard ABC algorithm the value of the limit parameter is fixed, and a limit that is too large or too small can turn into a disadvantage depending on the development of the solution. These two issues are solved in the ABCES algorithm. As stated in (8), the limit value is dynamically adjusted depending on the current iteration number, where w is a random number in the range [0, 1] and D is the number of parameters of the problem to be optimized. The ABCES algorithm also utilizes a solution generation mechanism that uses the current solution and the global best solution instead of determining the solutions randomly in the scout bee phase. In (9) and (10), r1 and ε are random numbers in the range [0, 1], and δ is the step size, accepted as 0.01. In this way, the gains already obtained are transferred to the new generations, and the global convergence capability of the algorithm is increased. These two strengths of the ABCES algorithm are integrated into the HABCES algorithm to strengthen its global search capability.
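As a sketch only, and assuming the reconstruction of (4) to (10) given above is read correctly, the HABCES-specific pieces can be expressed in Python as below. The update is written for a whole parameter vector for brevity, whereas the algorithm applies it per dimension; names such as habces_schedules are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def habces_schedules(iteration, max_cycle, d, food_number):
    """Adaptive control parameters, following the reconstructed Eqs. (6)-(8)."""
    gamma = rng.random() * (max_cycle - iteration) / max_cycle                         # Eq. (6)
    alpha = iteration / max_cycle                                                      # Eq. (7)
    limit = 1 + rng.random() * d * food_number * (max_cycle - iteration) / max_cycle   # Eq. (8)
    return gamma, alpha, limit

def habces_employed_update(x_i, x_k, x_gbest, trial_i, gamma, alpha):
    """Eqs. (4)-(5): arithmetic crossover with the global best plus an adaptive step."""
    b_i = 1.0 / (1.0 + trial_i * alpha)            # adaptive step-size factor B_i
    phi = rng.uniform(-1.0, 1.0, size=x_i.shape)
    return x_i * gamma + x_gbest * (1.0 - gamma) + b_i * phi * (x_i - x_k)

def habces_scout_update(x_i, x_gbest, iteration, max_cycle, delta=0.01):
    """Eqs. (9)-(10): scout bee stage guided by the global best instead of a random restart."""
    r1, eps = rng.random(), rng.random()
    r2 = rng.random() + (max_cycle - iteration) / max_cycle
    if r1 < r2:
        return x_i * eps + x_gbest * (1.0 - eps)
    return x_i + x_i * delta
```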
In light of this information, the basic architecture of the HABCES algorithm can be summarized as follows:
  • Arithmetic crossover is used in the solution generation mechanism of the employed and onlooker bee stages and an adaptive local search is adopted.
  • Arithmetic crossover and adaptivity parameters are adjusted dynamically.
  • Limit, one of the important control parameters, is dynamically adjusted depending on the number of iterations. Its value changes with each iteration.
  • In the scout bee stage, instead of randomly generating new solutions, a strong solution generation mechanism including the global best solution and the current solution is used.

3. Results

3.1. Applications on the Solution of the Global Optimization Problems

In this section, the performance of the HABCES algorithm in solving global optimization problems was evaluated. For this purpose, the 43 benchmark test functions given in Table 1 were used. The first 20 test functions (F1–F20) are suitable for high-dimensional problems, while the remaining test functions (F21–F43) are low-dimensional optimization problems. For high-dimensional problems, the colony size and the maximum number of generations were taken as 50 and 500, respectively. For low-dimensional optimization problems, the maximum number of generations was taken as 100. Each application was run 30 times, and the mean fitness values were calculated.
The performance of the HABCES algorithm was compared with the standard ABC, aABC and ABCES algorithms. The results obtained for the first 20 test functions are given in Table 2. In addition to the results, Table 2 also includes the search range, dimensions, and optimal values. The best results on all test functions were obtained with the proposed HABCES algorithm. A value of 5.7 × 10−2 was reached using the HABCES algorithm for F1. For the ABCES and aABC algorithms, the fitness values were at the 10−6 and 10−5 levels. Only the HABCES algorithm reached the optimal value for F2. After the HABCES algorithm, ABCES had the best fitness value, with a standard deviation value of 3.0 × 10−7. Although very high error values were obtained with the ABC, ABCES and aABC algorithms for F3, very successful results were found when utilizing the HABCES algorithm. The best fitness value for F4, 3.0 × 10−5, was obtained by using the HABCES algorithm. Due to the difficulty of F5, high fitness values were generally achieved by all algorithms; the best result was obtained using the HABCES algorithm. The optimal value of F6 is 0, and this value was achieved by the HABCES and ABCES algorithms. Interestingly, the least successful result for this function was obtained using the aABC algorithm. The best result for F7 was found to be 9.1 × 10−18, and it was obtained using the HABCES algorithm. The fitness value of the HABCES algorithm for F8 was at the 10−3 level; for the other algorithms, it was greater than 10−3. The HABCES algorithm was very successful for F9, with a fitness value of 2.4 × 10−10. The closest result for F9 was obtained using the ABCES algorithm, whose fitness value was 1.2 × 10−5. While the fitness value of the HABCES algorithm was at the 10−2 level for F10, it was greater than 1 for the other algorithms; the best result after the HABCES algorithm was obtained when using the ABC algorithm. The best fitness value found for F11 was 5.9 × 10−11 and was obtained when utilizing the HABCES algorithm. A value of 2.2 × 10−10 was obtained by using the HABCES algorithm for F12, while the fitness values of the other algorithms were greater than 10−5. The fitness values found for F13 were between 22 and 32; the best result was achieved when using the HABCES algorithm, and the least successful result was obtained with the ABC algorithm.
The most successful algorithm for F14 was the HABCES algorithm. Generally, large fitness values were achieved for F15. The best result was 8.0 × 101 and was obtained when using the HABCES algorithm. The most successful algorithm after HABCES was ABCES, with a fitness value of 9.3 × 101. For F16, the difference between the fitness values of the HABCES algorithm and the ABC algorithm was very large, and the HABCES algorithm was better than the other algorithms tested here. The optimal value of F17 was −1174.98, and only the HABCES algorithm reached this value. The best result after the HABCES algorithm was obtained with the aABC algorithm. The best fitness value for F18 was found to be 3.1 × 10−7, and was obtained when using the HABCES algorithm. For F19 and F20, the HABCES algorithm was more effective than the other algorithms.
The results obtained for the functions between F21 and F43 are presented in Table 3. The optimum value was achieved with all algorithms for F21, F33, F37 and F38. The best result for F22 was found to be 4.4 × 10−6, and was obtained when using the HABCES algorithm. The optimum value was achieved when utilizing the ABCES, aABC and HABCES algorithms for F23; the fitness value of the ABC algorithm was 7.1 × 10−15. The best result for F24 was found with the HABCES algorithm, which achieved the optimal value; after the HABCES algorithm, the best result was obtained when using the aABC algorithm. Effective fitness values were obtained when using the HABCES and aABC algorithms for F25. Although the HABCES algorithm had the best fitness value for F26, the results obtained when using the other algorithms were also successful. The HABCES algorithm was better than the other algorithms for F27. The best fitness value for F28 was obtained when using the aABC algorithm; after the aABC algorithm, the best result was found by utilizing the HABCES algorithm. The best results for F29 were obtained via the HABCES and aABC algorithms, with fitness values of 7.5 × 10−6 and 3.2 × 10−5, respectively. The optimal value of F30 is −0.18465, and the closest result to the optimal value was obtained when using the HABCES algorithm. The best result for F31 was found to be 1.7 × 10−18, and was obtained when using the HABCES algorithm; the least successful algorithm was ABCES. The optimal value of F32 is 0.9. The best result for F32 was obtained by using the aABC algorithm, while the fitness value of the HABCES algorithm was 0.906667. The fitness values obtained with the aABC and HABCES algorithms for F34 were both 0, and the other algorithms were less successful. Generally effective fitness values were obtained for F35, with the HABCES algorithm being the most successful. The best fitness value for F36 was found to be 0.397887, obtained by utilizing both the aABC and HABCES algorithms. The fitness value of the HABCES algorithm for F39 was 1.2 × 10−3, which was the best result. For F40, the fitness value of all algorithms was the same. The best value for F41 was obtained when using the ABC algorithm. For F42, the best fitness value was found to be 4.1 × 10−1, and was obtained with the HABCES algorithm. The performance of the HABCES algorithm for F43 was better than that of the other algorithms; its fitness value was 1.8 × 10−7, and the best result after the HABCES algorithm was found with the ABCES algorithm.
The fitness values obtained on the 43 test functions showed that the performance of the HABCES algorithm was the best. However, solution quality is not the only criterion; the convergence graphs of the algorithms also give an idea of their performance. In Figure 1 and Figure 2, the convergence of the ABC, aABC, ABCES and HABCES algorithms is compared for the first 20 test functions. For F1, F3, F4, F7, F9, F14, F16, F17, F18 and F19, the HABCES algorithm showed fast convergence and reached an effective solution in a short time. For F2, all algorithms approached the optimal value in a short time. For F5, F6, F8, F10, F11, F12 and F20, the initial convergence speeds of the algorithms are similar; with an increasing number of generations, the convergence of the HABCES algorithm became more effective than that of the others. For F13, the convergence speeds of the algorithms are close to one another. Due to the structure of F15, the convergence graphs of the algorithms showed a different appearance. The convergence graphs for the functions between F21 and F32 are compared in Figure 3. The HABCES algorithm converged rapidly for F22, F23, F24, F27, F29, F30 and F31. For F21, the convergence speeds of the algorithms are close to one another. For F25 and F26, the HABCES algorithm converged rapidly in a short time, after which the speed of convergence decreased. For F28, the convergence speed of the HABCES algorithm was fast at first and then decreased. The convergence graphs for the functions between F33 and F43 are compared in Figure 4. The HABCES algorithm showed fast convergence for F33, F34 and F35. In general, the convergence speeds were close for F36, F37, F38, F39 and F40. For F41, the HABCES algorithm reached a good solution in a short time. For F42, the convergence speed of the HABCES algorithm was better than that of the others. For F43, the convergence of the ABCES and HABCES algorithms was more effective. When the convergence graphs and solution qualities were evaluated together, it was concluded that the performance of the HABCES algorithm was better than that of the others.

3.2. Applications for the Training of the Neural Network

In this section, the performance of the HABCES algorithm regarding neural network training was evaluated. In this context, the algorithm was applied to three different groups of problems. These were the modeling of parity problems, the identification of nonlinear static systems, and the identification of nonlinear dynamic systems. The mean square error (MSE) was used as the error metric. Each application was run 30 times, and the mean error was calculated. The summing function was used as the transfer function. The sigmoid function was chosen as the activation function. The maximum number of generations was considered as the stopping criterion.
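For the ANN experiments, each food source is a flat vector holding all network weights and biases, and its cost is the MSE of the resulting network. The sketch below shows this encoding for a single-hidden-layer network with sigmoid activations; the exact output activation and weight ordering used in the paper are not stated, so those details are assumptions, and the XOR example is for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unpack(params, n_in, n_hidden):
    """Split a flat parameter vector into weights/biases of an n_in-n_hidden-1 network."""
    i = 0
    w1 = params[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = params[i:i + n_hidden];                                i += n_hidden
    w2 = params[i:i + n_hidden].reshape(n_hidden, 1);           i += n_hidden
    b2 = params[i:i + 1]
    return w1, b1, w2, b2

def mse_objective(params, X, y, n_hidden):
    """MSE cost used to score one candidate weight vector (one food source)."""
    w1, b1, w2, b2 = unpack(params, X.shape[1], n_hidden)
    h = sigmoid(X @ w1 + b1)                  # hidden layer (summed inputs + sigmoid)
    y_hat = sigmoid(h @ w2 + b2).ravel()      # output unit (sigmoid assumed here)
    return float(np.mean((y - y_hat) ** 2))

# Example: score one random candidate for the XOR problem with a 2-5-1 structure
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
n_hidden = 5
dim = 2 * n_hidden + n_hidden + n_hidden + 1  # 21 trainable parameters for 2-5-1
candidate = np.random.default_rng(1).uniform(-1, 1, dim)
print(mse_objective(candidate, X, y, n_hidden))
```

Any of the training algorithms compared in this study would then minimize mse_objective over this flat parameter vector.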
The results obtained when using the ABC, aABC, ABCES and the proposed HABCES algorithms for modeling of XOR, 3-bit parity and 4-bit parity problems are compared in Table 4. For each problem, 5 and 10 neurons were used in the hidden layer. For the XOR problem, the number of generations was taken to be 50. In the others, it was 100. A colony size of 20 was adopted. In the XOR problem, increasing numbers of neurons decreased the error value. The error values obtained when using the HABCES algorithm for 2-5-1 and 2-10-1 network structures were 0.00406869 and 0.000516968, respectively. After the HABCES algorithm, the best results were found with the ABCES algorithm. As in the XOR problem, the increase in the number of neurons improved the quality of the solution in the 3-bit parity problem. The best result obtained was 0.00855215, which was achieved when using the HABCES algorithm with the 3-10-1 network structure. The 4-bit parity problem was more difficult than the other problems. This situation can also be observed from the error values found. The error values found for the 4-5-1 and 4-10-1 network structures when using the HABCES algorithm were 0.118425 and 0.0913625, respectively. The performance of the other algorithms was worse. Table 4 shows that the solution quality of the HABCES algorithm was better for all problem and network structures. Figure 5 provides a comparison of the convergences of the ABC, aABC, ABCES and HABCES algorithms when solving parity problems. The fastest and most efficient convergence for each problem was achieved using the HABCES algorithm.
Secondly, the performance of the HABCES algorithm was analyzed with respect to the identification of nonlinear static systems. The information regarding the nonlinear static systems used is given in Table 5. S1 has one input. S2 and S3 have two inputs. S4 and S5 have three inputs. S6 has four inputs. All systems have one output. Neural networks with 4, 8 and 12 neurons in the hidden layer were used for each system. Colony size and the maximum number of generations belonging to the training algorithms were taken as 20 and 1000, respectively. In general, 80% of the data were used for the training process. The rest was reserved for the testing process. The results obtained when using the HABCES algorithms for nonlinear static system identification are given in Table 6. It can be seen that both the training and test results for S1 improve in parallel with increasing numbers of neurons. The best results were found when using the 1-12-1 network structure. The best training and test error values were 0.00072 and 0.00161, respectively. The effect of the number of neurons on performance was the same for S2 as it was for S1. The best training error value found was 0.00058. Additionally, the best test error value obtained was 0.00453. Increasing the number of neurons in S3 worsened the solution quality. The best error values for the training and testing process were found when using the 2-4-1 network structure. The error values were 0.00014 and 0.00297, respectively. In S4, the 3-8-1 network structure was more effective than the others. The training error value of the 3-8-1 network structure was 0.00154. The test result was also 0.00207. The best results for S5 were obtained with a network including four neurons in the hidden layer, similarly to the case of S3. The training and test error values for S5 were 0.00082 and 0.00463, respectively. Reducing the number of neurons in S6 improved performance. Effective results were achieved with the 4-4-1 network structure. The best training error value for S6 was 0.00148. Additionally, the test error value was 0.001563. Table 7 compares the results obtained when using the ABC, aABC, ABCES and HABCES algorithms for nonlinear static system identification. For all systems, the best training error value was found when using the HABCES algorithm. Other than for S1 and S6, the best test results were obtained when using the HABCES algorithm. The best test results for S1 and S6 were obtained by using the aABC algorithm. Table 8 shows the increase in performance achieved when using the HABCES algorithm. The HABCES algorithm provided a performance improvement of up to 69.57% compared to using the ABC algorithm for the training process. For the test process, this rate was 35.29%. With respect to the aABC algorithm, performance improvements of up to 54.84% (training) and 23.05% (test), respectively, were achieved. When comparing the HABCES algorithm with the ABCES algorithm, effective performance increases can be observed. A performance increase of up to 32.56% was achieved for training. For the test process, this rate was 54.11%. Figure 6 presents a comparison of the convergences of the ABC, aABC, ABCES and HABCES algorithms. For all systems, the HABCES algorithm obtained effective results by achieving fast convergence. In particular, the convergence speeds for S1, S2, S3, S5 and S6 were much better than those of the other algorithms. In some iterations of S4, the ABCES and the HABCES algorithms came close to one another. 
A comparison of the real output and the predicted output for all nonlinear static systems is given in Figure 7. It can be seen that the real and predicted outputs overlap with each other for all systems. This is an indication that the HABCES algorithm can successfully solve the corresponding problems.
Thirdly, the algorithms were applied for the identification of nonlinear dynamic systems. Eight nonlinear dynamic systems were used. The nonlinear dynamic systems utilized are presented in Table 9. D3, D5 and D8 consist of two inputs. D1, D2, D4 and D7 have three inputs. D6 consists of four inputs. Except for D6, 80% of the data were used for training. The rest were designated for the testing process. For D6, these rates were 90% and 10%, respectively. Network structures with 4, 8 and 12 neurons in the hidden layer were utilized. The colony size and maximum number of generations used were 20 and 1000, respectively, as in the identification of nonlinear static systems. The results obtained using the HABCES algorithm in nonlinear dynamic system identification are given in Table 10. The best training and test error values for D1 were obtained using the 3-8-1 network structure. The related error values were 0.00039 and 0.00072, respectively. Increasing the number of neurons in D2 decreased the error value. The best results were obtained when using the 3-12-1 network structure. The training error value for the 3-12-1 network structure was 0.00066. The test error value was 0.00082. The most effective results for D3 and D4 were obtained when using network structures featuring eight neurons in the hidden layer. The training error values of D3 and D4 were 0.00057 and 0.00348, respectively. In addition, their test error values were 0.00070 and 0.00367, respectively. For D5, effective results were generally found when using all network structures. The 2-8-1 network structure was slightly more successful than the others. Increasing the number of neurons in D6 decreased performance; that is, the results worsened. The best training and testing results were obtained when using the 4-4-1 network structure. The best training error value of D6 was 0.00067. The test error value was 0.00105. The 3-8-1 network structure was effective for D7. There was a clear difference between the error values obtained for 3-8-1 and those obtained for the other network structures. The best training and test error values for D7 were 0.00030 and 0.00050, respectively. According to the training and test results in D8, the 2-8-1 network structure was more effective. The best training error value obtained for D8 was 0.00017. The test error value was 0.00031. The performance of the HABCES algorithm was compared with the ABC, aABC and ABCES algorithms, and the results are presented in Table 11. It can be seen that the HABCES algorithm is better than the others with respect to the values of training and test error for all systems. A considerable increase in performance was achieved when using the HABCES algorithm. The performance increase rates according to the systems are given in Table 12. With the HABCES algorithm, a performance increase of up to 46.82% was achieved in the training process compared to when using the ABC algorithm. This ratio was 39.53% for the test. Compared to the aABC algorithm, the HABCES algorithm achieved performance improvements of up to 37.96% and 35.00%, respectively, for the training and testing processes. At the same time, the HABCES algorithm was more successful than the ABCES algorithm. With respect to the ABCES algorithm, a 20.48% performance increase was achieved in the training process. For the testing process, this ratio was 25.77%. Figure 8 presents a comparison of the convergences of the ABC, aABC, ABCES and HABCES algorithms.
In D1, D2, D4, D6, D7 and D8, the HABCES algorithm showed a rapid convergence and reached effective solutions in a short time. In D3 and D5, the HABCES and ABCES algorithms had good convergence. A comparison of the real output and the predicted output for nonlinear dynamic system identification is given in Figure 9. It can be understood from the overlapping output graphics that successful modeling for all systems was achieved when using the HABCES algorithm.

4. Discussion

This study aims to develop a variant with strong global and local search capability for neural network training. The HABCES algorithm is a hybrid algorithm consisting of a combination of the aABC algorithm and ABCES algorithm. The strengths of both algorithms have been taken, making the HABCES algorithm a powerful algorithm with effective global and local convergence capability.
The arithmetic crossover feature was adapted for the HABCES algorithm, and knowledge of the global best solution was used. At the same time, the width of the search area was determined adaptively. This adaptive feature generally resulted in improved solution quality at each iteration. These two features provided a strong convergence feature. In particular, the determination of the solutions in the scout bee phase by means of an effective solution generation mechanism, rather than being randomly generated, ensured the preservation of the gains obtained. All of these changes strengthened the convergence ability of the HABCES algorithm and improved its solution quality. Although the HABCES algorithm was developed as a neural network training algorithm, it should be emphasized that it was also successful at solving global optimization problems. This fact expands the usage area of the algorithm. To interpret the performance of the HABCES algorithm with respect to different variants, it was compared with the standard ABC, aABC and ABCES algorithms. The results showed that the HABCES algorithm was more successful than the aABC and ABCES algorithms both at solving global optimization problems and at training the neural network.
The HABCES algorithm was first applied to solving global optimization problems. Forty-three benchmark test functions were used. These test functions were examined under two groups: high-dimensional and low-dimensional problems. High-dimensional problems can be more difficult or complex due to their higher number of dimensions. Due to the fast convergence of the HABCES algorithm, it was found that it was able to approach the optimal solution within a short period of time. The other variants fell short of the performance achieved by the HABCES algorithm. For low-dimensional optimization problems, it may be easier to reach the optimal result in a low number of iterations. Therefore, with increasing difficulty or size of a problem, the success of the HABCES algorithm could be more clearly observed.
In the training of neural networks, the algorithm was applied for the modeling of parity problems, the identification of nonlinear static systems, and the identification of nonlinear dynamic systems. The amount of data available for training when solving the XOR, 3-bit parity, and 4-bit parity problems was limited. At the same time, the problem became more complex with increasing numbers of inputs, and in parallel with this, the error value also increased. For these problems, the HABCES algorithm had a significant performance advantage compared to the other variants. In particular, the results were obtained with a low number of iterations. The fact that the HABCES algorithm was able to achieve an effective solution in a short time is an important indicator of its success.
Most of the problems encountered in daily life exhibit nonlinear behavior. Therefore, it is important to identify nonlinear systems. Nonlinear systems are inherently difficult problems. Effective training algorithms are required in order to obtain successful results. Nonlinear systems can be static or dynamic. In this study, the algorithm was applied for both system types. It can be observed that the HABCES algorithm was able to achieve effective results in a short period of time for the identification of nonlinear systems. When examining the output graphs obtained for these systems, the HABCES algorithm can be seen to have been successful. The graphs of real and predicted output overlap. The success of the HABCES algorithm at performing neural network training for the identification of these systems shows that it can also be used for solving different problems. The aABC algorithm stands out as a result of its arithmetic crossover and adaptive step size. The ABCES algorithm also stands out due to its adaptive limit value, as well as its solution generation mechanism at the scout bee stage. These features affect the result differently. In the identification of static systems, it can be seen that adaptive limit values and the effective scout bee stage have a more significant positive effect on the results than the arithmetic crossover and adaptive step size. In the identification of dynamic systems, different system behaviors are observed depending on the type of system.
Another important indicator is the standard deviation. In all problem types, low standard deviation values were obtained alongside the reported solutions. Each application was repeated 30 times, and the mean error/fitness values were reported. The low standard deviation values show that the obtained results can be reproduced within a certain tolerance. Since meta-heuristic algorithms usually start with random solutions, the closeness of the obtained results to one another also increases confidence in the algorithm.
It can be observed that the network structures created in these applications affect the performance. In particular, an increase in the number of neurons resulted in an increase in performance in some cases, while decreasing the quality of the solution in other cases. This is related to the difficulty of the problems.
This study has limitations in terms of population size, maximum number of generations, network structures, and problem types. A value of 20, one of the values for which the ABC algorithm is effective, was used as the colony size. For the maximum number of generations, different values were selected depending on the problem type. In neural network training, it is known that the network structure affects performance. In this study, the applications were realized using a limited number of network structures for each problem. Two different types of problem were used. These were the solving of global optimization problems and the training of a neural network. The success of the HABCES algorithm should be evaluated within the context of these limitations.

5. Conclusions

This study proposed a new variant referred to as the HABCES algorithm for neural network training. Effective solution generation mechanisms were used in the employed bee, onlooker bee, and scout bee stages of the HABCES algorithm. The algorithm basically incorporates four changes. The first is the usage of arithmetic crossover between the current solution and the global best solution. Secondly, an adaptive approach was used in the selection of the local search area. Specifically, it changes depending on the number of generations. This can be thought of as a dynamic step size. Third, the limit control parameter was dynamically adjusted depending on the generation number. Fourth, in the scout bee stage, new solutions were created using an effective solution generation mechanism instead of randomly generating them.
Unlike the standard ABC algorithm, the HABCES algorithm was able to reach effective solutions in a short time as a result of these four fundamental innovations. In particular, as the dimensions and difficulty of the problem increased, the success of the HABCES algorithm could be observed more clearly. This was particularly evident in the solution of global optimization problems. At the same time, the HABCES algorithm was generally more successful than the standard ABC, aABC and ABCES algorithms in solving global optimization problems.
To create effective models with ANN, a successful training algorithm is required. Therefore, this study focused on developing a more successful algorithm for ANN training. In ANN training, there are different parameters that affect performance in addition to the training algorithm; the number of inputs, the number of layers, the number of neurons, the activation function, and the transfer function are among them. This was observed in the applications: the structure of the neural network affected the performance, and this effect varied depending on the type of problem. In some problems, increasing the number of neurons improved the solution quality, while in others the opposite was observed. Within its limitations, the HABCES algorithm appeared to be effective on the basis of convergence graphs, output graphs, and simulation results.
Nonlinear systems are inherently complex and difficult. Effective results were obtained when using the HABCES algorithm for the identification of the related systems. In particular, it had a significant advantage over variants such as aABC and ABCES. In nonlinear static systems, although it varies according to the system, the training performance increased by up to 54.84% with reference to the aABC algorithm. In comparison with the ABCES algorithm, this improvement reached 32.56%. Similar success was achieved for nonlinear dynamic systems. For nonlinear dynamic systems, these improvements were 37.96% and 20.48%, respectively. These rates increased even more with respect to the standard ABC algorithm. Compared to the ABC algorithm, performance increases of up to 69.57% and 46.82% were achieved.
The proposed variant was used for ANN training for the identification of nonlinear systems in this study. As is widely known, ANN is used in modeling of many systems. In future studies, the HABCES algorithm can be used as a training algorithm for modeling different systems, or new variants can be suggested by creating new updates to increase the performance of the HABCES algorithm.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Abiodun, O.I.; Jantan, A.; Omolara, A.E.; Dada, K.V.; Mohamed, N.A.; Arshad, H. State-of-the-art in artificial neural network applications: A survey. Heliyon 2018, 4, e00938.
  2. Devikanniga, D.; Vetrivel, K.; Badrinath, N. Review of meta-heuristic optimization based artificial neural networks and its applications. J. Phys. Conf. Ser. 2019, 1362, 012074.
  3. Nur, A.S.; Radzi, N.H.M.; Ibrahim, A.O. Artificial neural network weight optimization: A review. TELKOMNIKA Indones. J. Electr. Eng. 2014, 12, 6897–6902.
  4. Kumar, D. Meta-heuristic Techniques to Train Artificial Neural Networks for Medical Image Classification: A Review. Recent Adv. Comput. Sci. Commun. (Former. Recent Pat. Comput. Sci.) 2022, 15, 513–530.
  5. Tayarani-N, M.-H.; Yao, X.; Xu, H. Meta-heuristic algorithms in car engine design: A literature survey. IEEE Trans. Evol. Comput. 2014, 19, 609–629.
  6. Yang, B.; Wang, J.; Zhang, X.; Yu, T.; Yao, W.; Shu, H.; Zeng, F.; Sun, L. Comprehensive overview of meta-heuristic algorithm applications on PV cell parameter identification. Energy Convers. Manag. 2020, 208, 112595.
  7. Wu, Y. A survey on population-based meta-heuristic algorithms for motion planning of aircraft. Swarm Evol. Comput. 2021, 62, 100844.
  8. Yang, B.; Wang, J.; Yu, L.; Shu, H.; Yu, T.; Zhang, X.; Yao, W.; Sun, L. A critical survey on proton exchange membrane fuel cell parameter estimation using meta-heuristic algorithms. J. Clean. Prod. 2020, 265, 121660.
  9. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190.
  10. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
  11. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323.
  12. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158.
  13. Zhao, S.; Zhang, T.; Ma, S.; Chen, M. Dandelion Optimizer: A nature-inspired metaheuristic algorithm for engineering applications. Eng. Appl. Artif. Intell. 2022, 114, 105075.
  14. Ahmadianfar, I.; Heidari, A.A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 2021, 181, 115079.
  15. Ahmadianfar, I.; Heidari, A.A.; Noshadian, S.; Chen, H.; Gandomi, A.H. INFO: An efficient optimization algorithm based on weighted mean of vectors. Expert Syst. Appl. 2022, 195, 116516.
  16. Karaboga, D. Artificial bee colony algorithm. Scholarpedia 2010, 5, 6915.
  17. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471.
  18. Yang, X.-S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214.
  19. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
  20. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
  21. Yang, X.-S. Flower pollination algorithm for global optimization. In Proceedings of the International Conference on Unconventional Computing and Natural Computation, Orléan, France, 3–7 September 2012; pp. 240–249.
  22. Karaboga, D.; Gorkemli, B.; Ozturk, C.; Karaboga, N. A comprehensive survey: Artificial bee colony (ABC) algorithm and applications. Artif. Intell. Rev. 2014, 42, 21–57.
  23. Karaboga, D.; Akay, B. Artificial bee colony (ABC) algorithm on training artificial neural networks. In Proceedings of the 2007 IEEE 15th Signal Processing and Communications Applications, Eskisehir, Turkey, 11–13 June 2007; pp. 1–4.
  24. Karaboga, D.; Akay, B.; Ozturk, C. Artificial bee colony (ABC) optimization algorithm for training feed-forward neural networks. In Proceedings of the International Conference on Modeling Decisions for Artificial Intelligence, Kitakyushu, Japan, 16–18 August 2007; pp. 318–329.
  25. Ozturk, C.; Karaboga, D. Hybrid artificial bee colony algorithm for neural network training. In Proceedings of the 2011 IEEE Congress of Evolutionary Computation (CEC), New Orleans, LA, USA, 5–8 June 2011; pp. 84–88.
  26. Karaboga, D.; Ozturk, C. Neural networks training by artificial bee colony algorithm on pattern classification. Neural Netw. World 2009, 19, 279.
  27. Kaya, E. A Comprehensive Comparison of the Performance of Metaheuristic Algorithms in Neural Network Training for Nonlinear System Identification. Mathematics 2022, 10, 1611.
  28. Uzlu, E.; Akpınar, A.; Özturk, H.T.; Nacar, S.; Kankal, M. Estimates of hydroelectric generation using neural networks with the artificial bee colony algorithm for Turkey. Energy 2014, 69, 638–647.
  29. Xu, F.; Pun, C.-M.; Li, H.; Zhang, Y.; Song, Y.; Gao, H. Training feed-forward artificial neural networks with a modified artificial bee colony algorithm. Neurocomputing 2020, 416, 69–84.
  30. Kaya, E.; Baştemur Kaya, C. A novel neural network training algorithm for the identification of nonlinear static systems: Artificial bee colony algorithm based on effective scout bee stage. Symmetry 2021, 13, 419.
  31. Ghanem, W.A.; Jantan, A. Training a neural network for cyberattack classification applications using hybridization of an artificial bee colony and monarch butterfly optimization. Neural Process. Lett. 2020, 51, 905–946.
  32. Shah, H.; Tairan, N.; Garg, H.; Ghazali, R. A quick gbest guided artificial bee colony algorithm for stock market prices prediction. Symmetry 2018, 10, 292.
  33. Yildiz, A.R. A new hybrid artificial bee colony algorithm for robust optimal design and manufacturing. Appl. Soft Comput. 2013, 13, 2906–2912.
  34. Karaboga, D.; Kaya, E. An adaptive and hybrid artificial bee colony algorithm (aABC) for ANFIS training. Appl. Soft Comput. 2016, 49, 423–436.
  35. Jadon, S.S.; Tiwari, R.; Sharma, H.; Bansal, J.C. Hybrid artificial bee colony algorithm with differential evolution. Appl. Soft Comput. 2017, 58, 11–24.
  36. Li, H.; Pun, C.-M.; Xu, F.; Pan, L.; Zong, R.; Gao, H.; Lu, H. A hybrid feature selection algorithm based on a discrete artificial bee colony for Parkinson’s diagnosis. ACM Trans. Internet Technol. 2021, 21, 1–22.
  37. Kefayat, M.; Ara, A.L.; Niaki, S.N. A hybrid of ant colony optimization and artificial bee colony algorithm for probabilistic optimal placement and sizing of distributed energy resources. Energy Convers. Manag. 2015, 92, 149–161.
  38. Duan, H.-B.; Xu, C.-F.; Xing, Z.-H. A hybrid artificial bee colony optimization and quantum evolutionary algorithm for continuous optimization problems. Int. J. Neural Syst. 2010, 20, 39–50.
  39. Awadallah, M.A.; Bolaji, A.L.; Al-Betar, M.A. A hybrid artificial bee colony for a nurse rostering problem. Appl. Soft Comput. 2015, 35, 726–739.
  40. Mazini, M.; Shirazi, B.; Mahdavi, I. Anomaly network-based intrusion detection system using a reliable hybrid artificial bee colony and AdaBoost algorithms. J. King Saud Univ. Comput. Inf. Sci. 2019, 31, 541–553.
  41. Stephan, P.; Stephan, T.; Kannan, R.; Abraham, A. A hybrid artificial bee colony with whale optimization algorithm for improved breast cancer diagnosis. Neural Comput. Appl. 2021, 33, 13667–13691.
  42. Gaidhane, P.J.; Nigam, M.J. A hybrid grey wolf optimizer and artificial bee colony algorithm for enhancing the performance of complex systems. J. Comput. Sci. 2018, 27, 284–302.
  43. Gupta, S.; Deep, K. Hybrid sine cosine artificial bee colony algorithm for global optimization and image segmentation. Neural Comput. Appl. 2020, 32, 9521–9543.
  44. Badem, H.; Basturk, A.; Caliskan, A.; Yuksel, M.E. A new hybrid optimization method combining artificial bee colony and limited-memory BFGS algorithms for efficient numerical optimization. Appl. Soft Comput. 2018, 70, 826–844.
  45. Mallala, B.; Papana, V.P.; Sangu, R.; Palle, K.; Chinthalacheruvu, V.K.R. Multi-Objective Optimal Power Flow Solution Using a Non-Dominated Sorting Hybrid Fruit Fly-Based Artificial Bee Colony. Energies 2022, 15, 4063.
  46. Zhang, L.; Xuan, J.; Shi, T. Obtaining More Accurate Thermal Boundary Conditions of Machine Tool Spindle Using Response Surface Model Hybrid Artificial Bee Colony Algorithm. Symmetry 2020, 12, 361.
Figure 1. Comparison of the convergences of the ABC, aABC, ABCES and HABCES algorithms when solving high-dimensional global optimization problems (F1–F15).
Figure 1. Comparison of the convergences of the ABC, aABC, ABCES and HABCES algorithms when solving high-dimensional global optimization problems (F1–F15).
Mathematics 10 03487 g001
Figure 2. Comparison of the convergences of the ABC, aABC, ABCES and HABCES algorithms when solving high-dimensional global optimization problems (F16–F20).
Figure 2. Comparison of the convergences of the ABC, aABC, ABCES and HABCES algorithms when solving high-dimensional global optimization problems (F16–F20).
Mathematics 10 03487 g002aMathematics 10 03487 g002b
Figure 3. Comparison of the convergences of the ABC, aABC, ABCES and HABCES algorithms when solving low-dimensional global optimization problems (F21–F32).
Figure 3. Comparison of the convergences of the ABC, aABC, ABCES and HABCES algorithms when solving low-dimensional global optimization problems (F21–F32).
Mathematics 10 03487 g003
Figure 4. Comparison of the convergences of the ABC, aABC, ABCES and HABCES algorithms in solving low-dimensional global optimization problems (F33–F43).
Figure 4. Comparison of the convergences of the ABC, aABC, ABCES and HABCES algorithms in solving low-dimensional global optimization problems (F33–F43).
Mathematics 10 03487 g004aMathematics 10 03487 g004b
Figure 5. Comparison of the convergences of the ABC, aABC, ABCES and HABCES algorithms when solving parity problems.
Figure 6. Comparison of the convergences of the ABC, aABC, ABCES and HABCES algorithms for nonlinear static system identification.
Figure 7. Comparison of the real output and the predicted output for nonlinear static system identification.
Figure 8. Comparison of the convergences of the ABC, aABC, ABCES and HABCES algorithms for nonlinear dynamic system identification.
Figure 9. Comparison of the real output and the predicted output for nonlinear dynamic system identification.
Table 1. Benchmark functions used for solving global optimization problems.
Function | Function Name | Formulation
F1 | Sphere | $f(x)=\sum_{i=1}^{n}x_i^2$
F2 | Exponential | $f(x)=-\exp\left(-0.5\sum_{i=1}^{n}x_i^2\right)$
F3 | High Conditioned Elliptic | $f(x)=\sum_{i=1}^{n}\left(10^{6}\right)^{\frac{i-1}{n-1}}x_i^2$
F4 | Ackley | $f(x)=-20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$
F5 | Schwefel 1.2 | $f(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_j\right)^2$
F6 | Step | $f(x)=\sum_{i=1}^{n}\left(\left\lfloor x_i+0.5\right\rfloor\right)^2$
F7 | Sum of Different Powers | $f(x)=\sum_{i=1}^{n}\left|x_i\right|^{i+1}$
F8 | Griewank | $f(x)=\frac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$
F9 | Sum Squares | $f(x)=\sum_{i=1}^{n}i\,x_i^2$
F10 | Rastrigin | $f(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$
F11 | Penalized 1 | $f(x)=\frac{\pi}{n}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,a,k,m)$, where $y_i=1+\frac{1}{4}(x_i+1)$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m,&x_i>a\\0,&-a\le x_i\le a\\k(-x_i-a)^m,&x_i<-a\end{cases}$
F12 | Penalized 2 | $f(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n-1}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n}u(x_i,a,k,m)$, with $u(x_i,a,k,m)$ defined as in F11
F13 | Rosenbrock | $f(x)=\sum_{i=1}^{n-1}\left[100\left(x_{i+1}-x_i^2\right)^2+\left(x_i-1\right)^2\right]$
F14 | Levy | $f(x)=\sin^2(\pi w_1)+\sum_{i=1}^{n-1}(w_i-1)^2\left[1+10\sin^2(\pi w_i+1)\right]+(w_n-1)^2\left[1+\sin^2(2\pi w_n)\right]$, where $w_i=1+\frac{x_i-1}{4}$
F15 | Zakharov | $f(x)=\sum_{i=1}^{n}x_i^2+\left(\sum_{i=1}^{n}0.5\,i\,x_i\right)^2+\left(\sum_{i=1}^{n}0.5\,i\,x_i\right)^4$
F16 | Rotated Hyper-Ellipsoid | $f(x)=\sum_{i=1}^{n}\sum_{j=1}^{i}x_j^2$
F17 | Styblinski–Tang | $f(x)=\frac{1}{2}\sum_{i=1}^{n}\left(x_i^4-16x_i^2+5x_i\right)$
F18 | Cigar | $f(x)=x_1^2+10^{6}\sum_{i=2}^{n}x_i^2$
F19 | Qing | $f(x)=\sum_{i=1}^{n}\left(x_i^2-i\right)^2$
F20 | Yang 1 | $f(x)=\sum_{i=1}^{n}\varepsilon_i\left|x_i\right|^{i}$
F21 | Bartels Conn | $f(x)=\left|x_1^2+x_2^2+x_1x_2\right|+\left|\sin x_1\right|+\left|\cos x_2\right|$
F22 | Beale | $f(x)=\left(1.5-x_1+x_1x_2\right)^2+\left(2.25-x_1+x_1x_2^2\right)^2+\left(2.625-x_1+x_1x_2^3\right)^2$
F23 | Bohachevsky 1 | $f(x)=x_1^2+2x_2^2-0.3\cos(3\pi x_1)-0.4\cos(4\pi x_2)+0.7$
F24 | Bohachevsky 2 | $f(x)=x_1^2+2x_2^2-0.3\cos(3\pi x_1)\cos(4\pi x_2)+0.3$
F25 | Camel Function—Three Hump | $f(x)=2x_1^2-1.05x_1^4+\frac{x_1^6}{6}+x_1x_2+x_2^2$
F26 | Egg Crate | $f(x)=x_1^2+x_2^2+25\left(\sin^2x_1+\sin^2x_2\right)$
F27 | Himmelblau | $f(x)=\left(x_1^2+x_2-11\right)^2+\left(x_1+x_2^2-7\right)^2$
F28 | Leon | $f(x)=100\left(x_2-x_1^2\right)^2+\left(1-x_1\right)^2$
F29 | Matyas | $f(x)=0.26\left(x_1^2+x_2^2\right)-0.48x_1x_2$
F30 | Mishra 3 | $f(x)=\sqrt{\left|\cos\sqrt{\left|x_1^2+x_2^2\right|}\right|}+0.01\left(x_1+x_2\right)$
F31 | Price 1 | $f(x)=\left(\left|x_1\right|-5\right)^2+\left(\left|x_2\right|-5\right)^2$
F32 | Price 2 | $f(x)=1+\sin^2x_1+\sin^2x_2-0.1e^{-x_1^2-x_2^2}$
F33 | Schaffer 1 | $f(x)=0.5+\frac{\sin^2\left(\left(x_1^2+x_2^2\right)^2\right)-0.5}{\left[1+0.001\left(x_1^2+x_2^2\right)\right]^2}$
F34 | Schaffer 2 | $f(x)=0.5+\frac{\sin^2\left(x_1^2-x_2^2\right)-0.5}{\left[1+0.001\left(x_1^2+x_2^2\right)\right]^2}$
F35 | Trecanni | $f(x)=x_1^4+4x_1^3+4x_1^2+x_2^2$
F36 | Branin's RCOS 1 | $f(x)=\left(x_2-\frac{5.1x_1^2}{4\pi^2}+\frac{5x_1}{\pi}-6\right)^2+10\left(1-\frac{1}{8\pi}\right)\cos x_1+10$
F37 | Easom | $f(x)=-\cos x_1\cos x_2\,e^{-\left(x_1-\pi\right)^2-\left(x_2-\pi\right)^2}$
F38 | Six Hump Camel Function | $f(x)=\left(4-2.1x_1^2+\frac{x_1^4}{3}\right)x_1^2+x_1x_2+\left(-4+4x_2^2\right)x_2^2$
F39 | Cube | $f(x)=100\left(x_2-x_1^3\right)^2+\left(1-x_1\right)^2$
F40 | Hartmann 3 | $f(x)=-\sum_{i=1}^{4}a_i\exp\left(-\sum_{j=1}^{3}A_{ij}\left(x_j-P_{ij}\right)^2\right)$
F41 | Hartmann 6 | $f(x)=-\sum_{i=1}^{4}a_i\exp\left(-\sum_{j=1}^{6}A_{ij}\left(x_j-P_{ij}\right)^2\right)$
F42 | Colville | $f(x)=100\left(x_1-x_2^2\right)^2+\left(1-x_1\right)^2+90\left(x_4-x_3^2\right)^2+\left(1-x_3\right)^2+10.1\left[\left(x_2-1\right)^2+\left(x_4-1\right)^2\right]+19.8\left(x_2-1\right)\left(x_4-1\right)$
F43 | Miele Cantrell | $f(x)=\left(e^{-x_1}-x_2\right)^4+100\left(x_2-x_3\right)^6+\tan^4\left(x_3-x_4\right)+x_1^8$
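These benchmark definitions are straightforward to implement. As a minimal illustration (Python with NumPy is assumed here; this code is not part of the original experiments), three of the functions used in the comparisons below can be written as follows:

```python
import numpy as np

def sphere(x):
    # F1: sum of squared components
    return np.sum(x ** 2)

def rastrigin(x):
    # F10: highly multimodal benchmark with many local minima
    return np.sum(x ** 2 - 10.0 * np.cos(2 * np.pi * x) + 10.0)

def ackley(x):
    # F4: nearly flat outer region with a deep minimum at the origin
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)

# Example: evaluate a random 30-dimensional point in the search range of F1
x = np.random.uniform(-5, 5, size=30)
print(sphere(x), rastrigin(x), ackley(x))
```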
Table 2. Comparison of the results of the ABC, aABC, ABCES and HABCES algorithms for solving high-dimensional global optimization problems.
Function | Search Range | D | Optimal Value | ABC Mean | ABC Std. | ABCES Mean | ABCES Std. | aABC Mean | aABC Std. | HABCES (Proposed) Mean | HABCES Std.
F1 | [−5, 5] | 30 | 0 | 3.0 × 10^−4 | 1.1 × 10^−3 | 2.5 × 10^−6 | 2.4 × 10^−6 | 2.2 × 10^−5 | 6.7 × 10^−5 | 5.7 × 10^−12 | 8.2 × 10^−12
F2 | [−1, 1] | 30 | −1 | −0.999972 | 8.4 × 10^−5 | −1 | 3.0 × 10^−7 | −0.999995 | 1.0 × 10^−5 | −1 | 0
F3 | [−10, 10] | 30 | 0 | 4.8 × 10^−1 | 1.15215 | 2.4 × 10^−2 | 7.2 × 10^−2 | 2.7 × 10^−1 | 6.9 × 10^−1 | 9.0 × 10^−9 | 2.0 × 10^−8
F4 | [−32, 32] | 30 | 0 | 5.3 × 10^−1 | 6.0 × 10^−1 | 1.0 × 10^−1 | 7.5 × 10^−2 | 3.7 × 10^−1 | 1.59738 | 3.0 × 10^−5 | 1.4 × 10^−5
F5 | [−10, 10] | 30 | 0 | 1.8 × 10^2 | 3.6 × 10^1 | 7.7 × 10^1 | 3.1 × 10^1 | 1.1 × 10^2 | 2.8 × 10^1 | 4.7 × 10^1 | 2.1 × 10^1
F6 | [−100, 100] | 30 | 0 | 1.3 × 10^−1 | 4.3 × 10^−1 | 0 | 0 | 2.7 × 10^−1 | 5.7 × 10^−1 | 0 | 0
F7 | [−1, 1] | 30 | 0 | 9.4 × 10^−9 | 2.9 × 10^−8 | 1.5 × 10^−12 | 2.8 × 10^−12 | 6.9 × 10^−12 | 2.6 × 10^−11 | 9.1 × 10^−18 | 8.0 × 10^−18
F8 | [−600, 600] | 30 | 0 | 1.7 × 10^−1 | 2.8 × 10^−1 | 1.7 × 10^−2 | 2.2 × 10^−2 | 5.3 × 10^−2 | 6.7 × 10^−2 | 2.5 × 10^−3 | 6.8 × 10^−3
F9 | [−10, 10] | 30 | 0 | 2.8 × 10^−1 | 1.48421 | 1.2 × 10^−5 | 9.3 × 10^−6 | 2.1 × 10^−3 | 6.8 × 10^−3 | 2.4 × 10^−10 | 7.2 × 10^−10
F10 | [−5, 5] | 30 | 0 | 4.46395 | 1.67945 | 5.64602 | 1.7277 | 6.35917 | 2.26352 | 6.9 × 10^−2 | 2.3 × 10^−1
F11 | [−50, 50] | 30 | 0 | 1.8 × 10^−4 | 4.0 × 10^−4 | 1.9 × 10^−5 | 1.6 × 10^−5 | 4.3 × 10^−5 | 9.6 × 10^−5 | 5.9 × 10^−11 | 2.0 × 10^−10
F12 | [−50, 50] | 30 | 0 | 4.9 × 10^−3 | 2.0 × 10^−2 | 1.4 × 10^−4 | 1.0 × 10^−4 | 8.3 × 10^−4 | 2.5 × 10^−3 | 2.2 × 10^−10 | 2.9 × 10^−10
F13 | [−5, 5] | 30 | 0 | 3.2 × 10^1 | 2.8 × 10^1 | 2.7 × 10^1 | 2.5 × 10^1 | 2.4 × 10^1 | 2.0 × 10^1 | 2.2 × 10^1 | 2.1 × 10^1
F14 | [−10, 10] | 30 | 0 | 7.1 × 10^−4 | 2.1 × 10^−3 | 3.8 × 10^−5 | 3.9 × 10^−5 | 7.3 × 10^−5 | 1.3 × 10^−4 | 1.1 × 10^−10 | 2.3 × 10^−10
F15 | [−5, 10] | 30 | 0 | 3.1 × 10^2 | 3.6 × 10^1 | 9.3 × 10^1 | 3.4 × 10^1 | 2.3 × 10^2 | 3.9 × 10^1 | 8.0 × 10^1 | 2.5 × 10^1
F16 | [−100, 100] | 30 | 0 | 1.33847 | 7.19865 | 1.4 × 10^−5 | 1.8 × 10^−5 | 2.0 × 10^−3 | 4.7 × 10^−3 | 1.2 × 10^−10 | 2.4 × 10^−10
F17 | [−5, 5] | 30 | −1174.98 | −1142.88 | 33.1328 | −1171.84 | 6.70843 | −1174.18 | 2.03502 | −1174.98 | 0
F18 | [−10, 10] | 30 | 0 | 2.9 × 10^1 | 6.2 × 10^1 | 8.7 × 10^−2 | 2.7 × 10^−1 | 3.0 × 10^1 | 1.1 × 10^2 | 3.1 × 10^−7 | 3.0 × 10^−7
F19 | [−100, 100] | 30 | 0 | 3.92817 | 6.78767 | 6.3 × 10^−1 | 6.3 × 10^−1 | 1.9 × 10^−1 | 3.7 × 10^−1 | 2.7 × 10^−4 | 6.8 × 10^−4
F20 | [−5, 5] | 30 | 0 | 5.7 × 10^−2 | 3.5 × 10^−2 | 1.5 × 10^−3 | 1.1 × 10^−3 | 4.1 × 10^−2 | 1.6 × 10^−2 | 2.3 × 10^−4 | 2.6 × 10^−4
Table 3. Comparison of the results of the ABC, aABC, ABCES and the HABCES algorithms when solving low-dimensional global optimization problems.
Function | Search Range | D | Optimal Value | ABC Mean | ABC Std. | ABCES Mean | ABCES Std. | aABC Mean | aABC Std. | HABCES (Proposed) Mean | HABCES Std.
F21 | [−500, 500] | 2 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0
F22 | [−4, 4] | 2 | 0 | 1.5 × 10^−3 | 1.5 × 10^−3 | 3.5 × 10^−4 | 7.1 × 10^−4 | 1.2 × 10^−4 | 2.2 × 10^−4 | 4.4 × 10^−6 | 1.4 × 10^−5
F23 | [−100, 100] | 2 | 0 | 7.1 × 10^−15 | 3.5 × 10^−14 | 0 | 0 | 0 | 0 | 0 | 0
F24 | [−100, 100] | 2 | 0 | 1.1 × 10^−9 | 4.8 × 10^−9 | 2.3 × 10^−10 | 8.5 × 10^−10 | 4.8 × 10^−14 | 2.6 × 10^−13 | 0 | 0
F25 | [−5, 5] | 2 | 0 | 1.7 × 10^−9 | 5.8 × 10^−9 | 1.4 × 10^−10 | 5.4 × 10^−10 | 1.1 × 10^−17 | 8.1 × 10^−18 | 2.9 × 10^−18 | 3.1 × 10^−18
F26 | [−5, 5] | 2 | 0 | 1.9 × 10^−17 | 6.5 × 10^−17 | 4.9 × 10^−18 | 4.7 × 10^−18 | 7.5 × 10^−18 | 8.4 × 10^−18 | 1.5 × 10^−18 | 1.4 × 10^−18
F27 | [−5, 5] | 2 | 0 | 1.5 × 10^−7 | 3.5 × 10^−7 | 3.3 × 10^−6 | 1.3 × 10^−5 | 3.7 × 10^−11 | 2.0 × 10^−10 | 3.1 × 10^−18 | 2.9 × 10^−18
F28 | [−1, 1] | 2 | 0 | 1.1 × 10^−2 | 2.6 × 10^−2 | 2.7 × 10^−2 | 4.1 × 10^−2 | 1.1 × 10^−3 | 1.9 × 10^−3 | 5.9 × 10^−3 | 1.6 × 10^−2
F29 | [−10, 10] | 2 | 0 | 1.8 × 10^−3 | 3.8 × 10^−3 | 1.5 × 10^−4 | 2.7 × 10^−4 | 3.2 × 10^−5 | 4.7 × 10^−5 | 7.5 × 10^−6 | 1.1 × 10^−5
F30 | [−10, 10] | 2 | −0.18465 | −0.152569 | 0.027513 | −0.175058 | 0.0269035 | −0.148548 | 0.0326784 | −0.186899 | 0.0396051
F31 | [−10, 10] | 2 | 0 | 8.8 × 10^−14 | 2.4 × 10^−13 | 8.6 × 10^−11 | 1.8 × 10^−10 | 1.7 × 10^−17 | 2.1 × 10^−17 | 1.7 × 10^−18 | 1.7 × 10^−18
F32 | [−10, 10] | 2 | 0.9 | 0.90009 | 4.5 × 10^−4 | 0.9 | 5.6 × 10^−7 | 0.9 | 4.4 × 10^−16 | 0.906667 | 2.5 × 10^−2
F33 | [−100, 100] | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
F34 | [−100, 100] | 2 | 0 | 5.6 × 10^−14 | 2.3 × 10^−13 | 7.4 × 10^−18 | 2.8 × 10^−17 | 0 | 0 | 0 | 0
F35 | [−2, 2] | 2 | 0 | 2.1 × 10^−18 | 2.8 × 10^−18 | 4.9 × 10^−19 | 8.1 × 10^−19 | 5.1 × 10^−19 | 6.2 × 10^−19 | 2.5 × 10^−20 | 4.7 × 10^−20
F36 | [−5, 15] | 2 | 0.397887 | 0.397887 | 6.0 × 10^−7 | 0.397887 | 3.40 × 10^−7 | 0.397887 | 1.67 × 10^−16 | 0.397887 | 1.67 × 10^−16
F37 | [−10, 10] | 2 | −1 | −1 | 0 | −1 | 0 | −1 | 0 | −1 | 0
F38 | [−5, 5] | 2 | −1.03163 | −1.03163 | 0 | −1.03163 | 0 | −1.03163 | 0 | −1.03163 | 0
F39 | [−10, 10] | 2 | 0 | 2.7 × 10^−2 | 3.4 × 10^−2 | 5.5 × 10^−3 | 9.1 × 10^−3 | 1.7 × 10^−3 | 3.5 × 10^−3 | 1.2 × 10^−3 | 2.4 × 10^−3
F40 | [0, 1] | 3 | −3.86278 | −3.86278 | 4.4 × 10^−16 | −3.86278 | 4.4 × 10^−16 | −3.86278 | 4.4 × 10^−16 | −3.86278 | 4.4 × 10^−16
F41 | [0, 1] | 6 | −3.32237 | −3.32232 | 7.0 × 10^−5 | −3.32231 | 1.1 × 10^−4 | −3.3031 | 4.3 × 10^−2 | −3.29058 | 5.3 × 10^−2
F42 | [−10, 10] | 4 | 0 | 2.41054 | 2.5185 | 7.1 × 10^−1 | 6.8 × 10^−1 | 8.4 × 10^−1 | 8.1 × 10^−1 | 4.1 × 10^−1 | 5.3 × 10^−1
F43 | [−10, 10] | 4 | 0 | 1.1 × 10^−3 | 3.9 × 10^−3 | 2.3 × 10^−5 | 5.8 × 10^−5 | 7.9 × 10^−4 | 4.1 × 10^−3 | 1.8 × 10^−7 | 4.6 × 10^−7
Table 4. Comparison of the results of the ABC, aABC, ABCES and HABCES algorithms when solving parity problems.
Examples | Number of Generations | Network Structure | ABC Mean | ABC Std. | ABCES Mean | ABCES Std. | aABC Mean | aABC Std. | HABCES (Proposed) Mean | HABCES Std.
XOR | 50 | 2-5-1 | 0.0270955 | 0.0653951 | 0.0134276 | 0.024631 | 0.0106609 | 0.0167402 | 0.00406869 | 0.00643696
XOR | 50 | 2-10-1 | 0.00440856 | 0.00875478 | 0.00370912 | 0.0070464 | 0.00124831 | 0.00196459 | 0.00051696 | 0.00079469
3-bit parity | 100 | 3-5-1 | 0.0370702 | 0.0325831 | 0.0413162 | 0.0306131 | 0.031667 | 0.0318723 | 0.023972 | 0.0280218
3-bit parity | 100 | 3-10-1 | 0.0252005 | 0.0380509 | 0.0170671 | 0.0263739 | 0.0200067 | 0.0234835 | 0.00855215 | 0.00924428
4-bit parity | 100 | 4-5-1 | 0.128162 | 0.0415436 | 0.135707 | 0.0390414 | 0.128552 | 0.0290362 | 0.118425 | 0.0281676
4-bit parity | 100 | 4-10-1 | 0.104024 | 0.0425457 | 0.115735 | 0.0388215 | 0.10854 | 0.0391539 | 0.0913625 | 0.0297292
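In these experiments the network weights are the decision variables of the optimizer, and the tabulated values are the resulting mean squared errors. The sketch below illustrates this setup for the parity problems; the sigmoid activations, the binary input encoding, and the flat weight layout are assumptions made for this example rather than details taken from the paper.

```python
import numpy as np
from itertools import product

def parity_dataset(n_bits):
    # All 2^n binary input patterns with their parity (1 if an odd number of ones)
    X = np.array(list(product([0, 1], repeat=n_bits)), dtype=float)
    y = X.sum(axis=1) % 2
    return X, y

def mlp_mse(weights, X, y, n_hidden):
    # Single-hidden-layer MLP (n_in-n_hidden-1) with sigmoid units;
    # 'weights' is the flat vector a population-based trainer would optimize.
    n_in = X.shape[1]
    w1 = weights[:n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = weights[n_in * n_hidden:n_in * n_hidden + n_hidden]
    w2 = weights[n_in * n_hidden + n_hidden:-1]
    b2 = weights[-1]
    h = 1.0 / (1.0 + np.exp(-(X @ w1 + b1)))
    out = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))
    return np.mean((y - out) ** 2)

X, y = parity_dataset(3)                # 3-bit parity problem
dim = 3 * 5 + 5 + 5 + 1                 # number of weights in a 3-5-1 network
print(mlp_mse(np.random.uniform(-1, 1, dim), X, y, n_hidden=5))
```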
Table 5. Information on the nonlinear static systems used.
System | Equation | Inputs | Output | Number of Training/Test Data | Range
S1 | $y=2\sin(\pi x_1)$ | x1 | y | 80/20 | [0, 1]
S2 | $y=10.391\left[\left(x_1-0.4\right)\left(x_2-0.6\right)+0.36\right]$ | x1, x2 | y | 80/20 | [0, 1]
S3 | $y=\tanh\left(x_1+x_2-11\right)$ | x1, x2 | y | 80/20 | [0, 1]
S4 | $y=\left(1+x_1^{0.5}+x_2^{-1}+x_3^{-1.5}\right)^2$ | x1, x2, x3 | y | 173/43 | [1, 6]
S5 | $y=\left(x_1-5.5\right)^2+\left(x_2-5.5\right)^2+x_3^2$ | x1, x2, x3 | y | 100/25 | [0, 1]
S6 | $y=e^{2x_1\sin(\pi x_4)}+\sin\left(x_2x_3\right)$ | x1, x2, x3, x4 | y | 100/25 | [−0.25, 0.25]
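Training and test samples for these static systems can be generated directly from the equations above. The snippet below is a minimal sketch for S1; uniform random sampling of the input is an assumption for illustration, while the 80/20 split follows Table 5.

```python
import numpy as np

rng = np.random.default_rng(0)

# S1: y = 2*sin(pi*x1), input sampled in [0, 1];
# 80 training and 20 test samples as listed in Table 5.
x_train = rng.uniform(0.0, 1.0, size=80)
x_test = rng.uniform(0.0, 1.0, size=20)
y_train = 2.0 * np.sin(np.pi * x_train)
y_test = 2.0 * np.sin(np.pi * x_test)
```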
Table 6. The results obtained when using the HABCES algorithm for nonlinear static system identification.
System | Network Structure | Train Mean | Train Sd. | Test Mean | Test Sd.
S1 | 1-4-1 | 0.00150 | 0.00061 | 0.00258 | 0.00086
S1 | 1-8-1 | 0.00109 | 0.00050 | 0.00201 | 0.00086
S1 | 1-12-1 | 0.00072 | 0.00031 | 0.00161 | 0.00068
S2 | 2-4-1 | 0.00128 | 0.00063 | 0.00691 | 0.00312
S2 | 2-8-1 | 0.00073 | 0.00063 | 0.00521 | 0.00494
S2 | 2-12-1 | 0.00058 | 0.00027 | 0.00453 | 0.00233
S3 | 2-4-1 | 0.00014 | 0.00007 | 0.00297 | 0.00173
S3 | 2-8-1 | 0.00016 | 0.00011 | 0.00332 | 0.00213
S3 | 2-12-1 | 0.00025 | 0.00012 | 0.00404 | 0.00191
S4 | 3-4-1 | 0.00214 | 0.00073 | 0.00253 | 0.00075
S4 | 3-8-1 | 0.00154 | 0.00068 | 0.00207 | 0.00078
S4 | 3-12-1 | 0.00174 | 0.00085 | 0.00212 | 0.00109
S5 | 3-4-1 | 0.00082 | 0.00050 | 0.00463 | 0.00391
S5 | 3-8-1 | 0.00085 | 0.00053 | 0.00893 | 0.00996
S5 | 3-12-1 | 0.00129 | 0.00085 | 0.01145 | 0.00980
S6 | 4-4-1 | 0.00148 | 0.00053 | 0.00156 | 0.00086
S6 | 4-8-1 | 0.00157 | 0.00044 | 0.00186 | 0.00071
S6 | 4-12-1 | 0.00164 | 0.00056 | 0.00229 | 0.00081
Table 7. Comparison of the results of the ABC, aABC, ABCES and HABCES algorithms for nonlinear static system identification.
Examples | Network Structure | ABC Train | ABC Test | ABCES Train | ABCES Test | aABC Train | aABC Test | HABCES (Proposed) Train | HABCES Test
S1 | 1-12-1 | 0.00099 | 0.00169 | 0.00100 | 0.00187 | 0.00086 | 0.00153 | 0.00072 | 0.00161
S2 | 2-12-1 | 0.00110 | 0.00590 | 0.00088 | 0.00585 | 0.00086 | 0.00480 | 0.00058 | 0.00453
S3 | 2-4-1 | 0.00046 | 0.00459 | 0.00031 | 0.00386 | 0.00019 | 0.00354 | 0.00014 | 0.00297
S4 | 3-8-1 | 0.00235 | 0.00278 | 0.00180 | 0.00220 | 0.00170 | 0.00211 | 0.00154 | 0.00207
S5 | 3-4-1 | 0.00183 | 0.00709 | 0.00128 | 0.00570 | 0.00118 | 0.01009 | 0.00082 | 0.00463
S6 | 4-4-1 | 0.00208 | 0.00227 | 0.00161 | 0.00167 | 0.00152 | 0.00152 | 0.00148 | 0.00156
Table 8. Performance increase achieved when using HABCES for nonlinear static system identification.
System | ABC-HABCES Train (%) | ABC-HABCES Test (%) | ABCES-HABCES Train (%) | ABCES-HABCES Test (%) | aABC-HABCES Train (%) | aABC-HABCES Test (%)
S1 | 27.27 | 4.73 | 28.00 | 13.90 | 16.27 | -
S2 | 47.27 | 23.22 | 34.09 | 22.56 | 32.56 | 5.62
S3 | 69.57 | 35.29 | 54.84 | 23.05 | 26.31 | 16.10
S4 | 34.47 | 25.54 | 14.44 | 5.91 | 9.41 | 1.90
S5 | 55.19 | 34.70 | 35.94 | 18.77 | 30.51 | 54.11
S6 | 28.85 | 31.27 | 8.07 | 6.58 | 2.63 | -
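The percentages above are consistent with the relative reduction in mean error with respect to the compared algorithm, computed from the values in Table 7; for example, the S3 training errors 0.00046 (ABC) and 0.00014 (HABCES) give 69.57%. A small helper reproducing this calculation is sketched below (the exact formula used by the author is an assumption, but it matches the tabulated values):

```python
def improvement(baseline_mse, habces_mse):
    # Percentage reduction in error relative to the compared algorithm
    return 100.0 * (baseline_mse - habces_mse) / baseline_mse

# S3 training errors from Table 7: ABC = 0.00046, HABCES = 0.00014
print(round(improvement(0.00046, 0.00014), 2))  # 69.57
```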
Table 9. Nonlinear dynamic systems used.
System | Equation | Inputs | Output | Number of Training/Test Data
D1 | $y(k+1)=\dfrac{y(k)\left(y(k-1)+2\right)\left(y(k)+2.5\right)}{8.5+y(k)^2+y(k-1)^2}+u(k)$; $u(k)=\begin{cases}2\cos(2\pi k/100),&k\le 200\\1.2\sin(2\pi k/20),&200<k\le 500\end{cases}$ | y(k), y(k−1), u(k) | y(k+1) | 400/100
D2 | $y(k+1)=\dfrac{y(k)\,y(k-1)\left(y(k)+2.5\right)}{1+y(k)^2+y(k-1)^2}+u(k)$; $u(k)=\sin(2\pi k/25)$ | y(k), y(k−1), u(k) | y(k+1) | 200/50
D3 | $y(k+1)=\dfrac{y(k)}{1+y(k)^2}+u(k)^3$; $u(k)=\sin(2\pi k/100)$ | y(k), u(k) | y(k+1) | 200/50
D4 | $y(k+1)=\dfrac{y(k)}{1+y(k-1)}+u(k)^3$; $u(k)=\sin(\pi k/25)$ | y(k), y(k−1), u(k) | y(k+1) | 200/50
D5 | $y(k+1)=\dfrac{0.9\,y(k)}{1+y(k)^2}+u(k)$; $u(k)=0.75\sin\left(\pi(k-1)/180\right)$ | y(k), u(k) | y(k+1) | 400/100
D6 | $y(k+1)=0.72\,y(k)+0.025\,y(k-1)\,u(k)+0.01\,u(k-1)^2+0.2\,u(k-2)$; $u(k)=\begin{cases}\sin(\pi k/25),&k<250\\1.0,&250\le k<500\\-1.0,&500\le k<750\\0.3\sin(\pi k/25)+0.1\sin(\pi k/32)+0.6\sin(\pi k/10),&750\le k\le 1000\end{cases}$ | y(k), y(k−1), u(k), u(k−1) | y(k+1) | 900/100
D7 | $y(k+1)=0.3\,y(k)+0.6\,y(k-1)+N\left[u(k)\right]$; $N(u)=0.6\sin(\pi u)+0.3\sin(3\pi u)+0.1\sin(5\pi u)$; $u(k)=\sin(2\pi k/250)$ | y(k), y(k−1), u(k) | y(k+1) | 400/100
D8 | $y(k+1)=\dfrac{y(k)\,y(k-1)\,y(k-2)\,u(k-1)\left(y(k-2)-1\right)+u(k)}{1+y(k-1)^2+y(k-2)^2}$; $u(k)=\begin{cases}\sin(2\pi k/250),&k\le 500\\0.8\sin(2\pi k/250)+0.2\sin(2\pi k/25),&k>500\end{cases}$ | y(k), u(k) | y(k+1) | 800/200
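Identification data for these systems are obtained by iterating the difference equations with the given input signals. The sketch below does this for D3; the zero initial condition and the split of the resulting series into 200 training and 50 test pairs are assumptions made for illustration.

```python
import numpy as np

# D3: y(k+1) = y(k)/(1 + y(k)^2) + u(k)^3 with u(k) = sin(2*pi*k/100).
# Builds the (y(k), u(k)) -> y(k+1) pairs used for identification.
N = 250
y = np.zeros(N + 1)
u = np.sin(2 * np.pi * np.arange(N) / 100)
for k in range(N):
    y[k + 1] = y[k] / (1.0 + y[k] ** 2) + u[k] ** 3

X = np.column_stack([y[:N], u])    # network inputs y(k), u(k)
t = y[1:N + 1]                     # target output y(k+1)
X_train, t_train = X[:200], t[:200]
X_test, t_test = X[200:], t[200:]
```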
Table 10. The results obtained using the HABCES algorithm for nonlinear dynamic system identification.
System | Network Structure | Train Mean | Train Sd. | Test Mean | Test Sd.
D1 | 3-4-1 | 0.00051 | 0.00024 | 0.00088 | 0.00032
D1 | 3-8-1 | 0.00039 | 0.00015 | 0.00072 | 0.00030
D1 | 3-12-1 | 0.00052 | 0.00022 | 0.00106 | 0.00075
D2 | 3-4-1 | 0.00084 | 0.00026 | 0.00093 | 0.00031
D2 | 3-8-1 | 0.00075 | 0.00024 | 0.00096 | 0.00053
D2 | 3-12-1 | 0.00066 | 0.00023 | 0.00082 | 0.00041
D3 | 2-4-1 | 0.00070 | 0.00021 | 0.00095 | 0.00046
D3 | 2-8-1 | 0.00057 | 0.00009 | 0.00070 | 0.00022
D3 | 2-12-1 | 0.00067 | 0.00017 | 0.00085 | 0.00039
D4 | 3-4-1 | 0.00426 | 0.00078 | 0.00440 | 0.00081
D4 | 3-8-1 | 0.00348 | 0.00038 | 0.00367 | 0.00048
D4 | 3-12-1 | 0.00354 | 0.00040 | 0.00367 | 0.00052
D5 | 2-4-1 | 0.00026 | 0.00012 | 0.00026 | 0.00014
D5 | 2-8-1 | 0.00024 | 0.00013 | 0.00024 | 0.00011
D5 | 2-12-1 | 0.00026 | 0.00011 | 0.00030 | 0.00022
D6 | 4-4-1 | 0.00067 | 0.00031 | 0.00105 | 0.00045
D6 | 4-8-1 | 0.00079 | 0.00029 | 0.00133 | 0.00048
D6 | 4-12-1 | 0.00086 | 0.00042 | 0.00121 | 0.00063
D7 | 3-4-1 | 0.00035 | 0.00017 | 0.00063 | 0.00031
D7 | 3-8-1 | 0.00030 | 0.00014 | 0.00050 | 0.00022
D7 | 3-12-1 | 0.00034 | 0.00020 | 0.00059 | 0.00031
D8 | 2-4-1 | 0.00021 | 0.00009 | 0.00040 | 0.00011
D8 | 2-8-1 | 0.00017 | 0.00006 | 0.00031 | 0.00008
D8 | 2-12-1 | 0.00017 | 0.00006 | 0.00035 | 0.00011
Table 11. Comparison of the results of the ABC, aABC, ABCES and HABCES algorithms for nonlinear dynamic system identification.
Examples | Network Structure | ABC Train | ABC Test | ABCES Train | ABCES Test | aABC Train | aABC Test | HABCES (Proposed) Train | HABCES Test
D1 | 3-8-1 | 0.00061 | 0.00095 | 0.00046 | 0.00087 | 0.00047 | 0.00097 | 0.00039 | 0.00072
D2 | 3-12-1 | 0.00092 | 0.00105 | 0.00082 | 0.00090 | 0.00083 | 0.00101 | 0.00066 | 0.00082
D3 | 2-8-1 | 0.00070 | 0.00090 | 0.00070 | 0.00098 | 0.00060 | 0.00077 | 0.00057 | 0.00070
D4 | 3-8-1 | 0.00372 | 0.00389 | 0.00370 | 0.00384 | 0.00367 | 0.00375 | 0.00348 | 0.00367
D5 | 2-4-1 | 0.00039 | 0.00043 | 0.00034 | 0.00040 | 0.00028 | 0.00029 | 0.00026 | 0.00026
D6 | 4-4-1 | 0.00126 | 0.00143 | 0.00108 | 0.00160 | 0.00079 | 0.00110 | 0.00067 | 0.00105
D7 | 3-8-1 | 0.00049 | 0.00067 | 0.00041 | 0.00064 | 0.00036 | 0.00056 | 0.00030 | 0.00050
D8 | 2-8-1 | 0.00028 | 0.00045 | 0.00023 | 0.00039 | 0.00019 | 0.00037 | 0.00017 | 0.00031
Table 12. Performance increase achieved using HABCES for nonlinear dynamic system identification.
System | ABC-HABCES Train (%) | ABC-HABCES Test (%) | ABCES-HABCES Train (%) | ABCES-HABCES Test (%) | aABC-HABCES Train (%) | aABC-HABCES Test (%)
D1 | 36.06 | 24.21 | 15.21 | 17.24 | 17.02 | 25.77
D2 | 28.26 | 21.90 | 19.51 | 8.89 | 20.48 | 18.81
D3 | 18.57 | 22.22 | 18.57 | 28.57 | 5.00 | 9.09
D4 | 6.45 | 5.65 | 5.94 | 4.42 | 5.17 | 2.13
D5 | 33.33 | 39.53 | 23.53 | 35.00 | 7.14 | 10.34
D6 | 46.82 | 26.57 | 37.96 | 34.37 | 15.19 | 4.55
D7 | 38.77 | 25.37 | 26.82 | 21.87 | 16.66 | 10.71
D8 | 39.28 | 31.11 | 26.08 | 20.51 | 10.52 | 16.21
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.