Article

Estimating the Highest Time-Step in Numerical Methods to Enhance the Optimization of Chaotic Oscillators

by Martín Alejandro Valencia-Ponce 1,†, Esteban Tlelo-Cuautle 1,*,† and Luis Gerardo de la Fraga 2,†

1 Department of Electronics, INAOE, Tonantzintla, Puebla 72840, Mexico
2 Computer Science Department, CINVESTAV, Av. IPN 2508, Mexico City 07360, Mexico
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Submission received: 9 July 2021 / Revised: 5 August 2021 / Accepted: 9 August 2021 / Published: 13 August 2021
(This article belongs to the Special Issue Optimization Theory and Applications)

Abstract
The execution time required to perform the numerical simulation of a chaotic oscillator depends mainly on the time-step h. This paper shows that the optimization of chaotic oscillators can be enhanced by estimating the highest h in either one-step or multi-step methods. Four chaotic oscillators are used as a case study, and the optimization of their Kaplan-Yorke dimension (D_KY) is performed by applying three metaheuristics, namely the particle swarm optimization (PSO), many optimizing liaison (MOL), and differential evolution (DE) algorithms. Three representative one-step and three multi-step methods are used to solve the four chaotic oscillators, and the estimation of the highest h is obtained from their stability analysis. The optimization results show the effectiveness of using a high h value for the six numerical methods in reducing execution time while maximizing the positive Lyapunov exponent (LE+) and D_KY of the chaotic oscillators by applying the PSO, MOL, and DE algorithms.

1. Introduction

Chaotic oscillators are dynamical systems modeled by nonlinear ordinary differential equations (ODEs), in the form of initial value problems ẋ = f(x), subject to the initial condition x(t0) = x0. As is already known, the main characteristic of a chaotic system is its high sensitivity to initial conditions, meaning that a very small variation causes significant differences in the system's response, thus causing random and disorderly behavior [1]. The term “chaos” is very often associated with disorder and unpredictability, so that chaotic oscillators are good candidates to develop cryptographic applications [2,3], design random number generators [4,5], and secure communication systems [6,7,8], among others.
The biggest challenge when solving nonlinear ODEs that model chaotic behavior lies in choosing both an appropriate numerical method and the highest step-size h that reduces execution time. This matters when optimizing characteristics of chaotic oscillators because a small h greatly increases the execution time. On the one hand, choosing the highest h is not a trivial task; in fact, one must analyze the stability of the numerical method being applied to guarantee convergence to the solution. On the other hand, among the available one-step and multi-step numerical methods, whether explicit or implicit, one must also choose the method with the lowest execution time to increase the throughput and operating frequency of a chaotic system. For instance, in linear dynamical systems, h can be directly determined from the evaluation of the eigenvalues, which are related to the natural frequencies of the system. However, in nonlinear dynamical systems, the problem is more complex, and one must analyze both the eigenvalues and the stability of the numerical method.
Recently, it has been demonstrated that both stochastic nature-inspired metaheuristics and deterministic global optimization methods are competitive and surpass one another depending on the available budget of function evaluations [9,10]. For instance, the authors in [11] applied a DIRECT-type technique [12] to a black-box global optimization problem with expensive function evaluations, which is challenging for numerical methods due to the practical limits on computational budgets often required by intelligent systems. Both DIRECT-type techniques and metaheuristics can be applied to optimize characteristics of chaotic oscillators, such as maximizing the positive Lyapunov exponent (LE+) or the Kaplan-Yorke dimension (D_KY), where the challenge is solving the mathematical model multiple times until a minimum error is achieved or a desired number of generations is executed. Therefore, the number of calls to the model and the time required to solve it are the main issues. In this manner, to reduce the execution time, we propose estimating the maximum h of a numerical method. The case study includes the application of three representative one-step and three multi-step numerical methods to solve four well-known chaotic oscillators, whose LE+ and D_KY are maximized by applying three metaheuristics, namely the particle swarm optimization (PSO), many optimizing liaison (MOL), and differential evolution (DE) algorithms. These metaheuristics were chosen because they have been successfully applied to optimize chaotic oscillators, such as in [13], where PSO is applied to solve the parameter identification problem of a fractional-order discrete chaotic system. Additionally, in [14], PSO is applied to chaotic systems formulating the problem as a multi-dimensional one, and in [15], PSO is applied again to optimize the parameters of the Rössler chaotic system. More recently, PSO has been used to explain a bifurcation parameter detection strategy, as shown in [16], and the MOL and DE algorithms have also been applied to optimize integer/fractional-order chaotic systems in [17]. In this manner, the PSO, MOL, and DE algorithms are applied herein using the same population and generations, in which LE+ and D_KY are evaluated by the time-series analysis software (TISEAN) [18].
The rest of the manuscript is organized as follows: Section 2 shows four well-known chaotic oscillators taken as a case study, namely: Lorenz, Rössler, Lü, and an autonomous chaotic system introduced in [19]. Section 3 describes the six numerical methods applied to solve the four chaotic systems. Section 4 details the stability region of the six numerical methods and the computation of eigenvalues to estimate the highest h. Section 5 describes PSO, MOL, and DE algorithms. Section 6 describes the maximization of L E + and D K Y by applying PSO, MOL, and DE to the four chaotic oscillators. A discussion on the optimization results is given in Section 7. Finally, the conclusions are summarized in Section 8.

2. Chaotic Oscillators

Among the plethora of chaotic/hyperchaotic oscillators, which are classified according to their dynamical characteristics as having self-excited or hidden attractors, this section presents four well-known models. The most well-known chaotic oscillator is the one introduced by Lorenz, whose mathematical model is given in (1) [20]. In this oscillator, chaotic behavior is observed by setting σ = 10, ρ = 28, and β = 8/3, with initial conditions x0 = y0 = z0 = 0.1. Figure 1 shows the phase-space portraits of the Lorenz chaotic system.
ẋ = σ(y − x),
ẏ = x(ρ − z) − y,
ż = xy − βz.    (1)
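As an illustration of how the models in this section are simulated, the following minimal Python sketch (illustrative code, not taken from this work) integrates the Lorenz system (1) with the parameters and initial conditions given above, using the Forward Euler iteration described later in Section 3 with h = 0.01:

```python
import numpy as np

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # Right-hand side of (1)
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def forward_euler(f, v0, h, steps):
    # Explicit one-step integration; h is the time-step discussed throughout the paper
    v = np.array(v0, dtype=float)
    orbit = [v.copy()]
    for _ in range(steps):
        v = v + h * f(v)
        orbit.append(v.copy())
    return np.array(orbit)

orbit = forward_euler(lorenz, [0.1, 0.1, 0.1], h=0.01, steps=20000)
```

Any of the other three models can be simulated in the same way by replacing the right-hand side function.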
The second oscillator considered herein is the Rössler one, whose mathematical model is given in (2) [21]. The chaotic behavior is observed by setting a = b = 0.2 and c = 5.7 , with initial conditions x 0 = y 0 = z 0 = 1 . The phase-space portraits among its state variables are shown in Figure 2.
ẋ = −y − z,
ẏ = x + ay,
ż = b + z(x − c).    (2)
The third case study is the Lü chaotic system given in (3) [22]. In this oscillator, the nonlinear function is approximated by a piecewise-linear (PWL) one, which is known as a saturated non-linear function (SNLF), as described in (4). This chaotic system has the advantage of adding more PWL functions to generate multi-directional and multi-scroll chaotic attractors [17].
ẋ = y,
ẏ = z,
ż = −ax − by − cz + d1 f(x).    (3)
f(x) = −k if x < −bp,  f(x) = m x if −bp ≤ x ≤ bp,  f(x) = k if x > bp.    (4)
The Lü chaotic system generates chaotic behavior when the coefficients are set to a = b = c = d1 = 0.7, with initial conditions x0 = y0 = z0 = 0.1. In this paper, the PWL function has a saturation level of k = 5 and break points of bp = 1, so that the slope of the SNLF can be evaluated as m = k/bp. The phase-space portraits are shown in Figure 3.
The fourth chaotic system is the autonomous chaotic oscillator described in (5) and introduced in [19]. This oscillator has a quadratic term x², and the chaotic behavior arises by setting a = 1, b = 1.1, and c = 0.42, and using initial conditions x0 = 0.1 and y0 = z0 = 0. The phase-space portraits are shown in Figure 4.
ẋ = y,
ẏ = z,
ż = −ax − by − cz − x².    (5)

3. Numerical Methods

Solving an initial value problem requires the use of numerical methods, which in general can be of a one-step or multi-step type [23]. In both cases, the mathematical model is solved by choosing an appropriate h with the aim of reducing execution time. Besides, as shown in the following section, h is related to the stability of the numerical method [24].
As mentioned in [25], the main objective of a numerical method is to provide an acceptable approximation of the behavior of a continuous-time dynamical system. However, as a huge number of methods exist, this section shows the application of three representative one-step and three multi-step methods. Some of them are explicit and others implicit. In the former case, the method requires only past values at each iteration n to update the value at n + 1, while an implicit method uses the unknown value at iteration n + 1 itself, so that an explicit method is required to estimate f_{n+1}.
The simplest one-step explicit method is known as Forward Euler (FE), named in honor of Leonhard Euler; its iterative equation is given in Table 1. On the other hand, the simplest one-step implicit method is known as Backward Euler (BE), whose iterative equation is also given in Table 1 and clearly requires an additional calculation to approximate the solution at iteration n + 1. Therefore, the Forward Euler method can be applied first to predict x_{n+1} and evaluate f(x_{n+1}, t_{n+1}), and afterwards one can obtain x_{n+1} from the implicit formula. Among the one-step methods, the Runge-Kutta family is quite important and is very often used to simulate chaotic oscillators. The fourth-order Runge-Kutta (RK4) method is given in Table 1; it was developed around 1900 by Carl David Tolmé Runge and Martin Wilhelm Kutta [26]. RK4 is the most widely used due to its high accuracy, but when implemented on digital hardware, it requires more resources than the FE and BE methods, as already shown in [23].
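To make the one-step iterations in Table 1 concrete, a minimal Python sketch is given below (illustrative code; it assumes f(x, t) returns the right-hand side of the ODE, and the BE step uses an FE prediction as described above):

```python
def fe_step(f, x, t, h):
    # Forward Euler: x_{n+1} = x_n + h f(x_n, t_n)
    return x + h * f(x, t)

def be_step(f, x, t, h):
    # Backward Euler approximated with an FE predictor:
    # predict x_{n+1}, then evaluate x_{n+1} = x_n + h f(x_{n+1}, t_{n+1})
    x_pred = x + h * f(x, t)
    return x + h * f(x_pred, t + h)

def rk4_step(f, x, t, h):
    # Classical fourth-order Runge-Kutta step (Table 1)
    k1 = f(x, t)
    k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(x + h * k3, t + h)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```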
The most well-known explicit multi-step methods are the Adams-Bashforth algorithms, and the implicit multi-step ones are the Adams-Moulton algorithms. Similar to the one-step case, an implicit multi-step method finds the solution of an initial value problem by using an explicit method from the Adams-Bashforth family to estimate f(x_{n+1}). Another type of multi-step method is focused on solving stiff problems and is known as the Gear algorithms [24]. Table 2 shows the three multi-step methods that are used herein to solve the four chaotic oscillators given in the previous section.
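A sketch of the multi-step iterations in Table 2 is shown below (illustrative Python; it assumes the six most recent derivative evaluations are stored in fs = [f_n, ..., f_{n-5}], produced during start-up with a one-step method such as RK4, and that the implicit formulas use an explicit prediction as described above):

```python
import numpy as np

AB6 = np.array([4277, -7923, 9982, -7298, 2877, -475]) / 1440.0   # Adams-Bashforth 6
AM6 = np.array([475, 1427, -798, 482, -173, 27]) / 1440.0         # Adams-Moulton 6

def ab6_step(x, fs, h):
    # Explicit prediction of x_{n+1} from the six stored derivatives
    return x + h * np.dot(AB6, fs)

def am6_step(f, x, fs, t, h):
    # Implicit correction: f(x_{n+1}) is estimated at the AB6 prediction
    f_pred = f(ab6_step(x, fs, h), t + h)
    return x + h * (AM6[0] * f_pred + np.dot(AM6[1:], fs[:5]))

def gear2_step(f, x_n, x_nm1, x_pred, t, h):
    # Gear 2 (Table 2) with f(x_{n+1}) evaluated at an explicit prediction x_pred
    return (4.0/3.0) * x_n - (1.0/3.0) * x_nm1 + (2.0*h/3.0) * f(x_pred, t + h)
```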

4. Stability Regions and Estimation of h

The stability region of a numerical method can be obtained by solving a first-order linear test equation of the form y′ = f(x, y), with f(x, y) = λy, where λ plays the role of an eigenvalue. From the stability analysis, a numerical method is said to be numerically stable if the numerical error is not amplified but decreases as the iterations evolve. This behavior can be identified in the one-step methods known as Backward Euler and Trapezoidal. The numerical methods that do not have this property are said to be numerically unstable, and this behavior can be observed in Forward Euler in specific cases.
Let us solve the initial value problem of the form ẋ = −λx by applying the Forward Euler, Backward Euler, and Trapezoidal methods. Applying Forward Euler, one gets the iterative formula given in (6); applying Backward Euler, one gets (7); and applying the Trapezoidal method, one gets (8).
x_{n+1} = (1 − hλ) x_n,    (6)
x_{n+1} = x_n / (1 + hλ),    (7)
x_{n+1} = [(1 − hλ/2) / (1 + hλ/2)] x_n.    (8)
To show when Forward Euler becomes numerically unstable, the iterations beginning with the initial condition x0 become:
x_1 = (1 − λh) x_0,  x_2 = (1 − λh) x_1 = (1 − λh)² x_0,  …,  x_n = (1 − λh)ⁿ x_0.
Therefore, if |1 − λh| > 1, then x_n → ∞ as n → ∞; consequently, for Forward Euler to be numerically stable, it is necessary that |1 − λh| < 1, or 0 < λh < 2. In this case, if λ is a real and positive number, then h < 2/λ.
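A quick numerical check of this bound is straightforward (illustrative Python, taking λ = 10 as an arbitrary example, so that 2/λ = 0.2):

```python
lam = 10.0
for h in (0.19, 0.21):           # just below and just above 2 / lam
    x = 1.0
    for _ in range(200):
        x = (1.0 - h * lam) * x  # Forward Euler iteration (6)
    print(h, x)                  # x decays toward 0 for h = 0.19 and grows without bound for h = 0.21
```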
The Backward Euler and Trapezoidal methods are numerically stable because, from (7) and (8), one can observe that x_n → 0 as n → ∞, and this ideally occurs regardless of h. The stability regions of FE, BE, and RK4 are shown in Figure 5. A similar analysis is performed for the multi-step methods, so that the stability regions of Adams-Bashforth 6, Adams-Moulton 6, and Gear 2 are shown in Figure 6. Looking at these figures, one can see that if the magnitudes of the eigenvalues of a system increase, then h must decrease, and vice versa.
Analyzing the stability regions of the numerical methods helps to verify that the eigenvalues are inversely related to the value of h. In this manner, one must compute the eigenvalues (λ) of a dynamical system to estimate an h that guarantees stability. Taking the Lorenz system given in (1) as an example, it has three equilibrium points: EP1 = (8.4852, 8.4852, 27), EP2 = (−8.4852, −8.4852, 27), and EP3 = (0, 0, 0). The Jacobian matrix is given in (9), which must be evaluated at the three equilibrium points EP* = (x*, y*, z*) to obtain the eigenvalues listed in (10). Table 3 shows the Jacobians, equilibrium points, and eigenvalues of the chaotic oscillators described in Section 2.
J(x*, y*, z*) = [−σ, σ, 0;  ρ − z*, −1, −x*;  y*, x*, −β]    (9)
EP1: λ_{1,2,3} = (−13.8545, 0.0939 ± j10.1945)
EP2: λ_{1,2,3} = (−13.8545, 0.0939 ± j10.1945)
EP3: λ_{1,2,3} = (−2.6667, −22.8277, 11.8277)    (10)
After computing the eigenvalues, one can evaluate the stability conditions, so that FE's stability is guaranteed when 0 < h < 2/|λ|, where |λ| is the largest eigenvalue magnitude of the system, that is, |λ| = 22.8277. Substituting this value into the inequality, one finds that FE is stable for h < 0.0876. However, in practice, this h value varies due to the local and round-off errors of the methods. For instance, the local error is defined as the error that occurs at t = t_{n+1}, assuming that x_n is the initial condition, whereas the total error is defined as the accumulated error from t = 0 to t = t_{n+1}, with initial condition x_0. In [27], these errors are described as local and global: the local truncation error (LTE) is first calculated, and then some form of stability is used to show that the global error can be bounded in terms of the LTE. The global error refers to the difference between the approximate and exact solutions, and the LTE refers to the error produced by the finite-difference approximation of the derivative, and is therefore something that can be easily estimated using Taylor series expansions. Obviously, these are not the only errors in numerical methods: according to [28,29], there are various sources of error, among them errors in the input data, rounding errors during calculations, simplifications in mathematical models, and even human errors. Besides, the stability analysis helps to estimate the highest h, from which one can test lower values after observing the desired chaotic behavior.
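The estimation described above can be reproduced in a few lines; the following sketch (illustrative Python, relying only on NumPy) builds the Jacobian (9), evaluates it at the three equilibrium points of the Lorenz system, and applies the FE bound h < 2/|λ|max:

```python
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def jacobian(x, y, z):
    # Jacobian (9) of the Lorenz system (1)
    return np.array([[-sigma, sigma, 0.0],
                     [rho - z, -1.0, -x],
                     [y, x, -beta]])

# Equilibrium points EP1, EP2, and EP3 of (1)
c = np.sqrt(beta * (rho - 1.0))            # = 8.4853
eqs = [(c, c, rho - 1.0), (-c, -c, rho - 1.0), (0.0, 0.0, 0.0)]

lams = np.concatenate([np.linalg.eigvals(jacobian(*ep)) for ep in eqs])
h_max_fe = 2.0 / np.max(np.abs(lams))      # about 0.0876, as obtained above
print(h_max_fe)
```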

5. PSO, MOL, and DE Algorithms

5.1. Particle Swarm Optimization Algorithm

The particle swarm optimization (PSO) algorithm belongs to the field of swarm and collective intelligence, a sub-field of computational intelligence. Similar swarm algorithms are Ant Colony Optimization [30] and the Artificial Bee Colony [31]. PSO is based on a mathematical model developed by Kennedy and Eberhart in 1995 [32], which describes the social behavior of birds and/or fish through the basic principles of self-organization that can be used to describe complex systems, such as chaotic oscillators. The population consists of particles forming a search group for the purpose of finding food; in general, each individual continues its search according to its own experience and the experience of the whole group.
The main idea of PSO is to initialize a set of particles in a search space, giving each particle an initial position and an initial velocity vector, and to let the particles change their position and velocity at each iteration according to some random parameters. The algorithm updates the position and velocity of the particles, which follow the particle with the best result associated with their social behavior. Each particle remembers its best position and recognizes whether its current position is the best among all the particles, that is, the global best. Mathematically, the update equations are given in (11) for the velocity and (12) for the position of particle i. rand() is a function that returns a random real value between 0 and 1; p_best is the best position of the particle, and g_best represents the best global position among all the particles. c1 and c2 are two parameters, named “cognition” and “swarm”, that represent the confidence of the particle in its own experience and in the swarm, respectively.
v_i(t + 1) = v_i(t) + c1 · rand() · (p_best(t) − p_i(t)) + c2 · rand() · (g_best(t) − p_i(t))    (11)
p_i(t + 1) = p_i(t) + v_i(t + 1)    (12)
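A compact sketch of the updates (11) and (12) for a single particle is shown below (illustrative Python; c1 = c2 = 2 are common default values and are not taken from this work):

```python
import numpy as np

rng = np.random.default_rng()

def pso_update(p, v, pbest, gbest, c1=2.0, c2=2.0):
    # Velocity (11) and position (12) updates; p, v, pbest, gbest are
    # arrays holding the design parameters of one particle
    v_new = (v
             + c1 * rng.random(p.shape) * (pbest - p)
             + c2 * rng.random(p.shape) * (gbest - p))
    return p + v_new, v_new
```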

5.2. Many Optimizing Liaisons Algorithm

MOL is a simplified version of PSO proposed by Kennedy [33], but, according to [34], after some research it was named Many Optimizing Liaisons. Basically, MOL eliminates the best-known position of the particle, p_best, in (11), which then reduces to (13):
v_i(t + 1) = w · v_i(t) + c2 · rand() · (g_best(t) − p_i(t)).    (13)
As shown in [35], MOL is faster and simpler than PSO, so that the selection of parameters is easier compared to PSO. In addition, MOL is a purely social algorithm that tends to follow the swarm's best particle (g_best). The w in (13) is the inertia weight that maintains a balance between global and local search, so that the exploration process of MOL quickly finds an optimum solution in a smaller number of iterations. In this paper, w and c2 are set to 0.31 and 2, respectively.
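The corresponding MOL update (13), with the values w = 0.31 and c2 = 2 used in this paper, can be sketched as follows (illustrative Python):

```python
import numpy as np

rng = np.random.default_rng()

def mol_update(p, v, gbest, w=0.31, c2=2.0):
    # MOL velocity update (13): the pbest term of PSO is dropped
    v_new = w * v + c2 * rng.random(p.shape) * (gbest - p)
    return p + v_new, v_new
```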

5.3. Differential Evolution Algorithm

DE is a simple and effective evolutionary algorithm inspired by Darwin's theory of the evolution of species. DE is used to solve global optimization problems in a continuous domain. According to [36], DE works in two phases: initialization and evolution. In the first phase, the population is randomly generated; afterwards, it goes through mutation, crossover, and selection processes, which are repeated until a stop criterion is met. During initialization, each individual is stored in a D-dimensional vector x_j^G = {x_{1,j}^G, x_{2,j}^G, …, x_{D,j}^G} for j = 1, 2, …, Np, where Np is the population size and G denotes the generation. In the evolution phase, new individuals are generated by performing the mutation (14), crossover (15), and selection (16) operations. In the mutation process, a mutant vector V_j^G is generated for each target vector X_j^G in generation G with the following form:
V_j^G = X_{r1}^G + F (X_{r2}^G − X_{r3}^G),    (14)
where F is a scale factor with a value between 0 and 1, and r1, r2, r3 ∈ {1, 2, …, Np} are three distinct random indices. The crossover process is performed using the target vector, the mutant vector, and a crossover probability Cr, whose value is set between 0 and 1, in order to generate a trial vector U_j^G = {u_{1,j}^G, u_{2,j}^G, …, u_{D,j}^G}, which is generated as:
u_{i,j}^G = v_{i,j}^G if rand_j ≤ Cr,  u_{i,j}^G = x_{i,j}^G otherwise,    (15)
where i { 1 , 2 , , D } . Finally, in the selection process, the target and trial vectors are compared according to their fitness value, and the best of them survives for the next generation. The selection process is performed by (16).
X_j^{G+1} = U_j^G if f(U_j^G) ≤ f(X_j^G),  X_j^{G+1} = X_j^G otherwise.    (16)
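One complete DE generation combining (14)-(16) can be sketched as follows (illustrative Python; F = 0.5 and Cr = 0.9 are common default values, not values reported in this work, and the selection is written for maximization of the fitness, e.g., D_KY, whereas (16) is stated in its usual minimization form):

```python
import numpy as np

rng = np.random.default_rng()

def de_generation(pop, fitness, F=0.5, Cr=0.9):
    # pop has shape (Np, D); fitness(x) returns the value to maximize
    Np, D = pop.shape
    new_pop = pop.copy()
    for j in range(Np):
        r1, r2, r3 = rng.choice([i for i in range(Np) if i != j], size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])   # mutant vector (14)
        cross = rng.random(D) <= Cr             # crossover mask (15)
        u = np.where(cross, v, pop[j])          # trial vector
        if fitness(u) >= fitness(pop[j]):       # selection (16), adapted to maximization
            new_pop[j] = u
    return new_pop
```

Standard DE also forces at least one trial-vector component to come from the mutant vector; that detail is omitted here for brevity.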

6. Optimization of LE+ and Kaplan-Yorke Dimension

6.1. Lyapunov Exponents and Kaplan-Yorke Dimension

The high sensitivity to initial conditions of a chaotic oscillator can be appreciated by analyzing two orbits produced from two very close but different initial values. The two orbits separate exponentially over time. This phenomenon is quantified by evaluating the positive Lyapunov exponent (LE+) [37], which describes local instability in a chaotic motion. The existence of an LE+ means that there is a high probability that the system has chaotic behavior [38].
A chaotic oscillator described by three ODEs, like the systems in Section 2, has three Lyapunov exponents, of which one must be positive, one zero, and one negative. Ordering the Lyapunov exponents (LE) makes it possible to evaluate the Kaplan-Yorke dimension through (17), where k is the largest integer such that the sum of the first k exponents is non-negative. The LE+ and D_KY shown in Table 4 for the four chaotic oscillators were evaluated using TISEAN [18], whose chaotic time series were generated for the state variable x by applying the six numerical methods described in Table 1 and Table 2. One can also see the execution time of the numerical method (TimeNumMet) and the one taken by TISEAN:
D_KY = k + (Σ_{i=1}^{k} LE_i) / |LE_{k+1}|    (17)
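Equation (17) is straightforward to evaluate once the Lyapunov spectrum is available; a small sketch is given below (illustrative Python; the spectrum used in the example is a typical Lorenz-like spectrum, not a value from this work):

```python
import numpy as np

def kaplan_yorke(les):
    # D_KY from (17): les are the Lyapunov exponents; k is the largest index
    # for which the partial sum is still non-negative (assumes LE_1 >= 0 and
    # that the sum of all exponents is negative, as in a dissipative chaotic flow)
    les = np.sort(np.asarray(les, dtype=float))[::-1]
    s = np.cumsum(les)
    k = np.max(np.where(s >= 0)[0]) + 1
    return k + s[k - 1] / abs(les[k])

print(kaplan_yorke([0.9056, 0.0, -14.5723]))   # about 2.06
```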

6.2. Maximizing LE+ and D_KY by PSO, MOL and DE Using the Highest h

The optimization of chaotic oscillators takes a long execution time, mainly associated with the numerical method and TISEAN. In this paper, the six methods described in Section 3 are applied to the four chaotic oscillators presented in Section 2. As already shown in Section 4, the selection of an appropriate h depends on the method's stability region. In this manner, the highest h for each method and for each chaotic oscillator is given in Table 5. As one can see, with the highest h both the numerical method and TISEAN take less execution time. In all cases, the highest h is allowed by Runge-Kutta 4. However, RK4 requires more additional calculations compared to the Adams-Moulton method, which also allows an acceptably high h value. In optimizing LE+ and D_KY, the method that best satisfies the trade-off between the number of calculations and the highest h is the Gear method of order 2, while Backward Euler, in general, is the method that accepts the smallest h. Another issue is that, as h increases, D_KY decreases; we address this challenge by optimizing D_KY while varying the design parameters of the four chaotic systems by applying the PSO, MOL, and DE algorithms. The pseudo-codes of PSO and DE are given in Algorithms 1 and 2, respectively. In all cases, the constraint is guaranteeing the highest h.
As one can observe from Table 4 and Table 5, the longest execution time is required by TISEAN. Due to this limitation, some restrictions were established to detect whether the system has unstable behavior. One of the characteristics that determines this instability is based on analyzing the eigenvalues of the chaotic oscillator. According to [39], if there are i and j such that Re[λ_i] < 0 and Re[λ_j] > 0, and if the system has two complex eigenvalues and one real one, then the system is said to be unstable.
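This restriction can be checked cheaply before calling TISEAN; a possible sketch (illustrative Python, assuming the eigenvalues of the Jacobian at an equilibrium point have already been computed) is:

```python
import numpy as np

def is_unstable(eigvals):
    # Criterion from [39] as quoted above: eigenvalues with both negative and
    # positive real parts, together with one real eigenvalue and a complex pair
    re = np.real(eigvals)
    complex_pair = np.sum(np.abs(np.imag(eigvals)) > 1e-12) == 2
    return (re.min() < 0) and (re.max() > 0) and complex_pair
```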
Algorithm 1: PSO.
procedure PSO(nPop, MaxIt)
    Generate a file including the numerical method and the chaotic oscillator (CO)
    for i = 1 : nPop do
        Randomly initialize the design variables
        Randomly replace the design variables into the file to evaluate LE+ and D_KY
        Simulate the CO i in TISEAN
        Get D_KY from the output file according to TISEAN
        Update pbest and gbest particles checking the constraints
    end for
    for it = 1 : MaxIt do
        for i = 1 : nPop do
            Copy particle i to p
            Update the particle p velocity according to (11)
            Update the particle p position (design variables) according to (12)
            Replace the new design variables into the file
            Simulate the CO p in TISEAN
            Compare particles i and p
            Update pbest and gbest particles checking the constraints
        end for
    end for
end procedure
Algorithm 2: DE.
procedure DE(G, Np, Cr, F, D)
    Generate a file including the numerical method and the chaotic oscillator (CO)
    for i = 1 : Np do
        Randomly initialize the design variables
        Randomly replace the design variables into the file to evaluate LE+ and D_KY
        Simulate the CO i in TISEAN
        Get D_KY from the output file according to TISEAN
        Save the evaluation results for each individual
    end for
    for it = 1 : G do
        for i = 1 : Np do
            Randomly select three different individuals x_a, x_b and x_c
            Generate the mutant vector according to (14)
            Perform the crossover process according to (15)
            Replace the new design variables into the file
            Simulate the CO in TISEAN
            Evaluate each individual according to (16)
            Update the population by selecting the individuals with the greatest fitness value
        end for
    end for
end procedure

7. Feasible Solutions Provided by PSO, MOL and DE

To perform the D_KY optimization by applying PSO, MOL and DE, the number of generations was set to 100 for each metaheuristic, with a population equal to the number of design parameters multiplied by ten. The search spaces for the design parameters of the four chaotic oscillators were set as follows: 0.01 < σ < 30, 0.01 < ρ < 50, and 0.01 < β < 10 for Lorenz; 0.01 < a, b < 10 and 0.01 < c < 30 for Rössler; 0.01 < a, b, c, d1 < 10 for Lü; and 0.01 < a, b, c < 10 for the Autonomous Chaotic System.
The optimization results are given in Table 6, where it can be appreciated that the execution time of evaluating D_KY and the time taken by TISEAN are reduced, while keeping the highest h allowed by the numerical method. In this manner, for the Lorenz and Lü oscillators, the Gear method of order 2 was used to perform the optimization, while for Rössler and the Autonomous Chaotic System, the Adams-Moulton method of order 6 was applied. Analyzing the data in Table 5 and comparing them with those given in Table 6, it can be seen that after the optimization process, the execution time for optimizing the Rössler and Autonomous Chaotic System oscillators is significantly reduced.
Figure 7 shows the evolution of D_KY for each chaotic oscillator applying the PSO, MOL and DE algorithms.

8. Conclusions

Performing the simulation of a chaotic system is time-consuming, and reducing its execution time is very challenging because many considerations must be satisfied in order to guarantee chaotic behavior. This paper showed that the execution time is enhanced by estimating the highest h for each numerical method, which has been performed through the stability analysis of the methods. The case study included four well-known chaotic oscillators, three representative one-step methods, and three multi-step methods. We demonstrated that one can maximize D_KY by applying metaheuristics, such as the PSO, MOL and DE algorithms, while maintaining the highest h for the numerical method. As a result, the behavior of the three metaheuristics over 100 generations was shown in Figure 7, where the best global evolution of D_KY for the four chaotic oscillators applying PSO, MOL and DE is appreciated. Finally, the optimization results listed in Table 6 confirmed the suitability of applying metaheuristics in the optimization of chaotic systems, and the usefulness of estimating the highest h to enhance execution time.

Author Contributions

Investigation, M.A.V.-P. and E.T.-C.; Writing—review and editing, M.A.V.-P., E.T.-C. and L.G.d.l.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, L.; Liu, J.; Lu, Z.R. Incremental response sensitivity approach for parameter identification of chaotic and hyperchaotic systems. Nonlinear Dyn. 2017, 89, 153–167. [Google Scholar] [CrossRef]
  2. Muhammad, A.S.; Özkaynak, F. SIEA: Secure Image Encryption Algorithm Based on Chaotic Systems Optimization Algorithms and PUFs. Symmetry 2021, 13, 824. [Google Scholar] [CrossRef]
  3. Sun, X.; Shao, Z.; Shang, Y.; Liang, M.; Yang, F. Multiple-image encryption based on cascaded gyrator transforms and high-dimensional chaotic system. Multimed. Tools Appl. 2021, 80, 15825–15848. [Google Scholar] [CrossRef]
  4. Stoller, S.; Campbell, K.A. Demonstration of Three True Random Number Generator Circuits Using Memristor Created Entropy and Commercial Off-the-Shelf Components. Entropy 2021, 23, 371. [Google Scholar] [CrossRef]
  5. Rezk, A.A.; Madian, A.H.; Radwan, A.G.; Soliman, A.M. Multiplierless chaotic pseudo random number generators. AEU-Int. J. Electron. Commun. 2020, 113, 152947. [Google Scholar] [CrossRef]
  6. Bendoukha, S.; Abdelmalek, S.; Ouannas, A. Secure communication systems based on the synchronization of chaotic systems. In Mathematics Applied to Engineering, Modelling, and Social Issues; Springer: Cham, Switzerland, 2019; pp. 281–311. [Google Scholar]
  7. Rusyn, V.; Mamat, M.; Azharul, F.; Sanjaya, W.M.; Sambas, A.; Dwipriyoko, E.; Sutoni, A. Computer Modelling of the Information Properties of Hyper Chaotic Lorenz System and Its Application in Secure Communication System. J. Phys. Conf. Ser. 2021, 1764, 012205. [Google Scholar] [CrossRef]
  8. Weng, H.; Zhang, C.; Chen, P.; Chen, R.; Xu, J.; Liao, Y.; Liang, Z.; Shen, D.; Zhou, L.; Ke, J. A quantum chaotic image cryptosystem and its application in IoT secure communication. IEEE Access 2021, 9, 20481–20492. [Google Scholar]
  9. Sergeyev, Y.D.; Kvasov, D.; Mukhametzhanov, M. On the efficiency of nature-inspired metaheuristics in expensive global optimization with limited budget. Sci. Rep. 2018, 8, 1–9. [Google Scholar] [CrossRef] [Green Version]
  10. Cavoretto, R.; De Rossi, A.; Mukhametzhanov, M.S.; Sergeyev, Y.D. On the search of the shape parameter in radial basis functions using univariate global optimization methods. J. Glob. Optim. 2021, 79, 305–327. [Google Scholar] [CrossRef]
  11. Paulavičius, R.; Sergeyev, Y.D.; Kvasov, D.E.; Žilinskas, J. Globally-biased BIRECT algorithm with local accelerators for expensive global optimization. Expert Syst. Appl. 2020, 144, 113052. [Google Scholar] [CrossRef]
  12. Jones, D.R.; Martins, J.R. The DIRECT algorithm: 25 years Later. J. Glob. Optim. 2021, 79, 521–566. [Google Scholar] [CrossRef]
  13. Peng, Y.; Sun, K.; He, S.; Peng, D. Parameter identification of fractional-order discrete chaotic systems. Entropy 2019, 21, 27. [Google Scholar] [CrossRef] [Green Version]
  14. He, Q.; Wang, L.; Liu, B. Parameter estimation for chaotic systems by particle swarm optimization. Chaos Solitons Fractals 2007, 34, 654–661. [Google Scholar] [CrossRef]
  15. Rosalie, M.; Kieffer, E.; Brust, M.R.; Danoy, G.; Bouvry, P. Bayesian optimisation to select Rössler system parameters used in Chaotic Ant Colony Optimisation for Coverage. J. Comput. Sci. 2020, 41, 101047. [Google Scholar] [CrossRef]
  16. Matsushita, H.; Kurokawa, H.; Kousaka, T. Bifurcation analysis by particle swarm optimization. Nonlinear Theory Appl. IEICE 2020, 11, 391–408. [Google Scholar] [CrossRef]
  17. Tlelo-Cuautle, E.; De La Fraga, L.G.; Guillén-Fernández, O.; Silva-Juárez, A. Optimization of Integer/Fractional Order Chaotic Systems by Metaheuristics and Their Electronic Realization; CRC Press: Boca Raton, FL, USA, 2021. [Google Scholar]
  18. Hegger, R.; Kantz, H.; Schreiber, T. Practical implementation of nonlinear time series methods: The TISEAN package. Chaos Interdiscip. J. Nonlinear Sci. 1999, 9, 413–435. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Pandey, A.; Baghel, R.; Singh, R. Analysis and circuit realization of a new autonomous chaotic system. Int. J. Electron. Commun. Eng. 2012, 5, 487–495. [Google Scholar]
  20. Lorenz, E.N. Deterministic nonperiodic flow. In The Theory of Chaotic Attractors; Springer: Cham, Switzerland, 2004; pp. 25–36. [Google Scholar]
  21. Letellier, C.; Rossler, O.E. Rossler attractor. Scholarpedia 2006, 1, 1721. [Google Scholar] [CrossRef]
  22. Lu, J.; Chen, G.; Yu, X.; Leung, H. Design and analysis of multiscroll chaotic attractors from saturated function series. IEEE Trans. Circuits Syst. I Regul. Pap. 2004, 51, 2476–2490. [Google Scholar] [CrossRef] [Green Version]
  23. Tlelo-Cuautle, E.; Pano-Azucena, A.D.; Guillén-Fernández, O.; Silva-Juárez, A. Analog/Digital Implementation of Fractional Order Chaotic Circuits and Applications; Springer: Cham, Switzerland, 2020. [Google Scholar]
  24. Chua, L.O. Algorithms and Computational Techniques. In Computer-Aided Analysis of Electronic Circuits; Prentice Hall: Hoboken, NJ, USA, 1975. [Google Scholar]
  25. Epperson, J.F. An Introduction to Numerical Methods and Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2021. [Google Scholar]
  26. Runge, C. Über die numerische Auflösung von Differentialgleichungen. Math. Ann. 1895, 46, 167–178. [Google Scholar] [CrossRef] [Green Version]
  27. LeVeque, R.J. Finite Difference Methods for Ordinary and Partial Differential Equations: Steady-State and Time-Dependent Problems; SIAM: Philadelphia, PA, USA, 2007. [Google Scholar]
  28. Dahlquist, G.; Björck, Å. Numerical Methods in Scientific Computing; SIAM: Philadelphia, PA, USA, 2008; Volume I. [Google Scholar]
  29. Shampine, L.F.; Allen, R.C.; Pruess, S. Fundamentals of Numerical Computing; Wiley: New York, NY, USA, 1997; Volume 1. [Google Scholar]
  30. Dorigo, M.; Stützle, T. Ant colony optimization: Overview and recent advances. In Handbook of Metaheuristics; Springer: Cham, Switzerland, 2019; pp. 311–351. [Google Scholar]
  31. Wahid, A.; Behera, S.; Mohapatra, D. Artificial Bee Colony and its Application: An Overview. Int. J. Adv. Res. Comput. Eng. Technol. (IJARCET) 2015, 4, 1475–1480. [Google Scholar]
  32. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408. [Google Scholar] [CrossRef]
  33. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence (Cat. No. 98TH8360), Anchorage, AK, USA, 4–9 May 1998; pp. 69–73. [Google Scholar]
  34. Saha, N.; Panda, A.; Panda, S. Speed control with torque ripple reduction of switched reluctance motor by many optimizing liaison technique. J. Electr. Syst. Inf. Technol. 2018, 5, 829–842. [Google Scholar] [CrossRef]
  35. Mohanty, M.; Kumar Sahu, R.; Panda, S. A novel hybrid many optimizing liaisons gravitational search algorithm approach for AGC of power systems. Automatika 2020, 61, 158–178. [Google Scholar] [CrossRef]
  36. Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479. [Google Scholar]
  37. Cui, L.; Chen, C.; Jin, J.; Yu, F. Dynamic Analysis and FPGA Implementation of New Chaotic Neural Network and Optimization of Traveling Salesman Problem. Complexity 2021, 2021, 5521192. [Google Scholar] [CrossRef]
  38. Tian, K.; Grebogi, C.; Ren, H.P. Chaos Generation With Impulse Control: Application to Non-Chaotic Systems and Circuit Design. IEEE Trans. Circuits Syst. I Regul. Pap. 2021, 68, 3012–3022. [Google Scholar] [CrossRef]
  39. Parker, T.S.; Chua, L. Practical Numerical Algorithms for Chaotic Systems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
Figure 1. Phase-space portraits of the Lorenz chaotic system given in (1).
Figure 2. Phase-space portraits of the Rössler chaotic system given in (2).
Figure 3. Phase-space portraits of the Lü chaotic system given in (3).
Figure 4. Phase-space portraits of the autonomous chaotic oscillator given in (5).
Figure 5. Stability regions of one-step methods listed in Table 1.
Figure 6. Stability regions of multi-step methods listed in Table 2.
Figure 7. Best global evolution of D_KY for the four chaotic oscillators applying PSO, MOL and DE.
Table 1. One-step methods.

Forward Euler (FE):    x_{n+1} = x_n + h f(x_n, t_n)
Backward Euler (BE):   x_{n+1} = x_n + h f(x_{n+1}, t_{n+1})
Runge-Kutta 4 (RK4):   x_{n+1} = x_n + (h/6)(k_1 + 2k_2 + 2k_3 + k_4), with
                       k_1 = f(x_n, t_n)
                       k_2 = f(x_n + (h/2) k_1, t_n + h/2)
                       k_3 = f(x_n + (h/2) k_2, t_n + h/2)
                       k_4 = f(x_n + h k_3, t_n + h)
Table 2. Multi-step methods.

Adams-Bashforth 6 (AB6):  x_{n+1} = x_n + (h/1440)[4277 f(x_n, t_n) − 7923 f(x_{n−1}, t_{n−1}) + 9982 f(x_{n−2}, t_{n−2}) − 7298 f(x_{n−3}, t_{n−3}) + 2877 f(x_{n−4}, t_{n−4}) − 475 f(x_{n−5}, t_{n−5})]
Adams-Moulton 6 (AM6):    x_{n+1} = x_n + (h/1440)[475 f(x_{n+1}) + 1427 f(x_n) − 798 f(x_{n−1}) + 482 f(x_{n−2}) − 173 f(x_{n−3}) + 27 f(x_{n−4})]
Gear 2 (G2):              x_{n+1} = (4/3) x_n − (1/3) x_{n−1} + (2h/3) f(x_{n+1}, t_{n+1})
Table 3. Jacobians, equilibrium points, and eigenvalues of the chaotic systems given in (1)–(5).

Lorenz
  Jacobian: [−σ, σ, 0;  ρ − z*, −1, −x*;  y*, x*, −β]
  (−8.4852, −8.4852, 27):   −13.8545, 0.0939 ± j10.1945
  (8.4852, 8.4852, 27):     −13.8545, 0.0939 ± j10.1945
  (0, 0, 0):                −2.6667, −22.8277, 11.8277
Rössler
  Jacobian: [0, −1, −1;  1, a, 0;  z*, 0, x* − c]
  (0.0070, −0.0351, 0.0351):    −5.6869, 0.0970 ± j0.9951
  (5.6929, −28.4648, 28.4648):  0.1929, −0.0000045 ± j5.4280
Lü
  Jacobian: [0, 1, 0;  0, 0, 1;  −a, −b, −c]
  (0, 0, 0):   1.0743, −0.8872 ± j1.3488
  (1, 0, 0):   −0.8480, 0.0740 ± j0.9055
Autonomous Chaotic System
  Jacobian: [0, 1, 0;  0, 0, 1;  −2x* − a, −b, −c]
  (0, 0, 0):    −0.7451, 0.1625 ± j1.1471
  (−1, 0, 0):   0.5897, −0.5048 ± j1.2002
Table 4. LE+ and D_KY of (1)–(5) using a small h. FE, BE, and RK4 are one-step methods; G2, AB6, and AM6 are multi-step methods.

                    FE        BE        RK4       G2        AB6       AM6
Lorenz
  h                 0.01      0.001     0.01      0.01      0.001     0.01
  LE+               0.0707    0.0604    0.0503    0.0501    0.0486    0.0460
  D_KY              3         3         3         3         3         3
  TimeNumMet (s)    1.16      16.57     3.06      2.69      45.38     4.36
  TimeTISEAN (s)    879.68    36.95     3789.79   819.92    101.20    4788.99
Rössler
  h                 0.01      0.01      0.01      0.01      0.01      0.01
  LE+               0.3112    0.2443    0.2405    0.2391    0.1900    0.2409
  D_KY              2.7786    2.7927    2.7588    2.7518    2.7362    2.7374
  TimeNumMet (s)    1.17      1.52      3.11      1.96      4.45      4.15
  TimeTISEAN (s)    626.17    700.56    658.64    640.16    48.62     646.63
Lü
  h                 0.01      0.01      0.01      0.01      0.01      0.01
  LE+               0.0800    0.0939    0.1062    0.0862    0.1020    0.0738
  D_KY              2.8915    2.7568    2.8744    2.9039    2.8879    2.9289
  TimeNumMet (s)    1.22      1.59      3.17      2.26      4.73      4.53
  TimeTISEAN (s)    596.02    638.57    600.47    614.01    585.01    597.74
Autonomous Chaotic System
  h                 0.001     0.01      0.01      0.01      0.01      0.01
  LE+               0.2688    0.0145    0.0090    0.0174    0.0122    0.0088
  D_KY              2.9047    2.8336    2.6598    2.7710    2.7374    2.6427
  TimeNumMet (s)    11.84     1.57      3.09      1.79      3.92      4.79
  TimeTISEAN (s)    26.40     5392.06   366.67    360.04    362.61    367.65
Table 5. LE+ and D_KY of (1)–(5) using the highest h allowed by each numerical method. FE, BE, and RK4 are one-step methods; G2, AB6, and AM6 are multi-step methods.

                    FE        BE         RK4       G2        AB6        AM6
Lorenz
  hmax              0.0245    0.0020     0.1070    0.0600    0.0046     0.057
  LE+               0.0380    0.1232     0.0687    0.0886    0.0462     0.0727
  D_KY              2.1009    3.0640     2.0660    2.2256    3.7000     2.3617
  TimeTISEAN (s)    670.96    64,901.53  30.33     109.99    14,498.11  153.18
  TimeNumMet (s)    0.43      7.69       0.36      0.29      8.79       0.74
Rössler
  hmax              0.0894    0.0800     0.2399    0.0900    0.0101     0.1300
  LE+               0.2346    0.2475     0.0204    0.0215    0.2133     0.0881
  D_KY              2.3909    2.6866     2.0439    2.0903    2.7676     2.3724
  TimeTISEAN (s)    138.15    109.95     28.34     95.59     3693.74    53.98
  TimeNumMet (s)    0.11      0.18       0.13      0.18      3.72       0.42
Lü
  hmax              0.1300    0.0850     1.5000    0.9000    0.0770     0.8000
  LE+               0.0177    0.0181     0.1550    0.1621    0.0372     0.1518
  D_KY              2.1805    2.1839     2.0195    2.2013    2.0276     2.6127
  TimeTISEAN (s)    1675.19   3134.73    37.95     79.91     3909.75    96.65
  TimeNumMet (s)    0.83      1.97       0.23      0.21      5.55       0.61
Autonomous Chaotic System
  hmax              0.0033    0.0260     1         0.1633    0.0981     0.1926
  LE+               0.1683    0.0028     0.0976    0.0104    0.0869     0.0152
  D_KY              2.9056    2.2115     2.2155    1.7764    1.9543     2.1544
  TimeTISEAN (s)    5385      507.13     6.93      2.31      63.41      16.8
  TimeNumMet (s)    2.95      0.88       0.05      0.16      0.62       0.24
Table 6. Results obtained by PSO, MOL and DE for the four chaotic oscillators. Time (s) includes the time of the numerical method and TISEAN.

Lorenz
  PSO: σ = 2.1146, ρ = 14.7896, β = 0.2390; LE+ = 0.0586, D_KY = 3.3108, Time = 66.45 s
  MOL: σ = 1.8941, ρ = 13.3546, β = 0.1357; LE+ = 0.0622, D_KY = 3.4161, Time = 57.47 s
  DE:  σ = 2.1144, ρ = 12.8497, β = 0.2154; LE+ = 0.0671, D_KY = 3.4515, Time = 69.63 s
Rössler
  PSO: a = 0.3233, b = 0.6000, c = 5.5711; LE+ = 0.3191, D_KY = 2.8128, Time = 216.83 s
  MOL: a = 0.2000, b = 0.3984, c = 5.9857; LE+ = 0.2359, D_KY = 2.6491, Time = 43.2 s
  DE:  a = 0.2615, b = 0.5772, c = 7.3303; LE+ = 0.2931, D_KY = 2.8818, Time = 17.35 s
Lü
  PSO: a = 3.2570, b = 2.6660, c = 1.1400, d1 = 2.3578; LE+ = 0.3179, D_KY = 5.5029, Time = 79.14 s
  MOL: a = 3.1472, b = 2.6865, c = 1.0686, d1 = 2.6899; LE+ = 0.3181, D_KY = 5.8201, Time = 80.69 s
  DE:  a = 2.91, b = 2.3913, c = 1.0533, d1 = 3.1351; LE+ = 0.3118, D_KY = 5.6715, Time = 83.53 s
Autonomous Chaotic System
  PSO: a = 1.7787, b = 1.8787, c = 0.3815; LE+ = 0.0251, D_KY = 2.2543, Time = 21.78 s
  MOL: a = 1.5159, b = 1.6162, c = 0.3916; LE+ = 0.0267, D_KY = 2.2567, Time = 21.82 s
  DE:  a = 1.3767, b = 1.4767, c = 0.3861; LE+ = 0.0073, D_KY = 2.3703, Time = 19.94 s
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
