Article

Chaos and Stability in a New Iterative Family for Solving Nonlinear Equations

by Alicia Cordero *, Marlon Moscoso-Martínez and Juan R. Torregrosa
Institute for Multidisciplinary Mathematics, Universitat Politècnica de València, Camino de Vera s/n, 46022 València, Spain
* Author to whom correspondence should be addressed.
Submission received: 10 March 2021 / Revised: 22 March 2021 / Accepted: 23 March 2021 / Published: 24 March 2021
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)

Abstract: In this paper, we present a new parametric family of three-step iterative methods for solving nonlinear equations. First, we design a fourth-order triparametric family; by fixing two of its parameters in terms of the third one, we accelerate its convergence and obtain a sixth-order uniparametric family. For this last family, we study its convergence, its complex dynamics (stability), and its numerical behavior. The parameter spaces and dynamical planes are presented, showing the complexity of the family. From the parameter spaces, we have been able to determine members of the family with poor convergence properties, as attracting periodic orbits and attracting strange fixed points appear in their dynamical planes. This same study has also allowed us to detect family members with especially stable behavior, suitable for solving practical problems. Several numerical tests are performed to illustrate the efficiency and stability of the presented family.

1. Introduction

Many problems in Computational Sciences and other disciplines can be stated in the form of a nonlinear equation or nonlinear systems using mathematical modeling. In particular, a large number of problems in Applied Mathematics and Engineering are solved by finding the solutions of these equations.
In the literature, there are many methods and families of iterative schemes, designed by means of different procedures, to approximate the simple roots of a nonlinear equation $f(x) = 0$, where $f: I \subseteq \mathbb{R} \to \mathbb{R}$ is a real function defined on an open interval I. Several surveys and overviews of the iterative schemes published in recent years can be found in [1,2,3]. Each method behaves differently, and this behavior is characterized by means of efficiency criteria and complex dynamics tools.
In this paper, we introduce a new family of multistep iterative schemes to solve nonlinear equations, which contains, as a particular element, the method presented in [4]. This family is built from Ostrowski's scheme by adding a Newton step with a "frozen" derivative and using a divided difference operator. Therefore, the family has a three-step iterative expression. Furthermore, it has three arbitrary parameters, named $\alpha$, $\beta$, and $\gamma$, which can take real or complex values, and an order of convergence of at least four. The order of convergence is discussed in Section 2.
From the error equation, we observe that, by fixing two parameters as functions of the third one, a uniparametric family of sixth-order iterative methods is obtained. We analyze the dynamical behavior of this family in terms of the values of the parameter, in order to detect the elements with good stability properties as well as those with chaotic behavior. The concept of chaos has been widely discussed (see, for example, [5]) and is commonly understood as the presence of a complex orbit structure and extreme sensitivity of orbits to small perturbations. Moreover, the presence of unstable periodic orbits of all periods is also included in the concept of a chaotic system. For this study, we use tools of discrete complex dynamics that we introduce in Section 3.
In Section 4, we present the performance of the proposed schemes on several test functions. These numerical tests allow us to confirm the results obtained in the dynamical study and to compare our schemes with other known ones. The manuscript finishes with some conclusions and the references used in it.
The parametric family object of study in this manuscript has the following iterative expression:
$$y_k = x_k - \frac{f(x_k)}{f'(x_k)}, \qquad z_k = y_k - \frac{f(y_k)}{2f[x_k,y_k] - f'(x_k)}, \qquad x_{k+1} = z_k - \big(\alpha + \beta u_k + \gamma v_k\big)\frac{f(z_k)}{f'(x_k)}, \tag{1}$$
where $u_k = 1 - \frac{f[x_k,y_k]}{f'(x_k)}$; $v_k = \frac{f'(x_k)}{f[x_k,y_k]}$; $k = 0, 1, 2, \ldots$; and $\alpha$, $\beta$, and $\gamma$ are arbitrary parameters.
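To make the three-step structure concrete, the iteration (1) can be sketched in a few lines of code. This is an illustrative implementation of ours, not the authors' software; the helper names, the sample parameter values, and the test polynomial are our own choices.

```python
# Illustrative sketch (ours) of the three-step family (1).

def divided_difference(f, x, y):
    """First-order divided difference f[x, y] = (f(x) - f(y)) / (x - y)."""
    return (f(x) - f(y)) / (x - y)

def family_step(f, df, x, alpha, beta, gamma):
    """One iteration of the triparametric family for a scalar equation f(x) = 0."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                         # first step: Newton
    if y == x:                               # already at a numerical root
        return x
    fxy = divided_difference(f, x, y)
    z = y - f(y) / (2.0 * fxy - dfx)         # second step: Ostrowski-type
    u = 1.0 - fxy / dfx
    v = dfx / fxy
    return z - (alpha + beta * u + gamma * v) * f(z) / dfx   # weighted third step

def iterate(f, df, x0, alpha, beta, gamma, tol=1e-13, max_iter=50):
    """Run the family until two consecutive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = family_step(f, df, x, alpha, beta, gamma)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Sample run on f(x) = x^3 + 4x^2 - 10, whose simple root is ~1.3652300134.
root = iterate(lambda x: x**3 + 4*x**2 - 10, lambda x: 3*x**2 + 8*x,
               x0=1.0, alpha=1.0, beta=2.0, gamma=0.0)
```

Any choice of the three parameters gives at least a fourth-order scheme; Section 2 shows how relating $\beta$ and $\gamma$ to $\alpha$ raises the order to six.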
The divided difference operator $f[\cdot,\cdot]: I \times I \subseteq \mathbb{R} \times \mathbb{R} \to L(\mathbb{R})$, defined by Ortega and Rheinboldt in [6], satisfies
$$f[x,y](x-y) = f(x) - f(y), \quad \forall x, y \in I. \tag{2}$$

2. Convergence of the New Family

In this section, we perform the convergence analysis of the new triparametric iterative family. Furthermore, we propose a strategy to reduce the triparametric scheme to a uniparametric scheme in order to accelerate its convergence.
Theorem 1.
Let $f: I \subseteq \mathbb{R} \to \mathbb{R}$ be a sufficiently differentiable function on an open interval I and $\xi \in I$ a simple root of the nonlinear equation $f(x) = 0$. Suppose that $f(x)$ is continuous and sufficiently differentiable in a neighborhood of the simple root ξ, and that $x_0$ is an initial estimate close enough to ξ. Then, the sequence $\{x_k\}_{k \geq 0}$ obtained by using expression (1) converges to ξ with order of convergence four, its error equation being
$$e_{k+1} = (1-\alpha-\gamma)\, C_2 \left(C_2^2 - C_3\right) e_k^4 + O\!\left(e_k^5\right),$$
where $e_k = x_k - \xi$ and $C_q = \frac{1}{q!}\frac{f^{(q)}(\xi)}{f'(\xi)}$, $q = 2, 3, \ldots$
Proof 
Let ξ be a simple root of $f(x)$ (that is, $f(\xi) = 0$ and $f'(\xi) \neq 0$) and let $x_k = \xi + e_k$. Using the Taylor expansions of $f(x_k)$ and $f'(x_k)$ around ξ, we have
$$f(x_k) = f(\xi + e_k) = f(\xi) + f'(\xi)e_k + \frac{1}{2!}f''(\xi)e_k^2 + \frac{1}{3!}f'''(\xi)e_k^3 + \frac{1}{4!}f^{(iv)}(\xi)e_k^4 + O(e_k^5) = f'(\xi)\left[e_k + C_2e_k^2 + C_3e_k^3 + C_4e_k^4\right] + O(e_k^5), \tag{3}$$
and
$$f'(x_k) = f'(\xi + e_k) = f'(\xi) + f''(\xi)e_k + \frac{1}{2!}f'''(\xi)e_k^2 + \frac{1}{3!}f^{(iv)}(\xi)e_k^3 + O(e_k^4) = f'(\xi)\left[1 + 2C_2e_k + 3C_3e_k^2 + 4C_4e_k^3\right] + O(e_k^4), \tag{4}$$
where C q = 1 q ! f ( q ) ( ξ ) f ( ξ ) , q = 2 , 3 , . . .
Dividing (3) by (4), we get
$$\frac{f(x_k)}{f'(x_k)} = e_k - C_2e_k^2 + 2\left(C_2^2 - C_3\right)e_k^3 - \left(4C_2^3 - 7C_2C_3 + 3C_4\right)e_k^4 + O\!\left(e_k^5\right). \tag{5}$$
Replacing (5) in the first step of family (1), we have
$$y_k = \xi + C_2e_k^2 - 2\left(C_2^2 - C_3\right)e_k^3 + \left(4C_2^3 - 7C_2C_3 + 3C_4\right)e_k^4 + O\!\left(e_k^5\right). \tag{6}$$
Using a Taylor expansion again, similar to (3), to develop $f(y_k)$ around ξ, we get
$$f(y_k) = f'(\xi)\left[C_2e_k^2 - 2\left(C_2^2 - C_3\right)e_k^3 + \left(5C_2^3 - 7C_2C_3 + 3C_4\right)e_k^4\right] + O\!\left(e_k^5\right). \tag{7}$$
With (3), (6), and (7), we calculate the divided difference operator defined in (2), obtaining
$$f[x_k, y_k] = f'(\xi)\left[1 + C_2e_k + \left(C_2^2 + C_3\right)e_k^2 - \left(2C_2^3 - 3C_2C_3 - C_4\right)e_k^3\right] + O\!\left(e_k^4\right). \tag{8}$$
Then, substituting (3), (4), (6), and (8) in the second step of family (1), we have
$$z_k = \xi + \left(C_2^3 - C_2C_3\right)e_k^4 + O\!\left(e_k^5\right). \tag{9}$$
Using a Taylor series once again, similar to (3), to expand $f(z_k)$ around ξ, we get
$$f(z_k) = f'(\xi)\left(C_2^3 - C_2C_3\right)e_k^4 + O\!\left(e_k^5\right). \tag{10}$$
Replacing (4) and (8) in the expressions of $u_k$ and $v_k$ of family (1), we have
$$u_k = C_2e_k - \left(3C_2^2 - 2C_3\right)e_k^2 + \left(8C_2^3 - 10C_2C_3 + 3C_4\right)e_k^3 + O\!\left(e_k^4\right), \tag{11}$$
$$v_k = 1 + C_2e_k - 2\left(C_2^2 - C_3\right)e_k^2 + 3\left(C_2^3 - 2C_2C_3 + C_4\right)e_k^3 + O\!\left(e_k^4\right). \tag{12}$$
Finally, substituting (4) and (9)–(12) in the third step of family (1), we get
$$x_{k+1} = \xi + (1-\alpha-\gamma)\,C_2\left(C_2^2 - C_3\right)e_k^4 + O\!\left(e_k^5\right), \tag{13}$$
so the error equation is
$$e_{k+1} = (1-\alpha-\gamma)\,C_2\left(C_2^2 - C_3\right)e_k^4 + O\!\left(e_k^5\right), \tag{14}$$
and the proof is finished. □
From Theorem 1, it follows that the new triparametric family of iterative methods has order of convergence four for any real or complex values of the parameters $\alpha$, $\beta$, and $\gamma$. However, convergence can be sped up if the family is reduced to a uniparametric iterative scheme in which only one parameter remains free. This is shown in Theorem 2.
Theorem 2.
Let $f: I \subseteq \mathbb{R} \to \mathbb{R}$ be a sufficiently differentiable function on an open interval I and $\xi \in I$ a simple root of the nonlinear equation $f(x) = 0$. Suppose that $f(x)$ is continuous and sufficiently differentiable in a neighborhood of the simple root ξ, and that $x_0$ is an initial estimate close enough to ξ. Then, the sequence $\{x_k\}_{k \geq 0}$ obtained by using expression (1) converges to ξ with order of convergence six, provided that $\beta = 1 + \alpha$ and $\gamma = 1 - \alpha$, its error equation being
$$e_{k+1} = \left(6C_2^5 - 7C_2^3C_3 + C_2C_3^2\right)e_k^6 + O\!\left(e_k^7\right),$$
where $e_k = x_k - \xi$ and $C_q = \frac{1}{q!}\frac{f^{(q)}(\xi)}{f'(\xi)}$, $q = 2, 3, \ldots$
Proof 
Let ξ be a simple root of $f(x)$ (that is, $f(\xi) = 0$ and $f'(\xi) \neq 0$) and let $x_k = \xi + e_k$. Using the Taylor expansions of $f(x_k)$ and $f'(x_k)$ around ξ, we have
$$f(x_k) = f(\xi + e_k) = f(\xi) + f'(\xi)e_k + \frac{1}{2!}f''(\xi)e_k^2 + \cdots + \frac{1}{6!}f^{(vi)}(\xi)e_k^6 + O(e_k^7) = f'(\xi)\left[e_k + C_2e_k^2 + C_3e_k^3 + C_4e_k^4 + C_5e_k^5 + C_6e_k^6\right] + O(e_k^7), \tag{15}$$
and
$$f'(x_k) = f'(\xi + e_k) = f'(\xi) + f''(\xi)e_k + \frac{1}{2!}f'''(\xi)e_k^2 + \cdots + \frac{1}{5!}f^{(vi)}(\xi)e_k^5 + O(e_k^6) = f'(\xi)\left[1 + 2C_2e_k + 3C_3e_k^2 + 4C_4e_k^3 + 5C_5e_k^4 + 6C_6e_k^5\right] + O(e_k^6), \tag{16}$$
where C q = 1 q ! f ( q ) ( ξ ) f ( ξ ) , q = 2 , 3 , . . .
Dividing (15) by (16), we get
$$\frac{f(x_k)}{f'(x_k)} = e_k - C_2e_k^2 + 2\left(C_2^2 - C_3\right)e_k^3 - \left(4C_2^3 - 7C_2C_3 + 3C_4\right)e_k^4 + \left(8C_2^4 - 20C_2^2C_3 + 6C_3^2 + 10C_2C_4 - 4C_5\right)e_k^5 - \left(16C_2^5 - 52C_2^3C_3 + 28C_2^2C_4 - 17C_3C_4 + C_2\left(33C_3^2 - 13C_5\right) + 5C_6\right)e_k^6 + O\!\left(e_k^7\right). \tag{17}$$
Replacing (17) in the first step of family (1), we have
$$y_k = \xi + C_2e_k^2 - 2\left(C_2^2 - C_3\right)e_k^3 + \left(4C_2^3 - 7C_2C_3 + 3C_4\right)e_k^4 - \left(8C_2^4 - 20C_2^2C_3 + 6C_3^2 + 10C_2C_4 - 4C_5\right)e_k^5 + \left(16C_2^5 - 52C_2^3C_3 + 28C_2^2C_4 - 17C_3C_4 + C_2\left(33C_3^2 - 13C_5\right) + 5C_6\right)e_k^6 + O\!\left(e_k^7\right). \tag{18}$$
Using a Taylor expansion again, similar to (15), to expand $f(y_k)$ around ξ, we get
$$f(y_k) = f'(\xi)\Big[C_2e_k^2 - 2\left(C_2^2 - C_3\right)e_k^3 + \left(5C_2^3 - 7C_2C_3 + 3C_4\right)e_k^4 - 2\left(6C_2^4 - 12C_2^2C_3 + 3C_3^2 + 5C_2C_4 - 2C_5\right)e_k^5 + \left(28C_2^5 - 73C_2^3C_3 + 34C_2^2C_4 - 17C_3C_4 + C_2\left(37C_3^2 - 13C_5\right) + 5C_6\right)e_k^6\Big] + O\!\left(e_k^7\right). \tag{19}$$
With (15), (18), and (19), we calculate the divided difference operator defined in (2), obtaining
$$f[x_k, y_k] = f'(\xi)\Big[1 + C_2e_k + \left(C_2^2 + C_3\right)e_k^2 - \left(2C_2^3 - 3C_2C_3 - C_4\right)e_k^3 + \left(4C_2^4 - 8C_2^2C_3 + 2C_3^2 + 4C_2C_4 + C_5\right)e_k^4 + \left(-8C_2^5 + 20C_2^3C_3 - 11C_2^2C_4 + 5C_3C_4 + C_2\left(-9C_3^2 + 5C_5\right) + C_6\right)e_k^5\Big] + O\!\left(e_k^6\right). \tag{20}$$
Then, substituting (15), (16), (18), and (20) in the second step of family (1), we have
$$z_k = \xi + \left(C_2^3 - C_2C_3\right)e_k^4 - 2\left(2C_2^4 - 4C_2^2C_3 + C_3^2 + C_2C_4\right)e_k^5 + \left(10C_2^5 - 30C_2^3C_3 + 12C_2^2C_4 - 7C_3C_4 + 3C_2\left(6C_3^2 - C_5\right)\right)e_k^6 + O\!\left(e_k^7\right). \tag{21}$$
Using a Taylor series once again, similar to (15), to expand $f(z_k)$ around ξ, we get
$$f(z_k) = f'(\xi)\Big[\left(C_2^3 - C_2C_3\right)e_k^4 - 2\left(2C_2^4 - 4C_2^2C_3 + C_3^2 + C_2C_4\right)e_k^5 + \left(10C_2^5 - 30C_2^3C_3 + 12C_2^2C_4 - 7C_3C_4 + 3C_2\left(6C_3^2 - C_5\right)\right)e_k^6\Big] + O\!\left(e_k^7\right). \tag{22}$$
Replacing (16) and (20) in the expressions of $u_k$ and $v_k$ of family (1), we have
$$u_k = C_2e_k - \left(3C_2^2 - 2C_3\right)e_k^2 + \left(8C_2^3 - 10C_2C_3 + 3C_4\right)e_k^3 + \left(-20C_2^4 + 37C_2^2C_3 - 8C_3^2 - 14C_2C_4 + 4C_5\right)e_k^4 + \left(48C_2^5 - 118C_2^3C_3 + 51C_2^2C_4 - 22C_3C_4 + C_2\left(55C_3^2 - 18C_5\right) + 5C_6\right)e_k^5 + O\!\left(e_k^6\right), \tag{23}$$
$$v_k = 1 + C_2e_k - 2\left(C_2^2 - C_3\right)e_k^2 + 3\left(C_2^3 - 2C_2C_3 + C_4\right)e_k^3 + \left(-3C_2^4 + 11C_2^2C_3 - 4C_3^2 - 8C_2C_4 + 4C_5\right)e_k^4 + \left(-10C_2^3C_3 + 14C_2^2C_4 + C_2\left(11C_3^2 - 10C_5\right) + 5\left(-2C_3C_4 + C_6\right)\right)e_k^5 + O\!\left(e_k^6\right). \tag{24}$$
Finally, substituting (16) and (21)–(24) in the third step of family (1), we get
$$x_{k+1} = \xi + (1-\alpha-\gamma)\,C_2\left(C_2^2 - C_3\right)e_k^4 + \left[(-4+6\alpha-\beta+5\gamma)C_2^4 + (8-10\alpha+\beta-9\gamma)C_2^2C_3 - 2(1-\alpha-\gamma)C_3^2 - 2(1-\alpha-\gamma)C_2C_4\right]e_k^5 + \left[(10-22\alpha+9\beta-14\gamma)C_2^5 - (30-53\alpha+15\beta-39\gamma)C_2^3C_3 + 2(6-8\alpha+\beta-7\gamma)C_2^2C_4 - 7(1-\alpha-\gamma)C_3C_4 + C_2\left((18-25\alpha+4\beta-21\gamma)C_3^2 - 3(1-\alpha-\gamma)C_5\right)\right]e_k^6 + O\!\left(e_k^7\right), \tag{25}$$
so the error equation is
$$e_{k+1} = (1-\alpha-\gamma)\,C_2\left(C_2^2 - C_3\right)e_k^4 + \left[(-4+6\alpha-\beta+5\gamma)C_2^4 + (8-10\alpha+\beta-9\gamma)C_2^2C_3 - 2(1-\alpha-\gamma)C_3^2 - 2(1-\alpha-\gamma)C_2C_4\right]e_k^5 + \left[(10-22\alpha+9\beta-14\gamma)C_2^5 - (30-53\alpha+15\beta-39\gamma)C_2^3C_3 + 2(6-8\alpha+\beta-7\gamma)C_2^2C_4 - 7(1-\alpha-\gamma)C_3C_4 + C_2\left((18-25\alpha+4\beta-21\gamma)C_3^2 - 3(1-\alpha-\gamma)C_5\right)\right]e_k^6 + O\!\left(e_k^7\right). \tag{26}$$
To cancel the factors accompanying $e_k^4$ and $e_k^5$ in (26), it must be satisfied that $\alpha + \gamma = 1$, $6\alpha - \beta + 5\gamma = 4$, and $10\alpha - \beta + 9\gamma = 8$. It is easy to show that this system of equations has infinitely many solutions, given by
$$\beta = 1 + \alpha \quad \text{and} \quad \gamma = 1 - \alpha, \tag{27}$$
where α is a free parameter. Therefore, replacing (27) in (26), we obtain
$$e_{k+1} = \left(6C_2^5 - 7C_2^3C_3 + C_2C_3^2\right)e_k^6 + O\!\left(e_k^7\right), \tag{28}$$
and the proof is finished. □
From Theorem 2, it follows that, if only parameter α is kept free in (1), the new triparametric family of iterative methods reduces to a uniparametric family with order of convergence six for any real or complex value of α, as long as (27) is satisfied. Therefore, the iterative expression of the new uniparametric family, which depends only on parameter α and which we will call the CMT(α) family, is defined as
$$y_k = x_k - \frac{f(x_k)}{f'(x_k)}, \qquad z_k = y_k - \frac{f(y_k)}{2f[x_k,y_k] - f'(x_k)}, \qquad x_{k+1} = z_k - \big(\alpha + (1+\alpha)u_k + (1-\alpha)v_k\big)\frac{f(z_k)}{f'(x_k)}, \tag{29}$$
where $u_k = 1 - \frac{f[x_k,y_k]}{f'(x_k)}$, $v_k = \frac{f'(x_k)}{f[x_k,y_k]}$, and $k = 0, 1, 2, \ldots$
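The sixth order of CMT(α) can be checked empirically with multiprecision arithmetic. The sketch below is ours (it uses Python's `decimal` module and the classical test polynomial $x^3 + 4x^2 - 10$, both arbitrary choices): it runs CMT(1) and estimates the computational order from consecutive differences, which should approach 6.

```python
# Empirical order check (ours) for CMT(alpha) on a sample cubic. High precision
# is required because sixth-order errors collapse extremely fast.
from decimal import Decimal, getcontext

getcontext().prec = 400  # enough digits to resolve errors down to ~1e-230

def f(x):  return x**3 + 4*x**2 - 10
def df(x): return 3*x**2 + 8*x

def cmt_step(x, alpha):
    """One CMT(alpha) iteration (29) for the cubic above, in Decimal arithmetic."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx
    fxy = (fx - f(y)) / (x - y)          # divided difference f[x_k, y_k]
    z = y - f(y) / (2*fxy - dfx)
    u = 1 - fxy / dfx
    v = dfx / fxy
    return z - (alpha + (1 + alpha)*u + (1 - alpha)*v) * f(z) / dfx

alpha = Decimal(1)
xs = [Decimal("1.5")]
for _ in range(4):
    xs.append(cmt_step(xs[-1], alpha))

# Computational order from the last three differences of iterates.
d = [abs(xs[i+1] - xs[i]) for i in range(len(xs) - 1)]
acoc = float((d[3] / d[2]).ln() / (d[2] / d[1]).ln())
```

With these settings the estimate lands close to the theoretical order six; lowering the precision below the size of the last error would silently break the estimate.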
In view of the results of the convergence analysis, from now on we only work with the CMT(α) family of iterative methods and, to select the best members of this family, we use the complex dynamics tools discussed in Section 3.

3. Complex Dynamics Behavior

This topic refers to the study of the behavior of a rational function associated with an iterative method or family. From the numerical point of view, the dynamical properties of this rational function give us important information about its stability and reliability. The parameter spaces of a family of methods, built from the critical points, allow us to understand the performance of the different members of the family, helping us in the selection of a particular one. The dynamical planes show the behavior of these particular methods in terms of the basins of attraction of their fixed points, periodic points, etc. A basin of attraction allows us to visually interpret how a method behaves depending on the initial estimate chosen.
In this section, we present the study of the complex dynamics of the CMT(α) family given in (29). To do this, we construct a rational operator associated with the family, applied to a generic low-degree nonlinear polynomial, and we analyze the stability and convergence of the corresponding fixed and critical points. Then, we construct the parameter spaces of the free critical points and generate dynamical planes of some methods of the family for values of α that are good and bad in terms of stability.

3.1. Rational Operator

The rational operator can be built on any nonlinear function; however, we construct this operator on quadratic polynomials, since the stability or instability criteria obtained for a method applied to these polynomials can be generalized to other nonlinear functions.
Proposition 1.
Let $p(x) = (x-a)(x-b)$ be a generic quadratic polynomial with roots $a, b \in \mathbb{R}$. Then, the rational operator $R_\alpha(x)$ associated with the CMT(α) family given in (29) and applied to $p(x)$ is
$$R_\alpha(x) = x^6\,\frac{x^6 + 5x^5 + 12x^4 + 19x^3 + 21x^2 + 14x + \alpha + 5}{(\alpha+5)x^6 + 14x^5 + 21x^4 + 19x^3 + 12x^2 + 5x + 1}, \tag{30}$$
with $\alpha \in \mathbb{C}$ an arbitrary parameter. Furthermore, if $\alpha \in \{-77, -1, 1, 5\}$, then $R_\alpha(x)$ simplifies as shown:
$$R_{-77}(x) = -x^6\,\frac{x^5 + 6x^4 + 18x^3 + 37x^2 + 58x + 72}{72x^5 + 58x^4 + 37x^3 + 18x^2 + 6x + 1}, \tag{31}$$
$$R_{-1}(x) = x^6\,\frac{x^4 + 3x^3 + 5x^2 + 6x + 4}{4x^4 + 6x^3 + 5x^2 + 3x + 1}, \tag{32}$$
$$R_{1}(x) = x^6\,\frac{x^4 + 4x^3 + 7x^2 + 8x + 6}{6x^4 + 8x^3 + 7x^2 + 4x + 1}, \tag{33}$$
$$R_{5}(x) = x^6\,\frac{x^4 + 5x^3 + 11x^2 + 14x + 10}{10x^4 + 14x^3 + 11x^2 + 5x + 1}. \tag{34}$$
Proof 
Let $p(x) = (x-a)(x-b)$ be a generic quadratic polynomial with roots $a, b \in \mathbb{R}$. We apply the iterative scheme given in (29) to $p(x)$ and obtain a rational function $A_{p,\alpha}(x)$ that depends on the roots $a, b$ and the parameter $\alpha \in \mathbb{C}$. Then, we use the Möbius transformation (see [7,8,9])
$$h(w) = \frac{w-a}{w-b},$$
which satisfies $h(\infty) = 1$, $h(a) = 0$, and $h(b) = \infty$, and we get
$$R_\alpha(x) = \left(h \circ A_{p,\alpha} \circ h^{-1}\right)(x) = x^6\,\frac{x^6 + 5x^5 + 12x^4 + 19x^3 + 21x^2 + 14x + \alpha + 5}{(\alpha+5)x^6 + 14x^5 + 21x^4 + 19x^3 + 12x^2 + 5x + 1}, \tag{35}$$
which depends only on the arbitrary parameter $\alpha \in \mathbb{C}$. Furthermore, if we factor the numerator and denominator of (35), it is easy to show that for $\alpha \in \{-77, -1, 1, 5\}$ some roots coincide and $R_\alpha(x)$ simplifies, as observed in Equations (31)–(34), and the proof is finished. □
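Proposition 1 lends itself to a numerical cross-check: apply one CMT(α) step to $p(x) = (x-a)(x-b)$, conjugate by $h$, and compare against the closed form (30). The roots $a, b$, the value of α, and the sample points in the sketch below are arbitrary choices of ours; the theorem also predicts that the result is independent of $a$ and $b$.

```python
# Numerical check (ours) that h o A_{p,alpha} o h^{-1} matches the closed form (30).

def conjugated_operator(x, a, b, alpha):
    """One CMT(alpha) step on p(w) = (w - a)(w - b), conjugated by the Moebius map h."""
    p  = lambda w: (w - a) * (w - b)
    dp = lambda w: 2*w - (a + b)
    w = (b*x - a) / (x - 1)              # h^{-1}(x)
    y = w - p(w) / dp(w)
    pwy = (p(w) - p(y)) / (w - y)        # divided difference p[w, y]
    z = y - p(y) / (2*pwy - dp(w))
    u = 1 - pwy / dp(w)
    v = dp(w) / pwy
    A = z - (alpha + (1 + alpha)*u + (1 - alpha)*v) * p(z) / dp(w)
    return (A - a) / (A - b)             # h(A)

def rational_operator(x, alpha):
    """Closed form R_alpha(x) from Proposition 1."""
    num = x**6 + 5*x**5 + 12*x**4 + 19*x**3 + 21*x**2 + 14*x + alpha + 5
    den = (alpha + 5)*x**6 + 14*x**5 + 21*x**4 + 19*x**3 + 12*x**2 + 5*x + 1
    return x**6 * num / den

alpha = 2.0
samples = [0.3 + 0.2j, -0.7 + 1.1j, 1.8 - 0.4j]
errs  = [abs(conjugated_operator(x,  1.0, 3.0, alpha) - rational_operator(x, alpha))
         for x in samples]
errs2 = [abs(conjugated_operator(x, -2.0, 0.5, alpha) - rational_operator(x, alpha))
         for x in samples]
```

Both error lists should be at rounding level, illustrating that the conjugated operator does not depend on the particular roots chosen.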
From Proposition 1, for four values of α the rational operator $R_\alpha(x)$ is simpler, so there are fewer fixed and critical points, which can improve the stability of the associated methods. This will be seen in Section 3.2 and Section 3.3.

3.2. Analysis and Stability of Fixed Points

We calculate the fixed points of the rational operator $R_\alpha(x)$ given in (30) and analyze their stability.
Proposition 2.
The fixed points of $R_\alpha(x)$ are the roots of the equation $R_\alpha(x) = x$. That is, $x = 0$, $x = \infty$ and the following strange fixed points:
  • $ex_1 = 1$ (if $\alpha \neq -77$), and
  • $ex_i(\alpha)$, $i = 2, \ldots, 11$, corresponding to the 10 roots of the polynomial $x^{10} + 6x^9 + 18x^8 + 37x^7 + 58x^6 + (67 - \alpha)x^5 + 58x^4 + 37x^3 + 18x^2 + 6x + 1$.
The total number of different fixed points varies with the value of α:
  • If $\alpha \in \mathbb{C}$ and $\alpha \notin \{-77, -1, 1, 5, 307\}$, then $R_\alpha(x)$ has 13 fixed points.
  • If $\alpha = -77$, then $ex_1 = 1$ is not a fixed point and $R_\alpha(x)$ has 12 fixed points.
  • If $\alpha \in \{-1, 1, 5\}$, then $R_\alpha(x)$ has 11 fixed points.
  • If $\alpha = 307$, then $ex_1 = ex_2 = ex_3 = 1$ and $R_\alpha(x)$ has 11 fixed points.
The pairs of conjugated strange fixed points, satisfying $ex_i = 1/ex_j$ for $i \neq j$, are $ex_2$ and $ex_3$, $ex_4$ and $ex_5$, $ex_6$ and $ex_9$, $ex_7$ and $ex_8$, and $ex_{10}$ and $ex_{11}$.
From Proposition 2, we establish that there is a minimum of 11 and a maximum of 13 fixed points. Of these, 0 and ∞ correspond to the roots of the original quadratic polynomial $p(x)$, and the strange fixed point $ex_1 = 1$ (if $\alpha \neq -77$) corresponds to the divergence of the original method before the Möbius transformation.
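The strange fixed points $ex_2, \ldots, ex_{11}$ can be located numerically as the roots of the degree-10 polynomial of Proposition 2. The sketch below is ours: it uses a plain Durand–Kerner simultaneous iteration (no external libraries) for the arbitrary sample value $\alpha = 2$ and checks the reciprocal-pair structure stated in the proposition.

```python
# Locating ex_2, ..., ex_11 (our sketch) as roots of the degree-10 factor of
# the fixed-point equation R_alpha(x) = x, for a sample alpha.

def poly(x, alpha):
    """Monic degree-10 polynomial whose roots are ex_2, ..., ex_11 (Horner form)."""
    coeffs = [1, 6, 18, 37, 58, 67 - alpha, 58, 37, 18, 6, 1]
    value = 0j
    for c in coeffs:
        value = value * x + c
    return value

def durand_kerner(alpha, degree=10, iters=300):
    """Simultaneous root iteration with the standard initial guesses (0.4+0.9i)^k."""
    zs = [(0.4 + 0.9j) ** k for k in range(degree)]
    for _ in range(iters):
        new_zs = []
        for i, z in enumerate(zs):
            denom = 1 + 0j
            for j, w in enumerate(zs):
                if i != j:
                    denom *= z - w
            new_zs.append(z - poly(z, alpha) / denom)
        zs = new_zs
    return zs

alpha = 2.0
roots = durand_kerner(alpha)
residual = max(abs(poly(r, alpha)) for r in roots)
# Proposition 2: strange fixed points come in reciprocal pairs ex_i = 1/ex_j.
pairing = max(min(abs(r - 1 / s) for s in roots) for r in roots)
```

The polynomial is palindromic, which is exactly why the roots close up under $x \mapsto 1/x$; the `pairing` quantity makes that visible numerically.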
Proposition 3.
The stability of the strange fixed point $ex_1 = 1$, for $\alpha \in \mathbb{C} \setminus \{-77\}$, verifies:
(i) If $\left|\frac{384}{77+\alpha}\right| < 1$, then $ex_1$ is an attractor.
(ii) If $\left|\frac{384}{77+\alpha}\right| > 1$, then $ex_1$ is a repulsor.
(iii) If $\left|\frac{384}{77+\alpha}\right| = 1$, then $ex_1$ is parabolic.
$ex_1$ is never a superattractor because $\frac{384}{77+\alpha} \neq 0$. The superattracting fixed points, which satisfy $|R'_\alpha(x)| = 0$, are $x = 0$, $x = \infty$ and the following strange fixed points:
  • $ex_4$, $ex_5$ for $\alpha = 0.949874 \pm 0.16946i$,
  • $ex_6$, $ex_9$ for $\alpha = 2.40285 \pm 1.11088i$, and
  • $ex_{10}$, $ex_{11}$ for $\alpha = 178.653$.
The repulsive fixed points, which satisfy $|R'_\alpha(x)| > 1$ for every value of α, are the strange fixed points $ex_2$ and $ex_3$.
It is clear that 0 and ∞ are always superattracting fixed points, but the stability of the remaining fixed points depends on the value of the parameter α. From Proposition 3, there are six strange fixed points that can become superattractors for certain values of α. For such values, a basin of attraction of the strange fixed point appears, which can prevent the method from converging to the solution.
Figure 1 shows the stability surface of the strange fixed point $ex_1$. In this figure, the zones of attraction (yellow surface) and repulsion (gray surface) can be observed, the first one being much larger than the second. Note that for values of α inside the disk, $ex_1$ is a repulsor, while for values of α outside the disk, $ex_1$ is an attractor. Therefore, it is in our interest to always work inside the disk: the strange fixed point $ex_1 = 1$ comes from the divergence of the original method and, therefore, it is better for the performance of the iterative method that this divergence be repulsive.
From Proposition 2, the study of the stability of the strange fixed points is halved, because each pair of conjugated strange fixed points exhibits the same stability characteristics. Furthermore, due to Proposition 3, $ex_2$ and $ex_3$ are always repulsors, regardless of the value of α. Thus, Figure 2 shows, for analysis, the stability surfaces of the remaining 8 strange fixed points, which can be attracting or repulsive depending on the value of α.
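The stability function of $ex_1$ in Proposition 3 can be cross-checked numerically: the multiplier $R'_\alpha(1)$ should equal $\frac{384}{77+\alpha}$. The finite-difference sketch below is ours, with $\alpha = 2$ as an arbitrary sample value.

```python
# Numerical cross-check (ours) of the multiplier of ex_1 = 1: R'_alpha(1) = 384/(77+alpha).

def R(x, alpha):
    """Rational operator (30) associated with the CMT(alpha) family."""
    num = x**6 + 5*x**5 + 12*x**4 + 19*x**3 + 21*x**2 + 14*x + alpha + 5
    den = (alpha + 5)*x**6 + 14*x**5 + 21*x**4 + 19*x**3 + 12*x**2 + 5*x + 1
    return x**6 * num / den

def multiplier_ex1(alpha, h=1e-6):
    """Central finite-difference estimate of R'_alpha at the strange fixed point 1."""
    return (R(1 + h, alpha) - R(1 - h, alpha)) / (2 * h)

alpha = 2.0
estimated = multiplier_ex1(alpha)
predicted = 384.0 / (77.0 + alpha)   # |predicted| > 1 here, so ex_1 is repulsive
```

Since $|384/(77+2)| \approx 4.86 > 1$, this sample α lies inside the repulsion disk of Figure 1, which is the desirable situation.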

3.3. Analysis of Critical Points

We calculate the critical points of the rational operator $R_\alpha(x)$ given in (30).
Proposition 4.
The critical points of $R_\alpha(x)$ are the roots of the equation $R'_\alpha(x) = 0$. That is, $x = 0$, $x = \infty$ and the following free critical points:
  • $cr_1 = -1$,
  • $cr_2 = i$,
  • $cr_3 = -i$, and
  • $cr_i(\alpha)$, $i = 4, \ldots, 9$, corresponding to the 6 roots of the polynomial $(6\alpha + 30)x^6 + (\alpha + 103)x^5 + (2\alpha + 206)x^4 + (6\alpha + 246)x^3 + (2\alpha + 206)x^2 + (\alpha + 103)x + 6\alpha + 30$.
The total number of different critical points varies with the value of α:
  • If $\alpha \in \mathbb{C}$ and $\alpha \notin \{-77, -5, -1, 1, 5\}$, then $R_\alpha(x)$ has 11 critical points.
  • If $\alpha \in \{-77, -5, -1\}$, then $R_\alpha(x)$ is simplified or reduced and has 9 critical points.
  • If $\alpha \in \{1, 5\}$, then $R_\alpha(x)$ is simplified and has 7 critical points.
The pairs of conjugated free critical points, satisfying $cr_i = 1/cr_j$ for $i \neq j$, are $cr_2$ and $cr_3$, $cr_4$ and $cr_5$, $cr_6$ and $cr_7$, and $cr_8$ and $cr_9$.
From Proposition 4, we establish that there is a minimum of 7 and a maximum of 11 critical points. Of these, 0 and ∞ correspond to the roots of the original quadratic polynomial $p(x)$. The free critical points $cr_1 = -1$, $cr_2 = i$, and $cr_3 = -i$ are pre-images of the strange fixed point $ex_1 = 1$; therefore, the stability of $cr_1$, $cr_2$, and $cr_3$ corresponds to the stability of $ex_1$ (see Section 3.2). Moreover, the dynamical study of the free critical points is halved, because each pair of conjugated free critical points presents the same stability characteristics. This will be seen in Section 3.4.

3.4. Parameter Spaces

The dynamical behavior of the operator $R_\alpha(x)$ depends on the value of the parameter α. The parameter space is defined as a mesh in the complex plane where each point corresponds to a different value of α. Its graphical representation shows, for the method of the CMT(α) family associated with each α, the result of the convergence analysis obtained by using one of the free critical points $cr(\alpha)$ given in Proposition 4 as the initial estimate. The resulting graphic is generated with the Matlab R2020a programming package at a resolution of 1000 × 1000 pixels. If a method converges to any of the roots, starting from $cr(\alpha)$, in at most 80 iterations with a tolerance of $10^{-3}$, the pixel is colored red; otherwise, the pixel is colored black.
Values of α belonging to the same connected component of the parameter space yield subsets of schemes with similar dynamical behavior. Therefore, it is interesting to find regions of the parameter space that are as stable as possible (red regions), because these values of α will give us the best members of the family in terms of numerical stability.
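A coarse, one-pixel version of this construction fits in a few lines. The sketch below is ours and uses the free critical point $cr_1 = -1$; since $cr_1$ maps exactly onto the strange fixed point $ex_1 = 1$, we perturb it slightly so that rounding alone does not pin the orbit at 1. The perturbation size, thresholds, and sample values of α are our choices.

```python
# Coarse membership test (ours) for the parameter plane built from cr_1 = -1.

def R(x, alpha):
    """Rational operator (30) associated with the CMT(alpha) family."""
    num = x**6 + 5*x**5 + 12*x**4 + 19*x**3 + 21*x**2 + 14*x + alpha + 5
    den = (alpha + 5)*x**6 + 14*x**5 + 21*x**4 + 19*x**3 + 12*x**2 + 5*x + 1
    return x**6 * num / den

def critical_orbit_converges(alpha, max_iter=500):
    """True if the perturbed orbit of cr_1 reaches one of the roots 0 or infinity."""
    x = complex(-1.0 + 1e-6)
    for _ in range(max_iter):
        if abs(x) < 1e-8 or abs(x) > 1e8:
            return True          # basin of 0 or of infinity: a "red" pixel
        x = R(x, alpha)
    return False                 # no convergence to the roots: a "black" pixel

# alpha = 0 lies in a stable region, while alpha = 400 makes ex_1 attracting,
# so the critical orbit stays trapped near 1 and never reaches a root.
stable, unstable = critical_orbit_converges(0.0), critical_orbit_converges(400.0)
```

Sweeping `alpha` over a complex grid and coloring each point by the returned flag reproduces, coarsely, the red/black structure described above.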
The CMT(α) family has a maximum of 9 free critical points. Of these, $cr_1$, $cr_2$, and $cr_3$ have the same parameter space, which corresponds to the stability surface of $ex_1$ (see Figure 1), because they are pre-images of this point. The remaining free critical points, $cr_4$ to $cr_9$, are conjugated in pairs (see Proposition 4), which gives rise to 3 different parameter spaces. These parameter spaces, named $P_1$ (for $x = cr_4, cr_5$), $P_2$ (for $x = cr_6, cr_7$), and $P_3$ (for $x = cr_8, cr_9$), are shown in Figure 3.
From Figure 3b,c, we observe that the parameter spaces $P_2$ and $P_3$ have similar characteristics; thus, we can select either of them for analysis.
On the one hand, if we choose values of α inside the stability regions (red regions) of the parameter spaces, for example, $\alpha = -1, 0, 1$, the methods associated with these parameters show good dynamical behavior in terms of numerical stability. Furthermore, note that these particular values of α simplify the iterative scheme of the CMT(α) family given in (29) by canceling a term in its third step. This is especially useful for improving the computational efficiency of the associated method, because the processing time required to reach the solution is reduced (see Section 4).
On the other hand, if we choose values of α outside the stability regions (black regions) of the parameter spaces, for example, $\alpha = -300, 200, 400$, the methods associated with these parameters show poor dynamical behavior in terms of numerical stability.
The methods associated with the values of α treated above are discussed in Section 3.5.

3.5. Dynamical Planes

We begin this section by describing how a dynamical plane is generated; it allows us to see the stability of a method for a specific value of α. The dynamical plane is defined as a mesh in the complex plane where each point corresponds to a different value of the initial estimate $x_0$. Its graphical representation shows the convergence of the method to any of the roots, starting from $x_0$, within a maximum of 50 iterations and with a tolerance of $10^{-3}$. Fixed points are illustrated with a white circle “○”, critical points with a white square “□”, and attractors with a white asterisk “∗”. Moreover, the basins of attraction are depicted in different colors. The resulting graphic is generated in Matlab R2020a with a resolution of 1000 × 1000 pixels.
Here, we study the stability of some CMT( α ) family methods through the use of dynamical planes. We will consider the methods proposed in Section 3.4 for values of α inside and outside the stability regions of the parameter spaces.
On the one hand, examples of methods inside the stability region are given for $\alpha = -1, 0, 1$. Their dynamical planes, with some convergence orbits in yellow, are shown in Figure 4. Note that all three methods present only two basins of attraction, associated with the roots: the basin of 0, colored in orange, and the basin of ∞, colored in blue. Furthermore, there are no black areas of non-convergence to the solution. Consequently, these methods show good dynamical behavior: they are very stable. Of these methods, the best member of the CMT(α) family is the one for $\alpha = 1$, as it has fewer strange fixed points and free critical points.
On the other hand, examples of methods outside the stability region are given for $\alpha = -300, 200, 400$. Their dynamical planes, with some convergence orbits in yellow, are shown in Figure 5. Note that all three methods present more than two basins of attraction; that is, there are basins of attraction that do not correspond to the roots. The basins of 0 and ∞ are colored in orange and blue, respectively, and the other basins are colored in black, red, and green. Figure 5a shows the convergence to an attracting periodic orbit of period 2, and Figure 5b,c show the convergence to an attracting strange fixed point. Furthermore, let us remark that in the three figures the basin of 0 is very small, due to the presence of the other basins of attraction, which reduces the chances of convergence to the solution. Likewise, there are black areas of slow convergence. Consequently, these methods have poor dynamical behavior: they are unstable.

4. Numerical Results

Here, we perform several numerical tests in order to check the theoretical convergence and stability results of the CMT(α) family obtained in the previous sections. To do this, we use some stable and unstable methods of (29). These methods are applied to five nonlinear test functions, whose expressions and corresponding roots are
$$f_1(x) = \sin(x) - x^2 + 1, \quad \xi \approx -0.6367326508,$$
$$f_2(x) = \cos(x) - x e^{x} + x^2, \quad \xi \approx 0.6391540963,$$
$$f_3(x) = x^3 + 4x^2 - 10, \quad \xi \approx 1.3652300134,$$
$$f_4(x) = x^2 + 2x + 5 - 2\sin(x) - x^2 + 3, \quad \xi \approx 2.3319676559,$$
$$f_5(x) = x^4 + \sin\!\left(\frac{\pi}{x^2}\right) - \frac{3}{16}, \quad \xi \approx 0.9059869793.$$
Thus, we performed two experiments. In the first experiment, we carried out an efficiency analysis of the CMT(α) family through a comparative study between one of its stable methods and five other methods from the literature: Newton's method of order 2, Ostrowski's method of order 4, and three methods of order 6 proposed by Alzahrani et al. in [10] (ABA), Chun and Ham in [11] (CH), and Amat et al. in [12] (AHR). In the second experiment, we carried out a stability analysis of the CMT(α) family using six of its methods, obtained with three good and three bad values of the parameter α in terms of stability.
In the numerical tests, we start the iterations with different initial estimates: close ($x_0 \approx \xi$), far ($x_0 \approx 10\xi$), and very far ($x_0 \approx 100\xi$) from the root ξ. This allows us to measure, to some extent, how demanding the methods are with respect to the initial estimation when finding a solution.
The calculations are carried out in the Matlab R2020a programming package using variable precision arithmetic with 200 digits of mantissa. For each method, we analyze the number of iterations (iter) required to converge to the solution, so that the stopping criteria $|x_{k+1} - x_k| < 10^{-100}$ or $|f(x_{k+1})| < 10^{-100}$ are satisfied. Note that $|x_{k+1} - x_k|$ represents the error estimate between two consecutive iterations and $|f(x_{k+1})|$ is the residual error of the nonlinear test function. This stopping criterion does not need the exact solution, unlike the absolute error, and differs from more recent ones, such as CESTAC (see [13]), in that it requires no additional calculations or functional evaluations: $f(x_{k+1})$ is needed for the following iteration anyway, and its absolute value is an efficient indicator of the proximity to the exact root, where f vanishes. Indeed, although a precision of one hundred exact digits is not usually necessary in applications, we employ this value in the stopping criterion because it is useful to check the robustness and effectiveness of the numerical methods.
To check the theoretical order of convergence (p), we calculate the approximate computational order of convergence (ACOC) introduced by Cordero and Torregrosa in [14]. In the numerical results presented below, if the entries of the ACOC vector do not stabilize throughout the iterative process, it is marked as “-”; if a method does not reach convergence in a maximum of 50 iterations, it is marked as “nc”.
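The ACOC is computed directly from three consecutive differences of iterates. A minimal helper (ours) is shown below; as an assumed example, it is exercised on a Newton sequence for $x^2 - 2$, whose computational order should approach 2.

```python
# Minimal ACOC helper (ours), following the Cordero-Torregrosa estimate
# p_k = ln(|x_{k+1}-x_k| / |x_k-x_{k-1}|) / ln(|x_k-x_{k-1}| / |x_{k-1}-x_{k-2}|).
import math

def acoc(xs):
    """ACOC estimates from a list of iterates; one value per usable quadruple."""
    d = [abs(xs[i+1] - xs[i]) for i in range(len(xs) - 1)]
    return [math.log(d[k+1] / d[k]) / math.log(d[k] / d[k-1])
            for k in range(1, len(d) - 1)]

# Example: Newton's method on f(x) = x^2 - 2 (order 2), starting at 1.5.
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x*x - 2.0) / (2.0 * x))
orders = acoc(xs)
```

In double precision only the first few entries are meaningful, since later differences fall below rounding level; this is why the paper relies on variable precision arithmetic.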
To illustrate the computational efficiency of each method, we measure the processing time (tcpu), in seconds, required by the iterative scheme to converge to the solution. This value is determined as the arithmetic mean of 10 runs of the method.

4.1. First Experiment: Efficiency Analysis of CMT ( α ) Family

In this experiment, we carried out a comparative study between a stable method of the CMT(α) family and the methods of Newton, Ostrowski, ABA, CH, and AHR, in order to contrast their numerical performance on nonlinear equations. We consider as a stable member of the CMT(α) family the method associated with $\alpha = 1$, that is, CMT(1).
Thereby, Table 1, Table 2 and Table 3 show the numerical results of the six methods for close, far, and very far initial estimates, respectively. Furthermore, Figure 6 summarizes these results in terms of the number of iterations (iter) and the processing time (tcpu).
Therefore, from the results of the first experiment we conclude that the CMT(α) family shows excellent numerical performance when a stable member (α = 1) is taken as representative. This conclusion is based on the following observations from Table 1, Table 2 and Table 3: the CMT(1) method attains the lowest error and the fewest iterations (iter). The mean execution time (tcpu), however, varies with the nonlinear test function and with the complexity that the iterative scheme exhibits when applied to it. In several cases, the tcpu of the CMT(1) method is significantly lower than that of the sixth-order ABA, CH, and AHR methods. The theoretical order of convergence is also confirmed by the ACOC, which is close to 6.

4.2. Second Experiment: Stability Analysis of CMT( α ) Family

In this experiment, we carried out a stability analysis of the CMT(α) family, considering values of α inside the stability regions of the parameter spaces (α = −1, 0, 1) and outside them (α = −300, 200, 400).
Thus, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9 show the numerical performance of the iterative methods associated with these values of α for close, far, and very far initial estimates. The results for α = 1 were already presented in the first experiment; they are shown again here because of the different conditions under which each experiment was performed.
On the one hand, from Table 4, Table 5 and Table 6 we observe that the methods associated with α = −1, 0, 1 always converge to the solution, although the number of iterations (iter) needed varies with the initial estimate and the nonlinear test function. For estimates close to the root, the methods converge to ξ in a minimum of 3 and a maximum of 7 iterations. When the initial guess is far from the root, they converge to ξ in a minimum of 4 and a maximum of 22 iterations. When the starting estimates are very far from the root, the iterative schemes converge to ξ in a minimum of 6 and a maximum of 37 iterations.
On the other hand, from the results shown in Table 7, Table 8 and Table 9, we see that the methods associated with α = −300, 200, 400 do not always converge to the solution, confirming the conclusions obtained in the dynamical analysis. Their convergence depends strongly on the initial estimate and on the nonlinear test function used. For estimates close to the root, these methods fail to converge for up to two of the test functions; for estimates far and very far from the root, they can fail to converge for every test function.
Consequently, we conclude that the methods for α = −1, 0, 1 are stable, have the lowest processing times (tcpu), and always converge to the solution for any initial estimate and nonlinear test function used. The methods for α = −300, 200, 400 are unstable and chaotic, have the highest tcpu, and tend not to converge to the solution, depending on the initial estimate and the nonlinear test function used. This verifies the theoretical results obtained in the previous sections about the dynamical behavior of the CMT(α) family.

5. Conclusions

In this paper, a new family of iterative methods for solving nonlinear equations was designed starting from Ostrowski's scheme, adding a Newton step with a “frozen” derivative and using a divided difference operator. This family, named CMT(α, β, γ), has a three-step iterative expression and three arbitrary parameters, which can take any real or complex value.
In the convergence analysis of the new family, we obtained order of convergence four, the same as Ostrowski's method. However, we managed to raise the order to six by setting the parameters β and γ as functions of α, resulting in a uniparametric CMT(α) family.
In the dynamical study, we constructed parameter spaces of the free critical points of the rational operator associated with the uniparametric family. These parameter spaces allowed us to understand the performance of the different members of the family, helping us to choose stable (for α = −1, 0, 1, …) and unstable (for α = −300, 200, 400, …) methods. Furthermore, we generated dynamical planes to show the behavior of these particular methods.
From the numerical results, the order of convergence is verified by the ACOC, which is close to 6. The CMT(α) family proved to have excellent numerical performance when stable members are taken as representatives. In general, this family attains low errors and few iterations to converge to the solution. The processing time (tcpu), however, varies with the nonlinear test function and with the complexity that the iterative scheme exhibits when applied to it. In several cases, the tcpu of the stable methods is significantly lower than that of other sixth-order methods developed so far. Furthermore, the methods for α = −1, 0, 1 proved to be stable, have the lowest tcpu, and always converge to the solution for any initial estimate and nonlinear test function used, whereas the methods for α = −300, 200, 400 proved to be unstable and chaotic, have the highest tcpu, and tend not to converge to the solution, depending on the initial estimate and the nonlinear test function used. This verifies the theoretical results obtained in the convergence analysis and dynamical study of the CMT(α) family.

Author Contributions

Conceptualization, A.C. and J.R.T.; methodology, A.C. and M.M.-M.; software, M.M.-M.; validation, M.M.-M.; formal analysis, J.R.T.; investigation, A.C.; writing—original draft preparation, M.M.-M.; writing—review and editing, A.C.; supervision, J.R.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by Ministerio de Ciencia, Innovación y Universidades PGC2018-095896-B-C22.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the anonymous reviewers for their comments and suggestions, as they have improved the final version of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Neta, B. Numerical Methods for the Solution of Equations; Net-A-Sof: Monterey, CA, USA, 1983. [Google Scholar]
  2. Petković, M.; Neta, B.; Petković, L.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations, 1st ed.; Academic Press: Boston, MA, USA, 2013. [Google Scholar]
  3. Amat, S.; Busquier, S. Advances in Iterative Methods for Nonlinear Equations; Springer: Cham, Switzerland, 2017. [Google Scholar]
  4. Artidiello, S.; Cordero, A.; Torregrosa, J.; Penkova, M. Design and multidimensional extension of iterative methods for solving nonlinear problems. Appl. Math. Comput. 2017, 293, 194–203. [Google Scholar] [CrossRef]
  5. Hunt, B.R.; Ott, E. Defining chaos. Chaos Interdiscip. J. Nonlinear Sci. 2015, 25. [Google Scholar] [CrossRef] [PubMed]
  6. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  7. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599. [Google Scholar] [CrossRef]
  8. Amat, S.; Busquier, S.; Plaza, S. Review of some iterative root-finding methods from a dynamical point of view. SCIENTIA Ser. A Math. Sci. 2004, 10, 3–35. [Google Scholar]
  9. Blanchard, P. Complex analytic dynamics on the Riemann sphere. Bull. Am. Math. Soc. 1984, 11, 85–141. [Google Scholar] [CrossRef] [Green Version]
  10. Alzahrani, A.; Behl, R.; Alshomrani, A. Some higher-order iteration functions for solving nonlinear models. Appl. Math. Comput. 2018, 334, 80–93. [Google Scholar] [CrossRef]
  11. Chun, C.; Ham, Y. Some sixth-order variants of Ostrowski root-finding methods. Appl. Math. Comput. 2007, 193, 389–394. [Google Scholar] [CrossRef]
  12. Amat, S.; Hernández, M.; Romero, N. Semilocal convergence of a sixth order iterative method for quadratic equations. Appl. Numer. Math. 2012, 62, 833–841. [Google Scholar] [CrossRef]
  13. Noeiaghdam, S.; Sidorov, D.; Zamyshlyaeva, A.; Tynda, A.; Dreglea, A. A valid dynamical control on the reverse osmosis system using the CESTAC method. Mathematics 2021, 9, 48. [Google Scholar] [CrossRef]
  14. Cordero, A.; Torregrosa, J.R. Variants of Newton’s Method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
Figure 1. Stability surface of the fixed point ex_1 = 1 (in gray, the complex region where the fixed point is repulsive, being attracting in the rest).
Figure 2. Stability surfaces of 8 strange fixed points (in gray color, the complex area where each fixed point is repulsive, being attracting in the rest).
Figure 3. Parameter spaces of free critical points (in red color, the complex area where the corresponding critical point converges to 0 or ∞, that is, the stability region).
Figure 4. Dynamical planes for methods inside the stability region (basin of attraction of 0 in orange color; in blue color, the basin of ∞).
Figure 5. Dynamical planes for methods outside the stability region (basin of attraction of 0 in orange color; in blue color, the basin of ∞; in green or red color, the basin of attracting strange fixed points).
Figure 6. Numerical results of the first experiment.
Table 1. Numerical performance of iterative methods in nonlinear equations for x_0 close to ξ.

| Function | Method | \|x_{k+1} − x_k\| | \|f(x_{k+1})\| | iter | ACOC | tcpu |
|---|---|---|---|---|---|---|
| f_1, x_0 = −1.6 | CMT(1) | 7.6395×10^−19 | 1.8769×10^−110 | 3 | 5.5148 | 0.1257 |
|  | Newton | 3.2063×10^−84 | 7.2243×10^−168 | 8 | 2 | 0.1225 |
|  | Ostrowski | 3.6277×10^−39 | 2.1775×10^−155 | 4 | 3.9988 | 0.1036 |
|  | ABA | 6.3941×10^−19 | 5.2542×10^−111 | 3 | 5.5472 | 0.1201 |
|  | CH | 3.9619×10^−19 | 3.095×10^−112 | 3 | 5.5336 | 0.1173 |
|  | AHR | 6.9779×10^−86 | 0 | 4 | 5.9989 | 0.1381 |
| f_2, x_0 = −0.4 | CMT(1) | 1.1915×10^−19 | 3.2336×10^−114 | 4 | 6.0717 | 0.2913 |
|  | Newton | 6.977×10^−101 | 9.2573×10^−201 | 10 | 2 | 0.2747 |
|  | Ostrowski | 3.6009×10^−28 | 5.8295×10^−111 | 4 | 3.9993 | 0.1899 |
|  | ABA | 6.593×10^−46 | 0 | 4 | 6.0055 | 0.4278 |
|  | CH | 4.0133×10^−50 | 6.8135×10^−208 | 5 | 5.9951 | 0.5636 |
|  | AHR | 1.4561×10^−73 | 0 | 10 | 5.9991 | 0.7038 |
| f_3, x_0 = 0.4 | CMT(1) | 5.868×10^−64 | 0 | 7 | 5.9957 | 0.4654 |
|  | Newton | 3.2665×10^−83 | 8.6382×10^−165 | 10 | 2 | 0.2818 |
|  | Ostrowski | 1.3665×10^−51 | 5.077×10^−204 | 5 | 3.9999 | 0.1682 |
|  | ABA | 2.5625×10^−27 | 4.3729×10^−160 | 5 | 5.8933 | 0.224 |
|  | CH | 2.4971×10^−24 | 4.0266×10^−142 | 9 | 5.8498 | 0.4374 |
|  | AHR | 2.1589×10^−36 | 0 | 12 | 5.9521 | 0.5912 |
| f_4, x_0 = 1.3 | CMT(1) | 1.2572×10^−32 | 3.2096×10^−195 | 3 | 5.717 | 0.6075 |
|  | Newton | 7.2803×10^−95 | 1.2821×10^−189 | 7 | 2 | 0.4947 |
|  | Ostrowski | 1.0395×10^−64 | 1.5574×10^−207 | 4 | 4 | 0.535 |
|  | ABA | 4.7735×10^−26 | 8.9685×10^−156 | 3 | 5.9419 | 0.598 |
|  | CH | 1.6112×10^−32 | 1.7919×10^−194 | 3 | 5.6961 | 0.6046 |
|  | AHR | 3.0816×10^−22 | 1.1423×10^−131 | 3 | 5.6812 | 0.4497 |
| f_5, x_0 = −1.9 | CMT(1) | 2.5535×10^−53 | 6.4242×10^−207 | 6 | 5.9132 | 1.2222 |
|  | Newton | 3.4167×10^−84 | 8.1562×10^−167 | 8 | 2 | 0.6295 |
|  | Ostrowski | 4.1408×10^−38 | 2.4627×10^−142 | 4 | 4.0146 | 0.5521 |
|  | ABA | 5.637×10^−65 | 1.9467×10^−208 | 5 | 6.0107 | 0.9561 |
|  | CH | 1.0828×10^−43 | 1.0707×10^−207 | 6 | 6.2212 | 1.1314 |
|  | AHR | 5.6988×10^−26 | 4.6285×10^−106 | 4 | 5.7855 | 0.5939 |
Table 2. Numerical performance of iterative methods in nonlinear equations for x_0 far from ξ.

| Function | Method | \|x_{k+1} − x_k\| | \|f(x_{k+1})\| | iter | ACOC | tcpu |
|---|---|---|---|---|---|---|
| f_1, x_0 = −6 | CMT(1) | 4.3721×10^−23 | 6.595×10^−136 | 4 | 5.7093 | 0.163 |
|  | Newton | 4.549×10^−85 | 1.4542×10^−169 | 10 | 2 | 0.1527 |
|  | Ostrowski | 7.5454×10^−40 | 4.0753×10^−158 | 5 | 3.9989 | 0.1487 |
|  | ABA | 6.5662×10^−25 | 6.1621×10^−147 | 4 | 5.775 | 0.1607 |
|  | CH | 9.4464×10^−24 | 5.6868×10^−140 | 4 | 5.7326 | 0.1811 |
|  | AHR | 9.9786×10^−85 | 0 | 5 | 5.9988 | 0.1578 |
| f_2, x_0 = −6 | CMT(1) | 6.0086×10^−60 | 0 | 16 | 5.9975 | 0.9939 |
|  | Newton | 2.9103×10^−57 | 1.6107×10^−113 | 12 | 2 | 0.2714 |
|  | Ostrowski | 1.7318×10^−82 | 6.8135×10^−208 | 8 | 4 | 0.3234 |
|  | ABA | 2.9737×10^−18 | 6.4713×10^−106 | 10 | - | 0.6234 |
|  | CH | 4.8167×10^−51 | 0 | 14 | 5.9955 | 0.8618 |
|  | AHR | 1.0711×10^−58 | 0 | 6 | 5.9971 | 0.3006 |
| f_3, x_0 = −14 | CMT(1) | 4.2145×10^−24 | 1.1268×10^−140 | 10 | 5.8416 | 0.4353 |
|  | Newton | nc | nc | nc | nc | nc |
|  | Ostrowski | 2.3325×10^−76 | 0 | 37 | 4 | 0.9868 |
|  | ABA | 9.1479×10^−18 | 9.0509×10^−103 | 24 | 6.2542 | 1.1023 |
|  | CH | 5.0027×10^−98 | 0 | 17 | 5.9997 | 0.7088 |
|  | AHR | nc | nc | nc | nc | nc |
| f_4, x_0 = −23 | CMT(1) | 4.6353×10^−98 | 2.3361×10^−207 | 5 | 5.9995 | 0.978 |
|  | Newton | 9.6577×10^−79 | 1.3216×10^−156 | 10 | 2 | 0.683 |
|  | Ostrowski | 8.1672×10^−31 | 5.8293×10^−122 | 5 | 3.9956 | 0.6646 |
|  | ABA | 1.3364×10^−69 | 2.3361×10^−207 | 5 | 5.9961 | 0.9691 |
|  | CH | 4.543×10^−99 | 2.3361×10^−207 | 5 | 5.9996 | 0.9597 |
|  | AHR | 1.7793×10^−56 | 2.3361×10^−207 | 5 | 5.9898 | 0.6951 |
| f_5, x_0 = −9 | CMT(1) | 3.9117×10^−41 | 9.6363×10^−207 | 5 | 6.1766 | 0.9564 |
|  | Newton | 1.2423×10^−55 | 1.9722×10^−109 | 9 | 2 | 0.615 |
|  | Ostrowski | 2.9225×10^−29 | 1.7446×10^−112 | 5 | 3.9821 | 0.6514 |
|  | ABA | 2.0254×10^−31 | 5.3498×10^−181 | 5 | 5.5153 | 0.9702 |
|  | CH | 6.524×10^−29 | 3.5709×10^−159 | 6 | 6.2558 | 1.1451 |
|  | AHR | 1.6141×10^−41 | 9.7687×10^−148 | 12 | 5.8222 | 1.592 |
Table 3. Numerical performance of iterative methods in nonlinear equations for x_0 very far from ξ.

| Function | Method | \|x_{k+1} − x_k\| | \|f(x_{k+1})\| | iter | ACOC | tcpu |
|---|---|---|---|---|---|---|
| f_1, x_0 = −60 | CMT(1) | 6.8586×10^−80 | 0 | 6 | 5.9981 | 0.2413 |
|  | Newton | 3.1826×10^−73 | 7.1179×10^−146 | 13 | 2 | 0.2003 |
|  | Ostrowski | 1.2267×10^−100 | 0 | 7 | 4 | 0.1793 |
|  | ABA | 1.3417×10^−77 | 0 | 6 | 5.9978 | 0.273 |
|  | CH | 1.4971×10^−82 | 0 | 6 | 5.9984 | 0.2776 |
|  | AHR | 8.7686×10^−61 | 3.8934×10^−208 | 7 | 5.992 | 0.2246 |
| f_2, x_0 = −60 | CMT(1) | 5.9893×10^−27 | 5.2167×10^−158 | 6 | 6.0379 | 0.3503 |
|  | Newton | 1.6537×10^−59 | 5.201×10^−118 | 15 | 2 | 0.3125 |
|  | Ostrowski | 8.0088×10^−72 | 0 | 8 | 4 | 0.2956 |
|  | ABA | 2.9305×10^−56 | 0 | 10 | 6.0024 | 0.5679 |
|  | CH | 8.4413×10^−48 | 0 | 7 | 5.994 | 0.399 |
|  | AHR | 6.4484×10^−60 | 0 | 7 | 5.9974 | 0.318 |
| f_3, x_0 = −140 | CMT(1) | 3.7145×10^−76 | 0 | 13 | 5.9983 | 0.6398 |
|  | Newton | nc | nc | nc | nc | nc |
|  | Ostrowski | 6.9267×10^−37 | 3.3507×10^−145 | 49 | 3.999 | 1.216 |
|  | ABA | 7.5885×10^−54 | 0 | 11 | 5.9907 | 0.4246 |
|  | CH | 4.8283×10^−28 | 2.1045×10^−164 | 21 | 5.8989 | 0.8005 |
|  | AHR | 3.4494×10^−58 | 6.2295×10^−207 | 12 | 5.9928 | 0.3997 |
| f_4, x_0 = −230 | CMT(1) | 9.2602×10^−68 | 2.3361×10^−207 | 6 | 5.9954 | 1.0547 |
|  | Newton | 8.9492×10^−96 | 1.1348×10^−190 | 14 | 2 | 0.8454 |
|  | Ostrowski | 7.8874×10^−37 | 5.0705×10^−146 | 7 | 3.9985 | 0.8196 |
|  | ABA | 2.5587×10^−21 | 9.9754×10^−126 | 6 | 6.2382 | 1.0537 |
|  | CH | 2.2055×10^−60 | 2.3361×10^−207 | 6 | 6.0079 | 1.0555 |
|  | AHR | nc | nc | nc | nc | nc |
| f_5, x_0 = −90 | CMT(1) | 2.8545×10^−38 | 1.0707×10^−207 | 6 | 6.2665 | 1.0249 |
|  | Newton | 9.6307×10^−58 | 6.4804×10^−114 | 12 | 2 | 0.7181 |
|  | Ostrowski | 6.1241×10^−52 | 6.9183×10^−202 | 8 | 3.9999 | 0.9291 |
|  | ABA | 1.2306×10^−20 | 2.1729×10^−114 | 6 | 6.8491 | 1.0378 |
|  | CH | 3.4946×10^−26 | 1.6995×10^−147 | 6 | 5.7567 | 1.0301 |
|  | AHR | 8.5778×10^−51 | 1.0901×10^−182 | 25 | 5.902 | 3.0345 |
Table 4. Numerical performance of CMT(−1) method in nonlinear equations.

| Function | x_0 | \|x_{k+1} − x_k\| | \|f(x_{k+1})\| | iter | ACOC | tcpu |
|---|---|---|---|---|---|---|
| Close to ξ |  |  |  |  |  |  |
| f_1 | −1.6 | 1.8646×10^−19 | 2.7591×10^−114 | 3 | 5.5559 | 0.1216 |
| f_2 | −0.4 | 1.3898×10^−46 | 0 | 4 | 6.0038 | 0.2775 |
| f_3 | 0.4 | 9.0583×10^−50 | 0 | 5 | 5.9873 | 0.2321 |
| f_4 | 1.3 | 1.9771×10^−32 | 7.3778×10^−194 | 3 | 5.6791 | 0.6628 |
| f_5 | −1.9 | 4.057×10^−47 | 1.606×10^−206 | 6 | 6.0586 | 1.2462 |
| Far from ξ |  |  |  |  |  |  |
| f_1 | −6 | 1.4965×10^−24 | 7.3749×10^−145 | 4 | 5.7594 | 0.1606 |
| f_2 | −6 | 7.3835×10^−26 | 1.1396×10^−151 | 14 | - | 0.8807 |
| f_3 | −14 | 1.009×10^−18 | 1.3833×10^−108 | 22 | 5.7241 | 0.9937 |
| f_4 | −23 | 3.2059×10^−100 | 2.3361×10^−207 | 5 | 5.9996 | 1.0545 |
| f_5 | −9 | 4.5305×10^−85 | 1.168×10^−207 | 7 | 6.0034 | 1.446 |
| Very far from ξ |  |  |  |  |  |  |
| f_1 | −60 | 1.264×10^−85 | 0 | 6 | 5.9988 | 0.2385 |
| f_2 | −60 | 8.3236×10^−19 | 2.3391×10^−109 | 9 | 6.0055 | 0.5682 |
| f_3 | −140 | 6.8807×10^−19 | 1.3913×10^−109 | 10 | 5.7297 | 0.4723 |
| f_4 | −230 | 1.1069×10^−48 | 2.3361×10^−207 | 6 | 6.0195 | 1.2992 |
| f_5 | −90 | 2.3226×10^−65 | 1.168×10^−207 | 6 | 5.9808 | 1.3969 |
Table 5. Numerical performance of CMT(0) method in nonlinear equations.

| Function | x_0 | \|x_{k+1} − x_k\| | \|f(x_{k+1})\| | iter | ACOC | tcpu |
|---|---|---|---|---|---|---|
| Close to ξ |  |  |  |  |  |  |
| f_1 | −1.6 | 3.9254×10^−19 | 2.9279×10^−112 | 3 | 5.5334 | 0.1219 |
| f_2 | −0.4 | 1.0637×10^−28 | 1.328×10^−168 | 4 | 6.0263 | 0.2689 |
| f_3 | 0.4 | 4.828×10^−30 | 2.1036×10^−176 | 6 | 5.9174 | 0.2482 |
| f_4 | 1.3 | 1.6112×10^−32 | 1.7919×10^−194 | 3 | 5.6961 | 0.6771 |
| f_5 | −1.9 | 3.0345×10^−27 | 1.8831×10^−154 | 6 | 6.5022 | 1.2896 |
| Far from ξ |  |  |  |  |  |  |
| f_1 | −6 | 8.7386×10^−24 | 3.5638×10^−140 | 4 | 5.7334 | 0.1602 |
| f_2 | −6 | 6.7903×10^−26 | 8.9867×10^−152 | 9 | - | 0.6206 |
| f_3 | −14 | 8.1206×10^−24 | 4.7631×10^−139 | 11 | 5.8407 | 0.491 |
| f_4 | −23 | 4.2612×10^−99 | 2.3361×10^−207 | 5 | 5.9996 | 1.0585 |
| f_5 | −9 | 4.1362×10^−41 | 2.3361×10^−207 | 6 | 5.9265 | 1.2619 |
| Very far from ξ |  |  |  |  |  |  |
| f_1 | −60 | 1.1445×10^−82 | 0 | 6 | 5.9985 | 0.2395 |
| f_2 | −60 | 3.277×10^−54 | 0 | 7 | 5.9966 | 0.4971 |
| f_3 | −140 | 3.695×10^−64 | 0 | 37 | 5.9959 | 1.6934 |
| f_4 | −230 | 5.2233×10^−59 | 2.3361×10^−207 | 6 | 6.0088 | 1.2644 |
| f_5 | −90 | 8.2696×10^−19 | 2.984×10^−103 | 6 | 5.5602 | 1.2865 |
Table 6. Numerical performance of CMT(1) method in nonlinear equations.

| Function | x_0 | \|x_{k+1} − x_k\| | \|f(x_{k+1})\| | iter | ACOC | tcpu |
|---|---|---|---|---|---|---|
| Close to ξ |  |  |  |  |  |  |
| f_1 | −1.6 | 7.6395×10^−19 | 1.8769×10^−110 | 3 | 5.5148 | 0.124 |
| f_2 | −0.4 | 1.1915×10^−19 | 3.2336×10^−114 | 4 | 6.0717 | 0.2474 |
| f_3 | 0.4 | 5.868×10^−64 | 0 | 7 | 5.9957 | 0.3128 |
| f_4 | 1.3 | 1.2572×10^−32 | 3.2096×10^−195 | 3 | 5.717 | 0.7052 |
| f_5 | −1.9 | 2.5535×10^−53 | 6.4242×10^−207 | 6 | 5.9132 | 1.3006 |
| Far from ξ |  |  |  |  |  |  |
| f_1 | −6 | 4.3721×10^−23 | 6.595×10^−136 | 4 | 5.7093 | 0.1619 |
| f_2 | −6 | 6.0086×10^−60 | 0 | 16 | 5.9975 | 1.0008 |
| f_3 | −14 | 4.2145×10^−24 | 1.1268×10^−140 | 10 | 5.8416 | 0.446 |
| f_4 | −23 | 4.6353×10^−98 | 2.3361×10^−207 | 5 | 5.9995 | 1.0401 |
| f_5 | −9 | 3.9117×10^−41 | 9.6363×10^−207 | 5 | 6.1766 | 1.0393 |
| Very far from ξ |  |  |  |  |  |  |
| f_1 | −60 | 6.8586×10^−80 | 0 | 6 | 5.9981 | 0.2654 |
| f_2 | −60 | 5.9893×10^−27 | 5.2167×10^−158 | 6 | 6.0379 | 0.3777 |
| f_3 | −140 | 3.7145×10^−76 | 0 | 13 | 5.9983 | 0.5816 |
| f_4 | −230 | 9.2602×10^−68 | 2.3361×10^−207 | 6 | 5.9954 | 1.2349 |
| f_5 | −90 | 2.8545×10^−38 | 1.0707×10^−207 | 6 | 6.2665 | 1.2801 |
Table 7. Numerical performance of CMT(−300) method in nonlinear equations.

| Function | x_0 | \|x_{k+1} − x_k\| | \|f(x_{k+1})\| | iter | ACOC | tcpu |
|---|---|---|---|---|---|---|
| Close to ξ |  |  |  |  |  |  |
| f_1 | −1.6 | 1.454×10^−49 | 3.8934×10^−208 | 4 | 6.0127 | 0.1743 |
| f_2 | −0.4 | 7.7×10^−75 | 0 | 40 | 6.0006 | 2.5385 |
| f_3 | 0.4 | nc | nc | nc | nc | nc |
| f_4 | 1.3 | 4.1603×10^−29 | 3.3384×10^−172 | 3 | 5.3365 | 0.621 |
| f_5 | −1.9 | 1.6341×10^−58 | 2.5794×10^−206 | 5 | 5.7418 | 1.1787 |
| Far from ξ |  |  |  |  |  |  |
| f_1 | −6 | nc | nc | nc | nc | nc |
| f_2 | −6 | nc | nc | nc | nc | nc |
| f_3 | −14 | 3.9697×10^−26 | 4.0419×10^−151 | 7 | 6.0709 | 0.328 |
| f_4 | −23 | nc | nc | nc | nc | nc |
| f_5 | −9 | 2.8218×10^−76 | 6.0348×10^−207 | 8 | 5.9788 | 1.5886 |
| Very far from ξ |  |  |  |  |  |  |
| f_1 | −60 | 4.4607×10^−32 | 3.3463×10^−187 | 9 | 6.0453 | 0.3717 |
| f_2 | −60 | nc | nc | nc | nc | nc |
| f_3 | −140 | 1.7723×10^−57 | 0 | 21 | 6.0044 | 0.9822 |
| f_4 | −230 | 1.249×10^−29 | 2.4449×10^−175 | 39 | 5.1386 | 7.4938 |
| f_5 | −90 | 6.6349×10^−43 | 4.8668×10^−209 | 22 | 6.0131 | 4.4952 |
Table 8. Numerical performance of CMT(200) method in nonlinear equations.

| Function | x_0 | \|x_{k+1} − x_k\| | \|f(x_{k+1})\| | iter | ACOC | tcpu |
|---|---|---|---|---|---|---|
| Close to ξ |  |  |  |  |  |  |
| f_1 | −1.6 | 2.1496×10^−56 | 0 | 4 | 5.9921 | 0.1499 |
| f_2 | −0.4 | nc | nc | nc | nc | nc |
| f_3 | 0.4 | nc | nc | nc | nc | nc |
| f_4 | 1.3 | 4.0045×10^−33 | 1.6998×10^−196 | 3 | 5.3325 | 0.6325 |
| f_5 | −1.9 | 1.3149×10^−70 | 9.6363×10^−207 | 4 | 6.0496 | 0.8213 |
| Far from ξ |  |  |  |  |  |  |
| f_1 | −6 | 6.1599×10^−40 | 3.8934×10^−208 | 7 | 5.9673 | 0.2711 |
| f_2 | −6 | nc | nc | nc | nc | nc |
| f_3 | −14 | nc | nc | nc | nc | nc |
| f_4 | −23 | 4.3946×10^−33 | 2.9689×10^−196 | 7 | 5.339 | 1.4742 |
| f_5 | −9 | 8.369×10^−63 | 2.9162×10^−205 | 11 | 5.964 | 2.0915 |
| Very far from ξ |  |  |  |  |  |  |
| f_1 | −60 | 1.9877×10^−20 | 1.8239×10^−118 | 14 | 5.7565 | 0.5598 |
| f_2 | −60 | nc | nc | nc | nc | nc |
| f_3 | −140 | nc | nc | nc | nc | nc |
| f_4 | −230 | 2.7541×10^−49 | 1.5574×10^−207 | 15 | 5.9586 | 3.1228 |
| f_5 | −90 | 7.8278×10^−51 | 9.6363×10^−207 | 15 | 6.1663 | 3.2771 |
Table 9. Numerical performance of CMT(400) method in nonlinear equations.

| Function | x_0 | \|x_{k+1} − x_k\| | \|f(x_{k+1})\| | iter | ACOC | tcpu |
|---|---|---|---|---|---|---|
| Close to ξ |  |  |  |  |  |  |
| f_1 | −1.6 | 2.9103×10^−44 | 0 | 4 | 5.9805 | 0.1439 |
| f_2 | −0.4 | nc | nc | nc | nc | nc |
| f_3 | 0.4 | nc | nc | nc | nc | nc |
| f_4 | 1.3 | 1.139×10^−35 | 1.5574×10^−207 | 3 | 5.2494 | 0.6218 |
| f_5 | −1.9 | 5.8131×10^−53 | 3.1147×10^−207 | 4 | 5.754 | 0.8023 |
| Far from ξ |  |  |  |  |  |  |
| f_1 | −6 | nc | nc | nc | nc | nc |
| f_2 | −6 | nc | nc | nc | nc | nc |
| f_3 | −14 | nc | nc | nc | nc | nc |
| f_4 | −23 | nc | nc | nc | nc | nc |
| f_5 | −9 | nc | nc | nc | nc | nc |
| Very far from ξ |  |  |  |  |  |  |
| f_1 | −60 | nc | nc | nc | nc | nc |
| f_2 | −60 | nc | nc | nc | nc | nc |
| f_3 | −140 | nc | nc | nc | nc | nc |
| f_4 | −230 | nc | nc | nc | nc | nc |
| f_5 | −90 | nc | nc | nc | nc | nc |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cordero, A.; Moscoso-Martínez, M.; Torregrosa, J.R. Chaos and Stability in a New Iterative Family for Solving Nonlinear Equations. Algorithms 2021, 14, 101. https://0-doi-org.brum.beds.ac.uk/10.3390/a14040101
