# Dynamics and Fractal Dimension of Steffensen-Type Methods

Institute of Telecommunications and Multimedia Applications (iTEAM), Universitat Politècnica de Valencia, Camino de Vera, s/n, 46022-Valencia, Spain

Institute of Multidisciplinary Mathematics, Universitat Politècnica de València, Camino de Vera, s/n,46022-Valencia, Spain

Author to whom correspondence should be addressed.

Academic Editor: Stefano Mariani

Received: 20 March 2015 / Accepted: 25 May 2015 / Published: 1 June 2015

In this paper, the dynamical behavior of different optimal iterative schemes with increasing order for solving nonlinear equations is studied. The tendency of the complexity of the Julia set is analyzed and related to the fractal dimension. In fact, this fractal dimension can be shown to be a powerful tool for comparing iterative schemes that estimate the solution of a nonlinear equation. Based on the box-counting algorithm, several derivative-free iterative methods of different convergence orders are compared.

A large number of problems in science and engineering require the solution of a nonlinear equation f(z) = 0. There are different techniques to tackle this problem, and one of the most commonly used is the numerical solution with iterative methods. Newton's method is a well-known iterative scheme to estimate the solution of nonlinear equations:

$${z}_{n+1}={z}_{n}-\frac{f({z}_{n})}{{f}^{\prime}({z}_{n})},\phantom{\rule{1em}{0ex}}n=0,1,2,\dots $$
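A minimal sketch of this iteration in Python follows; the function and variable names are our own choices, and no safeguards beyond the iteration cap are included.

```python
def newton(f, df, z0, tol=1e-12, max_iter=100):
    """Newton's method: z_{n+1} = z_n - f(z_n)/f'(z_n)."""
    z = z0
    for _ in range(max_iter):
        dz = f(z) / df(z)
        z = z - dz
        if abs(dz) < tol:
            return z
    return z

# Find a root of f(z) = z^2 + 1 (roots are +/- i) starting near i.
root = newton(lambda z: z**2 + 1, lambda z: 2*z, 0.5 + 1j)
```

Since the iteration works over the complex numbers, a starting point in the upper half plane converges to the root i.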

However, in many situations it is not possible to evaluate the derivative. In Section 2, several optimal derivative-free methods, with increasing orders of convergence, are introduced, in order to avoid the evaluation of the derivative.

The efficiency index, introduced by Ostrowski (see [1]), supports the comparison between methods in terms of efficiency. It is defined as I = p^{1/d}, where p is the order of convergence and d is the number of functional evaluations per step. A method is optimal if p = 2^{d−1}, as Kung and Traub conjectured in [2].
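As a quick numerical illustration (our own snippet, not part of the original), the efficiency indices of optimal methods satisfying the Kung–Traub conjecture grow with the number of functional evaluations d:

```python
# Efficiency index I = p**(1/d) for optimal methods in the Kung-Traub
# sense: order p = 2**(d-1) with d functional evaluations per step.
indices = {2 ** (d - 1): (2 ** (d - 1)) ** (1.0 / d) for d in range(2, 6)}
for p, eff in sorted(indices.items()):
    print(f"order {p:2d}: I = {eff:.4f}")
```

For orders 2, 4, 8 and 16 this gives I ≈ 1.414, 1.587, 1.682 and 1.741, respectively.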

The quality of an iterative method can be measured by distinct parameters, such as the efficiency index or the order of convergence, but stability is a key element that needs to be analyzed. The dynamical planes of the iterative schemes supply this information graphically, and are developed in Section 3. Several authors have studied and compared the stability of different known iterative methods by means of their dynamical planes, first in the work of Varona [3] and later in the works of Amat et al. [4] and Neta et al. [5,6], among others.

The dynamical behavior of Newton’s method on polynomials, as studied in [7,8] among others, shows that its stability is not guaranteed in the whole dynamical plane. Finding the fractal dimension of an iterative method is a technique to measure this Julia set and, therefore, a useful tool to characterize the stability of a method. In Section 4, the fractal dimension of the introduced methods is evaluated.

In this paper, we analyze the dynamical behavior of four derivative-free schemes, with orders of convergence 2, 4, 8 and 16, on different quadratic and cubic polynomials. From this analysis, some results can be conjectured. For example, as the order of convergence increases, the Julia sets of the corresponding rational functions have less complexity and the basins of attraction obtained by the different schemes become wider, more similar to those of Newton's method. From a numerical point of view, these facts are confirmed by the fractal dimension of each procedure, which gets increasingly close to that of Newton's method.

The well-known Steffensen's method (see [9]) replaces the derivative in Equation (1) by a forward difference, so its iterative expression is
where v_{n} = z_{n} + f(z_{n}). The composition technique (described, for instance, in [10]) allows the implementation of high-order methods: from two methods with orders of convergence p_{1} and p_{2}, it is possible to construct a method of order p_{1} · p_{2}. In [11], the authors describe a technique based on composition and Padé-like approximants to obtain optimal derivative-free methods.

$${z}_{n+1}={z}_{n}-\frac{{[f({z}_{n})]}^{2}}{f({v}_{n})-f({z}_{n})}$$
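The derivative-free character of this scheme is easy to see in code. The following is a minimal sketch (names are our own), with v_n = z_n + f(z_n) as above and a guard against a vanishing denominator:

```python
def steffensen(f, z0, tol=1e-12, max_iter=100):
    """Steffensen's method: derivative-free, quadratically convergent."""
    z = z0
    for _ in range(max_iter):
        fz = f(z)
        v = z + fz                  # v_n = z_n + f(z_n)
        denom = f(v) - fz
        if denom == 0:
            break
        z_new = z - fz**2 / denom   # z_{n+1} = z_n - f(z_n)^2 / (f(v_n) - f(z_n))
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# Approximate sqrt(2) as the positive root of f(z) = z^2 - 2.
root = steffensen(lambda z: z**2 - 2, 1.5)
```

Note that only evaluations of f are required, never of f′.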

We first compose Steffensen's method with Newton's scheme, obtaining the fourth-order scheme
where v_{n} = z_{n} + f(z_{n}). Now, in order to avoid the evaluation of f′(y_{n}), we approximate it by the derivative m′(y_{n}) of the following first-degree Padé-like approximant
where a_{1}, a_{2} and a_{3} are real parameters to be determined satisfying the following conditions:
and

$$\begin{array}{l}\phantom{\rule{0.8em}{0ex}}{y}_{n}={z}_{n}-\frac{f{({z}_{n})}^{2}}{f({v}_{n})-f({z}_{n})}\\ {z}_{n+1}={y}_{n}-\frac{f({y}_{n})}{{f}^{\prime}({y}_{n})}\end{array}$$

$$m(t)=\frac{{a}_{1}+{a}_{2}(t-{y}_{n})}{1+{a}_{3}(t-{y}_{n})}$$

$$m({z}_{n})=f({z}_{n})$$

$$m({y}_{n})=f({y}_{n})$$

$$m({v}_{n})=f({v}_{n})$$

From Equation (5), we immediately obtain

$${a}_{1}=f({y}_{n})$$

Conditions (4) and (6) give, respectively,
and
where f[z_{n}, y_{n}] denotes the divided difference
$\frac{f({z}_{n})-f({y}_{n})}{{z}_{n}-{y}_{n}}$. After some algebraic manipulations, the following values are obtained for the parameters:
and
where
$f[{z}_{n},{y}_{n},{v}_{n}]=\frac{f[{z}_{n},{y}_{n}]-f[{y}_{n},{v}_{n}]}{{z}_{n}-{v}_{n}}$ is a second order divided difference.

$${a}_{2}-f({z}_{n}){a}_{3}=f[{z}_{n},{y}_{n}]$$

$${a}_{2}-f({v}_{n}){a}_{3}=f[{v}_{n},{y}_{n}]$$

$${a}_{2}=f[{y}_{n},{v}_{n}]-\frac{f({v}_{n})f[{z}_{n},{y}_{n},{v}_{n}]}{f[{v}_{n},{z}_{n}]}$$

$${a}_{3}=-\frac{f[{z}_{n},{y}_{n},{v}_{n}]}{f[{z}_{n},{v}_{n}]}$$

The derivative of the Padé approximant evaluated in y_{n} can be expressed as

$${m}^{\prime}({y}_{n})=\frac{f[{z}_{n},{y}_{n}]f[{y}_{n},{v}_{n}]}{f[{z}_{n},{v}_{n}]}$$

Substituting f′(y_{n}) in Equation (3) by its approximation m′(y_{n}), Cordero et al. [11] obtained an optimal fourth-order iterative method, denoted by M4, whose expression is:

$$\begin{array}{l}\phantom{\rule{0.8em}{0ex}}{y}_{n}={z}_{n}-\frac{f{({z}_{n})}^{2}}{f({v}_{n})-f({z}_{n})}\\ {z}_{n+1}={y}_{n}-\frac{f({y}_{n})f[{z}_{n},{v}_{n}]}{f[{z}_{n},{y}_{n}]f[{y}_{n},{v}_{n}]}\end{array}$$
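The scheme M4 translates directly into code via first-order divided differences. The sketch below uses our own naming and includes no zero-division safeguards beyond a residual check:

```python
def dd(f, a, b):
    """First-order divided difference f[a, b] = (f(a) - f(b)) / (a - b)."""
    return (f(a) - f(b)) / (a - b)

def m4(f, z0, tol=1e-12, max_iter=50):
    """Optimal fourth-order derivative-free method M4."""
    z = z0
    for _ in range(max_iter):
        fz = f(z)
        if abs(fz) < tol:
            return z
        v = z + fz                              # v_n = z_n + f(z_n)
        y = z - fz**2 / (f(v) - fz)             # Steffensen substep
        # Newton substep with f'(y_n) ~ f[z,y] f[y,v] / f[z,v]
        z = y - f(y) * dd(f, z, v) / (dd(f, z, y) * dd(f, y, v))
    return z

root4 = m4(lambda t: t * t - 2, 1.5)
```

Each iteration uses three functional evaluations (f(z_n), f(v_n), f(y_n)), so the method is optimal in the Kung–Traub sense: 4 = 2^{3−1}.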

By composing again the resulting scheme with Newton’s method, and estimating the last derivative by a Padé-like approximant of order two, the obtained optimal eighth-order method, denoted by M8, has the iterative scheme

$$\{\begin{array}{l}{y}_{n}={z}_{n}-\frac{{[f({z}_{n})]}^{2}}{f({v}_{n})-f({z}_{n})}\\ {u}_{n}={y}_{n}-\frac{f({y}_{n})f[{z}_{n},{v}_{n}]}{f[{z}_{n},{y}_{n}]f[{y}_{n},{v}_{n}]}\\ {z}_{n+1}={u}_{n}-\frac{f({u}_{n})f[{z}_{n},{y}_{n},{v}_{n}]}{f[{y}_{n},{v}_{n},{u}_{n}]f[{u}_{n},{z}_{n},{y}_{n}]({u}_{n}-{z}_{n})+f[{z}_{n},{y}_{n},{v}_{n}]f[{y}_{n},{u}_{n}]}\end{array}$$
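One step of M8 can be sketched in the same style, adding the second-order divided differences defined earlier. The names and the accuracy threshold in the test are illustrative only; for a quadratic polynomial the second-order divided differences are constant.

```python
def dd1(f, a, b):
    """First-order divided difference f[a, b]."""
    return (f(a) - f(b)) / (a - b)

def dd2(f, a, b, c):
    """Second-order divided difference f[a, b, c]."""
    return (dd1(f, a, b) - dd1(f, b, c)) / (a - c)

def m8_step(f, z):
    """One iteration of the optimal eighth-order method M8."""
    fz = f(z)
    v = z + fz
    y = z - fz**2 / (f(v) - fz)                               # Steffensen substep
    u = y - f(y) * dd1(f, z, v) / (dd1(f, z, y) * dd1(f, y, v))  # M4 substep
    num = f(u) * dd2(f, z, y, v)
    den = (dd2(f, y, v, u) * dd2(f, u, z, y) * (u - z)
           + dd2(f, z, y, v) * dd1(f, y, u))
    return u - num / den

z1 = m8_step(lambda t: t * t - 2, 1.5)
```

A single step from z_0 = 1.5 already brings the iterate within about 10^{−6} of √2, using only four functional evaluations.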

In a similar way, we can obtain a new optimal 16th-order method, denoted by M16, by composing the scheme M8 with Newton’s method and estimating the last derivative by a Padé-like approximant of third degree
obtaining the iterative expression
where
${c}_{5}=-\frac{f[{z}_{n},{y}_{n},{v}_{n},{u}_{n},{w}_{n}]}{f[{z}_{n},{y}_{n},{v}_{n},{u}_{n}]}$, ${c}_{4}=f[{z}_{n},{y}_{n},{v}_{n},{w}_{n}]+{c}_{5}\,f[{z}_{n},{y}_{n},{v}_{n}]$, ${c}_{3}=f[{z}_{n},{y}_{n},{w}_{n}]-{c}_{4}({z}_{n}+{y}_{n}-2{w}_{n})+{c}_{5}\,f[{z}_{n},{y}_{n}]$, and f [·, ·, ·, ·] and f [·, ·, ·, ·, ·] denote the divided differences of order three and four, respectively.

$$m(t)=\frac{{c}_{1}+{c}_{2}(t-{z}_{n})+{c}_{3}{(t-{z}_{n})}^{2}+{c}_{4}{(t-{z}_{n})}^{3}}{1+{c}_{5}(t-{z}_{n})}$$

$$\{\begin{array}{l}{y}_{n}={z}_{n}-\frac{{[f({z}_{n})]}^{2}}{f({v}_{n})-f({z}_{n})}\\ {u}_{n}={y}_{n}-\frac{f({y}_{n})f[{z}_{n},{v}_{n}]}{f[{z}_{n},{y}_{n}]f[{y}_{n},{v}_{n}]}\\ {w}_{n}={u}_{n}-\frac{f({u}_{n})f[{z}_{n},{y}_{n},{v}_{n}]}{f[{y}_{n},{v}_{n},{u}_{n}]f[{u}_{n},{z}_{n},{y}_{n}]({u}_{n}-{z}_{n})+f[{z}_{n},{y}_{n},{v}_{n}]f[{y}_{n},{u}_{n}]}\\ {z}_{n+1}={w}_{n}-\frac{f({w}_{n})}{f[{z}_{n},{w}_{n}]+({z}_{n}-{w}_{n})\{{c}_{5}f[{z}_{n},{w}_{n}]-{c}_{3}-{c}_{4}({z}_{n}-{w}_{n})\}}\end{array}$$

This procedure can be extended in order to obtain optimal derivative-free iterative schemes with convergence order 2^{k−1}, k = 2, 3, 4, 5, … All the methods designed in this way are optimal in the sense of the Kung–Traub conjecture, since a method of order 2^{k−1} needs k functional evaluations per iteration, k = 2, 3, …

The dynamical planes provide an at-a-glance picture of the convergence and stability of a method. In this section, the dynamical planes of the introduced methods are shown. In order to get a better understanding, some dynamical concepts are recalled. For a more extensive and comprehensive review of these concepts, see for example [7,8].

Let R : Ĉ → Ĉ be a rational function, where Ĉ is the Riemann sphere. The orbit of a point z_{0} ∈ Ĉ is defined as {z_{0}, R(z_{0}), …, R^{n}(z_{0}), …}.

The dynamical behavior of the orbit of a point on the complex plane can be classified depending on its asymptotic behavior. In this way, a point z_{0} ∈ C is a fixed point of R if R(z_{0}) = z_{0}. A fixed point is attracting, repelling or neutral if |R′(z_{0})| is less than, greater than or equal to 1, respectively. Moreover, if |R′(z_{0})| = 0, the fixed point is superattracting.

Let
${z}_{f}^{*}$ be an attracting fixed point of the rational function R. Its basin of attraction
$\mathcal{A}({z}_{f}^{*})$ is defined as the set of pre-images of any order such that
$\mathcal{A}({z}_{f}^{*})=\left\{{z}_{0}\in \widehat{\mathrm{C}}:{R}^{n}({z}_{0})\to {z}_{f}^{*},n\to \infty \right\}$. In our calculations, we usually consider the region [−5, 5] × [−5, 5] of the complex plane, with a mesh of 600 × 600 points, and we apply the corresponding iterative method starting at every z_{0} in this area. If the sequence generated by the iterative method reaches a zero z^{*} of the polynomial with tolerance |z_{k} − z^{*}| < 10^{−3} within a maximum of 40 iterations, we decide that z_{0} is in the basin of attraction of this zero and we paint the point in a color previously selected for this root. Within the same basin of attraction, the number of iterations needed to reach the solution is shown in darker or brighter colors (the fewer the iterations, the brighter the color). Black denotes lack of convergence to any of the roots (within the established maximum of iterations) or convergence to infinity.
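The painting procedure just described can be sketched as follows. This is a simplified version (our own names; a coarser 100 × 100 grid instead of 600 × 600, for speed) that records only the root index per point, without the iteration-count shading; Newton's operator for p_{1}(z) = z^{2} + 1 serves as the example.

```python
import numpy as np

def basin(step, roots, xlim=(-5, 5), ylim=(-5, 5), n=200, tol=1e-3, max_iter=40):
    """Classify each grid point by the root its orbit converges to.

    Returns an (n, n) int array: index into `roots`, or -1 (black) for
    points that do not converge within `max_iter` iterations."""
    xs = np.linspace(*xlim, n)
    ys = np.linspace(*ylim, n)
    result = -np.ones((n, n), dtype=int)
    for i, y0 in enumerate(ys):
        for j, x0 in enumerate(xs):
            z = complex(x0, y0)
            for _ in range(max_iter):
                try:
                    z = step(z)
                except ZeroDivisionError:
                    break
                hit = [k for k, r in enumerate(roots) if abs(z - r) < tol]
                if hit:
                    result[i, j] = hit[0]
                    break
    return result

# Newton's fixed point operator for p(z) = z^2 + 1, roots +/- i.
newton_step = lambda z: z - (z * z + 1) / (2 * z)
plane = basin(newton_step, [1j, -1j], n=100)
```

The resulting array can be rendered directly (e.g., with `matplotlib.pyplot.imshow`), assigning one color per root index and black to −1.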

The set of points whose orbits tend to an attracting fixed point (or an attracting periodic orbit)
${z}_{f}^{*}$ is defined as the Fatou set,
$\mathcal{F}(R)$. The complementary set, the Julia set
$\mathcal{J}(R)$, is the closure of the set consisting of its repelling fixed points, and establishes the boundaries between the basins of attraction.

If the rational function R is associated to the fixed point operator of the methods developed in Section 2 applied on a polynomial f(z), denoted generically as O_{f}(z), it is possible to find its fixed and critical points. The fixed points z_{f} verify O_{f}(z) = z, and the critical points z_{c} satisfy
${O}_{f}^{\prime}(z)=0$.

The expressions
where v = z +f(z), y = S_{f}(z), u = F_{f}(z) and w = E_{f}(z), are the fixed point operators of Steffensen’s, M4, M8 and M16 methods, respectively.

$${S}_{f}(z)=z-\frac{{f}^{2}(z)}{f(v)-f(z)}$$

$${F}_{f}(z)=y-\frac{f(y)f[z,v]}{f[z,y]f[y,v]}$$

$${E}_{f}(z)=u-\frac{f(u)f[z,y,v]}{f[y,v,u]f[u,z,y](u-z)+f[z,y,v]f[y,u]}$$

$${X}_{f}(z)=w-\frac{f(w)}{f[z,w]+(z-w)\{{c}_{5}f[z,w]-{c}_{3}-{c}_{4}(z-w)\}}$$

From the dynamical point of view, conjugacy classes play an important role in the understanding of the behavior of classes of maps in the following sense. Let us consider a map z → M_{f}(z), where M_{f} is any iterative root-finding map. Since a conjugacy preserves fixed and periodic points, as well as their basins of attraction, the dynamical information concerning f is carried by the character of the fixed points of M_{f}. So, for polynomials p of degree greater than or equal to k, it is interesting to build up parametrized families of polynomials p_{µ}, as simple as possible, such that there exists a conjugacy between M_{p_µ} and M_{p}.

In order to study the affine conjugacy classes of an iterative method M_{f}, we need to establish a result that is called the Scaling Theorem. As the authors proved in [12], Steffensen's method does not satisfy the Scaling Theorem, and this statement can be proved in a similar way for M4, M8 and M16. Hence, these schemes do not have conjugacy classes and we must study their dynamics on specific polynomials. The behavior of each method is analyzed on four different polynomials: p_{c}(z) = z^{2} + c and q_{c}(z) = z^{3} + c, where c ∈ {1, i}. The introduced fixed point operators satisfy the symmetry property
${O}_{{f}_{\overline{c}}}(\overline{z})=\overline{{O}_{{f}_{c}}(z)}$, ∀c, z ∈ **C**. Consequently, for polynomials with c ∈ **R**, the dynamical planes are symmetric with respect to the abscissa axis. For polynomials with c ∈ **C**, the dynamical planes of
${O}_{{f}_{\overline{c}}}(z)$ are a vertical reflection of those of
${O}_{{f}_{c}}(z)$. Therefore, the study of p_{c}(z) and q_{c}(z), with c ∈ {1, i}, directly provides the knowledge of the cases c ∈ {−1, −i}.

The dynamical planes of S_{f}(z), F_{f}(z), E_{f}(z) and X_{f}(z), when they are applied to p_{c}(z) and q_{c}(z) with c ∈ {1, i}, are displayed in Figures 1–4. The basins of attraction are colored in orange and blue for quadratic polynomials, with green added for cubic polynomials. The black points are those that do not converge to any of the attracting fixed points. We observe that the basins of attraction get wider as the order of convergence increases, obtaining fast convergence in regions of initial guesses where the original scheme is divergent. A reason for this behavior can be found in [12], where the authors prove that infinity is an attracting fixed point of Steffensen's method, whereas it is repelling for M4 (in a similar way, it can be checked that it is also repelling for M8 and M16).

As Figures 1–4 show, except for Steffensen's method, the dynamical planes become less intricate as the order of convergence increases.

The dynamical planes of the fixed point operator associated to Newton’s method, N_{f}(z), when it is applied to p_{c}(z) = z^{2} + c and q_{c}(z) = z^{3} + c, with c ∈ {1, i}, are shown in Figure 5.

If we focus on the evolution from M4 to M16, passing through M8, it is observed that their dynamical planes get closer to those of Newton's method, for each polynomial.

The fractal dimension of the Julia set is a useful tool to analyze how close one dynamical plane is to another. Usually, the fractal dimension is obtained by the box-counting algorithm, based on the Hausdorff dimension. The idea of this algorithm (see, for instance, [13]) is to cover the Julia set with smaller and smaller boxes, in order to find
where ϵ is the side length of the boxes, N(ϵ) is the number of boxes that cover the Julia set, and D is the fractal dimension. Computationally, the value of D is obtained as the slope of the regression line of log N(ϵ) versus log(1/ϵ).

$$D=-\underset{\u03f5\to 0}{\mathrm{lim}}\frac{\mathrm{log}(N(\u03f5))}{\mathrm{log}(\u03f5)}$$
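A possible implementation of this estimate on a binary image of the Julia set is sketched below (our own names and box sizes); it can be validated on sets of known dimension, such as a straight line (D = 1) and a filled square (D = 2).

```python
import numpy as np

def box_counting_dimension(mask, exponents=range(1, 7)):
    """Estimate the fractal dimension of a boolean image `mask`
    (True on the set) from the slope of log N(eps) vs. log eps."""
    n = mask.shape[0]
    sizes, counts = [], []
    for k in exponents:
        eps = 2 ** k                         # box side length, in pixels
        count = 0
        for i in range(0, n, eps):
            for j in range(0, n, eps):
                if mask[i:i + eps, j:j + eps].any():
                    count += 1               # box touches the set
        sizes.append(eps)
        counts.append(count)
    # D = -slope of the regression line of log N(eps) against log eps
    slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    return -slope

line_img = np.zeros((256, 256), dtype=bool)
line_img[128, :] = True                      # a horizontal line: D = 1
d_line = box_counting_dimension(line_img)

square_img = np.ones((256, 256), dtype=bool) # a filled square: D = 2
d_square = box_counting_dimension(square_img)
```

In practice the mask would be the set of black (boundary) pixels of a dynamical plane; as noted below, the estimate depends on the chosen sequence of box sizes, so the same sizes must be used when comparing methods.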

Table 1 gathers the fractal dimensions of the operators F_{f}(z), E_{f}(z), X_{f}(z) and N_{f}(z) when they are applied to the previously introduced polynomials (top rows). The comparison of each method with Newton's is made through the percentage of their fractal dimensions, shown in the bottom rows.

Graphically, we observe that the dynamical plane of each method on a specific polynomial "tends" to the corresponding dynamical plane of Newton's method. We can see, for example, in Figure 4, how the different pictures look increasingly similar to Figure 5d. Also, the Julia sets have less complexity as the order of convergence increases. In a numerical sense, taking into account the fractal dimension of each method, it gets closer and closer to Newton's one.

There are some studies about the fractal dimension of Newton's dynamical plane. For instance, in [14] the fractal dimension of N_{q_c}(z), with c = −1, is D = 1.44692 in the square [−2.5, 2.5] × [−2.5, 2.5], whereas our computation in this region gives D = 1.4055. Also, in [15], in the square [−1, 1] × [−1, 1], D ≈ 1.42, while in our study the value of the fractal dimension in this square is D = 1.4242. The exact value depends on the details of the algorithm, such as the thickness of the Julia set or the chosen sequence of ϵ. However, our purpose is the comparison of the methods, so finding the fractal dimensions with the same algorithm is enough.

The authors thank the anonymous referees for their useful comments and suggestions to improve the final version of the manuscript.

This research was partially supported by Ministerio de Economía y Competitividad MTM2014-52016-C02-02 and FONDOCYT 2014-1C1-088, República Dominicana.

The contributions of all of the authors have been similar. All of them have worked together to develop the present manuscript.

The authors declare no conflict of interest.

1. Ostrowski, A.M. Solutions of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1966.
2. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651.
3. Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–46.
4. Amat, S.; Busquier, S.; Bermúdez, C.; Plaza, S. On two families of high order Newton type methods. Appl. Math. Lett. 2012, 25, 2209–2217.
5. Neta, B.; Scott, M.; Chun, C. Basins of attraction for several methods to find simple roots of nonlinear equations. Appl. Math. Comput. 2012, 218, 10548–10556.
6. Chun, C.; Neta, B. An analysis of a new family of eighth-order optimal methods. Appl. Math. Comput. 2014, 245, 86–107.
7. Devaney, R.L. An Introduction to Chaotic Dynamical Systems; Addison-Wesley: New York, NY, USA, 1989.
8. Blanchard, P. Complex analytic dynamics on the Riemann sphere. Bull. AMS 1984, 11, 85–141.
9. Cordero, A.; Torregrosa, J.R. A class of Steffensen type methods with optimal order of convergence. Appl. Math. Comput. 2011, 217, 7653–7659.
10. Ortega, J.M.; Rheinboldt, W.G. Iterative Solutions of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
11. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A new technique to obtain derivative-free optimal iterative methods for solving nonlinear equations. J. Comput. Appl. Math. 2012, 252, 95–102.
12. Chicharro, F.; Cordero, A.; Gutiérrez, J.M.; Torregrosa, J.R. Complex dynamics of derivative-free methods for nonlinear equations. Appl. Math. Comput. 2013, 219, 7023–7035.
13. Peitgen, H.O.; Jürgens, H.; Saupe, D. Chaos and Fractals: New Frontiers of Science, 2nd ed.; Springer-Verlag: New York, NY, USA, 2004.
14. Gutiérrez, J.M.; Magreñán, Á.A.; Varona, J.L. The "Gauss-Seidelization" of iterative methods for solving nonlinear equations in the complex plane. Appl. Math. Comput. 2011, 218, 2467–2479.
15. Epureanu, B.I.; Greenside, H.S. Fractal basins of attraction associated with a damped Newton's method. SIAM Rev. 1998, 40, 102–109.

**Table 1.** Fractal dimension of each operator applied to the test polynomials, and its percentage with respect to Newton's method.

| Method | | p_{1}(z) | p_{i}(z) | q_{1}(z) | q_{i}(z) |
|---|---|---|---|---|---|
| F_{f}(z) | Dimension | 1.7146 | 1.6034 | 1.7039 | 1.6314 |
| | Percentage | 58.32% | 66.73% | 81.30% | 84.91% |
| E_{f}(z) | Dimension | 1.4477 | 1.1937 | 1.6616 | 1.5928 |
| | Percentage | 69.08% | 89.64% | 83.37% | 86.97% |
| X_{f}(z) | Dimension | 1.3610 | 1.3790 | 1.5808 | 1.5363 |
| | Percentage | 73.48% | 77.59% | 87.63% | 90.17% |
| N_{f}(z) | Dimension | 1.0000 | 1.0700 | 1.3853 | 1.3854 |
| | Percentage | 100.00% | 100.00% | 100.00% | 100.00% |

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).