Article

On Some Improved Harmonic Mean Newton-Like Methods for Solving Systems of Nonlinear Equations

1 African Network for Policy Research & Advocacy for Sustainability (ANPRAS), Midlands, Curepipe 52501, Mauritius
2 Department of Mathematics, Pondicherry Engineering College, Pondicherry 605014, India
* Author to whom correspondence should be addressed.
Algorithms 2015, 8(4), 895-909; https://0-doi-org.brum.beds.ac.uk/10.3390/a8040895
Submission received: 5 September 2015 / Revised: 23 September 2015 / Accepted: 24 September 2015 / Published: 9 October 2015
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems)

Abstract

In this work, we have developed a fourth order Newton-like method based on the harmonic mean, and its multi-step version, for solving systems of nonlinear equations. The new fourth order method requires the evaluation of one function and two first order Fréchet derivatives per iteration; the multi-step version requires one more function evaluation per iteration. The proposed scheme does not require the evaluation of second or higher order Fréchet derivatives and still reaches fourth order convergence. The multi-step version converges with order 2r + 4, where r is a positive integer with r ≥ 1. We have proved that the root α is a point of attraction for a general iterative function, and that the proposed new schemes also satisfy this result. Numerical experiments, including an application to the 1-D Bratu problem, are given to illustrate the efficiency of the new methods, which are also compared with some existing methods.

1. Introduction

An often discussed problem in many applications of science and technology is to find a real zero of a system of nonlinear equations F(x) = 0, where F(x) = (f_1(x), f_2(x), ..., f_n(x))^T, x = (x_1, x_2, ..., x_n)^T, f_i : R^n → R for i = 1, 2, ..., n, and F : D ⊆ R^n → R^n is a smooth map on an open convex set D. We assume that α = (α_1, α_2, ..., α_n)^T is a zero of the system and that x^(0) = (x_1^(0), x_2^(0), ..., x_n^(0))^T is an initial guess sufficiently close to α. For example, problems of this type arise while solving boundary value problems for differential equations: the differential equations are reduced to a system of nonlinear equations, which is in turn solved by the familiar Newton iteration method of convergence order two [1]. The Newton method (2ndNM) is given by
x^(k+1) = G_2ndNM(x^(k)) = x^(k) - u(x^(k)),   u(x^(k)) = [F'(x^(k))]^(-1) F(x^(k))   (1)
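As a quick illustration, the Newton iteration above can be sketched in a few lines of NumPy (a minimal sketch; the function names, tolerance and the test system used below are our own, not from the paper):

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Newton iteration x^(k+1) = x^(k) - [F'(x^(k))]^(-1) F(x^(k))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Solve the linear system F'(x) u = F(x) rather than forming the inverse.
        u = np.linalg.solve(J(x), F(x))
        x = x - u
        if np.linalg.norm(u) < tol:
            break
    return x
```

In practice the correction u(x^(k)) is always obtained by solving a linear system, never by explicitly inverting the Jacobian.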
Homeier [2] has proposed a third order iterative method, called the Harmonic Mean Newton method, for solving a single nonlinear equation. Analogous to this method [2], we consider the following extension to solve a system of nonlinear equations F(x) = 0, henceforth called 3rdHM:
x^(k+1) = G_3rdHM(x^(k)) = x^(k) - (1/2) ( [F'(x^(k))]^(-1) + [F'(x^(k) - u(x^(k)))]^(-1) ) F(x^(k))   (2)
We note that (1/2)([F'(x^(k))]^(-1) + [F'(x^(k) - u(x^(k)))]^(-1)) is the average of the inverses of two Jacobians. In general, third order methods free of second derivatives, like Equation (2), can be used for solving systems of nonlinear equations; they require one function evaluation and two first order Fréchet derivative evaluations per iteration. The convergence analysis of a few such methods using point of attraction theory can be found in [3]. The 3rdHM method is more efficient than Halley's method because it does not require the evaluation of a third order tensor of n^3 values.
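One 3rdHM iteration can be sketched as follows (an illustrative NumPy sketch with our own helper name; the two linear solves correspond to the two Jacobian evaluations counted above):

```python
import numpy as np

def hm3_step(F, J, x):
    """One 3rdHM iteration: x minus the averaged-inverse-Jacobian correction."""
    Fx = F(x)
    u = np.linalg.solve(J(x), Fx)        # Newton correction u(x)
    v = np.linalg.solve(J(x - u), Fx)    # correction using the Jacobian at x - u
    return x - 0.5 * (u + v)             # average of the two corrections
```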
Furthermore, the 3rdHM method is less efficient than the two-step fourth order Newton method (4thNR)

x^(k+1) = G_4thNR(x^(k)) = G_2ndNM(x^(k)) - [F'(G_2ndNM(x^(k)))]^(-1) F(G_2ndNM(x^(k)))   (3)
which was recently rediscovered by Noor et al. [4] using the variational iteration technique. Recently, Sharma et al. [5] developed a fourth order method, which is given by
x^(k+1) = G_4thSGS(x^(k)) = x^(k) - W(x^(k)) u(x^(k)),
W(x^(k)) = -(1/2) I + (9/8) [F'(y(x^(k)))]^(-1) F'(x^(k)) + (3/8) [F'(x^(k))]^(-1) F'(y(x^(k))),
y(x^(k)) = x^(k) - (2/3) u(x^(k))   (4)
Cordero et al. [6] presented a sixth order method, which is given by
x^(k+1) = G_6thCHMT(x^(k)) = z(x^(k)) - [F'(x^(k) - u(x^(k)))]^(-1) F(z(x^(k))),
z(x^(k)) = x^(k) - u(x^(k)) - ( 2I - [F'(x^(k))]^(-1) F'(x^(k) - u(x^(k))) ) [F'(x^(k))]^(-1) F(x^(k) - u(x^(k)))   (5)
Recently, an improved fourth order version of a third order method for solving a single nonlinear equation was given in [7]. In the current paper, following the approach of [7], a multivariate version having fourth order convergence is proposed. The rest of this paper is organized as follows. In Section 2, we present a new (optimal) algorithm that has fourth order convergence using only three function evaluations, together with a multi-step version of order 2r + 4, where r is a positive integer with r ≥ 1, for solving systems of nonlinear equations. In Section 3, we study the convergence analysis of the new methods using point of attraction theory. Section 4 presents numerical examples and comparisons with some existing methods; furthermore, we also study an application problem, the 1-D Bratu problem [8]. A brief conclusion is given in Section 5.

2. Development of the Methods

Babajee [7] has recently improved the 3rdHM method to obtain a fourth order method for a single equation:

y_k = x_k - (2/3) f(x_k)/f'(x_k),
x_{k+1} = x_k - (1/2) ( f(x_k)/f'(x_k) + f(x_k)/f'(y_k) ) [ 1 - (1/4)( f'(y_k)/f'(x_k) - 1 ) + (1/2)( f'(y_k)/f'(x_k) - 1 )^2 ]
This method is one of the members of the family of higher order multi-point iterative methods based on the power mean for solving a single nonlinear equation, due to Babajee et al. [9].
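The scalar scheme above can be sketched directly (an illustrative sketch; `f`, `df` and the step-function name are our own notation):

```python
def babajee4_step(f, df, x):
    """One step of the scalar fourth order harmonic-mean scheme sketched above."""
    fx, dfx = f(x), df(x)
    y = x - (2.0 / 3.0) * fx / dfx           # Jarratt-type intermediate point y_k
    dfy = df(y)
    t = dfy / dfx - 1.0                      # (f'(y)/f'(x) - 1) in the weight
    weight = 1.0 - 0.25 * t + 0.5 * t * t    # scalar analogue of H_1
    return x - 0.5 * (fx / dfx + fx / dfy) * weight
```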
We next extend the above idea to the multivariate case. For the method given in Equation (2), we propose an improved fourth order Harmonic Mean Newton’s method ( 4 t h H M ) for solving systems of nonlinear equations as follows:
x^(k+1) = G_4thHM(x^(k)) = x^(k) - H_1(x^(k)) A(x^(k)) F(x^(k)),
H_1(x^(k)) = I - (1/4)(τ(x^(k)) - I) + (1/2)(τ(x^(k)) - I)^2,   τ(x^(k)) = [F'(x^(k))]^(-1) F'(y(x^(k))),
A(x^(k)) = (1/2) ( [F'(x^(k))]^(-1) + [F'(y(x^(k)))]^(-1) ),   y(x^(k)) = x^(k) - (2/3) u(x^(k))   (6)
where I is the n × n identity matrix. We further improve the 4thHM method by additional function evaluations to get a multi-step version, called the (2r+4)thHM method, given by
x^(k+1) = G_(2r+4)thHM(x^(k)) = μ_r(x^(k)),
μ_j(x^(k)) = μ_{j-1}(x^(k)) - H_2(x^(k)) A(x^(k)) F(μ_{j-1}(x^(k))),
H_2(x^(k)) = 2I - τ(x^(k)),   j = 1, 2, ..., r,   r ≥ 1,
μ_0(x^(k)) = G_4thHM(x^(k))   (7)
Note that this multi-step version has order 2r + 4 for every integer r ≥ 0; the case r = 0 recovers the 4thHM method.
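The whole scheme, Equations (6) and (7), can be sketched in NumPy as follows (a minimal sketch under our own naming; for clarity the Jacobian inverses are formed explicitly, although a production code would factor F'(x) once and reuse the factorization):

```python
import numpy as np

def hm_step(F, J, x, r=0):
    """One step of the (2r+4)thHM scheme above; r = 0 gives the 4thHM method."""
    n = x.size
    I = np.eye(n)
    Fx = F(x)
    Jx = J(x)
    u = np.linalg.solve(Jx, Fx)
    y = x - (2.0 / 3.0) * u
    Jy = J(y)
    Jx_inv = np.linalg.inv(Jx)
    Jy_inv = np.linalg.inv(Jy)
    tau = Jx_inv @ Jy                       # tau(x) = [F'(x)]^(-1) F'(y)
    T = tau - I
    H1 = I - 0.25 * T + 0.5 * (T @ T)       # weight matrix of 4thHM
    A = 0.5 * (Jx_inv + Jy_inv)             # average of the two Jacobian inverses
    mu = x - H1 @ (A @ Fx)                  # mu_0 = 4thHM step
    H2 = 2.0 * I - tau
    for _ in range(r):                      # multi-step corrections, j = 1, ..., r
        mu = mu - H2 @ (A @ F(mu))
    return mu
```

Note how each extra correction reuses H_2 and A from the current point, so only one additional function evaluation is spent per unit increase of r.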

3. Convergence Analysis

The main theorem is demonstrated by means of the n-dimensional Taylor expansion of the functions involved. In the following, we use certain notations and results found in [10].
Let F : D ⊆ R^n → R^n be sufficiently Fréchet differentiable in D. Suppose the qth derivative of F at u ∈ R^n, q ≥ 1, is the q-linear function F^(q)(u) : R^n × ⋯ × R^n → R^n such that F^(q)(u)(v_1, ..., v_q) ∈ R^n. Given α + h ∈ R^n lying in a neighborhood of a solution α of the nonlinear system F(x) = 0, Taylor's expansion can be applied (assuming the Jacobian matrix F'(α) is nonsingular) to obtain

F(α + h) = F'(α) [ h + Σ_{q=2}^{p-1} C_q h^q ] + O(h^p)   (8)

where C_q = (1/q!) [F'(α)]^(-1) F^(q)(α), q ≥ 2. Note that C_q h^q ∈ R^n, since F^(q)(α) ∈ L(R^n × ⋯ × R^n, R^n) and [F'(α)]^(-1) ∈ L(R^n). Also, we can expand F'(α + h) in Taylor series:

F'(α + h) = F'(α) [ I + Σ_{q=2}^{p-1} q C_q h^(q-1) ] + O(h^p)   (9)

where I is the identity matrix. Note also that q C_q h^(q-1) ∈ L(R^n). Denoting e^(k) = x^(k) - α, the error at the (k+1)th iteration is e^(k+1) = L e^(k)^p + O(e^(k)^(p+1)), where L is a p-linear function, L ∈ L(R^n × ⋯ × R^n, R^n); this relation is called the error equation and p is the order of convergence. Observe that e^(k)^p stands for (e^(k), e^(k), ..., e^(k)).
In order to prove the convergence order of Equation (6), we recall some important definitions and results from the theory of point of attraction.
Definition (Point of Attraction) [11]. Let G : D ⊆ R^n → R^n. Then α is a point of attraction of the iteration

x^(k+1) = G(x^(k)),   k = 0, 1, ...   (10)

if there is an open neighborhood S of α defined by

S(α) = { x ∈ R^n : ||x - α|| < δ },   δ > 0,   (11)

such that S ⊆ D and, for any x^(0) ∈ S, the iterates {x^(k)} defined by Equation (10) all lie in D and converge to α.
Theorem 1 (Ostrowski's Theorem) [11]. Assume that G : D ⊆ R^n → R^n has a fixed point α ∈ int(D) and that G is Fréchet differentiable at α. If

ρ(G'(α)) = σ < 1,

then α is a point of attraction for x^(k+1) = G(x^(k)).
We now prove a general result that shows α is a point of attraction of a general iteration function G ( x ) = P ( x ) - Q ( x ) R ( x ) .
Theorem 2. Let F : D ⊆ R^n → R^n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of α ∈ D, which is a solution of the system F(x) = 0. Suppose that P, Q, R : D ⊆ R^n → R^n are sufficiently Fréchet differentiable functionals (depending on F) at each point in D, with P(α) = α, Q(α) ≠ 0 and R(α) = 0. Then there exists a ball

S = S̄(α, δ) = { x : ||α - x|| ≤ δ } ⊆ S_0,   δ > 0,

on which the mapping

G : S → R^n,   G(x) = P(x) - Q(x) R(x),   for all x ∈ S

is well-defined; moreover, G is Fréchet differentiable at α, with

G'(α) = P'(α) - Q(α) R'(α).
Proof: Clearly, G ( α ) = α .
||G(x) - G(α) - G'(α)(x - α)|| = ||P(x) - Q(x)R(x) - α - (P'(α) - Q(α)R'(α))(x - α)||
≤ ||P(x) - α - P'(α)(x - α)|| + ||-Q(x)R(x) + Q(α)R'(α)(x - α)||,   using the triangle inequality.

Since P(x) is differentiable at α and P(α) = α, we may assume that δ was chosen sufficiently small that

||P(x) - α - P'(α)(x - α)|| ≤ ε ||x - α||

for all x ∈ S, with ε > 0 depending on δ, and ε = 0 in the case P(x) = x.
Since P, Q and R are continuously differentiable, Q', R' and R'' are bounded:

||Q'(x)|| ≤ K_1,   ||R'(x)|| ≤ K_2,   ||R''(x)|| ≤ K_3.
Now, by the mean value theorem for integrals,

Q(x) = Q(α) + ∫_0^1 Q'(α + t(x - α)) dt (x - α)

and

R(x) = ∫_0^1 R'(α + s(x - α)) ds (x - α),

so that

||Q(x)R(x) - Q(α)R'(α)(x - α)||
= || Q(α) ∫_0^1 [ R'(α + s(x - α)) - R'(α) ] ds (x - α) + ∫_0^1 ∫_0^1 Q'(α + t(x - α)) R'(α + s(x - α)) dt ds (x - α)^2 ||
≤ || Q(α) ∫_0^1 ∫_0^1 R''(α + sλ(x - α)) s dλ ds (x - α)^2 || + || ∫_0^1 ∫_0^1 Q'(α + t(x - α)) R'(α + s(x - α)) dt ds (x - α)^2 ||,   using the triangle inequality,
≤ ( ||Q(α)|| ∫_0^1 ∫_0^1 ||R''(α + sλ(x - α))|| |s| dλ ds + ∫_0^1 ∫_0^1 ||Q'(α + t(x - α))|| ||R'(α + s(x - α))|| dt ds ) ||x - α||^2,   using the Schwarz inequality,
≤ ( (K_3/2) ||Q(α)|| + K_1 K_2 ) ||x - α||^2,   since Q', R' and R'' are bounded,
≤ δ ( (K_3/2) ||Q(α)|| + K_1 K_2 ) ||x - α||,   since ||x - α|| ≤ δ.
Combining, we have

||G(x) - G(α) - G'(α)(x - α)|| ≤ [ ε + δ ( (K_3/2) ||Q(α)|| + K_1 K_2 ) ] ||x - α||,

which shows that G(x) is differentiable at α, since δ and ε can be made arbitrarily small while ||Q(α)||, K_1, K_2 and K_3 are constants. Thus G'(α) = P'(α) - Q(α) R'(α).   ☐
Theorem 3. Let F : D ⊆ R^n → R^n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of α ∈ R^n, a solution of the system F(x) = 0. Suppose that x ∈ S = S̄(α, δ), that F'(x) is continuous and nonsingular at α, and that x^(0) is close enough to α. Then α is a point of attraction of the sequence {x^(k)} obtained using the iterative expression Equation (6). Furthermore, the sequence converges to α with order 4, with error equation

e^(k+1) = G_4thHM(x^(k)) - α = L_1 e^(k)^4 + O(e^(k)^5),   L_1 = (79/27) C_2^3 - (8/9) C_2 C_3 - (1/9) C_3 C_2 + (1/9) C_4   (12)
Proof: We first show that α is a point of attraction using Theorem 2. In this case,
P(x) = x,   Q(x) = H_1(x) A(x),   R(x) = F(x).

Now, since F(α) = 0, we have

y(α) = α - (2/3)[F'(α)]^(-1) F(α) = α,   τ(α) = [F'(α)]^(-1) F'(y(α)) = I,   H_1(α) = I,
A(α) = (1/2) ( [F'(α)]^(-1) + [F'(y(α))]^(-1) ) = [F'(α)]^(-1),
Q(α) = H_1(α) A(α) = I [F'(α)]^(-1) = [F'(α)]^(-1) ≠ 0,
R(α) = F(α) = 0,   R'(α) = F'(α),   P(α) = α,   P'(α) = I,

G'(α) = P'(α) - Q(α) R'(α) = I - [F'(α)]^(-1) F'(α) = 0,

so that ρ(G'(α)) = 0 < 1 and, by Ostrowski's theorem, α is a point of attraction of Equation (6).
We next establish the fourth order convergence of this method. From Equation (8) and Equation (9) we obtain
F(x^(k)) = F'(α) [ e^(k) + C_2 e^(k)^2 + C_3 e^(k)^3 + C_4 e^(k)^4 ] + O(e^(k)^5)   (13)

and

F'(x^(k)) = F'(α) [ I + 2 C_2 e^(k) + 3 C_3 e^(k)^2 + 4 C_4 e^(k)^3 + 5 C_5 e^(k)^4 ] + O(e^(k)^5),

where e^(k) = x^(k) - α. We have

[F'(x^(k))]^(-1) = [ I + X_1 e^(k) + X_2 e^(k)^2 + X_3 e^(k)^3 ] [F'(α)]^(-1) + O(e^(k)^4)   (14)

where X_1 = -2 C_2, X_2 = 4 C_2^2 - 3 C_3 and X_3 = -8 C_2^3 + 6 C_2 C_3 + 6 C_3 C_2 - 4 C_4. Then

[F'(x^(k))]^(-1) F(x^(k)) = e^(k) - C_2 e^(k)^2 + 2 (C_2^2 - C_3) e^(k)^3 + O(e^(k)^4),

and the expression for y(x^(k)) is

y(x^(k)) = α + (1/3) e^(k) + (2/3) C_2 e^(k)^2 - (4/3)(C_2^2 - C_3) e^(k)^3 + ( 2 C_4 - (8/3) C_2 C_3 - 2 C_3 C_2 + (8/3) C_2^3 ) e^(k)^4 + O(e^(k)^5).
The Taylor expansion of the Jacobian matrix F'(y(x^(k))) is

F'(y(x^(k))) = F'(α) [ I + 2 C_2 (y(x^(k)) - α) + 3 C_3 (y(x^(k)) - α)^2 + 4 C_4 (y(x^(k)) - α)^3 + 5 C_5 (y(x^(k)) - α)^4 ] + O(e^(k)^5)
= F'(α) [ I + N_1 e^(k) + N_2 e^(k)^2 + N_3 e^(k)^3 ] + O(e^(k)^4),
N_1 = (2/3) C_2,   N_2 = (4/3) C_2^2 + (1/3) C_3,   N_3 = -(8/3) C_2^3 + (8/3) C_2 C_3 + (4/3) C_3 C_2 + (4/27) C_4.

Therefore,

τ(x^(k)) = [F'(x^(k))]^(-1) F'(y(x^(k)))
= I + (N_1 + X_1) e^(k) + (N_2 + X_1 N_1 + X_2) e^(k)^2 + (N_3 + X_1 N_2 + X_2 N_1 + X_3) e^(k)^3 + O(e^(k)^4)
= I - (4/3) C_2 e^(k) + ( 4 C_2^2 - (8/3) C_3 ) e^(k)^2 + ( -(32/3) C_2^3 + 8 C_2 C_3 + (16/3) C_3 C_2 - (104/27) C_4 ) e^(k)^3 + O(e^(k)^4)

and then

H_1(x^(k)) = I - (1/4)(τ(x^(k)) - I) + (1/2)(τ(x^(k)) - I)^2
= I + (1/3) C_2 e^(k) + ( -(1/9) C_2^2 + (2/3) C_3 ) e^(k)^2 + ( -(8/3) C_2^3 + (14/9) C_2 C_3 - (4/3) C_3 C_2 + (26/27) C_4 ) e^(k)^3 + O(e^(k)^4)   (15)
Also,

[F'(y(x^(k)))]^(-1) = [ I - N_1 e^(k) + (N_1^2 - N_2) e^(k)^2 + ( N_1 N_2 + N_2 N_1 - N_1^3 - N_3 ) e^(k)^3 ] [F'(α)]^(-1) + O(e^(k)^4)
= [ I + Y_1 e^(k) + Y_2 e^(k)^2 + Y_3 e^(k)^3 ] [F'(α)]^(-1) + O(e^(k)^4),   (16)

where Y_1 = -(2/3) C_2, Y_2 = -(8/9) C_2^2 - (1/3) C_3 and Y_3 = (112/27) C_2^3 - (22/9) C_2 C_3 - (10/9) C_3 C_2 - (4/27) C_4. On the other hand, using Equation (14) and Equation (16), the harmonic mean can be expressed as

A(x^(k)) = [ I - (4/3) C_2 e^(k) + ( (14/9) C_2^2 - (5/3) C_3 ) e^(k)^2 + ( -(52/27) C_2^3 + (16/9) C_2 C_3 + (22/9) C_3 C_2 - (56/27) C_4 ) e^(k)^3 ] [F'(α)]^(-1) + O(e^(k)^4)   (17)

Using Equation (15) and Equation (17), we have

H_1(x^(k)) A(x^(k)) = [ I - C_2 e^(k) + (C_2^2 - C_3) e^(k)^2 + ( -(106/27) C_2^3 + (17/9) C_2 C_3 + (10/9) C_3 C_2 - (10/9) C_4 ) e^(k)^3 ] [F'(α)]^(-1) + O(e^(k)^4)   (18)
Finally, by using Equation (13) and Equation (18) in Equation (6) with some simplifications, the error equation can be expressed as:
e ( k + 1 ) = x ( k ) - α - H 1 ( x ( k ) ) A ( x ( k ) ) F ( x ( k ) ) = 79 27 C 2 3 - 8 9 C 2 C 3 - 1 9 C 3 C 2 + 1 9 C 4 e ( k ) 4 + O ( e ( k ) 5 )
Thus from Equation (19), it can be concluded that the order of convergence of the 4 t h H M method is four.   ☐
For the case r ≥ 1, we state and prove the following theorem.
Theorem 4. Let F : D ⊆ R^n → R^n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of α ∈ R^n, a solution of the system F(x) = 0. Suppose that x ∈ S = S̄(α, δ), that F'(x) is continuous and nonsingular at α, and that x^(0) is close enough to α. Then α is a point of attraction of the sequence {x^(k)} obtained using the iterative expression Equation (7). Furthermore, the sequence converges to α with order 2r + 4, where r is a positive integer with r ≥ 1.
Proof: In this case,
P(x) = μ_{j-1}(x),   Q(x) = H_2(x) A(x),   R(x) = F(μ_{j-1}(x)),   j = 1, ..., r.

We can show by induction that

μ_{j-1}(α) = α,   μ'_{j-1}(α) = 0,   j = 1, ..., r,

so that

P(α) = μ_{j-1}(α) = α,   H_2(α) = I,   Q(α) = H_2(α) A(α) = I [F'(α)]^(-1) = [F'(α)]^(-1) ≠ 0,
R(α) = F(μ_{j-1}(α)) = F(α) = 0,   P'(α) = μ'_{j-1}(α) = 0,   R'(α) = F'(μ_{j-1}(α)) μ'_{j-1}(α) = 0,
G'(α) = P'(α) - Q(α) R'(α) = 0.
So ρ(G'(α)) = 0 < 1 and, by Ostrowski's theorem, α is a point of attraction of Equation (7). A Taylor expansion of F(μ_{j-1}(x^(k))) about α yields

F(μ_{j-1}(x^(k))) = F'(α) [ (μ_{j-1}(x^(k)) - α) + C_2 (μ_{j-1}(x^(k)) - α)^2 + ... ]   (20)

Also,

H_2(x^(k)) = I + (4/3) C_2 e^(k) + ( -4 C_2^2 + (8/3) C_3 ) e^(k)^2 + ...   (21)

Using Equation (17) and Equation (21), we have

H_2(x^(k)) A(x^(k)) = [ I + L_2 e^(k)^2 + ... ] [F'(α)]^(-1),   L_2 = -(38/9) C_2^2 + C_3   (22)
Using Equation (20) and Equation (22), we obtain
μ j ( x ( k ) ) - α = μ j - 1 ( x ( k ) ) - α - H 2 ( x ( k ) ) A ( x ( k ) ) F ( μ j - 1 ( x ( k ) ) ) = μ j - 1 ( x ( k ) ) - α - I + L 2 e ( k ) 2 + . . . ( μ j - 1 ( x ( k ) ) - α ) + C 2 ( μ j - 1 ( x ( k ) ) - α ) 2 + . . . = L 2 e ( k ) 2 ( μ j - 1 ( x ( k ) ) - α ) + . . .
Proceeding by induction of Equation (23) and using Equation (12), we have
μ r ( x ( k ) ) - α = L 1 L 2 r e ( k ) ( 2 r + 4 ) + O ( e ( k ) ( 2 r + 5 ) ) , r 1
        ☐

4. Numerical Examples

In this section, we compare the performance of the proposed methods, Equation (6) and Equation (7), with the methods given in Equations (1)–(5). The numerical experiments have been carried out in MATLAB 7.6 for the test problems given below. The approximate solutions are calculated correct to 1000 digits using variable precision arithmetic. We use the following stopping criterion for the iterations:
err_min = ||x^(k+1) - x^(k)||_2 < 10^(-100)
We have used the approximated computational order of convergence p_c given by (see [12])

p_c ≈ log( ||x^(k+1) - x^(k)||_2 / ||x^(k) - x^(k-1)||_2 ) / log( ||x^(k) - x^(k-1)||_2 / ||x^(k-1) - x^(k-2)||_2 )
Let M be the number of iterations required for reaching the minimum residual err_min.

4.1. Test Problems

Test Problem 1 (TP1). We consider the following system given in [13]: F(x_1, x_2) = 0, where F : (4, 6) × (5, 7) → R^2 and

F(x_1, x_2) = ( x_1^2 - x_2 - 19,  x_2^3/6 - x_1^2 + x_2 - 17 ).

The Jacobian matrix is given by

F'(x) = [ 2x_1    -1
          -2x_1   x_2^2/2 + 1 ].

The starting vector is x^(0) = (5.1, 6.1)^T and the exact solution is α = (5, 6)^T.
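As a quick numerical check of TP1 (our own script, using plain Newton iteration rather than the paper's higher order methods), iterating from the starting vector (5.1, 6.1) reaches the exact root (5, 6):

```python
import numpy as np

# TP1: F(x1, x2) = (x1^2 - x2 - 19, x2^3/6 - x1^2 + x2 - 17)
def F(x):
    return np.array([x[0]**2 - x[1] - 19.0,
                     x[1]**3 / 6.0 - x[0]**2 + x[1] - 17.0])

def J(x):
    return np.array([[2.0 * x[0], -1.0],
                     [-2.0 * x[0], x[1]**2 / 2.0 + 1.0]])

x = np.array([5.1, 6.1])                       # starting vector x^(0)
for _ in range(10):
    x = x - np.linalg.solve(J(x), F(x))        # Newton iteration
```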
Test Problem 2 (TP2). We consider the following system given in [3]:

cos x_2 - sin x_1 = 0,
x_3^(x_1) - 1/x_2 = 0,
exp(x_1) - x_3^2 = 0.

The solution is α ≈ (0.909569, 0.661227, 1.575834)^T. We choose the starting vector x^(0) = (1, 0.5, 1.5)^T. The Jacobian matrix has 7 non-zero elements and is given by

F'(x) = [ -cos x_1           -sin x_2   0
          x_3^(x_1) ln x_3   1/x_2^2    x_1 x_3^(x_1 - 1)
          exp(x_1)           0          -2 x_3 ].
Test Problem 3 (TP3). We consider the following system given in [3]:

x_2 x_3 + x_4 (x_2 + x_3) = 0,
x_1 x_3 + x_4 (x_1 + x_3) = 0,
x_1 x_2 + x_4 (x_1 + x_2) = 0,
x_1 x_2 + x_1 x_3 + x_2 x_3 = 1.

We solve this system using the initial approximation x^(0) = (0.5, 0.5, 0.5, -0.2)^T. The solution is α ≈ (0.577350, 0.577350, 0.577350, -0.288675)^T. The Jacobian matrix, which has 12 non-zero elements, is given by

F'(x) = [ 0           x_3 + x_4   x_2 + x_4   x_2 + x_3
          x_3 + x_4   0           x_1 + x_4   x_1 + x_3
          x_2 + x_4   x_1 + x_4   0           x_1 + x_2
          x_2 + x_3   x_1 + x_3   x_1 + x_2   0 ].
Table 1. Comparison of different methods for systems of nonlinear equations.

Methods                TP1: M / err_min / p_c     TP2: M / err_min / p_c     TP3: M / err_min / p_c
2ndNM Equation (1)     7 / 4.6e-114 / 2.00        9 / 1.7e-107 / 2.00        8 / 3.9e-145 / 2.02
3rdHM Equation (2)     5 / 1.4e-174 / 2.99        6 / 4.5e-139 / 3.00        5 / 2.9e-291 / 4.10
4thNR Equation (3)     4 / 4.6e-114 / 4.02        5 / 1.7e-107 / 4.00        5 / 2.9e-291 / 4.11
4thSGS Equation (4)    4 / 7.1e-108 / 3.99        6 / 0 / 3.99               5 / 8.8e-257 / 4.03
4thHM Equation (6)     4 / 1.4e-105 / 3.99        6 / 0 / 4.00               5 / 5.5e-247 / 4.12
6thCHMT Equation (5)   4 / 0 / 5.91               5 / 0 / 5.98               4 / 4.6e-199 / 6.12
6thHM Equation (7)     4 / 0 / 5.90               5 / 0 / 5.98               4 / 6.1e-194 / 6.13
8thHM Equation (7)     4 / 0 / 7.90               4 / 1.9e-133 / 7.99        4 / 0 / 8.64
10thHM Equation (7)    3 / 1.1e-154 / 9.90        4 / 2.2e-248 / 9.99        4 / 0 / 10.76
Table 1 shows the results for the test problems (TP1, TP2, TP3), from which we conclude that the 10thHM method is the most efficient, requiring the fewest iterations and attaining the smallest residual error.
Table 2. Comparison of CPU time (s).

Methods   TP1        TP2        TP3
2ndNM     1.161405   1.734549   1.758380
3rdHM     0.950678   2.445676   1.969176
4thNR     0.808851   1.569021   1.452089
4thSGS    1.052950   2.649530   2.571427
4thHM     1.001148   2.170088   2.456138
6thCHMT   1.132364   2.117847   2.405149
6thHM     0.944062   2.137319   2.528262
8thHM     0.986300   2.328460   2.071641
10thHM    1.029707   2.482167   2.213744
In Table 2, we have given CPU time for the proposed methods and some existing methods.
Next, we consider the (2r+4)thHM family of methods and look for the least value of r (and thus of the order p) for which the number of iterations is M = 2 and err_min = 0. To achieve this, TP1 requires r = 6 (p = 16), TP2 requires r = 18 (p = 40) and TP3 requires r = 8 (p = 20). It is observed that the required order p depends on the test problem and its starting vector.

4.2. 1-D Bratu Problem

The 1-D Bratu problem [8] is given by

d^2 U/dx^2 + λ exp(U(x)) = 0,   λ > 0,   0 < x < 1,   (26)
with the boundary conditions U ( 0 ) = U ( 1 ) = 0 . The 1-D planar Bratu problem has two known, bifurcated, exact solutions for values of λ < λ c , one solution for λ = λ c and no solution for λ > λ c .
The critical value λ_c is simply 8(η^2 - 1), where η is the fixed point of the hyperbolic cotangent function coth(x). The exact solution of Equation (26) is known and can be presented as
U(x) = -2 ln [ cosh( (x - 1/2) θ/2 ) / cosh(θ/4) ],   (27)
where θ is a constant satisfying the boundary conditions, chosen so that Equation (27) solves the differential Equation (26). Using a procedure similar to [14], we show how to obtain the critical value of λ. Substituting Equation (27) into Equation (26), simplifying and collocating at the point x = 1/2 (the midpoint of the interval; another point could be chosen, but low order approximations are likely to be better when the collocation points are distributed evenly throughout the region), we obtain
θ^2 = 2λ cosh^2(θ/4).   (28)
Differentiating Equation (28) with respect to θ and setting d λ d θ = 0 , the critical value λ c satisfies
θ = 1 2 λ c cosh θ 4 sinh θ 4 .
By eliminating λ from Equation (28) and Equation (29), the value θ_c corresponding to the critical λ_c satisfies

θ_c/4 = coth(θ_c/4),
for which θ c = 4 . 798714560 can be obtained using an iterative method. We then get λ c = 3 . 513830720 from Equation (28). Figure 1 illustrates this critical value of λ.
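These two critical constants are easy to recompute as a check (our own script: bisection on g(t) = t - coth(t) for t = θ_c/4, then Equation (28) for λ_c; the bracket [1, 2] is an assumption verified by the signs of g at the endpoints):

```python
import math

def g(t):
    """g(t) = t - coth(t); its positive zero is the fixed point of coth."""
    return t - math.cosh(t) / math.sinh(t)

lo, hi = 1.0, 2.0               # g(1) < 0 and g(2) > 0 bracket the root
for _ in range(80):             # plain bisection
    mid = 0.5 * (lo + hi)
    if g(mid) < 0.0:
        lo = mid
    else:
        hi = mid
eta = 0.5 * (lo + hi)           # fixed point of coth
theta_c = 4.0 * eta
lambda_c = theta_c**2 / (2.0 * math.cosh(theta_c / 4.0)**2)   # Equation (28)
```

The result also confirms the identity λ_c = 8(η^2 - 1) stated earlier.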
Figure 1. Variation of θ for different values of λ.
The finite dimensional problem using the standard finite difference scheme is given by

F_j(U) = ( U_{j+1} - 2 U_j + U_{j-1} ) / h^2 + λ exp(U_j) = 0,   j = 1, ..., N - 1,
with discrete boundary conditions U_0 = U_N = 0 and step size h = 1/N. There are N - 1 unknowns (n = N - 1). The Jacobian is a sparse matrix with typically three nonzero entries per row. It is known that the finite difference scheme converges to the lower solution of the 1-D Bratu problem from the starting vector U^(0) = (0, 0, ..., 0)^T.
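The discretized system can be sketched as follows (our own illustrative script; dense matrices are used for brevity, although the Jacobian is tridiagonal, and plain Newton iteration stands in for the paper's methods):

```python
import numpy as np

def bratu_F(U, lam, h):
    """Residual of the finite difference Bratu system at interior nodes."""
    Up = np.concatenate(([0.0], U, [0.0]))     # boundary values U_0 = U_N = 0
    return (Up[2:] - 2.0 * Up[1:-1] + Up[:-2]) / h**2 + lam * np.exp(U)

def bratu_J(U, lam, h):
    """Tridiagonal Jacobian: -2/h^2 + lam*exp(U_j) on the diagonal, 1/h^2 off it."""
    n = U.size
    main = -2.0 / h**2 + lam * np.exp(U)
    off = np.full(n - 1, 1.0 / h**2)
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

N = 101                                        # so n = 100 unknowns, h = 1/N
h = 1.0 / N
lam = 1.0                                      # any lam below lambda_c works here
U = np.zeros(N - 1)                            # starting vector U^(0) = 0
for _ in range(20):
    U = U - np.linalg.solve(bratu_J(U, lam, h), bratu_F(U, lam, h))
```

Starting from the zero vector, the iteration lands on the lower solution branch, which is symmetric about x = 1/2.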
We use N = 101 (n = 100) and test 350 values of λ in the interval (0, 3.5] (interval width 0.01). For each λ, we let M_λ be the minimum number of iterations for which ||U^(k+1) - U^(k)||_2 < 10^(-13), where the approximation U^(k) is calculated correct to 14 decimal places. Let M̄_λ be the mean iteration number over the 350 λ's.
Table 3. Comparison of number of λ's in different methods for 1-D Bratu problem.

Method    M = 2   M = 3   M = 4   M = 5   M > 5   M̄_λ
2ndNM         0      12     114     143      81   4.92
3rdHM         0     140     206       2       2   3.62
4thSGS        4     237     100       8       1   3.33
4thHM         4     234     103       7       2   3.35
6thCHMT       3     213     124       8       2   3.42
6thHM        35     281      32       1       1   3.00
Figure 2 and Table 3 give the results for the 1-D Bratu problem, where M represents the number of iterations for convergence. It can be observed for the six methods considered in Table 3 that, as λ increases to its critical value, the number of iterations required for convergence increases. However, as the order of the method increases, the mean iteration number decreases. The 6thHM method is the most efficient of the six methods because it has the lowest mean iteration number and the highest number of λ's converging in 2 iterations.
Figure 2. Variation of number of iteration with λ for the 2 n d N M , 3 r d H M , 4 t h H M and 6 t h H M methods.
For each λ, we find the minimum order of the (2r+4)thHM family needed to reach convergence in 2 iterations; the results are shown in Figure 3. It can be observed that, as the value of λ increases, the value of p required for convergence in 2 iterations also increases. For λ ∈ [0.01, 0.04], we require p = 4 (4thHM). For λ ∈ [0.05, 0.35], p = 6 (6thHM). For λ ∈ [0.36, 0.83], p = 8 (8thHM). For λ ∈ [0.84, 1.29], p = 10 (10thHM). For λ ∈ [1.30, 1.66], p = 12 (12thHM). For λ ∈ [1.66, 1.95], p = 14 (14thHM). For λ ∈ [1.96, 2.19], p = 16 (16thHM). For λ ∈ [2.20, 2.37], p = 18 (18thHM). For λ ∈ [2.38, 2.52], p = 20 (20thHM). For λ ∈ [2.53, 2.64], p = 22 (22ndHM), and so on. We notice that the width of these intervals decreases and the order of the family becomes very high as λ tends to its critical value. Finally, for λ = 3.5, we require p = 260 to reach convergence in 2 iterations.
Figure 3. Order of the ( 2 r + 4 ) t h HM family for each λ.

5. Conclusion

In this work, we have proposed a fourth order method and its multi-step version of higher order convergence, using weight functions, to solve systems of nonlinear equations. The proposed schemes do not require the evaluation of second or higher order Fréchet derivatives to reach fourth or higher order convergence. We have tested the proposed schemes on several examples and compared them with some known schemes, illustrating the superiority of the new schemes. Finally, the proposed methods have been applied to a practical problem, the 1-D Bratu problem. The results obtained are interesting and encouraging for the new methods. Hence, the proposed methods can be considered competitive with some of the existing methods.

Acknowledgments

The authors are thankful to the anonymous reviewers for their valuable comments.

Author Contributions

The contributions of all of the authors have been similar. All of them have worked together to develop the present manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ostrowski, A.M. Solutions of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960.
  2. Homeier, H.H.H. On Newton-type methods with cubic convergence. J. Comput. Appl. Math. 2005, 176, 425–432.
  3. Babajee, D.K.R. Analysis of Higher Order Variants of Newton's Method and Their Applications to Differential and Integral Equations and in Ocean Acidification. Ph.D. Thesis, University of Mauritius, Moka, Mauritius, October 2010.
  4. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106.
  5. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algor. 2013, 62, 307–323.
  6. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374.
  7. Babajee, D.K.R. On a two-parameter Chebyshev–Halley-like family of optimal two-point fourth order methods free from second derivatives. Afr. Mat. 2015, 26, 689–697.
  8. Buckmire, R. Investigations of nonstandard Mickens-type finite-difference schemes for singular boundary value problems in cylindrical or spherical coordinates. Numer. Methods Partial Differ. Equ. 2003, 19, 380–398.
  9. Babajee, D.K.R.; Kalyanasundaram, M.; Jayakumar, J. A family of higher order multi-point iterative methods based on power mean for solving nonlinear equations. Afr. Mat. 2015.
  10. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. A modified Newton–Jarratt's composition. Numer. Algor. 2010, 55, 87–99.
  11. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
  12. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
  13. Frontini, M.; Sormani, E. Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 2004, 140, 771–782.
  14. Odejide, S.A.; Aregbesola, Y.A.S. A note on two dimensional Bratu problem. Kragujevac J. Math. 2006, 29, 49–56.

Babajee, D.K.R.; Madhu, K.; Jayaraman, J. On Some Improved Harmonic Mean Newton-Like Methods for Solving Systems of Nonlinear Equations. Algorithms 2015, 8, 895-909. https://0-doi-org.brum.beds.ac.uk/10.3390/a8040895
