Article

Solving Non-Linear Fractional Variational Problems Using Jacobi Polynomials

by Harendra Singh 1, Rajesh K. Pandey 2,3,* and Hari Mohan Srivastava 4,5
1 Department of Mathematics, Post Graduate College, Ghazipur 233001, India
2 Department of Mathematical Sciences, Indian Institute of Technology (BHU) Varanasi, Varanasi 221005, India
3 Centre for Advanced Biomaterials and Tissue Engineering, Indian Institute of Technology (BHU) Varanasi, Varanasi 221005, India
4 Department of Mathematics and Statistics, University of Victoria, Victoria, BC V8W 3R4, Canada
5 Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
* Author to whom correspondence should be addressed.
Submission received: 15 January 2019 / Revised: 22 February 2019 / Accepted: 22 February 2019 / Published: 27 February 2019

Abstract: The aim of this paper is to solve a class of non-linear fractional variational problems (NLFVPs) using the Ritz method and to perform a comparative study on the choice of different polynomials in the method. The Ritz method has allowed many researchers to solve different forms of fractional variational problems in recent years. The NLFVP is solved by applying the Ritz method using different orthogonal polynomials. Further, the approximate solution is obtained by solving a system of non-linear algebraic equations. Error and convergence analysis of the discussed method is also provided. Numerical simulations are performed on illustrative examples to test the accuracy and applicability of the method. For comparison purposes, different polynomials such as (1) Shifted Legendre polynomials, (2) Shifted Chebyshev polynomials of the first kind, (3) Shifted Chebyshev polynomials of the third kind, (4) Shifted Chebyshev polynomials of the fourth kind, and (5) Gegenbauer polynomials are considered to perform the numerical investigations in the test examples. Further, the obtained results are presented in the form of tables and figures. The numerical results are also compared with some known methods from the literature.

1. Introduction

It is necessary to determine the maxima and minima of certain functionals in the study of problems in analysis, mechanics, and geometry. Such problems are known as variational problems in the calculus of variations. Variational problems have many applications in fields such as physics [1] and engineering [2], and in areas in which energy principles are applicable [3,4,5].
Fractional calculus is nowadays a very active branch of mathematics. It has many real applications in science and engineering, such as fluid dynamics [6], biology [7], chemistry [8], viscoelasticity [9,10], signal processing [11], bioengineering [12], control theory [13], and physics [14]. Because of the importance of fractional derivatives established through real-life applications, several authors have considered problems in the calculus of variations in which the integer-order derivative in the objective functional is replaced by a fractional-order one; the resulting theory is known as the fractional calculus of variations. Some of these studies concern fractionally damped systems [15], energy control for fractional linear control systems [16], a fractional model of a vibrating string [17], and optimal control problems [18]. In this paper, our aim is to minimize non-linear fractional variational problems (NLFVPs) [19] of the following form:
$$J(y)=\int_0^1\left(g(x)\,D^{\alpha}y(x)+g'(x)\,I^{1-\alpha}y(x)+h'(x)\right)^{2}dx$$
under the constraints
$$y(0)=a,\qquad I^{1-\alpha}y(1)=\epsilon,$$
where $g$ and $h$ are two functions of class $C^1$ with $g(x)\neq 0$ on $[0,1]$, $\alpha$ and $\epsilon$ are real numbers with $\alpha\in(0,1)$, and $a$ is a constant.
The pioneering approach to solving fractional variational problems originates in reference [20], where Agrawal derived the formulation of the Euler-Lagrange equation for fractional variational problems. Further, in reference [4], he gave a general formulation for fractional variational problems. In reference [5], the authors used an analytical algorithm based on the Adomian decomposition method (ADM) for solving problems in the calculus of variations. In [21,22], Jacobi orthonormal polynomials and Legendre orthonormal polynomials, respectively, were used to obtain approximate numerical solutions of fractional optimal control problems. In [23], the Haar wavelet method was used to obtain numerical solutions of these problems. Some other numerical methods for the approximate solution of fractional variational problems are given in [24,25,26,27,28,29,30,31,32,33,34]. Recently, in [19], the authors introduced a new class of fractional variational problems and solved it using a decomposition formula based on Jacobi polynomials. Operational matrix methods (see [35,36,37,38,39,40,41]) have also been found useful for solving problems in fractional calculus.
In the present paper, we extend the Rayleigh-Ritz method, together with operational matrices of different orthogonal polynomials, namely Shifted Legendre polynomials, Shifted Chebyshev polynomials of the first kind, Shifted Chebyshev polynomials of the third kind, Shifted Chebyshev polynomials of the fourth kind, and Gegenbauer polynomials, to solve a special class of NLFVPs. Rayleigh-Ritz methods have been discussed by many researchers for different kinds of variational problems, e.g., fractional optimal control problems [18,21,22,32,33]; here we cite only a few, and many more can be found in the literature. In this method, we first take a finite-dimensional approximation of the unknown function. Then, using an operational matrix of integration together with the Rayleigh-Ritz method, the variational problem is reduced to a system of non-linear algebraic equations whose solution yields an approximate solution of the non-linear variational problem. Error analysis of the method for the different orthogonal polynomials is given, and convergence of the approximate numerical solution to the exact solution is shown. A comparative study using absolute-error and root-mean-square-error tables for all five kinds of polynomials is presented. Numerical results are discussed for different values of the fractional order involved in the problem and are shown through tables and figures.

2. Basic Preliminaries

Fractional-order integration in the Riemann-Liouville sense is defined as follows.
Definition 1.
The Riemann-Liouville fractional order integral operator is given by
$$I^{\alpha}f(x)=\begin{cases}\dfrac{1}{\Gamma(\alpha)}\displaystyle\int_0^x (x-t)^{\alpha-1}f(t)\,dt, & \alpha>0,\ x>0,\\[1ex] f(x), & \alpha=0.\end{cases}$$
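As a quick numerical check of Definition 1 (our illustration, not part of the paper), the operator can be evaluated with scipy's algebraic-weight quadrature, which absorbs the weakly singular kernel, and compared with the closed form $I^{\alpha}x^{2}=\frac{\Gamma(3)}{\Gamma(3+\alpha)}x^{2+\alpha}$:

```python
import math
from scipy.integrate import quad

def rl_integral(f, alpha, x):
    # I^alpha f(x); quad's algebraic weight absorbs the (x - t)^(alpha - 1) kernel
    val, _ = quad(f, 0.0, x, weight="alg", wvar=(0.0, alpha - 1.0))
    return val / math.gamma(alpha)

# known closed form: I^alpha t^2 = Gamma(3)/Gamma(3 + alpha) x^(2 + alpha)
alpha, x0 = 0.5, 0.8
numeric = rl_integral(lambda t: t ** 2, alpha, x0)
exact = math.gamma(3) / math.gamma(3 + alpha) * x0 ** (2 + alpha)
```

The two values agree to quadrature accuracy; the choices $\alpha=0.5$, $f(t)=t^2$ are illustrative.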
The analytical form of the shifted Jacobi polynomial of degree $i$ on $[0,1]$ is given by
$$\Psi_i(x)=\sum_{k=0}^{i}(-1)^{i-k}\,\frac{\Gamma(i+b+1)\,\Gamma(i+k+a+b+1)}{\Gamma(k+b+1)\,\Gamma(i+a+b+1)\,(i-k)!\,k!}\,x^{k},$$
where $a$ and $b$ are certain constants. The shifted Jacobi polynomials are orthogonal on the interval $[0,1]$ with respect to the weight function $w^{(a,b)}(x)=(1-x)^{a}x^{b}$ and have the orthogonality property
$$\int_0^1\Psi_n(x)\,\Psi_m(x)\,w^{(a,b)}(x)\,dx=v_n^{a,b}\,\delta_{mn},$$
where $\delta_{mn}$ is the Kronecker delta and
$$v_n^{a,b}=\frac{\Gamma(n+a+1)\,\Gamma(n+b+1)}{(2n+a+b+1)\,n!\,\Gamma(n+a+b+1)}.$$
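The explicit sum in Equation (3) and the orthogonality relation can be checked numerically; the sketch below is our illustration, with $a=b=\tfrac12$ as an arbitrary parameter choice:

```python
import math
from scipy.integrate import quad

def shifted_jacobi(i, x, a, b):
    # Psi_i(x) on [0, 1] from the explicit sum in Eq. (3)
    s = 0.0
    for k in range(i + 1):
        num = math.gamma(i + b + 1) * math.gamma(i + k + a + b + 1)
        den = (math.gamma(k + b + 1) * math.gamma(i + a + b + 1)
               * math.factorial(i - k) * math.factorial(k))
        s += (-1) ** (i - k) * num / den * x ** k
    return s

def weighted_inner(n, m, a, b):
    # <Psi_n, Psi_m> with weight (1 - x)^a x^b, via quad's algebraic weight
    val, _ = quad(lambda x: shifted_jacobi(n, x, a, b) * shifted_jacobi(m, x, a, b),
                  0.0, 1.0, weight="alg", wvar=(b, a))
    return val

a, b = 0.5, 0.5                      # illustrative Jacobi parameters
# orthogonality constant v_2^{a,b} from Eq. (5)
v2 = (math.gamma(2 + a + 1) * math.gamma(2 + b + 1)
      / ((2 * 2 + a + b + 1) * math.factorial(2) * math.gamma(2 + a + b + 1)))
```

Distinct-degree inner products vanish, and the squared norm of $\Psi_2$ matches $v_2^{a,b}$.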
For certain values of the constants a and b , the Jacobi polynomials take the form of some well-known polynomials, defined as follows.
Case 1: Legendre polynomials (S1). For $a=0,\ b=0$ in Equation (3), we get the shifted Legendre polynomials:
$$\Psi_i(x)=\sum_{k=0}^{i}(-1)^{i-k}\,\frac{\Gamma(i+1)\,\Gamma(i+k+1)}{\Gamma(k+1)\,\Gamma(i+1)\,(i-k)!\,k!}\,x^{k}=\sum_{k=0}^{i}(-1)^{i-k}\,\frac{(i+k)!}{(i-k)!\,(k!)^{2}}\,x^{k}.$$
Case 2: Chebyshev polynomials of the first kind (S2). For $a=\tfrac12,\ b=\tfrac12$ in Equation (3), we get
$$\Psi_i(x)=\sum_{k=0}^{i}(-1)^{i-k}\,\frac{\Gamma(i+\tfrac32)\,\Gamma(i+k+2)}{\Gamma(k+\tfrac32)\,\Gamma(i+2)\,(i-k)!\,k!}\,x^{k}.$$
Case 3: Chebyshev polynomials of the third kind (S3). For $a=\tfrac12,\ b=-\tfrac12$ in Equation (3), we get
$$\Psi_i(x)=\sum_{k=0}^{i}(-1)^{i-k}\,\frac{\Gamma(i+\tfrac12)\,\Gamma(i+k+1)}{\Gamma(k+\tfrac12)\,\Gamma(i+1)\,(i-k)!\,k!}\,x^{k}.$$
Case 4: Chebyshev polynomials of the fourth kind (S4). For $a=-\tfrac12,\ b=\tfrac12$ in Equation (3), we get
$$\Psi_i(x)=\sum_{k=0}^{i}(-1)^{i-k}\,\frac{\Gamma(i+\tfrac32)\,\Gamma(i+k+1)}{\Gamma(k+\tfrac32)\,\Gamma(i+1)\,(i-k)!\,k!}\,x^{k}.$$
Case 5: Gegenbauer polynomials (S5). For $a=b=a-\tfrac12$ in Equation (3) (the $a$ on the right-hand side being the Gegenbauer parameter), we get
$$\Psi_i(x)=\sum_{k=0}^{i}(-1)^{i-k}\,\frac{\Gamma(i+a+\tfrac12)\,\Gamma(i+k+2a)}{\Gamma(k+a+\tfrac12)\,\Gamma(i+2a)\,(i-k)!\,k!}\,x^{k}.$$
A function $f\in L^{2}[0,1]$ with $|f(t)|\le K$ can be expanded as
$$f(t)=\lim_{n\to\infty}\sum_{i=0}^{n}c_i\Psi_i(t),$$
where $c_i=\langle f(t),\Psi_i(t)\rangle$ and $\langle\cdot,\cdot\rangle$ denotes the usual inner product.
For a finite-dimensional approximation, Equation (11) is truncated as
$$f\approx\sum_{i=0}^{m}c_i\Psi_i(t)=C^{T}\phi_m(t),$$
where $C$ and $\phi_m(t)$ are the $(m+1)\times 1$ vectors $C=[c_0,c_1,\ldots,c_m]^{T}$ and $\phi_m(t)=[\Psi_0,\Psi_1,\ldots,\Psi_m]^{T}$.
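The truncated expansion is easy to exercise numerically. Since the $\Psi_i$ are orthogonal but not orthonormal, the coefficients are $c_i=\langle f,\Psi_i\rangle/v_i^{a,b}$; the sketch below (our illustration) uses shifted Legendre polynomials ($a=b=0$, so $v_i=1/(2i+1)$) and $f(t)=e^{t}$ as the target:

```python
import numpy as np
from scipy.special import eval_sh_legendre

f = np.exp                          # illustrative target function
m = 5
# Gauss-Legendre nodes mapped from [-1, 1] to [0, 1]
z, w = np.polynomial.legendre.leggauss(30)
x, w = 0.5 * (z + 1.0), 0.5 * w
# c_i = <f, Psi_i> / v_i with v_i = 1/(2i + 1) for shifted Legendre
c = np.array([(2 * i + 1) * np.sum(w * f(x) * eval_sh_legendre(i, x))
              for i in range(m + 1)])
t = np.linspace(0.0, 1.0, 101)
f_m = sum(c[i] * eval_sh_legendre(i, t) for i in range(m + 1))
err = np.max(np.abs(f_m - f(t)))
```

A degree-5 expansion already approximates $e^{t}$ on $[0,1]$ to roughly single-precision accuracy.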
Theorem 1.
Let $H$ be a Hilbert space and $Z$ a closed subspace of $H$ with $\dim Z<\infty$, and let $\{z_1,z_2,\ldots,z_N\}$ be any basis for $Z$. Suppose that $y$ is an arbitrary element of $H$ and that $z_0$ is the unique best approximation to $y$ out of $Z$. Then
$$\|y-z_0\|_2^{2}=\frac{T(y;z_1,z_2,\ldots,z_N)}{T(z_1,z_2,\ldots,z_N)},$$
where
$$T(y;z_1,z_2,\ldots,z_N)=\begin{vmatrix}\langle y,y\rangle & \langle y,z_1\rangle & \cdots & \langle y,z_N\rangle\\ \langle z_1,y\rangle & \langle z_1,z_1\rangle & \cdots & \langle z_1,z_N\rangle\\ \vdots & \vdots & \ddots & \vdots\\ \langle z_N,y\rangle & \langle z_N,z_1\rangle & \cdots & \langle z_N,z_N\rangle\end{vmatrix}.$$
Proof .
Please see references [42,43]. □
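Theorem 1 can be exercised on a small concrete case (our illustration): the best $L^{2}[0,1]$ approximation of $y=x^{2}$ out of $Z=\operatorname{span}\{1,x\}$, whose squared error is known to be $1/180$. The Gram-determinant route of the theorem and the direct normal-equations route must agree:

```python
import numpy as np

# Inner products on L^2[0, 1]: <x^p, x^q> = 1/(p + q + 1)
ip = lambda p, q: 1.0 / (p + q + 1)

powers = [0, 1]                      # exponents of the basis z_1 = 1, z_2 = x
G = np.array([[ip(p, q) for q in powers] for p in powers])   # Gram matrix T(z_1, z_2)

# Bordered Gram matrix for T(y; z_1, z_2) with y = x^2
Gy = np.array([[ip(2, 2)] + [ip(2, q) for q in powers]]
              + [[ip(p, 2)] + [ip(p, q) for q in powers] for p in powers])
err2_gram = np.linalg.det(Gy) / np.linalg.det(G)

# Direct route: solve the normal equations and expand ||x^2 - (c0 + c1 x)||^2
coef = np.linalg.solve(G, np.array([ip(p, 2) for p in powers]))
err2_direct = (ip(2, 2) - 2 * (coef[0] * ip(2, 0) + coef[1] * ip(2, 1))
               + coef[0] ** 2 * ip(0, 0) + 2 * coef[0] * coef[1] * ip(0, 1)
               + coef[1] ** 2 * ip(1, 1))
```

Both routes return $1/180$; the best linear approximation is $x-\tfrac16$.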
Theorem 2.
Suppose that $f_N(x)$ is the $N$th approximation of the function $f\in L^{2}_{w^{(a,b)}}[0,1]$, and let
$$S_N(f)=\int_0^1\left[f(x)-f_N(x)\right]^{2}w^{(a,b)}(x)\,dx;$$
then we have
$$\lim_{N\to\infty}S_N(f)=0.$$
Proof .
Please see Appendix A. □

3. Operational Matrices

Theorem 3.
Let $\phi_n(x)=[\Psi_0(x),\Psi_1(x),\ldots,\Psi_n(x)]^{T}$ be the shifted Jacobi vector and let $v>0$; then
$$I^{v}\phi_n(x)\approx I^{(v)}\phi_n(x),$$
where $I^{(v)}=(\mu(i,j))$ is the $(n+1)\times(n+1)$ operational matrix of fractional integration of order $v$, whose $(i,j)$th entry is given by
$$\mu(i,j)=\sum_{k=0}^{i}\sum_{l=0}^{j}(-1)^{i+j-k-l}\,\frac{\Gamma(a+1)\,\Gamma(i+b+1)\,\Gamma(i+k+a+b+1)\,\Gamma(j+l+a+b+1)\,\Gamma(v+k+l+b+1)\,(2j+a+b+1)\,j!}{(i-k)!\,(j-l)!\,l!\,\Gamma(k+b+1)\,\Gamma(i+a+b+1)\,\Gamma(v+k+1)\,\Gamma(j+a+1)\,\Gamma(l+b+1)\,\Gamma(k+l+v+a+b+2)}.$$
Proof .
We refer to reference [44] for the proof. □
Now, in particular cases, the operational matrix of integration for various polynomials is given as follows.
For Shifted Legendre polynomials (S1), the $(i,j)$th entry of the operational matrix of integration is given as
$$\mu(i,j)=\sum_{k=0}^{i}\sum_{l=0}^{j}(-1)^{i+j-k-l}\,\frac{(2j+1)\,(i+k)!\,(j+l)!}{(i-k)!\,(j-l)!\,k!\,(l!)^{2}\,(\alpha+k+l+1)\,\Gamma(\alpha+k+1)}.$$
For Shifted Chebyshev polynomials of the first kind (S2), the ( i , j ) th entry of the operational matrix of integration is given as:
$$\mu(i,j)=\sum_{k=0}^{i}\sum_{l=0}^{j}(-1)^{i+j-k-l}\,\frac{\Gamma(\tfrac32)\,\Gamma(i+\tfrac32)\,\Gamma(i+k+2)\,\Gamma(j+l+2)\,\Gamma(\alpha+k+l+\tfrac32)\,(2j+2)\,j!}{(i-k)!\,(j-l)!\,l!\,\Gamma(k+\tfrac32)\,\Gamma(i+2)\,\Gamma(\alpha+k+1)\,\Gamma(j+\tfrac32)\,\Gamma(l+\tfrac32)\,\Gamma(k+l+\alpha+3)}.$$
For Shifted Chebyshev polynomials of the third kind (S3), the ( i , j ) th entry of the operational matrix of integration is given as
$$\mu(i,j)=\sum_{k=0}^{i}\sum_{l=0}^{j}(-1)^{i+j-k-l}\,\frac{\Gamma(\tfrac32)\,\Gamma(i+\tfrac12)\,\Gamma(i+k+1)\,\Gamma(j+l+1)\,\Gamma(\alpha+k+l+\tfrac12)\,(2j+1)\,j!}{(i-k)!\,(j-l)!\,l!\,\Gamma(k+\tfrac12)\,\Gamma(i+1)\,\Gamma(\alpha+k+1)\,\Gamma(j+\tfrac32)\,\Gamma(l+\tfrac12)\,\Gamma(k+l+\alpha+2)}.$$
For Shifted Chebyshev polynomials of the fourth kind (S4), the ( i , j ) th entry of the operational matrix of integration is given as
$$\mu(i,j)=\sum_{k=0}^{i}\sum_{l=0}^{j}(-1)^{i+j-k-l}\,\frac{\Gamma(\tfrac12)\,\Gamma(i+\tfrac32)\,\Gamma(i+k+1)\,\Gamma(j+l+1)\,\Gamma(\alpha+k+l+\tfrac32)\,(2j+1)\,j!}{(i-k)!\,(j-l)!\,l!\,\Gamma(k+\tfrac32)\,\Gamma(i+1)\,\Gamma(\alpha+k+1)\,\Gamma(j+\tfrac12)\,\Gamma(l+\tfrac32)\,\Gamma(k+l+\alpha+2)}.$$
For Shifted Gegenbauer polynomials (S5), the ( i , j ) th entry of the operational matrix of integration is given as
$$\mu(i,j)=\sum_{k=0}^{i}\sum_{l=0}^{j}(-1)^{i+j-k-l}\,\frac{\Gamma(a+\tfrac12)\,\Gamma(i+a+\tfrac12)\,\Gamma(i+k+2a)\,\Gamma(j+l+2a)\,\Gamma(\alpha+k+l+a+\tfrac12)\,(2j+2a)\,j!}{(i-k)!\,(j-l)!\,l!\,\Gamma(k+a+\tfrac12)\,\Gamma(i+2a)\,\Gamma(\alpha+k+1)\,\Gamma(j+a+\tfrac12)\,\Gamma(l+a+\tfrac12)\,\Gamma(2a+k+l+\alpha+1)}.$$
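The operational matrix can equally be assembled by projecting $I^{v}\Psi_i$ back onto the basis, which provides an independent check on the closed-form entries. The sketch below (our illustration, not the paper's code) does this for shifted Legendre polynomials (weight $w=1$), with all inner products evaluated from $\langle x^{p},x^{m}\rangle=1/(p+m+1)$:

```python
import math
import numpy as np
from scipy.special import eval_sh_legendre

def psi_coeffs(i):
    # Monomial coefficients of shifted Legendre Psi_i (Eq. (6)):
    # Psi_i(x) = sum_k (-1)^(i-k) (i+k)!/((i-k)! (k!)^2) x^k
    return [(-1) ** (i - k) * math.factorial(i + k)
            / (math.factorial(i - k) * math.factorial(k) ** 2)
            for k in range(i + 1)]

def frac_int_psi(i, alpha, x):
    # I^alpha Psi_i(x) termwise: I^alpha x^k = k!/Gamma(k+alpha+1) x^(k+alpha)
    return sum(c * math.factorial(k) / math.gamma(k + alpha + 1) * x ** (k + alpha)
               for k, c in enumerate(psi_coeffs(i)))

def op_matrix(n, alpha):
    # mu(i, j) = (2j + 1) <I^alpha Psi_i, Psi_j>, using <x^p, x^m> = 1/(p+m+1)
    M = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        ci = psi_coeffs(i)
        for j in range(n + 1):
            cj = psi_coeffs(j)
            s = sum(ci[k] * math.factorial(k) / math.gamma(k + alpha + 1)
                    * cj[m] / (k + alpha + m + 1)
                    for k in range(i + 1) for m in range(j + 1))
            M[i, j] = (2 * j + 1) * s
    return M

I_half = op_matrix(5, 0.5)   # operational matrix of the half-order integral
```

For $\alpha=1$ and $i\le n-1$, the integral $I^{1}\Psi_i$ is a polynomial inside the span, so those rows reproduce it exactly; for fractional $\alpha$ each row is the best $L^{2}$ approximation of $I^{\alpha}\Psi_i$.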

4. Method of Solution

Approximating the unknown function in terms of orthogonal polynomials has been practiced in several papers in recent years [18,21,22,32,33] for different types of problems. Here, for solving the problem in Equation (1), we approximate
$$D^{\alpha}y(x)=C^{T}\Phi_n(x).$$
We are approximating the derivative first because we want to use the initial condition. Taking the integral of order α on both sides of Equation (19), we get
$$y(x)=C^{T}I^{\alpha}\Phi_n(x)+y(0).$$
Using the operational matrix of integration, Equation (20) can be written as
$$y(x)\approx C^{T}I^{(\alpha)}\Phi_n(x)+A^{T}\Phi_n(x),$$
where $y(0)=a\approx A^{T}\Phi_n(x)$ and $I^{(\alpha)}$ is the operational matrix of integration of order $\alpha$.
Using Equation (19), we can write
$$I^{1-\alpha}y(x)=I^{1}D^{\alpha}y(x)=C^{T}I^{1}\Phi_n(x)\approx C^{T}I^{(1)}\Phi_n(x).$$
Using Equations (19) and (22) in Equation (1), we obtain
$$J(c_0,c_1,\ldots,c_n)=\int_0^1\left(g(x)\,C^{T}\Phi_n(x)+g'(x)\,C^{T}I^{1}\Phi_n(x)+h'(x)\right)^{2}dx.$$
Equation (23) can then be written as
$$J(c_0,c_1,\ldots,c_n)=\int_0^1\left(C^{T}g(x)\Phi_n(x)+C^{T}I^{(1)}g'(x)\Phi_n(x)+h'(x)\right)^{2}dx.$$
We further take the following approximations:
$$g(x)\Phi_i(x)\approx E_1^{i,T}\Phi_n(x),$$
$$g'(x)\Phi_i(x)\approx E_2^{i,T}\Phi_n(x),$$
$$h'(x)\approx E_3^{T}\Phi_n(x),$$
where $E_1^{i,T}=[e_{1,0}^{i},e_{1,1}^{i},\ldots,e_{1,n}^{i}]$, $E_2^{i,T}=[e_{2,0}^{i},e_{2,1}^{i},\ldots,e_{2,n}^{i}]$, $E_3^{T}=[e_{3,0},e_{3,1},\ldots,e_{3,n}]$, with $e_{1,j}^{i}=\langle g(x)\Phi_i(x),\Psi_j(x)\rangle$, $e_{2,j}^{i}=\langle g'(x)\Phi_i(x),\Psi_j(x)\rangle$, $e_{3,j}=\langle h'(x),\Psi_j(x)\rangle$ for $0\le i,j\le n$, and $\langle\cdot,\cdot\rangle$ is the usual inner product.
Using Equations (25) and (26) we can write
$$g(x)\Phi_n(x)\approx E_1^{T}\Phi_n(x),$$
$$g'(x)\Phi_n(x)\approx E_2^{T}\Phi_n(x),$$
where
$$E_1^{T}=\left(E_1^{i,T}\right)_{0\le i\le n}\quad\text{and}\quad E_2^{T}=\left(E_2^{i,T}\right)_{0\le i\le n}.$$
From Equations (24) and (27)–(29), we get
$$J(c_0,c_1,\ldots,c_n)=\int_0^1\left(C^{T}E_1^{T}\Phi_n(x)+C^{T}I^{(1)}E_2^{T}\Phi_n(x)+E_3^{T}\Phi_n(x)\right)^{2}dx.$$
Let
$$E^{T}=C^{T}\left(E_1^{T}+I^{(1)}E_2^{T}\right)+E_3^{T}.$$
From Equations (31) and (32), we get
$$J(c_0,c_1,\ldots,c_n)=\int_0^1\left(E^{T}\Phi_n(x)\right)^{2}dx=\int_0^1 E^{T}\Phi_n(x)\,\Phi_n(x)^{T}E\,dx=E^{T}PE,$$
where $P$ is the square matrix given by $P=\int_0^1\Phi_n(x)\,\Phi_n(x)^{T}\,dx$.
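For shifted Legendre polynomials, $P$ is diagonal with entries $1/(2j+1)$, which is easy to confirm with Gauss-Legendre quadrature (our sketch; the node count is a choice):

```python
import numpy as np
from scipy.special import eval_sh_legendre

n = 4
z, w = np.polynomial.legendre.leggauss(n + 2)      # exact for the degree-2n integrand
x, w = 0.5 * (z + 1.0), 0.5 * w                    # map nodes to [0, 1]
Phi = np.array([eval_sh_legendre(i, x) for i in range(n + 1)])
P = (Phi * w) @ Phi.T                              # P = int_0^1 Phi Phi^T dx
```

For non-trivial weights (the other polynomial families) $P$ is a full symmetric matrix computed the same way.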
Using Equation (22), the boundary condition can be written as
$$I^{1-\alpha}y(1)\approx C^{T}I^{(1)}\Phi_n(1)=\epsilon.$$
Using the Lagrange multiplier method [18,20,21,22,32,33], the necessary extremal condition for the functional in Equation (33) becomes
$$\frac{\partial J}{\partial c_0}=0,\quad\frac{\partial J}{\partial c_1}=0,\quad\ldots,\quad\frac{\partial J}{\partial c_{n-1}}=0.$$
From Equations (34) and (35), we get a set of $n+1$ equations. Solving these $n+1$ equations, we obtain the unknown parameters $c_0,c_1,\ldots,c_n$. Substituting these parameters into Equation (21), we obtain the function $y(x)$ that extremizes the non-linear fractional functional.
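As an end-to-end illustration of the scheme above, the sketch below solves the classical case $\alpha=1$ of Example 1 from Section 7 with shifted Legendre polynomials, where $D^{\alpha}$ and $I^{1-\alpha}$ reduce to the ordinary derivative and the identity and the exact minimizer is $y(x)=x^{5}$ ($\beta=5$, $\epsilon=1$). Because the residual is linear in $C$, the constrained minimization reduces to a linear KKT system; the basis size, quadrature rule, and linear-algebra route are our choices, not prescriptions from the paper:

```python
import numpy as np
from scipy.special import eval_sh_legendre

# quadrature on [0, 1]
z, w = np.polynomial.legendre.leggauss(40)
x, w = 0.5 * (z + 1.0), 0.5 * w

n = 5
beta = 5.0
g = lambda x: 1.0 / (1.0 + x ** beta)
dg = lambda x: -beta * x ** (beta - 1) / (1.0 + x ** beta) ** 2   # g'; here h = g

# basis values and their antiderivatives (I Psi_i, vanishing at x = 0, so y(0) = 0)
Psi = np.array([eval_sh_legendre(i, x) for i in range(n + 1)])
Ipoly = [np.polynomial.legendre.Legendre.basis(i, domain=[0, 1]).integ(lbnd=0)
         for i in range(n + 1)]
IPsi = np.array([p(x) for p in Ipoly])
a_bc = np.array([p(1.0) for p in Ipoly])     # boundary row: y(1) = a_bc . C

# residual r = g y' + g' y + h' is linear in C:  r = M C + v
M = (g(x) * Psi + dg(x) * IPsi).T
v = dg(x)

# minimize sum_q w_q r_q^2 subject to a_bc . C = 1  (i.e. y(1) = epsilon = 1)
G = M.T @ (w[:, None] * M)
rhs = -M.T @ (w * v)
KKT = np.block([[2 * G, a_bc[:, None]], [a_bc[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(KKT, np.concatenate([2 * rhs, [1.0]]))
C = sol[:-1]

y_num = C @ IPsi                             # y(x) = sum_i c_i (I Psi_i)(x)
err = np.max(np.abs(y_num - x ** beta))      # exact minimizer is y = x^5
```

With $n=5$ the exact minimizer lies in the trial space, so the recovered $y$ matches $x^{5}$ to machine precision; for fractional $\alpha$ the same assembly goes through the operational matrix $I^{(\alpha)}$.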

5. Error Analysis

The upper bound of the error for the operational matrix of fractional integration of a Jacobi polynomial of the $i$th degree is obtained as follows. Define
$$e_i^{\alpha}=I^{(\alpha)}\Psi_i(x)-I^{\alpha}\Psi_i(x),$$
where $I^{(\alpha)}\Psi_i(x)$ denotes the $i$th component of $I^{(\alpha)}\phi_n(x)$, i.e., $\sum_{j=0}^{n}\mu(i,j)\Psi_j(x)$.
From Equation (36), we can write
$$\left\|e_i^{\alpha}\right\|_2=\left\|I^{\alpha}\Psi_i(x)-\sum_{j=0}^{n}\mu(i,j)\,\Psi_j(x)\right\|_2.$$
Taking the integral operator of order α on both sides of Equation (3), we get
$$I^{\alpha}\Psi_i(x)=\sum_{k=0}^{i}(-1)^{i-k}\,\frac{\Gamma(i+b+1)\,\Gamma(i+k+a+b+1)}{(i-k)!\,\Gamma(k+b+1)\,\Gamma(i+a+b+1)\,\Gamma(\alpha+k+1)}\,x^{\alpha+k}.$$
From the construction of the operational matrix we can write
$$\mu(i,j)=\sum_{k=0}^{i}(-1)^{i-k}\,\frac{\Gamma(i+b+1)\,\Gamma(i+k+a+b+1)}{(i-k)!\,\Gamma(k+b+1)\,\Gamma(i+a+b+1)\,\Gamma(\alpha+k+1)}\,c_{j,k},\quad j=0,1,\ldots,n.$$
Using Theorem 1 we can write
$$\left\|x^{\alpha+k}-\sum_{j=0}^{n}c_{j,k}\Psi_j(x)\right\|_2=\left(\frac{T\left(x^{\alpha+k};\Psi_0(x),\Psi_1(x),\ldots,\Psi_n(x)\right)}{T\left(\Psi_0(x),\Psi_1(x),\ldots,\Psi_n(x)\right)}\right)^{1/2}.$$
From Equations (37)–(39), we get
$$\left\|e_i^{\alpha}\right\|_2=\left\|\sum_{k=0}^{i}(-1)^{i-k}\frac{\Gamma(i+b+1)\,\Gamma(i+k+a+b+1)}{(i-k)!\,\Gamma(k+b+1)\,\Gamma(i+a+b+1)\,\Gamma(\alpha+k+1)}\,x^{\alpha+k}-\sum_{j=0}^{n}\sum_{k=0}^{i}(-1)^{i-k}\frac{\Gamma(i+b+1)\,\Gamma(i+k+a+b+1)}{(i-k)!\,\Gamma(k+b+1)\,\Gamma(i+a+b+1)\,\Gamma(\alpha+k+1)}\,c_{j,k}\Psi_j(x)\right\|_2\le\sum_{k=0}^{i}\left|\frac{\Gamma(i+b+1)\,\Gamma(i+k+a+b+1)}{(i-k)!\,\Gamma(k+b+1)\,\Gamma(i+a+b+1)\,\Gamma(\alpha+k+1)}\right|\,\left\|x^{\alpha+k}-\sum_{j=0}^{n}c_{j,k}\Psi_j(x)\right\|_2.$$
Using Equation (40) in Equation (41), we obtain the error bound for the operational matrix of integration of an ith-degree polynomial, which is given as
$$\left\|e_i^{\alpha}\right\|_2\le\sum_{k=0}^{i}\left|\frac{\Gamma(i+b+1)\,\Gamma(i+k+a+b+1)}{(i-k)!\,\Gamma(k+b+1)\,\Gamma(i+a+b+1)\,\Gamma(\alpha+k+1)}\right|\left(\frac{T\left(x^{\alpha+k};\Psi_0(x),\ldots,\Psi_n(x)\right)}{T\left(\Psi_0(x),\ldots,\Psi_n(x)\right)}\right)^{1/2},\quad i=0,1,2,\ldots,n.$$
Now, in particular cases, the error bounds for different orthogonal polynomials are given as follows.
Case 1: For Legendre polynomials (S1) the error bound is given as
$$\left\|e_i^{\alpha}\right\|_2\le\sum_{k=0}^{i}\left|\frac{\Gamma(i+1)\,\Gamma(i+k+1)}{(i-k)!\,\Gamma(k+1)\,\Gamma(i+1)\,\Gamma(\alpha+k+1)}\right|\left(\frac{T\left(x^{\alpha+k};\Psi_0(x),\ldots,\Psi_n(x)\right)}{T\left(\Psi_0(x),\ldots,\Psi_n(x)\right)}\right)^{1/2},\quad i=0,1,2,\ldots,n.$$
Case 2: For Chebyshev polynomials of the first kind (S2) the error bound is given as
$$\left\|e_i^{\alpha}\right\|_2\le\sum_{k=0}^{i}\left|\frac{\Gamma(i+\tfrac32)\,\Gamma(i+k+2)}{(i-k)!\,\Gamma(k+\tfrac32)\,\Gamma(i+2)\,\Gamma(\alpha+k+1)}\right|\left(\frac{T\left(x^{\alpha+k};\Psi_0(x),\ldots,\Psi_n(x)\right)}{T\left(\Psi_0(x),\ldots,\Psi_n(x)\right)}\right)^{1/2},\quad i=0,1,2,\ldots,n.$$
Case 3: For Chebyshev polynomials of the third kind (S3) the error bound is given as
$$\left\|e_i^{\alpha}\right\|_2\le\sum_{k=0}^{i}\left|\frac{\Gamma(i+\tfrac12)\,\Gamma(i+k+1)}{(i-k)!\,\Gamma(k+\tfrac12)\,\Gamma(i+1)\,\Gamma(\alpha+k+1)}\right|\left(\frac{T\left(x^{\alpha+k};\Psi_0(x),\ldots,\Psi_n(x)\right)}{T\left(\Psi_0(x),\ldots,\Psi_n(x)\right)}\right)^{1/2},\quad i=0,1,2,\ldots,n.$$
Case 4: For Chebyshev polynomials of the fourth kind (S4) the error bound is given as
$$\left\|e_i^{\alpha}\right\|_2\le\sum_{k=0}^{i}\left|\frac{\Gamma(i+\tfrac32)\,\Gamma(i+k+1)}{(i-k)!\,\Gamma(k+\tfrac32)\,\Gamma(i+1)\,\Gamma(\alpha+k+1)}\right|\left(\frac{T\left(x^{\alpha+k};\Psi_0(x),\ldots,\Psi_n(x)\right)}{T\left(\Psi_0(x),\ldots,\Psi_n(x)\right)}\right)^{1/2},\quad i=0,1,2,\ldots,n.$$
Case 5: For Gegenbauer polynomials (S5) the error bound is given as
$$\left\|e_i^{\alpha}\right\|_2\le\sum_{k=0}^{i}\left|\frac{\Gamma(i+2)\,\Gamma(i+k+3)}{(i-k)!\,\Gamma(k+2)\,\Gamma(i+3)\,\Gamma(\alpha+k+1)}\right|\left(\frac{T\left(x^{\alpha+k};\Psi_0(x),\ldots,\Psi_n(x)\right)}{T\left(\Psi_0(x),\ldots,\Psi_n(x)\right)}\right)^{1/2},\quad i=0,1,2,\ldots,n.$$
Let $e_{I,n}^{\alpha,w}$ denote the error vector for the operational matrix of integration of order $\alpha$ obtained by using $(n+1)$ orthogonal polynomials in $L_w^2[0,1]$; then
$$e_{I,n}^{\alpha,w}=I^{(\alpha)}\Phi_n(x)-I^{\alpha}\Phi_n(x).$$
From Theorems 1 and 2 and from Equations (43)–(47), it is clear that the error vector in Equation (48) tends to zero as $n\to\infty$.

6. Convergence Analysis

A set of orthogonal polynomials on $[0,1]$ forms a basis for $L_w^2[0,1]$. Let $S_n$ be the $(n+1)$-dimensional subspace of $L_w^2[0,1]$ generated by $(\Phi_i)_{0\le i\le n}$. Thus, every function in $S_n$ can be written as a linear combination of the orthogonal polynomials $(\Phi_i)_{0\le i\le n}$, and the scalars in this linear combination can be chosen so that the functional is minimized. Let the minimum value of the functional on the space $S_n$ be denoted by $m_n$. From the construction of $S_n$ and $m_n$, it is clear that $S_n\subseteq S_{n+1}$ and $m_{n+1}\le m_n$.
Theorem 4.
Consider the functional $J$; then
$$\lim_{n\to\infty}m_n=m=\inf_{x\in L_w^2[0,1]}J[x].$$
Proof .
Using Equation (48) in Equation (23), we have
$$J(c_0,c_1,\ldots,c_n)=\int_0^1\left(C^{T}g(x)\Phi_n(x)+C^{T}I^{(1)}g'(x)\Phi_n(x)+C^{T}e_{I,n}^{1,w}\,g'(x)+h'(x)\right)^{2}dx.$$
Taking $n\to\infty$ and using Equations (25)–(27) and (48) in Equation (49), we get
$$J_e(c_0,c_1,\ldots,c_n)=\int_0^1\left(C^{T}\sum_{i=0}^{n}\left(E_1^{i,T}\Phi_n(x)+e_{E_1^{i},n}^{w}\right)+C^{T}I^{(1)}\sum_{i=0}^{n}\left(E_2^{i,T}\Phi_n(x)+e_{E_2^{i},n}^{w}\right)+E_3^{T}\Phi_n(x)+e_{E_3,n}^{w}\right)^{2}dx,$$
where
$$e_{E_1^{i},n}^{w}=E_1^{i,T}\Phi(x)-E_1^{i,T}\Phi_n(x),\qquad e_{E_2^{i},n}^{w}=E_2^{i,T}\Phi(x)-E_2^{i,T}\Phi_n(x),\qquad e_{E_3,n}^{w}=E_3^{T}\Phi(x)-E_3^{T}\Phi_n(x),$$
and J e is the error term of the functional.
Using Equations (30) and (32) in Equation (50), we get
$$J_e(c_0,c_1,\ldots,c_n)=\int_0^1\left(E^{T}\Phi_n(x)+e_n^{w}\right)^{2}dx,$$
where
$$e_n^{w}=C^{T}\sum_{i=0}^{n}e_{E_1^{i},n}^{w}+C^{T}I^{(1)}\sum_{i=0}^{n}e_{E_2^{i},n}^{w}+e_{E_3,n}^{w}.$$
Treating Equation (51) in the same way as the original functional, it reduces to the following form:
$$J_e(c_0,c_1,\ldots,c_n)=E^{T}PE+e_n^{w}(J_e).$$
Using Equation (48) in Equation (34), we get
$$C^{T}I^{(1)}\Phi_n(1)+C^{T}e_{I,n}^{1,w}=\epsilon.$$
As above, applying the Rayleigh-Ritz method to Equation (53) with the boundary condition in Equation (54), we obtain the extreme value of the functional defined in Equation (53). Let this extreme value be denoted by $m_n^{*}(t)$.
Now, from Equation (48), it is obvious that $e_{E_1^{i},n}^{w},\,e_{E_2^{i},n}^{w},\,e_{E_3,n}^{w}\to 0$ as $n\to\infty$, which implies that $e_n^{w}(J_e)\to 0$ as $n\to\infty$. So it is clear that, as $n\to\infty$, the functional $J_e$ in Equation (53) approaches the functional $J$ in Equation (23), and the boundary condition in Equation (54) approaches that in Equation (34).
So, for large values of n ,
$$m_n^{*}(t)\approx m_n(t).$$
From Theorem 4 and Equation (55), we conclude that
$$\lim_{n\to\infty}m_n^{*}(t)=m(t).$$
Proof completed. □

7. Numerical Results and Discussions

In this section, we investigate the accuracy of the method by testing it on some numerical examples. We apply the numerical algorithm to two test problems using different orthogonal polynomials as a basis. The results for the test problems are shown through the figures and tables.
Example 1.
Consider a non-linear fractional variational problem as in Equation (1) with $g(x)=h(x)=\frac{1}{1+x^{\beta}}$; we then have the following non-linear fractional variational problem [19]:
$$J(y)=\int_0^1\left(\frac{1}{1+x^{\beta}}\,D^{\alpha}y(x)-\left(I^{1-\alpha}y(x)+1\right)\frac{\beta x^{\beta-1}}{\left(1+x^{\beta}\right)^{2}}\right)^{2}dx$$
under the constraints
$$y(0)=0,\qquad I^{1-\alpha}y(1)=\epsilon.$$
The exact solution of the above problem is given as
$$y_{\text{exact}}(x)=\left(\tfrac{1}{2}(1+\epsilon)-1\right)\left(\frac{\Gamma(\beta+2)}{\Gamma(\beta+\alpha+1)}\,x^{\beta+\alpha}+\frac{1}{\Gamma(\alpha+1)}\,x^{\alpha}\right)+\frac{\Gamma(\beta+1)}{\Gamma(\alpha+\beta)}\,x^{\beta+\alpha-1}.$$
We discuss this example for the different values $\alpha=0.5,0.6,0.7,0.8,0.9,1$, with $\beta=5$ and $\epsilon=1$.
In Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5, it is shown that the solutions for the two different values of α = 0.8   and   α = 1 coincide with the exact solutions for different orthogonal polynomials at n = 5.
In Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10, it is shown that the solution varies continuously with the fractional order for Shifted Legendre polynomials, Shifted Chebyshev polynomials of the first kind, Shifted Chebyshev polynomials of the third kind, Shifted Chebyshev polynomials of the fourth kind, and Gegenbauer polynomials, respectively.
In Table 1, we have listed the maximum absolute errors (MAE) and root-mean-square errors (RMSE) for Example 1 for the two different n values of 2 and 6.
In Table 1, we have compared results for different polynomials, and it is observed that the results for Shifted Legendre polynomials and Gegenbauer polynomials are better than those for the other polynomials. It is also observed that the MAE and RMSE decrease with increasing n.
Example 2.
Consider a non-linear fractional variational problem as in Equation (1) with $g(x)=h(x)=e^{-vx}$; we then have the following non-linear fractional variational problem [19]:
$$J(y)=\int_0^1\left(e^{-vx}\,D^{\alpha}y(x)-v\left(I^{1-\alpha}y(x)+1\right)e^{-vx}\right)^{2}dx$$
under the constraints
$$y(0)=0,\qquad I^{1-\alpha}y(1)=\epsilon.$$
The exact solution of the above problem is given as
$$y_{\text{exact}}(x)=\left(e^{-v}(1+\epsilon)-1\right)v^{1-\alpha}\sum_{k=0}^{\infty}\frac{(k+\alpha)}{\Gamma(k+\alpha+1)}\,(vx)^{k+\alpha-1}+x^{\alpha-1}E_{1,\alpha}(vx)-\frac{x^{\alpha-1}}{\Gamma(\alpha)},$$
where $E_{a,b}(x)$ is the two-parameter Mittag-Leffler function, defined as
$$E_{a,b}(x)=\sum_{k=0}^{\infty}\frac{x^{k}}{\Gamma(ak+b)}.$$
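The truncated series is straightforward to evaluate, and the same termwise fractional integration used for Example 1 recovers the boundary value $I^{1-\alpha}y_{\text{exact}}(1)=\epsilon$; in this sketch (our illustration) $v=1$, $\alpha=0.6$, and $\epsilon=2$ are assumed values:

```python
import math

def mittag_leffler(a, b, x, terms=60):
    # truncated series E_{a,b}(x) = sum_k x^k / Gamma(a k + b)
    return sum(x ** k / math.gamma(a * k + b) for k in range(terms))

# Boundary-condition check for the exact solution of Example 2:
# with c = (1 + eps) e^{-v}, the series form of y_exact gives
# I^{1-alpha} y_exact(x) = c e^{v x} - 1, whose value at x = 1 is eps.
alpha, eps, v = 0.6, 2.0, 1.0
c = (1 + eps) * math.exp(-v)
# termwise: I^{1-alpha} x^{k+alpha-1} = Gamma(k+alpha)/k! * x^k, evaluated at x = 1
bc = sum(c * v ** k / math.gamma(k + alpha) * math.gamma(k + alpha) / math.factorial(k)
         for k in range(60)) - 1.0   # the -1 comes from the x^{alpha-1}/Gamma(alpha) term
```

Note that $E_{1,1}(x)=e^{x}$ and $E_{1,2}(x)=(e^{x}-1)/x$, which provide quick sanity checks on the series.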
We discuss Example 2 for different α values of 0.5 ,   0.6 ,   0.7 ,   0.8 ,   0.9 ,   and 1 and ϵ = 2 .
In Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15, it is shown that the solutions for the two different values of α = 0.8   and   α = 1 coincide with the exact solutions for different orthogonal polynomials at n = 5.
Figure 16, Figure 17, Figure 18, Figure 19 and Figure 20 reflect that the approximate solution varies continuously with the fractional order for Shifted Legendre polynomials, Shifted Chebyshev polynomials of the first kind, Shifted Chebyshev polynomials of the third kind, Shifted Chebyshev polynomials of the fourth kind, and Gegenbauer polynomials, respectively.
In Table 2, we have listed the maximum absolute errors (MAE) and root-mean-square errors (RMSE) for Example 2 for the two n values 2 and 6.
In Table 2, we have compared results for different polynomials, and it is observed that the results for the Shifted Legendre polynomial are better than those for the other polynomials. It is also observed that the MAE and RMSE decrease as n increases.

8. Conclusions

We extended the Ritz method [18,20,21,22,32,33] for solving a class of NLFVPs using different orthogonal polynomials such as shifted Legendre polynomials, shifted Chebyshev polynomials of the first kind, shifted Chebyshev polynomials of the third kind, shifted Chebyshev polynomials of the fourth kind, and Gegenbauer polynomials. These polynomials were used to approximate the unknown function in the NLFVP. The advantage of the method is that it converts the given NLFVPs into a set of non-linear algebraic equations which are then solved numerically. The error bound of the approximation method for NLFVP was established. It was also shown that the approximate numerical solution converges to the exact solution as we increase the number of basis functions in the approximation. At the end, numerical results were provided by applying the method to two test examples, and it was observed that the results showed good agreement with the exact solution. Numerical results obtained using different orthogonal polynomials were compared. A comparative study showed that the shifted Legendre polynomials were more accurate in approximating the numerical solution.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors are very grateful to the referees for their constructive comments and suggestions for the improvement of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Theorem A1.
Let $f:[0,1]\to\mathbb{R}$ be a function such that $f\in C^{(N+1)}[0,1]$, and let $f_N(x)$ be the $N$th approximation of the function from $P_N^{(a,b)}=\operatorname{span}\{\Psi_0(x),\Psi_1(x),\ldots,\Psi_N(x)\}$; then [45]
$$\left\|f(x)-f_N(x)\right\|_{w^{(a,b)},2}\le\frac{K}{(N+1)!}\sqrt{\frac{\Gamma(1+a)\,\Gamma(3+2N+b)}{\Gamma(4+2N+a+b)}},$$
where $K=\max_{x\in[0,1]}\left|f^{(N+1)}(x)\right|$.
Proof .
Since $f\in C^{(N+1)}[0,1]$, the Taylor polynomial of $f$ at $x=0$ is given as
$$g_1(x)=f(0)+f'(0)\,x+\cdots+f^{(N)}(0)\,\frac{x^{N}}{N!}.$$
The upper bound of the error of the Taylor polynomial is given as
$$\left|f(x)-g_1(x)\right|\le K\,\frac{x^{N+1}}{(N+1)!},$$
where $K=\max_{x\in[0,1]}\left|f^{(N+1)}(x)\right|$.
Since $f_N(x)$ is the best approximation of $f$ out of $P_N^{(a,b)}$ and $g_1(x)\in P_N^{(a,b)}$, we have
$$\left\|f(x)-f_N(x)\right\|_{w^{(a,b)},2}^{2}\le\left\|f(x)-g_1(x)\right\|_{w^{(a,b)},2}^{2}\le\left(\frac{K}{(N+1)!}\right)^{2}\int_0^1 x^{2N+2+b}\,(1-x)^{a}\,dx=\left(\frac{K}{(N+1)!}\right)^{2}\frac{\Gamma(1+a)\,\Gamma(3+2N+b)}{\Gamma(4+2N+a+b)},$$
and hence
$$\left\|f(x)-f_N(x)\right\|_{w^{(a,b)},2}\le\frac{K}{(N+1)!}\sqrt{\frac{\Gamma(1+a)\,\Gamma(3+2N+b)}{\Gamma(4+2N+a+b)}},$$
which shows that $\lim_{N\to\infty}\left\|f(x)-f_N(x)\right\|_{w^{(a,b)},2}=0$. □
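The bound of Theorem A1 can be checked numerically; below, $f(x)=e^{x}$ with $a=b=0$ (Legendre weight) and $N=3$ are our illustrative choices, comparing the actual best-approximation error with the right-hand side of the bound:

```python
import math
import numpy as np
from scipy.special import eval_sh_legendre

N = 3
K = math.e                                  # max of |f^(N+1)| = e^x on [0, 1]
rhs = K / math.factorial(N + 1) * math.sqrt(
    math.gamma(1) * math.gamma(3 + 2 * N) / math.gamma(4 + 2 * N))

# best L^2 approximation error via a truncated shifted Legendre expansion
z, w = np.polynomial.legendre.leggauss(40)
x, w = 0.5 * (z + 1.0), 0.5 * w
fx = np.exp(x)
c = [(2 * i + 1) * np.sum(w * fx * eval_sh_legendre(i, x)) for i in range(N + 1)]
resid = fx - sum(c[i] * eval_sh_legendre(i, x) for i in range(N + 1))
lhs = math.sqrt(np.sum(w * resid ** 2))
```

Here the bound evaluates to $e/72\approx 0.038$, while the actual error is several orders of magnitude smaller, consistent with the theorem.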

References

1. Dym, C.L.; Shames, I.H. Solid Mechanics: A Variational Approach; McGraw-Hill: New York, NY, USA, 1973.
2. Frederico, G.S.F.; Torres, D.F.M. Fractional conservation laws in optimal control theory. Nonlinear Dyn. 2008, 53, 215–222.
3. Pirvan, M.; Udriste, C. Optimal control of electromagnetic energy. Balk. J. Geom. Appl. 2010, 15, 131–141.
4. Agrawal, O.P. A general finite element formulation for fractional variational problems. J. Math. Anal. Appl. 2008, 337, 1–12.
5. Dehghan, M.; Tatari, M. The use of Adomian decomposition method for solving problems in calculus of variations. Math. Probl. Eng. 2006, 2006, 1–12.
6. Singh, H. A new stable algorithm for fractional Navier-Stokes equation in polar coordinate. Int. J. Appl. Comp. Math. 2017, 3, 3705–3722.
7. Robinson, A.D. The use of control systems analysis in neurophysiology of eye movements. Ann. Rev. Neurosci. 1981, 4, 462–503.
8. Singh, H. Operational matrix approach for approximate solution of fractional model of Bloch equation. J. King Saud Univ.-Sci. 2017, 29, 235–240.
9. Bagley, R.L.; Torvik, P.J. Fractional calculus: A different approach to the analysis of viscoelastically damped structures. AIAA J. 1983, 21, 741–748.
10. Bagley, R.L.; Torvik, P.J. Fractional calculus in the transient analysis of viscoelastically damped structures. AIAA J. 1985, 23, 918–925.
11. Panda, R.; Dash, M. Fractional generalized splines and signal processing. Signal Process. 2006, 86, 2340–2350.
12. Magin, R.L. Fractional calculus in bioengineering. Crit. Rev. Biomed. Eng. 2004, 32, 1–104.
13. Bohannan, G.W. Analog fractional order controller in temperature and motor control applications. J. Vib. Control 2008, 14, 1487–1498.
14. Novikov, V.V.; Wojciechowski, K.W.; Komkova, O.A.; Thiel, T. Anomalous relaxation in dielectrics. Equations with fractional derivatives. Mater. Sci. 2005, 23, 977–984.
15. Agrawal, O.P. A new Lagrangian and a new Lagrange equation of motion for fractionally damped systems. J. Appl. Mech. 2001, 68, 339–341.
16. Mozyrska, D.; Torres, D.F.M. Minimal modified energy control for fractional linear control systems with the Caputo derivative. Carpath. J. Math. 2010, 26, 210–221.
17. Almeida, R.; Malinowska, A.B.; Torres, D.F.M. A fractional calculus of variations for multiple integrals with application to vibrating string. J. Math. Phys. 2010, 51, 033503.
18. Lotfi, A.; Dehghan, M.; Yousefi, S.A. A numerical technique for solving fractional optimal control problems. Comput. Math. Appl. 2011, 62, 1055–1067.
19. Khosravian-Arab, H.; Almeida, R. Numerical solution for fractional variational problems using the Jacobi polynomials. Appl. Math. Modell. 2015.
20. Agrawal, O.P. Formulation of Euler-Lagrange equations for fractional variational problems. J. Math. Anal. Appl. 2002, 272, 368–379.
21. Doha, E.H.; Bhrawy, A.H.; Baleanu, D.; Ezz-Eldien, S.S.; Hafez, R.M. An efficient numerical scheme based on the shifted orthonormal Jacobi polynomials for solving fractional optimal control problems. Adv. Differ. Equ. 2015.
22. Ezz-Eldien, S.S.; Doha, E.H.; Baleanu, D.; Bhrawy, A.H. A numerical approach based on Legendre orthonormal polynomials for numerical solutions of fractional optimal control problems. J. Vib. Control 2015.
23. Osama, H.M.; Fadhel, S.F.; Zaid, A.M. Numerical solution of fractional variational problems using direct Haar wavelet method. Int. J. Innov. Res. Sci. Eng. Technol. 2014, 3, 12742–12750.
24. Ezz-Eldien, S.S. New quadrature approach based on operational matrix for solving a class of fractional variational problems. J. Comp. Phys. 2016, 317, 362–381.
25. Bastos, N.; Ferreira, R.; Torres, D.F.M. Discrete-time fractional variational problems. Signal Process. 2011, 91, 513–524.
26. Wang, D.; Xiao, A. Fractional variational integrators for fractional variational problems. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 602–610.
27. Odzijewicz, T.; Malinowska, A.B.; Torres, D.F.M. Fractional variational calculus with classical and combined Caputo derivatives. Nonlinear Anal. 2012, 75, 1507–1515.
28. Bhrawy, A.H.; Ezz-Eldien, S.S. A new Legendre operational technique for delay fractional optimal control problems. Calcolo 2015.
29. Tavares, D.; Almeida, R.; Torres, D.F.M. Optimality conditions for fractional variational problems with dependence on a combined Caputo derivative of variable order. Optimization 2015, 64, 1381–1391.
30. Ezz-Eldien, S.S.; Hafez, R.M.; Bhrawy, A.H.; Baleanu, D.; El-Kalaawy, A.A. New numerical approach for fractional variational problems using shifted Legendre orthonormal polynomials. J. Optim. Theory Appl. 2017, 174, 295–320.
31. Almeida, R. Variational problems involving a Caputo-type fractional derivative. J. Optim. Theory Appl. 2017, 174, 276–294.
32. Pandey, R.K.; Agrawal, O.P. Numerical Scheme for Generalized Isoparametric Constraint Variational Problems with A-Operator. In Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Portland, OR, USA, 4–7 August 2013.
  33. Pandey, R.K.; Agrawal, O.P. Numerical scheme for a quadratic type generalized isoperimetric constraint variational problems with A.-operator. J. Comput. Nonlinear Dyn. 2015, 10, 021003. [Google Scholar] [CrossRef]
  34. Pandey, R.K.; Agrawal, O.P. Comparison of four numerical schemes for isoperimetric constraint fractional variational problems with A-operator. In Proceedings of the ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Boston, MA, USA, 2–5 August 2015. [Google Scholar] [CrossRef]
  35. Singh, H.; Srivastava, H.M.D.; Kumar, D. A reliable numerical algorithm for the fractional vibration equation. Chaos Solitons Fractals 2017, 103, 131–138. [Google Scholar] [CrossRef]
  36. Singh, O.P.; Singh, V.K.; Pandey, R.K. A stable numerical inversion of Abel’s integral equation using almost Bernstein operational matrix. J. Quant. Spec. Rad. Trans. 2012, 111, 567–579. [Google Scholar] [CrossRef]
  37. Zhou, F.; Xu, X. Numerical solution of convection diffusions equations by the second kind Chebyshev wavelets. Appl. Math. Comput. 2014, 247, 353–367. [Google Scholar] [CrossRef]
  38. Yousefi, S.A.; Behroozifar, M.; Dehghan, M. The operational matrices of Bernstein polynomials for solving the parabolic equation subject to the specification of the mass. J. Comput. Appl. Math. 2011, 235, 5272–5283. [Google Scholar] [CrossRef]
  39. Singh, H. A New Numerical Algorithm for Fractional Model of Bloch Equation in Nuclear Magnetic Resonance. Alex. Eng. J. 2016, 55, 2863–2869. [Google Scholar] [CrossRef]
  40. Khalil, H.; Khan, R.A. A new method based on Legendre polynomials for solutions of the fractional two dimensional heat conduction equations. Comput. Math. Appl. 2014, 67, 1938–1953. [Google Scholar] [CrossRef]
  41. Singh, C.S.; Singh, H.; Singh, V.K.; Singh, O.P. Fractional order operational matrix methods for fractional singular integro-differential equation. Appl. Math. Modell. 2016, 40, 10705–10718. [Google Scholar] [CrossRef]
  42. Rivlin, T.J. An Introduction to the Approximation of Functions; Dover Publication: New York, NY, USA, 1981. [Google Scholar]
  43. Kreyszig, E. Introductory Functional Analysis with Applications; John Wiley and Sons, Inc.: Hoboken, NJ, USA, 1978. [Google Scholar]
  44. Bhrawy, A.H.; Tharwat, M.M.; Alghamdi, M.A. A new operational matrix of fractional integration for shifted Jacobi polynomials. Bull. Malays. Math. Sci. Soc. 2014, 37, 983–995. [Google Scholar]
  45. Behroozifar, M.; Sazmand, A. An approximate solution based on Jacobi polynomials for time-fractional convection–diffusion equation. Appl. Math. Comput. 2017, 296, 1–17. [Google Scholar] [CrossRef]
Figure 1. Comparison of exact and numerical solutions using S1 for α = 0.8 and α = 1, Example 1.
Figure 2. Comparison of exact and numerical solutions using S2 for α = 0.8 and α = 1, Example 1.
Figure 3. Comparison of exact and numerical solutions using S3 for α = 0.8 and α = 1, Example 1.
Figure 4. Comparison of exact and numerical solutions using S4 for α = 0.8 and α = 1, Example 1.
Figure 5. Comparison of exact and numerical solutions using S5 for α = 0.8 and α = 1, Example 1.
Figure 6. The behavior of solutions using S1 for α values of 0.5, 0.6, 0.7, 0.8, 0.9, and 1, Example 1.
Figure 7. The behavior of solutions using S2 for α values of 0.5, 0.6, 0.7, 0.8, 0.9, and 1, Example 1.
Figure 8. The behavior of solutions using S3 for α values of 0.5, 0.6, 0.7, 0.8, 0.9, and 1, Example 1.
Figure 9. The behavior of solutions using S4 for α values of 0.5, 0.6, 0.7, 0.8, 0.9, and 1, Example 1.
Figure 10. The behavior of solutions using S5 for α values of 0.5, 0.6, 0.7, 0.8, 0.9, and 1, Example 1.
Figure 11. Comparison of exact and numerical solutions using S1 for α = 0.8 and α = 1, Example 2.
Figure 12. Comparison of exact and numerical solutions using S2 for α = 0.8 and α = 1, Example 2.
Figure 13. Comparison of exact and numerical solutions using S3 for α = 0.8 and α = 1, Example 2.
Figure 14. Comparison of exact and numerical solutions using S4 for α = 0.8 and α = 1, Example 2.
Figure 15. Comparison of exact and numerical solutions using S5 for α = 0.8 and α = 1, Example 2.
Figure 16. The behavior of solutions using S1 for α values of 0.5, 0.6, 0.7, 0.8, 0.9, and 1, Example 2.
Figure 17. The behavior of solutions using S2 for α values of 0.5, 0.6, 0.7, 0.8, 0.9, and 1, Example 2.
Figure 18. The behavior of solutions using S3 for α values of 0.5, 0.6, 0.7, 0.8, 0.9, and 1, Example 2.
Figure 19. The behavior of solutions using S4 for α values of 0.5, 0.6, 0.7, 0.8, 0.9, and 1, Example 2.
Figure 20. The behavior of solutions using S5 for α values of 0.5, 0.6, 0.7, 0.8, 0.9, and 1, Example 2.
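In the figures above, S1–S5 denote the five polynomial families listed in the abstract: shifted Legendre, shifted Chebyshev of the first, third, and fourth kinds, and Gegenbauer polynomials. As a point of reference, the sketch below evaluates each family on [0, 1] via its standard three-term recurrence under the usual shift t = 2x − 1; the function names are illustrative, not the paper's notation, and the Gegenbauer parameter λ is an assumption (the paper's choice is not stated in this excerpt).

```python
def _recurrence(n, t, p0, p1, step):
    """Evaluate p_n(t) from a three-term recurrence p_k = step(k, t, p_{k-1}, p_{k-2})."""
    if n == 0:
        return p0
    prev, cur = p0, p1
    for k in range(2, n + 1):
        prev, cur = cur, step(k, t, cur, prev)
    return cur

def shifted_legendre(n, x):          # S1: P_n(2x - 1)
    t = 2.0 * x - 1.0
    return _recurrence(n, t, 1.0, t,
                       lambda k, t, c, p: ((2 * k - 1) * t * c - (k - 1) * p) / k)

def shifted_cheb1(n, x):             # S2: first kind, T_n(2x - 1)
    t = 2.0 * x - 1.0
    return _recurrence(n, t, 1.0, t, lambda k, t, c, p: 2 * t * c - p)

def shifted_cheb3(n, x):             # S3: third kind, V_n(2x - 1)
    t = 2.0 * x - 1.0
    return _recurrence(n, t, 1.0, 2 * t - 1, lambda k, t, c, p: 2 * t * c - p)

def shifted_cheb4(n, x):             # S4: fourth kind, W_n(2x - 1)
    t = 2.0 * x - 1.0
    return _recurrence(n, t, 1.0, 2 * t + 1, lambda k, t, c, p: 2 * t * c - p)

def shifted_gegenbauer(n, x, lam=1.0):   # S5: C_n^lambda(2x - 1); lam is an assumed default
    t = 2.0 * x - 1.0
    return _recurrence(n, t, 1.0, 2 * lam * t,
                       lambda k, t, c, p: (2 * t * (k + lam - 1) * c - (k + 2 * lam - 2) * p) / k)
```

For λ = 1 the Gegenbauer family reduces to the Chebyshev polynomials of the second kind, which gives a quick sanity check against known values.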
Table 1. Result comparison of Example 1 for different orthogonal polynomials at different values of n.

Polynomials | Maximum Absolute Error (n = 2) | Maximum Absolute Error (n = 6) | Root-Mean-Square Error (n = 2) | Root-Mean-Square Error (n = 6)
S1 | 1.4584 × 10−1 | 1.8326 × 10−7 | 1.3923 × 10−2 | 2.4900 × 10−8
S2 | 1.6154 × 10−1 | 4.2127 × 10−7 | 2.0960 × 10−2 | 7.1127 × 10−8
S3 | 2.2296 × 10−1 | 3.6897 × 10−7 | 1.3179 × 10−2 | 3.3726 × 10−8
S4 | 4.1764 × 10−1 | 1.0973 × 10−6 | 3.2039 × 10−2 | 1.7307 × 10−7
S5 | 1.9055 × 10−1 | 3.1593 × 10−7 | 2.2138 × 10−2 | 4.2368 × 10−8
Table 2. Result comparison of Example 2 for different orthogonal polynomials at different values of n.

Polynomials | Maximum Absolute Error (n = 2) | Maximum Absolute Error (n = 6) | Root-Mean-Square Error (n = 2) | Root-Mean-Square Error (n = 6)
S1 | 2.0407 × 10−2 | 1.4819 × 10−7 | 2.7038 × 10−3 | 1.5021 × 10−8
S2 | 2.4295 × 10−2 | 1.3713 × 10−2 | 3.1356 × 10−3 | 1.6490 × 10−3
S3 | 8.0010 × 10−2 | 4.8371 × 10−2 | 1.2590 × 10−2 | 8.3236 × 10−3
S4 | 1.1193 × 10−1 | 5.0558 × 10−2 | 9.8251 × 10−3 | 6.8594 × 10−3
S5 | 2.5349 × 10−2 | 1.9316 × 10−2 | 3.2343 × 10−3 | 2.4746 × 10−3
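The two error measures reported in the tables are standard: sample the exact and approximate solutions on a grid over the domain, then take the largest pointwise error and the root mean square of the pointwise errors. A minimal sketch, assuming a uniform grid on [0, 1]; the grid size is an illustrative choice, not taken from the paper.

```python
import math

def error_metrics(exact, approx, num_points=101):
    """Maximum absolute error and RMS error of approx vs. exact on a uniform grid over [0, 1]."""
    xs = [i / (num_points - 1) for i in range(num_points)]
    errs = [abs(exact(x) - approx(x)) for x in xs]
    mae = max(errs)                                          # maximum absolute error
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))   # root-mean-square error
    return mae, rmse
```

For a constant offset between the two functions, both measures equal that offset, which makes the helper easy to sanity-check.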

Singh, H.; Pandey, R.K.; Srivastava, H.M. Solving Non-Linear Fractional Variational Problems Using Jacobi Polynomials. Mathematics 2019, 7, 224. https://0-doi-org.brum.beds.ac.uk/10.3390/math7030224