Article

Good (and Not So Good) Practices in Computational Methods for Fractional Calculus

1 Fakultät Angewandte Natur- und Geisteswissenschaften, University of Applied Sciences Würzburg-Schweinfurt, Ignaz-Schön-Str. 11, 97421 Schweinfurt, Germany
2 GNS mbH Gesellschaft für Numerische Simulation mbH, Am Gaußberg 2, 38114 Braunschweig, Germany
3 Department of Mathematics, University of Bari, Via E. Orabona 4, 70126 Bari, Italy
4 INdAM Research Group GNCS, Piazzale Aldo Moro 5, 00185 Rome, Italy
5 Applied and Computational Mathematics Division, Beijing Computational Science Research Center, Beijing 100193, China
* Author to whom correspondence should be addressed.
Submission received: 26 January 2020 / Revised: 24 February 2020 / Accepted: 25 February 2020 / Published: 2 March 2020
(This article belongs to the Special Issue Fractional Integrals and Derivatives: “True” versus “False”)

Abstract: The solution of fractional-order differential problems requires in the majority of cases the use of some computational approach. In general, the numerical treatment of fractional differential equations is much more difficult than in the integer-order case, and very often non-specialist researchers are unaware of the specific difficulties. As a consequence, numerical methods are often applied in an incorrect way or unreliable methods are devised and proposed in the literature. In this paper we try to identify some common pitfalls in the use of numerical methods in fractional calculus, to explain their nature and to list some good practices that should be followed in order to obtain correct results.

1. Introduction

The increasing interest in applications of fractional calculus, together with the difficulty of finding analytical solutions of fractional differential equations (FDEs), naturally forces researchers to study, devise and apply numerical methods to solve a large range of ordinary and partial differential equations with fractional derivatives.
The investigation of computational methods for fractional-order problems is therefore a very active research area in which, each year, a large number of research papers are published.
The task of finding efficient and reliable numerical methods for handling integrals and/or derivatives of fractional order is a challenge in its own right, with difficulties that differ in character but are no less severe than those associated with finding analytical solutions. The specific nature of these operators involves computational challenges which, if not properly addressed, may lead to unreliable or even wrong results.
Unfortunately, the scientific literature is rich with examples of methods that are inappropriate for fractional-order problems. In most cases these are just methods that were devised originally for standard integer-order operators and then applied in a naive way to their fractional-order counterparts; without proper knowledge of the specific features of fractional-order problems, researchers are often unable to understand why unexpected results are obtained.
The main aims of this paper are to identify a few major guidelines that should be followed when devising reliable computational methods for fractional-order problems, and to highlight the main peculiarities that make the solution of differential equations of fractional order a different—but surely more difficult and stimulating—task from the integer-order case. We do not intend merely to criticize weak or wrong methods, but try to explain why certain approaches are unreliable in fractional calculus and, where possible, point the reader towards more suitable approaches.
This paper is mainly addressed at young researchers or scientists without a particular background in the numerical analysis of fractional-order problems but who need to apply computational methods to solve problems of fractional order. We aim to offer in this way a kind of guide to avoid some of the most common mistakes which, unfortunately, are sometimes made in this field.
The paper is organized in the following way. After recalling in Section 2 some basic definitions and properties, we illustrate in Section 3 the most common ideas underlying the majority of the methods proposed in the literature: very often the basic ideas are not properly recognized and common methods are claimed to be new. In Section 4 we discuss why polynomial approximations can be only partially satisfactory for fractional-order problems and why they are unsuitable for devising high-order methods (as has often been proposed). The major problems related to the nonlocality of fractional operators are addressed in Section 5, and Section 6 discusses some of the most powerful approaches for the efficient treatment of the memory term. Some remarks related to the numerical treatment of fractional partial differential equations are presented in Section 7 and some final comments are given in Section 8.

2. Basic Material and Notations

With the aim of fixing the notation and making available the most common definitions and properties for further reference, we recall here some basic notions concerning fractional calculus.
For $\alpha > 0$ and any $t_0 \in \mathbb{R}$, we adopt throughout the paper the usual definitions of the fractional integral of Riemann–Liouville type
$$J_{t_0}^{\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t-\tau)^{\alpha-1} f(\tau)\,d\tau, \quad t > t_0, \tag{1}$$
of the fractional derivative of Riemann–Liouville type
$${}^{RL}\!D_{t_0}^{\alpha} f(t) := D^m J_{t_0}^{m-\alpha} f(t) = \frac{1}{\Gamma(m-\alpha)}\, \frac{d^m}{dt^m} \int_{t_0}^{t} (t-\tau)^{m-\alpha-1} f(\tau)\,d\tau, \quad t > t_0, \tag{2}$$
and of the fractional derivative of Caputo type
$${}^{C}\!D_{t_0}^{\alpha} f(t) := J_{t_0}^{m-\alpha} D^m f(t) = \frac{1}{\Gamma(m-\alpha)} \int_{t_0}^{t} (t-\tau)^{m-\alpha-1} f^{(m)}(\tau)\,d\tau, \quad t > t_0, \tag{3}$$
with $m = \lceil \alpha \rceil$ the smallest integer greater than or equal to $\alpha$.
We refer to any of the many existing textbooks on this subject (e.g., [1,2,3,4,5,6]) for an exhaustive treatment of the conditions under which the above operators exist and for their main properties. We just recall here the relationship between ${}^{RL}\!D_{t_0}^{\alpha}$ and ${}^{C}\!D_{t_0}^{\alpha}$, expressed as
$${}^{C}\!D_{t_0}^{\alpha} f(t) = {}^{RL}\!D_{t_0}^{\alpha} \bigl[ f - T_{m-1}[f; t_0] \bigr](t), \tag{4}$$
where $T_{m-1}[f; t_0]$ is the Taylor polynomial of degree $m-1$ for the function $f$ about the point $t_0$,
$$T_{m-1}[f; t_0](t) = \sum_{k=0}^{m-1} \frac{(t - t_0)^k}{k!}\, f^{(k)}(t_0). \tag{5}$$
Moreover, we will almost exclusively consider initial value problems of Cauchy type for FDEs with the Caputo derivative, i.e.,
$$\begin{cases} {}^{C}\!D_{t_0}^{\alpha} y(t) = f(t, y(t)), \\ y(t_0) = y_0, \;\; y'(t_0) = y_0^{(1)}, \;\; \dots, \;\; y^{(m-1)}(t_0) = y_0^{(m-1)}, \end{cases} \tag{6}$$
for some assigned initial values $y_0, y_0^{(1)}, \dots, y_0^{(m-1)}$. A few general comments will also be made regarding problems associated with partial differential equations.
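As a quick sanity check of these definitions (an illustrative sketch assuming Python with SciPy, not part of the original paper), the Riemann–Liouville integral (1) of a power function can be compared with the closed form $J_0^{\alpha} t^p = \frac{\Gamma(p+1)}{\Gamma(p+1+\alpha)}\, t^{p+\alpha}$:

```python
# Minimal sanity check of definition (1) with t0 = 0 (illustrative, not from
# the paper): for f(t) = t^p one has
#   J^alpha f(t) = Gamma(p+1)/Gamma(p+1+alpha) * t^(p+alpha).
from scipy.integrate import quad
from scipy.special import gamma

alpha, p, t = 0.6, 2.0, 1.5

# quad's 'alg' weight handles the weakly singular factor (t - tau)^(alpha - 1)
numeric, _ = quad(lambda tau: tau**p, 0.0, t, weight='alg', wvar=(0.0, alpha - 1.0))
numeric /= gamma(alpha)

exact = gamma(p + 1) / gamma(p + 1 + alpha) * t**(p + alpha)
print(numeric, exact)   # the two values agree to many digits
```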

3. Novel or Well-Established Methods?

Quite frequently, one sees papers whose promising title claims the presentation of “new methods” or “a family of new methods” for some particular fractional-order operator. Papers of this type immediately capture the attention of readers eager for new and good ideas for numerically solving problems of this type.
But reading the first few pages of such papers can be a source of frustration, since what is claimed to be new is merely an old method applied to a particular (maybe new) problem. Now it is understandable that sometimes an old method is reinvented by a different author, maybe because it can be derived by some different approach or because the author is unaware of the previously published result (perhaps because it was published under an imprecise or misleading title). In fractional calculus, however, a different and quite strange phenomenon has taken hold: well-known and widely used methods are often claimed as “new” just because they are being applied to some specific problem. It seems that some authors are unaware that it is the development of new ideas and new approaches that leads to methods that can be described as new—not the application of known ideas to a particular problem. Even the application of well-established techniques to any of the new operators, obtained by simply replacing the kernel in the integral (1) with some other function, cannot be considered a truly novel method, especially when the extension to the new operator is straightforward.
Most of the papers announcing “new” methods are instead based on ideas and techniques that were proposed and studied decades ago, and sometimes proper references to the original sources are not even given.
In fact, there are a few basic and powerful methods that are suitable and extremely popular for fractional-order problems, and many proposed “new methods” are simply the application of the ideas behind them. It may therefore be useful to illustrate the main and more popular ideas that are most frequently (re)-proposed in fractional calculus, and to outline a short history of their origin and development.

3.1. Polynomial Interpolation and Product-Integration Rules

Solving differential equations by approximating their solution or their vector field by a polynomial interpolant is a very old and common idea. Some of the classical linear multistep methods for ordinary differential equations (ODEs), specifically those of Adams–Bashforth or Adams–Moulton type, are based on this approach.
In 1954 the British mathematician Andrew Young proposed [7,8] the application of polynomial interpolation to the numerical solution of Volterra integral equations. This approach turns out to be suitable for FDEs since (6) can be reformulated as the Volterra integral equation
$$y(t) = T_{m-1}[f; t_0](t) + \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t - u)^{\alpha-1} f(u, y(u))\,du. \tag{7}$$
The approach proposed by Young is to define a grid $\{t_n\}$ on the solution interval $[t_0, T]$ (very often, but not necessarily, equispaced, namely $t_n = t_0 + hn$ with $h = (T - t_0)/N$) and to rewrite (7) in a piecewise way as
$$y(t_n) = T_{m-1}[f; t_0](t_n) + \frac{1}{\Gamma(\alpha)} \sum_{j=0}^{n-1} \int_{t_j}^{t_{j+1}} (t_n - u)^{\alpha-1} f(u, y(u))\,du, \tag{8}$$
then to replace, in each interval $[t_j, t_{j+1}]$, the vector field $f(u, y(u))$ by a polynomial interpolating $f$ on the grid. This approach is particularly simple if one uses polynomials of degree 0 or 1, because then one can determine the approximation solely on the basis of the data at one of the subinterval's end points (degree 0: the product rectangle method) or at both end points (degree 1: the product trapezoidal method); thus, in these cases one need not introduce auxiliary points inside the interval or points outside the interval. Neither of these methods can yield a particularly high order of convergence but, as we shall demonstrate in Section 4, the analytic properties of typical solutions to fractional differential equations make it very difficult and cumbersome to achieve high-order accuracy irrespective of the technique used. Consequently, and because these techniques have been thoroughly investigated with respect to their convergence properties [9] and their stability [10] and are hence very well understood, the product rectangle and product trapezoidal methods are highly popular among users of fractional-order models.
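As a concrete illustration of this classical construction (a minimal sketch in Python with NumPy; the function name and the test problem are our own choices), the explicit product rectangle method replaces $f(u, y(u))$ on $[t_j, t_{j+1}]$ by the constant $f(t_j, y_j)$ in (8):

```python
import numpy as np
from math import gamma

def pi_rectangle(f, y0, alpha, t0, T, N):
    """Explicit product-rectangle (PI) rule for C D^alpha y = f(t, y), 0 < alpha < 1,
    on a uniform mesh: replace f(u, y(u)) on [t_j, t_{j+1}] by f(t_j, y_j) in (8)."""
    h = (T - t0) / N
    t = t0 + h * np.arange(N + 1)
    y = np.empty(N + 1); y[0] = y0
    fvals = np.empty(N + 1); fvals[0] = f(t[0], y0)
    k = np.arange(N + 1)**alpha            # k^alpha, used to build the weights
    for n in range(1, N + 1):
        j = np.arange(n)
        # int_{t_j}^{t_{j+1}} (t_n - u)^(alpha-1) du = h^alpha/alpha * (k[n-j] - k[n-j-1])
        w = k[n - j] - k[n - j - 1]
        y[n] = y0 + h**alpha / gamma(alpha + 1) * np.dot(w, fvals[:n])
        fvals[n] = f(t[n], y[n])
    return t, y

# usage: fractional relaxation C D^0.7 y = -y, y(0) = 1; halving h roughly
# halves the error (first-order convergence of the rectangle rule)
t, y = pi_rectangle(lambda t, y: -y, 1.0, 0.7, 0.0, 1.0, 200)
```

Replacing the constant by the linear interpolant at both end points, and solving the resulting implicit equation at each step, gives the product trapezoidal method in the same framework.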
Higher-order methods have occasionally been proposed [11,12] but—as indicated above and discussed in more detail in Section 4—they tend to require rather uncommon properties of the exact solutions to the given problems and therefore are used only infrequently. We also have to notice that the effects of the lack of regularity on the convergence properties of product-integration rules have been studied since 1985 for Volterra integral equations [13] and since 2004 for the specific case of FDEs [14].

3.2. Approximation of Derivatives: L1 and L2 Schemes

A classical numerical technique for approximating the Caputo differential operator from (3) is the so-called L1 scheme. For $0 < \alpha < 1$, the definition of the Caputo operator becomes
$${}^{C}\!D_{t_0}^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_{t_0}^{t} (t - \tau)^{-\alpha} f'(\tau)\,d\tau \quad \text{for } t > t_0.$$
The idea ([15], Equation (8.2.6)) is to introduce a completely arbitrary (i.e., not necessarily uniformly spaced) mesh $t_0 < t_1 < t_2 < \dots < t_N$ and to replace the factor $f'(\tau)$ in the integrand by the approximation
$$f'(\tau) \approx \frac{f(t_{j+1}) - f(t_j)}{t_{j+1} - t_j} \quad \text{whenever } \tau \in (t_j, t_{j+1}).$$
This produces the approximation formula
$${}^{C}\!D_{t_0}^{\alpha} f(t_n) \approx {}^{C}\!D_{t_0, L1}^{\alpha} f(t_n) = \frac{1}{\Gamma(2-\alpha)} \sum_{j=0}^{n-1} w_{n-j-1,n} \bigl( f(t_{n-j}) - f(t_{n-j-1}) \bigr)$$
with
$$w_{\mu,n} = \frac{(t_n - t_\mu)^{1-\alpha} - (t_n - t_{\mu+1})^{1-\alpha}}{t_{\mu+1} - t_\mu}.$$
For smooth functions $f$ (but only under this assumption!) and an equispaced mesh $t_j = t_0 + jh$, the convergence order of the L1 method is $O(h^{2-\alpha})$.
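The following sketch (assuming NumPy; our own illustrative code, not from [15]) implements this formula on an arbitrary mesh and checks it on a smooth function:

```python
import numpy as np
from math import gamma

def caputo_l1(fvals, t, alpha):
    """L1 approximation of the Caputo derivative (0 < alpha < 1) at every point
    of a (possibly nonuniform) mesh t[0..N], given samples fvals = f(t)."""
    N = len(t) - 1
    D = np.zeros(N + 1)
    df = np.diff(fvals)                 # f(t_{j+1}) - f(t_j)
    for n in range(1, N + 1):
        j = np.arange(n)
        w = ((t[n] - t[j])**(1 - alpha) - (t[n] - t[j + 1])**(1 - alpha)) / (t[j + 1] - t[j])
        D[n] = np.dot(w, df[:n]) / gamma(2 - alpha)
    return D

# sanity check on f(t) = t^3, whose Caputo derivative is 6 t^(3-alpha)/Gamma(4-alpha);
# on a uniform mesh the maximum error should decay roughly like h^(2-alpha)
alpha = 0.4
t = np.linspace(0.0, 1.0, 401)
err = np.max(np.abs(caputo_l1(t**3, t, alpha) - 6 * t**(3 - alpha) / gamma(4 - alpha)))
```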
By construction, the L1 method is restricted to the case $0 < \alpha < 1$. For $\alpha \in (1,2)$, the L2 method ([15], §8.2) provides a useful modification. In its construction, one starts from the representation
$${}^{C}\!D_{t_0}^{\alpha} f(t) = \frac{1}{\Gamma(2-\alpha)} \int_{0}^{t - t_0} \tau^{1-\alpha} f''(t - \tau)\,d\tau,$$
which is valid for these values of $\alpha$. Using now a uniform grid $t_j = t_0 + jh$, one replaces the second derivative of $f$ in the integrand by its central difference approximation,
$$f''(t_n - \tau) \approx \frac{1}{h^2} \bigl( f(t_n - t_{k+1}) - 2 f(t_n - t_k) + f(t_n - t_{k-1}) \bigr)$$
for $\tau \in [t_k, t_{k+1}]$, which yields
$${}^{C}\!D_{t_0}^{\alpha} f(t_n) \approx {}^{C}\!D_{t_0, L2}^{\alpha} f(t_n) = \frac{h^{-\alpha}}{\Gamma(3-\alpha)} \sum_{k=-1}^{n} w_{k,n}\, f(t_{n-k}),$$
where now
$$w_{k,n} = \begin{cases} 1 & \text{for } k = -1, \\ 2^{2-\alpha} - 3 & \text{for } k = 0, \\ (k+2)^{2-\alpha} - 3(k+1)^{2-\alpha} + 3k^{2-\alpha} - (k-1)^{2-\alpha} & \text{for } 1 \le k \le n-2, \\ -2n^{2-\alpha} + 3(n-1)^{2-\alpha} - (n-2)^{2-\alpha} & \text{for } k = n-1, \\ n^{2-\alpha} - (n-1)^{2-\alpha} & \text{for } k = n. \end{cases}$$
A disadvantage of this method is that it requires the evaluation of $f$ at the point $t_{n+1} = (n+1)h$, which is located outside the interval $[0, t_n]$.
The central difference used in the definition of the L2 method is symmetric with respect to one of the end points of the associated subinterval $[t_k, t_{k+1}]$, not with respect to its mid point. If this is not desired, one may instead use the alternative
$$f''(t_n - \tau) \approx \frac{1}{2h^2} \bigl( f(t_{n-k-2}) - f(t_{n-k-1}) + f(t_{n-k+1}) - f(t_{n-k}) \bigr)$$
on this subinterval. This leads to the L2C method [16]
$${}^{C}\!D_{t_0}^{\alpha} f(t_n) \approx {}^{C}\!D_{t_0, L2C}^{\alpha} f(t_n) = \frac{h^{-\alpha}}{2\,\Gamma(3-\alpha)} \sum_{k=-1}^{n+1} w_{k,n}\, f(t_{n-k})$$
with
$$w_{k,n} = \begin{cases} 1 & \text{for } k = -1, \\ 2^{2-\alpha} - 2 & \text{for } k = 0, \\ 3^{2-\alpha} - 2 \cdot 2^{2-\alpha} & \text{for } k = 1, \\ (k+2)^{2-\alpha} - 2(k+1)^{2-\alpha} + 2(k-1)^{2-\alpha} - (k-2)^{2-\alpha} & \text{for } 2 \le k \le n-2, \\ -n^{2-\alpha} - (n-3)^{2-\alpha} + 2(n-2)^{2-\alpha} & \text{for } k = n-1, \\ -n^{2-\alpha} + 2(n-1)^{2-\alpha} - (n-2)^{2-\alpha} & \text{for } k = n, \\ n^{2-\alpha} - (n-1)^{2-\alpha} & \text{for } k = n+1. \end{cases}$$
Like the L2 method, the L2C method also requires the evaluation of $f$ outside the interval $[0, t_n]$: one has to compute $f((n+1)h)$ and $f(-h)$. Both the L2 and the L2C method exhibit $O(h^{3-\alpha})$ convergence behavior for $1 < \alpha < 2$ if $f$ is sufficiently well behaved; the constants implicitly contained in the $O$-terms seem to be smaller for the L2 method in the case $1 < \alpha < 1.5$ and for the L2C method if $1.5 < \alpha < 2$.
In the limit case $\alpha \to 1$, the L2 method reduces to first-order backward differencing, and the L2C method becomes the first-order centered difference; for $\alpha \to 2$ the L2 method corresponds to the classical second-order central difference.
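For completeness, the following sketch (assuming NumPy; our own illustrative code) implements the L2 weights above and checks them on $f(t) = t^2$, for which the scheme reproduces the exact Caputo derivative up to rounding because $f''$ is constant:

```python
import numpy as np
from math import gamma

def caputo_l2(f, h, n, alpha):
    """L2 approximation (1 < alpha < 2) of the Caputo derivative at t_n = n*h on a
    uniform mesh with t0 = 0; note that f is evaluated at t_{n+1}, outside [0, t_n]."""
    k = np.arange(-1, n + 1)                      # k = -1, 0, ..., n
    w = np.empty(n + 2)
    w[0] = 1.0                                    # k = -1
    w[1] = 2.0**(2 - alpha) - 3.0                 # k = 0
    kk = np.arange(1.0, n - 1)                    # 1 <= k <= n-2
    w[2:n] = ((kk + 2)**(2 - alpha) - 3 * (kk + 1)**(2 - alpha)
              + 3 * kk**(2 - alpha) - (kk - 1)**(2 - alpha))
    w[n] = -2.0 * n**(2 - alpha) + 3.0 * (n - 1)**(2 - alpha) - (n - 2)**(2 - alpha)
    w[n + 1] = float(n)**(2 - alpha) - (n - 1.0)**(2 - alpha)
    assert abs(w.sum()) < 1e-8        # sanity check: derivative weights sum to zero
    return h**(-alpha) / gamma(3 - alpha) * np.dot(w, f((n - k) * h))

alpha, n, h = 1.5, 400, 1.0 / 400
# f(t) = t^2 has constant f'', so the L2 formula reproduces the exact Caputo
# derivative 2 t^(2-alpha)/Gamma(3-alpha) at t = 1 up to rounding error
print(caputo_l2(lambda t: t**2, h, n, alpha) - 2.0 / gamma(3 - alpha))
```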

3.3. Fractional Linear Multistep Methods

Fractional linear multistep methods (FLMMs) are less frequently used since their coefficients are, in general, not known explicitly, and some algorithm must be devised for their computation, which is often technically difficult. Nevertheless, since these methods allow us to overcome some of the issues associated with other approaches, it is worth giving a short presentation of their properties.
FLMMs were proposed by Lubich in 1986 [17] and studied in subsequent works [18,19,20]. They extend to fractional-order integrals the quadrature rules obtained from standard linear multistep methods (LMMs) for ODEs.
Let us consider a classical $k$-step LMM of order $p > 0$ with first and second characteristic polynomials $\rho(z) = \rho_0 z^k + \rho_1 z^{k-1} + \dots + \rho_k$ and $\sigma(z) = \sigma_0 z^k + \sigma_1 z^{k-1} + \dots + \sigma_k$, namely
$$\sum_{j=0}^{k} \rho_j\, y_{n-j} = h \sum_{j=0}^{k} \sigma_j\, f(t_{n-j}, y_{n-j}), \tag{9}$$
where $\delta(\xi) = \rho(1/\xi)/\sigma(1/\xi)$ is the corresponding generating function.
FLMMs generalizing LMMs (9) for solving FDEs (7) are expressed as
$$y_n = T_{m-1}[f; t_0](t_n) + h^{\alpha} \sum_{j=0}^{\nu} w_{n,j}\, f(t_j, y_j) + h^{\alpha} \sum_{j=0}^{n} \omega_{n-j}^{(\alpha)}\, f(t_j, y_j), \tag{10}$$
where the convolution weights $\omega_n^{(\alpha)}$ are obtained from the power series expansion of $\delta(\xi)^{-\alpha}$, namely
$$\sum_{n=0}^{\infty} \omega_n^{(\alpha)}\, \xi^n = \left( \frac{1}{\delta(\xi)} \right)^{\!\alpha},$$
and the $w_{n,j}$ are some starting weights that are introduced to deal with the lack of regularity of the solution at the origin; they are obtained by solving, at each step $n$, the algebraic linear system
$$\sum_{j=0}^{\nu} w_{n,j}\, j^{\gamma} = -\sum_{j=0}^{n} \omega_{n-j}^{(\alpha)}\, j^{\gamma} + \frac{\Gamma(\gamma+1)}{\Gamma(1+\gamma+\alpha)}\, n^{\gamma+\alpha}, \qquad \gamma \in \mathcal{A}_p, \tag{11}$$
with $\mathcal{A}_p = \{ \gamma \in \mathbb{R} \,:\, \gamma = i + j\alpha, \; i, j \in \mathbb{N}, \; \gamma < p - 1 \}$ and $\nu + 1$ the cardinality of $\mathcal{A}_p$.
The intriguing property of FLMMs is that, unlike product-integration rules, they are able to preserve the same convergence order $p$ of the underlying LMM, provided the LMM satisfies certain properties: it is required that $\delta(\xi)$ has no zeros in the closed unit disc $|\xi| \le 1$ except for $\xi = 1$, and that $|\arg \delta(\xi)| < \pi$ for $|\xi| < 1$. Thus, high-order FLMMs are possible without the imposition of artificial smoothness assumptions of the kind required for methods based on polynomial interpolation.
But the price to be paid for this advantage may not be negligible: the convolution weights $\omega_n^{(\alpha)}$ are not known explicitly and must be computed by some (possibly sophisticated) method (a discussion of the general case is available in [17,18,19,20], while algorithms for FLMMs of trapezoidal type are presented in [21]). Moreover, high-order methods may require the solution of large or very large systems (11), depending on the order $\alpha$ of the equation and the convergence order $p$ of the method; in some cases these systems are so ill-conditioned as to affect the accuracy of the method, a problem addressed in depth in [22].
One of the simplest methods in this family is obtained from the backward Euler method, whose generating function is $\delta(\xi) = 1 - \xi$. Its convolution weights are hence the coefficients of the power series expansion of $(1-\xi)^{-\alpha}$, i.e., the binomial coefficients
$$\omega_j^{(\alpha)} = (-1)^j \binom{-\alpha}{j} = \frac{\Gamma(j+\alpha)}{\Gamma(\alpha)\, j!},$$
and no starting weights are necessary since the convergence order is $p = 1$ and hence $\mathcal{A}_p$ is the empty set. One easily recognizes that the so-called Grünwald–Letnikov scheme is obtained in this case. Although this scheme was discovered in the nineteenth century in independent works of Grünwald and Letnikov, its interpretation as an FLMM may facilitate its analysis.
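As an illustration, the sketch below (Python with NumPy; our own illustrative code) applies this Grünwald–Letnikov FLMM in the form (10) to the linear test equation ${}^C\!D^\alpha y = \lambda y$, computing the weights by the standard recurrence for binomial coefficients:

```python
import numpy as np

def flmm_backward_euler(lam, y0, alpha, T, N):
    """Backward-Euler FLMM (Grunwald-Letnikov scheme) for the linear test
    equation C D^alpha y = lam * y, 0 < alpha < 1, y(0) = y0; a minimal sketch.
    The convolution weights are the power-series coefficients of (1 - xi)^(-alpha),
    generated by the recurrence omega_j = (1 + (alpha - 1)/j) * omega_{j-1}."""
    h = T / N
    omega = np.empty(N + 1)
    omega[0] = 1.0
    for j in range(1, N + 1):
        omega[j] = (1.0 + (alpha - 1.0) / j) * omega[j - 1]
    y = np.empty(N + 1); y[0] = y0
    ha = h**alpha
    for n in range(1, N + 1):
        hist = np.dot(omega[n:0:-1], y[:n])        # sum_{j=0}^{n-1} omega_{n-j} y_j
        # solve y_n = y0 + h^alpha * lam * (hist + omega_0 * y_n) for y_n
        y[n] = (y0 + ha * lam * hist) / (1.0 - ha * lam * omega[0])
    return y

# usage: the computed values approximate the Mittag-Leffler solution E_alpha(-t^alpha)
y = flmm_backward_euler(-1.0, 1.0, 0.8, 5.0, 500)
```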

4. Classical Approximations Will Not Give High-Order Methods

Solutions of fractional-derivative problems typically exhibit weak singularities. This topic is discussed at length in the survey chapter [23] and has been known since early works on Volterra integral equations [24,25]. This singular behavior is a consequence of the weakly singular kernels of fractional integrals and derivatives; its importance, from a physical perspective, is related to the natural emergence of completely monotone (CM) relaxation functions in models whose dynamics are governed by these operators [26,27]; CM relaxation behavior is indeed typical of viscoelastic systems with strongly dissipative energies [28].
In the present section we shall examine the effects of the singular behavior on numerical methods, in the context of initial value problems such as (6).
To grasp quickly the main ideas, we focus on a very simple particular case of (6): the problem
$${}^{C}\!D_{0}^{\alpha} y(t) = 1 \quad \text{for } t \in (0, T], \tag{12}$$
where $0 < \alpha < 1$ and, for the moment, we do not prescribe the initial condition at $t = 0$. The general solution of (12) is
$$y(t) = \frac{t^{\alpha}}{\Gamma(1+\alpha)} + b, \quad \text{where } b \text{ is an arbitrary constant.} \tag{13}$$
This solution lies in $C[0,T] \cap C^1(0,T]$ but not in $C^1[0,T]$. This implies that standard techniques for integer-derivative problems, which require that $y \in C^1[0,T]$ (or a higher degree of regularity), cannot be used here without some modification. In particular, one cannot perform a Taylor series expansion of the solution around $t = 0$ because $y'(0)$ does not exist.
What about the initial condition? If we prescribe a condition of the form $y(0) = y_0$ we get $b = y_0$ in (13), but the solution is still not in $C^1[0,T]$. One might hope that a Neumann-type condition of the form $y'(0) = 0$ would control or eliminate the singularity in the solution, but a consideration of (13) shows that it is impossible to enforce such a condition; that is, the problem ${}^{C}\!D_0^{\alpha} y(t) = 1$ on $(0,T]$ with $y'(0) = 0$ has no solution. This seems surprising until we recall a basic property of the Caputo derivative from ([1], Lemma 3.11): if $m - 1 < \beta < m$ for some positive integer $m$ and $z \in C^m[0,T]$, then $\lim_{t \to 0} {}^{C}\!D_0^{\beta} z(t) = 0$. Hence, if in (12) one has $y \in C^1[0,T]$, then taking the limit as $t \to 0$ in (12) we get $0 = 1$, which is impossible. That is, no solution $y$ of (12) can lie in $C^1[0,T]$.
One can present this finding in another way: for the problem ${}^{C}\!D_0^{\alpha} y(t) = f(t)$ on $(0,T]$ with $f \in C[0,T]$, if the solution $y \in C^1[0,T]$, then one must have $f(0) = 0$. This result is a special case of ([1], Theorem 6.26).
Remark 1.
For the problem ${}^{C}\!D_0^{\alpha} y(t) = f(t)$ on $(0,T]$ with $0 < \alpha < 1$, if one wants more smoothness of the solution $y$ on the closed interval $[0,T]$, then one must impose further conditions on the data: by ([1], Theorem 6.27), for each positive integer $m$, one has $y \in C^m[0,T]$ if and only if $0 = f(0) = f'(0) = \dots = f^{(m-1)}(0)$.
Conditions such as $f(0) = 0$ (and the even stronger conditions listed in Remark 1) impose an artificial restriction on the data $f$ that should be avoided. Thus we continue by looking carefully at the consequences of dealing with a solution of limited smoothness.
Returning to (12) and imposing the initial condition $y(0) = b$, the unique solution of the problem is given by (13), where $b$ is now fixed. Most numerical methods for integer-derivative initial value problems are based on the premise that on any small mesh interval $[t_i, t_{i+1}]$, the unknown solution can be approximated to a high degree of accuracy by a polynomial of suitable degree. But is this true of the function (13)? We now investigate this question.
Consider the interval $[0, h]$, where $h = t_1$. This is the mesh interval where the solution (13) is worst behaved.
Lemma 1.
Let $\alpha \in (0,1)$. Consider the approximation of $t^\alpha$ by a linear polynomial $c_0 + c_1 t$ on the interval $[0,h]$. Suppose this approximation is uniformly $O(h^\beta)$ accurate on $[0,h]$ for some fixed $\beta > 0$. Then one must have $\beta \le \alpha$.
Proof. 
Our hypothesis is that $|t^\alpha - (c_0 + c_1 t)| \le C h^\beta$ for all $t \in [0,h]$ and some constant $C$ that is independent of $h$ and $t$. Consider the values $t = 0$, $t = h/2$ and $t = h$ in this inequality: we get
$$0 - c_0 = O(h^\beta), \quad (h/2)^\alpha - (c_0 + c_1 h/2) = O(h^\beta), \quad h^\alpha - (c_0 + c_1 h) = O(h^\beta).$$
The first equation gives $c_0 = O(h^\beta)$. Hence the other equations give $(h/2)^\alpha - c_1 h/2 = O(h^\beta)$ and $h^\alpha - c_1 h = O(h^\beta)$. Eliminate $c_1$ by multiplying the first of these by 2 and then subtracting it from the other; this yields $h^\alpha - 2(h/2)^\alpha = O(h^\beta)$. But this cannot be true unless $\beta \le \alpha$, since the left-hand side is simply a nonzero multiple of $h^\alpha$ because $\alpha \ne 1$.  □
Lemma 1 says that the approximation of $t^\alpha$ on $[0,h]$ by any linear polynomial is at best $O(h^\alpha)$. But the order of approximation $O(h^\alpha)$ of $t^\alpha$ on $[0,h]$ is also achieved by the constant polynomial 0. That is: using a linear polynomial to approximate $t^\alpha$ on $[0,h]$ does not give an essentially better result than using a constant polynomial. In a similar way one can show that using polynomials of higher degree does not improve the situation: the order of approximation of $t^\alpha$ on $[0,h]$ is still only $O(h^\alpha)$. This is a warning that when solving typical fractional-derivative problems, high-degree polynomials may be no better than low-degree polynomials, unlike the classical integer-derivative situation.
One can generalize Lemma 1 to any non-integer $\alpha > 0$, obtaining the same result via the same argument. Furthermore, our investigation of the simple problem (12) can be readily generalized to the much more general problem (6); see ([1], Section 6.4).
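The scaling argument behind Lemma 1 can also be seen numerically. In the sketch below (assuming NumPy; illustrative), the substitution $t = hs$ shows that the fitting error on $[0,h]$ is exactly $h^\alpha$ times a degree-dependent constant, and that raising the degree shrinks the constant only marginally:

```python
import numpy as np

# Rescaling t = h*s maps polynomials on [0, h] to polynomials on [0, 1] and
# t^alpha to h^alpha * s^alpha, so the best uniform error on [0, h] is exactly
# h^alpha times a fixed constant: no polynomial degree beats the O(h^alpha) rate.
alpha = 0.5
s = np.linspace(0.0, 1.0, 20001)
for deg in (1, 2, 3, 5):
    c = np.polyfit(s, s**alpha, deg)             # (near-)best fit of s^alpha on [0, 1]
    const = np.max(np.abs(np.polyval(c, s) - s**alpha))
    print(f"degree {deg}: error on [0,h] ~ {const:.3f} * h^{alpha}")
# the constant decreases only slowly with the degree, so raising the degree
# improves nothing essential: the rate stays h^alpha
```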

Implications for the Construction of Difference Schemes

The discussion earlier in Section 4 implies that, to construct higher-order difference schemes for typical solutions of problems such as (12) and (6), one must use non-classical schemes, since the classical schemes are constructed under the assumption that approximation by higher-order polynomials gives greater accuracy. The same idea is developed at length in [29], one of whose results we now present.
Note: although [29] discusses only boundary value problems, an inspection reveals that its arguments and results are also valid (mutatis mutandis) for initial value problems such as (6) when $f = f(t)$, i.e., when the problem (6) is linear.
Let $\alpha > 0$ be fixed, with $\alpha$ not an integer. Consider the problem $D^\alpha y = f$ on $[0,T]$ with $y(0) = 0$. Assume that the mesh on $[0,T]$ is equispaced with diameter $h$, i.e., $t_i = ih$ for $i = 0, 1, \dots, N$. Suppose that the difference scheme used to solve $D^\alpha y = f$ at each point $t_i$ for $i > 0$ is $\sum_{j=0}^{i} a_{ij}\, y_j^N = f(t_i)$. It is reasonable to assume that $|a_{ij}| = O(h^{-\alpha})$ for all $i$ and $j$, since we are approximating a derivative of order $\alpha$ (one can check that almost all schemes proposed for this problem have this property).
We have the following variant of ([29], Theorem 3.3).
Theorem 1.
Assume that our scheme achieves order of convergence $p$ for some $p > \alpha$ when $f(t) = Ct^k$ for each $k \in \{0, 1, \dots, \lceil p - \alpha \rceil - 1\}$. Then for each fixed positive integer $i$, the coefficients of the scheme must satisfy the following relationship:
$$\lim_{h \to 0} h^{\alpha} \sum_{j=0}^{i} j^{k+\alpha}\, a_{ij} = \frac{i^k\,\Gamma(\alpha + k + 1)}{\Gamma(k+1)} \quad \text{for } k = 0, 1, \dots, \lceil p - \alpha \rceil - 1. \tag{14}$$
Proof. 
Fix $k \in \{0, 1, \dots, \lceil p - \alpha \rceil - 1\}$. This implies that $k < p - \alpha$. Choose for simplicity
$$f(t) = \frac{\Gamma(k + \alpha + 1)}{\Gamma(k+1)}\, t^k.$$
Then the true solution of our initial value problem is $y(t) = t^{k+\alpha}$. Fix a positive integer $i$. Then
$$\sum_{j=0}^{i} a_{ij}\, y_j^N = f(t_i) = \frac{\Gamma(k+\alpha+1)}{\Gamma(k+1)}\, (ih)^k.$$
Hence, using the hypothesis that our scheme achieves order of convergence $p$ and that $|a_{ij}| = O(h^{-\alpha})$,
$$\lim_{h\to 0} h^{\alpha} \sum_{j=0}^{i} j^{k+\alpha}\, a_{ij} = \lim_{h\to 0} h^{-k} \sum_{j=0}^{i} a_{ij}\, y(t_j) = \lim_{h\to 0} h^{-k} \left[ \frac{\Gamma(k+\alpha+1)}{\Gamma(k+1)}\, (ih)^k + \sum_{j=0}^{i} a_{ij} \bigl( y(t_j) - y_j^N \bigr) \right] = \lim_{h\to 0} \left[ \frac{\Gamma(k+\alpha+1)}{\Gamma(k+1)}\, i^k + O(h^{p-\alpha-k}) \right] = \frac{\Gamma(k+\alpha+1)}{\Gamma(k+1)}\, i^k,$$
since $k < p - \alpha$.  □
Theorem 1 implies that schemes that fail to satisfy (14) cannot achieve an order of convergence greater than $O(h^\alpha)$ at each mesh point. (This is consistent with the approximation theory result of Lemma 1.) For example, in the case $0 < \alpha < 1$, it follows from Theorem 1 that the well-known L1 scheme is at best $O(h^\alpha)$ accurate.
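This can be observed numerically: the following sketch (assuming NumPy; our own illustrative code) solves problem (12) with $y(0) = 0$ by the L1 scheme of Section 3.2 and measures the maximum nodal error, which decays like $h^\alpha$ rather than $h^{2-\alpha}$ because the exact solution $t^\alpha/\Gamma(1+\alpha)$ is not smooth at the origin:

```python
import numpy as np
from math import gamma

def solve_l1(alpha, T, N):
    """Solve C D^alpha y = 1, y(0) = 0 (problem (12)) by the L1 scheme on a
    uniform mesh; a minimal sketch for 0 < alpha < 1."""
    h = T / N
    b = np.arange(1, N + 1)**(1 - alpha) - np.arange(N)**(1 - alpha)   # b_k
    y = np.zeros(N + 1)
    c = gamma(2 - alpha) * h**alpha
    for n in range(1, N + 1):
        # L1 discretization: sum_{j=0}^{n-1} b_{n-1-j} (y_{j+1}-y_j) = Gamma(2-alpha) h^alpha
        hist = np.dot(b[n - 1:0:-1], np.diff(y[:n]))   # terms j = 0, ..., n-2
        y[n] = y[n - 1] + (c - hist) / b[0]
    return y

alpha, T = 0.4, 1.0
for N in (64, 128, 256, 512):
    y = solve_l1(alpha, T, N)
    t = np.linspace(0.0, T, N + 1)
    print(N, np.max(np.abs(y - t**alpha / gamma(1 + alpha))))
# halving h reduces the maximum error only by a factor of about 2^alpha,
# i.e., the scheme is O(h^alpha) accurate, in line with Theorem 1
```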
Remark 2.
To avoid the consequences of results such as Theorem 1, one can impose data restrictions such as f ( 0 ) = 0 . This is discussed in ([29], Section 5), where theoretical and experimental results show an improvement in the accuracy of standard difference schemes, but only for a restricted class of problems.

5. Failed Approaches to Treat Non-Locality

Non-locality is one of the major features of fractional-order operators. Indeed, fractional integrals and derivatives are often introduced as a mathematical formalism with the primary purpose of encompassing hereditary effects in the modeling of real-life phenomena when theoretical or experimental observations suggest that the effects of external actions do not propagate instantaneously but depend on the history of the system.
On the one hand, non-locality is a very attractive feature that has driven most of the interest and success of the fractional calculus; on the other hand, non-locality introduces severe computational difficulties that researchers try to overcome in different ways.
Unfortunately, some attempts to treat non-locality are unreliable and lead to wrong results. This is the case for the naive implementation of the “finite memory principle”, which consists in simply neglecting a large portion of the solution history; since more sophisticated and accurate approaches can, however, be devised on the basis of this technique, we postpone its discussion to Section 6.
We have also to mention methods based on some kind of fractional Taylor expansion of the solution, such as
$$y(t) = \sum_{k=0}^{\infty} Y_k\, (t - t_0)^{k\alpha},$$
where the coefficients $Y_k$ are determined by some suitable numerical technique.
When solving integer-order differential equations, it is possible to use Taylor expansions to approximate the solution at a given point $t_1$ and hence reformulate the same expansion by moving the origin to the new point $t_1$, thus generating a step-by-step method in which the approximation at $t_{n+1}$ is evaluated on the basis of the approximation at $t_n$ (or at additional previous points).
With fractional-order equations, instead, the above expansion holds only with respect to the point $t_0$ (the initial or starting point of the fractional differential operator) and it is not possible to generate a step-by-step method. Expansions of this type are therefore able to provide an accurate approximation only locally, i.e., very close to the starting point $t_0$; consequently, as discussed in [30], methods based on these expansions are usually unsuitable for FDEs.
Another failed approach is based on an attempt to exploit the difference between $y(t_{n+1})$ and $y(t_n)$ in the integral formulation (7): rewrite the solution at $t_{n+1}$ as some increment of the solution at $t_n$, i.e.,
$$y(t_{n+1}) = y(t_n) + G_n(t, y(t)), \tag{15a}$$
then approximate the increment
$$G_n(t, y(t)) = \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t_{n+1}} (t_{n+1} - u)^{\alpha-1} f(u, y(u))\,du - \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t_n} (t_n - u)^{\alpha-1} f(u, y(u))\,du \tag{15b}$$
by replacing the vector field $f(t, y(t))$ in both integrals of (15b) by its (first-order) interpolating polynomial at the grid points $t_{n-1}$ and $t_n$. Methods of this kind read as
$$y_{n+1} = y_n + P_n(y_{n-1}, y_n), \tag{16}$$
with $P_n$ a known function obtained by standard interpolation techniques. Approaches of this kind are called two-step Adams–Bashforth methods, and they attract researchers since they apparently transform the non-local problem into a local one (and thus a difficult problem into a much easier one). In (15b), however, $G_n(t, y(t))$ is still a non-local term; these methods are strangely becoming quite popular despite the fact that, as discussed in [31], they are usually unreliable, because in most cases they attempt to approximate the (implicitly) non-local contribution $G_n(t, y(t))$ by some purely local term.
Using interpolation at the points $t_{n-1}$ and $t_n$ to approximate $f(t, y(t))$ over the much larger intervals $[t_0, t_n]$ and $[t_0, t_{n+1}]$ is completely inappropriate. It is well known that polynomial interpolation may offer accurate approximations within the interval of the data points, in this case in $[t_{n-1}, t_n]$; but outside this interval (where an extrapolation is made instead of an interpolation), the approximation becomes more and more inaccurate as the integration intervals $[t_0, t_n]$ and $[t_0, t_{n+1}]$ in (15b) become larger and larger, i.e., as the integration proceeds and $n$ increases.
The consequence is that completely untrustworthy results must be expected from methods based on this idea.
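A small experiment (assuming NumPy; the choice $f(u) = \sin u$ is purely illustrative) makes this precise:

```python
import numpy as np

# Why (16) fails: the linear interpolant of f at t_{n-1}, t_n is accurate inside
# [t_{n-1}, t_n] but useless as a global replacement for f over [t_0, t_n], which
# is what the increment (15b) actually requires.
f = lambda u: np.sin(u)
h, n = 0.05, 100
t = h * np.arange(n + 1)
# linear interpolant through (t_{n-1}, f(t_{n-1})) and (t_n, f(t_n))
p = lambda u: f(t[n]) + (u - t[n]) * (f(t[n]) - f(t[n - 1])) / h

u_in = np.linspace(t[n - 1], t[n], 200)     # interpolation region
u_all = np.linspace(t[0], t[n], 200)        # the region (15b) integrates over
print(np.max(np.abs(f(u_in) - p(u_in))))    # O(h^2), tiny
print(np.max(np.abs(f(u_all) - p(u_all))))  # O(1), does not shrink as h -> 0
```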
Note that the fundamental flaw of this approach is not the decomposition (15) but the local (and hence inappropriate) way (16) in which the history is handled. Indeed, it is possible to construct technically correct and efficient algorithms on the basis of (15), for example if one treats the increment term (15b) by a numerical method that is cheaper in computational cost than the method used for the local term [32].

6. Some Approaches for the Efficient, and Reliable, Treatment of the Memory Term

The non-locality of the fractional-order operator means that it is necessary to treat the memory term in an efficient way. This term is commonly identified as the source of a computational complexity which, especially in problems of large size, requires adequate strategies in order to keep the computational cost at a reasonable level, and indeed this observation has led to many investigations of (more or less successful) approaches to reduce the computational cost. It should be noted, however, that the high number of arithmetic operations is not the only potential difficulty that the memory term introduces. There is another, more fundamental issue, which seems to have attracted much less attention: the history of the process not only needs to be taken into account in the computation but, in order to be properly handled, also needs to be stored in the computer's memory. While the required amount of memory is usually easily available in algorithms for solving ordinary differential equations, the memory demand may be too high for efficient handling in the case of, e.g., time-fractional partial differential equations where finite element techniques are used to discretize the spatial derivatives.
Most finite-difference methods for FDEs require at each time step the evaluation of some convolution sum of the form
$$y_n = \phi_n + \sum_{j=0}^{n} c_j\, y_{n-j} \quad \text{or} \quad y_n = \phi_n + \sum_{j=0}^{n} c_j\, f(t_{n-j}, y_{n-j}), \qquad n = 1, 2, \dots, N, \tag{17}$$
where $\phi_n$ is a term which mainly depends on the initial conditions or on other known information.
A naive straightforward evaluation of (17) has a computational cost proportional to $O(N^2)$ and, when integration with a small step size or over a large integration interval is required, the value of $N$ can be extremely large, leading to prohibitive computational costs.
For this reason different approaches for a fast, efficient and reliable treatment of the memory term in non-local problems have been devised. We provide here a short description of some of the most interesting methods of this type. The influence of these approaches on the memory requirements will be addressed as well.

6.1. Nested Mesh Techniques

Several different concepts can be subsumed under the heading of so-called nested meshes. The general idea is based on the observation that the convolution sum in Equation (17) stems from a discretization of a fractional integral or differential operator that uses all the previous grid points as nodes. One can then ask whether it is really necessary to use all these nodes or whether one could save effort by including only a subset of them, using a second, coarser mesh, i.e., a mesh nested inside the original one.

6.1.1. The Finite Memory Principle

The simplest idea in this class is the finite memory principle ([5], §7.3). It is based on defining a constant $\tau > 0$, the so-called memory length, and replacing (for $t > t_0 + \tau$) the memory integral over the interval $[t_0, t]$ by the integral over $[t - \tau, t]$ with the same integrand. Technically speaking, this amounts to “forgetting” the entire history of the process that is more than $\tau$ units of time in the past, so the memory has a finite and fixed length $\tau$ instead of the variable length $t - t_0$ that may, in a long running process, be very much longer. From an algorithmic point of view, the finite memory method truncates the convolution sum in Equation (17) so that only the $\nu$ most recent time steps, for some fixed $\nu$, enter the sum. This has a number of significant advantages:
  • The computational complexity of the $n$th time step is reduced from $O(n)$ to $O(1)$. Therefore, the combined total complexity of the overall method with $N$ time steps is reduced from $O(N^2)$ to $O(N)$.
  • At no point in time does one need to access the part of the process history that is more than $\nu$ time steps in the past. Therefore, all those previous time steps can be removed from the active memory, and the memory requirement also decreases from $O(N)$ to $O(1)$.
Unfortunately, this idea also has severe drawbacks. Specifically, it has been shown in [33] that the convergence order of the underlying discretization technique is lost completely. In other words, one cannot prove that the algorithm converges as the (maximal) step size goes to 0. Therefore, the method is not recommended for practical use.
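To see both the cheapness and the danger of the truncation, consider the following toy computation (assuming NumPy; the algebraically decaying coefficients and the history data are illustrative choices, not from [33]):

```python
import numpy as np

# Sketch of the finite memory principle applied to a discrete convolution of the
# form (17), with Grunwald-Letnikov-type coefficients c_j ~ j^(-alpha-1) as an
# illustrative kernel (an assumption for this demo, not the only possible choice).
alpha, N, nu = 0.6, 4000, 200
j = np.arange(1, N + 1)
c = j**(-alpha - 1.0)                 # slowly decaying memory coefficients
g = np.cos(0.01 * j)                  # some history data g_1, ..., g_N

full = np.dot(c, g[::-1])             # sum over the whole history: cost O(N)
trunc = np.dot(c[:nu], g[::-1][:nu])  # keep only the nu most recent terms: cost O(1)
print(abs(full - trunc))              # the neglected tail decays only like nu^(-alpha),
                                      # so the truncation error cannot be made small
                                      # without making nu (and the cost) large
```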

6.1.2. Logarithmic Memory

To overcome the shortcomings of the finite memory principle, two related but not identical methods, both of which are also based on the nested mesh concept, have been developed in [33,34]. The common idea of both these approaches is the way in which the distant part of the memory is treated. Rather than ignoring it completely as the finite memory principle does, they do sample it, but on a coarser mesh; indeed the fundamental principle is to introduce not just one coarsening level but to use, say, the step size $h$ on the most recent part of the memory, step size $wh$ (with some parameter $w > 1$) on the adjacent region, $w^2 h$ on the next region, etc. The main difference between the two approaches of [33,34] then lies in the way in which the transition points from one mesh size to the next are chosen.
Specifically, as indicated in Figure 1, the method of Ford and Simpson [33] starts at the current time and fills subintervals of prescribed lengths from right to left with appropriately spaced mesh points. This leads to a reduction of the computational cost to $O(N \log N)$ while retaining the convergence order of the underlying scheme [33]. However, as indicated in Figure 1, it is common that the left end point of the leftmost coarsely subdivided interval does not match the initial point. In this case, one can either fill the remaining subinterval at the left end of the full interval with a fine mesh (which increases the computational cost but also reduces the error) or simply ignore the contribution from this subinterval (which reduces the computational complexity but slightly increases the error; however, since the memory length still grows with the number of steps, this does not imply the complete loss of accuracy observed for the finite memory principle). In either case, grid points from the fine mesh that are not currently used in the nested mesh may become active again in future steps. Therefore, all previous grid points need to be kept in memory, so the required amount of memory space remains at $O(N)$.
In contrast, the approach of Diethelm and Freed [34] starts to fill the basic interval from left to right, i.e., it begins with the subinterval with the coarsest mesh and then moves to the finer-mesh regions. The final result is also a method with an $O(N \log N)$ computational cost, and with the same convergence order as the Ford-Simpson method; but its selection strategy for grid points implies that points that are inactive in the current step will never become active again in future steps, and consequently the history data for these inactive points can be eliminated from the main memory. This reduces the memory requirements to only $O(\log N)$.
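The following sketch (plain Python; the splitting rule, names and parameters are our own simplified assumptions, not the exact constructions of [33,34]) shows how a geometrically coarsened covering of the history produces only $O(\log n)$ blocks:

```python
# Logarithmic memory idea: split the history [0, t_n] into blocks whose step
# sizes grow geometrically away from the current time.
def memory_blocks(n, h, w=2, block_len=10):
    """Return (t_start, t_end, step) triples covering [0, n*h], finest near t_n."""
    blocks, t_hi, step = [], n * h, h
    while t_hi > 0:
        t_lo = max(0.0, t_hi - block_len * step)
        blocks.append((t_lo, t_hi, step))
        t_hi, step = t_lo, step * w     # coarsen by the factor w per block
    return blocks

for b in memory_blocks(n=10**6, h=1e-3, w=2, block_len=10):
    print(b)                            # O(log n) blocks instead of O(n) grid points
```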

6.2. A Method Based on the Fast Fourier Transform Algorithm

An effective approach for the fast evaluation of the convolution sums in (17) was proposed in [35,36]. The main idea is to split each of these sums in a way that enables the exploitation of the fast Fourier transform (FFT) algorithm. To provide a concise description, let us introduce the notations
$$T_p(n) = \sum_{j=p}^{n} c_{n-j}\, g_j, \qquad S_{p,q}(n) = \sum_{j=p}^{q} c_{n-j}\, g_j, \qquad n \ge p,$$
where g j = y j or g j = f ( t j , y j ) according to the formula used in (17). Thus the numerical methods described by (17) can be recast as
$$y_n = \phi_n + T_0(n), \qquad n = 1, 2, \dots, N.$$
The algorithm described in [35,36] is based on splitting $T_0(n)$ into one or more partial sums of type $S_{p,q}(n)$ and just one final convolution sum $T_p(n)$ of a maximum (fixed) length $r$. Thus, the computation is simply initialized as
$$T_0(n) = \sum_{j=0}^{n} c_{n-j}\, g_j, \qquad n \in \{1, 2, \dots, r-1\},$$
and the following $r$ values of $T_0(n)$ are split into the two terms
$$T_0(n) = S_{0, r-1}(n) + T_r(n), \qquad n \in \{r, r+1, \dots, 2r-1\}.$$
Similarly, for the computation of the next $2r$ values, $T_0(n)$ is split according to
$$T_0(n) = \begin{cases} S_{0,2r-1}(n) + T_{2r}(n) & n \in \{2r, 2r+1, \dots, 3r-1\}, \\ S_{0,2r-1}(n) + S_{2r,3r-1}(n) + T_{3r}(n) & n \in \{3r, 3r+1, \dots, 4r-1\}, \end{cases}$$
and the further $4r$ summations are split according to
$$T_0(n) = \begin{cases} S_{0,4r-1}(n) + T_{4r}(n) & n \in \{4r, 4r+1, \dots, 5r-1\}, \\ S_{0,4r-1}(n) + S_{4r,5r-1}(n) + T_{5r}(n) & n \in \{5r, 5r+1, \dots, 6r-1\}, \\ S_{0,4r-1}(n) + S_{4r,6r-1}(n) + T_{6r}(n) & n \in \{6r, 6r+1, \dots, 7r-1\}, \\ S_{0,4r-1}(n) + S_{4r,6r-1}(n) + S_{6r,7r-1}(n) + T_{7r}(n) & n \in \{7r, 7r+1, \dots, 8r-1\}, \end{cases}$$
and this process is continued until all the terms $T_0(n)$, for $n \le N$, are evaluated.
Note that in the above splittings the length $\ell(p,q) = q - p + 1$ of each sum $S_{p,q}$ is always a multiple of $r$ by a power of 2 (i.e., the possible lengths of $S_{p,q}(n)$ are $r$, $2r$, $4r$, $8r$ and so on).
For clarity, the diagram in Figure 2 illustrates the way in which the computation on the main triangle $\mathcal{T}_0 = \{(n,j) : 0 \le j \le n \le N\}$ is split into partial sums identified by the (red-labeled) squares $S_{p,q} = \{(n,j) : q + 1 \le n \le q + \ell(p,q), \; p \le j \le q\}$ and final blocks denoted by the (blue-labeled) triangles $T_p = \{(n,j) : p \le j \le n \le p + r - 1\}$.
Each of the final blocks $T_{\ell r}(n)$, $n = \ell r, \ell r + 1, \dots, (\ell + 1)r - 1$, is computed by direct summation requiring $r(r+1)/2$ floating-point operations. The evaluation of the partial sums $S_{p,q}(n)$ can instead be performed by the FFT algorithm (see [37] for a comprehensive description), which requires a number of floating-point operations proportional to $2\ell \log_2 2\ell$, with $\ell = \ell(p,q)$ the length of each partial sum $S_{p,q}(n)$, since each length is $r$ times a power of 2.
In the optimal case in which both $r$ and $N$ are powers of 2, each partial sum $S_{p,q}$ that must be computed is listed in Table 1, together with its length, the number of such sums and their computational cost.
Furthermore, $N/r$ final blocks $T_{\ell r}$, each of length $r$, are also computed, in $r(r+1)/2$ floating-point operations each, and hence the total amount of floating-point operations is proportional to
$$N \log_2 N + 2\,\frac{N}{2} \log_2 \frac{N}{2} + 4\,\frac{N}{4} \log_2 \frac{N}{4} + \dots + s\,\frac{N}{s} \log_2 \frac{N}{s} + \frac{N}{r}\,\frac{r(r+1)}{2} = \sum_{j=0}^{\log_2 s} N \log_2 \frac{N}{2^j} + N\,\frac{r+1}{2} = O\!\bigl( N (\log_2 N)^2 \bigr), \qquad s = \frac{N}{2r},$$
which turns out, for sufficiently large $N$, to be significantly smaller than the $O(N^2)$ operation count required by the direct summation of all the $T_0(n)$.
Although the whole procedure may appear complicated and requires some extra effort in coding, it turns out to be quite efficient, since it can be applied to different methods of the form (17) and does not affect their accuracy. This preservation of accuracy is because the technique does take into account the entire history of the process, in the same way as the straightforward approach mentioned above whose computational cost is $O(N^2)$. Thus, one does need to keep the entire history data in active memory, but one avoids the requirement of using special meshes. All the Matlab codes for FDEs described in [10,21,38], and freely available on the Mathworks website [39], make use of this algorithm.
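The heart of the speed-up is that, once a block of history values is complete, a whole batch of partial sums $S_{p,q}(n)$ can be produced by a single FFT-based convolution. The following sketch (assuming NumPy/SciPy; sizes and data are illustrative) verifies this against direct summation:

```python
import numpy as np
from scipy.signal import fftconvolve

# Once the block of values g_p, ..., g_q is known, the partial sums S_{p,q}(n)
# for a whole range of later indices n are one linear convolution, computable
# by the FFT all at once.
rng = np.random.default_rng(0)
N, p, q = 64, 0, 31
c = np.arange(1, 2 * N + 1)**(-0.4)     # some convolution coefficients c_1, c_2, ...
g = rng.standard_normal(N)

# direct evaluation of S_{p,q}(n) = sum_{j=p}^{q} c_{n-j} g_j for n = q+1, ..., q+32
direct = np.array([sum(c[n - j - 1] * g[j] for j in range(p, q + 1))
                   for n in range(q + 1, q + 33)])

# the same 32 values from one FFT-based convolution of the two blocks
conv = fftconvolve(c[: q + 32 - p], g[p : q + 1])
fast = conv[q - p : q + 32 - p]
print(np.max(np.abs(direct - fast)))    # agreement to rounding error
```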

6.3. Kernel Compression Schemes

Although the terminology “kernel compression scheme” has been introduced only recently for a few specific works [40,41,42], we use it here to describe a collection of methods that were proposed at various times by various authors and are all based on essentially the same principle: approximation of the solution of a non-local FDE by means of (possibly several) local ODEs. We provide here just the main ideas underlying this approach and we will refer the reader to the literature for a more comprehensive coverage of the subject.
Actually, these are standalone methods (usually classified as nonclassical methods [43]) and not just algorithms improving the efficiency of the treatment of the memory term; for this reason they could have been discussed in Section 3 along with the other methods for FDEs. But since one of their main achievements (and the motivation for their introduction) is to handle memory and computational issues related to the long and persistent memory of fractional-order problems, we consider it appropriate to discuss them in the present section.
For ease of presentation we consider only $0 < \alpha < 1$, but the extension to any positive $\alpha$ is only a technical matter. The basic idea starts from some integral representation of the kernel of the RL integral (1), e.g.,
$$\frac{t^{\alpha-1}}{\Gamma(\alpha)} = \frac{\sin(\alpha\pi)}{\pi} \int_0^{\infty} e^{-rt}\, r^{-\alpha}\, dr, \tag{18}$$
which, thanks to standard quadrature rules, can be approximated by exponential sums
$$\frac{t^{\alpha-1}}{\Gamma(\alpha)} = \sum_{k=1}^{K} w_k\, e^{-r_k t} + e_K(t), \tag{19}$$
where the error $e_K(t)$ and the computational complexity related to the number $K$ of nodes and weights depend on the choice among the many possible quadrature rules. When applying this approximation instead of the exact kernel in the integral formulation (7), the solution of the FDE (6) is rewritten as
$$y(t) = y_0 + \sum_{k=1}^{K} w_k \int_{t_0}^{t} e^{-r_k (t-u)} f(u, y(u))\,du + E_K(t). \tag{20}$$
Each of the integrals in (20) is actually the solution of an initial value problem:
$$\begin{cases} \dot{y}^{[k]}(t) = -r_k\, y^{[k]}(t) + f(t, y(t)), \\ y^{[k]}(t_0) = 0, \end{cases} \tag{21}$$
which can be numerically approximated by standard ODE solvers, yielding approximations $y_n^{[k]}$ on some grid $\{t_n\}$. If the quadrature rule is chosen so as to make the error $E_K(t)$ so small that it can be neglected, an approximate solution of the original FDE (6) can be obtained step-by-step as
$$y_n = y_0 + \sum_{k=1}^{K} \bar{w}_k\, y_n^{[k]},$$
where each $y_n^{[k]}$ depends only on $y_{n-1}^{[k]}$ (or on a few other previous values), according to the selected ODE solver.
In practice, a non-local problem (the FDE) with non-vanishing memory is replaced by $K$ local problems (the ODEs), each demanding a smaller computational effort, and the memory storage is restricted to $O(pK)$ if a $p$-step ODE solver is used for each of the ODEs (21).
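As a concrete, deliberately simplified illustration (Python with NumPy; all parameter choices are ad hoc assumptions rather than an optimized scheme), the following sketch discretizes (18) by a trapezoidal rule in the variable $s = \log r$ and then advances the resulting local ODEs (21) by the A-stable backward Euler method for the toy problem ${}^C\!D^\alpha y = 1$, $y(0) = 0$:

```python
import numpy as np
from math import gamma, sin, pi

# Kernel compression sketch: trapezoidal rule on (18) after substituting
# r = exp(s) gives an exponential sum of the form (19); backward Euler then
# integrates the K local ODEs (21) (A-stable, so the large r_k causing
# stiffness are harmless). Exact solution here: t^alpha / Gamma(1 + alpha).
alpha, T, N = 0.6, 2.0, 200
h = T / N

s = np.linspace(-20.0, 15.0, 701)                     # quadrature nodes in r = exp(s)
ds = s[1] - s[0]
r = np.exp(s)                                         # the r_k in (19)
w = sin(alpha * pi) / pi * np.exp((1.0 - alpha) * s) * ds   # the w_k in (19)

y_aux = np.zeros_like(r)                              # auxiliary variables y^{[k]}
max_err = 0.0
for n in range(1, N + 1):
    y_aux = (y_aux + h) / (1.0 + h * r)               # backward Euler step, f = 1
    y_n = np.dot(w, y_aux)                            # reassemble y_n from the y^{[k]}
    max_err = max(max_err, abs(y_n - (n * h)**alpha / gamma(1.0 + alpha)))
print(max_err)   # small; dominated by the first-order time stepping error
```

Only the $K$ auxiliary states are kept in memory, independently of the number of time steps.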
Obviously, the idea sketched above requires several further technical details to work properly. First, an accurate error analysis is needed to ensure that the overall error is below the target accuracy. This is a very delicate task because it involves the investigation of the interaction between the quadrature rule used to approximate the integral in (20) and the ODE solver applied to the system (21), which can be a highly nontrivial matter. Moreover, some substantial additional problems must be addressed. For instance, A-stable methods should generally be preferred when solving the system (21), since some of the $r_k > 0$ can be very large and give rise to stiff problems.
A non-negligible issue is that it is not possible to find a quadrature rule approximating (18) uniformly with respect to all relevant values of $t$, i.e., with the same accuracy for all $t \ge t_1$, where $t_1$ is the first mesh point to the right of the initial point $t_0$, or for all $t > t_0$ (in either case, the singularity at $t_0$ makes the integral quite difficult to approximate). To overcome this difficulty, several different approaches have been proposed.
In a series of pioneering works [44,45,46], where a complex contour integral
$$\frac{t^{\alpha-1}}{\Gamma(\alpha)} = \frac{1}{2\pi i} \int_{\mathcal{C}} e^{st}\, s^{-\alpha}\, ds$$
is chosen to approximate the kernel, the integration interval $[t_0, T]$ is divided into a sequence of subintervals of increasing lengths, and different quadrature rules (on different contours $\mathcal{C}$) are used in each of these intervals. While high accuracy can be obtained, this strategy is quite complicated and requires the use of more expensive complex arithmetic.
In [40,41,42] the integral in (7) is divided into history and local terms,
$$y(t) = y_0 + \underbrace{\frac{1}{\Gamma(\alpha)} \int_{t_0}^{t - \delta t} (t-u)^{\alpha-1} f(u, y(u))\,du}_{\text{history term}} + \underbrace{\frac{1}{\Gamma(\alpha)} \int_{t - \delta t}^{t} (t-u)^{\alpha-1} f(u, y(u))\,du}_{\text{local term}},$$
for a fixed $\delta t > 0$. This confines the singularity of the kernel to the local term, which can be approximated by standard methods for weakly singular integral equations (e.g., a product-integration rule) with a reduced computational cost and an insignificant memory requirement. The kernel in the history term no longer contains any singularity and can be safely approximated by (19), which now needs to hold only for $t \ge \delta t$.
To obtain the highest possible accuracy, Gaussian quadrature rules are usually preferred. A rigorous and technical error analysis is necessary to tune parameters in an optimal way. Several implementations of approaches of this kind have been proposed (e.g., see [47,48,49,50,51]) but owing to their technical nature, a comparison to decide which method is in general the most convenient is difficult; we just refer to the interesting results presented in [52].

7. Some Remarks about Fractional Partial Differential Equations

Even though this paper is essentially devoted to the numerical solution of ordinary differential equations of fractional order and the computational treatment of the associated differential and integral operators, a few comments should be made regarding numerical methods for fractional partial differential equations (PDEs).
Remark 3.
The issues discussed in Section 4 are relevant to partial differential equations also. Indeed, it is shown in [53] that imposing excessive smoothness requirements on the solutions to a partial differential equation (e.g., for the sake of simplifying the error analysis or of obtaining a higher convergence order) has drastic implications regarding the class of admissible problems; in particular, the choice of the forcing function $f(x,t)$ in a linear initial-boundary value problem will then completely determine the initial condition of the problem.
Our second remark regarding partial differential equations deals with a totally different aspect.
Remark 4.
Typical algorithms for time-fractional partial differential equations contain separate discretisation techniques with respect to the time variable and the space variable(s). A current trend is to employ a very high order method for the discretisation of the (non-fractional) differential operator with respect to the space variable. While this might seem an attractive approach at first sight, it has a number of disadvantages. Specifically, while it leads to a smaller discretisation error in the space variable, it also increases the algorithm's overall complexity and makes the understanding of its properties more difficult. This complexity would be acceptable if the overall error could be reduced significantly. But since the overall error comprises not only the error from the space discretisation but also the contribution from the time approximation, it follows that to reduce the overall error, one must force this latter component to be very small as well. As indicated above, we cannot expect to achieve a high convergence order in the time variable, so the only way to reach this goal is to choose the time step size very small (in comparison with the space mesh size). From Section 6 we conclude that a standard algorithm with a higher-than-linear complexity is likely to lead to prohibitive run times, and even if the time discretisation uses a method with a linear or almost linear complexity, this very small step size requirement will still imply a high overall cost. Therefore, the use of a high-order space discretisation in a time-fractional partial differential equation is usually inadvisable.

8. Concluding Remarks

In this paper we have tried to describe some issues related to the correct use of numerical methods for fractional-order problems. Unlike integer-order ODEs, numerical methods for FDEs are in general not taught in undergraduate courses and, very often, non-specialists are unaware of the peculiarities and major difficulties that arise in the numerical treatment of FDEs and fractional PDEs.
The availability of only a few well-organized textbooks and monographs in this field, together with the presence of many incorrect results in the literature, makes the situation even more difficult.
Some of the ideas collected in this paper were discussed in the lectures of the Training School on “Computational Methods for Fractional-Order Problems”, held in Bari (Italy) during 22–26 July 2019, and promoted by the Cost Action CA15225—Fractional-order systems: analysis, synthesis and their importance for future design.
We believe that the scientific community should make an effort to raise the level of knowledge in this field by promoting specific academic courses at a basic level and/or by organizing training schools.

Author Contributions

Formal analysis, K.D., R.G. and M.S.; Investigation, K.D., R.G. and M.S.; Writing—original draft, K.D., R.G. and M.S.; Writing—review & editing, K.D., R.G. and M.S. All authors have read and agreed to the published version of the manuscript.

Funding

The cooperation which has led to this article was initiated and promoted within the COST Action CA15225, a network supported by COST (European Cooperation in Science and Technology). The work of Roberto Garrappa is also supported under a GNCS-INdAM 2019 Project. The work of Kai Diethelm was also supported by the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS17096A. The research of Martin Stynes is supported in part by the National Natural Science Foundation of China under grant NSAF U1930402.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CM    Complete monotonicity
FDE   Fractional differential equation
FLMM  Fractional linear multistep method
LMM   Linear multistep method
ODE   Ordinary differential equation
PDE   Partial differential equation

References

  1. Diethelm, K. The Analysis of Fractional Differential Equations; Lecture Notes in Mathematics; Springer: Berlin, Germany, 2010; Volume 2004, p. viii+247. [Google Scholar]
  2. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; North-Holland Mathematics Studies; Elsevier Science B.V.: Amsterdam, The Netherlands, 2006; Volume 204, p. xvi+523. [Google Scholar]
  3. Mainardi, F. Fractional Calculus and Waves in Linear Viscoelasticity; Imperial College Press: London, UK, 2010; p. xx+347. [Google Scholar]
  4. Miller, K.S.; Ross, B. An Introduction to the Fractional Calculus and Fractional Differential Equations; A Wiley-Interscience Publication; John Wiley & Sons, Inc.: New York, NY, USA, 1993; p. xvi+366. [Google Scholar]
  5. Podlubny, I. Fractional Differential Equations; Mathematics in Science and Engineering; Academic Press Inc.: San Diego, CA, USA, 1999; Volume 198, p. xxiv+340. [Google Scholar]
  6. Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives; Gordon and Breach Science Publishers: Yverdon, Switzerland, 1993; p. xxxvi+976. [Google Scholar]
  7. Young, A. Approximate product-integration. Proc. R. Soc. Lond. Ser. A 1954, 224, 552–561. [Google Scholar]
  8. Young, A. The application of approximate product integration to the numerical solution of integral equations. Proc. R. Soc. Lond. Ser. A 1954, 224, 561–573. [Google Scholar]
  9. Diethelm, K.; Ford, N.J.; Freed, A.D. A predictor-corrector approach for the numerical solution of fractional differential equations. Nonlinear Dyn. 2002, 29, 3–22. [Google Scholar] [CrossRef]
  10. Garrappa, R. On linear stability of predictor-corrector algorithms for fractional differential equations. Int. J. Comput. Math. 2010, 87, 2281–2290. [Google Scholar] [CrossRef]
  11. Yan, Y.; Pal, K.; Ford, N.J. Higher order numerical methods for solving fractional differential equations. BIT Numer. Math. 2014, 54, 555–584. [Google Scholar] [CrossRef] [Green Version]
  12. Li, Z.; Liang, Z.; Yan, Y. High-order numerical methods for solving time fractional partial differential equations. J. Sci. Comput. 2017, 71, 785–803. [Google Scholar] [CrossRef] [Green Version]
  13. Dixon, J. On the order of the error in discretization methods for weakly singular second kind Volterra integral equations with nonsmooth solutions. BIT 1985, 25, 624–634. [Google Scholar] [CrossRef]
  14. Diethelm, K.; Ford, N.J.; Freed, A.D. Detailed error analysis for a fractional Adams method. Numer. Algorithms 2004, 36, 31–52. [Google Scholar] [CrossRef] [Green Version]
15. Oldham, K.B.; Spanier, J. The Fractional Calculus: Theory and Applications of Differentiation and Integration to Arbitrary Order; Academic Press: New York, NY, USA; London, UK, 1974; p. xiii+234.
16. Lynch, V.E.; Carreras, B.A.; del Castillo-Negrete, D.; Ferreira-Mejias, K.M.; Hicks, H.R. Numerical methods for the solution of partial differential equations of fractional order. J. Comput. Phys. 2003, 192, 406–421.
17. Lubich, C. Discretized fractional calculus. SIAM J. Math. Anal. 1986, 17, 704–719.
18. Lubich, C. Convolution quadrature and discretized operational calculus. I. Numer. Math. 1988, 52, 129–145.
19. Lubich, C. Convolution quadrature and discretized operational calculus. II. Numer. Math. 1988, 52, 413–425.
20. Lubich, C. Convolution quadrature revisited. BIT 2004, 44, 503–514.
21. Garrappa, R. Trapezoidal methods for fractional differential equations: theoretical and computational aspects. Math. Comput. Simul. 2015, 110, 96–112.
22. Diethelm, K.; Ford, J.M.; Ford, N.J.; Weilbeer, M. Pitfalls in fast numerical solvers for fractional differential equations. J. Comput. Appl. Math. 2006, 186, 482–503.
23. Stynes, M. Singularities. In Handbook of Fractional Calculus with Applications; De Gruyter: Berlin, Germany, 2019; Volume 3, pp. 287–305.
24. Miller, R.K.; Feldstein, A. Smoothness of solutions of Volterra integral equations with weakly singular kernels. SIAM J. Math. Anal. 1971, 2, 242–258.
25. Lubich, C. Runge-Kutta theory for Volterra and Abel integral equations of the second kind. Math. Comput. 1983, 41, 87–102.
26. Hanyga, A. A comment on a controversial issue: A generalized fractional derivative cannot have a regular kernel. Fract. Calc. Appl. Anal. 2020, 23, 211–223.
27. Giusti, A. General fractional calculus and Prabhakar’s theory. Commun. Nonlinear Sci. Numer. Simul. 2019, 83, 105114.
28. Hanyga, A. Physically acceptable viscoelastic models. In Trends in Applications of Mathematics to Mechanics; Hutter, K., Wang, Y., Eds.; Shaker Verlag: Aachen, Germany, 2005; pp. 125–136.
29. Stynes, M.; O’Riordan, E.; Gracia, J.L. Necessary conditions for convergence of difference schemes for fractional-derivative two-point boundary value problems. BIT 2016, 56, 1455–1477.
30. Sarv Ahrabi, S.; Momenzadeh, A. On failed methods of fractional differential equations: the case of multi-step generalized differential transform method. Mediterr. J. Math. 2018, 15, 149.
31. Garrappa, R. Neglecting nonlocality leads to unreliable numerical methods for fractional differential equations. Commun. Nonlinear Sci. Numer. Simul. 2019, 70, 302–306.
32. Deng, W.H. Short memory principle and a predictor-corrector approach for fractional differential equations. J. Comput. Appl. Math. 2007, 206, 174–188.
33. Ford, N.J.; Simpson, A.C. The numerical solution of fractional differential equations: Speed versus accuracy. Numer. Algorithms 2001, 26, 333–346.
34. Diethelm, K.; Freed, A.D. An efficient algorithm for the evaluation of convolution integrals. Comput. Math. Appl. 2006, 51, 51–72.
35. Hairer, E.; Lubich, C.; Schlichte, M. Fast numerical solution of nonlinear Volterra convolution equations. SIAM J. Sci. Statist. Comput. 1985, 6, 532–541.
36. Hairer, E.; Lubich, C.; Schlichte, M. Fast numerical solution of weakly singular Volterra integral equations. J. Comput. Appl. Math. 1988, 23, 87–98.
37. Henrici, P. Fast Fourier methods in computational complex analysis. SIAM Rev. 1979, 21, 481–527.
38. Garrappa, R. Numerical solution of fractional differential equations: A survey and a software tutorial. Mathematics 2018, 6, 16.
39. Garrappa, R. MathWorks Author’s Profile. Available online: https://www.mathworks.com/matlabcentral/profile/authors/2361481-roberto-garrappa (accessed on 26 January 2020).
40. Baffet, D. A Gauss-Jacobi kernel compression scheme for fractional differential equations. J. Sci. Comput. 2019, 79, 227–248.
41. Baffet, D.; Hesthaven, J.S. A kernel compression scheme for fractional differential equations. SIAM J. Numer. Anal. 2017, 55, 496–520.
42. Baffet, D.; Hesthaven, J.S. High-order accurate adaptive kernel compression time-stepping schemes for fractional differential equations. J. Sci. Comput. 2017, 72, 1169–1195.
43. Diethelm, K. An investigation of some nonclassical methods for the numerical approximation of Caputo-type fractional derivatives. Numer. Algorithms 2008, 47, 361–390.
44. López-Fernández, M.; Lubich, C.; Schädle, A. Adaptive, fast, and oblivious convolution in evolution equations with memory. SIAM J. Sci. Comput. 2008, 30, 1015–1037.
45. Lubich, C.; Schädle, A. Fast convolution for nonreflecting boundary conditions. SIAM J. Sci. Comput. 2002, 24, 161–182.
46. Schädle, A.; López-Fernández, M.; Lubich, C. Fast and oblivious convolution quadrature. SIAM J. Sci. Comput. 2006, 28, 421–438.
47. Banjai, L.; López-Fernández, M. Efficient high order algorithms for fractional integrals and fractional differential equations. Numer. Math. 2019, 141, 289–317.
48. Fischer, M. Fast and parallel Runge-Kutta approximation of fractional evolution equations. SIAM J. Sci. Comput. 2019, 41, A927–A947.
49. Jiang, S.; Zhang, J.; Zhang, Q.; Zhang, Z. Fast evaluation of the Caputo fractional derivative and its applications to fractional diffusion equations. Commun. Comput. Phys. 2017, 21, 650–678.
50. Li, J.R. A fast time stepping method for evaluating fractional integrals. SIAM J. Sci. Comput. 2010, 31, 4696–4714.
51. Zeng, F.; Turner, I.; Burrage, K. A stable fast time-stepping method for fractional integral and derivative operators. J. Sci. Comput. 2018, 77, 283–307.
52. Guo, L.; Zeng, F.; Turner, I.; Burrage, K.; Karniadakis, G.E.M. Efficient multistep methods for tempered fractional calculus: Algorithms and simulations. SIAM J. Sci. Comput. 2019, 41, A2510–A2535.
53. Stynes, M. Too much regularity may force too much uniqueness. Fract. Calc. Appl. Anal. 2016, 19, 1554–1562.
Figure 1. Full mesh (top) and nested meshes proposed in [33] (center) and in [34] (bottom). The meshes are shown for the time instant $t = 21$ and the basic step size $h = 1/10$.
Figure 2. Splitting of the computation of $T_0(n)$ into partial sums $S_{p,q}$ (red-labeled squares) and final blocks $T_p$ (blue-labeled triangles).
Table 1. Partial sums, their length, number and computational cost for the evaluation of $T_0(N)$.
Partial Sums | Len. | No. | Cost
$S_{0,\frac{N}{2}-1}$ | $\frac{N}{2}$ | $1$ | $O\bigl(N \log_2 N\bigr)$
$S_{0,\frac{N}{4}-1},\ S_{\frac{N}{2},\frac{3N}{4}-1}$ | $\frac{N}{4}$ | $2$ | $O\bigl(\frac{N}{2} \log_2 \frac{N}{2}\bigr)$
$S_{0,\frac{N}{8}-1},\ S_{\frac{N}{4},\frac{3N}{8}-1},\ S_{\frac{N}{2},\frac{5N}{8}-1},\ S_{\frac{3N}{4},\frac{7N}{8}-1}$ | $\frac{N}{8}$ | $4$ | $O\bigl(\frac{N}{4} \log_2 \frac{N}{4}\bigr)$
$S_{0,\frac{N}{16}-1},\ S_{\frac{N}{8},\frac{3N}{16}-1},\ S_{\frac{N}{4},\frac{5N}{16}-1},\ S_{\frac{3N}{8},\frac{7N}{16}-1},\ S_{\frac{N}{2},\frac{9N}{16}-1},\ S_{\frac{5N}{8},\frac{11N}{16}-1},\ S_{\frac{3N}{4},\frac{13N}{16}-1},\ S_{\frac{7N}{8},\frac{15N}{16}-1}$ | $\frac{N}{16}$ | $8$ | $O\bigl(\frac{N}{8} \log_2 \frac{N}{8}\bigr)$
⋮ | ⋮ | ⋮ | ⋮
$S_{0,r-1},\ S_{2r,3r-1},\ S_{4r,5r-1},\ S_{6r,7r-1},\ S_{8r,9r-1},\ \ldots$ | $r$ | $s = \frac{N}{2r}$ | $O\bigl(\frac{N}{s} \log_2 \frac{N}{s}\bigr)$
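To make the block structure of Table 1 concrete, the following minimal Python sketch (an editorial illustration, not code from the paper or from [33,34]) enumerates the partial sums $S_{p,q}$ row by row for a power-of-two $N$ and a base block length $r$. The helper name `partial_sum_blocks` and the start-index rule $p = 2j \cdot \mathrm{len}$ are assumptions inferred from the pattern of the first four rows; the Cost column is read as the FFT cost of one partial sum in that row, i.e., an FFT of length $N/s$ (twice the block length).

```python
import math

def partial_sum_blocks(N, r=1):
    """Enumerate the rows of Table 1: row k holds 2^(k-1) partial sums
    S_{p,q}, each covering a block of length N / 2^k; the j-th sum of a
    row is assumed to start at p = 2 * j * length (pattern of rows 1-4).
    Stops once the block length falls below the base length r."""
    k = 1
    while N // 2 ** k >= r:
        length = N // 2 ** k                     # the "Len." column
        count = 2 ** (k - 1)                     # the "No." column
        blocks = [(2 * j * length, 2 * j * length + length - 1)
                  for j in range(count)]
        yield k, blocks, length
        k += 1

N = 16                                           # assumed power of two
total = 0.0
for k, blocks, length in partial_sum_blocks(N):
    s = len(blocks)
    per_sum = (N / s) * math.log2(N / s)         # the "Cost" column
    total += s * per_sum                         # s such sums in row k
    print(f"row {k}: {s} sum(s) of length {length}: {blocks}")
print(f"accumulated cost ~ {total:.0f} flops")
```

Under this reading, summing $s \cdot O\bigl(\frac{N}{s}\log_2\frac{N}{s}\bigr)$ over the $O(\log_2 N)$ rows gives the familiar $O\bigl(N(\log_2 N)^2\bigr)$ overall complexity of FFT-based fast convolution schemes of this type [35,36,37].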
