Article

Fractional Diffusion in Gaussian Noisy Environment

Department of Mathematics, University of Kansas, Lawrence, KS 66045, USA
*
Author to whom correspondence should be addressed.
Submission received: 16 February 2015 / Accepted: 24 March 2015 / Published: 31 March 2015
(This article belongs to the Special Issue Recent Advances in Fractional Calculus and Its Applications)

Abstract

We study fractional diffusion in a Gaussian noisy environment, as described by fractional-order stochastic heat equations of the form $D_t^{(\alpha)}u(t,x) = Bu + u\,\dot{W}^H$, where $D_t^{(\alpha)}$ is the Caputo fractional derivative of order $\alpha \in (0,1)$ with respect to the time variable $t$, $B$ is a second-order elliptic operator with respect to the space variable $x \in \mathbb{R}^d$, and $\dot{W}^H$ is a time-homogeneous fractional Gaussian noise with Hurst parameter $H = (H_1,\dots,H_d)$. We obtain conditions on $\alpha$ and $H$ under which a square-integrable solution $u$ exists and is unique.

1. Introduction

In recent years, there has been a great deal of work on anomalous diffusion in biophysics and related fields (see, for example, [1–4], to mention just a few). In mathematics, some of these anomalous diffusions (such as sub-diffusions) can be described by so-called fractional-order diffusion processes. As for the term "fractional-order diffusion", one has to distinguish two completely different types. One is the equation of the form $\partial_t u(t,x) = -(-\Delta)^{\alpha}u(t,x)$, where $t \ge 0$, $x \in \mathbb{R}^d$, $\alpha \in (0,1)$, $\partial_t = \frac{\partial}{\partial t}$ and $\Delta = \sum_{i=1}^d \frac{\partial^2}{\partial x_i^2}$ is the Laplacian. This equation is not associated with anomalous diffusion; instead, it is associated with the so-called stable process (or, more generally, the Lévy process), which has jumps. The other equation is of the form $D_t^{(\alpha)}u(t,x) = \Delta u(t,x)$, where $D_t^{(\alpha)}$ is the Caputo fractional derivative with respect to $t$. It is also possible to use the Riemann–Liouville fractional derivative instead of the Caputo one (see [5] for a study of various fractional derivatives). This equation is relevant to the anomalous diffusion mentioned above and has been studied by a number of researchers. Let us mention a few recent publications concerning applications of subdiffusive fractional equations. The work in [3] studied applications to transport in biological cells. The work in [6,7] studied the fractional chemotaxis diffusion equation. The work in [4,8] studied morphogen gradient formation. Anomalous electrodiffusion in nerve cells is studied in [9]. The work in [10,11] studied subdiffusive transport equations; it was argued there that it is unlikely that a Caputo form of a transport equation can be derived from a chemotaxis model on the lattice, and the use of the Riemann–Liouville-type equation was strongly advocated when the anomalous exponent $\alpha$ is space dependent.
If one considers anomalous diffusion in a random environment, one is naturally led to the study of a fractional-order stochastic partial differential equation of the form $D_t^{(\alpha)}u(t,x) = Bu(t,x) + u(t,x)\,\dot{W}(t,x)$, where $B$ is a second-order differential operator, including the Laplacian as a special case, and $\dot{W}$ is a noise. In this paper, we study this fractional-order stochastic partial differential equation when $\dot{W}(t,x) = \dot{W}^H(x)$ is a time-homogeneous fractional Gaussian noise with Hurst parameter $H = (H_1,\dots,H_d)$. Mainly, we find a relation between $\alpha$ and $H$ such that the above equation has a unique square-integrable solution.
If $\alpha$ is formally set to one, the above stochastic partial differential equation is the one studied in [12]. Therefore, our work can be considered an extension of [12] to the case of fractional diffusion (in a Gaussian noisy environment). Let us also mention that when we formally set $\alpha = 1$, we recover one of the main results in [12] (see Remark 3 in Section 2 below). Thus, our Condition (2.10) given below is, in this sense, optimal.
Here is the organization of the paper. The main result is stated in Section 2. In the proof, we need the properties of the two fundamental solutions (Green's functions) $Z(t,x,\xi)$ and $Y(t,x,\xi)$ associated with the equation $D_t^{(\alpha)}u(t,x) = Bu(t,x)$, which are represented by Fox's H-function. We recall the most relevant results on the H-function and on the Green's functions $Z(t,x,\xi)$ and $Y(t,x,\xi)$ in Section 3. A number of preparatory lemmas are needed to prove the main result; they are presented in Section 4. Finally, Section 5 is devoted to the proof of the main theorem.

2. Main Result

Let:
$$B = \sum_{i,j=1}^d a_{i,j}(x)\frac{\partial^2}{\partial x_i\partial x_j} + \sum_{j=1}^d b_j(x)\frac{\partial}{\partial x_j} + c(x)$$
be a uniformly elliptic second-order differential operator with bounded continuous real-valued coefficients. Let $u_0$ be a given bounded continuous function (locally Hölder continuous if $d > 1$). Let $\{W^H(x), x \in \mathbb{R}^d\}$ be a time-homogeneous (time-independent) fractional Brownian field on some probability space $(\Omega,\mathcal{F},P)$ (as elsewhere in probability theory, we omit the dependence of $W^H(x) = W^H(x,\omega)$ on $\omega \in \Omega$). Namely, the stochastic process $\{W^H(x), x \in \mathbb{R}^d\}$ is a (multi-parameter) Gaussian process with mean zero, and its covariance is given by:
$$E\left(W^H(x)W^H(y)\right) = \prod_{i=1}^d R_{H_i}(x_i,y_i),$$
where $H_1,\dots,H_d$ are real numbers in the interval $(0,1)$. Due to a technical difficulty, we assume that $H_i > 1/2$ for all $i = 1,2,\dots,d$. The symbol $E$ denotes the expectation on $(\Omega,\mathcal{F},P)$, and:
$$R_{H_i}(x_i,y_i) = \frac{1}{2}\left(|x_i|^{2H_i} + |y_i|^{2H_i} - |x_i-y_i|^{2H_i}\right), \quad x_i, y_i \in \mathbb{R},$$
is the covariance function of a fractional Brownian motion of Hurst parameter $H_i$.
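For concreteness, this product-form covariance is easy to compute directly. The following sketch is my own illustration, not from the paper; the Hurst parameters $H = (0.6, 0.8)$ and the evaluation points are arbitrary choices. It builds $E[W^H(x)W^H(y)]$ for $d = 2$ and checks that the variance at $x$ equals $\prod_i |x_i|^{2H_i}$:

```python
import numpy as np

def R(H, s, t):
    """Covariance of a one-dimensional fBm with Hurst parameter H."""
    return 0.5 * (abs(s) ** (2 * H) + abs(t) ** (2 * H) - abs(s - t) ** (2 * H))

def cov_field(H, x, y):
    """Covariance E[W^H(x) W^H(y)] of the multi-parameter field:
    the product of the coordinate-wise fBm covariances."""
    return np.prod([R(Hi, xi, yi) for Hi, xi, yi in zip(H, x, y)])

H = (0.6, 0.8)
x, y = (1.0, 2.0), (1.5, 0.5)

c = cov_field(H, x, y)
# At x = y the variance is prod_i |x_i|^{2 H_i}.
var_x = cov_field(H, x, x)
assert abs(var_x - abs(x[0]) ** 1.2 * abs(x[1]) ** 1.6) < 1e-12
# Symmetry of the covariance.
assert abs(c - cov_field(H, y, x)) < 1e-12
```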
Throughout this paper, we fix an arbitrary parameter α ∈ (0,1) and a finite time horizon T ∈ (0, ∞). We study the following stochastic partial differential equation of fractional order:
$$\begin{cases} D_t^{(\alpha)}u(t,x) = Bu(t,x) + u(t,x)\,\dot{W}^H(x), & t \in (0,T],\ x \in \mathbb{R}^d;\\ u(0,x) = u_0(x), \end{cases} \tag{2.2}$$
where:
$$D_t^{(\alpha)}u(t,x) = \frac{1}{\Gamma(1-\alpha)}\left[\frac{\partial}{\partial t}\int_0^t (t-\tau)^{-\alpha}u(\tau,x)\,d\tau - t^{-\alpha}u(0,x)\right]$$
is the Caputo fractional derivative (see, e.g., [5]) and $\dot{W}^H(x) = \frac{\partial^d}{\partial x_1\cdots\partial x_d}W^H(x)$ is the distributional derivative (generalized derivative) of $W^H$, called fractional Brownian noise.
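For intuition about this definition, the Caputo derivative can be evaluated numerically. The sketch below is my own illustration, not from the paper; `caputo_l1` and its parameters are invented names. It uses the standard L1 discretization, which is exact for $u(t) = t$ and recovers the classical value $D_t^{(\alpha)}t = t^{1-\alpha}/\Gamma(2-\alpha)$:

```python
import math

def caputo_l1(u, t, alpha, n=1000):
    """L1 approximation of the Caputo derivative of order alpha in (0,1):
    D^(alpha) u(t) ~ (1/Gamma(2-alpha)) * sum_k [(t-t_k)^(1-a) - (t-t_{k+1})^(1-a)] * (u_{k+1}-u_k)/h."""
    h = t / n
    grid = [k * h for k in range(n + 1)]
    acc = 0.0
    for k in range(n):
        w = (t - grid[k]) ** (1 - alpha) - (t - grid[k + 1]) ** (1 - alpha)
        acc += w * (u(grid[k + 1]) - u(grid[k])) / h
    return acc / math.gamma(2 - alpha)

alpha, t = 0.6, 1.0
# For u(t) = t, the exact Caputo derivative is t^(1-alpha)/Gamma(2-alpha);
# the piecewise-linear L1 scheme reproduces it exactly (the weights telescope).
approx = caputo_l1(lambda s: s, t, alpha)
exact = t ** (1 - alpha) / math.gamma(2 - alpha)
assert abs(approx - exact) < 1e-10
```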
Our objective is to obtain conditions on $\alpha$ and $H$ such that the above equation has a unique solution. However, since $W^H$ is not differentiable, that is, since $\dot{W}^H(x)$ does not exist as an ordinary function, we have to specify in what sense a random field $\{u(t,x), t \ge 0, x \in \mathbb{R}^d\}$ is a solution to the above Equation (2.2).
To motivate our definition of the solution, let us consider the following (deterministic) partial differential equation of fractional order, with the term $u(t,x)\,\dot{W}^H(x)$ in Equation (2.2) replaced by $f(t,x)$:
$$\begin{cases} D_t^{(\alpha)}\tilde{u}(t,x) = B\tilde{u}(t,x) + f(t,x), & t \in (0,T],\ x \in \mathbb{R}^d;\\ \tilde{u}(0,x) = u_0(x), \end{cases} \tag{2.3}$$
where the function f is bounded and jointly continuous in (t, x) and locally Hölder continuous in x.
In [13], it is proven that there are two Green's functions $\{Z(t,x,\xi), Y(t,x,\xi),\ 0 < t \le T,\ x,\xi \in \mathbb{R}^d\}$ such that the solution to the Cauchy problem Equation (2.3) is given by:
$$\tilde{u}(t,x) = \int_{\mathbb{R}^d} Z(t,x,\xi)u_0(\xi)\,d\xi + \int_0^t ds\int_{\mathbb{R}^d} Y(t-s,x,y)f(s,y)\,dy. \tag{2.4}$$
In general, there is no explicit form for the two Green's functions $\{Z(t,x,\xi), Y(t,x,\xi)\}$. However, their constructions and properties are known (see [13–15] and the references therein). We shall recall the needed results in the next section.
From the classical solution expression Equation (2.4), we expect that the solution u(t, x) to Equation (2.2) satisfies formally:
$$u(t,x) = \int_{\mathbb{R}^d} Z(t,x,\xi)u_0(\xi)\,d\xi + \int_0^t ds\int_{\mathbb{R}^d} Y(t-s,x,y)u(s,y)\dot{W}^H(y)\,dy.$$
The above formal integral $\int_0^t ds\int_{\mathbb{R}^d} Y(t-s,x,y)u(s,y)\dot{W}^H(y)\,dy$ can be defined as the Itô–Skorohod stochastic integral $\int_{\mathbb{R}^d}\left[\int_0^t Y(t-s,x,y)u(s,y)\,ds\right]W^H(dy)$, as given in [12].
Now, we can give the following definition.
Definition 1. A random field $\{u(t,x),\ 0 \le t \le T,\ x \in \mathbb{R}^d\}$ is called a mild solution to Equation (2.2) if:
  • u(t, x) is jointly measurable in t ∈ [0, T] and x ∈ ℝd;
  • $\forall (t,x) \in [0,T]\times\mathbb{R}^d$, $\int_{\mathbb{R}^d}\left[\int_0^t Y(t-s,x,y)u(s,y)\,ds\right]W^H(dy)$ is well defined in $\mathbb{L}^2 = L^2(\Omega,\mathcal{F},P)$;
  • The following holds in $\mathbb{L}^2$:
$$u(t,x) = \int_{\mathbb{R}^d} Z(t,x,\xi)u_0(\xi)\,d\xi + \int_0^t\int_{\mathbb{R}^d} Y(t-s,x,y)u(s,y)\,W^H(dy)\,ds. \tag{2.5}$$
Let us return to the discussion of the two Green's functions $\{Z(t,x,\xi), Y(t,x,\xi)\}$. If $\alpha = 1$, namely, if $D_t^{(\alpha)}$ in Equation (2.3) is replaced by $\frac{\partial}{\partial t}$ and $B = \Delta := \sum_{i=1}^d \frac{\partial^2}{\partial x_i^2}$, then:
$$Z(t,x,\xi) = Y(t,x,\xi) = (4\pi t)^{-d/2}\exp\left\{-\frac{|x-\xi|^2}{4t}\right\}. \tag{2.6}$$
In this case, the stochastic partial differential equation of the form:
$$\frac{\partial u(t,x)}{\partial t} = \Delta u(t,x) + u\,\dot{W}^H(x), \quad x \in \mathbb{R}^d, \tag{2.7}$$
was studied in [12]. The mild solution to the above Equation (2.7) is proven to exist uniquely under conditions:
$$H_i > 1/2,\ i = 1,\dots,d \quad \text{and} \quad \sum_{i=1}^d H_i > d-1. \tag{2.8}$$
The main result of this paper is to extend the above result in [12] to our Equation (2.2).
Theorem 2. Let the coefficients $a_{ij}(x)$, $b_i(x)$, $i,j = 1,\dots,d$, be bounded and Hölder continuous with exponent $\gamma$. Let $(a_{ij}(x))$ be uniformly elliptic; namely, there is a constant $a_0 \in (0,\infty)$ such that:
$$\sum_{i,j=1}^d a_{ij}(x)\xi_i\xi_j \ge a_0|\xi|^2, \quad \forall\,\xi = (\xi_1,\dots,\xi_d) \in \mathbb{R}^d.$$
Let $u_0$ be bounded and continuous (and locally Hölder continuous if $d > 1$). Assume:
$$H_i > \begin{cases} \dfrac{1}{2} & \text{if } d = 1,2,3,4\\[6pt] 1 - \dfrac{2}{d} - \dfrac{\gamma}{2d} & \text{if } d \ge 5 \end{cases}$$
and:
$$\sum_{i=1}^d H_i > d - 2 + \frac{1}{\alpha}. \tag{2.10}$$
Then, the mild solution to (2.2) exists uniquely in $L^2(\Omega,\mathcal{F},P)$.
Remark 3. (i) If $\alpha$ is formally set to one and $B = \Delta$, then $H_i > 1/2$ implies Condition (5.8). Thus, Condition (2.10) is the same as Condition (2.8) (which is the condition given in [12]). Therefore, in some sense, our Condition (2.10) is optimal.
(ii) Since $H_i < 1$ for all $i = 1,2,\dots,d$, Condition (2.10) can hold only when $\alpha > 1/2$.
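The interplay between $\alpha$, $H$ and $d$ in Theorem 2 can be tabulated mechanically. The helper below is my own sketch, not from the paper: `theorem2_conditions` is an invented name, and the exact form of the $d \ge 5$ threshold is an assumption. It illustrates Remark 3: the joint condition fails for every choice of $H$ whenever $\alpha \le 1/2$:

```python
import math

def theorem2_conditions(alpha, H, gamma=1.0):
    """Check the hypotheses of Theorem 2 on the Hurst parameters (a sketch).
    gamma is the Holder exponent of the coefficients; the d >= 5 lower bound
    below is an assumed reading of the condition in the text."""
    d = len(H)
    lower = 0.5 if d <= 4 else 1 - 2 / d - gamma / (2 * d)
    pointwise = all(h > lower for h in H)
    joint = sum(H) > d - 2 + 1 / alpha              # Condition (2.10)
    return pointwise and joint

# With H_i < 1 we have sum(H) < d, so we need d - 2 + 1/alpha < d, i.e., alpha > 1/2.
assert not theorem2_conditions(0.4, [0.9, 0.9])     # 1/alpha = 2.5: impossible
assert theorem2_conditions(0.9, [0.95, 0.95])       # sum H = 1.9 > 1/0.9 ~ 1.11
assert not theorem2_conditions(0.9, [0.52, 0.55])   # sum H = 1.07 < 1.11
# alpha = 1 reduces (2.10) to sum H_i > d - 1, i.e., Condition (2.8).
assert theorem2_conditions(1.0, [0.6, 0.6])
```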

3. Green’s Functions Z and Y

3.1. Fox’s H-Function

We shall use the H-function to express the Green’s functions Z and Y in Definition 1. In this subsection, we recall some results about the H-function and the two Green’s functions. We shall follow the presentation in [16] (see also [13] and the references therein).
Definition 4. Let $m, n, p, q$ be integers such that $0 \le m \le q$, $0 \le n \le p$. Let $a_i, b_j \in \mathbb{C}$ be complex numbers and let $\alpha_i, \beta_j$ be positive numbers, $i = 1,2,\dots,p$; $j = 1,2,\dots,q$. Assume that the set of poles of the gamma functions $\Gamma(b_j+\beta_j s)$ does not intersect that of the gamma functions $\Gamma(1-a_i-\alpha_i s)$, namely,
$$\left\{b_{jl} = -\frac{b_j+l}{\beta_j},\ l = 0,1,\dots\right\} \cap \left\{a_{ik} = \frac{1-a_i+k}{\alpha_i},\ k = 0,1,\dots\right\} = \emptyset$$
for all $i = 1,2,\dots,p$ and $j = 1,2,\dots,q$. The H-function:
$$H_{pq}^{mn}(z) \equiv H_{pq}^{mn}\left[z\,\middle|\,\begin{matrix}(a_1,\alpha_1)\cdots(a_p,\alpha_p)\\(b_1,\beta_1)\cdots(b_q,\beta_q)\end{matrix}\right]$$
is defined by the following integral:
$$H_{pq}^{mn}(z) = \frac{1}{2\pi i}\int_L \frac{\prod_{j=1}^m \Gamma(b_j+\beta_j s)\,\prod_{i=1}^n \Gamma(1-a_i-\alpha_i s)}{\prod_{i=n+1}^p \Gamma(a_i+\alpha_i s)\,\prod_{j=m+1}^q \Gamma(1-b_j-\beta_j s)}\,z^{-s}\,ds, \quad z \ne 0, \tag{3.1}$$
where an empty product in Equation (3.1) means one, and $L$ in Equation (3.1) is an infinite contour that separates all of the points $b_{jl}$ to its left and all of the points $a_{ik}$ to its right. Moreover, $L$ has one of the following forms:
Case 1. $L = L_{-\infty}$ is a left loop situated in a horizontal strip, starting at the point $-\infty + i\phi_1$ and terminating at the point $-\infty + i\phi_2$ for some $-\infty < \phi_1 < \phi_2 < \infty$;
Case 2. $L = L_{+\infty}$ is a right loop situated in a horizontal strip, starting at the point $+\infty + i\phi_1$ and terminating at the point $+\infty + i\phi_2$ for some $-\infty < \phi_1 < \phi_2 < \infty$;
Case 3. $L = L_{i\gamma\infty}$ is a contour starting at the point $\gamma - i\infty$ and terminating at the point $\gamma + i\infty$ for some $\gamma \in (-\infty,\infty)$.
To illustrate L, we give the following graphs.
The integral Equation (3.1) exists when $\sum_{j=1}^q \beta_j - \sum_{i=1}^p \alpha_i \ge 0$ (see [16], Theorem 1.1).
Example 5. To compare with the classical case $\alpha = 1$, we consider the case $m = 2$, $n = 0$, $p = 1$, $q = 2$, $a_1 = \alpha_1 = b_2 = \beta_1 = \beta_2 = 1$ and $b_1 = \frac{d}{2}$. Let $L = L_{-\infty}$. Then, we have:
$$H_{12}^{20}\left[z\,\middle|\,\begin{matrix}(1,1)\\(\frac{d}{2},1),(1,1)\end{matrix}\right] = \frac{1}{2\pi i}\int_L \frac{\Gamma(\frac{d}{2}+s)\Gamma(1+s)}{\Gamma(1+s)}z^{-s}\,ds = \frac{1}{2\pi i}\int_L \Gamma\left(\frac{d}{2}+s\right)z^{-s}\,ds = \sum_{v=0}^\infty \lim_{s\to-(\frac{d}{2}+v)}\left(s+\frac{d}{2}+v\right)\Gamma\left(\frac{d}{2}+s\right)z^{-s} = \sum_{v=0}^\infty \frac{(-1)^v}{v!}z^{\frac{d}{2}+v} = z^{d/2}\exp(-z), \tag{3.2}$$
where we summed the residues of $\Gamma(\frac{d}{2}+s)z^{-s}$ at its poles $s = -(\frac{d}{2}+v)$, $v = 0,1,\dots$
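The closed form can be checked numerically: the residue series is just $z^{d/2}$ times the exponential series for $e^{-z}$. A quick sketch of my own (`h_series` is an invented name):

```python
import math

def h_series(z, d, terms=60):
    """Truncated residue series for the H-function of Example 5:
    sum_v (-1)^v / v! * z^(d/2 + v)."""
    return sum((-1) ** v / math.factorial(v) * z ** (d / 2 + v) for v in range(terms))

for d in (1, 2, 3):
    for z in (0.3, 1.0, 2.5):
        # The series should agree with the closed form z^(d/2) * exp(-z).
        assert abs(h_series(z, d) - z ** (d / 2) * math.exp(-z)) < 1e-12
```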

3.2. Green’s Functions Z and Y When B Has Constant Coefficients

In this subsection, let us consider Z and Y when the operator B in Equation (2.2) has the following form:
$$B = \sum_{i,j=1}^d a_{ij}\frac{\partial^2}{\partial x_i\partial x_j},$$
where the matrix A = (aij) is positive definite. In this case, Z and Y (we call them Z0 and Y0 to distinguish from the general coefficient case) are given as follows.
$$Z_0(t,x) = \pi^{-d/2}(\det A)^{-1/2}\left[\sum_{i,j=1}^d A^{(ij)}x_ix_j\right]^{-d/2} \times H_{12}^{20}\left[\frac{1}{4}t^{-\alpha}\sum_{i,j=1}^d A^{(ij)}x_ix_j\,\middle|\,\begin{matrix}(1,\alpha)\\(\frac{d}{2},1),(1,1)\end{matrix}\right],$$
where $(A^{(ij)}) = A^{-1}$, and
$$Y_0(t,x) = \pi^{-d/2}(\det A)^{-1/2}\left[\sum_{i,j=1}^d A^{(ij)}x_ix_j\right]^{-d/2}t^{\alpha-1} \times H_{12}^{20}\left[\frac{1}{4}t^{-\alpha}\sum_{i,j=1}^d A^{(ij)}x_ix_j\,\middle|\,\begin{matrix}(\alpha,\alpha)\\(\frac{d}{2},1),(1,1)\end{matrix}\right].$$
It is easy to see that for constant coefficients, both Green's functions are homogeneous in time and space. Namely,
$$Z_0(t,x,\xi) = Z_0(t,x-\xi), \quad Y_0(t,x,\xi) = Y_0(t,x-\xi).$$
In particular, when $\alpha = 1$, it is easy to see from the above expressions and the explicit form Equation (3.2) of $H_{12}^{20}(z)$ that:
$$Z_0(t,x,\xi) = Y_0(t,x,\xi) = (4\pi t)^{-d/2}\det(A)^{-1/2}\exp\left\{-\frac{\sum_{i,j=1}^d A^{(ij)}(x_i-\xi_i)(x_j-\xi_j)}{4t}\right\},$$
which reduces to Equation (2.6) when A = I is the identity matrix.
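As a sanity check on the constants in this $\alpha = 1$ kernel (my own numerical verification; $t$ and the positive definite matrix $A$ below are arbitrary choices), the kernel should integrate to one over $\mathbb{R}^d$:

```python
import numpy as np

# Assumed sample data: a time t and a positive definite 2x2 matrix A.
t = 0.5
A = np.array([[2.0, 0.5], [0.5, 1.0]])
Ainv = np.linalg.inv(A)

# Grid integration of (4*pi*t)^(-d/2) * det(A)^(-1/2) * exp(-x^T A^{-1} x / (4t)) over R^2.
xs = np.linspace(-12.0, 12.0, 801)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
Q = Ainv[0, 0] * X**2 + 2 * Ainv[0, 1] * X * Y + Ainv[1, 1] * Y**2
Z0 = 1.0 / (4 * np.pi * t) * np.linalg.det(A) ** -0.5 * np.exp(-Q / (4 * t))

mass = Z0.sum() * dx * dx     # total probability mass; should be 1
assert abs(mass - 1.0) < 1e-6
```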
With the above expressions for Z0 and Y0 and the properties of the H-function, one can obtain the following estimates.
Proposition 6. Denote:
$$p(t,x) = \exp\left(-\sigma t^{-\frac{\alpha}{2-\alpha}}|x|^{\frac{2}{2-\alpha}}\right), \quad t > 0,\ x \in \mathbb{R}^d, \tag{3.3}$$
where, here and in what follows, $C$ and $\sigma$ denote generic positive constants that may be different in different occurrences. Then, we have the following estimates:
$$|Z_0(t,x)| \le \begin{cases} Ct^{-\frac{\alpha}{2}}p(t,x) & \text{when } d = 1\\ Ct^{-\alpha}\left[\left|\log\frac{|x|^2}{t^\alpha}\right|+1\right]p(t,x) & \text{when } d = 2\\ Ct^{-\alpha}|x|^{2-d}p(t,x) & \text{when } d \ge 3, \end{cases} \tag{3.4}$$
where, for instance, $|Z_0(t,x)| \le Ct^{-\frac{\alpha}{2}}p(t,x)$ means that there are positive constants $C$ and $\sigma$ such that this inequality holds.
Proof. Denote R = |x|2/tα. From [13], Proposition 1, it follows that when R ≤ 1, we have:
$$|Z_0(t,x)| \le \begin{cases} Ct^{-\frac{\alpha}{2}} & \text{when } d = 1\\ Ct^{-\alpha}\left[\left|\log\frac{|x|^2}{t^\alpha}\right|+1\right] & \text{when } d = 2\\ Ct^{-\alpha}|x|^{2-d} & \text{when } d \ge 3. \end{cases}$$
When $R \le 1$, $p(t,x)$ is bounded from below by a positive constant, so this proves the inequality Equation (3.4) when $R \le 1$.
When $R > 1$, by [13], Proposition 1, we have $|Z_0(t,x)| \le Ct^{-\frac{\alpha d}{2}}p(t,x)$. It is clear that this implies the inequality Equation (3.4) when $d = 1$ and $d = 2$. Now, we assume that $d \ge 3$. We have:
$$|Z_0(t,x)| \le Ct^{-\frac{\alpha d}{2}}p(t,x) = Ct^{-\alpha}|x|^{2-d}\left(\frac{|x|^2}{t^\alpha}\right)^{\frac{d}{2}-1}p(t,x) \le Ct^{-\alpha}|x|^{2-d}p(t,x),$$
where we used the fact that $\left(\frac{|x|^2}{t^\alpha}\right)^{\frac{d}{2}-1}p(t,x) \le Cp(t,x)$ for a different $\sigma$ in the latter $p(t,x)$. □
Similarly, we can use [13], Proposition 2 (for the d = 1 case), and [13], Section 4.2 (for the d ≥ 2 case), to obtain the following estimates for Y0(t, x).
Proposition 7. We follow the same notation p(t, x) as defined by Equation (3.3). We have:
  • When d = 1, we have the following estimates:
$$|Y_0(t,x)| \le \begin{cases} Ct^{\frac{\alpha}{2}-1}p(t,x) & \text{when } t^{-\alpha}|x|^2 \ge 1\\ Ct^{\frac{\alpha}{2}-1} & \text{when } t^{-\alpha}|x|^2 \le 1. \end{cases}$$
  • When d ≥ 2, we have the following estimates:
$$|Y_0(t,x)| \le \begin{cases} Ct^{-1}p(t,x) & \text{when } d = 2\\ Ct^{-\frac{\alpha}{2}-1}p(t,x) & \text{when } d = 3\\ Ct^{-\alpha-1}\left[\left|\log\frac{|x|^2}{t^\alpha}\right|+1\right]p(t,x) & \text{when } d = 4\\ Ct^{-\alpha-1}|x|^{4-d}p(t,x) & \text{when } d \ge 5. \end{cases}$$

3.3. Green’s Functions Z and Y in the General Coefficient Case

If the coefficients of B are not constant, then the Green’s functions Z and Y are more complicated and may be obtained by a method similar to the Levi parametrix for the parabolic equations.
Denote:
$$M(t,x,\xi) = \sum_{i,j=1}^d\left[a_{ij}(x)-a_{ij}(\xi)\right]\frac{\partial^2}{\partial x_i\partial x_j}Z_0(t,x-\xi,\xi) + \sum_{i=1}^d b_i(x)\frac{\partial}{\partial x_i}Z_0(t,x-\xi,\xi) + c(x)Z_0(t,x-\xi,\xi),$$
$$K(t,x,\xi) = \sum_{i,j=1}^d\left[a_{ij}(x)-a_{ij}(\xi)\right]\frac{\partial^2}{\partial x_i\partial x_j}Y_0(t,x-\xi,\xi) + \sum_{i=1}^d b_i(x)\frac{\partial}{\partial x_i}Y_0(t,x-\xi,\xi) + c(x)Y_0(t,x-\xi,\xi).$$
Let $Q(t,x,\xi)$ and $\Phi(t,x,\xi)$ be defined by:
$$Q(t,x,\xi) = M(t,x,\xi) + \int_0^t ds\int_{\mathbb{R}^d}K(t-s,x,y)Q(s,y,\xi)\,dy;$$
$$\Phi(t,x,\xi) = K(t,x,\xi) + \int_0^t ds\int_{\mathbb{R}^d}K(t-s,x,y)\Phi(s,y,\xi)\,dy.$$
The following proposition is proven in [13] (see Section 2.2 of that paper).
Proposition 8. Let the coefficients aij(x) and bi(x) satisfy the conditions in Theorem 2. Recall that γ is the Hölder exponent of the coefficients with respect to the spatial variable x. Then, the Green’s functions {Z(t, x, ξ),Y(t, x, ξ)} have the following form:
$$Z(t,x,\xi) = Z_0(t,x-\xi,\xi) + V_Z(t,x,\xi); \quad Y(t,x,\xi) = Y_0(t,x-\xi,\xi) + V_Y(t,x,\xi),$$
where
$$V_Z(t,x,\xi) = \int_0^t ds\int_{\mathbb{R}^d}Y_0(t-s,x,y)Q(s,y,\xi)\,dy; \quad V_Y(t,x,\xi) = \int_0^t ds\int_{\mathbb{R}^d}Y_0(t-s,x,y)\Phi(s,y,\xi)\,dy.$$
Moreover, the functions $V_Z(t,x,\xi)$ and $V_Y(t,x,\xi)$ satisfy the following estimates:
$$|V_Z(t,x,\xi)| \le \begin{cases} Ct^{\frac{(\gamma-1)\alpha}{2}}p(t,x-\xi), & \text{when } d = 1;\\ Ct^{\frac{\gamma\alpha}{2}-\alpha}p(t,x-\xi), & \text{when } d = 2;\\ Ct^{\frac{\gamma_0\alpha}{2}-\alpha}|x-\xi|^{2-d+\gamma-\gamma_0}p(t,x-\xi), & \text{when } d = 3 \text{ or } d \ge 5;\\ Ct^{\frac{(\gamma-\gamma_0)\alpha}{2}-\alpha}|x-\xi|^{-2+\gamma-2\gamma_0}p(t,x-\xi), & \text{when } d = 4 \end{cases}$$
and:
$$|V_Y(t,x,\xi)| \le \begin{cases} Ct^{\alpha-1+\frac{(\gamma-1)\alpha}{2}}p(t,x-\xi), & \text{when } d = 1;\\ Ct^{\frac{\gamma\alpha}{2}-1}p(t,x-\xi), & \text{when } d = 2;\\ Ct^{\frac{(\gamma_0+\gamma)\alpha}{4}-1}|x-\xi|^{2-d+(\gamma-\gamma_0)/2}p(t,x-\xi), & \text{when } d = 3 \text{ or } d \ge 5;\\ Ct^{\frac{(\gamma-\gamma_0)\alpha}{4}-1}|x-\xi|^{-2+\gamma-2\gamma_0}p(t,x-\xi), & \text{when } d = 4. \end{cases}$$
Here, γ0 is any number, such that 0 < γ0 < γ, and in the case d ≥ 3, the constant C depends on γ0.

4. Auxiliary Lemmas

To prove our main theorem, we need to dominate certain multiple integrals involving $Y(t,x,\xi)$ and $Z(t,x,\xi)$. Since both are complicated, we first bound $Y$ by $p(t,x-\xi)$, using the estimates on $|Y_0(t,x-\xi,\xi)|$ and $|V_Y(t,x,\xi)|$. More precisely, we have the following bounds for $Y(t,x,\xi)$.
Lemma 9. Let $x, \xi \in \mathbb{R}^d$, $t \in (0,T]$. Then:
$$|Y(t,x,\xi)| \le \begin{cases} Ct^{-1+\frac{\alpha}{2}}p(t,x-\xi), & d = 1;\\ Ct^{-1}p(t,x-\xi), & d = 2;\\ Ct^{-\frac{(\gamma-2\gamma_0)\alpha}{2}-1}|x-\xi|^{-2+\gamma-2\gamma_0}p(t,x-\xi), & d = 4;\\ Ct^{-\frac{(\gamma-\gamma_0)\alpha}{4}-1}|x-\xi|^{2-d+(\gamma-\gamma_0)/2}p(t,x-\xi), & d = 3 \text{ or } d \ge 5. \end{cases} \tag{4.1}$$
Proof. We shall prove the lemma by the above different cases. First, when d = 1, by Proposition 7, we have:
$$|Y_0(t,x-\xi,\xi)| \le \begin{cases} Ct^{\frac{\alpha}{2}-1}p(t,x-\xi), & t^{-\alpha}|x-\xi|^2 \ge 1;\\ Ct^{\frac{\alpha}{2}-1}, & t^{-\alpha}|x-\xi|^2 \le 1. \end{cases}$$
If $t^{-\alpha}|x-\xi|^2 \le 1$, then $p(t,x-\xi) \ge e^{-\sigma}$, so:
$$|Y_0(t,x-\xi,\xi)| \le Ct^{\frac{\alpha}{2}-1} \le Ce^{\sigma}t^{\frac{\alpha}{2}-1}p(t,x-\xi).$$
Therefore:
$$|Y(t,x,\xi)| \le |Y_0(t,x-\xi,\xi)| + |V_Y(t,x,\xi)| \le Ct^{\alpha-1+\frac{(\gamma-1)\alpha}{2}}p(t,x-\xi) + Ct^{\frac{\alpha}{2}-1}p(t,x-\xi) \le Ct^{\frac{\alpha}{2}-1}p(t,x-\xi).$$
Now, we consider the case d = 2. From the following inequalities:
$$|V_Y(t,x,\xi)| \le Ct^{\frac{\gamma\alpha}{2}-1}p(t,x-\xi); \quad |Y_0(t,x-\xi,\xi)| \le Ct^{-1}p(t,x-\xi),$$
we easily have:
$$|Y(t,x,\xi)| \le |Y_0(t,x-\xi,\xi)| + |V_Y(t,x,\xi)| \le Ct^{-1}p(t,x-\xi).$$
We are going to prove the lemma when d = 3. From Proposition 7, we have:
$$|Y_0(t,x-\xi,\xi)| \le Ct^{-\frac{\alpha}{2}-1}p(t,x-\xi) = Ct^{-\frac{(\gamma-\gamma_0)\alpha}{4}-1}|x-\xi|^{-1+\frac{\gamma-\gamma_0}{2}}\left(|x-\xi|\,t^{-\frac{\alpha}{2}}\right)^{1-\frac{\gamma-\gamma_0}{2}}p(t,x-\xi) \le Ct^{-\frac{(\gamma-\gamma_0)\alpha}{4}-1}|x-\xi|^{-1+\frac{\gamma-\gamma_0}{2}}p(t,x-\xi).$$
Combining this inequality with Proposition 8, we obtain:
$$|Y(t,x,\xi)| \le Ct^{-\frac{(\gamma-\gamma_0)\alpha}{4}-1}|x-\xi|^{-1+\frac{\gamma-\gamma_0}{2}}p(t,x-\xi).$$
We turn to consider the case d = 4. Proposition 7 yields that for any θ > 0, the following holds true:
$$|Y_0(t,x-\xi,\xi)| \le Ct^{-\alpha-1}\left[\left(\frac{|x-\xi|^2}{t^\alpha}\right)^{\theta}+\left(\frac{t^\alpha}{|x-\xi|^2}\right)^{\theta}\right]p(t,x-\xi) = Ct^{-\alpha-1}\left(\frac{t^\alpha}{|x-\xi|^2}\right)^{\theta}\left[\left(\frac{|x-\xi|^2}{t^\alpha}\right)^{2\theta}+1\right]p(t,x-\xi).$$
If $\frac{|x-\xi|^2}{t^\alpha} > 1$, then:
$$\left[\left(\frac{|x-\xi|^2}{t^\alpha}\right)^{2\theta}+1\right]p(t,x-\xi) \le 2\left(\frac{|x-\xi|^2}{t^\alpha}\right)^{2\theta}p(t,x-\xi) \le Cp(t,x-\xi).$$
As a consequence, we have:
$$|Y_0(t,x-\xi,\xi)| \le Ct^{-\alpha-1}\left(\frac{t^\alpha}{|x-\xi|^2}\right)^{\theta}p(t,x-\xi).$$
If $\frac{|x-\xi|^2}{t^\alpha} \le 1$, the above inequality is obviously true. Now, we choose $\theta > 0$ such that $2\theta \le 2-\gamma+2\gamma_0$. Thus, we have:
$$|Y_0(t,x-\xi,\xi)| \le Ct^{-\frac{(\gamma-2\gamma_0)\alpha}{2}-1}|x-\xi|^{-2+\gamma-2\gamma_0}\left(|x-\xi|\,t^{-\frac{\alpha}{2}}\right)^{2-\gamma+2\gamma_0-2\theta}p(t,x-\xi) \le Ct^{-\frac{(\gamma-2\gamma_0)\alpha}{2}-1}|x-\xi|^{-2+\gamma-2\gamma_0}p(t,x-\xi).$$
Combining the above inequality with Proposition 8, we have:
$$|Y(t,x,\xi)| \le Ct^{-\frac{(\gamma-2\gamma_0)\alpha}{2}-1}|x-\xi|^{-2+\gamma-2\gamma_0}p(t,x-\xi) + Ct^{\frac{(\gamma-\gamma_0)\alpha}{4}-1}|x-\xi|^{-2+\gamma-2\gamma_0}p(t,x-\xi) \le Ct^{-\frac{(\gamma-2\gamma_0)\alpha}{2}-1}|x-\xi|^{-2+\gamma-2\gamma_0}p(t,x-\xi),$$
since $-\frac{(\gamma-2\gamma_0)\alpha}{2}-1 \le \frac{(\gamma-\gamma_0)\alpha}{4}-1$.
Finally, we consider the case $d \ge 5$. From the estimate $|Y_0(t,x-\xi,\xi)| \le Ct^{-\alpha-1}|x-\xi|^{4-d}p(t,x-\xi)$, we obtain:
$$|Y_0(t,x-\xi,\xi)| \le Ct^{-\frac{(\gamma-\gamma_0)\alpha}{4}-1}|x-\xi|^{2-d+\frac{\gamma-\gamma_0}{2}}\left(|x-\xi|\,t^{-\frac{\alpha}{2}}\right)^{2-\frac{\gamma-\gamma_0}{2}}p(t,x-\xi) \le Ct^{-\frac{(\gamma-\gamma_0)\alpha}{4}-1}|x-\xi|^{2-d+\frac{\gamma-\gamma_0}{2}}p(t,x-\xi).$$
Therefore, we have:
$$|Y(t,x,\xi)| \le Ct^{-\frac{(\gamma-\gamma_0)\alpha}{4}-1}|x-\xi|^{2-d+\frac{\gamma-\gamma_0}{2}}p(t,x-\xi).$$
The lemma is then proven. □
The bound Equation (4.1) will greatly simplify our estimation of the multiple integrals that we are going to encounter. However, when the dimension $d$ is greater than or equal to two, the multiple integrals are still complicated to estimate, and our main technique is to reduce the computation to one dimension. This means that we shall further bound the right-hand side of the inequality Equation (4.1) by a product of functions of one variable. Before doing so, we denote the exponents of $t$ and $|x-\xi|$ in Equation (4.1) by $\zeta_d$ and $\kappa_d$. Namely, we denote:
$$\zeta_d = \begin{cases} -1+\frac{\alpha}{2}, & d = 1;\\ -1, & d = 2;\\ -\frac{(\gamma-2\gamma_0)\alpha}{2}-1, & d = 4;\\ -\frac{(\gamma-\gamma_0)\alpha}{4}-1, & d = 3 \text{ or } d \ge 5 \end{cases}$$
and:
$$\kappa_d = \begin{cases} 0, & d = 1,2;\\ -2+\gamma-2\gamma_0, & d = 4;\\ 2-d+\frac{\gamma-\gamma_0}{2}, & d = 3 \text{ or } d \ge 5. \end{cases}$$
From now on, we shall exclusively use $p(t,x) = \exp\left(-\sigma t^{-\frac{\alpha}{2-\alpha}}|x|^{\frac{2}{2-\alpha}}\right)$ to denote a function of one variable $x \in \mathbb{R}$. As before, the constant $\sigma$ may be different in different appearances of $p(t,x)$ (for notational simplicity, we omit the explicit dependence of $p(t,x)$ on $\sigma$).
With this notation, Lemma 9 yields:
Lemma 10. The following bound holds true for the Green's function $Y$:
$$|Y(t,x,\xi)| \le C\prod_{i=1}^d t^{\zeta_d/d}\,|x_i-\xi_i|^{\kappa_d/d}\,p(t,x_i-\xi_i). \tag{4.4}$$
Proof. It is easy to see that:
$$|x| = \left(\sum_{i=1}^d x_i^2\right)^{1/2} \ge \max_{1\le i\le d}|x_i| \ge \prod_{i=1}^d|x_i|^{1/d}.$$
Thus, for any positive number $a > 0$, $|x|^{-a} \le \prod_{i=1}^d|x_i|^{-a/d}$.
On the other hand,
$$|x|^{\frac{2}{2-\alpha}} = \left[\sum_{i=1}^d|x_i|^2\right]^{\frac{1}{2-\alpha}} \ge \left[\max_{1\le i\le d}|x_i|^2\right]^{\frac{1}{2-\alpha}} = \max_{1\le i\le d}|x_i|^{\frac{2}{2-\alpha}} \ge \frac{1}{d}\sum_{i=1}^d|x_i|^{\frac{2}{2-\alpha}}.$$
Combining the above with Equation (4.1) yields Equation (4.4), since the exponents of $|x-\xi|$ in Equation (4.1) are negative. □
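The two elementary inequalities used in this proof (norm versus maximum versus geometric mean, and maximum versus arithmetic mean) are easy to test numerically; a small sketch of my own, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    d = int(rng.integers(1, 6))
    x = rng.normal(size=d)
    ax = np.abs(x) + 1e-12          # tiny shift avoids 0 in the geometric mean
    # |x| >= max_i |x_i| >= (prod_i |x_i|)^(1/d): the Euclidean norm dominates
    # the maximum, and the maximum dominates the geometric mean.
    assert np.linalg.norm(x) >= ax.max() - 1e-9
    assert ax.max() >= np.prod(ax) ** (1.0 / d) - 1e-9
    # max_i |x_i|^q >= (1/d) sum_i |x_i|^q with q = 2/(2 - alpha): max >= mean.
    alpha = rng.uniform(0.1, 0.9)
    q = 2 / (2 - alpha)
    assert (ax ** q).max() >= (ax ** q).mean() - 1e-9
```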
Lemma 11. Let $-1 < \beta \le 0$. Then, there is a constant $C$, depending on $\sigma$, $\alpha$ and $\beta$ but independent of $\xi$ and $s$, such that:
$$\sup_{\xi\in\mathbb{R}}\int_{\mathbb{R}}|x|^{\beta}p(s,x-\xi)\,dx \le Cs^{\frac{\alpha\beta}{2}+\frac{\alpha}{2}}.$$
Proof. Making the substitution $x = ys^{\frac{\alpha}{2}}$, we obtain:
$$\int_{\mathbb{R}}|x|^{\beta}p(s,x-\xi)\,dx = s^{\frac{\alpha\beta}{2}+\frac{\alpha}{2}}\int_{\mathbb{R}}|y|^{\beta}\exp\left(-\sigma\left|y-\xi s^{-\frac{\alpha}{2}}\right|^{\frac{2}{2-\alpha}}\right)dy \le s^{\frac{\alpha\beta}{2}+\frac{\alpha}{2}}\left(\int_{|y|\le1}|y|^{\beta}\,dy + \int_{\mathbb{R}}\exp\left(-\sigma\left|y-\xi s^{-\frac{\alpha}{2}}\right|^{\frac{2}{2-\alpha}}\right)dy\right) \le Cs^{\frac{\alpha\beta}{2}+\frac{\alpha}{2}},$$
since the two integrals inside the parentheses are finite (and independent of $s$ and $\xi$). □
The following is a slight extension of the above lemma.
Lemma 12. There is a constant C, dependent on σ, α and β, but independent of ξ and s, such that:
$$\sup_{\xi\in\mathbb{R}}\int_{\mathbb{R}}|x|^{\beta}\big|\log|x|\big|\,p(s,x-\xi)\,dx \le Cs^{\frac{\alpha\beta}{2}+\frac{\alpha}{2}}\left[1+|\log s|\right].$$
Proof. We shall follow the same idea as in the proof of Lemma 11. Making the substitution $x = ys^{\frac{\alpha}{2}}$, we obtain:
$$\int_{\mathbb{R}}|x|^{\beta}\big|\log|x|\big|\,p(s,x-\xi)\,dx \le Cs^{\frac{\alpha\beta}{2}+\frac{\alpha}{2}}\int_{\mathbb{R}}|y|^{\beta}\left[\big|\log|y|\big|+|\log s|\right]\exp\left(-\sigma\left|y-\xi s^{-\frac{\alpha}{2}}\right|^{\frac{2}{2-\alpha}}\right)dy \le Cs^{\frac{\alpha\beta}{2}+\frac{\alpha}{2}}(1+|\log s|)\left(\int_{|y|\le e}|y|^{\beta}\big|\log|y|\big|\,dy + \int_{\mathbb{R}}\exp\left(-\sigma\left|y-\xi s^{-\frac{\alpha}{2}}\right|^{\frac{2}{2-\alpha}}\right)dy\right) \le Cs^{\frac{\alpha\beta}{2}+\frac{\alpha}{2}}(1+|\log s|).$$
This proves the lemma. □
Lemma 13. Let $\theta_1$ and $\theta_2$ satisfy $-1 < \theta_1 < 0$ and $-1 < \theta_2 \le 0$, and let $0 \le s_1 < s_2 \le T$. Then, for any $\rho_2, \tau_1 \in \mathbb{R}$ with $\rho_2 \ne \tau_1$:
  • If $\theta_1 + \theta_2 = -1$, then:
$$\int_{\mathbb{R}}|\rho_1-\tau_1|^{\theta_1}|\rho_2-\rho_1|^{\theta_2}p(s_2-s_1,\rho_2-\rho_1)\,d\rho_1 \le C + C\big|\log|\rho_2-\tau_1|\big|.$$
  • If $\theta_1 + \theta_2 < -1$, then:
$$\int_{\mathbb{R}}|\rho_1-\tau_1|^{\theta_1}|\rho_2-\rho_1|^{\theta_2}p(s_2-s_1,\rho_2-\rho_1)\,d\rho_1 \le C|\rho_2-\tau_1|^{1+\theta_1+\theta_2}.$$
Proof. Without loss of generality, we suppose $\tau_1 < \rho_2$. We divide the integration domain into four intervals:
$$\int_{\mathbb{R}}|\rho_1-\tau_1|^{\theta_1}|\rho_2-\rho_1|^{\theta_2}p(s_2-s_1,\rho_2-\rho_1)\,d\rho_1 = \int_{-\infty}^{\frac{3\tau_1-\rho_2}{2}} + \int_{\frac{3\tau_1-\rho_2}{2}}^{\frac{\tau_1+\rho_2}{2}} + \int_{\frac{\tau_1+\rho_2}{2}}^{\frac{3\rho_2-\tau_1}{2}} + \int_{\frac{3\rho_2-\tau_1}{2}}^{\infty} =: I_1 + I_2 + I_3 + I_4.$$
Let us consider $I_2$ first. When $\rho_1 \in \left[\frac{3\tau_1-\rho_2}{2}, \frac{\tau_1+\rho_2}{2}\right]$, we have $|\rho_2-\rho_1| \ge \frac{\rho_2-\tau_1}{2}$. Noticing that $p(s_2-s_1,\rho_2-\rho_1) \le 1$, we have the following estimate for $I_2$:
$$I_2 \le \left(\frac{\rho_2-\tau_1}{2}\right)^{\theta_2}\int_{\frac{3\tau_1-\rho_2}{2}}^{\frac{\tau_1+\rho_2}{2}}|\rho_1-\tau_1|^{\theta_1}\,d\rho_1 \le \left(\frac{\rho_2-\tau_1}{2}\right)^{\theta_2}\left[\int_{\tau_1}^{\frac{\tau_1+\rho_2}{2}}(\rho_1-\tau_1)^{\theta_1}\,d\rho_1 + \int_{\frac{3\tau_1-\rho_2}{2}}^{\tau_1}(\tau_1-\rho_1)^{\theta_1}\,d\rho_1\right] = C(\rho_2-\tau_1)^{1+\theta_1+\theta_2}.$$
With the same argument, we have:
$$I_3 \le C(\rho_2-\tau_1)^{1+\theta_1+\theta_2}.$$
Now, we study $I_1$; the term $I_4$ can be analyzed in a similar way. Since $\rho_1 < \frac{3\tau_1-\rho_2}{2} < \tau_1 < \rho_2$ implies $|\rho_2-\rho_1| \ge |\tau_1-\rho_1|$, we have:
$$I_1 \le \int_{-\infty}^{\frac{3\tau_1-\rho_2}{2}}(\tau_1-\rho_1)^{\theta_1+\theta_2}p(s_2-s_1,\rho_2-\rho_1)\,d\rho_1.$$
To estimate the above integral, we divide our estimation into three cases.
Case (i): θ1 + θ2 < − 1.
In this case, we bound p(s2s1, ρ2ρ1) by 1. Thus, we have:
$$I_1 \le \int_{-\infty}^{\frac{3\tau_1-\rho_2}{2}}(\tau_1-\rho_1)^{\theta_1+\theta_2}\,d\rho_1 = \frac{1}{|1+\theta_1+\theta_2|}\left(\frac{\rho_2-\tau_1}{2}\right)^{1+\theta_1+\theta_2}.$$
Case (ii): $\theta_1 + \theta_2 = -1$ and $\frac{\rho_2-\tau_1}{2} \ge 1$.
In this case, we have $\frac{3\tau_1-\rho_2}{2} \le \tau_1 - 1$. Thus, we have:
$$I_1 \le \int_{-\infty}^{\tau_1-1}(\tau_1-\rho_1)^{-1}p(s_2-s_1,\rho_2-\rho_1)\,d\rho_1 \le \int_{-\infty}^{\tau_1-1}p(s_2-s_1,\rho_2-\rho_1)\,d\rho_1 \le \int_{\mathbb{R}}p(s_2-s_1,\rho_2-\rho_1)\,d\rho_1,$$
which is bounded when $s_1$ and $s_2$ are in a bounded domain.
Case (iii): $\theta_1 + \theta_2 = -1$ and $\frac{\rho_2-\tau_1}{2} < 1$.
In this case, we divide the integral into two intervals as follows:
$$I_1 = \int_{-\infty}^{\frac{3\tau_1-\rho_2}{2}}(\tau_1-\rho_1)^{-1}p(s_2-s_1,\rho_2-\rho_1)\,d\rho_1 \le \int_{-\infty}^{\tau_1-1}(\tau_1-\rho_1)^{-1}p(s_2-s_1,\rho_2-\rho_1)\,d\rho_1 + \int_{\tau_1-1}^{\frac{3\tau_1-\rho_2}{2}}(\tau_1-\rho_1)^{-1}\,d\rho_1 \le C + C|\ln(\rho_2-\tau_1)|.$$
A similar argument works for $I_4$. Combining the estimates for $I_k$, $k = 1,2,3,4$, yields the lemma. □
Lemma 14. Let $\theta_1$ and $\theta_2$ satisfy $-1 < \theta_1 < 0$, $-1 < \theta_2 \le 0$ and $\theta_1 + 2\theta_2 > -2$. Let $0 \le r_1 < r_2 \le T$ and $0 \le s_1 < s_2 \le T$. Then, for any $\rho_2, \tau_2 \in \mathbb{R}$ with $\rho_2 \ne \tau_2$, we have:
$$\int_{\mathbb{R}^2}|\rho_1-\tau_1|^{\theta_1}|\rho_2-\rho_1|^{\theta_2}|\tau_2-\tau_1|^{\theta_2}p(s_2-s_1,\rho_2-\rho_1)p(r_2-r_1,\tau_2-\tau_1)\,d\rho_1\,d\tau_1 \le \begin{cases} C(s_2-s_1)^{\frac{\alpha(\theta_1+\theta_2+1)}{2}}(r_2-r_1)^{\frac{\alpha(\theta_2+1)}{2}}, & \theta_1+\theta_2 > -1;\\[4pt] C(r_2-r_1)^{\frac{\alpha(\theta_1+2\theta_2+2)}{2}}, & \theta_1+\theta_2 < -1;\\[4pt] C(r_2-r_1)^{\frac{\alpha(\theta_2+1)}{2}}\left[1+|\log(r_2-r_1)|\right], & \theta_1+\theta_2 = -1. \end{cases}$$
Proof. First, we write:
$$I := \int_{\mathbb{R}^2}|\rho_1-\tau_1|^{\theta_1}|\rho_2-\rho_1|^{\theta_2}|\tau_2-\tau_1|^{\theta_2}p(s_2-s_1,\rho_2-\rho_1)p(r_2-r_1,\tau_2-\tau_1)\,d\rho_1\,d\tau_1 = \int_{\mathbb{R}}f(\tau_1,\rho_2,s_1,s_2,\theta_1,\theta_2)\,|\tau_2-\tau_1|^{\theta_2}p(r_2-r_1,\tau_2-\tau_1)\,d\tau_1, \tag{4.6}$$
where:
$$f(\tau_1,\rho_2,s_1,s_2,\theta_1,\theta_2) = \int_{\mathbb{R}}|\rho_1-\tau_1|^{\theta_1}|\rho_2-\rho_1|^{\theta_2}p(s_2-s_1,\rho_2-\rho_1)\,d\rho_1.$$
We divide the situation into three cases.
Case (i): θ1 + θ2 > − 1.
In this case, we apply Hölder's inequality to obtain:
$$f(\tau_1,\rho_2,s_1,s_2,\theta_1,\theta_2) \le \left\{\int_{\mathbb{R}}|\rho_1-\tau_1|^{\theta_1+\theta_2}p(s_2-s_1,\rho_2-\rho_1)\,d\rho_1\right\}^{\frac{\theta_1}{\theta_1+\theta_2}}\left\{\int_{\mathbb{R}}|\rho_2-\rho_1|^{\theta_1+\theta_2}p(s_2-s_1,\rho_2-\rho_1)\,d\rho_1\right\}^{\frac{\theta_2}{\theta_1+\theta_2}} \le C(s_2-s_1)^{\frac{\alpha(\theta_1+\theta_2)}{2}+\frac{\alpha}{2}}, \tag{4.7}$$
where the last inequality follows from Lemma 11. Substituting the estimate Equation (4.7) into Equation (4.6), we have:
$$I = \int_{\mathbb{R}}f(\tau_1,\rho_2,s_1,s_2,\theta_1,\theta_2)\,|\tau_2-\tau_1|^{\theta_2}p(r_2-r_1,\tau_2-\tau_1)\,d\tau_1 \le C(s_2-s_1)^{\frac{\alpha(\theta_1+\theta_2)}{2}+\frac{\alpha}{2}}\int_{\mathbb{R}}|\tau_2-\tau_1|^{\theta_2}p(r_2-r_1,\tau_2-\tau_1)\,d\tau_1.$$
Using Lemma 11 again, we have:
$$I \le C(s_2-s_1)^{\frac{\alpha(\theta_1+\theta_2)}{2}+\frac{\alpha}{2}}(r_2-r_1)^{\frac{\alpha\theta_2}{2}+\frac{\alpha}{2}}.$$
Case (ii): θ1 + θ2 < −1.
In this case, from Lemma 13, Part (ii), it follows that:
$$f(\tau_1,\rho_2,s_1,s_2,\theta_1,\theta_2) \le C|\rho_2-\tau_1|^{\theta_1+\theta_2+1}.$$
Hence, we have:
$$I \le C\int_{\mathbb{R}}|\rho_2-\tau_1|^{\theta_1+\theta_2+1}|\tau_2-\tau_1|^{\theta_2}p(r_2-r_1,\tau_2-\tau_1)\,d\tau_1.$$
Now, since $\theta_1 + 2\theta_2 + 1 > -1$ by the condition of the lemma, we can use Hölder's inequality, as in the inequality (4.7) in Case (i), to obtain:
$$I \le C\int_{\mathbb{R}}|\rho_2-\tau_1|^{\theta_1+\theta_2+1}|\tau_2-\tau_1|^{\theta_2}p(r_2-r_1,\tau_2-\tau_1)\,d\tau_1 \le C(r_2-r_1)^{\frac{\alpha(\theta_1+2\theta_2)}{2}+\alpha}.$$
Case (iii): θ1 + θ2 = − 1.
In this case, we first use Lemma 13, Part (i), to obtain:
$$f(\tau_1,\rho_2,s_1,s_2,\theta_1,\theta_2) \le C\left[1+\big|\log|\rho_2-\tau_1|\big|\right].$$
Thus, using Lemma 12, we have:
$$I \le C\int_{\mathbb{R}}\left\{1+\big|\log|\rho_2-\tau_1|\big|\right\}|\tau_2-\tau_1|^{\theta_2}p(r_2-r_1,\tau_2-\tau_1)\,d\tau_1 \le C(r_2-r_1)^{\frac{\alpha(\theta_2+1)}{2}}\left[1+|\log(r_2-r_1)|\right].$$
The lemma is then proven. □
Corollary 15. Let $\theta_1$ and $\theta_2$ satisfy $-1 < \theta_1 < 0$, $-1 < \theta_2 \le 0$ and $\theta_1 + 2\theta_2 > -2$. Let $0 \le r_1 < r_2 \le T$ and $0 \le s_1 < s_2 \le T$. Then, for any $\rho_2, \tau_2 \in \mathbb{R}$ with $\rho_2 \ne \tau_2$, we have:
$$\int_{\mathbb{R}^2}|\rho_1-\tau_1|^{\theta_1}|\rho_2-\rho_1|^{\theta_2}|\tau_2-\tau_1|^{\theta_2}p(s_2-s_1,\rho_2-\rho_1)p(r_2-r_1,\tau_2-\tau_1)\,d\rho_1\,d\tau_1 \le \begin{cases} C(s_2-s_1)^{\frac{\alpha(\theta_1+2\theta_2+2)}{4}}(r_2-r_1)^{\frac{\alpha(\theta_1+2\theta_2+2)}{4}}, & \theta_1+\theta_2 \ne -1;\\[4pt] C(s_2-s_1)^{\frac{\alpha(\theta_2+1)}{4}}(r_2-r_1)^{\frac{\alpha(\theta_2+1)}{4}}\big[1+|\log(r_2-r_1)|+|\log(s_2-s_1)|\big], & \theta_1+\theta_2 = -1. \end{cases} \tag{4.8}$$
Proof. Consider first the case $\theta_1 + \theta_2 < -1$. Denote the integral on the left-hand side of Equation (4.8) by $I$. Then, Lemma 14 implies:
$$I \le C(r_2-r_1)^{\frac{\alpha(\theta_1+2\theta_2)}{2}+\alpha}.$$
In the same way (exchanging the roles of the two variables), we have:
$$I \le C(s_2-s_1)^{\frac{\alpha(\theta_1+2\theta_2)}{2}+\alpha}.$$
Now, we use the fact that if three numbers satisfy $a \le b$ and $a \le c$, then $a = a^{1/2}a^{1/2} \le b^{1/2}c^{1/2}$:
$$I \le C(r_2-r_1)^{\frac{\alpha(\theta_1+2\theta_2)}{4}+\frac{\alpha}{2}}(s_2-s_1)^{\frac{\alpha(\theta_1+2\theta_2)}{4}+\frac{\alpha}{2}},$$
which is the first case of Equation (4.8). Exactly the same argument applies to the cases $\theta_1 + \theta_2 = -1$ and $\theta_1 + \theta_2 > -1$. Thus, Lemma 14 implies Equation (4.8). □
Lemma 16. Let $p_1,\dots,p_n > 0$. Then, for any $T > 0$:
$$\int_{0\le s_1<\cdots<s_n\le T}(s_n-s_{n-1})^{p_n-1}\cdots(s_2-s_1)^{p_2-1}s_1^{p_1-1}\,ds = \frac{T^{p_1+\cdots+p_n}\prod_{k=1}^n\Gamma(p_k)}{\Gamma(p_1+\cdots+p_n+1)}.$$
Proof. This is well known. For example, it is a straightforward consequence of Formula 4.634 of [17] with some obvious transformations. □
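For $n = 2$, the formula can be verified by direct quadrature. The sketch below is my own check (not from the paper); $p_1 = 2$ and $p_2 = 3$ are chosen so that the integrand is smooth:

```python
import math

def trap(f, a, b, n=600):
    """Composite trapezoidal rule for f on [a, b]."""
    if b <= a:
        return 0.0
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return s * h

p1, p2, T = 2.0, 3.0, 1.5

# Left-hand side of Lemma 16 for n = 2:
# integral over 0 <= s1 < s2 <= T of (s2 - s1)^(p2-1) * s1^(p1-1) ds1 ds2.
lhs = trap(lambda s2: trap(lambda s1: (s2 - s1) ** (p2 - 1) * s1 ** (p1 - 1), 0.0, s2), 0.0, T)

# Right-hand side: T^(p1+p2) * Gamma(p1) * Gamma(p2) / Gamma(p1+p2+1) = T^5 / 60 here.
rhs = T ** (p1 + p2) * math.gamma(p1) * math.gamma(p2) / math.gamma(p1 + p2 + 1)
assert abs(lhs - rhs) < 1e-3 * rhs
```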
Lemma 17. Assume that u0 is bounded. Then:
$$\sup_{x\in\mathbb{R}^d}\left|\int_{\mathbb{R}^d}Z(t,x,\xi)u_0(\xi)\,d\xi\right| \le C.$$
Proof. We use $Z(t,x,\xi) = Z_0(t,x-\xi,\xi) + V_Z(t,x,\xi)$. Since $u_0$ is bounded,
$$\left|\int_{\mathbb{R}^d}Z_0(t,x-\xi,\xi)u_0(\xi)\,d\xi\right| \le C\int_{\mathbb{R}^d}|Z_0(t,x-\xi,\xi)|\,d\xi,$$
which is bounded by the estimates in Equation (3.4) together with the substitution $\xi = x + t^{\frac{\alpha}{2}}y$. In fact, we have, for example, when $d \ge 3$:
$$\int_{\mathbb{R}^d}|Z_0(t,x-\xi,\xi)|\,d\xi \le C\int_{\mathbb{R}^d}t^{-\alpha}\,t^{\frac{(2-d)\alpha}{2}}|y|^{2-d}\exp\left\{-\sigma|y|^{\frac{2}{2-\alpha}}\right\}t^{\frac{d\alpha}{2}}\,dy \le C.$$
Similarly, using the estimates for $V_Z(t,x,\xi)$ given in Proposition 8, we can bound $\int_{\mathbb{R}^d}|V_Z(t,x,\xi)|\,d\xi$ by a constant. In fact, for example, when $d = 3$, we have:
$$\int_{\mathbb{R}^d}|V_Z(t,x,\xi)|\,d\xi \le Ct^{\frac{\gamma_0\alpha}{2}-\alpha}\int_{\mathbb{R}^d}t^{\frac{3\alpha}{2}}\,t^{\frac{(\gamma-\gamma_0-1)\alpha}{2}}|y|^{\gamma-\gamma_0-1}\exp\left\{-\sigma|y|^{\frac{2}{2-\alpha}}\right\}dy \le Ct^{\frac{\gamma\alpha}{2}} \le C.$$
The other dimensions can be dealt with in the same way. □

5. Proof of the Main Theorem 2

Change t to s and x to y, and the Equation (2.5) for the mild solution becomes:
$$u(s,y) = \int_{\mathbb{R}^d}Z(s,y,\xi)u_0(\xi)\,d\xi + \int_0^s\int_{\mathbb{R}^d}Y(s-r,y,z)u(r,z)\,W^H(dz)\,dr.$$
Substituting the above into Equation (2.5), we have:
$$u(t,x) = \int_{\mathbb{R}^d}Z(t,x,\xi)u_0(\xi)\,d\xi + \int_0^t\int_{\mathbb{R}^d}Y(t-s,x,y)\left[\int_{\mathbb{R}^d}Z(s,y,\xi)u_0(\xi)\,d\xi\right]W^H(dy)\,ds + \int_0^t\int_{\mathbb{R}^d}\int_0^s\int_{\mathbb{R}^d}Y(t-s,x,y)Y(s-r,y,z)u(r,z)\,W^H(dz)\,dr\,W^H(dy)\,ds.$$
We continue to iterate this procedure to obtain:
u ( t , x ) = n = 0 ψ n ( t , x ) ,
where Ψn satisfies the following recursive relation:
ψ 0 ( t , x ) = d Z ( t , x , ξ ) u 0 ( ξ ) d ξ
ψ n + 1 ( t , x ) = 0 t d Y ( t s , x , y ) ψ n ( s , y ) W H ( d y ) ( d s ) , n = 0 , 1 , 2 ,
To write down the explicit expression for the expansion (5.1), we denote:
\[
f_n(t,x;x_1,\cdots,x_n)=\int_{T_n}\int_{\mathbb{R}^d}Y(t-s_n,x,x_n)\cdots Y(s_2-s_1,x_2,x_1)\,Z(s_1,x_1,\xi)u_0(\xi)\,d\xi\,ds,
\]
where:
\[
T_n=\{0\le s_1<s_2<\cdots<s_n\le t\}\qquad\text{and}\qquad ds=ds_1\,ds_2\cdots ds_n.
\]
With these notations, we see from the above iteration procedure that:
\[
\psi_n(t,x)=I_n(\tilde f_n(t,x))=\int_{\mathbb{R}^{nd}}f_n(t,x;x_1,\cdots,x_n)\,W^H(dx_1)\cdots W^H(dx_n)=\int_{\mathbb{R}^{nd}}\tilde f_n(t,x;x_1,\cdots,x_n)\,W^H(dx_1)\cdots W^H(dx_n),
\]
where I_n denotes the multiple Itô-type integral with respect to W^H (see [12]) and f̃_n(t, x; x_1, ⋯, x_n) is the symmetrization of f_n(t, x; x_1, ⋯, x_n) with respect to x_1, ⋯, x_n:
\[
\tilde f_n(t,x;x_1,\cdots,x_n)=\frac{1}{n!}\sum_{(i_1,\cdots,i_n)\in\sigma(n)}f_n(t,x;x_{i_1},\cdots,x_{i_n}),
\]
where σ(n) denotes the set of permutations of (1, 2, ⋯, n).
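Symmetrization over σ(n) can be illustrated concretely (a generic sketch; `symmetrize` is our own helper, not notation from the paper):

```python
# Symmetrize a function of n variables by averaging over all permutations,
# mirroring the definition of f~_n above.
from itertools import permutations
from math import factorial

def symmetrize(f, n):
    def f_sym(*xs):
        return sum(f(*(xs[i] for i in p)) for p in permutations(range(n))) / factorial(n)
    return f_sym

f = lambda x1, x2: x1 * x2**2       # not symmetric in (x1, x2)
fs = symmetrize(f, 2)               # (x1*x2^2 + x2*x1^2) / 2
print(fs(1.0, 3.0), fs(3.0, 1.0))   # prints 6.0 6.0: symmetric by construction
```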
The Expansion (5.1), with the explicit Expression (5.3) for ψ_n, is called the chaos expansion of the solution.
If Equation (2.2) has a square integrable solution, then it has a chaos expansion according to a general theorem of Itô. From the above iteration procedure, it is easy to see that this chaos expansion of the solution is given uniquely by Equations (5.1)–(5.3). This proves uniqueness.
If we can show that the series Equation (5.1) converges in L^2(Ω, F, P), then it is easy to verify that u(t, x) defined by Equations (5.1)–(5.3) satisfies Equation (2.5). Thus, the existence of a solution to Equation (2.2) is established, and the explicit form of the solution is given by Equations (5.1)–(5.3). We refer to [12] for more detail.
Thus, our remaining task is to prove that the series defined by Equation (5.1) converges in L^2(Ω, F, P). To this end, we use the lemmas proven in the previous section.
Now, let u(t, x) be defined by Equations (5.1)–(5.3). Then, we have:
\[
\mathbb{E}\big[u(t,x)^2\big]=\sum_{n=0}^{\infty}\mathbb{E}\big[I_n(\tilde f_n(t,x))^2\big]=\sum_{n=0}^{\infty}n!\,\langle\tilde f_n,\tilde f_n\rangle_{\mathcal H}\le\sum_{n=0}^{\infty}n!\,\langle f_n,f_n\rangle_{\mathcal H},
\]
where:
\[
\langle f_n,f_n\rangle_{\mathcal H}=\int_{\mathbb{R}^{2nd}}\prod_{i=1}^{n}\varphi_H(u_i,v_i)\,f_n(u_1,\cdots,u_n)\,f_n(v_1,\cdots,v_n)\,du_1dv_1\,du_2dv_2\cdots du_ndv_n
\]
and the last inequality follows from the Hölder inequality. Here, and in the remaining part of the paper, we use the following notation:
\[
u_i=(u_i^1,\cdots,u_i^d),\quad du_i=du_i^1\cdots du_i^d,\quad i=1,2,\cdots,n;\qquad \varphi_H(u_i,v_i)=\prod_{j=1}^{d}\varphi_{H_j}(u_i^j,v_i^j)=\prod_{j=1}^{d}H_j(2H_j-1)\,|u_i^j-v_i^j|^{2H_j-2}.
\]
We use the idea in [12] to estimate each term Θ_n(t, x) = n!⟨f_n, f_n⟩_H in the series (5.4). By the defining Formula (5.2) for f_n, we have:
\[
\begin{aligned}
\Theta_n(t,x)=n!\int_{T_n^2}\int_{\mathbb{R}^{2nd}}&\prod_{i=1}^{n}\varphi_H(\xi_i-\eta_i)\,Y(t-s_n,x,\xi_n)\cdots Y(s_2-s_1,\xi_2,\xi_1)\Big(\int_{\mathbb{R}^d}Z(s_1,\xi_1,\xi_0)u_0(\xi_0)\,d\xi_0\Big)\\
&\times Y(t-r_n,x,\eta_n)\cdots Y(r_2-r_1,\eta_2,\eta_1)\Big(\int_{\mathbb{R}^d}Z(r_1,\eta_1,\eta_0)u_0(\eta_0)\,d\eta_0\Big)\,d\xi\,d\eta\,ds\,dr.
\end{aligned}
\]
Application of Lemma 17 to the above integral yields:
\[
\Theta_n(t,x)\le C\,n!\int_{T_n^2}\int_{\mathbb{R}^{2nd}}\prod_{i=1}^{n}\varphi_H(\xi_i-\eta_i)\,Y(t-s_n,x,\xi_n)\cdots Y(s_2-s_1,\xi_2,\xi_1)\,Y(t-r_n,x,\eta_n)\cdots Y(r_2-r_1,\eta_2,\eta_1)\,d\xi\,d\eta\,ds\,dr.
\]
Using Lemma 10 for the above integral, we have:
\[
\Theta_n(t,x)\le C^n\,n!\int_{T_n^2}\prod_{i=1}^{d}\Theta_{i,n}(t,x^i,s,r)\,ds\,dr,
\]
where:
\[
\begin{aligned}
\Theta_{i,n}(t,x^i,s,r)=\int_{\mathbb{R}^{2n}}\Big\{\prod_{k=1}^{n}\varphi_{H_i}(\rho_k-\tau_k)\Big\}\,&|t-s_n|^{-\varsigma_d/d}\,|x^i-\rho_n|^{-\kappa_d/d}\,p(t-s_n,x^i-\rho_n)\cdots|s_2-s_1|^{-\varsigma_d/d}\,|\rho_2-\rho_1|^{-\kappa_d/d}\,p(s_2-s_1,\rho_2-\rho_1)\\
\times\,&|t-r_n|^{-\varsigma_d/d}\,|x^i-\tau_n|^{-\kappa_d/d}\,p(t-r_n,x^i-\tau_n)\cdots|r_2-r_1|^{-\varsigma_d/d}\,|\tau_2-\tau_1|^{-\kappa_d/d}\,p(r_2-r_1,\tau_2-\tau_1)\,d\rho\,d\tau.
\end{aligned}
\]
Here, we use the notation ρ_k = ξ_k^i and τ_k = η_k^i, k = 1, ⋯, n. The quantity Θ_{i,n} can be written as:
\[
\begin{aligned}
\Theta_{i,n}(t,x^i,s,r)=\,&|t-s_n|^{-\varsigma_d/d}\,|t-r_n|^{-\varsigma_d/d}\cdots|s_2-s_1|^{-\varsigma_d/d}\,|r_2-r_1|^{-\varsigma_d/d}\int_{\mathbb{R}^{2n}}\Big\{\prod_{k=1}^{n}\varphi_{H_i}(\rho_k-\tau_k)\Big\}\,|x^i-\rho_n|^{-\kappa_d/d}\,p(t-s_n,x^i-\rho_n)\\
&\times|x^i-\tau_n|^{-\kappa_d/d}\,p(t-r_n,x^i-\tau_n)\cdots|\rho_2-\rho_1|^{-\kappa_d/d}\,p(s_2-s_1,\rho_2-\rho_1)\,|\tau_2-\tau_1|^{-\kappa_d/d}\,p(r_2-r_1,\tau_2-\tau_1)\,d\rho\,d\tau.
\end{aligned}
\]
From the definition Equation (4.3) of κ_d, we see easily that −κ_d/d > −1. We assume:
\[
2H_i-\frac{2\kappa_d}{d}>0.
\]
Under the above condition, we can apply Corollary 15 with θ_1 = 2H_i − 2 > −1, θ_2 = −κ_d/d > −1 to the integration with respect to dρ_1dτ_1 in Expression (5.7) (Condition (5.8) implies that θ_1 + 2θ_2 > −2). Then, when θ_1 + θ_2 ≠ −1, we have:
\[
\begin{aligned}
\Theta_{i,n}(t,x^i,s,r)\le\,&C\,|t-s_n|^{-\varsigma_d/d}\,|t-r_n|^{-\varsigma_d/d}\cdots|s_3-s_2|^{-\varsigma_d/d}\,|r_3-r_2|^{-\varsigma_d/d}\,|s_2-s_1|^{-\frac{\varsigma_d}{d}+\frac{H_i\alpha}{2}-\frac{\kappa_d\alpha}{2d}}\,|r_2-r_1|^{-\frac{\varsigma_d}{d}+\frac{H_i\alpha}{2}-\frac{\kappa_d\alpha}{2d}}\\
&\times\int_{\mathbb{R}^{2n-2}}\Big\{\prod_{k=2}^{n}\varphi_{H_i}(\rho_k-\tau_k)\Big\}\,|x^i-\rho_n|^{-\kappa_d/d}\,p(t-s_n,x^i-\rho_n)\,|x^i-\tau_n|^{-\kappa_d/d}\,p(t-r_n,x^i-\tau_n)\\
&\qquad\cdots|\rho_3-\rho_2|^{-\kappa_d/d}\,p(s_3-s_2,\rho_3-\rho_2)\,|\tau_3-\tau_2|^{-\kappa_d/d}\,p(r_3-r_2,\tau_3-\tau_2)\,d\rho_n\cdots d\rho_2\,d\tau_n\cdots d\tau_2.
\end{aligned}
\]
Repeatedly applying this argument, we obtain:
\[
\Theta_{i,n}(t,x^i,s,r)\le C^n\prod_{k=1}^{n}|s_{k+1}-s_k|^{\ell_i}\,|r_{k+1}-r_k|^{\ell_i},
\]
where we use the convention that s_{n+1} = r_{n+1} = t and where:
\[
\ell_i=-\frac{\varsigma_d}{d}+\frac{H_i\alpha}{2}-\frac{\kappa_d\alpha}{2d}.
\]
Substituting the above estimate of Θi,n into the expression for Θn, we have:
\[
\Theta_n(t,x)\le C^n\,n!\int_{T_n^2}\prod_{k=1}^{n}(s_{k+1}-s_k)^{\ell}\,(r_{k+1}-r_k)^{\ell}\,ds\,dr=C^n\,n!\Big[\int_{T_n}\prod_{k=1}^{n}(s_{k+1}-s_k)^{\ell}\,ds\Big]^2,
\]
where:
\[
\ell=\sum_{i=1}^{d}\ell_i=-\varsigma_d+\frac{|H|\alpha}{2}-\frac{\kappa_d\alpha}{2}\qquad\text{with}\quad |H|=\sum_{i=1}^{d}H_i.
\]
Now, we apply Lemma 16 to obtain:
\[
\Theta_n(t,x)\le C^n\,n!\,\Big[\frac{\Gamma(\ell+1)^n}{\Gamma(n(\ell+1)+1)}\Big]^2\le\frac{C^n\,n!}{\Gamma(2n(\ell+1))}.
\]
This estimate combined with Equation (5.4) proves that if:
\[
2(\ell+1)>1,
\]
then Σ_{n=0}^∞ Θ_n(t, x) is finite, which implies that the series (5.1) converges in L^2(Ω, F, P).
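The role of this condition can be seen numerically: by Stirling's formula, the terms C^n n!/Γ(2n(ℓ + 1)) in the bound above vanish super-exponentially exactly when 2(ℓ + 1) > 1, i.e. ℓ > −1/2. A quick sketch (the parameter values are ours):

```python
# Terms C^n * n! / Gamma(2n(l+1)) bounding Theta_n: summable when 2(l+1) > 1.
from math import gamma, factorial

def term(C, l, n):
    return C**n * factorial(n) / gamma(2 * n * (l + 1))

# l = -0.2 satisfies 2(l+1) > 1: the terms decay super-exponentially.
print(sum(term(5.0, -0.2, n) for n in range(1, 60)))
# l = -0.6 violates it: the terms grow instead of decaying.
print(term(5.0, -0.6, 40) > term(5.0, -0.6, 20))  # prints True
```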
Now, we analyze the above Condition (5.10). By the definition of ℓ, this condition can be written as:
\[
\ell=-\varsigma_d+\frac{|H|\alpha}{2}-\frac{\kappa_d\alpha}{2}>-\frac{1}{2},
\]
or:
\[
|H|>-\frac{1}{\alpha}+\kappa_d+\frac{2\varsigma_d}{\alpha}.
\]
Using the definitions of κ_d and ς_d given by Equations (4.2) and (4.3), we see that the right-hand side of Equation (5.11) is:
\[
-\frac{1}{\alpha}+\kappa_d+\frac{2\varsigma_d}{\alpha}=
\begin{cases}
-\dfrac{1}{\alpha}+0+\dfrac{2}{\alpha}\Big(1-\dfrac{\alpha}{2}\Big)=\dfrac{1}{\alpha}-1 & \text{when } d=1,\\[2mm]
-\dfrac{1}{\alpha}+0+\dfrac{2}{\alpha}\cdot 1=\dfrac{1}{\alpha} & \text{when } d=2,\\[2mm]
-\dfrac{1}{\alpha}+(2-\gamma+2\gamma_0)+\dfrac{2}{\alpha}\Big(\dfrac{(\gamma-2\gamma_0)\alpha}{2}+1\Big)=\dfrac{1}{\alpha}+2 & \text{when } d=4,\\[2mm]
-\dfrac{1}{\alpha}+\Big(d-2-\dfrac{\gamma-\gamma_0}{2}\Big)+\dfrac{2}{\alpha}\Big(\dfrac{(\gamma-\gamma_0)\alpha}{4}+1\Big)=\dfrac{1}{\alpha}-2+d & \text{when } d=3 \text{ or } d\ge5.
\end{cases}
\]
Summarizing the above computations, we obtain that Condition (5.11) or Condition (5.10) is equivalent to:
\[
\sum_{i=1}^{d}H_i>d-2+\frac{1}{\alpha}.
\]
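This final condition is easy to test for concrete parameters (the sample values below are ours):

```python
# Theorem 2's condition: sum_i H_i > d - 2 + 1/alpha, with H_i in (1/2, 1).
def condition_holds(H, alpha):
    d = len(H)
    return sum(H) > d - 2 + 1.0 / alpha

print(condition_holds((0.9,), 0.8))       # d = 1: threshold 1/0.8 - 1 = 0.25 -> True
print(condition_holds((0.8, 0.9), 0.55))  # d = 2: threshold 1/0.55 ~ 1.818 -> False
```

In particular, for d = 2 the condition forces α > 1/2, since Σ H_i < 2.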
When θ1 + θ2 = − 1, Corollary 15 implies that, for any ε > 0,
\[
\int_{\mathbb{R}^2}|\rho_1-\tau_1|^{\theta_1}\,|\rho_2-\rho_1|^{\theta_2}\,|\tau_2-\tau_1|^{\theta_2}\,p(s_2-s_1,\rho_2-\rho_1)\,p(r_2-r_1,\tau_2-\tau_1)\,d\rho_1\,d\tau_1\le C\,(s_2-s_1)^{\frac{\alpha(\theta_2+1-\varepsilon)}{4}}\,(r_2-r_1)^{\frac{\alpha(\theta_2+1-\varepsilon)}{4}}.
\]
Now, we can follow the same argument as above to obtain that if:
\[
2(\ell+1)>1,
\]
where now ℓ = −ς_d + (d − κ_d)α/4, then Σ_{n=0}^∞ Θ_n(t, x) is convergent in L^2(Ω, F, P). In the same way as in the case θ_1 + θ_2 ≠ −1, we can show that Condition (5.12) implies Equation (5.13).
Now, we consider Condition (5.8). From the definition Equation (4.3) of κ_d, we see that when d = 1, 2, 3, 4, H_i > 1/2 implies Equation (5.8). When d ≥ 5, Condition (5.8) is implied by the following:
\[
H_i>1-\frac{2}{d}-\frac{\gamma}{2d}
\]
by choosing γ_0 sufficiently small. Theorem 2 is then proven. □

Acknowledgments

Yaozhong Hu is partially supported by a grant from the Simons Foundation #209206 and by the General Research Fund of the University of Kansas.
The authors thank Jingyu Huang and the anonymous referees for helpful comments.
MSC classifications: 26A33; 60H15; 60H05; 35K40; 35R60

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bronstein, I.; Israel, Y.; Kepten, E.; Mai, S.; Shav-Tal, Y.; Barkai, E.; Garini, Y. Transient anomalous diffusion of telomeres in the nucleus of mammalian cells. Phys. Rev. Lett. 2009, 103, 018102. [Google Scholar]
  2. Hellmann, M.; Heermann, D.W.; Weiss, M. Enhancing phosphorylation cascades by anomalous diffusion. EPL 2012, 97, 58004. [Google Scholar]
  3. Soula, H.; Caré, B.; Beslon, G.; Berry, H. Anomalous versus slowed-down Brownian diffusion in the ligand-binding equilibrium. Biophys. J. 2013, 105, 2064–2073. [Google Scholar]
  4. Yuste, S.B.; Abad, E.D.; Lindenberg, K. Reaction-subdiffusion model of morphogen gradient formation. Phys. Rev. E 2010, 82. [Google Scholar]
  5. Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives. Theory and Applications; Gordon and Breach Science Publishers: Yverdon, Switzerland, 1993. [Google Scholar]
  6. Langlands, T.A.M.; Henry, B.I.; Wearne, S.L. Fractional cable equation models for anomalous electrodiffusion in nerve cells: Finite domain solutions. SIAM J. Appl. Math. 2009, 71, 1168–1203. [Google Scholar]
  7. Fedotov, S. Subdiffusion, chemotaxis, and anomalous aggregation. Phys. Rev. E 2011, 83, 021110. [Google Scholar]
  8. Fedotov, S.; Falconer, S. Nonlinear degradation-enhanced transport of morphogens performing subdiffusion. Phys. Rev. E 2014, 89, 012107. [Google Scholar]
  9. Langlands, T.A.M.; Henry, B.I. Fractional chemotaxis diffusion equations. Phys. Rev. E 2010, 81, 051102. [Google Scholar]
  10. Fedotov, S.; Falconer, S. Subdiffusive master equation with space dependent anomalous exponent and structural instability. Phys. Rev. E 2012, 85, 031132. [Google Scholar]
  11. Straka, P.; Fedotov, S. Transport equations for subdiffusion with nonlinear particle interaction. J. Theor. Biol. 2015, 366, 71–83. [Google Scholar]
  12. Hu, Y. Heat equations with fractional white noise potentials. Appl. Math. Opt. 2001, 43, 221–243. [Google Scholar]
  13. Eidelman, S.D.; Kochubei, A.N. Cauchy problem for fractional diffusion equations. J. Diff. Equ. 2004, 199, 211–255. [Google Scholar]
  14. Kochubei, A.N. Fractional-order diffusion. Diff. Equ. 1990, 26, 485–492. [Google Scholar]
  15. Schneider, W.R. Fractional diffusion and wave equations. J. Math. Phys. 1989, 30, 134–144. [Google Scholar]
  16. Kilbas, A.A.; Saigo, M. H-Transforms. Theory and Applications; Analytical Methods and Special Functions, 9; Chapman & Hall/CRC: Boca Raton, FL, USA, 2004. [Google Scholar]
  17. Gradshteyn, I.S.; Ryzhik, I.M. Table of Integrals, Series, and Products, 7th ed; Academic Press: Waltham, MA, USA, 2007. [Google Scholar]

Hu, G.; Hu, Y. Fractional Diffusion in Gaussian Noisy Environment. Mathematics 2015, 3, 131-152. https://0-doi-org.brum.beds.ac.uk/10.3390/math3020131
