Article

Renormalization Group Method for a Stochastic Differential Equation with Multiplicative Fractional White Noise

School of Mathematics, Jilin University, Changchun 130012, China
Submission received: 11 December 2023 / Revised: 21 January 2024 / Accepted: 23 January 2024 / Published: 24 January 2024
(This article belongs to the Section Difference and Differential Equations)

Abstract

In this paper, we present an application of the renormalization group method developed by Chen, Goldenfeld and Oono to a stochastic differential equation in a space of Hilbert space-valued generalized random variables with multiplicative noise. The driving process is a real-valued fractional white noise with a Hurst parameter greater than $1/2$. The stochastic integration is understood in the Wick–Itô–Skorohod sense. This article generalizes results of Glatt-Holtz and Ziane, which were obtained for systems driven by white noise. We first demonstrate the process of formulating the renormalization group equation and the asymptotic solution. Then, we give a rigorous proof of the consistency of the approximate solution. In addition, some numerical comparisons are given to illustrate the validity of our results.

1. Introduction

This paper considers the following stochastic differential equation with multiplicative fractional white noise in an infinite-dimensional setting:
$$du(\tau) = \frac{1}{\varepsilon} A u(\tau)\,d\tau + F u(\tau)\,d\tau + \varepsilon^{m} B u(\tau) \diamond W^{H}(\tau)\,d\tau, \qquad (1)$$
where $A$ and $B$ are bounded linear operators on the separable real Hilbert space $V$, $F$ is a non-linear operator on $V$, $W^{H}(\tau)$ is the $V$-valued cylindrical fractional white noise with a fixed Hurst parameter $H \in (1/2,1)$, the product $\diamond$ involving the fractional white noise is taken in the Wick sense, and $1/2 < m < 1$, $0 < \varepsilon \ll 1$.
In stochastic analysis, Brownian motion is generally used to describe random phenomena. It is a continuous random process with independent, normally distributed increments. However, empirical studies of random phenomena often show strongly correlated "anomalous" behavior among distant samples [1]. In recent years, there has been great interest in studying noise that has no independent increments but possesses long-range dependence, long memory and self-similarity, for example, in the fields of hydrology [2], communication [3] and mathematical finance [4]. Fractional Brownian motion is one of the simplest such processes [5]. It is Gaussian, self-similar and has stationary increments. It is an extension of classical Brownian motion and is often used to capture these "anomalous" behaviors [1].
In view of the above special properties, stochastic differential equations driven by fractional Brownian motion have been used as models for many practical problems [6,7,8,9], and the theory has been developed further [10]. Research on infinite-dimensional stochastic differential equations driven by fractional white noise, especially multiplicative noise, has also developed. For example, [11] investigated a class of linear evolution equations with multiplicative fractional Gaussian noise in Hilbert spaces. When the Hurst exponent $H \in (1/2,1)$, the existence and uniqueness of weak, strong and mild solutions were obtained. Reference [12] studied equations of the form of Equation (1) with $H \in (0,1)$ and established the existence and uniqueness of mild solutions, which paves the way for the well-posedness of the equation studied in this paper. Reference [13] analyzed a class of stochastic affine evolution equations with bilinear noise terms; the driving process was a real-valued fractional Brownian motion with Hurst parameter greater than $1/2$, and the existence and uniqueness of weak solutions were proven.
In addition to the qualitative analysis of the equation, it is important to study the behavior of stochastic differential equations with fractional Brownian motion. In general, exact solutions of stochastic differential equations are not available, and the main difficulties are manifold. First, fractional Brownian motion is not a semimartingale if $H \ne 1/2$, so one cannot apply the well-developed classical methods. Second, many definitions of integrals with respect to fractional Brownian motion exist, such as Wiener integrals, divergence-type integrals, Wick–Itô–Skorohod integrals and pathwise integrals [10], and different research methods are used accordingly. The literature investigating the existence of solutions of nonlinear equations in the sense of Wick integrals is scarce, let alone the construction of approximate solutions. In particular, Equation (1) can be considered a singular perturbation problem. The challenge is that when we try to find a power series solution with respect to $\varepsilon$, starting from the direct expansion, we often obtain non-uniform results, which may contain secular terms.
Therefore, finding a suitable method to construct an approximate solution for Equation (1) is worth investigating. In general, people often seek the numerical solution or asymptotic solution, or a combination of the two. At present, the numerical solution of stochastic differential equations has developed into an important part of stochastic analysis, see references [14,15]. For the asymptotic solution, one of the most effective is the perturbation method, which is a semi-analytical and semi-numerical method [16,17,18].
Recently, there have been some related studies. Glatt-Holtz and Ziane [19] analyzed the asymptotic solution of a class of stochastic differential equations,
$$dX_\tau + \frac{1}{\varepsilon} A X_\tau\,d\tau = F X_\tau\,d\tau + \varepsilon^{m} G(\tau, \tau/\varepsilon)\,dW,$$
by using the renormalization group (RG) method developed by Chen, Goldenfeld and Oono in the 1990s [20], where $A$ is an antisymmetric or symmetric positive semidefinite matrix, $F$ is a polynomial vector-valued function of degree $d$, $G(\tau, \tau/\varepsilon)$ is bounded independently of $\tau$ and $\tau/\varepsilon$, and $W$ stands for an $n$-dimensional Brownian motion. Qu, Li and Shi [21] investigated the asymptotic behavior of the system
$$dX_\tau + \frac{1}{\varepsilon} A X_\tau\,d\tau = F X_\tau\,d\tau + \varepsilon^{1/2} G(X_\tau)\,dW$$
by applying the RG method. They proved that the error between the approximate solution and the exact solution remains of order $O(\varepsilon^{\gamma})$ $(0 < \gamma < 1/2)$ with high probability. Inspired by the above studies, we seek an asymptotic solution of Equation (1) by using the RG method.
The benefits of the RG approach are extensive. Firstly, it does not need to make specific assumptions about the structure of the perturbed sequence, nor does it need to apply asymptotic matching. Secondly, as mentioned in [22,23], the computational efficiency of the RG method is higher than that of the traditional singular perturbation approach. For example, compared with normal form theory, the secular terms can be identified more readily by inspecting the naive asymptotic expansion than by checking the vector field. Moreover, the asymptotic solutions obtained by the RG method contain deeper insights into the structure or physical properties of the problem. For example, reference [23] studied a class of systems with autonomous perturbations and pointed out that the RG equation obtained by the RG method was the amplitude equation of the system. The amplitude equation is an equation that governs the long time dynamics of the system, so the RG equation reflects the essential characteristics of the system. For more details on the RG method, see references [24,25].
To obtain a uniformly valid approximate solution of (1), we derive a renormalized system
$$dW(\tau) = R_F W(\tau)\,d\tau + \varepsilon^{m} B W(\tau) \diamond W^{H}(\tau)\,d\tau. \qquad (2)$$
The solution of (2) with the initial condition $W(0) = \theta$ defines an approximate solution of (1):
$$\bar u(\tau) = e^{\frac{\tau}{\varepsilon}A} W(\tau) + \int_0^{\tau} e^{\frac{\tau}{\varepsilon}A} N_F(s) W(\tau)\,ds.$$
We prove that, for any $T \in (0,+\infty)$, $\bar u(\tau)$ is a valid approximation:
$$\sup_{\tau \in [0,T]} \| u(\tau) - \bar u(\tau) \|_{-1,-3} \le C \varepsilon^{1/2},$$
where the norm $\|\cdot\|_{-1,-3}$ is defined in Section 2; for the specific meaning of the operators $R_F$ and $N_F$, refer to Section 3.1. It is worth noting that if the driving process is a classical Brownian motion $B(t)$, there is no stochastic term in (2). This is because, in the naive expansion of the solution, $\|B\theta \diamond B(t)\|_{-1,-3}$ is uniformly bounded and hence not a secular term. For details, please refer to Section 3.1 and Appendix A.
The organization of this article is as follows. Section 2 presents some basic concepts concerning the spaces of generalized random variables and some definitions. Section 3 elaborates on the procedure of the RG method for Equation (1). In particular, Section 3.1 derives the naive expansion solution and a reduced system using the RG method. Section 3.2 gives the proof of the consistency of the asymptotic solution. We apply our results to two examples and provide some numerical simulation diagrams of the approximate solution in Section 4. The last section elaborates on our conclusions. Throughout the article, we denote general constants by $C$, which may vary from line to line.

2. Preliminaries

We assume that the underlying space is the white noise probability space $(S'(\mathbb{R}), \mathcal{B}, \mu)$, where $S'(\mathbb{R})$ denotes the space of tempered distributions over the space of rapidly decreasing test functions $S(\mathbb{R})$, $\mathcal{B}$ is the Borel sigma-algebra generated by the weak-star topology on $S'(\mathbb{R})$, and $\mu$ is the unique probability measure satisfying
$$\int_{S'(\mathbb{R})} e^{i\langle \omega, \phi\rangle}\,d\mu(\omega) = \exp\Big(-\frac{1}{2}\|\phi\|^2_{L^2(\mathbb{R})}\Big),$$
given by the Bochner–Minlos theorem, where $\langle \omega, \phi\rangle = \omega(\phi)$ denotes the action of $\omega \in S'(\mathbb{R})$ on $\phi \in S(\mathbb{R})$.
We denote by $L^2(\mu)$ the space $L^2(S'(\mathbb{R}), \mu)$ of square integrable functions on $S'(\mathbb{R})$ with values in $\mathbb{R}$. The family $\{H_\alpha\}_{\alpha \in \mathcal{T}}$ is an orthogonal basis of $L^2(\mu)$ (see, e.g., [26], Theorem 2.2.3), where $H_\alpha$ is constructed by means of the Hermite orthogonal polynomials $h_n$ and the Hermite functions $\xi_n$,
$$H_\alpha(\omega) := \prod_{i=1}^{\infty} h_{\alpha_i}(\langle \omega, \xi_i\rangle), \qquad \alpha \in \mathcal{T},\ \omega \in S'(\mathbb{R}).$$
Here, $\mathcal{T}$ is the set of multi-indices $(\mathbb{N}_0^{\mathbb{N}})_c$ (with $\mathbb{N}_0 = \mathbb{N}\cup\{0\}$), i.e., the set of sequences of non-negative integers with only finitely many nonzero components.
Let $(V, \langle\cdot,\cdot\rangle_V, \|\cdot\|_V)$ be a separable real Hilbert space with an orthonormal basis $\{e_i\}_{i=1}^{\infty}$. Denote by $L^2(\mu; V)$ the space of $V$-valued random variables on $S'(\mathbb{R})$ that are square Bochner integrable with respect to the measure $\mu$. Then, the family $\{H_\alpha e_i\}_{\alpha \in \mathcal{T},\, i \in \mathbb{N}}$ forms an orthogonal basis of $L^2(\mu; V)$.
Next, we define the spaces of V -valued generalized random variables and the Wick calculus.
Definition 1. 
(1) For $\rho \in [0,1]$ and $q \in \mathbb{N}$, we define the spaces $S(V)_{\rho,q}$ as
$$S(V)_{\rho,q} = \Big\{ f = \sum_{\alpha \in \mathcal{T}} c_\alpha H_\alpha \in L^2(\mu; V) : \|f\|^2_{\rho,q} = \sum_{\alpha \in \mathcal{T}} \|c_\alpha\|^2_V\, (\alpha!)^{1+\rho}\, (2\mathbb{N})^{\alpha q} < \infty \Big\},$$
where $\alpha! := \prod_k \alpha_k!$ and $(2\mathbb{N})^{\alpha q} := \prod_k (2k)^{\alpha_k q}$.
(2) For $\rho \in [0,1]$ and $q \in \mathbb{N}$, we define the spaces $S(V)_{-\rho,-q}$ as
$$S(V)_{-\rho,-q} = \Big\{ f = \sum_{\alpha \in \mathcal{T}} c_\alpha H_\alpha : c_\alpha \in V,\ \|f\|^2_{-\rho,-q} = \sum_{\alpha \in \mathcal{T}} \|c_\alpha\|^2_V\, (\alpha!)^{1-\rho}\, (2\mathbb{N})^{-\alpha q} < \infty \Big\}.$$
It can be seen that $S(V)_{-\rho,-q_1} \subset S(V)_{-\rho,-q_2}$ for $q_1 < q_2$.
We also define
$$S(V)_{\rho} = \bigcap_{q=1}^{\infty} S(V)_{\rho,q} \qquad \text{and} \qquad S(V)_{-\rho} = \bigcup_{q=1}^{\infty} S(V)_{-\rho,-q}.$$
We say that $f(t) = \sum_{\alpha \in \mathcal{T}} c_\alpha(t) H_\alpha$ is a $V$-valued generalized random variable, i.e., $f(t) \in S(V)_{-\rho}$, if there exists $q \in \mathbb{N}$ such that $\|f\|_{-\rho,-q} < \infty$.
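To make the weights in Definition 1 concrete, the following small sketch (in Python, which is purely illustrative and not part of the paper; the later simulations use MATLAB) evaluates $\alpha!$, $(2\mathbb{N})^{q\alpha}$ and the contribution of a single chaos coefficient to $\|\cdot\|^2_{-\rho,-q}$.

```python
from math import factorial

def multi_factorial(alpha):
    # alpha! = prod_k alpha_k!
    out = 1
    for a in alpha:
        out *= factorial(a)
    return out

def weight_2N(alpha, q):
    # (2N)^{q*alpha} = prod_k (2k)^{q*alpha_k}, with k starting from 1
    out = 1.0
    for k, a in enumerate(alpha, start=1):
        out *= (2 * k) ** (q * a)
    return out

def norm_contribution(c_alpha_normsq, alpha, rho=1, q=3):
    # one term of ||f||_{-rho,-q}^2 = sum_alpha ||c_alpha||_V^2 (alpha!)^{1-rho} (2N)^{-q*alpha}
    return c_alpha_normsq * multi_factorial(alpha) ** (1 - rho) / weight_2N(alpha, q)

# example: alpha = (2, 0, 1) corresponds to H_alpha = h_2(<w, xi_1>) * h_1(<w, xi_3>)
print(norm_contribution(1.0, (2, 0, 1)))   # = (2*1)^{-6} * (2*3)^{-3} for rho = 1, q = 3
```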
Next, we introduce the Wick product on $S(V)_{-1}$. Consider $F, G \in S(V)_{-1}$ of the form
$$F = \sum_{\alpha \in \mathcal{T}} \sum_{i=1}^{\infty} c_{i\alpha} H_\alpha e_i = \sum_{i=1}^{\infty} F_i e_i, \qquad G = \sum_{\beta \in \mathcal{T}} \sum_{i=1}^{\infty} b_{i\beta} H_\beta e_i = \sum_{i=1}^{\infty} G_i e_i,$$
where $c_{i\alpha}, b_{i\beta} \in \mathbb{R}$ and $F_i, G_i \in (S)_{-1}$ (the space of generalized random variables).
Definition 2. 
The Wick product of the stochastic processes $F$ and $G$ is
$$F \diamond G := \sum_{i=1}^{\infty} (F_i \diamond G_i)\, e_i = \sum_{\gamma \in \mathcal{T}} \theta_\gamma H_\gamma,$$
where $\theta_\gamma = \sum_{i=1}^{\infty} \sum_{\alpha+\beta=\gamma} c_{i\alpha} b_{i\beta}\, e_i \in V$. If at least one of $F_i$ and $G_i$ is deterministic, then $F_i \diamond G_i = F_i \cdot G_i$ (the Wick product becomes the ordinary (pointwise) product).
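Since the Wick product acts on chaos expansions by "convolving" multi-indices, a minimal sketch for finitely supported expansions of two scalar elements $F = \sum_\alpha c_\alpha H_\alpha$ and $G = \sum_\beta b_\beta H_\beta$ looks as follows (Python, illustrative only; the dictionary representation is an assumption of this sketch).

```python
from collections import defaultdict

def add_multiindex(a, b):
    # componentwise sum of two multi-indices, padding the shorter one with zeros
    n = max(len(a), len(b))
    a = a + (0,) * (n - len(a))
    b = b + (0,) * (n - len(b))
    return tuple(x + y for x, y in zip(a, b))

def wick(F, G):
    # F, G: dict mapping multi-index tuple -> real coefficient
    # (F ◇ G)_gamma = sum_{alpha + beta = gamma} c_alpha * b_beta
    out = defaultdict(float)
    for alpha, c in F.items():
        for beta, b in G.items():
            out[add_multiindex(alpha, beta)] += c * b
    return dict(out)

# H_0 = 1 and sigma^1 = (1,):  F = 1 + 2*H_{(1,)},  G = 3*H_{(1,)}
F = {(): 1.0, (1,): 2.0}
G = {(1,): 3.0}
print(wick(F, G))   # {(1,): 3.0, (2,): 6.0}
```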
Additionally, we give the definition of the cylindrical fractional Brownian motion. To do this, we need to define a bijection $n(\cdot,\cdot): \mathbb{N} \times \mathbb{N} \to \mathbb{N}$,
$$n(i,j) := j + \frac{(i+j-1)(i+j-2)}{2}.$$
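This pairing runs through $\mathbb{N}\times\mathbb{N}$ along anti-diagonals; a quick sketch (a hypothetical helper, not from the paper) checking bijectivity on a small triangle:

```python
def n_pair(i, j):
    # n(i, j) = j + (i + j - 1)(i + j - 2) / 2, a bijection N x N -> N
    return j + (i + j - 1) * (i + j - 2) // 2

# on the triangle i + j <= 6 the values 1..15 each occur exactly once
triangle = sorted(n_pair(i, j) for i in range(1, 6) for j in range(1, 6) if i + j <= 6)
assert triangle == list(range(1, 16))
```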
Moreover, for every $H \in (1/2,1)$ and $f \in S(\mathbb{R})$, we define the operator $M$ as
$$Mf(x) = C_H \int_{\mathbb{R}} \frac{f(t)}{|t-x|^{3/2-H}}\,dt,$$
where $C_H = \big[2\Gamma(H-1/2)\cos(\pi(H-1/2)/2)\big]^{-1}\big[\Gamma(2H+1)\sin(\pi H)\big]^{1/2}$ and $\Gamma(\cdot)$ denotes the Gamma function. Then, we have the following definition:
Definition 3. 
The cylindrical fractional Brownian motion on $V$ is defined as
$$B^H(t) := \sum_{i=1}^{\infty} \beta_i^H(t)\, e_i = \sum_{i=1}^{\infty} \sum_{k=1}^{\infty} b_{ik}^H(t)\, H_{\sigma^k} e_i = \sum_{k=1}^{\infty} b_k^H(t)\, H_{\sigma^k},$$
where
$$\beta_i^H(t) = \sum_{k=1}^{\infty} b_{ik}^H(t)\, H_{\sigma^k}, \qquad b_{ik}^H(t) = \begin{cases} \int_0^t M\xi_j(s)\,ds, & k = n(i,j), \\ 0, & \text{otherwise}, \end{cases}$$
$b_k^H(t) = \delta_{k,n(i,j)} \int_0^t M\xi_j(s)\,ds\; e_i$, and $\sigma^k = (0,0,\ldots,1,\ldots)$ with $1$ in the $k$-th entry and $0$ otherwise.
The fractional white noise $W^H(t) := \frac{dB^H(t)}{dt}$ can be written as
$$W^H(t) := \sum_{i=1}^{\infty} W_i^H(t)\, e_i = \sum_{i=1}^{\infty} \sum_{k=1}^{\infty} d_{ik}^H(t)\, H_{\sigma^k} e_i = \sum_{k=1}^{\infty} d_k^H(t)\, H_{\sigma^k},$$
where
$$W_i^H(t) = \sum_{k=1}^{\infty} d_{ik}^H(t)\, H_{\sigma^k}, \qquad d_{ik}^H(t) = \begin{cases} M\xi_j(t), & k = n(i,j), \\ 0, & \text{otherwise}, \end{cases}$$
and $d_k^H(t) = \delta_{k,n(i,j)}\, M\xi_j(t)\, e_i$.
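For the numerical experiments in Section 4 one only needs sample paths of a real-valued fractional Brownian motion. A common way to generate them is to factor the fBm covariance $R(s,t) = \tfrac{1}{2}(s^{2H} + t^{2H} - |t-s|^{2H})$; the sketch below (Python, a generic construction rather than the spectral one through $M$ and the Hermite functions used above; grid and seed are arbitrary) will be reused in the examples.

```python
import numpy as np

def fbm_path(n_steps, T, hurst, rng):
    """Sample B^H on a uniform grid of [0, T] via Cholesky of the covariance."""
    t = np.linspace(T / n_steps, T, n_steps)              # grid without t = 0
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2*hurst) + u**(2*hurst) - np.abs(s - u)**(2*hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps)) # small jitter for stability
    path = L @ rng.standard_normal(n_steps)
    return np.concatenate(([0.0], path)), np.concatenate(([0.0], t))

rng = np.random.default_rng(0)
B, t = fbm_path(500, 5.0, 0.75, rng)   # one sample path with H = 0.75
```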
In order to investigate stochastic differential equations in spaces of Hilbert space-valued generalized random variables, it is necessary to introduce the action of operators on the corresponding spaces.
Definition 4. 
Let $G = \sum_{\alpha \in \mathcal{T}} c_\alpha H_\alpha \in S(V)_{-\rho}$, $\rho \in [0,1]$.
(1)
If $A$ is a bounded linear operator on $V$, we set
$$A G := \sum_{\alpha \in \mathcal{T}} A c_\alpha\, H_\alpha.$$
(2)
If $F: V \to V$ is a non-linear operator satisfying Lipschitz and linear growth conditions, we set
$$F G := \sum_{\alpha \in \mathcal{T}} F c_\alpha\, H_\alpha.$$

3. Renormalization Group Method for a Singular Perturbation System

Before proceeding with the RG procedure, we must impose the following conditions on operators A, F and B in Equation (1).
Hypothesis 1. 
(i)
$A$ is a bounded linear operator on $V$ with a semi-simple zero eigenvalue, and the other eigenvalues have negative real parts.
(ii)
$B$ is a bounded linear operator on $V$ such that there exist an orthonormal basis $(e_i)_{i=1}^{\infty}$ and a sequence of real numbers $(\lambda_i)_{i=1}^{\infty}$ with $B e_i = \lambda_i e_i$ for all $i \in \mathbb{N}$. Moreover, the operators $B$ and $A$ commute on $V$.
(iii)
$F: V \to V$ is a non-linear operator. There exists a constant $C > 0$ such that, for all $x, y \in V$,
$$\|Fx - Fy\|_V \le C \|x - y\|_V \qquad \text{and} \qquad \|Fx\|_V \le C \|x\|_V.$$
Hypothesis 1(i) implies that $A$ is the infinitesimal generator of a strongly continuous semigroup $\{S(\tau) = e^{\tau A} : \tau \ge 0\}$; then the operator $(1/\varepsilon)A$, where $\varepsilon > 0$, is the infinitesimal generator of $\{S(\tau/\varepsilon) : \tau \ge 0\}$ (see, e.g., [27]). Therefore, combined with Theorem 3.1 in [12], Equation (1) with the initial condition $u(0) = \theta \in S(V)_{-1,-3}$ has a unique mild solution, which exists in $S(V)_{-1,-q}$ for all $\tau \ge 0$ and $q \ge 3$.
We introduce the fast time scale $t = \tau/\varepsilon$, under which the singular perturbation problem (1) can be transformed into a regular one. We define $\tilde W^H(t) := \varepsilon^{1-H} W^H(\tau)$, where $\tilde W^H$ is a fractional white noise that possesses the same statistical properties as $W^H$, and define $\tilde u(t) := u(\varepsilon t)$. Up to equivalence of distributions, we obtain
$$d\tilde u(t) = A \tilde u(t)\,dt + \varepsilon F \tilde u(t)\,dt + \varepsilon^{a} B \tilde u(t) \diamond \tilde W^H(t)\,dt, \qquad \tilde u(0) = \theta, \qquad (3)$$
where $a = m + H$ and $\theta \in S(V)_{-1,-3}$.
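The exponent $a = m + H$ follows from elementary bookkeeping under this scaling (a short check based on the self-similarity of $B^H$; the intermediate identities below are not spelled out in the text):
$$\tau = \varepsilon t, \qquad d\tau = \varepsilon\,dt, \qquad B^H(\varepsilon t) \overset{d}{=} \varepsilon^{H} \tilde B^H(t) \;\Longrightarrow\; W^H(\tau) \overset{d}{=} \varepsilon^{H-1}\, \tilde W^H(t),$$
so that the noise term transforms as
$$\varepsilon^{m} B u(\tau) \diamond W^H(\tau)\,d\tau \;\overset{d}{=}\; \varepsilon^{m}\,\varepsilon^{H-1}\,\varepsilon\; B \tilde u(t) \diamond \tilde W^H(t)\,dt = \varepsilon^{m+H} B \tilde u(t) \diamond \tilde W^H(t)\,dt.$$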

3.1. Derivation of Naive Expansion Solution and Renormalization

We assume that the solution of (3) can be expressed as the formal expansion
$$\tilde u(t) = \tilde u_0(t) + \varepsilon\, \tilde u_1(t) + \varepsilon^{a}\, \tilde u_a(t) + \cdots. \qquad (4)$$
Substituting (4) into (3) and matching powers of $\varepsilon$, the following equations can be derived:
$$\begin{aligned} \varepsilon^{0} &:\quad d\tilde u_0(t) = A \tilde u_0(t)\,dt, & \tilde u_0(0) &= \theta, \\ \varepsilon^{1} &:\quad d\tilde u_1(t) = A \tilde u_1(t)\,dt + F \tilde u_0(t)\,dt, & \tilde u_1(0) &= 0, \\ \varepsilon^{a} &:\quad d\tilde u_a(t) = A \tilde u_a(t)\,dt + B \tilde u_0(t) \diamond \tilde W^H(t)\,dt, & \tilde u_a(0) &= 0, \end{aligned}$$
which admit the following mild solutions:
$$\begin{aligned} \tilde u_0(t) &= e^{tA}\theta, \\ \tilde u_1(t) &= \int_0^t e^{(t-s)A} F e^{sA}\theta\,ds = e^{tA}\, t\, R_F\theta + \int_0^t e^{tA} N_F(s)\theta\,ds, \\ \tilde u_a(t) &= \int_0^t e^{(t-s)A} B e^{sA}\theta \diamond \tilde W^H(s)\,ds = e^{tA} B\theta \diamond \tilde B^H(t), \end{aligned}$$
where $R_F$ is a time-independent operator analogous to the resonance term in finite dimensions, $N_F(t) := e^{-tA} F e^{tA} - R_F$, and $\tilde B^H$ is the cylindrical fractional Brownian motion corresponding to $\tilde W^H$. For the last equality, we use the commutativity of the operators $A$ and $B$.
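The second equality for $\tilde u_1$ is simply the splitting of $e^{-sA} F e^{sA}$ into its resonant part $R_F$ and non-resonant part $N_F(s)$ (a one-line check using the definitions above):
$$\int_0^t e^{(t-s)A} F e^{sA}\theta\,ds = e^{tA} \int_0^t e^{-sA} F e^{sA}\theta\,ds = e^{tA} \int_0^t \big( R_F + N_F(s) \big)\theta\,ds = e^{tA}\, t\, R_F\theta + \int_0^t e^{tA} N_F(s)\theta\,ds.$$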
For now, we are only interested in the solutions up to $O(\varepsilon^{a})$. Then, the naive expansion (4) can be formally rewritten as
$$\tilde u(t) = e^{tA}\theta + \varepsilon\, e^{tA}\, t\, R_F\theta + \varepsilon \int_0^t e^{tA} N_F(s)\theta\,ds + \varepsilon^{a}\, e^{tA} B\theta \diamond \tilde B^H(t) + O(\varepsilon^{a+1}). \qquad (5)$$
In order to analyze the validity of (5), we need the following definition:
Definition 5. 
(Secular term). Assume that $\{\delta_n(\varepsilon)\}$ is an asymptotic sequence. The function $f(t;\varepsilon)$, $t \ge 0$, has an asymptotic expansion with respect to this sequence,
$$f(t;\varepsilon) \sim f_0 + \sum_{n=1}^{\infty} \delta_n(\varepsilon)\, a_n(t), \qquad \text{as } \varepsilon \to \varepsilon_0,$$
where $f_0$ and $a_n(t)$ are independent of $\varepsilon$. The term $\delta_n(\varepsilon) a_n(t)$ is said to be secular if
$$\lim_{t \to \infty} \frac{a_n(t)}{a_{n-1}(t)} = \infty.$$
According to Definition 5, one can see that $e^{tA}\, t\, R_F\theta$ is a secular term. Through a tedious calculation, the norm of $e^{tA} B\theta \diamond \tilde B^H(t)$ is proportional to $t^{H-1/2}$, so this term is also secular (please refer to Appendix A). To deal with these terms, we introduce the RG procedure below.
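To see the first claim concretely (a short check in the notation of Definition 5, with $a_0(t) = e^{tA}\theta$ and $a_1(t) = e^{tA}\,t\,R_F\theta$; it assumes that $R_F\theta$ has a nonzero component in $\ker A$): since $e^{tA}$ converges to the projection onto $\ker A$ as $t \to \infty$ under Hypothesis 1(i),
$$\frac{\|e^{tA}\, t\, R_F\theta\|_{-1,-3}}{\|e^{tA}\theta\|_{-1,-3}} \sim C\, t \longrightarrow \infty \qquad (t \to \infty),$$
so $\varepsilon\, e^{tA}\, t\, R_F\theta$ is secular in the sense of Definition 5.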
Firstly, we introduce an arbitrary (ansatz-free) parameter $\sigma \ge 0$ into (5), i.e.,
$$\tilde u(t) = e^{tA}\theta + \varepsilon\, e^{tA}(t-\sigma+\sigma) R_F\theta + \varepsilon \int_0^t e^{tA} N_F(s)\theta\,ds + \varepsilon^{a} \int_0^{\sigma} e^{tA} B\theta \diamond \tilde W^H(s)\,ds + \varepsilon^{a} \int_{\sigma}^{t} e^{tA} B\theta \diamond \tilde W^H(s)\,ds + O(\varepsilon^{a+1}).$$
Then, we set
$$\tilde W(\sigma) = \theta + \varepsilon\, \sigma\, R_F\theta + \varepsilon^{a} \int_0^{\sigma} B\theta \diamond \tilde W^H(s)\,ds$$
and use the above formula to obtain
$$\tilde u(t) = e^{tA}\tilde W(\sigma) + \varepsilon\, e^{tA}(t-\sigma) R_F \tilde W(\sigma) + \varepsilon \int_0^t e^{tA} N_F(s) \tilde W(\sigma)\,ds + \varepsilon^{a} \int_{\sigma}^{t} e^{tA} B \tilde W(\sigma) \diamond \tilde W^H(s)\,ds + O(\varepsilon^{a+1}). \qquad (6)$$
Finally, we note that $e^{-tA}\tilde u(t)$ is independent of the parameter $\sigma$. Formally, we have
$$\frac{d\big(e^{-tA}\tilde u(t)\big)}{d\sigma} = 0,$$
which implies that
$$0 = d\tilde W(\sigma) - \varepsilon R_F \tilde W(\sigma)\,d\sigma - \varepsilon^{a} B \tilde W(\sigma) \diamond \tilde W^H(\sigma)\,d\sigma + O(\varepsilon^{a+1}).$$
Then, one obtains the $a$-th order RG equation
$$d\tilde W(t) = \varepsilon R_F \tilde W(t)\,dt + \varepsilon^{a} B \tilde W(t) \diamond \tilde W^H(t)\,dt. \qquad (7)$$
This is the RG equation mentioned in [22,28]. It can be seen that the linear part is zero and the operator $R_F$ satisfies condition (iii) of Hypothesis 1. Thus, combined with Theorem 3.1 in [12], Equation (7) with the initial condition $\tilde W(0) = \theta$ has a unique mild solution in $S(V)_{-1,-3}$ for all $t \ge 0$. Then, setting $\sigma = t$ and substituting the solution of (7) into (6), we obtain the corresponding asymptotic solution:
$$\tilde u(t) = e^{tA}\tilde W(t) + \varepsilon \int_0^t e^{tA} N_F(s) \tilde W(t)\,ds + O(\varepsilon^{a+1}).$$
Returning to the slow time $\tau$, we obtain, with $\tilde W(t) = W(\varepsilon t) = W(\tau)$,
$$dW(\tau) = R_F W(\tau)\,d\tau + \varepsilon^{m} B W(\tau) \diamond W^H(\tau)\,d\tau, \qquad W(0) = \theta, \qquad (8)$$
and the asymptotic solution to the $a$-th order
$$\bar u(\tau) = e^{\frac{\tau}{\varepsilon}A} W(\tau) + \int_0^{\tau} e^{\frac{\tau}{\varepsilon}A} N_F(s) W(\tau)\,ds. \qquad (9)$$

3.2. Consistency of the Asymptotic Solution

In this section, we will obtain the consistency of the asymptotic solution. Before introducing the theorem, the following lemmas are necessary.
Lemma 1. 
Suppose that Hypotheses 1(i) and (ii) are satisfied. Then, for any $X(\tau) \in S(V)_{-1,-3}$,
$$\Big\| \int_0^{\tau} e^{\frac{\tau-s}{\varepsilon}A} B X(s) \diamond W^H(s)\,ds \Big\|^2_{-1,-3} \le C\, \tau \int_0^{\tau} \|X(s)\|^2_{-1,-3}\,ds, \qquad \tau \ge 0,$$
where $C > 0$ is a constant.
Proof. 
Let $X(s) = \sum_{i=1}^{\infty} \sum_{\alpha \in \mathcal{T}} c_{i\alpha}(s) H_\alpha e_i$, $s \in [0,\tau]$. Then,
$$B X(s) \diamond W^H(s) = \sum_{i=1}^{\infty} (BX)_i(s)\, e_i \diamond \sum_{i=1}^{\infty} W_i^H(s)\, e_i = \sum_{i=1}^{\infty} (BX)_i(s) \diamond W_i^H(s)\, e_i = \sum_{i=1}^{\infty} \sum_{\gamma \in \mathcal{T}} \sum_{\alpha+\sigma^k=\gamma} c_{i\alpha}(s)\, \lambda_i\, d_{ik}^H(s)\, H_\gamma e_i.$$
Next, we claim that if $X \in S(V)_{-1,-3}$, then $\|B X(s) \diamond W^H(s)\|^2_{-1,-3} \le C \|X(s)\|^2_{-1,-3}$. In fact,
$$\begin{aligned} \|B X(s) \diamond W^H(s)\|^2_{-1,-3} &= \sum_{i=1}^{\infty} \sum_{\gamma \in \mathcal{T}} \Big( \sum_{\alpha+\sigma^k=\gamma} c_{i\alpha}(s)\, \lambda_i\, d_{ik}^H(s) \Big)^2 (2\mathbb{N})^{-3\gamma} \\ &\le C \sum_{i=1}^{\infty} \sum_{\gamma \in \mathcal{T}} \sum_{\alpha+\sigma^k=\gamma} |c_{i\alpha}(s)|^2 |d_{ik}^H(s)|^2 (2\mathbb{N})^{-3\gamma} \\ &\le C \sum_{i=1}^{\infty} \sum_{\alpha,\sigma^k} \delta_{k,n(i,j)}\, j^{4/3-H}\, |c_{i\alpha}(s)|^2 (2\mathbb{N})^{-3(\alpha+\sigma^k)} \\ &\le C \sum_{i=1}^{\infty} \sum_{k=1}^{\infty} \delta_{k,n(i,j)}\, j^{4/3-H} (2k)^{-3} \sum_{\alpha \in \mathcal{T}} |c_{i\alpha}(s)|^2 (2\mathbb{N})^{-3\alpha} \\ &\le C \sum_{k=1}^{\infty} k^{4/3-H-3} \sum_{i=1}^{\infty} \sum_{\alpha \in \mathcal{T}} |c_{i\alpha}(s)|^2 (2\mathbb{N})^{-3\alpha} \\ &\le C \|X(s)\|^2_{-1,-3}. \end{aligned}$$
It should be noted that the second step follows from the fact that, for each fixed $\gamma$, the sum over $\{\alpha + \sigma^k = \gamma\}$ contains only finitely many terms, together with the boundedness of the operator $B$, i.e., $|\lambda_i| \le C$. In the third step, we make use of the estimate in [12], $|M\xi_n(t)|^2 \le C n^{4/3-H}$ for all $t \in \mathbb{R}$, so that $|d_{ik}^H(s)|^2 = \delta_{k,n(i,j)} |M\xi_j(s)|^2 \le C \delta_{k,n(i,j)}\, j^{4/3-H}$. The fourth step uses $(2\mathbb{N})^{-3\sigma^k} = (2k)^{-3}$. The penultimate step follows from $\delta_{k,n(i,j)}\, j^{4/3-H} \le k^{4/3-H}$.
Therefore, by using the Hölder inequality, we have
$$\Big\| \int_0^{\tau} e^{\frac{\tau-s}{\varepsilon}A} B X(s) \diamond W^H(s)\,ds \Big\|^2_{-1,-3} \le C\, \tau \int_0^{\tau} \|B X(s) \diamond W^H(s)\|^2_{-1,-3}\,ds \le C\, \tau \int_0^{\tau} \|X(s)\|^2_{-1,-3}\,ds.$$
Lemma 2. 
Assume that Hypothesis 1 is satisfied. Let $W(\tau)$ be the solution of (8) and $u(\tau)$ be the solution of (1) with the initial condition $u(0) = \theta$. Then, for any $T \in (0,+\infty)$,
$$\Big\| \int_0^{\tau} e^{\frac{\tau-s}{\varepsilon}A} \big[ F u(s) - F e^{\frac{s}{\varepsilon}A} W(s) \big]\,ds \Big\|^2_{-1,-3} \le C \int_0^{\tau} \|u(s) - \bar u(s)\|^2_{-1,-3}\,ds + C\varepsilon, \qquad \tau \in [0,T],$$
where $\bar u(\tau)$ is given by (9), and $C > 0$ depends on $T$ and $\theta$.
Proof. 
By using the hypothesis on $F$ and the Hölder inequality, we have
$$\Big\| \int_0^{\tau} e^{\frac{\tau-s}{\varepsilon}A} \big[ F u(s) - F e^{\frac{s}{\varepsilon}A} W(s) \big]\,ds \Big\|^2_{-1,-3} \le C\, \tau \int_0^{\tau} \big\| u(s) - e^{\frac{s}{\varepsilon}A} W(s) \big\|^2_{-1,-3}\,ds. \qquad (10)$$
In what follows, we estimate $\| u(s) - e^{\frac{s}{\varepsilon}A} W(s) \|^2_{-1,-3}$. We assert that
$$\big\| u(s) - e^{\frac{s}{\varepsilon}A} W(s) \big\|^2_{-1,-3} = \Big\| u(s) - \bar u(s) + \int_0^s e^{\frac{s}{\varepsilon}A} N_F(v) W(s)\,dv \Big\|^2_{-1,-3} \le 2 \|u(s) - \bar u(s)\|^2_{-1,-3} + C\varepsilon s. \qquad (11)$$
To verify this, we introduce the two projection operators $P$ and $Q$ onto the kernel of $A$ (written $\ker A$) and onto the subspace orthogonal to $\ker A$, respectively. Let the dimension of $\ker A$ be $l$. Then, for any $\eta \in S(V)_{-1,-3}$, we have
$$\Big\| \int_0^s e^{\frac{s}{\varepsilon}A} N_F(v)\, \eta\,dv \Big\|^2_{-1,-3} \le C\varepsilon s, \qquad s \in [0,\tau]. \qquad (12)$$
In fact,
$$\int_0^s e^{\frac{s}{\varepsilon}A} N_F(v)\, \eta\,dv = \int_0^s e^{\frac{s}{\varepsilon}A} \big[ e^{-\frac{v}{\varepsilon}A} P F e^{\frac{v}{\varepsilon}A} \big]_N \eta\,dv + \int_0^s e^{\frac{s}{\varepsilon}A} \big[ e^{-\frac{v}{\varepsilon}A} Q F e^{\frac{v}{\varepsilon}A} \big]_N \eta\,dv =: K_1 + K_2,$$
where we write $N_F(v) \equiv \big[ e^{-\frac{v}{\varepsilon}A} F e^{\frac{v}{\varepsilon}A} \big]_N$ for the non-resonant part. For the first term,
$$\|K_1\|^2_{-1,-3} = \Big\| \int_0^s \big[ P F e^{\frac{v}{\varepsilon}A} \big]_N \eta\,dv \Big\|^2_{-1,-3} \le s \int_0^s \big\| \big[ P F e^{\frac{v}{\varepsilon}A} \big]_N \eta \big\|^2_{-1,-3}\,dv \le C s \int_0^s \big\| \big[ e^{\frac{v}{\varepsilon}A} \big]_N \eta \big\|^2_{-1,-3}\,dv \le C s \int_0^s e^{-\frac{2rv}{\varepsilon}}\,dv \le C\varepsilon s,$$
where $r$ is the minimum of the moduli of the eigenvalues of $A$ with negative real parts, i.e., $r = \min\{|r_{l+1}|, |r_{l+2}|, \ldots\}$. For the second term,
$$\|K_2\|^2_{-1,-3} = \Big\| \int_0^s e^{\frac{s}{\varepsilon}A} \big[ e^{-\frac{v}{\varepsilon}A} Q F e^{\frac{v}{\varepsilon}A} \big]_N \eta\,dv \Big\|^2_{-1,-3} \le s \int_0^s \big\| e^{\frac{s}{\varepsilon}A} \big[ e^{-\frac{v}{\varepsilon}A} Q F e^{\frac{v}{\varepsilon}A} \big]_N \eta \big\|^2_{-1,-3}\,dv \le s \int_0^s \big\| e^{\frac{s-v}{\varepsilon}A} Q \big\|^2_V \big\| \big[ Q F e^{\frac{v}{\varepsilon}A} \big]_N \eta \big\|^2_{-1,-3}\,dv \le C s \int_0^s e^{-\frac{2r(s-v)}{\varepsilon}} \big\| e^{\frac{v}{\varepsilon}A} \eta \big\|^2_{-1,-3}\,dv \le C\varepsilon s.$$
Since the mild solution of (8) exists in $S(V)_{-1,-3}$ for $\tau \in [0,+\infty)$, for any $T \in (0,+\infty)$ there exists a constant $C > 0$ independent of $\varepsilon$ such that the solution of (8) is bounded:
$$\|W(\tau)\|_{-1,-3} \le C, \qquad \tau \in [0,T].$$
Replacing $\eta$ with $W(s)$ in (12), we have
$$\Big\| \int_0^s e^{\frac{s}{\varepsilon}A} N_F(v) W(s)\,dv \Big\|^2_{-1,-3} \le C\varepsilon s.$$
Finally, substituting (11) into (10), we have
$$\Big\| \int_0^{\tau} e^{\frac{\tau-s}{\varepsilon}A} \big[ F u(s) - F e^{\frac{s}{\varepsilon}A} W(s) \big]\,ds \Big\|^2_{-1,-3} \le C\, \tau \int_0^{\tau} \|u(s) - \bar u(s)\|^2_{-1,-3}\,ds + C\varepsilon\tau^2 \le C \int_0^{\tau} \|u(s) - \bar u(s)\|^2_{-1,-3}\,ds + C\varepsilon,$$
where the constant $C > 0$ depends on $T$ and $\theta$. □
Theorem 1. 
Assume that Hypothesis 1 is satisfied. Let $W(\tau)$ be the solution of (8) and $u(\tau)$ be the solution of (1) with the initial condition $u(0) = \theta$. Then, for any $T \in (0,+\infty)$,
$$\sup_{\tau \in [0,T]} \|u(\tau) - \bar u(\tau)\|_{-1,-3} \le C \varepsilon^{1/2},$$
where $\bar u(\tau)$ is given by (9), and $C > 0$ depends on $T$ and $\theta$.
Proof. 
Set $E(\tau) = u(\tau) - \bar u(\tau)$. From system (1) and the approximate solution (9), we have
$$E(\tau) = \int_0^{\tau} e^{\frac{\tau-s}{\varepsilon}A} \big[ F u(s) - F e^{\frac{s}{\varepsilon}A} W(s) \big]\,ds + \int_0^{\tau} e^{\frac{\tau}{\varepsilon}A} \big[ N_F(s) W(s) - N_F(s) W(\tau) \big]\,ds + \varepsilon^{m} \int_0^{\tau} e^{\frac{\tau-s}{\varepsilon}A} B \big[ u(s) - e^{\frac{s}{\varepsilon}A} W(s) \big] \diamond W^H(s)\,ds =: I_1 + \varepsilon^{m} I_2. \qquad (13)$$
For the first part of (13), by Lemma 2 and estimate (12), we have
$$\|I_1\|^2_{-1,-3} \le 2 \Big\| \int_0^{\tau} e^{\frac{\tau-s}{\varepsilon}A} \big[ F u(s) - F e^{\frac{s}{\varepsilon}A} W(s) \big]\,ds \Big\|^2_{-1,-3} + 2 \Big\| \int_0^{\tau} e^{\frac{\tau}{\varepsilon}A} \big[ N_F(s) W(s) - N_F(s) W(\tau) \big]\,ds \Big\|^2_{-1,-3} \le C \int_0^{\tau} \|E(s)\|^2_{-1,-3}\,ds + C\varepsilon.$$
For the second part of (13), by Lemma 1 and estimate (11), we have
$$\|I_2\|^2_{-1,-3} = \Big\| \int_0^{\tau} e^{\frac{\tau-s}{\varepsilon}A} B \big[ u(s) - e^{\frac{s}{\varepsilon}A} W(s) \big] \diamond W^H(s)\,ds \Big\|^2_{-1,-3} \le C\, \tau \int_0^{\tau} \big\| u(s) - e^{\frac{s}{\varepsilon}A} W(s) \big\|^2_{-1,-3}\,ds \le C \int_0^{\tau} \|E(s)\|^2_{-1,-3}\,ds + C\varepsilon.$$
Therefore,
$$\|E(\tau)\|^2_{-1,-3} \le 2 \|I_1\|^2_{-1,-3} + 2 \varepsilon^{2m} \|I_2\|^2_{-1,-3} \le C \int_0^{\tau} \|E(s)\|^2_{-1,-3}\,ds + C\varepsilon.$$
By applying Gronwall's inequality, we have
$$\|E(\tau)\|^2_{-1,-3} \le C \varepsilon\, e^{CT}, \qquad \tau \in [0,T],$$
which implies that
$$\sup_{\tau \in [0,T]} \|E(\tau)\|_{-1,-3} \le C \varepsilon^{1/2}.$$
The proof is complete. □

4. Examples

In this section, we apply our results to two examples. For the first example, we consider the following system:
$$\begin{aligned} du_1(t) &= \varepsilon a_1 u_1(t)\,dt + \varepsilon^{\alpha} b\, u_1(t) \diamond W^H(t)\,dt, \\ du_2(t) &= -u_2(t)\,dt + \varepsilon a_2(t) u_1(t)\,dt, \end{aligned} \qquad (14)$$
where $a_1, b \in \mathbb{R}$; $\alpha \in (1,2)$; $H \in (1/2,1)$; and $a_2(t)$ is a continuous bounded function. Note that $\{u_1(t), t \in \mathbb{R}_+\}$ is a geometric fractional Brownian motion (see, e.g., [10], Example 3.4.4). We have $u_1(t) = u_1(0) \exp\big( \varepsilon a_1 t - \tfrac{1}{2}\varepsilon^{2\alpha} b^2 t^{2H} + \varepsilon^{\alpha} b B^H(t) \big)$.
By applying the RG method, we obtain a renormalized system
$$dV_1(t) = \varepsilon a_1 V_1(t)\,dt + \varepsilon^{\alpha} b\, V_1(t) \diamond W^H(t)\,dt, \qquad dV_2(t) = 0,$$
and an approximate solution
$$\bar u_1(t) = V_1(t), \qquad \bar u_2(t) = e^{-t} V_2(t) + \varepsilon\, e^{-t} V_1(t) \int_0^t a_2(s)\,ds.$$
By utilizing the Euler–Maruyama method [29] via MATLAB (v. 2024), the approximate solution of (14) can be simulated with $u_1(0) = 1$, $u_2(0) = 1$, $H = 0.75$, $a_1 = 1$, $a_2(t) = \sin(t)$, $b = 10$ and $\alpha = 1.2$. Let $E(t) = u(t) - \bar u(t)$. Definition 1 implies that $L^2(\mu; \mathbb{R}^2) \subset S(\mathbb{R}^2)_{-1,-3}$ and $\|u\|_{-1,-3} \le \|u\|_{L^2(\mu;\mathbb{R}^2)}$; therefore, we can estimate $\|E(t)\|_{-1,-3}$ by $\|E(t)\|_{L^2} \equiv \|u(t) - \bar u(t)\|_{L^2(\mu;\mathbb{R}^2)}$.
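A minimal sketch of this comparison (in Python rather than MATLAB; the fBm sample path uses the generic Cholesky construction sketched in Section 2, $u_1$ is evaluated through the closed form quoted above so that the Wick product needs no special treatment, $u_2$ is advanced with an explicit Euler step, and all grid sizes, seeds and function names are my own assumptions rather than the author's code):

```python
import numpy as np

def fbm_path(n, T, H, rng):
    # fBm on a uniform grid via Cholesky factorization of its covariance
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2*H) + u**(2*H) - np.abs(s - u)**(2*H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return np.concatenate(([0.0], L @ rng.standard_normal(n))), np.concatenate(([0.0], t))

def simulate_example14(eps, n=2000, T=5.0, H=0.75, a1=1.0, b=10.0, alpha=1.2, seed=0):
    rng = np.random.default_rng(seed)
    B, t = fbm_path(n, T, H, rng)
    dt = T / n
    a2 = np.sin(t)
    # u1: closed form of the geometric fractional Brownian motion quoted in the text
    u1 = np.exp(eps*a1*t - 0.5*eps**(2*alpha)*b**2*t**(2*H) + eps**alpha*b*B)
    # u2: explicit Euler for du2 = -u2 dt + eps*a2(t)*u1 dt
    u2 = np.empty_like(t); u2[0] = 1.0
    for k in range(n):
        u2[k+1] = u2[k] + dt * (-u2[k] + eps * a2[k] * u1[k])
    # renormalized system: V1 solves the same equation as u1, V2 stays constant
    V1, V2 = u1, 1.0
    integral_a2 = np.concatenate(([0.0], np.cumsum(a2[:-1]) * dt))
    u2_bar = np.exp(-t) * V2 + eps * np.exp(-t) * V1 * integral_a2
    err = np.sqrt((u1 - V1)**2 + (u2 - u2_bar)**2)   # pathwise Euclidean error
    return t, err

t, err = simulate_example14(1e-2)
print(err.max())
```

Note that this pathwise error is only a proxy for $\|E(t)\|_{L^2(\mu;\mathbb{R}^2)}$, which would require averaging over many independent noise realizations.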
For $\varepsilon = 10^{-2}$, we plot the exact solution and the asymptotic solution, as shown in Figure 1a. It can be seen that the difference between the two is tiny. Figure 1b exhibits the time evolution of the error $\|E(t)\|_{L^2}$; one can observe that the error stays in the range from $0$ to $8 \times 10^{-3}$ (less than $\varepsilon$).
For $\varepsilon = 10^{-4}$, Figure 2a shows an almost invisible difference between $u(t)$ and $\bar u(t)$. Figure 2b depicts the time evolution of the error $\|E(t)\|_{L^2}$, which indicates that the error stays in the range from $0$ to $4.6 \times 10^{-3}$.
Combining Figure 1b and Figure 2b, the error decreases as the perturbation parameter shrinks, indicating that $\bar u(t)$ approximates $u(t)$ better. For further verification, we generate 1000 independent sets of fractional white noise and obtain the corresponding distribution of the maximum error, as shown in Figure 3. For $\varepsilon = 10^{-2}$, the mean of the maximum error is $8.1 \times 10^{-3}$; for $\varepsilon = 10^{-4}$, the mean of the maximum error is $4.6 \times 10^{-3}$, which shows that the asymptotic solution is uniformly valid.
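The repeated-noise experiment behind Figure 3 can be mimicked by looping the hypothetical simulate_example14 sketch above over independent seeds (the sample count and binning below are arbitrary choices):

```python
import numpy as np

max_errs = np.array([simulate_example14(1e-2, seed=s)[1].max() for s in range(1000)])
print(max_errs.mean())                              # mean of the maximum pathwise error
counts, edges = np.histogram(max_errs, bins=30)     # histogram analogous to Figure 3
```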
As a second example, we consider a stochastic version of the Lorenz equations,
$$\dot u = \rho(v - u), \qquad \dot v = \sigma u - v - u w, \qquad \dot w = -\beta w + u v, \qquad (15)$$
where $\rho$, $\sigma$ and $\beta$ are three constants proportional to the Prandtl number, the Rayleigh number and a geometric factor, respectively.
In reference [30], the normal form of system (15) was studied when $\sigma$ fluctuates under noise, i.e.,
$$\sigma = \sigma_0 (1 + \varepsilon \xi(t)),$$
where $\xi(t)$ is a stationary, zero-mean stochastic process, and it was pointed out that, in the absence of noise, the fixed point $(0,0,0)$ loses its stability through a simple bifurcation at $\sigma_0 = 1$.
Inspired by their work, we study the case where $\sigma$ is perturbed by fractional white noise, that is,
$$\sigma = 1 + \varepsilon^{m} W^H(t),$$
where $0 < \varepsilon \ll 1$, $m \in (1,2)$, $H \in (1/2,1)$. Applying the transformation
$$\begin{pmatrix} u \\ v \\ w \end{pmatrix} = \varepsilon \begin{pmatrix} 1 & \rho & 0 \\ 1 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix},$$
we obtain
$$\begin{pmatrix} \dot x_1 \\ \dot x_2 \\ \dot x_3 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & -\alpha & 0 \\ 0 & 0 & -\beta \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} + \varepsilon \begin{pmatrix} -\frac{\rho}{\alpha}(x_1 + \rho x_2) x_3 \\ \frac{1}{\alpha}(x_1 + \rho x_2) x_3 \\ (x_1 + \rho x_2)(x_1 - x_2) \end{pmatrix} + \varepsilon^{m} \begin{pmatrix} \frac{\rho}{\alpha}(x_1 + \rho x_2) \\ -\frac{1}{\alpha}(x_1 + \rho x_2) \\ 0 \end{pmatrix} \diamond W^H(t), \qquad (16)$$
where $\alpha = 1 + \rho$. According to the relationship between the parameters, we consider the following three cases:
• Case I: $\alpha = \beta$.
By applying the RG method, we obtain a renormalized system
$$\begin{pmatrix} dV_1 \\ dV_2 \\ dV_3 \end{pmatrix} = \varepsilon \begin{pmatrix} 0 \\ \frac{1}{\alpha} V_1 V_3 \\ (\rho - 1) V_1 V_2 \end{pmatrix} dt + \varepsilon^{m} \begin{pmatrix} \frac{\rho}{\alpha}(V_1 + \rho V_2) \\ -\frac{1}{\alpha}(V_1 + \rho V_2) \\ 0 \end{pmatrix} \diamond W^H(t)\,dt$$
and an approximate solution
$$\begin{pmatrix} \bar x_1(t) \\ \bar x_2(t) \\ \bar x_3(t) \end{pmatrix} = \begin{pmatrix} V_1(t) \\ e^{-\alpha t} V_2(t) \\ e^{-\alpha t} V_3(t) \end{pmatrix} + \varepsilon \begin{pmatrix} \frac{\rho}{\alpha^2} e^{-\alpha t} V_1(t) V_3(t) + \frac{\rho^2}{2\alpha^2} e^{-2\alpha t} V_2(t) V_3(t) \\ -\frac{\rho}{\alpha^2} e^{-2\alpha t} V_2(t) V_3(t) \\ \frac{1}{\alpha} V_1^2(t) + \frac{\rho}{\alpha} e^{-2\alpha t} V_2^2(t) \end{pmatrix}.$$
• Case II: $2\alpha = \beta$.
The renormalized system is
$$\begin{pmatrix} dV_1 \\ dV_2 \\ dV_3 \end{pmatrix} = \varepsilon \begin{pmatrix} 0 \\ 0 \\ -\rho V_2^2 \end{pmatrix} dt + \varepsilon^{m} \begin{pmatrix} \frac{\rho}{\alpha}(V_1 + \rho V_2) \\ -\frac{1}{\alpha}(V_1 + \rho V_2) \\ 0 \end{pmatrix} \diamond W^H(t)\,dt$$
and the approximate solution is
$$\begin{pmatrix} \bar x_1(t) \\ \bar x_2(t) \\ \bar x_3(t) \end{pmatrix} = \begin{pmatrix} V_1(t) \\ e^{-\alpha t} V_2(t) \\ e^{-2\alpha t} V_3(t) \end{pmatrix} + \varepsilon \begin{pmatrix} \frac{\rho}{2\alpha^2} e^{-2\alpha t} V_1(t) V_3(t) + \frac{\rho^2}{3\alpha^2} e^{-3\alpha t} V_2(t) V_3(t) \\ -\frac{1}{\alpha^2} e^{-2\alpha t} V_1(t) V_3(t) - \frac{\rho}{2\alpha^2} e^{-3\alpha t} V_2(t) V_3(t) \\ \frac{1}{2\alpha} V_1^2(t) + \frac{\rho-1}{\alpha} e^{-\alpha t} V_1(t) V_2(t) \end{pmatrix}.$$
• Case III: $\alpha \ne \beta$ and $2\alpha \ne \beta$.
The renormalized system is
$$\begin{pmatrix} dV_1 \\ dV_2 \\ dV_3 \end{pmatrix} = \varepsilon^{m} \begin{pmatrix} \frac{\rho}{\alpha}(V_1 + \rho V_2) \\ -\frac{1}{\alpha}(V_1 + \rho V_2) \\ 0 \end{pmatrix} \diamond W^H(t)\,dt$$
and the approximate solution is
$$\begin{pmatrix} \bar x_1(t) \\ \bar x_2(t) \\ \bar x_3(t) \end{pmatrix} = \begin{pmatrix} V_1(t) \\ e^{-\alpha t} V_2(t) \\ e^{-\beta t} V_3(t) \end{pmatrix} + \varepsilon \begin{pmatrix} \frac{\rho}{\alpha\beta} e^{-\beta t} V_1(t) V_3(t) + \frac{\rho^2}{\alpha(\alpha+\beta)} e^{-(\alpha+\beta) t} V_2(t) V_3(t) \\ \frac{1}{\alpha(\alpha-\beta)} e^{-\beta t} V_1(t) V_3(t) - \frac{\rho}{\alpha\beta} e^{-(\alpha+\beta) t} V_2(t) V_3(t) \\ \frac{1}{\beta} V_1^2(t) + \frac{\rho-1}{\beta-\alpha} e^{-\alpha t} V_1(t) V_2(t) - \frac{\rho}{\beta-2\alpha} e^{-2\alpha t} V_2^2(t) \end{pmatrix}.$$
Next, we take Case III as an example to illustrate the validity of our results; similar numerical comparisons can be made in the other two cases. As in the first example, we use the Euler–Maruyama method [29] via MATLAB (v. 2024) to simulate the asymptotic solution of (16) with $x_1(0) = 0.2$, $x_2(0) = 0.1$, $x_3(0) = 0.1$, $H = 0.75$, $\rho = 0.1$, $\alpha = 1.1$, $\beta = 1.25$ and $m = 1.2$. Denote by $E(t) = x(t) - \bar x(t)$ the error between the asymptotic solution and the exact one.
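A sketch of the Case III comparison (Python again, reusing the hypothetical fbm_path helper from the earlier sketch; the Wick product is replaced by an ordinary product in the explicit Euler step, which is only a crude pathwise proxy for the Wick–Itô–Skorohod integral, and the step count and seed are my own choices):

```python
import numpy as np

def lorenz_case3(eps, n=4000, T=5.0, H=0.75, rho=0.1, beta=1.25, m=1.2, seed=0):
    alpha = 1.0 + rho
    rng = np.random.default_rng(seed)
    B, t = fbm_path(n, T, H, rng)            # fBm sketch from Section 2
    dW = np.diff(B); dt = T / n

    def F(x):        # order-eps nonlinearity of system (16)
        x1, x2, x3 = x
        return np.array([-(rho/alpha)*(x1+rho*x2)*x3,
                         (1/alpha)*(x1+rho*x2)*x3,
                         (x1+rho*x2)*(x1-x2)])

    def G(x):        # noise coefficient of system (16)
        x1, x2, x3 = x
        return np.array([(rho/alpha)*(x1+rho*x2), -(1/alpha)*(x1+rho*x2), 0.0])

    A = np.diag([0.0, -alpha, -beta])
    x = np.array([0.2, 0.1, 0.1]); V = x.copy()
    X = [x.copy()]; Vs = [V.copy()]
    for k in range(n):                        # explicit Euler step for (16) and for the RG system
        x = x + dt*(A @ x + eps*F(x)) + eps**m*G(x)*dW[k]
        V = V + eps**m*G(V)*dW[k]             # Case III: R_F = 0
        X.append(x.copy()); Vs.append(V.copy())
    X = np.array(X); Vs = np.array(Vs)

    V1, V2, V3 = Vs[:, 0], Vs[:, 1], Vs[:, 2]
    e_a, e_b = np.exp(-alpha*t), np.exp(-beta*t)
    xbar = np.column_stack([
        V1 + eps*((rho/(alpha*beta))*e_b*V1*V3 + (rho**2/(alpha*(alpha+beta)))*e_a*e_b*V2*V3),
        e_a*V2 + eps*((1/(alpha*(alpha-beta)))*e_b*V1*V3 - (rho/(alpha*beta))*e_a*e_b*V2*V3),
        e_b*V3 + eps*((1/beta)*V1**2 + ((rho-1)/(beta-alpha))*e_a*V1*V2
                      - (rho/(beta-2*alpha))*np.exp(-2*alpha*t)*V2**2),
    ])
    return t, np.linalg.norm(X - xbar, axis=1)

t, err = lorenz_case3(1e-2)
print(err.max())
```

The assembled $\bar x(t)$ follows the Case III formula above; the maximum of this pathwise error plays the role of the maximum error reported in Figure 6.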
For $\varepsilon = 10^{-2}$, we plot the exact and approximate solutions in Figure 4a. It can be seen from Figure 4b that the error ($\|E(t)\|_{L^2} \in [0, 1.6 \times 10^{-3}]$) decreases with the passage of time.
For $\varepsilon = 10^{-4}$, Figure 5a shows a much smaller difference between $x(t)$ and $\bar x(t)$. Figure 5b also depicts the evolution of the error ($\|E(t)\|_{L^2} \in [0, 8 \times 10^{-4}]$), which indicates that the solution obtained by the RG method serves well as an approximate solution of the original equation.
Additionally, to obtain the distribution of the maximum error, we generate 1000 independent sets of fractional white noise and simulate the solution of the equation for each of them. Figure 6 illustrates the histogram of the maximum error under the two perturbation parameters.
Remark 1. 
Unlike neural networks and deep learning or machine learning algorithms (for example, GA-KELM [31]), our method needs neither training samples nor parameter optimization to control the complexity of the system and avoid over-fitting. As can be seen from the above two examples, the RG method is effective and the resulting approximation is very accurate.

5. Conclusions

This paper investigated a class of singularly perturbed stochastic differential equations in infinite dimensions driven by fractional white noise, within the framework of white noise analysis. The stochastic integration was understood in the Wick–Itô–Skorohod sense. Instead of seeking the exact solution, we used the RG method to obtain an asymptotic solution and proved that this asymptotic solution is uniformly valid. Compared with systems driven by classical Brownian motion, our results are non-trivial and significantly different.
The contributions of this paper are mainly as follows:
(1)
We eliminated the inconsistent (secular) terms from the direct expansion solution, obtained a renormalized system and derived an effective approximation to the solution of the original equation.
(2)
We proved that the error between the approximate solution and the exact solution remains of order $O(\varepsilon^{1/2})$.
Different from numerical methods and machine learning algorithms, we construct the approximate solutions from a different point of view. It is hoped that our research can provide a new idea for finding approximate solutions of stochastic differential equations. For future work, it would be meaningful to apply the RG method to stochastic differential equations driven by other noises, such as a mixture of fractional white noise and white noise, or Lévy noise.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The author would like to thank the editor and three anonymous reviewers for their criticisms and suggestions, which have greatly improved this article. The author also wishes to thank Shaoyun Shi, Wenlei Li and Zhiguo Xu for their useful comments and help.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Proposition A1 
([10]). There exist constants $C > 0$ and $r > 0$ such that
$$|\xi_n(x)| \le \begin{cases} C\, n^{-1/12}, & |x| \le 2\sqrt{n}, \\ C\, e^{-r x^2}, & |x| > 2\sqrt{n}, \end{cases}$$
for all $n$.
for all n.
Proposition A2. 
Let $B$ be a bounded linear operator on $V$ satisfying Hypothesis 1(ii), and let $B^H(t)$ be the cylindrical fractional Brownian motion with $H \in (1/2,1)$. For any $\theta \in S(V)_{-1,-3}$, we have
$$\lim_{t \to \infty} \frac{\|B\theta \diamond B^H(t)\|^2_{-1,-3}}{t^{2H-1}} = C,$$
where $C > 0$ is a constant.
Proof. 
Case I. θ is a V -valued deterministic constant.
Suppose that $\theta = \sum_{i=1}^{\infty} c_i e_i \in S(V)_{-1,-3}$ with $c_i \in \mathbb{R}$ and $\{e_i\}$ the orthonormal basis of $V$. Then, $B\theta = \sum_{i=1}^{\infty} \lambda_i c_i e_i$, $\lambda_i \in \mathbb{R}$. We have
$$B\theta \diamond B^H(t) = B\theta \cdot B^H(t) = \sum_{i=1}^{\infty} \sum_{k=1}^{\infty} \delta_{k,n(i,j)}\, \lambda_i c_i \Big( \int_0^t M\xi_j(s)\,ds \Big) H_{\sigma^k} e_i$$
and the norm
$$\|B\theta \cdot B^H(t)\|^2_{-1,-3} = \sum_{i=1}^{\infty} \sum_{k=1}^{\infty} \delta_{k,n(i,j)}\, (\lambda_i c_i)^2 \Big( \int_0^t M\xi_j(s)\,ds \Big)^2 (2k)^{-3}. \qquad (A1)$$
We claim that $\big( \int_0^t M\xi_j(s)\,ds \big)^2$ is proportional to $t^{2H-1}$ when $t > 1$. In fact, for any fixed $s \in [0,t]$, $H \in (1/2,1)$ and $j \in \mathbb{N}$,
$$\int_{\mathbb{R}} \xi_j(u) |u-s|^{H-\frac{3}{2}}\,du = \int_{-\infty}^{0} \xi_j(u) (s-u)^{H-\frac{3}{2}}\,du + \int_{0}^{s} \xi_j(u) (s-u)^{H-\frac{3}{2}}\,du + \int_{s}^{\infty} \xi_j(u) (u-s)^{H-\frac{3}{2}}\,du =: I_1 + I_2 + I_3.$$
For the first term, by using Proposition A1, there exist $C > 0$ and $r > 0$ such that
$$|I_1| \le \int_0^{2\sqrt{j}} |\xi_j(-v)| (s+v)^{H-\frac{3}{2}}\,dv + \int_{2\sqrt{j}}^{\infty} |\xi_j(-v)| (s+v)^{H-\frac{3}{2}}\,dv \le C \int_0^{2\sqrt{j}} j^{-\frac{1}{12}} (s+v)^{H-\frac{3}{2}}\,dv + C \int_{2\sqrt{j}}^{\infty} e^{-r v^2} (s+v)^{H-\frac{3}{2}}\,dv \le C_1.$$
For the second term, we have
$$|I_2| \le \int_0^{s} |\xi_j(u)| (s-u)^{H-\frac{3}{2}}\,du \le C \int_0^{s} (s-u)^{H-\frac{3}{2}}\,du \le C_2.$$
For the last term, since $\|\xi_j\|_{L^1(\mathbb{R})} = O(j^{1/4})$ (see, e.g., [26], p. 208), we have
$$|I_3| \le \int_s^{s+1} |\xi_j(u)| (u-s)^{H-\frac{3}{2}}\,du + \int_{s+1}^{\infty} |\xi_j(u)| (u-s)^{H-\frac{3}{2}}\,du \le C \int_s^{s+1} (u-s)^{H-\frac{3}{2}}\,du + \int_{s+1}^{\infty} |\xi_j(u)|\,du \le C_3.$$
Here, $C_1$, $C_2$ and $C_3$ are constants that depend on $s$, $H$ and $j$.
Therefore, $\int_{\mathbb{R}} \xi_j(u) |u-s|^{H-3/2}\,du$ is integrable. By exchanging the order of integration, we have
$$\int_0^t M\xi_j(s)\,ds = C_H \int_{\mathbb{R}} \xi_j(u) \int_0^t |u-s|^{H-\frac{3}{2}}\,ds\,du = \tilde C_H \bigg( \int_0^{+\infty} \xi_j(u)\, u^{H-\frac{1}{2}}\,du - \int_t^{+\infty} \xi_j(u)\, (u-t)^{H-\frac{1}{2}}\,du - \int_{-\infty}^{0} \xi_j(u)\, (-u)^{H-\frac{1}{2}}\,du + \int_{-\infty}^{t} \xi_j(u)\, (t-u)^{H-\frac{1}{2}}\,du \bigg) =: \tilde C_H \big( J_1 + J_2 + J_3 + J_4 \big), \qquad$$
where $\tilde C_H = 2 C_H / (2H-1)$.
In what follows, we prove that $J_1$, $J_2$ and $J_3$ are uniformly bounded with respect to $t \in \mathbb{R}$. By applying Proposition A1, there exist $C > 0$ and $r > 0$ such that
$$|J_1| \le \int_0^{2\sqrt{j}} |\xi_j(u)|\, u^{H-\frac{1}{2}}\,du + \int_{2\sqrt{j}}^{+\infty} |\xi_j(u)|\, u^{H-\frac{1}{2}}\,du \le C \int_0^{2\sqrt{j}} j^{-\frac{1}{12}}\, u^{H-\frac{1}{2}}\,du + C \int_{2\sqrt{j}}^{+\infty} e^{-r u^2}\, u^{H-\frac{1}{2}}\,du \le C\, j^{\frac{H}{2}+\frac{1}{6}}.$$
Similarly to the above analysis, we have
$$|J_2| \le \int_t^{+\infty} |\xi_j(u)|\, (u-t)^{H-\frac{1}{2}}\,du \le \int_t^{+\infty} |\xi_j(u)|\, u^{H-\frac{1}{2}}\,du \le C\, j^{\frac{H}{2}+\frac{1}{6}}$$
and
$$|J_3| \le \int_0^{\infty} |\xi_j(-v)|\, v^{H-\frac{1}{2}}\,dv \le C\, j^{\frac{H}{2}+\frac{1}{6}}.$$
Finally, we analyze $J_4$:
$$J_4 = \int_{-\infty}^{-2\sqrt{j}} \xi_j(u)\, (t-u)^{H-\frac{1}{2}}\,du + \int_{-2\sqrt{j}}^{t} \xi_j(u)\, (t-u)^{H-\frac{1}{2}}\,du =: K_1 + K_2.$$
For $t \in [0,1]$, $J_4$ is bounded. Indeed, we have
$$|K_1| \le \int_{2\sqrt{j}}^{\infty} |\xi_j(-v)|\, (t+v)^{H-\frac{1}{2}}\,dv \le C \int_{2\sqrt{j}}^{\infty} e^{-r v^2}\, (2v)^{H-\frac{1}{2}}\,dv \le C$$
and
$$|K_2| \le \int_{-2\sqrt{j}}^{t} |\xi_j(u)|\, (t-u)^{H-\frac{1}{2}}\,du \le C \int_{-2\sqrt{j}}^{t} j^{-\frac{1}{12}}\, (t-u)^{H-\frac{1}{2}}\,du \le C\, j^{\frac{H}{2}+\frac{1}{6}}.$$
For $t > 1$, we have
$$K_1 = t^{H-\frac{1}{2}} \int_{-\infty}^{-2\sqrt{j}} \xi_j(u) \Big(1 - \frac{u}{t}\Big)^{H-\frac{1}{2}}\,du = t^{H-\frac{1}{2}} \int_{2\sqrt{j}}^{\infty} \xi_j(-v) \Big(1 + \frac{v}{t}\Big)^{H-\frac{1}{2}}\,dv =: t^{H-\frac{1}{2}} K_1',$$
where
$$|K_1'| \le \int_{2\sqrt{j}}^{\infty} |\xi_j(-v)|\, (2v)^{H-\frac{1}{2}}\,dv \le C,$$
which means that $K_1'$ is uniformly bounded. Furthermore,
$$K_2 = t^{H-\frac{1}{2}} \int_{-2\sqrt{j}}^{t} \xi_j(u) \Big(1 - \frac{u}{t}\Big)^{H-\frac{1}{2}}\,du =: t^{H-\frac{1}{2}} K_2',$$
where
$$|K_2'| \le C \int_{-2\sqrt{j}}^{t} |\xi_j(u)|\, j^{\frac{H}{2}-\frac{1}{4}}\,du \le C\, j^{\frac{H}{2}} \qquad \big(\text{by using } \|\xi_j\|_{L^1(\mathbb{R})} = O(j^{1/4})\big).$$
To summarize, for $t > 1$, Formula (A1) mainly consists of finitely many terms represented by the following two types:
$$L_1 = t^{2H-1} \sum_{i=1}^{\infty} \sum_{k=1}^{\infty} \delta_{k,n(i,j)}\, \hat C_i\, (K_1')^2\, (2k)^{-3}, \qquad L_2 = \sum_{i=1}^{\infty} \sum_{k=1}^{\infty} \delta_{k,n(i,j)}\, \breve C_i\, J_1^2\, (2k)^{-3},$$
where $\hat C_i$ and $\breve C_i$ are constants depending on $\lambda_i$, $c_i$ and $\tilde C_H$. Since, for any $H \in (1/2,1)$, $2(H/2 + 1/6) - 3 < -1$, we conclude that terms like $L_2$ are bounded. Therefore, the limit
$$\lim_{t \to \infty} \frac{\|B\theta \diamond B^H(t)\|^2_{-1,-3}}{t^{2H-1}} = C$$
exists with $C > 0$.
Case II. θ is a V -valued random variable.
Suppose that $\theta = \sum_{i=1}^{\infty} c_i e_i \in S(V)_{-1,-3}$ with $c_i = \sum_{\alpha \in \mathcal{T}} c_{i\alpha} H_\alpha$. Then,
$$B\theta = \sum_{i=1}^{\infty} \sum_{\alpha \in \mathcal{T}} \lambda_i c_{i\alpha} H_\alpha e_i, \qquad \lambda_i \in \mathbb{R}.$$
We have
$$B\theta \diamond B^H(t) = \sum_{i=1}^{\infty} \sum_{\gamma \in \mathcal{T}} \sum_{\alpha+\sigma^k=\gamma} \delta_{k,n(i,j)}\, \lambda_i c_{i\alpha} \Big( \int_0^t M\xi_j(s)\,ds \Big) H_\gamma e_i$$
and the norm
$$\|B\theta \diamond B^H(t)\|^2_{-1,-3} = \sum_{i=1}^{\infty} \sum_{\gamma \in \mathcal{T}} \Big( \sum_{\alpha+\sigma^k=\gamma} \delta_{k,n(i,j)}\, \lambda_i c_{i\alpha} \int_0^t M\xi_j(s)\,ds \Big)^2 (2\mathbb{N})^{-3\gamma}.$$
Since $\mathcal{T}$ is a set of sequences of non-negative integers with only finitely many nonzero components,
$$\Big( \sum_{\alpha+\sigma^k=\gamma} \delta_{k,n(i,j)}\, \lambda_i c_{i\alpha} \int_0^t M\xi_j(s)\,ds \Big)^2$$
is a sum of finitely many terms. Accordingly, the essence of the problem is again to discuss the asymptotic behavior of $\big( \int_0^t M\xi_j(s)\,ds \big)^2$. The rest of the discussion is similar to that for Case I. □
Remark A1. 
If we replace $B^H(t)$ in Proposition A2 with a Brownian motion $B(t)$, then $\|B\theta \diamond B(t)\|^2_{-1,-3}$ is uniformly bounded. This is because the term $\int_0^t M\xi_j(s)\,ds$ in (A1) is replaced by $\int_0^t \xi_j(s)\,ds$, and the uniform boundedness of the latter follows from $\|\xi_j\|_{L^1(\mathbb{R})} = O(j^{1/4})$.

References

  1. Mandelbrot, B.B.; Van Ness, J.W. Fractional Brownian motions, fractional noises and applications. SIAM Rev. 1968, 10, 422–437.
  2. Hurst, H.E. Long-term storage capacity of reservoirs. Trans. Am. Soc. Civ. Eng. 1951, 116, 770–799.
  3. Leland, W.E.; Willinger, W.; Taqqu, M.S.; Wilson, D.V. On the self-similar nature of Ethernet traffic. Comput. Commun. Rev. 1995, 25, 202–213.
  4. Hu, Y.; Øksendal, B. Fractional white noise calculus and applications to finance. Infin. Dimens. Anal. Quantum 2003, 6, 1–32.
  5. Duncan, T.E.; Hu, Y.; Pasik-Duncan, B. Stochastic calculus for fractional Brownian motion I. Theory. SIAM J. Control Optim. 2000, 38, 582–612.
  6. Mishura, Y.; Nualart, D. Weak solutions for stochastic differential equations with additive fractional noise. Statist. Probab. Lett. 2004, 70, 253–261.
  7. Hu, Y.; Nualart, D.; Song, X. A singular stochastic differential equation driven by fractional Brownian motion. Statist. Probab. Lett. 2008, 78, 2075–2085.
  8. Ferrario, B.; Olivera, C. Lp-solutions of the Navier-Stokes equation with fractional Brownian noise. AIMS Math. 2018, 3, 539–553.
  9. Mohammed, W.W.; Al-Askar, F.M.; Cesarano, C. The analytical solutions of the stochastic mKdV equation via the mapping method. Mathematics 2022, 10, 4212.
  10. Biagini, F.; Hu, Y.; Øksendal, B.; Zhang, T. Stochastic Calculus for Fractional Brownian Motion and Applications; Springer: London, UK, 2008.
  11. Duncan, T.E.; Maslowski, B.; Pasik-Duncan, B. Stochastic equations in Hilbert space with a multiplicative fractional Gaussian noise. Stoch. Proc. Appl. 2005, 115, 1357–1383.
  12. Ohashi, A.M.F. Stochastic evolution equations driven by a fractional white noise. Stoch. Anal. Appl. 2006, 24, 555–578.
  13. Maslowski, B.; Šnupárková, J. Stochastic affine evolution equations with multiplicative fractional noise. Appl. Math. 2018, 63, 7–35.
  14. Shahnazi-Pour, A.; Moghaddam, B.P.; Babaei, A. Numerical simulation of the Hurst index of solutions of fractional stochastic dynamical systems driven by fractional Brownian motion. J. Comput. Appl. Math. 2021, 386, 113210.
  15. Banihashemi, S.; Jafari, H.; Babaei, A. A stable collocation approach to solve a neutral delay stochastic differential equation of fractional order. J. Comput. Appl. Math. 2022, 403, 113845.
  16. Nayfeh, A.H. Perturbation Methods; John Wiley: Hoboken, NJ, USA, 1973.
  17. Kabanov, Y.M.; Pergamenshchikov, S.M. Singular perturbations of stochastic differential equations. Mat. Sb. 1990, 181, 1170–1182.
  18. Holmes, M.H. Introduction to Perturbation Methods; Springer: New York, NY, USA, 2012.
  19. Glatt-Holtz, N.; Ziane, M. Singular perturbation systems with stochastic forcing and the renormalization group method. Discrete Contin. Dyn. Syst. 2010, 26, 1241–1268.
  20. Chen, L.Y.; Goldenfeld, N.; Oono, Y. Renormalization group and singular perturbations: Multiple scales, boundary layers, and reductive perturbation theory. Phys. Rev. E 1996, 54, 376–394.
  21. Qu, S.; Li, W.; Shi, S. Renormalization group approach to SDEs with nonlinear diffusion terms. Mediterr. J. Math. 2021, 18, 183.
  22. Ziane, M. On a certain renormalization group method. J. Math. Phys. 2000, 41, 3290–3299.
  23. DeVille, R.L.; Harkin, A.; Holzer, M.; Josić, K.; Kaper, T.J. Analysis of a renormalization group method and normal form theory for perturbed ordinary differential equations. Phys. D 2008, 237, 1029–1052.
  24. Kirkinis, E. The renormalization group: A perturbation method for the graduate curriculum. SIAM Rev. 2012, 54, 374–388.
  25. Guo, L.; Chen, Y.; Shi, S.; West, B.J. Renormalization group and fractional calculus methods in a complex world: A review. Fract. Calc. Appl. Anal. 2021, 24, 5–53.
  26. Holden, H.; Øksendal, B.; Ubøe, J.; Zhang, T. Stochastic Partial Differential Equations; Springer: Boston, MA, USA; Birkhäuser: Basel, Switzerland, 1996.
  27. Pazy, A. Semigroups of Linear Operators and Applications to Partial Differential Equations; Springer: New York, NY, USA, 1983.
  28. Goldenfeld, N.; Martin, O.; Oono, Y.; Liu, F. Anomalous dimensions and the renormalization group in a nonlinear diffusion process. Phys. Rev. Lett. 1990, 64, 1361–1364.
  29. Higham, D.J. An algorithmic introduction to numerical simulation of stochastic differential equations. SIAM Rev. 2001, 43, 525–546.
  30. Namachchivaya, N.S.; Lin, Y.K. Method of stochastic normal forms. Int. J. Nonlin. Mech. 1991, 26, 931–943.
  31. Chai, W.; Zheng, Y.; Tian, L.; Qin, J.; Zhou, T. GA-KELM: Genetic-algorithm-improved kernel extreme learning machine for traffic flow forecasting. Mathematics 2023, 11, 3574.
Figure 1. (a) Comparison between the exact solution and the approximate solution, $u_1(0) = 1$, $u_2(0) = 1$, $H = 0.75$, $a_1 = 1$, $a_2(t) = \sin(t)$, $b = 10$, $\alpha = 1.2$, $\varepsilon = 10^{-2}$. (b) Time evolution of the error $\|E(t)\|_{L^2} \equiv \|u(t) - \bar u(t)\|_{L^2(\mu;\mathbb{R}^2)}$.
Figure 2. (a) Comparison between the exact solution and the approximate solution, $u_1(0) = 1$, $u_2(0) = 1$, $H = 0.75$, $a_1 = 1$, $a_2(t) = \sin(t)$, $b = 10$, $\alpha = 1.2$, $\varepsilon = 10^{-4}$. (b) Time evolution of the error $\|E(t)\|_{L^2} \equiv \|u(t) - \bar u(t)\|_{L^2(\mu;\mathbb{R}^2)}$.
Figure 3. Histogram of the maximum error distribution. (a) The mean is $8.1 \times 10^{-3}$. (b) The mean is $4.6 \times 10^{-3}$.
Figure 4. (a) Comparison between the exact solution and the approximate solution, $x_1(0) = 0.2$, $x_2(0) = 0.1$, $x_3(0) = 0.1$, $H = 0.75$, $\rho = 0.1$, $\beta = 1.25$, $m = 1.2$, $\varepsilon = 10^{-2}$. (b) Time evolution of the error $\|E(t)\|_{L^2} \equiv \|x(t) - \bar x(t)\|_{L^2(\mu;\mathbb{R}^3)}$.
Figure 5. (a) Comparison between the exact solution and the approximate solution, $x_1(0) = 0.2$, $x_2(0) = 0.1$, $x_3(0) = 0.1$, $H = 0.75$, $\rho = 0.1$, $\beta = 1.25$, $m = 1.2$, $\varepsilon = 10^{-4}$. (b) Time evolution of the error $\|E(t)\|_{L^2} \equiv \|x(t) - \bar x(t)\|_{L^2(\mu;\mathbb{R}^3)}$.
Figure 6. Histogram of the maximum error distribution. (a) The mean is $1.6 \times 10^{-3}$. (b) The mean is $7.8 \times 10^{-4}$.

