Article

Fourth Cumulant Bound of Multivariate Normal Approximation on General Functionals of Gaussian Fields

Division of Data Science and Data Science Convergence Research Center, College of Information Science, Hallym University, Chuncheon 200-702, Korea
*
Author to whom correspondence should be addressed.
Submission received: 25 March 2022 / Revised: 14 April 2022 / Accepted: 15 April 2022 / Published: 18 April 2022
(This article belongs to the Special Issue Probability, Stochastic Processes and Optimization)

Abstract: We develop a technique for obtaining the fourth moment bound on the normal approximation of F, where F is an $\mathbb{R}^d$-valued random vector whose components are functionals of Gaussian fields. This study transcends the case of vectors of multiple stochastic integrals, which has been the subject of research so far. We perform this task by investigating the relationship between the expectations of the two operators $\Gamma$ and $\Gamma^*$. Here, the operator $\Gamma$ was introduced in Noreddine and Nourdin (2011) [On the Gaussian approximation of vector-valued multiple integrals. J. Multivariate Anal.], and $\Gamma^*$ is a multi-dimensional version of the operator used in Kim and Park (2018) [An Edgeworth expansion for functionals of Gaussian fields and its applications. Stoch. Process. Appl.]. In the specific case where F is a vector of multiple stochastic integrals, the conditions required in the general case for the fourth moment bound are naturally satisfied, and our method yields a better estimate than that obtained by the previous methods. In the case of $d = 1$, the method developed here shows that, even for general functionals of Gaussian fields, the fourth moment theorem holds without the conditions needed in the multi-dimensional case.

1. Introduction

For a given real separable Hilbert space $\mathfrak{H}$, we write $X = \{X(h),\, h \in \mathfrak{H}\}$ to indicate an isonormal Gaussian process defined on a probability space $(\Omega, \mathcal{F}, P)$. Let $\{F_n,\, n \ge 1\}$ be a sequence of random variables given by functionals of the Gaussian fields associated with X. The authors in [1] discovered a central limit theorem (CLT), known as the fourth moment theorem, for a sequence of random variables belonging to a fixed Wiener chaos.
Theorem 1.
[Fourth moment theorem] Let $\{F_n,\, n \ge 1\}$ be a sequence of random variables belonging to the $q$ ($\ge 2$)th Wiener chaos with $E[F_n^2] = 1$ for all $n \ge 1$. Then, $F_n \xrightarrow{\mathcal{L}} Z$ if and only if $E[F_n^4] \to 3$, where Z is a standard normal random variable and the notation $\xrightarrow{\mathcal{L}}$ means convergence in distribution.
Such a result provides a remarkable simplification of the method of moments or cumulants. In [2], the fourth moment theorem is expressed in terms of the Malliavin derivative. However, the results given in [1,2] do not provide any estimates, whereas the authors in [3] find an upper bound for various distances by combining Malliavin calculus (see, e.g., [4,5,6]) and Stein’s method for normal approximation (see, e.g., [7,8,9]). Moreover, the authors in [10,11] obtain optimal Berry–Esseen bounds as a further refinement of the main results proven in [3] (see, e.g., [12] for a short survey).
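As a quick numerical illustration (not part of the original paper), the classical element of the second Wiener chaos is the normalized quadratic variation $F_n = \frac{1}{\sqrt{2n}}\sum_{i=1}^{n}(X_i^2 - 1)$ for i.i.d. standard normal $X_i$. The sketch below, with sample sizes chosen for illustration only, checks by Monte Carlo that the empirical fourth moment approaches 3, as the theorem predicts:

```python
import numpy as np

rng = np.random.default_rng(0)

def chaos2_sample(n, n_sims):
    """Simulate F_n = (1/sqrt(2n)) * sum_i (X_i^2 - 1), which lives in the
    second Wiener chaos and satisfies E[F_n^2] = 1, E[F_n^4] = 3 + 12/n."""
    x = rng.standard_normal((n_sims, n))
    return (x ** 2 - 1).sum(axis=1) / np.sqrt(2 * n)

samples = chaos2_sample(n=100, n_sims=50_000)
m2 = np.mean(samples ** 2)   # close to 1 (exact value is 1)
m4 = np.mean(samples ** 4)   # close to 3 (exact value is 3 + 12/n)
print(m2, m4)
```

As n grows, the empirical fourth moment tends to 3 and the histogram of the samples approaches the standard normal density.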
The key step in the proof of the fourth moment theorem is to show the following inequality:
$$\mathrm{Var}\big(\langle DF, -DL^{-1}F\rangle_{\mathfrak H}\big) \le c\,\big(E[F^4] - 3(E[F^2])^2\big),$$
where $DF$ is the Malliavin derivative of F and $L^{-1}$ is the pseudo-inverse of the Ornstein–Uhlenbeck generator L (see Section 2). In the particular case where $F = I_q(f)$, $f \in \mathfrak H^{\odot q}$, with $E[F^2] = 1$, the bound in (1) is given by
$$d_{\mathrm{Kol}}(F, Z) \le \sqrt{\mathrm{Var}\big(\langle DF, -DL^{-1}F\rangle_{\mathfrak H}\big)} \le \sqrt{\frac{q-1}{3q}}\sqrt{E[F^4] - 3},$$
where $d_{\mathrm{Kol}}$ stands for the Kolmogorov distance.
Other research in this line can be found in [13] for multiple Wigner integrals in a fixed order of free Wigner chaos, and in [14,15,16] for multi-dimensional vectors of multiple stochastic integrals such that each component belongs to a fixed order of Wiener chaos. In particular, new techniques for the proof of the fourth moment theorem can also be found in [17,18,19]. In [19], the authors prove this theorem by using the asymptotic independence between blocks of multiple stochastic integrals. At this point, it is important to mention that all of these approaches deal only with random variables in a fixed chaos, and thus do not cover random variables that are not part of some chaos. For this reason, we are interested in the conditions under which the property (2) holds for generalized random variables that are not in a fixed Wiener chaos.
In this paper, we develop a method for finding a bound on the multivariate normal approximation of a random vector F for which the fourth moment theorem holds even when F is a d-dimensional random vector whose components are general functionals of Gaussian fields. By applying this method to a random vector whose components belong to some Wiener chaos, we derive the fourth moment theorem with a sharper upper bound than the previous one given in Theorem 4.3 of [19].
Differently from the fourth moment theorems for functionals of Gaussian fields studied so far, the findings of our research represent a further extension and refinement of the fourth moment theorem, in the sense that (i) they do not require the components of the random vector involved to belong to some Wiener chaos, and (ii) the constant factor multiplying the fourth cumulant may be significantly improved. The main aim of this paper is to discover under what conditions the fourth moment bound holds for vector-valued general functionals of Gaussian fields, each component of which need not belong to some Wiener chaos. In the case of vector-valued multiple integrals, the conditions of the fourth moment theorem are quite naturally satisfied.
On the other hand, in the case of $d = 1$, the application of the method developed here shows that, even in the case of general functionals of Gaussian fields, the fourth moment theorem holds without any of the conditions needed for the case $d \ge 2$. The only necessary condition is that the fourth cumulant is non-zero. The result in the one-dimensional case differs from the result obtained by substituting $d = 1$ into the multi-dimensional case. For these reasons, we will see how the random vector case can be reformulated in the one-dimensional case.
Our paper is organized in the following way. Section 2 contains some basic notions of Malliavin calculus. Section 3 is devoted to developing a method for obtaining the fourth moment bound for an $\mathbb{R}^d$-valued random vector whose components are functionals of Gaussian fields. In Section 4, we show the fourth moment theorem by applying the new method developed in Section 3 to vector-valued multiple stochastic integrals. In Section 5, we describe how the random vector case can be reformulated in the one-dimensional case.

2. Preliminaries

In this section, we describe some basic facts on Malliavin calculus for Gaussian processes. For a more detailed explanation of this subject, see [4,5]. Fix a real separable Hilbert space $\mathfrak H$ with inner product denoted by $\langle\cdot,\cdot\rangle_{\mathfrak H}$. Let $B = \{B(h),\, h \in \mathfrak H\}$ be an isonormal Gaussian process, that is, a centered Gaussian family of random variables such that $E[B(h)B(g)] = \langle h, g\rangle_{\mathfrak H}$. If $H_q$ is the qth Hermite polynomial, then the closed linear subspace of $L^2(\Omega)$, denoted by $\mathcal H_q$, generated by $\{H_q(B(h)) : h \in \mathfrak H,\, \|h\|_{\mathfrak H} = 1\}$ is called the qth Wiener chaos of B.
We define a linear isometric mapping $I_q : \mathfrak H^{\odot q} \to \mathcal H_q$ by $I_q(h^{\otimes q}) = q!\,H_q(B(h))$, where $\mathfrak H^{\odot q}$ is the symmetric qth tensor product. It is well known that any square integrable random variable $F \in L^2(\Omega, \mathcal G, P)$, where $\mathcal G$ denotes the $\sigma$-field generated by B, admits a series expansion in multiple stochastic integrals:
$$F = \sum_{q=0}^{\infty} I_q(f_q),$$
where the series converges in $L^2(\Omega)$ and the functions $f_q \in \mathfrak H^{\odot q}$, $q \ge 0$, are uniquely determined with $f_0 = E[F]$.
Let $\{e_i,\, i = 1, 2, \ldots\}$ be a complete orthonormal system of the Hilbert space $\mathfrak H$. For $f \in \mathfrak H^{\odot p}$ and $g \in \mathfrak H^{\odot q}$, the contraction $f \otimes_r g$ of f and g, $r \in \{0, 1, \ldots, p \wedge q\}$, is the element of $\mathfrak H^{\otimes(p+q-2r)}$ defined by
$$f \otimes_r g = \sum_{i_1, \ldots, i_r = 1}^{\infty} \langle f, e_{i_1} \otimes \cdots \otimes e_{i_r}\rangle_{\mathfrak H^{\otimes r}} \otimes \langle g, e_{i_1} \otimes \cdots \otimes e_{i_r}\rangle_{\mathfrak H^{\otimes r}}.$$
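For tensors over a finite-dimensional $\mathfrak H = \mathbb R^n$ with the standard basis, the contraction reduces to summing over r shared indices, which `numpy.einsum` expresses directly. The following small sketch (an illustration of mine, not code from the paper) takes $p = q = 2$, where the 1-contraction of symmetric matrices is the matrix product and the 2-contraction is the Frobenius inner product:

```python
import numpy as np

rng = np.random.default_rng(1)

# symmetric elements f, g of the 2-fold symmetric tensor product, with H = R^4
a, b = rng.standard_normal((2, 4, 4))
f, g = (a + a.T) / 2, (b + b.T) / 2

# f (x)_1 g: contract one shared index; for matrices this is a matrix product
contr1 = np.einsum('ik,jk->ij', f, g)
# f (x)_2 g: contract both indices; this is the Frobenius inner product <f, g>
contr2 = np.einsum('ij,ij->', f, g)
```

The same `einsum` pattern extends to higher-order tensors by adding one summed index letter per contracted slot.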
The product formula for the multiple stochastic integrals is given below.
Proposition 1.
If $f \in \mathfrak H^{\odot p}$ and $g \in \mathfrak H^{\odot q}$, then
$$I_p(f)\,I_q(g) = \sum_{r=0}^{p \wedge q} r!\binom{p}{r}\binom{q}{r} I_{p+q-2r}(f \otimes_r g).$$
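In the one-dimensional case $\mathfrak H = \mathbb R$, the product formula reduces to the classical linearization identity for probabilists' Hermite polynomials, $H_p H_q = \sum_r r!\binom{p}{r}\binom{q}{r} H_{p+q-2r}$. The following check of that reduction (my own illustration, not code from the paper) uses NumPy's HermiteE basis:

```python
import numpy as np
from math import comb, factorial
from numpy.polynomial import hermite_e as He

def product_coeffs(p, q):
    """He-basis coefficients of He_p * He_q predicted by the product formula."""
    coeffs = np.zeros(p + q + 1)
    for r in range(min(p, q) + 1):
        coeffs[p + q - 2 * r] = factorial(r) * comb(p, r) * comb(q, r)
    return coeffs

p, q = 3, 2
basis_p = np.eye(p + 1)[p]               # He-basis vector representing He_p
basis_q = np.eye(q + 1)[q]
direct = He.hermemul(basis_p, basis_q)   # multiply directly in the He basis
print(direct)                            # -> [0. 6. 0. 6. 0. 1.]
```

Indeed, $H_3 H_2 = H_5 + 6H_3 + 6H_1$, matching the coefficients printed above.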
We denote by $\mathcal S$ the class of smooth and cylindrical random variables F of the form
$$F = f(B(\varphi_1), \ldots, B(\varphi_n)), \quad n \ge 1,$$
where $f \in C_b^{\infty}(\mathbb R^n)$ and $\varphi_i \in \mathfrak H$, $i = 1, \ldots, n$. For these random variables, the Malliavin derivative of F with respect to B is the element of $L^2(\Omega, \mathfrak H)$ defined as
$$DF = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}(B(\varphi_1), \ldots, B(\varphi_n))\,\varphi_i.$$
Let $\mathbb D^{q,p}$ be the closure of the class of smooth and cylindrical random variables with respect to the norm
$$\|F\|_{q,p}^p = E[|F|^p] + \sum_{k=1}^{q} E\big[\|D^k F\|^p_{\mathfrak H^{\otimes k}}\big].$$
Let $\delta$ be the adjoint of the Malliavin derivative D. The domain of $\delta$, denoted by $\mathrm{Dom}(\delta)$, is composed of those elements $u \in L^2(\Omega; \mathfrak H)$ such that there exists a constant C satisfying
$$|E[\langle DF, u\rangle_{\mathfrak H}]| \le C\,(E[|F|^2])^{1/2} \quad \text{for all } F \in \mathbb D^{1,2}.$$
If $u \in \mathrm{Dom}(\delta)$, then $\delta(u)$ is the element of $L^2(\Omega)$ defined by the following duality formula, called integration by parts:
$$E[F\,\delta(u)] = E[\langle DF, u\rangle_{\mathfrak H}] \quad \text{for all } F \in \mathbb D^{1,2}.$$
Recall that any square integrable random variable F can be expanded as $F = E[F] + \sum_{q=1}^{\infty} J_q(F)$, where $J_q$, $q = 0, 1, 2, \ldots$, is the projection of F onto $\mathcal H_q$. We say that F belongs to $\mathrm{Dom}(L)$ if $\sum_{q=1}^{\infty} q^2 E[J_q(F)^2] < \infty$. For such a random variable F, we define the operator $L = -\sum_{q=0}^{\infty} q J_q$, which coincides with the infinitesimal generator of the Ornstein–Uhlenbeck semigroup. Then, $F \in \mathrm{Dom}(L)$ if and only if $F \in \mathbb D^{1,2}$ and $DF \in \mathrm{Dom}(\delta)$, and, in this case, $\delta DF = -LF$. We also define the operator $L^{-1}$, called the pseudo-inverse of L, by $L^{-1}F = -\sum_{q=1}^{\infty} \frac{1}{q} J_q(F)$. Then, $L^{-1}$ takes values in $\mathbb D^{2,2}$, and $LL^{-1}F = F - E[F]$ for all $F \in L^2(\Omega)$.

3. Main Results

In this section, we will find a sufficient condition on the fourth moment bound for a vector-valued random variable whose components are functionals of Gaussian fields. It is important to note that these functionals of Gaussian fields do not necessarily belong to some Wiener chaos. The next lemma will play a fundamental role in this paper.
Lemma 1.
Suppose that $F \in \mathbb D^{1,2}$ and $G \in L^2(\Omega)$. Then, we have that $L^{-1}G \in \mathbb D^{2,2}$ and
$$E[FG] = E[F]E[G] + E[\langle -DL^{-1}G, DF\rangle_{\mathfrak H}].$$
A multi-index is a vector of non-negative integers of the form $\alpha = (\alpha_1, \ldots, \alpha_d)$. Then, we write
$$|\alpha| = \sum_{j=1}^{d} \alpha_j, \quad \partial_j = \frac{\partial}{\partial x_j}, \quad \partial^{\alpha} = \partial_1^{\alpha_1} \cdots \partial_d^{\alpha_d}, \quad x^{\alpha} = \prod_{i=1}^{d} x_i^{\alpha_i},$$
where $x = (x_1, \ldots, x_d)$. By convention, we set $0^0 = 1$.
For the rest of this section, we fix a random vector F = ( F 1 , , F d ) , d 2 .
Definition 1.
Assume that $E[|F^{\alpha}|] < \infty$ for some $\alpha \in \mathbb N^d \setminus \{0\}$. The joint cumulant of order $|\alpha|$ of F is defined by
$$\kappa_{\alpha}(F) = (-i)^{|\alpha|}\,\partial^{\alpha} \log \phi_F(t)\big|_{t=0}, \quad t \in \mathbb R^d,$$
where $\phi_F(t) = E\big[e^{i\langle t, F\rangle_{\mathbb R^d}}\big]$ is the characteristic function of F.
Suppose that $F_i \in \mathbb D^{1,2}$ for each $i = 1, \ldots, d$. Let $l_1, l_2, \ldots$ be a sequence taking values in $\{e_1, \ldots, e_d\}$, where $e_i$ is the multi-index of length d given by
$$e_i = (0, \ldots, 0, 1, 0, \ldots, 0),$$
with the 1 in the ith position. If $l_1 = e_i$, then $\Gamma^*_{l_1}(F) = F_i$. Suppose that $\Gamma^*_{l_1, \ldots, l_k}(F)$ is a well-defined random variable in $L^2(\Omega)$. We define
$$\Gamma^*_{l_1, \ldots, l_{k+1}}(F) = \langle -DL^{-1}F_{l_{k+1}}, D\Gamma^*_{l_1, \ldots, l_k}(F)\rangle_{\mathfrak H}.$$
For the multivariate Gamma operator Γ l 1 , , l k ( F ) , see Definition 4.2 in [14]. For simplicity, we will frequently write Γ i 1 , , i k * ( F ) and Γ i 1 , , i k ( F ) instead of Γ e i 1 , , e i k * ( F ) and Γ e i 1 , , e i k ( F ) , respectively.
Using the Gamma operators Γ l 1 , , l k of F, we can state a formula for the cumulants of any random vector F (see, e.g., [14,20]).
Lemma 2
(Noreddine and Nourdin). Let $\alpha = (\alpha_1, \ldots, \alpha_d) \in \mathbb N^d \setminus \{0\}$ be a d-dimensional multi-index with the unique decomposition $\{l_1, \ldots, l_{|\alpha|}\}$. If $F_i \in \mathbb D^{|\alpha|, 2^{|\alpha|}}$ for $1 \le i \le d$, then
$$\kappa_{\alpha}(F) = \sum_{\sigma} E\big[\Gamma_{l_1, l_{\sigma(2)}, \ldots, l_{\sigma(|\alpha|)}}(F)\big],$$
where the sum $\sum_{\sigma}$ is taken over all permutations $\sigma$ of the set $\{2, 3, \ldots, |\alpha|\}$.
Remark 1.
Obviously, the above lemma can be expressed in the one-dimensional case as follows: let $m \ge 1$ be an integer, and suppose that $F \in \mathbb D^{m, 2^m}$. Then
$$\kappa_{m+1}(F) = m!\,E[\Gamma_m(F)].$$
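The cumulants this formula produces can be cross-checked against the classical moment-to-cumulant recursion. For the chaotic random variable $F = B(h)^2 - 1$ with $\|h\|_{\mathfrak H} = 1$, the cumulants are known in closed form, $\kappa_{m+1}(F) = 2^m\,m!$. The following sketch of mine computes them from exact Gaussian moments:

```python
from math import comb, factorial

def normal_moment(k):
    """E[X^k] for standard normal X: (k-1)!! for even k, 0 for odd k."""
    if k % 2:
        return 0
    return factorial(k) // (2 ** (k // 2) * factorial(k // 2))

def moments_of_F(n_max):
    """Exact moments of F = X^2 - 1, via binomial expansion of (X^2 - 1)^n."""
    return [sum(comb(n, j) * (-1) ** (n - j) * normal_moment(2 * j)
                for j in range(n + 1)) for n in range(n_max + 1)]

def cumulants(moms):
    """Recursion: k_n = m_n - sum_{j=1}^{n-1} C(n-1, j-1) k_j m_{n-j}."""
    k = [0.0] * len(moms)
    for n in range(1, len(moms)):
        k[n] = moms[n] - sum(comb(n - 1, j - 1) * k[j] * moms[n - j]
                             for j in range(1, n))
    return k

k = cumulants(moments_of_F(4))
print(k[1:])   # kappa_1..kappa_4, namely 0, 2, 8, 48
```

The values 2, 8, 48 agree with $\kappa_{m+1}(F) = 2^m\,m!$ for $m = 1, 2, 3$.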
Remark 2.
Successive applications of Lemma 1 yield that
$$\begin{aligned} E[\Gamma_{i,i,j,j}(F)] &= \frac{1}{2} E[\langle DF_j^2, -DL^{-1}\Gamma_{i,i}(F)\rangle_{\mathfrak H}] = \frac{1}{2}\Big(E[F_j^2\,\Gamma_{i,i}(F)] - E[F_j^2]E[\Gamma_{i,i}(F)]\Big) \\ &= \frac{1}{2}\Big(E[F_i^2F_j^2] - 2E[F_iF_j\,\Gamma_{i,j}(F)] - E[F_i^2]E[F_j^2]\Big) \\ &= \frac{1}{2}\Big(E[F_i^2F_j^2] - 2(E[F_iF_j])^2 - E[F_i^2]E[F_j^2]\Big) - E[\Gamma_{i,j,i,j}(F)] - E[\Gamma_{i,j,j,i}(F)]. \end{aligned}$$
Equation (9) gives that
$$E[\Gamma_{i,i,j,j}(F)] + E[\Gamma_{i,j,i,j}(F)] + E[\Gamma_{i,j,j,i}(F)] = \frac{1}{2}\Big(E[F_i^2F_j^2] - 2(E[F_iF_j])^2 - E[F_i^2]E[F_j^2]\Big).$$
For the forthcoming theorem, we first define the set
$$\mathcal E^{(d)}(F) = \Big\{ e \in \mathbb R : \sum_{i,j=1}^{d} \sum_{l_1+l_2+l_3 = e_i+2e_j} E[\Gamma^*_{i,l_1,l_2,l_3}(F)] \ge e \sum_{i,j=1}^{d} \sum_{l_1+l_2+l_3 = e_i+2e_j} E[\Gamma_{i,l_1,l_2,l_3}(F)] \Big\}.$$
Theorem 2.
Let $F = (F_1, \ldots, F_d)$, $d \ge 2$, with $F_i \in \mathbb D^{3,2^3}$ and $E[F_i] = 0$ for $i = 1, \ldots, d$, and let Z be a centered normal random vector with covariance $\Sigma = (\sigma_{ij})_{1 \le i,j \le d}$, where $\sigma_{ij} = E[F_iF_j]$. Suppose that, for $1 \le i, j \le d$,
$$(\alpha)\ E[\Gamma^*_{i,i}(F)\Gamma^*_{j,j}(F)] \ge E[\Gamma^*_{i,i}(F)]\,E[\Gamma^*_{j,j}(F)], \qquad (\beta)\ E[\Gamma^*_{i,j}(F)\Gamma^*_{j,i}(F)] \ge (E[\Gamma^*_{i,j}(F)])^2, \qquad (\gamma)\ e \in \mathcal E^{(d)}(F).$$
Assume that $\Sigma$ is invertible. We have that, for any Lipschitz function $h : \mathbb R^d \to \mathbb R$,
$$|E[h(F)] - E[h(Z)]| \le d\,\|\Sigma\|_{op}^{1/2}\|\Sigma^{-1}\|_{op}\|h\|_{Lip}\sqrt{\frac{2-e}{2}\sum_{i,j=1}^{d} \kappa_{e_i,e_j,e_i,e_j}(F)},$$
or, as another expression,
$$|E[h(F)] - E[h(Z)]| \le d\,\|\Sigma\|_{op}^{1/2}\|\Sigma^{-1}\|_{op}\|h\|_{Lip}\sqrt{\frac{2-e}{2}\Big(E[\|F\|_{\mathbb R^d}^4] - E[\|Z\|_{\mathbb R^d}^4]\Big)},$$
where $\|\cdot\|_{op}$ and $\|\cdot\|_{\mathbb R^d}$ denote the operator norm of a matrix and the Euclidean norm in $\mathbb R^d$, respectively, and
$$\|h\|_{Lip} = \sup_{x \ne y \in \mathbb R^d} \frac{|h(x) - h(y)|}{\|x - y\|_{\mathbb R^d}}.$$
Proof. 
Recall that, for a Lipschitz function $h : \mathbb R^d \to \mathbb R$, Theorem 6.1.1 in [4] shows that
$$|E[h(F)] - E[h(Z)]| \le d\,\|\Sigma\|_{op}^{1/2}\|\Sigma^{-1}\|_{op}\|h\|_{Lip}\sqrt{\sum_{i,j=1}^{d} E\big[(\sigma_{ij} - \Gamma_{i,j}(F))^2\big]}.$$
Since $\Gamma^*_{i,j}(F) = \Gamma_{j,i}(F)$ for $1 \le i, j \le d$, the right-hand side of (13) can be expressed as
$$|E[h(F)] - E[h(Z)]| \le d\,\|\Sigma\|_{op}^{1/2}\|\Sigma^{-1}\|_{op}\|h\|_{Lip}\sqrt{\sum_{i,j=1}^{d} E\big[(\sigma_{ij} - \Gamma^*_{i,j}(F))^2\big]}.$$
By the definition of the operator Γ * , we have that, for 1 i , j d ,
E [ Γ i , j * ( F ) 2 ] = E [ Γ i , j * ( F ) D L 1 F j , D F i H ] = E [ D L 1 F j , D ( F i Γ i , j * ( F ) ) H ] E [ F i D L 1 F j , D Γ i , j * ( F ) H ] = E [ F i F j Γ i , j * ( F ) ] E [ Γ i , j , j , i * ( F ) ] .
For a + b + c = 1 , we write, using Lemma 1 and the definition of Γ * , the first term in (14) as follows:
E [ F i F j Γ i , j * ( F ) ] = a E [ F i F j D L 1 F j , D F i H ] + b E [ D L 1 F i , D ( F j Γ i , j * ( F ) ) H ] + c E [ D L 1 F j , D ( F i Γ i , j * ( F ) ) H ] : = A 1 + A 2 + A 3 .
It is obvious that
A 1 = a E [ D L 1 F j , D ( F i F j × F i ) H ] a E [ F i D L 1 F j , D ( F i F j ) H = a E [ F i 2 F j 2 ] a E [ F i 2 Γ j , j * ( F ) ] a E [ F i F j Γ i , j * ( F ) ] = a E [ F i 2 F j 2 ] a E [ Γ i , i * ( F ) Γ j , j * ( F ) ] a E [ Γ j , j , i , i * ( F ) ] A 1 .
The above Equation (15) gives
A 1 = a 2 E [ F i 2 F j 2 ] E [ Γ i , i * ( F ) Γ j , j * ( F ) ] E [ Γ j , j , i , i * ( F ) ] .
Also using Lemma 1 and the definition of Γ * , the terms A 2 and A 3 can be expressed as
A 2 = b E [ Γ i , j , i , j * ( F ) ] + b E [ Γ i , j * ( F ) Γ j , i * ( F ) ] ,
A 3 = c E [ Γ i , j , j , i * ( F ) ] + c E [ Γ i , j * ( F ) 2 ] .
Combining (16)–(18), we obtain, together with (14), that
E [ Γ i , j * ( F ) 2 ] = a 2 ( 1 c ) E [ F i 2 F j 2 ] E [ Γ i , i * ( F ) Γ j , j * ( F ) ] E [ Γ j , j , i , i * ( F ) ] + b 1 c E [ Γ i , j , i , j * ( F ) ] + E [ Γ i , j * ( F ) Γ j , i * ( F ) ] + c 1 1 c E [ Γ i , j , j , i * ( F ) ] .
Now, we choose a, b, and c such that $a + b + c = 1$ and
$$\frac{a}{2(1-c)} = \frac{b}{1-c} = \frac{-c}{1-c}.$$
Obviously, we may take $a = 1$, $b = 1/2$, and $c = -1/2$. The assumptions $(\alpha)$ and $(\beta)$ yield that the left-hand side of (19) can be bounded by
E [ Γ i , j * ( F ) 2 ] E [ F i 2 F j 2 ] E [ Γ j , j , i , i * ( F ) ] E [ Γ i , j , i , j * ( F ) ] E [ Γ i , j , j , i * ( F ) ] E [ Γ i , i * ( F ) ] E [ Γ j , j * ( F ) ] ( E [ Γ i , j * ( F ) ] ) 2 .
Therefore the Inequality (20) and the assumption ( γ ) prove that, if e E ( d ) ( F ) ,
i , j = 1 d E [ ( σ i j Γ i , j * ( F ) ) 2 ] i , j = 1 d { E [ F i 2 F j 2 ] l 1 + l 2 + l 3 = e i + 2 e j E [ Γ i , l 1 , l 2 , l 3 * ( F ) ] 2 ( E [ F i F j ] ) 2 E [ F i 2 ] E [ F j 2 ] } i , j = 1 d { E [ F i 2 F j 2 ] e l 1 + l 2 + l 3 = e i + 2 e j E [ Γ i , l 1 , l 2 , l 3 ( F ) ] 2 ( E [ F i F j ] ) 2 E [ F i 2 ] E [ F j 2 ] } .
Applying (10) in Remark 2 (or Lemma 2) to the right-hand side of (21), we have, together with the assumptions ( α ) and ( β ) , that
i , j = 1 d E [ ( σ i j Γ i , j * ( F ) ) 2 ] i , j = 1 d { E [ F i 2 F j 2 ] e 2 E [ F i 2 F j 2 ] + ( e 2 ) ( E [ F i F j ] ) 2 + e 2 2 E [ F i 2 ] E [ F j 2 ] } = 2 e 2 i , j = 1 d E [ F i 2 F j 2 ] 2 ( E [ F i F j ] ) 2 E [ F i 2 ] E [ F j 2 ] = 2 e 2 i , j = 1 d κ e i , e j , e i , e j ( F ) .
The Inequality (22) proves the desired conclusion (11). Since E [ Z i 2 Z j 2 ] = 2 ( E [ Z i Z j ] ) 2 + E [ Z i 2 ] E [ Z j 2 ] , the identity E [ Z R d 4 ] = i , j = 1 d ( 2 σ i j 2 + σ i i σ j j ) holds, which gives another expression (12). Hence, the proof of this theorem is completed. □
Remark 3.
Our techniques do not require the components of a random vector $F = (F_1, \ldots, F_d)$ to belong to a fixed Wiener chaos. Since the assumptions $(\alpha)$, $(\beta)$, and $(\gamma)$ are satisfied in the case of a random vector whose entries are elements of some Wiener chaos, our result is an extension of Theorem 4.3 in [19]. This fact makes it possible to estimate how restrictive the assumptions given in Theorem 2 are in practice. In addition, for this random vector, the constant of the estimate in Theorem 4.3 in [19] corresponds to $e = 0$ in (12).

4. Vector-Valued Multiple Stochastic Integrals

In this section, we consider a special case of the previous result in which F is a vector-valued multiple stochastic integral. First, for an explicit expression of $\Gamma^*$, we introduce the combinatorial constants
β q i 1 , , q i a * ( r 1 , , r a )
recursively defined by the relation
β q i 1 , q i 2 * ( r 2 ) = q i 2 ( r 2 1 ) ! q i 1 1 r 2 1 q i 2 1 r 2 1 ,
and for any a 3 ,
β q i 1 , , q i a * ( r 2 , , r a ) = β q i 1 , , q i a 1 * ( r 2 , , r a 1 ) ( q i 1 + + q i a 1 2 r 2 2 r a 1 ) ( r a 1 ) ! × q i 1 + + q i a 1 2 r 2 2 r a 1 1 r a 1 q i a 1 r a 1 .
For an explicit expression of Γ , we use the notations
β q i 1 , q i 2 ( r 2 ) = q i 2 ( r 2 1 ) ! q i 1 1 r 2 1 q i 2 1 r 2 1 ,
and
β q i 1 , , q i a ( r 2 , , r a ) = β q i 1 , , q i a 1 ( r 2 , , r a 1 ) q i a ( r a 1 ) ! × q i 1 + + q i a 1 2 r 2 2 r a 1 1 r a 1 q i a 1 r a 1 f o r a 3 .
Theorem 3.
Fix $d \ge 2$. Let $q_i \ge 2$, $i = 1, \ldots, d$, be positive integers, and let F be a random vector
$$F = (F_1, \ldots, F_d) = (I_{q_1}(f_{q_1}), \ldots, I_{q_d}(f_{q_d})),$$
where $f_{q_i} \in \mathfrak H^{\odot q_i}$ for $i = 1, \ldots, d$. Let Z be a centered multivariate normal random variable with covariance $\Sigma = (\sigma_{ij})_{1 \le i,j \le d}$, where $\sigma_{ij} = E[F_iF_j]$. For any Lipschitz function $h : \mathbb R^d \to \mathbb R$, it holds that
$$|E[h(F)] - E[h(Z)]| \le \sqrt{\frac{2-e}{2}}\, d\,\|\Sigma\|_{op}^{1/2}\|\Sigma^{-1}\|_{op}\|h\|_{Lip}\sqrt{\sum_{i,j=1}^{d} \kappa_{e_i,e_j,e_i,e_j}(F)},$$
or
$$|E[h(F)] - E[h(Z)]| \le \sqrt{\frac{2-e}{2}}\, d\,\|\Sigma\|_{op}^{1/2}\|\Sigma^{-1}\|_{op}\|h\|_{Lip}\sqrt{E[\|F\|_{\mathbb R^d}^4] - E[\|Z\|_{\mathbb R^d}^4]},$$
where the constant e is given by
$$e = \frac{1}{\max_{1 \le i \le d} q_i}.$$
Moreover, if $q_1 = \cdots = q_d = q$, then e is given by
$$e = \frac{2}{q}.$$
Proof. 
It is sufficient to prove that F satisfies the assumptions ( α ) , ( β ) , and ( γ ) in Theorem 2.
For the condition ( α ) : By the definition of Γ * , we have that
Γ i i * ( F ) Γ j j * ( F ) = q i q j r 1 = 1 q i r 2 = 1 q j ( r 1 1 ) ! ( r 2 1 ) ! q i 1 r 1 1 2 q j 1 r 2 1 2 × I 2 q i 2 r 1 ( f q i ˜ r 1 f q i ) I 2 q j 2 r 2 ( f q j ˜ r 2 f q j ) ,
which yields
E [ Γ i i * ( F ) Γ j j * ( F ) ] = q i q j r = 1 q i ( r 1 1 ) ! ( q j q i + r 1 ) ! q i 1 r 1 2 q j 1 q j q i + r 1 2 × ( 2 q i 2 r ) ! f q i ˜ r f q i , f q j ˜ q j q i + r f q j H ( 2 q i 2 r ) = q i ! q j ! ( f q i ˜ q i f q i ) ( f q j ˜ q j f q j ) + q i q j r = 1 q i 1 ( r 1 ) ! ( q j q i + r 1 ) ! q i 1 r 1 2 q j 1 q j q i + r 1 2 × ( 2 q i 2 r ) ! f q i ˜ r f q i , f q j ˜ q j q i + r f q j H ( 2 q i 2 r ) .
On the other hand,
E [ Γ i i * ( F ) ] E [ Γ j j * ( F ) ] = q i ! ( f q i ˜ q i f q i ) × q j ! ( f q j ˜ q j f q j ) .
Denote by ( a ) the length of a vector a . To prove ( α ) , we need to show that, for every 1 i , j d , the inner products in (26)
f q i ˜ r f q i , f q j ˜ q j q i + r f q j H ( 2 q i 2 r ) 0 .
For this, it is sufficient, from the symmetry of f q i , i = 1 , , d , and symmetrization of contractions, to show that, for every 1 i , j d ,
Z 2 ( q i + q j ) f q i ( u 1 , w ) f q i ( u 2 , w ) f q j ( u 1 , v ) × f q j ( u 2 , v ) μ 2 ( q i + q j ) ( d u 1 , d u 2 , d v , d w ) 0 ,
where ( w ) = r and ( u 1 ) + ( u 2 ) = 2 q i 2 r . Since ( u 1 ) = ( u 2 ) = q i r , the integral in (28) can be expressed as
Z q j q i + 2 r ( f q i ( u 1 ) f q j ) ( w , v ) ( f q i ( u 2 ) f q j ) ( w , v ) μ q j + r 1 + r 2 ( d w , d v ) = Z q j q i + 2 r ( f q i ( u 1 ) f q j ) 2 ( w , v ) μ q j + r 1 + r 2 ( d w , d v ) 0 .
Using (26) and (27) together with (29) yields that, for 1 i , j d ,
E [ Γ i i * ( F ) Γ j j * ( F ) ] E [ Γ i i * ( F ) ] E [ Γ j j * ( F ) ] .
For the condition ( β ) : Obviously,
Γ i j * ( F ) Γ j i * ( F ) = q i q j r 1 = 1 q i q j r 2 = 1 q i q j ( r 1 1 ) ! ( r 2 1 ) ! q i 1 r 1 1 2 q j 1 r 2 1 2 × I q i + q j 2 r 1 ( f q i ˜ r 1 f q j ) I q i + q j 2 r 2 ( f q i ˜ r 2 f q j ) .
The expectation of (30) gives
E [ Γ i j * ( F ) Γ j i * ( F ) ] = q i q j r = 1 q i q j [ ( r 1 ) ! ] 2 q i 1 r 1 2 q j 1 r 1 2 × ( q i + q j 2 r ) ! f q i ˜ r f q j H ( q i + q j 2 r ) 2 .
For q i < q j , the expectation (31) can be written as
E [ Γ i j * ( F ) Γ j i * ( F ) ] = q i q j [ ( q i 1 ) ! ] 2 f q i ˜ q i f q j H ( q i + q j 2 r ) 2 + r = 1 q i 1 [ ( r 1 ) ! ] 2 q i 1 r 1 2 q j 1 r 1 2 × ( q i + q j 2 r ) ! f q i ˜ r f q j H ( q i + q j 2 r ) 2 .
Since E [ Γ i j * ( F ) ] = 0 for q i < q j , we deduce, from (32), that
E [ Γ i j * ( F ) Γ j i * ( F ) ] ( E [ Γ i j * ( F ) ] ) 2 f o r q i < q j .
On the other hand, if q i = q j , then
E [ Γ i j * ( F ) Γ j i * ( F ) ] = ( q i ! ) 2 f q i H q i 4 + r = 1 q i 1 [ ( r 1 ) ! ] 2 q i 1 r 1 2 q j 1 r 1 2 × ( 2 q i 2 r ) ! f q i ˜ r f q i H ( 2 q i 2 r ) 2 ( E [ Γ i j * ( F ) ] ) 2 .
For the condition ( γ ) : First, write
l 1 + l 2 + l 3 = e i + 2 e j E [ Γ i , l 1 , l 2 , l 3 * ( F ) ] = E [ Γ i , i , j , j * ( F ) ] + E [ Γ i , j , i , j * ( F ) ] + E [ Γ i , j , j , i * ( F ) ] .
Next, we compute the three expectations in (34). By the definition of the operator Γ * , we obtain
Γ i 1 , i 2 , i 3 , i 4 * ( F ) = r 2 = 1 q i 1 q i 2 r 3 = 1 ( q i 1 + q i 2 2 r 1 ) q i 3 r 4 = 1 ( q i 1 + q i 2 + q i 3 2 r 1 2 r 2 ) q i 4 × β q i 1 , , q i 4 * ( r 2 , r 3 , r 4 ) 1 { 2 r 2 < q i 1 + q i 2 } 1 { 2 r 2 + 2 r 3 < q i 1 + q i 2 + q i 3 } × I q i 1 + + q i 4 2 r 2 2 r 3 2 r 4 ( ( ( f q i 1 ˜ r 2 f q i 2 ) ˜ r 3 f q i 3 ) ˜ r 4 f q i 4 ) ,
and
Γ i 1 , i 2 , i 3 , i 4 ( F ) = r 2 = 1 q i 1 q i 2 r 3 = 1 ( q i 1 + q i 2 2 r 1 ) q i 3 r 4 = 1 ( q i 1 + q i 2 + q i 3 2 r 1 2 r 2 ) q i 4 × β q i 1 , , q i 4 ( r 2 , r 3 , r 4 ) 1 { 2 r 2 < q i 1 + q i 2 } 1 { 2 r 2 + 2 r 3 < q i 1 + q i 2 + q i 3 } × I q i 1 + + q i 4 2 r 2 2 r 3 2 r 4 ( ( ( f q i 1 ˜ r 2 f q i 2 ) ˜ r 3 f q i 3 ) ˜ r 4 f q i 4 ) .
When q i 1 + + q i 4 = 2 r 2 + 2 r 3 + 2 r 4 and r 3 q i 1 + q i 2 + q i 3 2 r 2 2 r 3 , we have that q i 4 r 4 . Hence, r 4 = q i 4 . Taking an expectation on (35) and (36) yields that
E [ Γ i 1 , i 2 , i 3 , i 4 * ( F ) ] = r 2 = 1 q i 1 q i 2 r 3 = 1 ( q i 1 + q i 2 2 r 2 ) q i 3 β q i 1 , , q i 4 * ( r 2 , r 3 , q i 4 ) × J 1 ( i 1 , , i 4 ; r 2 , r 3 ) 1 { 2 r 2 < q i 1 + q i 2 } × 1 { 2 r 2 + 2 r 3 = q i 1 + q i 2 + q i 3 q i 4 } ,
and
E [ Γ i 1 , i 2 , i 3 , i 4 ( F ) ] = r 2 = 1 q i 1 q i 2 r 3 = 1 ( q i 1 + q i 2 2 r 2 ) q i 3 β q i 1 , , q i 4 ( r 2 , r 3 , q i 4 ) × J 1 ( i 1 , , i 4 ; r 2 , r 3 ) 1 { 2 r 2 < q i 1 + q i 2 } × 1 { 2 r 2 + 2 r 3 = q i 1 + q i 2 + q i 3 q i 4 } ,
where
J 1 ( i 1 , , i 4 ; r 2 , r 3 ) = ( f q i 1 ˜ r 2 f q i 2 ) ˜ r 3 f q i 3 , f q i 4 H q i 4 .
Using the definition of coefficients β * and β , we compute
β q i 1 , , q i 4 * ( r 2 , r 3 , q i 4 ) e β q i 1 , , q i 4 ( r 2 , r 3 , q i 4 ) = ( q i 4 ) ! β q i 1 , q i 2 , q i 3 * ( r 2 , r 3 ) e β q i 1 , q i 2 , q i 3 ( r 2 , r 3 ) = ( q i 1 + q i 2 2 r 2 e q i 3 ) J 2 ( i 1 , , i 4 ; r 2 , r 3 ) ,
where
J 2 ( i 1 , , i 4 ; r 2 , r 3 ) = ( q i 4 ) ! β q i 1 , q i 2 ( r 2 ) ( r 3 1 ) ! × q i 1 + q i 2 2 r 2 1 r 3 1 q i 3 1 r 3 1 .
If ( i 1 , , i 4 ) = ( i , i , j , j ) , ( i , j , i , j ) or ( i , j , j , i ) , then we have, from a similar estimate as for (29), that, for 1 r 2 q i 1 q i 2 and 1 r 3 ( q i 1 + q i 2 2 r 2 ) q i 3 ,
J 1 ( i 1 , , i 4 ; r 2 , r 3 ) 0 .
Indeed, for ( i 1 , , i 4 ) = ( i , i , j , j ) , it is sufficient to show that
Z 2 ( q i + q j ) f q i ( u 1 , v 1 , w ) f q i ( u 2 , v 2 , w ) f q j ( u 1 , u 2 , v 3 ) × f q j ( v 1 , v 2 , v 3 ) μ 2 ( q i + q j ) ( d u 1 , d u 2 , w , d v 1 , d v 2 , d v 3 ) = Z 2 ( q i + q j ) ( f q i ( u 1 ) f q j ) ( v 1 , u 2 , w , v 3 ) × ( f q i ( v 2 ) f q j ) ( v 1 , u 2 , w , v 3 ) ( d v 1 , d u 2 , w , d v 3 ) 0 ,
where ( u 1 ) = ( v 2 ) . Similarly, we can show that, for ( i 1 , , i 4 ) = ( i , j , i , j ) or ( i , j , j , i ) ,
J 1 ( i 1 , , i 4 ; r 2 , r 3 ) 0 .
These facts lead us to $E[\Gamma_{i_1,i_2,i_3,i_4}(F)] \ge 0$ and $E[\Gamma^*_{i_1,i_2,i_3,i_4}(F)] \ge 0$ for $(i_1, \ldots, i_4) = (i,i,j,j)$, $(i,j,i,j)$, or $(i,j,j,i)$, which implies that $\mathcal E^{(d)}(F) \ne \emptyset$. Now, we find a constant $e > 0$ such that $e \in \mathcal E^{(d)}(F)$. Let us set $J(\cdot) = J_1(\cdot) \times J_2(\cdot)$. From (37) and (38), we have, together with (39), that
i , j = 1 d l 1 + l 2 + l 3 = e i + 2 e j E [ Γ i , l 1 , l 2 , l 3 * ( F ) ] e E [ Γ i , l 1 , l 2 , l 3 ( F ) ] = i , j = 1 d { E [ Γ i , i , j , j * ( F ) ] e E [ Γ i , i , j , j ( F ) ] + E [ Γ i , j , i , j * ( F ) ] e E [ Γ i , j , i , j ( F ) ] + E [ Γ i , j , j , i * ( F ) ] e E [ Γ i , j , j , i ( F ) ] } = V 1 , d + V 2 , d + V 3 , d ,
where
V 1 , d = i , j = 1 d r 2 = 1 q i ( 2 q i 2 r 2 e q j ) J ( i , i , j , j ; r 2 , r 3 ) × 1 { r 2 < q i } 1 { r 2 + r 3 = q i } , V 2 , d = i , j = 1 d r 2 = 1 q i q j ( q i + q j 2 r 2 e q j ) J ( i , j , i , j ; r 2 , r 3 ) × 1 { 2 r 2 < q i + q j } 1 { r 2 + r 3 = q i } ,
and
V 3 , d = i , j = 1 d r 2 = 1 q i q j ( q i + q j 2 r 2 e q j ) J ( i , j , j , i ; r 2 , r 3 ) × 1 { 2 r 2 < q i + q j } 1 { r 2 + r 3 = q j } .
For every $i, j \in \{1, \ldots, d\}$ and $r_2 \in \{1, \ldots, q_i - 1\}$, we have
$$(2q_i - 2r_2 - e q_j) \ge \Big(2 - e \max_{1 \le i \le d} q_i\Big).$$
This leads us to
$$V_{1,d} \ge \Big(2 - e \max_{1 \le i \le d} q_i\Big)\widetilde V_{1,d},$$
where
V ˜ 1 , d = i , j = 1 d r 2 = 1 q i J ( i , i , j , j ; r 2 , r 3 ) 1 { r 2 < q i } 1 { r 2 + r 3 = q i } .
For the second sum V 2 , d in (41), we change the range of r 2 from the inequality 2 r 2 < q i + q j to
r 2 q i + q j 2 α i , j for α i , j ( 0 , 1 ] ,
where [ ( q i + q j ) / 2 ] α i , j is a positive integer. For fixed i , j { 1 , , d } ,
( q i + q j 2 r 2 e q j ) q i + q j 2 q i + q j 2 α i , j q i e q j .
If q i = q j for 1 i , j d , then, from (43), we have
( q i + q j 2 r 2 e q j ) ( 2 q i 2 ( q i 1 ) e q i ) ( 2 e max 1 i d q i )
for every i , j { 1 , , d } and r 2 { 1 , , q i 1 } . For q j q i 2 , we deduce, from (43), for fixed i , j { 1 , , d } , that
( q i + q j 2 r 2 e q j ) ( q i + q j 2 q i e q j ) ( 2 e max 1 i d q i ) .
For q j = q i + 1 and 0 < α i , j 0.5 , the Inequality (43) yields
( q i + q j 2 r 2 e q j ) ( 2 q i + 1 2 q i e q j ) ( 1 e max 1 i d q i ) .
On the other hand, if q j = q i + 1 and 0.5 < α i , j 1 , then we obtain, from (43), that
( q i + q j 2 r 2 e q j ) 2 q i + 1 2 q i + 1 2 α i , j e q j ( 2 α i , j e q j ) ( 1 e max 1 i d q i ) .
Combining the above results (44)–(47), we obtain
V 2 , d ( 1 e max 1 i d q i ) V ˜ 2 , d ,
where
V ˜ 2 , d = i , j = 1 d r 2 = 1 q i q j J ( i , j , i , j ; r 2 , r 3 ) 1 { 2 r 2 < q i + q j } 1 { r 2 + r 3 = q i } .
Similarly,
V 3 , d ( 1 e max 1 i d q i ) V ˜ 3 , d ,
where
V ˜ 3 , d = i , j = 1 d r 2 = 1 q i q j J ( i , j , j , i ; r 2 , r 3 ) 1 { 2 r 2 < q i + q j } 1 { r 2 + r 3 = q j } .
The Inequalities (42), (48), and (49) yield
$$\sum_{i,j=1}^{d} \sum_{l_1+l_2+l_3 = e_i+2e_j}\Big(E[\Gamma^*_{i,l_1,l_2,l_3}(F)] - e\,E[\Gamma_{i,l_1,l_2,l_3}(F)]\Big) \ge \Big(1 - e \max_{1 \le i \le d} q_i\Big)\big(\widetilde V_{1,d} + \widetilde V_{2,d} + \widetilde V_{3,d}\big) \ge 0 \quad \text{for } e \in \Big(0, \frac{1}{\max_{1 \le i \le d} q_i}\Big],$$
so that the condition $(\gamma)$ is satisfied. Hence, applying Theorem 2 gives the desired conclusion. If $q_1 = \cdots = q_d = q$, the estimate in (42) yields the constant e given in (25). □
Remark 4.
1. Theorem 3 proves that the three assumptions in Theorem 2 are satisfied under the same conditions as in Theorem 4.3 of [19]. To achieve this, we just need to compute the expected values of the Gamma operators explicitly and compare them.
2. The estimate in Theorem 4.3 of [19] corresponds to the estimate (24) with $e = 0$. Hence, our approach improves the constants appearing in the previous estimate given in [19]. If $q_1 = \cdots = q_d = 1$, then $e = 2$, which implies that F has the same distribution as Z.

5. Results in Dimension One ( d = 1 )

In this section, we specialize the results given in Section 3 and Section 4 to the one-dimensional case. We begin with a one-dimensional version of the Gamma operators $\Gamma$ and $\Gamma^*$ (for these operators, see [21,22]). We set $\Gamma_1(F) = F$ and $\Gamma_1^*(F) = F$. If $\Gamma_k(F)$ (respectively, $\Gamma_k^*(F)$) is a well-defined element of $L^2(\Omega)$, we set $\Gamma_{k+1}(F) = \langle DF, -DL^{-1}\Gamma_k(F)\rangle_{\mathfrak H}$ and $\Gamma_{k+1}^*(F) = \langle -DL^{-1}F, D\Gamma_k^*(F)\rangle_{\mathfrak H}$ for $k = 1, 2, \ldots$
Theorem 4.
If $d = 1$, the conditions $(\alpha)$, $(\beta)$, and $(\gamma)$ are satisfied under the assumption $E[\Gamma_4(F)] \ne 0$.
Proof. 
The assumptions $(\alpha)$ and $(\beta)$ obviously hold. Indeed, the Cauchy–Schwarz inequality proves that
$$E[\Gamma_2^*(F)^2] \ge (E[\Gamma_2^*(F)])^2,$$
where $\Gamma_2^*(F) = \Gamma_2(F) = \langle -DL^{-1}F, DF\rangle_{\mathfrak H}$. A repeated application of Lemma 1 proves that
$$E[\Gamma_2(F)^2] = E[F^2\,\Gamma_2(F)] - E[\Gamma_4^*(F)] = 2E[\Gamma_4(F)] + (E[F^2])^2 - E[\Gamma_4^*(F)].$$
This shows that $\mathrm{Var}(\Gamma_2(F)) = 2E[\Gamma_4(F)] - E[\Gamma_4^*(F)]$. Let $\phi(x) = E[\Gamma_4^*(F)] - x\,E[\Gamma_4(F)]$. Then, $\phi(2) \le 0$. Since $E[\Gamma_4(F)] \ne 0$, there exists a constant $e \in \mathbb R$ such that $\phi(e) \ge 0$. This implies that the condition $(\gamma)$ is satisfied. □
Remark 5.
If $E[F] = 0$, it follows from (8) that
$$E[\Gamma_4(F)] = \frac{1}{6}\Big(E[F^4] - 3(E[F^2])^2\Big).$$
Studies so far have shown that Inequality (1) holds true only when F belongs to a fixed Wiener chaos. However, the technique developed here can be applied to prove that the fourth moment theorem (1) holds even if F is not an element of a fixed Wiener chaos. The proof of Theorem 4 yields, together with (50), that
$$\mathrm{Var}(\Gamma_2(F)) \le \frac{2-e}{6}\Big(E[F^4] - 3(E[F^2])^2\Big),$$
where the constant e satisfies $\phi(e) \ge 0$. Note that the constant given in (12) is three times that in (51).
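Since the right-hand side of (51) involves only moments, it can be evaluated for a functional outside any fixed chaos. The following sketch of mine computes the fourth cumulant, and hence the quantity in (50), for the hypothetical one-dimensional functional $F = \cos(B(h)) - e^{-1/2}$ with $\|h\|_{\mathfrak H} = 1$ (an example of mine, not from the paper), using Gauss–Hermite quadrature for the Gaussian expectations:

```python
import numpy as np

# Gauss-Hermite rule integrates against exp(-x^2); rescaling the nodes by
# sqrt(2) turns it into an expectation under the standard normal law
nodes, weights = np.polynomial.hermite.hermgauss(80)

def gaussian_expect(f):
    """E[f(X)] for X ~ N(0, 1), computed by quadrature."""
    return (weights * f(np.sqrt(2.0) * nodes)).sum() / np.sqrt(np.pi)

mean = gaussian_expect(np.cos)          # equals exp(-1/2)
F = lambda x: np.cos(x) - mean          # centered functional, not in a fixed chaos

m2 = gaussian_expect(lambda x: F(x) ** 2)
m4 = gaussian_expect(lambda x: F(x) ** 4)
kappa4 = m4 - 3 * m2 ** 2               # fourth cumulant of F
gamma4 = kappa4 / 6                     # the quantity E[Gamma_4(F)] in (50)
print(kappa4, gamma4)
```

Here the fourth cumulant is nonzero (and positive), so the assumption of Theorem 4 holds for this functional.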
Proposition 2.
Let $\phi$ be the linear function in the proof of Theorem 4. Let $F = I_q(f)$ with $f \in H^{\odot q}$ ($q \ge 2$). Then, there exists a constant $e \in [2/q, 2)$ such that $\phi(e) \ge 0$, and $(-\infty, 2/q] \subset \mathcal{C}^{(1)}(F)$.
Proof. 
A direct computation yields that
$E[\Gamma_4^*(F)] = q! \sum_{r=1}^{q-1} \beta_{q,q}^*(r)\,(2q-2r)\,(q-r-1)! \binom{2q-2r-1}{q-r-1}\binom{q-1}{q-r-1} \| f \,\widetilde{\otimes}_r\, f \|^2_{H^{\otimes(2q-2r)}} > 0$.
On the other hand, Theorem 5.1 in [22] shows that
$E[\Gamma_4(F)] = q! \sum_{r=1}^{q-1} \beta_{q,q}(r)\, q\, (q-r-1)! \binom{2q-2r-1}{q-r-1}\binom{q-1}{q-r-1} \| f \,\widetilde{\otimes}_r\, f \|^2_{H^{\otimes(2q-2r)}} > 0$.
Combining (52) and (53) (or $V_{1,d}$ for $d = 1$ in (41) in the proof of Theorem 3) together with $\beta_{q,q}^* = \beta_{q,q}$, we obtain that
$\phi(e) = E[\Gamma_4^*(F)] - e E[\Gamma_4(F)] = q! \sum_{r=1}^{q-1} \beta_{q,q}(r)\,(2q-2r-eq)\,(q-r-1)! \binom{2q-2r-1}{q-r-1}\binom{q-1}{q-r-1} \| f \,\widetilde{\otimes}_r\, f \|^2_{H^{\otimes(2q-2r)}} \ge (2-eq)\, q! \sum_{r=1}^{q-1} \beta_{q,q}(r)\,(q-r-1)! \binom{2q-2r-1}{q-r-1}\binom{q-1}{q-r-1} \| f \,\widetilde{\otimes}_r\, f \|^2_{H^{\otimes(2q-2r)}}$.
Inequality (54) shows that $\phi(2/q) \ge 0$. Moreover, the first equality in (54) gives $\phi(2) < 0$, and $\phi$ is strictly decreasing because $E[\Gamma_4(F)] > 0$; since also $E[\Gamma_4^*(F)] > 0$, there exists a constant $e \in [2/q, 2)$ such that $\phi(e) \ge 0$. □
Remark 6.
Substituting $2/q$ for $e$ in (51), we recover the fourth moment theorem in (2). The method developed in this paper shows that the constant in (51) is less than or equal to the one in (2); that is,
$\frac{2-e}{6} \le \frac{q-1}{3}$.
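For completeness, the derivation behind (55) can be written out in one line, using the lower bound $e \ge 2/q$ supplied by Proposition 2 (our own expansion of the remark):

```latex
% Since Proposition 2 gives e >= 2/q with q >= 2, the constant in (51) satisfies
\frac{2-e}{6} \;\le\; \frac{2 - \tfrac{2}{q}}{6} \;=\; \frac{q-1}{3q} \;\le\; \frac{q-1}{3},
% so the bound (51) is never worse than (2), and is strictly sharper for every q >= 2.
```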
We now give an example satisfying (55).
Example 1.
We consider the case of $q = 3$. Let $F = I_3(h^{\otimes 3})$ with $h \in H$. A computation similar to that for (54) proves that
$E[\Gamma_4^*(F)] - e E[\Gamma_4(F)] = 3! \times 3 \sum_{r=1}^{2} (r-1)! \binom{2}{r-1}^2 (6-2r-3e)\,(2-r)! \binom{5-2r}{2-r}\binom{2}{2-r} \| h^{\otimes 3} \,\widetilde{\otimes}_r\, h^{\otimes 3} \|^2_{H^{\otimes(6-2r)}} = (3! \times 18)(4-3e) \| h^{\otimes 3} \,\widetilde{\otimes}_1\, h^{\otimes 3} \|^2_{H^{\otimes 4}} + (3! \times 12)(2-3e) \| h^{\otimes 3} \,\widetilde{\otimes}_2\, h^{\otimes 3} \|^2_{H^{\otimes 2}} = 72 \left( 8 - \frac{15}{2} e \right) \| h \|^{12}_H$.
From (56), it follows that $(-\infty, 16/15] = \mathcal{C}^{(1)}(F)$ and
$e = \frac{E[\Gamma_4^*(F)]}{E[\Gamma_4(F)]} = \frac{16}{15}$.
As a consequence of (51), the upper bound is given by
$\mathrm{Var}(\Gamma_2(F)) \le \frac{7}{45}\left( E[F^4] - 3(E[F^2])^2 \right)$.
On the other hand, the estimate (2) (with $q = 3$) gives
$\mathrm{Var}(\Gamma_2(F)) \le \frac{30}{45}\left( E[F^4] - 3(E[F^2])^2 \right)$.
The constant $7/45$ in (57) is considerably smaller than the constant $30/45$ in (58).
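The numerical content of Example 1 can be verified exactly. In the following sketch (our own illustration; helper names are ours), $F = I_3(h^{\otimes 3}) = He_3(X)$ with $\|h\|_H = 1$, and every expectation is an exact polynomial computation in the Hermite basis: it recovers $E[\Gamma_4^*(F)] = 576$ and $E[\Gamma_4(F)] = 540$, hence $e = 576/540 = 16/15$, and shows that here the bound (57) is attained with equality, while the constant in (58) is $30/7$ times too large.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as H   # probabilists' Hermite basis

def D(c):
    return H.hermeder(c)                      # D g(X) = g'(X) h

def minus_DLinv(c):
    return np.asarray(c, dtype=float)[1:]     # -D L^{-1}: shift He-coefficients down

def gamma(c, k):
    # Gamma_{k+1}(F) = <DF, -D L^{-1} Gamma_k(F)>_H
    g = np.asarray(c, dtype=float)
    for _ in range(k - 1):
        g = H.hermemul(D(c), minus_DLinv(g))
    return g

def gamma_star(c, k):
    # Gamma*_{k+1}(F) = <-D L^{-1} F, D Gamma*_k(F)>_H
    g = np.asarray(c, dtype=float)
    for _ in range(k - 1):
        g = H.hermemul(minus_DLinv(c), D(g))
    return g

def mean_sq(c):
    # E[g(X)^2] = sum_n c[n]^2 * n!  (orthogonality of the He_n)
    c = np.asarray(c, dtype=float)
    return sum(c[n] ** 2 * math.factorial(n) for n in range(len(c)))

F = [0.0, 0.0, 0.0, 1.0]                      # F = He_3(X) = I_3(h^{\otimes 3}), ||h|| = 1
Eg4      = gamma(F, 4)[0]                     # E[Gamma_4(F)]  = 540
Eg4_star = gamma_star(F, 4)[0]                # E[Gamma_4*(F)] = 576
e = Eg4_star / Eg4                            # 16/15, as in (56)
g2 = gamma(F, 2)
var_g2 = mean_sq(g2) - g2[0] ** 2             # Var(Gamma_2(F)) = 2*540 - 576 = 504
kappa4 = 6.0 * Eg4                            # E[F^4] - 3 (E[F^2])^2 = 3240, by (50)
print(var_g2, 7 * kappa4 / 45)                # 504.0 504.0 : bound (57) holds with equality
print(30 * kappa4 / 45)                       # 2160.0      : the looser constant of (58)
```

The equality $\mathrm{Var}(\Gamma_2(F)) = 504 = \frac{7}{45} \cdot 3240$ also confirms the variance identity $\mathrm{Var}(\Gamma_2(F)) = 2E[\Gamma_4(F)] - E[\Gamma_4^*(F)]$ from the proof of Theorem 4.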

6. Conclusions and Future Works

This paper develops a method for obtaining the fourth moment bound on the normal approximation of F, where F is a d-dimensional random vector whose components are general functionals of Gaussian fields. To prove the fourth moment theorem, it suffices to show that the conditions $(\alpha)$, $(\beta)$, and $(\gamma)$ in Theorem 2 are satisfied. A significant feature of our work is that these conditions hold automatically in the specific case where F is a vector of multiple stochastic integrals. In addition, our technique yields a much better estimate than the conventional method. Compared with the studies in the literature [3,14,15,16,19,20], our study not only extends these works but also recovers their results naturally.
As future research directions, we will apply the approach to the fourth moment theorem developed here to more general processes, including Markov diffusion processes and Poisson processes. We expect this approach to unify the fourth moment theorem across many classes of processes.

Author Contributions

Conceptualization, Y.-T.K. and H.-S.P.; methodology, Y.-T.K.; writing and original draft preparation, Y.-T.K. and H.-S.P.; co-review and validation, H.-S.P.; writing—editing and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Hallym University Research Fund 2021 (HRF-202112-005).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We are very grateful to the anonymous Referees for their suggestions and valuable advice.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nualart, D.; Peccati, G. Central limit theorems for sequences of multiple stochastic integrals. Ann. Probab. 2005, 33, 177–193. [Google Scholar] [CrossRef]
  2. Nualart, D.; Ortiz-Latorre, S. Central limit theorems for multiple stochastic integrals and Malliavin calculus. Stoch. Process. Their Appl. 2008, 118, 614–628. [Google Scholar] [CrossRef] [Green Version]
  3. Nourdin, I.; Peccati, G. Stein’s method on Wiener chaos. Probab. Theory Related Fields 2009, 145, 75–118. [Google Scholar] [CrossRef] [Green Version]
  4. Nourdin, I.; Peccati, G. Normal Approximations with Malliavin Calculus: From Stein’s Method to Universality; Cambridge Tracts in Mathematics; Cambridge University Press: Cambridge, UK, 2012; Volume 192. [Google Scholar]
  5. Nualart, D. Malliavin Calculus and Related Topics, 2nd ed.; Probability and Its Applications; Springer: Berlin, Germany, 2006. [Google Scholar]
  6. Nualart, D. Malliavin Calculus and Its Applications; Regional Conference Series in Mathematics Number 110; American Mathematical Society: Providence, RI, USA, 2008. [Google Scholar]
  7. Stein, C. A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability; University of California Press: Berkeley, CA, USA, 1972; Volume II, pp. 583–602. [Google Scholar]
  8. Stein, C. Approximate Computation of Expectations; IMS: Hayward, CA, USA, 1986. [Google Scholar]
  9. Chen, L.H.Y.; Goldstein, L.; Shao, Q.-M. Normal Approximation by Stein’s Method; Springer: Heidelberg/Berlin, Germany, 2011. [Google Scholar]
  10. Nourdin, I.; Peccati, G. Stein’s method and exact Berry-Esseen asymptotics for functionals of Gaussian fields. Ann. Probab. 2009, 37, 2231–2261. [Google Scholar] [CrossRef]
  11. Nourdin, I.; Peccati, G. The optimal fourth moment theorem. Proc. Am. Math. Soc. 2015, 143, 3123–3133. [Google Scholar] [CrossRef] [Green Version]
  12. Nourdin, I.; Peccati, G. Stein’s method meets Malliavin calculus: A short survey with new estimates. In Recent Development in Stochastic Dynamics and Stochastic Analysis; World Sci. Publ.: Hackensack, NJ, USA, 2010; Volume 8, pp. 207–236. [Google Scholar]
  13. Kemp, T.; Nourdin, I.; Peccati, G.; Speicher, R. Wigner chaos and the fourth moment. Ann. Probab. 2012, 40, 1577–1635. [Google Scholar] [CrossRef]
  14. Noreddine, S.; Nourdin, I. On the Gaussian approximation of vector-valued multiple integrals. J. Multi. Anal. 2011, 102, 1008–1017. [Google Scholar] [CrossRef] [Green Version]
  15. Nourdin, I.; Peccati, G.; Réveillac, A. Multivariate normal approximation using Stein’s method and Malliavin calculus. Ann. Inst. Henri Poincaré Probab. Stat. 2010, 46, 45–58. [Google Scholar]
  16. Peccati, G.; Tudor, C. Gaussian limits for vector-valued multiple stochastic integrals. In Séminaire de Probabilités XXXVIII; Springer: Berlin, Germany, 2005; Volume 1857, pp. 247–262. [Google Scholar]
  17. Azmoodeh, E.; Campese, S.; Poly, G. Fourth moment theorems for Markov diffusion generators. J. Funct. Anal. 2014, 266, 2341–2359. [Google Scholar] [CrossRef]
  18. Ledoux, M. Chaos of a Markov operator and the fourth moment theorem. Ann. Probab. 2012, 40, 2439–2459. [Google Scholar] [CrossRef]
  19. Nourdin, I.; Rosinski, J. Asymptotic independence of multiple Wiener-Itô integrals and the resulting limit laws. Ann. Probab. 2014, 42, 497–526. [Google Scholar] [CrossRef]
  20. Campese, S. Optimal convergence rates and one-term Edgeworth expansions for multidimensional functionals of Gaussian fields. ALEA Lat. Am. J. Probab. Math. Stat. 2013, 10, 881–919. [Google Scholar]
  21. Kim, Y.T.; Park, H.S. An Edgeworth expansion for functionals of Gaussian fields and its applications. Stoch. Proc. Their Appl. 2018, 44, 312–320. [Google Scholar]
  22. Nourdin, I.; Peccati, G. Cumulants on the Wiener space. J. Funct. Anal. 2010, 258, 3775–3791. [Google Scholar] [CrossRef] [Green Version]

Share and Cite

MDPI and ACS Style

Kim, Y.-T.; Park, H.-S. Fourth Cumulant Bound of Multivariate Normal Approximation on General Functionals of Gaussian Fields. Mathematics 2022, 10, 1352. https://0-doi-org.brum.beds.ac.uk/10.3390/math10081352


