Article

Simultaneous Inference for High-Dimensional Approximate Factor Model

1 Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei 230026, China
2 International Institute of Finance, School of Management, University of Science and Technology of China, Hefei 230026, China
* Author to whom correspondence should be addressed.
Submission received: 30 September 2020 / Revised: 29 October 2020 / Accepted: 2 November 2020 / Published: 5 November 2020
(This article belongs to the Section Statistical Physics)

Abstract

This paper studies simultaneous inference for factor loadings in the approximate factor model. We propose a test statistic based on the maximum discrepancy measure. Taking advantage of the fact that the test statistic can be approximated by a sum of independent random variables, we develop a multiplier bootstrap procedure to calculate the critical value, and we demonstrate the asymptotic size and power of the test. Finally, we apply our results to multiple testing problems by controlling the family-wise error rate (FWER). The conclusions are confirmed by simulations and real data analysis.

1. Introduction

The high-dimensional factor model is becoming increasingly important in scientific areas including finance and macroeconomics. For example, the World Bank data contain two hundred countries over forty years, and in portfolio allocation the number of stocks can be in the thousands, which is larger than or of the same order as the sample size. Due to its broad applications, much effort has been devoted to analyzing factor models in different respects. Examples include estimation of factors and loadings for latent factor models [1,2], covariance matrix estimation [3,4,5,6], and simultaneous inference for factor loadings of dynamic factor models [7,8], among others.
This work focuses on simultaneous inference for the loading matrix with observed factors, an important issue in the analysis of approximate factor models. In the study of gene expression genomics, for instance, it is commonly assumed that each gene is associated with only a few factors. For example, the authors of [9] showed that several oncogenes are related to the Rb/E2F pathway rather than to any other pathway. The authors of [10] also considered a sparse loading matrix for gene expression data. Therefore, it is necessary to test the sparsity of the factor loadings. In the literature, some inference procedures have been proposed for latent factor models. For example, in the low-dimensional setting, the authors of [11] considered testing the homogeneity assumption, i.e., that the loadings associated with a factor are identical across variables. The same testing problem was considered by the authors of [12] in the high-dimensional setting. As for observed factors, to the best of our knowledge, very limited work has been conducted.
Inference for the factor loadings with observed factors is not trivial. The approaches for latent factors cannot be directly applied to observed factors. The major difficulty is the high dimensionality, which poses significant challenges in deriving the asymptotic null distribution of the test statistic. We propose a test statistic based on the maximum discrepancy measure. This type of statistic is attractive in high-dimensional statistical inference such as model selection, simultaneous inference, and multiple testing. Examples include the works in [13,14,15,16,17], among others.
We use the multiplier bootstrap procedure to obtain the critical value of our test statistic. Based on the fact that the test statistic can be approximated by a sum of independent random variables, we show that the proposed multiplier bootstrap method consistently approximates the null limiting distribution of the test statistic, and thus the testing procedure achieves the prespecified significance level asymptotically. There are related works applying the multiplier bootstrap method to high-dimensional inference; see [16,18,19], among others. However, those procedures require sparsity assumptions on the parameters and cannot be directly applied to the factor model. Compared with the works on latent factors, we require neither homogeneity constraints nor sparsity on the model, and our procedure adapts to the high-dimensional regime.
Another application of our procedure is the multiple testing problem. Combining the multiplier bootstrap method with the step-down procedure proposed in [17], we show that our procedure has strong control of the family-wise error rate (FWER). Our method is asymptotically non-conservative compared with the Bonferroni–Holm procedure, since the correlation among the test statistics is taken into account. We also point out that any procedure controlling the FWER also controls the false discovery rate [20] when there exist some true discoveries.
The rest of the paper is organized as follows. In Section 2.1, we develop the multiplier bootstrap procedure for simultaneous test of parameters for a single factor and demonstrate its asymptotic level and power. In Section 2.2, we give the result of simultaneous test of parameters for multiple factors. Section 3 discusses the multiple testing problem by combining the multiplier bootstrap procedure with the step-down method proposed by [17]. Section 4 investigates the numerical performance of the proposed test by simulations. We also conduct real data analysis on portfolio risk of S&P stocks via Fama–French model in Section 5. The proofs of the main results are given in Appendix A.
Finally, we introduce some notation. For a set $S$, let $|S|$ denote the cardinality of $S$. Let $0_p \in \mathbb{R}^p$ be the vector of zeros. For a $p \times p$ matrix $A = (a_{ij})_{i,j=1}^p$, denote by $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ the minimum and maximum eigenvalues of $A$, respectively. The matrix element-wise maximum norm and the $L_2$ norm are defined as $\|A\|_{\max} = \max_{1 \le i,j \le p} |a_{ij}|$ and $\|A\| = \lambda_{\max}^{1/2}(A^\top A)$, respectively. For $a = (a_1,\dots,a_p)^\top \in \mathbb{R}^p$ and $q > 0$, denote $\|a\|_q = (\sum_{i=1}^p |a_i|^q)^{1/q}$ and $\|a\|_\infty = \max_{1 \le j \le p} |a_j|$. Let $v_i \in \mathbb{R}^K$ be the $i$th column of the $K \times K$ identity matrix. We write $a_t \lesssim b_t$ if $a_t$ is smaller than or equal to $b_t$ up to a universal positive constant. For $a, b \in \mathbb{R}$, we write $a \vee b = \max\{a, b\}$. For two sets, $A$ and $B$, $A \ominus B$ denotes their symmetric difference, that is, $A \ominus B = (A \setminus B) \cup (B \setminus A)$.

2. Methodology

2.1. Simultaneous Test for a Single Factor

We consider the factor model defined as follows,
$y_{it} = b_i^\top f_t + u_{it}, \quad i = 1,\dots,p \ \text{and} \ t = 1,\dots,T, \quad (1)$
where $y_{it}$ is the observed response for the $i$th variable at time $t$, $b_i \in \mathbb{R}^K$ is the unknown vector of factor loadings, $f_t \in \mathbb{R}^K$ is the observed vector of common factors, and $u_{it}$ is the latent error. Here, $K$ is a fixed integer denoting the number of factors, $p$ is the number of variables, and $T$ denotes the sample size. Model (1) is commonly used in finance and macroeconomics; see, e.g., [3,4,21], among others.
Denote by $B = (b_1,\dots,b_p)^\top$, $y_t = (y_{1t},\dots,y_{pt})^\top$ and $u_t = (u_{1t},\dots,u_{pt})^\top$; then model (1) can be re-expressed as
$y_t = B f_t + u_t. \quad (2)$
We first focus on testing the coefficients $b_{ik} = b_i^\top v_k$ corresponding to a single factor, i.e., the $k$th factor. Specifically, given $k = 1,\dots,K$, we consider the following simultaneous testing problem,
$H_{0,G}: b_{ik} = b_{ik}^{\mathrm{null}} \ \text{for all} \ i \in G \quad \text{versus} \quad H_{1,G}: b_{ik} \neq b_{ik}^{\mathrm{null}} \ \text{for some} \ i \in G, \quad (3)$
where $G$ is a subset of $\{1,\dots,p\}$ and the $b_{ik}^{\mathrm{null}}$ are prespecified values. For example, if the $b_{ik}^{\mathrm{null}}$ are 0, then the hypotheses test whether the variables with indices in $G$ are significantly associated with the $k$th factor. Throughout the paper, $|G|$ is allowed to grow as fast as $p$, which may grow exponentially with $T$ as in Assumption 3.
The ordinary least squares (OLS) estimator $\hat{B} = Y^\top F (F^\top F)^{-1}$ is applied to estimate $B$, where $Y = (y_1,\dots,y_T)^\top$ and $F = (f_1,\dots,f_T)^\top$. Therefore,
$\hat{B} - B = \Big( \sum_{t=1}^T u_t f_t^\top \Big) \Big( \sum_{t=1}^T f_t f_t^\top \Big)^{-1}. \quad (4)$
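To make the estimator concrete, here is a minimal NumPy sketch of the OLS fit $\hat{B} = Y^\top F (F^\top F)^{-1}$ on simulated data; the seed, dimensions, and the data-generating step are illustrative choices, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)
T, p, K = 200, 50, 3            # sample size, number of variables, number of factors

B = rng.normal(size=(p, K))     # true loading matrix
F = rng.normal(size=(T, K))     # observed factors, row t is f_t'
U = rng.normal(size=(T, p))     # idiosyncratic errors, row t is u_t'
Y = F @ B.T + U                 # observations, row t is y_t'

# OLS estimator of the loadings: B_hat = Y' F (F' F)^{-1}
B_hat = Y.T @ F @ np.linalg.inv(F.T @ F)

# residuals u_hat_{it} = y_{it} - b_hat_i' f_t, used later for the bootstrap
U_hat = Y - F @ B_hat.T
```

By the normal equations, the residuals are exactly orthogonal to the factors, i.e., $F^\top \hat{U} = 0$.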
We propose the following test statistic for $H_{0,G}$,
$M_{T,k} = \max_{i \in G} \sqrt{T}\, \big| \hat{b}_{ik} - b_{ik}^{\mathrm{null}} \big|,$
where $(\hat{b}_{ik})_{i \le p, k \le K} = \hat{B}$. For each $i \in G$, the asymptotic normality of $\hat{b}_{ik}$ is straightforward by the central limit theorem. However, when $|G|$ diverges with $p$, it is very challenging to establish the existence of a limiting distribution of $M_{T,k}$. To approximate the asymptotic distribution of $M_{T,k}$, we use the multiplier bootstrap method. From (4), we know
$\sqrt{T}(\hat{b}_{ik} - b_{ik}) = \frac{1}{\sqrt{T}} \sum_{t=1}^T u_{it} f_t^\top \hat{\Omega}_f v_k = \frac{1}{\sqrt{T}} \sum_{t=1}^T \hat{\xi}_{it}, \quad (5)$
where $\hat{\xi}_{it} = u_{it} f_t^\top \hat{\Omega}_f v_k$ and $\hat{\Omega}_f = \big( \sum_{t=1}^T f_t f_t^\top / T \big)^{-1}$.
In order to apply the multiplier bootstrap procedure, we need to approximate $\sum_{t=1}^T \hat{\xi}_{it}/\sqrt{T}$ by a sum of independent random variables. As $\hat{\Omega}_f$ is consistent for $\Omega_f = \{E(f_t f_t^\top)\}^{-1}$, we can replace the former with the latter in $\hat{\xi}_{it}$ and define $\xi_{it} = u_{it} f_t^\top \Omega_f v_k$. Then, for each $i \in G$, $\{\xi_{it}\}_{t \ge 1}$ are i.i.d. and $\sum_{t=1}^T \xi_{it}/\sqrt{T}$ well approximates $\sum_{t=1}^T \hat{\xi}_{it}/\sqrt{T}$.
We then apply the multiplier bootstrap procedure to approximate the distribution of $\max_{i \in G} |\sum_{t=1}^T \xi_{it}|/\sqrt{T}$. Denote by $\Sigma_u = (\sigma_{ij})_{p \times p}$ the covariance matrix of $u_t$, so that $\mathrm{cov}(\xi_{it}, \xi_{jt}) = \Omega_f(k,k)\,\sigma_{ij}$, where $\Omega_f(k,k) = v_k^\top \Omega_f v_k$. We know that $\hat{\Omega}_f(k,k) = v_k^\top \hat{\Omega}_f v_k$ is $\sqrt{T}$-consistent for $\Omega_f(k,k)$. To estimate $\sigma_{ij}$, we first calculate the residuals
$\hat{u}_{it} = y_{it} - \hat{b}_i^\top f_t.$
Denote $\hat{u}_t = (\hat{u}_{1t},\dots,\hat{u}_{pt})^\top$; then the error covariance matrix is estimated by
$\hat{\Sigma}_u = \frac{1}{T} \sum_{t=1}^T \hat{u}_t \hat{u}_t^\top = (\hat{\sigma}_{ij})_{i \le p, j \le p}.$
Let $\{e_t\}_{t=1}^T$, a sequence of i.i.d. $N(0,1)$ random variables independent of $\{y_t, f_t\}_{t=1}^T$, be the multiplier random variables. Then the multiplier bootstrap statistic is defined as
$W_{T,k} = \max_{i \in G} T^{-1/2} \sqrt{\hat{\Omega}_f(k,k)}\, \Big| \sum_{t=1}^T \hat{u}_{it} e_t \Big|.$
Conditional on $\{y_t, f_t\}_{t=1}^T$, the covariance of $T^{-1/2} \sqrt{\hat{\Omega}_f(k,k)} \sum_{t=1}^T \hat{u}_{it} e_t$ and $T^{-1/2} \sqrt{\hat{\Omega}_f(k,k)} \sum_{t=1}^T \hat{u}_{jt} e_t$ is $\hat{\Omega}_f(k,k)\,\hat{\sigma}_{ij}$, which approximates the covariance between $\xi_{it}$ and $\xi_{jt}$ sufficiently well. Then, the bootstrap critical value is obtained via
$c_{W_{T,k}}(\alpha) = \inf\{ t \in \mathbb{R} : P(W_{T,k} \le t \mid (Y,F)) \ge 1 - \alpha \}.$
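The whole procedure so far can be sketched in NumPy. This is a minimal illustration on simulated data in which $H_{0,G}$ holds with $G = \{1,\dots,p\}$; the seed, dimensions, and number of bootstrap draws are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
T, p, K, k = 200, 50, 3, 0        # test the loadings on factor k (0-based index)
alpha, n_boot = 0.05, 500

# simulate y_t = B f_t + u_t and fit by OLS
B = rng.normal(size=(p, K))
F = rng.normal(size=(T, K))
Y = F @ B.T + rng.normal(size=(T, p))
B_hat = Y.T @ F @ np.linalg.inv(F.T @ F)
U_hat = Y - F @ B_hat.T                          # residuals, T x p

omega_kk = np.linalg.inv(F.T @ F / T)[k, k]      # Omega_f_hat(k, k)

# test statistic M_{T,k} = max_i sqrt(T) |b_hat_{ik} - b_{ik}^null|
b_null = B[:, k]                                 # null values; here H_0 is true
M = np.sqrt(T) * np.max(np.abs(B_hat[:, k] - b_null))

# multiplier bootstrap draws of W_{T,k}
W = np.empty(n_boot)
for b in range(n_boot):
    e = rng.standard_normal(T)                   # multipliers e_t ~ N(0, 1)
    W[b] = np.sqrt(omega_kk / T) * np.max(np.abs(U_hat.T @ e))

crit = np.quantile(W, 1 - alpha)                 # bootstrap critical value
reject = M > crit                                # the test Phi_alpha
```

Since the null is true in this simulation, the rejection indicator should be 1 only about 5% of the time across repeated draws.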
$c_{W_{T,k}}(\alpha)$ is calculated by generating $\{e_t\}_{t=1}^T$ repeatedly. In our simulations and real data analysis, we use 500 bootstrap replications. We now present some technical assumptions.
Assumption 1.
(i)
$\{f_t, u_t\}_{t \ge 1}$ are i.i.d. with $E(u_t) = 0_p$ and $\Sigma_u = \mathrm{cov}(u_t)$.
(ii)
There exist constants $c_1, c_2$ such that $0 < c_1 < \lambda_{\min}(\Sigma_u) < \lambda_{\max}(\Sigma_u) < c_2 < \infty$.
(iii)
$\{u_t\}_{t \ge 1}$ and $\{f_t\}_{t \ge 1}$ are independent.
Assumption 2.
There exist positive constants $r_1, r_2, b_1, b_2$ such that for any $s > 0$, $t \le T$, $i \le p$ and $j \le K$,
$P(|u_{it}| > s) \le \exp\{-(s/b_1)^{r_1}\}, \quad P(|f_{jt}| > s) \le \exp\{-(s/b_2)^{r_2}\}.$
The i.i.d. condition in Assumption 1 is commonly imposed in the literature on high-dimensional inference; see, e.g., [16]. Assumption 1 (ii) requires bounded eigenvalues of the error covariance matrix. As noted in [22], this assumption is satisfied in two situations: (1) $\mathrm{cov}(U_1,\dots,U_p)$, where $\{U_i, i \ge 1\}$ is a stationary ergodic process with spectral density $f$ satisfying $0 < c_1 < f < c_2$, and (2) $\mathrm{cov}(X_1,\dots,X_p)$, where $X_i = U_i + V_i$, $i = 1,\dots,p$, $\{U_i\}$ is a stationary process as above, and $\{V_i\}$ is a noise process independent of $\{U_i\}$. In Example 1 of [22], it is demonstrated that the ARMA($r,q$) process satisfies Assumption 1 (ii). Furthermore, this assumption is commonly imposed in the literature; see, e.g., [4,15].
Assumption 2 allows the application of large deviation theory to $(1/T)\sum_{t=1}^T u_{it} u_{jt} - \sigma_{ij}$ and $(1/T)\sum_{t=1}^T u_{it} f_{jt}$. In this paper, we assume that $f_t$ and $u_t$ have exponential-type tails. Let $\gamma_1^{-1} = 3 r_1^{-1}$ and $\gamma_2^{-1} = 1.5 r_1^{-1} + 1.5 r_2^{-1}$.
Assumption 3.
Suppose $\gamma_1 < 1$, $\gamma_2 < 1$ and there exists a constant $c_1 > 0$ such that $(\log p)^{\gamma} = o(T)$, where $\gamma = \max\{2/\gamma_1 - 1,\ 2/\gamma_2 - 1,\ 7 + c_1\}$.
Assumption 4.
There exists a constant $C > 0$ such that $\lambda_{\max}(\Omega_f) < C$.
Assumption 3 is needed for the Bernstein-type inequality [23] and is commonly assumed in the literature on Gaussian approximation theory. Assumption 4, which bounds the eigenvalues of $\Omega_f$, is also mild.
Theorem 1.
Under Assumptions 1–4, we have
$\sup_{\alpha \in (0,1)} \Big| P\Big( \max_{i \in G} \sqrt{T}\, |\hat{b}_{ik} - b_{ik}| > c_{W_{T,k}}(\alpha) \Big) - \alpha \Big| = o(1).$
Theorem 1 demonstrates that the multiplier bootstrap critical value c W T , k ( α ) well approximates the quantile of the test statistic. It is worth mentioning that our method does not require any sparsity assumption on either Σ u or B .
The proof of Theorem 1 depends on two results: (1) $\max_{i \in G} |\sum_{t=1}^T \xi_{it}|/\sqrt{T}$ is sufficiently close to $\max_{i \in G} |\sum_{t=1}^T \hat{\xi}_{it}|/\sqrt{T}$, and (2) the covariances of $\xi_{it}$ and $\xi_{jt}$ are well approximated by their bootstrap versions. The first result is demonstrated in Lemma A7: there exist $\zeta_1 > 0$ and $\zeta_2 > 0$ such that
$P\Big( \Big| \max_{i \in G} \Big|\sum_{t=1}^T \hat{\xi}_{it}\Big|/\sqrt{T} - \max_{i \in G} \Big|\sum_{t=1}^T \xi_{it}\Big|/\sqrt{T} \Big| > \zeta_1 \Big) < \zeta_2,$
where $\zeta_1 \sqrt{1 \vee \log(p/\zeta_1)} = o(1)$ and $\zeta_2 = o(1)$. The second result is shown in Lemma A6:
$\Delta = \max_{1 \le i,j \le p} \big| \hat{\Omega}_f(k,k)\hat{\sigma}_{ij} - \Omega_f(k,k)\sigma_{ij} \big| = o_P\big( (\log p)^{-2} \big),$
i.e., the maximum discrepancy between the empirical and population covariances converges to zero.
Based on Theorem 1, for a given significance level 0 < α < 1 , we define the test Φ α by
$\Phi_\alpha = I( M_{T,k} > c_{W_{T,k}}(\alpha) ).$
The hypothesis H 0 , G is rejected whenever Φ α = 1 .
The bootstrap is a commonly used resampling method; a comprehensive treatment can be found in [24]. There are many versions of the bootstrap, for example, the wild bootstrap, empirical bootstrap, and multiplier bootstrap, among others. As discussed in [25], other exchangeable bootstrap methods are asymptotically equivalent to the multiplier bootstrap. As our test statistic can be approximated by the maximum of a sum of independent random vectors, we adopt the multiplier bootstrap method of [25] based on Gaussian approximation.
Alternatively, we propose the studentized statistic $M^*_{T,k} := \max_{i \in G} \sqrt{T}\, |\hat{b}_{ik} - b_{ik}^{\mathrm{null}}| / \sqrt{\hat{\omega}_{ii}}$ for $H_{0,G}$, where $\hat{\omega}_{ii} = \hat{\Omega}_f(k,k)\hat{\sigma}_{ii}$. Similarly to the above, we define the multiplier bootstrap statistic as
$W^*_{T,k} = \max_{i \in G} T^{-1/2} \Big| \sum_{t=1}^T \hat{u}_{it} e_t \Big| \sqrt{\hat{\Omega}_f(k,k)/\hat{\omega}_{ii}} = \max_{i \in G} T^{-1/2} \hat{\sigma}_{ii}^{-1/2} \Big| \sum_{t=1}^T \hat{u}_{it} e_t \Big|,$
where $\{e_t\}_{t=1}^T$ are i.i.d. $N(0,1)$ random variables independent of $\{y_t, f_t\}_{t=1}^T$. Then, the bootstrap critical value is obtained via
$c_{W^*_{T,k}}(\alpha) = \inf\{ t \in \mathbb{R} : P(W^*_{T,k} \le t \mid (Y,F)) \ge 1 - \alpha \}.$
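The studentized procedure differs from the non-studentized one only through the $\hat{\sigma}_{ii}^{-1/2}$ scaling, which makes a fully vectorized NumPy sketch convenient; as before, the seed, dimensions, and number of bootstrap draws are our illustrative choices, and $H_{0,G}$ holds in the simulated data.

```python
import numpy as np

rng = np.random.default_rng(2)
T, p, K, k = 200, 50, 3, 0
alpha, n_boot = 0.05, 500

B = rng.normal(size=(p, K))
F = rng.normal(size=(T, K))
Y = F @ B.T + rng.normal(size=(T, p))
B_hat = Y.T @ F @ np.linalg.inv(F.T @ F)
U_hat = Y - F @ B_hat.T

sigma_hat = np.mean(U_hat**2, axis=0)          # diagonal of Sigma_hat_u
omega_kk = np.linalg.inv(F.T @ F / T)[k, k]
omega_ii = omega_kk * sigma_hat                # omega_hat_ii = Omega_f_hat(k,k) sigma_hat_ii

# studentized statistic M*_{T,k} (here the null values equal the truth B[:, k])
M_star = np.sqrt(T) * np.max(np.abs(B_hat[:, k] - B[:, k]) / np.sqrt(omega_ii))

# vectorized multiplier bootstrap: each row of E is one draw of {e_t}
E = rng.standard_normal((n_boot, T))
W_star = np.max(np.abs(E @ U_hat) / np.sqrt(sigma_hat), axis=1) / np.sqrt(T)
crit = np.quantile(W_star, 1 - alpha)
```

Matrix multiplication `E @ U_hat` computes all $n_{\mathrm{boot}} \times p$ bootstrap sums at once, which is much faster than looping over replications.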
Theorem 2 below justifies the validity of the bootstrap procedure for the studentized statistic.
Theorem 2.
Under the assumptions in Theorem 1, we have
$\sup_{\alpha \in (0,1)} \Big| P\Big( \max_{i \in G} \sqrt{T}\, |\hat{b}_{ik} - b_{ik}| / \sqrt{\hat{\omega}_{ii}} > c_{W^*_{T,k}}(\alpha) \Big) - \alpha \Big| = o(1).$
Based on this result, for a given significance level 0 < α < 1 , we define the test Φ α * by
$\Phi^*_\alpha = I( M^*_{T,k} > c_{W^*_{T,k}}(\alpha) ).$
The hypothesis H 0 , G is rejected whenever Φ α * = 1 .
For the studentized statistic, we can derive its asymptotic distribution. By Lemma 6 in [15], for any $x \in \mathbb{R}$ and as $p \to \infty$, we have
$P\Big( \max_{1 \le i \le p} T\, |\hat{b}_{ik} - b_{ik}|^2 / \hat{\omega}_{ii} - 2\log(p) + \log\log(p) \le x \Big) \to \exp\Big\{ -\frac{1}{\sqrt{\pi}} \exp\Big(-\frac{x}{2}\Big) \Big\}.$
However, the above alternative testing procedure may not perform well in practice, because it requires diverging p, and the convergence rate is typically slow.
In contrast to the extreme value approach, our testing procedure explicitly accounts for the effect of | G | in the sense that the bootstrap critical value c W T , k * ( α ) depends on G. Therefore, our approach is more robust to the change in | G | .
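For comparison, the EX critical value implied by the limit above can be computed in closed form by inverting the limiting distribution at level $\alpha$; the helper name and interface below are ours, for illustration.

```python
import numpy as np

def ex_critical_value(p, alpha=0.05):
    """Critical value of the extreme value (EX) test: reject when
    max_i T (b_hat_{ik} - b_{ik}^null)^2 / omega_hat_ii exceeds
    2 log p - log log p + q_alpha, where q_alpha solves
    exp{-pi^{-1/2} exp(-x/2)} = 1 - alpha."""
    q_alpha = -2.0 * np.log(-np.sqrt(np.pi) * np.log1p(-alpha))
    return 2.0 * np.log(p) - np.log(np.log(p)) + q_alpha
```

For instance, `ex_critical_value(600)` is roughly 15.7, and the threshold depends only on the total dimension $p$, not on the tested set $G$, which is the lack of adaptivity discussed above.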
Next, we turn to the (asymptotic) power analysis of the above procedure. Denote by $B_k$ the $k$th column of $B$. Below we focus on the case where $|G| \to \infty$ as $T \to \infty$. Define the separation set
$U_G(c) = \Big\{ (b_{1k},\dots,b_{pk})^\top : \max_{i \in G} |b_{ik} - b_{ik}^{\mathrm{null}}| / \sqrt{\omega_{ii}} > c \sqrt{\log(|G|)/T} \Big\},$
where $\omega_{ij} = \Omega_f(k,k)\sigma_{ij}$. Let $\Theta = (\theta_{ij})_{i,j=1}^p$ with $\theta_{ij} = \omega_{ij}/\sqrt{\omega_{ii}\omega_{jj}} = \sigma_{ij}/\sqrt{\sigma_{ii}\sigma_{jj}}$, which is the correlation matrix of $u_t$.
Assumption 5.
Suppose $\max_{1 \le i \ne j \le p} |\theta_{ij}| \le c_0 < 1$ for some constant $c_0$.
Theorem 3.
Under Assumptions 1–5, for any ε 0 > 0 , we have
$\inf_{B_k \in U_G(\sqrt{2}+\varepsilon_0)} P\Big( \max_{i \in G} \sqrt{T}\, |\hat{b}_{ik} - b_{ik}^{\mathrm{null}}| / \sqrt{\hat{\omega}_{ii}} > c_{W^*_{T,k}}(\alpha) \Big) \to 1.$
As long as one standardized entry $|b_{ik} - b_{ik}^{\mathrm{null}}|/\sqrt{\omega_{ii}}$ exceeds $(\sqrt{2}+\varepsilon_0)\sqrt{\log(|G|)/T}$, our bootstrap-assisted test rejects the null correctly with probability tending to one. Therefore, without requiring $B$ to be sparse, our procedure performs well in detecting non-sparse alternatives. According to Section 3.2 of [26], the separation rate $(\sqrt{2}+\varepsilon_0)\sqrt{\log(|G|)/T}$ is minimax optimal under suitable assumptions.

2.2. Simultaneous Test for Multiple Factors

In this section, we test elements of the loading matrix corresponding to different factors. The testing problem can be stated as follows,
$H_{0,G^*}: b_{ik} = b_{ik}^{\mathrm{null}} \ \text{for all} \ (i,k) \in G^* \quad \text{versus} \quad H_{1,G^*}: b_{ik} \neq b_{ik}^{\mathrm{null}} \ \text{for some} \ (i,k) \in G^*,$
where $G^*$ is a subset of $\mathcal{M} \equiv \{(i,j) : i = 1,\dots,p \ \text{and} \ j = 1,\dots,K\}$. Define
$\omega^*_{(i,k),(j,\ell)} = \mathrm{cov}\big( u_{it} f_t^\top \Omega_f v_k,\ u_{jt} f_t^\top \Omega_f v_\ell \big) = \sigma_{ij} v_k^\top \Omega_f v_\ell, \qquad \hat{\omega}^*_{(i,k),(j,\ell)} = (\hat{\Omega}_f v_k)^\top \Big( \frac{1}{T} \sum_{t=1}^T \hat{u}_{it}\hat{u}_{jt} f_t f_t^\top \Big) (\hat{\Omega}_f v_\ell).$
We propose the studentized test statistic
$M_{T,G^*} = \max_{(i,k) \in G^*} \sqrt{T}\, |\hat{b}_{ik} - b_{ik}^{\mathrm{null}}| \big/ \sqrt{\hat{\omega}^*_{(i,k),(i,k)}}.$
From the linear expansion in (5), the multiplier bootstrap statistic is defined as
$W_{T,G^*} = \max_{(i,k) \in G^*} T^{-1/2} \Big| \sum_{t=1}^T \hat{u}_{it} f_t^\top \hat{\Omega}_f v_k e_t \Big| \Big/ \sqrt{\hat{\omega}^*_{(i,k),(i,k)}},$
where $\{e_t\}_{t=1}^T$ are i.i.d. $N(0,1)$ random variables independent of $\{y_t, f_t\}_{t=1}^T$. Then, the bootstrap critical value is obtained via
$c_{W_{T,G^*}}(\alpha) = \inf\{ t \in \mathbb{R} : P(W_{T,G^*} \le t \mid (Y,F)) \ge 1 - \alpha \}.$
Let $\gamma_3^{-1} = 4 r_1^{-1} + 4 r_2^{-1}$, $r_3^{-1} = 3 r_1^{-1} + 9 r_2^{-1}$ and $r = \max\{2/\gamma_3 - 1,\ 2/r_3 - 1,\ c_1 + 7\}$ for a constant $c_1 > 0$.
Theorem 4.
Suppose $(\log p)^{r} = o(T)$. Under Assumptions 1, 2 and 4, we have
$\sup_{\alpha \in (0,1)} \Big| P\Big( \max_{(i,k) \in G^*} \sqrt{T}\, |\hat{b}_{ik} - b_{ik}| \big/ \sqrt{\hat{\omega}^*_{(i,k),(i,k)}} > c_{W_{T,G^*}}(\alpha) \Big) - \alpha \Big| = o(1).$
Based on Theorem 4, for a given significance level 0 < α < 1 , we define the test Φ α ( G * ) by
$\Phi_\alpha(G^*) = I( M_{T,G^*} > c_{W_{T,G^*}}(\alpha) ).$
The hypothesis H 0 , G * is rejected whenever Φ α ( G * ) = 1 .
Now we turn to the power analysis of the test $\Phi_\alpha(G^*)$. Similarly to Section 2.1, we focus on the case where $|G^*| \to \infty$ as $T \to \infty$ and define the separation set
$V_{G^*}(c) = \Big\{ (b_{ik})_{i \le p, k \le K} : \max_{(i,k) \in G^*} |b_{ik} - b_{ik}^{\mathrm{null}}| \big/ \sqrt{\omega^*_{(i,k),(i,k)}} > c \sqrt{\log(|G^*|)/T} \Big\}.$
Let $\theta^*_{(i,k),(j,\ell)} = \omega^*_{(i,k),(j,\ell)} \big/ \sqrt{\omega^*_{(i,k),(i,k)} \omega^*_{(j,\ell),(j,\ell)}}$. We consider the following condition.
Assumption 6.
Suppose $\max_{(i,k) \ne (j,\ell)} |\theta^*_{(i,k),(j,\ell)}| \le c_0^* < 1$ for some constant $c_0^*$.
The asymptotic power of the testing procedure is given as follows.
Theorem 5.
Under the assumptions in Theorem 4 and Assumption 6, for any $\varepsilon_0 > 0$, we have
$\inf_{B \in V_{G^*}(\sqrt{2}+\varepsilon_0)} P\Big( \max_{(i,k) \in G^*} \sqrt{T}\, |\hat{b}_{ik} - b_{ik}^{\mathrm{null}}| \big/ \sqrt{\hat{\omega}^*_{(i,k),(i,k)}} > c_{W_{T,G^*}}(\alpha) \Big) \to 1.$

3. Multiple Testing with Strong FWER Control

In this section, we study the following multiple testing problem,
$H_{0,i}: b_{ij} \le b_{ij}^{\mathrm{null}} \quad \text{versus} \quad H_{1,i}: b_{ij} > b_{ij}^{\mathrm{null}}, \quad \text{for all} \ i \in G.$
For simplicity, we set $G = \{1,2,\dots,p\}$ and let $j$ be fixed. We combine the bootstrap-assisted procedure with the step-down method proposed in [17]. Our method can be seen as a special case of Section 5 in [25]. Note that this framework covers the case of testing equalities ($H_{0,i}: b_{ij} = b_{ij}^{\mathrm{null}}$), because each equality can be rewritten as a pair of inequalities.
We briefly illustrate the control of the FWER; full details and theory can be found in [25]. Let $\Omega$ be the space of all data-generating processes and $\omega$ the true process. Each null hypothesis $H_{0,i}$ is equivalent to $\omega \in \Omega_i$ for some $\Omega_i \subseteq \Omega$. For any $\eta \subseteq G$, denote $\Omega^\eta = (\cap_{i \in \eta} \Omega_i) \cap (\cap_{i \notin \eta} \Omega_i^c)$ with $\Omega_i^c = \Omega \setminus \Omega_i$. Strong control of the FWER means that
$\sup_{\eta \subseteq G}\ \sup_{\omega \in \Omega^\eta} P_\omega\big( \text{reject at least one hypothesis} \ H_{0,i},\ i \in \eta \big) \le \alpha + o(1), \quad (8)$
where $P_\omega$ denotes the probability distribution under the data-generating process $\omega$.
For $i = 1,\dots,p$, denote $t_{ij} = \sqrt{T}(\hat{b}_{ij} - b_{ij}^{\mathrm{null}})$. For a subset $\eta \subseteq G$, let $c_\eta(\alpha)$ be the bootstrap estimate of the $(1-\alpha)$-quantile of $\max_{i \in \eta} t_{ij}$. The step-down procedure in [17] is described as follows. At the first step, set $\eta(1) = G$ and reject all $H_{0,i}$ satisfying $t_{ij} > c_{\eta(1)}(\alpha)$; if no $H_{0,i}$ is rejected, stop the procedure. At step $\ell \ge 2$, let $\eta(\ell) \subseteq G$ be the set of indices of the hypotheses not rejected at steps $1,\dots,\ell-1$, and reject all $H_{0,i}$, $i \in \eta(\ell)$, satisfying $t_{ij} > c_{\eta(\ell)}(\alpha)$; if no hypothesis is rejected, stop the procedure. Proceed in this way until the algorithm stops.
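The step-down loop can be sketched generically; `stepdown_fwer` is our illustrative name, and the function assumes the observed statistics and a matrix of bootstrap replicates have already been computed as in Section 2.1.

```python
import numpy as np

def stepdown_fwer(t_stat, boot_draws, alpha=0.05):
    """Romano-Wolf step-down procedure with bootstrap critical values.

    t_stat     : (p,) observed statistics t_{ij} = sqrt(T) (b_hat_{ij} - b_{ij}^null)
    boot_draws : (n_boot, p) bootstrap replicates of the vector of statistics
    Returns a boolean array marking the rejected hypotheses.
    """
    t_stat = np.asarray(t_stat)
    p = t_stat.shape[0]
    active = np.ones(p, dtype=bool)       # eta(l): hypotheses not yet rejected
    rejected = np.zeros(p, dtype=bool)
    while active.any():
        # c_eta(alpha): bootstrap (1 - alpha)-quantile of the max over the active set
        c = np.quantile(boot_draws[:, active].max(axis=1), 1 - alpha)
        newly = active & (t_stat > c)
        if not newly.any():
            break                          # no rejection at this step: stop
        rejected |= newly
        active &= ~newly
    return rejected
```

Because the critical value is recomputed over a shrinking active set, it can only decrease across steps, which is exactly the monotonicity property the method relies on.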
Romano and Wolf [17] proved the following result: if
$c_{\eta'}(\alpha) \le c_{\eta}(\alpha) \quad \text{for} \ \eta' \subseteq \eta, \quad (9)$
$\sup_{\eta \subseteq G}\ \sup_{\omega \in \Omega^\eta} P_\omega\Big( \max_{i \in \eta} t_{ij} > c_\eta(\alpha) \Big) \le \alpha + o(1), \quad (10)$
then the step-down procedure provides strong control of the FWER in the sense of (8).
Therefore, we can show that the step-down method together with the multiplier bootstrap provides strong control of the FWER by verifying (9) and (10). The theoretical result is given in the proposition below; the proof is similar to that of Theorem 1 and is omitted here.
Proposition 1.
Under the assumptions in Theorem 1, the step-down procedure with the bootstrap critical value c η ( α ) satisfies (8).
Our multiple testing method has two important features: (i) it can be applied to models with increasing dimension; and (ii) it takes into account the correlation among the statistics and hence is asymptotically non-conservative.
In the simulations, we also consider the Benjamini–Hochberg procedure [20] to control the false discovery rate (FDR), which is summarized as follows. For each of $H_{0,1},\dots,H_{0,p}$, we calculate the p-values $P_1,\dots,P_p$ based on the studentized test statistic. Let $P_{(1)} \le \dots \le P_{(p)}$ be the ordered p-values, and denote by $H_{0,(i)}$ the null hypothesis corresponding to $P_{(i)}$. Let $k = \max\{ i : P_{(i)} \le i\alpha/p \}$, and reject all $H_{0,(i)}$ for $i = 1,\dots,k$.
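The Benjamini–Hochberg step-up rule takes only a few lines; `benjamini_hochberg` is our illustrative name.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up rule: with ordered p-values P_(1) <= ... <= P_(p),
    let k = max{ i : P_(i) <= i * alpha / p } and reject H_{0,(i)} for i <= k."""
    pvals = np.asarray(pvals)
    p = pvals.shape[0]
    order = np.argsort(pvals)
    below = pvals[order] <= alpha * np.arange(1, p + 1) / p
    reject = np.zeros(p, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # 0-based position of P_(k)
        reject[order[: k + 1]] = True
    return reject
```

Note that all hypotheses up to the largest index satisfying the threshold are rejected, even if some intermediate ordered p-values exceed their own thresholds; this step-up structure is what distinguishes it from the step-down FWER method above.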

4. Simulation Study

This section examines the performance of the proposed testing procedures in a simulation study. We fix the number of factors $K = 3$ and the sample size $T \in \{200, 400\}$, and let the dimensionality $p$ increase from 50 to 600. Throughout the simulations, we consider testing the first column of $B$ and use 500 multiplier bootstrap replications.
Each row of $B$ is generated independently from $N(0, I_K)$, where $I_K$ is the $K \times K$ identity matrix. Let $\mathrm{cov}(f_t) = (\sigma^f_{ij})_{K \times K}$ with $\sigma^f_{ij} = 0.6^{|i-j|}$. Here, we consider two models for the covariance structure $\Sigma_u$.
(a)
Model 1 (sparse): $\Omega_u = (\omega_{ij})_{1 \le i,j \le p}$, where $\omega_{ii} = 1$, $\omega_{ij} = 0.8$ for $2(k-1)+1 \le i \ne j \le 2k$ with $k = 1,\dots,[p/2]$, and $\omega_{ij} = 0$ otherwise. $\Sigma_u = \Omega_u^{-1}$.
(b)
Model 2 (non-sparse): $\Sigma_u = (\sigma_{ij})_{1 \le i,j \le p}$, where $\sigma_{ii} = 1$ and $\sigma_{ij} = 0.5$ for $i \ne j$.
Under each model, $\{f_t\}_{t=1}^T$ and $\{u_t\}_{t=1}^T$ are generated independently from $N(0, \mathrm{cov}(f_t))$ and $N(0, \Sigma_u)$, respectively.
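The data-generating design above can be sketched as follows; the function names are ours, and the final draw uses Model 1 errors.

```python
import numpy as np

def make_sigma_u_model1(p):
    """Model 1 (sparse): precision matrix Omega_u with unit diagonal and
    omega_ij = 0.8 inside disjoint 2x2 blocks; Sigma_u = Omega_u^{-1}."""
    Omega = np.eye(p)
    for k in range(p // 2):
        i, j = 2 * k, 2 * k + 1
        Omega[i, j] = Omega[j, i] = 0.8
    return np.linalg.inv(Omega)

def make_sigma_u_model2(p):
    """Model 2 (non-sparse): sigma_ii = 1 and sigma_ij = 0.5 for i != j."""
    return 0.5 * np.ones((p, p)) + 0.5 * np.eye(p)

def simulate(T=200, p=50, K=3, seed=0):
    """Draw (Y, F, B) from the Section 4 design, using Model 1 errors."""
    rng = np.random.default_rng(seed)
    B = rng.normal(size=(p, K))                  # rows of B drawn from N(0, I_K)
    cov_f = 0.6 ** np.abs(np.subtract.outer(np.arange(K), np.arange(K)))
    F = rng.multivariate_normal(np.zeros(K), cov_f, size=T)
    U = rng.multivariate_normal(np.zeros(p), make_sigma_u_model1(p), size=T)
    return F @ B.T + U, F, B
```

Each 2x2 block of $\Omega_u$ has determinant $1 - 0.8^2 = 0.36 > 0$, so the precision matrix is positive definite and invertible.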
We calculate the empirical sizes of the test for each column of $B$ under each model by considering hypothesis (3) with $G = \{1,2,\dots,p\}$ and $b_{ik}^{\mathrm{null}}$ being the true value of $b_{ik}$. The results are summarized in Table 1. Here, "NST" and "ST" denote the non-studentized and studentized bootstrap-based tests, respectively, and "EX" denotes the test using the extreme value distribution. The estimated sizes of the three tests are reasonably close to the nominal level 0.05 for values of $p$ ranging from 50 to 600.
For all $i \in G$, by varying $b_{ik} = b_{ik}^{\mathrm{null}} + c\ell/40$ with $c = \pm 0.8$ and $\ell = 0,\dots,10$, we plot the empirical powers of $M_{T,k}$ and $M^*_{T,k}$ in Figure 1. For ease of presentation, we only consider $p \in \{10, 200, 600\}$. The results for other dimensionalities are similar in spirit and are not presented here. For all tests, the significance level is fixed at $\alpha = 0.05$. From Figure 1, we see that the empirical rejection rate grows from the nominal level to one as the deviation $c\ell/40$ moves away from zero. The difference between the NST and ST tests is slight. For small $p$, the EX test does not perform well because that approach requires diverging $p$. Furthermore, for a non-sparse error covariance matrix, our method performs better than the EX method. These numerical results confirm our theoretical analysis.
Next, we study the numerical performance of the step-down method of Section 3 and compare it with the Bonferroni–Holm procedure. Consider the following two-sided multiple testing problem: $H_{0,i}: b_{ij} = \tilde{b}_{ij}^{\mathrm{null}}$ for all $i = 1,2,\dots,p$ with $j = 1$. For Models 1 and 2, the first $s_0$ entries of $\{\tilde{b}_{ij}^{\mathrm{null}}\}_{i=1}^p$ are $b_{ij}^{\mathrm{null}} + 0.5$ and $b_{ij}^{\mathrm{null}} + 0.35$, respectively, and the rest are equal to $b_{ij}^{\mathrm{null}}$. We set $T \in \{200, 400\}$ and $p \in \{50, 200, 500, 600\}$.
We employ the step-down method based on the studentized and non-studentized test statistics, and the Bonferroni–Holm procedure (based on the studentized test statistic), to control the FWER. We denote these three procedures by NST-FWER, ST-FWER, and BH-FWER, respectively. For comparison, we also consider using the Benjamini–Hochberg procedure to control the FDR; we denote this procedure by BH-FDR. Based on 500 replications, we calculate the average empirical FWER
$\mathrm{Average}\big\{ I\{ \text{at least one hypothesis} \ H_{0,i} \ \text{is rejected},\ i \in \{s_0+1,\dots,p\} \} \big\}$
for the methods NST-FWER, ST-FWER, and BH-FWER; the average empirical FDR
$\mathrm{Average}\Big\{ \sum_{i \notin S_0} I\{H_{0,i} \ \text{is rejected}\} \Big/ \sum_{i \in G} I\{H_{0,i} \ \text{is rejected}\} \Big\}$
for the method BH-FDR; and the average empirical power
$\mathrm{Average}\Big\{ \sum_{i \in S_0} I\{H_{0,i} \ \text{is rejected}\} \Big/ s_0 \Big\}$
for all four methods, where $S_0 = \{1,2,\dots,s_0\}$ and $G = \{1,\dots,p\}$. Under each model, we consider the cases $s_0 = 3$ and $s_0 = 15$. Tables 2 and 3 report the empirical FWER, FDR, and average power. From Tables 2 and 3, the proposed and Bonferroni–Holm procedures provide similar control of the FWER, and the Benjamini–Hochberg procedure controls the FDR. The empirical powers of the step-down method and the Benjamini–Hochberg procedure are higher than that of the Bonferroni–Holm procedure. It is also seen that controlling the FDR is more powerful than controlling the FWER.

5. Real Data Analysis

This section conducts hypothesis testing on financial data from 1 January 2017 to 14 March 2018. The dataset consists of daily returns of 491 stocks in the S&P 500 index. In addition, we collected the Fama–French three factors [21] for the same period. In summary, the panel matrix $Y$ is of size 300 by 491 and the factor matrix $F$ is of size 300 by 3, where 300 is the number of trading days and 491 is the number of stocks.
We first center and standardize the factor matrix $F$; $Y$ is centered as well. We consider testing the sparsity of each column of $B$ and use 500 multiplier bootstrap replications. Simultaneous tests of parameters corresponding to multiple factors are also considered. The hypotheses are
$H_0: b_{ik} = 0 \ \text{for all} \ (i,k) \in s \quad \text{versus} \quad H_1: b_{ik} \neq 0 \ \text{for some} \ (i,k) \in s,$
where $s = \{(i,k) : k \in s^* \subseteq \{1,2,3\},\ |\hat{b}_{ik}| \ \text{is among the smallest} \ \beta\% \ \text{of} \ \{|\hat{b}_{ik}|\}_{i \in \{1,\dots,p\}}\}$, with $s^* = \{1\}, \{2\}, \{3\}$ or $\{2,3\}$ and $\beta = 10, 30, 50, 70, 90$. The results are reported in Table 4. For the first column of $B$, the null hypothesis is rejected, so it is not reasonable to assume $b_{i1} = 0$. However, we can claim that the last two columns of $B$ are sparse. Hence, a sufficiently large number of stocks are not influenced by the last two factors.

Author Contributions

X.G. conceived and designed the experiments; Y.W. performed the experiments; Y.W. analyzed the data; X.G. contributed to analysis tools; X.G. and Y.W. wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grants 12071452, 11601500, 11671374 and 11771418 and the Fundamental Research Funds for the Central Universities.

Acknowledgments

We thank the three referees for insightful comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Technical Details

We prove the main results in this section. First, we introduce some notation. Throughout this section, we denote by $c, c', C, C', C_i$ constants that do not depend on $p, T$ and may vary from place to place. Define $T_0 = \max_{i \in G} |\sum_{t=1}^T \xi_{it}|/\sqrt{T}$ and $T_1 = \max_{i \in G} |\sum_{t=1}^T \hat{\xi}_{it}|/\sqrt{T}$. Let $\{z_t\}_{t=1}^T$ be a sequence of $N(0_p, \Omega_f(k,k)\Sigma_u)$ vectors. Define $c_{W_0}(\alpha) = \inf\{t \in \mathbb{R} : P(W_0 \le t \mid (Y,F)) \ge 1-\alpha\}$ with $W_0 = \max_{i \in G} T^{-1/2} \sqrt{\hat{\Omega}_f(k,k)}\, |\sum_{t=1}^T \hat{u}_{it} e_t|$, and $c_{Z_0}(\alpha) = \inf\{t \in \mathbb{R} : P(Z_0 \le t \mid (Y,F)) \ge 1-\alpha\}$ with $Z_0 = \max_{i \in G} |\sum_{t=1}^T z_{it}|/\sqrt{T}$. Denote $\Sigma_f(m,n) = v_m^\top \Sigma_f v_n$ with $\Sigma_f = \Omega_f^{-1}$. We begin by presenting some useful lemmas that will be used in the proofs of the main results.
Lemma A1.
Suppose that the random variables $Z_1, Z_2$ both satisfy the exponential-type tail condition: there exist $r_1, r_2 \in (0,1)$ and $b_1, b_2 > 0$ such that for every $s > 0$,
$P(|Z_i| > s) \le \exp\{1 - (s/b_i)^{r_i}\}, \quad i = 1,2.$
Then, for some $r_3$ and $b_3 > 0$, and any $s > 0$,
(i)
$P(|Z_1 Z_2| > s) \le \exp\{1 - (s/b_3)^{r_3}\}$, where $r_3 \in (0, r)$ with $r = r_1 r_2/(r_1 + r_2)$;
(ii)
$P(|Z| > s) \le \exp\{1 - (s/b_3)^{r_3}\}$, where $Z = \max\{Z_1, Z_2\}$.
Proof of Lemma A1. 
The proof of the first claim can be found in the proof of Lemma A.2 in [4], so we prove the second claim. For any $s > 0$, we have
$P(|Z| > s) \le P(|Z_1| > s) + P(|Z_2| > s) \le \exp\{1 - (s/b_1)^{r_1}\} + \exp\{1 - (s/b_2)^{r_2}\} \le 2\exp\{1 - (s/b)^{r}\},$
where $b^{r} = \max\{b_1^{r_1}, b_2^{r_2}\}$. Pick an $r_3 \in (0, r)$ and $b_3 > \max\{(r_3/r)^{1/r} b,\ (1+\log 2)^{1/r} b\}$; then, it can be shown that $F(s) = (s/b)^{r} - (s/b_3)^{r_3}$ is increasing for $s > b_3$. Therefore, $F(s) > F(b_3) > \log 2$ when $s > b_3$, which implies that for $s > b_3$,
$P(|Z| > s) \le 2\exp\{1 - (s/b)^{r}\} \le \exp\{1 - (s/b_3)^{r_3}\}.$
When $s \le b_3$,
$P(|Z| > s) \le 1 \le \exp\{1 - (s/b_3)^{r_3}\}.$
Then the proof is complete. ☐
Lemma A2.
Under Assumptions 1–4, we have
(i)
$\max_{i,j \le p} |\hat{\sigma}_{ij} - \sigma_{ij}| = O_p(\sqrt{\log p / T})$.
(ii)
$\max_{i \le p, k \le K} |(1/T) \sum_{t=1}^T f_{kt} u_{it}| = O_p(\sqrt{\log p / T})$.
(iii)
$\max_{i,j \le K} |(1/T) \sum_{t=1}^T f_{it} f_{jt} - E f_{it} f_{jt}| = O_p(\sqrt{\log T / T})$.
Proof of Lemma A2. 
For a proof, see the proof of Lemma A.3 and Lemma B.1 in [4]. ☐
Lemma A3.
If a random variable $X$ satisfies an exponential-type tail condition: there exist $r > 0$ and $b > 0$ such that for all $s > 0$, $P(|X| > s) \le \exp\{1 - (s/b)^{r}\}$, then $E(|X|) = O(1)$.
Proof of Lemma A3. 
Note that
$E(|X|) = \int_0^\infty P(|X| > x)\, dx \le \int_0^\infty \exp\{1 - (x/b)^{r}\}\, dx \lesssim \int_0^\infty \exp(-x^{r})\, dx := I.$
It is not hard to check that when $r \ge 1$, $I \le 1 + e^{-1}$. When $r < 1$, $I = \alpha \Gamma(\alpha)$, where $\alpha = 1/r$ and $\Gamma(\alpha) = \int_0^\infty e^{-x} x^{\alpha-1}\, dx = O(1)$. Then, the proof is complete. ☐
Lemma A4.
Under the assumptions in Theorem 1, there exist constants $c, C > 0$ such that
$\rho := \sup_{t \in \mathbb{R}} |P(T_0 \le t) - P(Z_0 \le t)| \le C T^{-c}.$
Proof of Lemma A4. 
By Assumption 3, we have $(\log(pT))^7 / T \le C_1 T^{-c_1}$ for some constants $c_1, C_1 > 0$. We then apply Corollary 2.1 of [25] to the sequence $\xi_{it}$. What we need to check is its Condition (E.2), that is, uniformly over $i$,
$c_0 \le E(\xi_{it}^2) \le C_0, \qquad \max_{k=1,2} E\big( |\xi_{it}|^{k+2} / B^{k} \big) + E\big\{ (\max_{1 \le i \le p} |\xi_{it}| / B)^4 \big\} \le 4,$
where $c_0, C_0 > 0$ and $B$ is a sufficiently large constant. By Lemmas A1 and A3, we have $E(f_{it} f_{jt}) = O(1)$ uniformly for $i, j \le K$. This implies $\|\Omega_f\|_{\max} = O(1)$. Uniformly for $i \le p$, we have
$|\xi_{it}| = |u_{it} f_t^\top \Omega_f v_k| \le |u_{it}|\, \|f_t\|_\infty\, \|\Omega_f v_k\|_1 := \gamma_{it}.$
By Lemma A1, $\gamma_{it}$ and $\max_{1 \le i \le p} |\gamma_{it}|$ have exponential-type tails. Then, by Lemma A3, we have $E(|\xi_{it}|^4) \le E(|\gamma_{it}|^4) = O(1)$ and $E(\max_{i \le p} |\xi_{it}|)^4 \le E(\max_{i \le p} |\gamma_{it}|)^4 = O(1)$. Thus, we can find a large enough $B$ such that the above condition is satisfied. Then, the proof is complete. ☐
Lemma A5.
Under the assumptions in Theorem 1, there exists a sequence of positive numbers α_T such that α_T/p = o(1) and P(α_T (log p)^2 |Ω̂_f(k,k) − Ω_f(k,k)| > 1) → 0.
Proof of Lemma A5. 
By Lemma A2 (iii), we have ‖Ω̂_f^{−1} − Ω_f^{−1}‖ = O_p(√(log T/T)). Since ‖Ω_f‖ = O(1) and ‖Ω̂_f‖ = O_p(1), we have
‖Ω̂_f − Ω_f‖ = ‖Ω̂_f (Ω_f^{−1} − Ω̂_f^{−1}) Ω_f‖ ≤ ‖Ω̂_f‖ ‖Ω̂_f^{−1} − Ω_f^{−1}‖ ‖Ω_f‖ = O_p(√(log T/T)).
On the other hand,
|Ω̂_f(k,k) − Ω_f(k,k)| ≤ ‖Ω̂_f − Ω_f‖ = O_p(√(log T/T)).
Choosing α T = log T , by Assumption 3, the proof is complete. ☐
Lemma A6.
Under the assumptions in Theorem 1, we have for every α ∈ (0, 1) and ϑ > 0,
P(c_{W_0}(α) ≤ c_{Z_0}(α + π(ϑ))) ≥ 1 − P(Δ > ϑ), P(c_{Z_0}(α) ≤ c_{W_0}(α + π(ϑ))) ≥ 1 − P(Δ > ϑ).
Proof of Lemma A6. 
For ϑ > 0, let π(ϑ) = C_2 ϑ^{1/3} (1 ∨ log(p/ϑ))^{2/3} with C_2 > 0. Recall that Δ = max_{1≤i,j≤p} |Ω̂_f(k,k) σ̂_{ij} − Ω_f(k,k) σ_{ij}|. As |Ω_f(k,k)| = O(1) uniformly for k ≤ K, by Lemma A2 (i), we have
Δ = O_p(|Ω̂_f(k,k) − Ω_f(k,k)| + √((log p)/T)).
By Lemma A5 and Assumption 3, choosing ϑ = 1/(α_T (log p)^2), we have P(Δ > ϑ) = o(1). By Lemma 3.1 of [25], on the event {(Y, F) : Δ ≤ ϑ}, we have |P(Z_0 ≤ t) − P(W_0 ≤ t | (Y, F))| ≤ π(ϑ) for all t ∈ ℝ, and so on this event
P(W_0 ≤ c_{Z_0}(α + π(ϑ)) | (Y, F)) ≥ P(Z_0 ≤ c_{Z_0}(α + π(ϑ))) − π(ϑ) ≥ α + π(ϑ) − π(ϑ) = α,
implying the first claim. The second claim follows similarly. ☐
Lemma A7.
Under the assumptions in Theorem 1, there exist ζ_1, ζ_2 > 0 such that
P(|max_{1≤i≤p} Σ_{t=1}^T ξ̂_{it}/√T − max_{1≤i≤p} Σ_{t=1}^T ξ_{it}/√T| > ζ_1) < ζ_2,
where ζ_1 √(1 ∨ log(p/ζ_1)) = o(1) and ζ_2 = o(1).
Proof of Lemma A7. 
The arguments in the proof of Lemma A5 imply that
‖Ω̂_f v_k − Ω_f v_k‖_1 = O_p(√(log T/T)).
By Lemma A2 (ii), uniformly for i ≤ p, we have
|Σ_{t=1}^T ξ̂_{it}/√T − Σ_{t=1}^T ξ_{it}/√T| = |Σ_{t=1}^T u_{it} f_t′ (Ω̂_f v_k − Ω_f v_k)/√T| ≤ √T ‖Ω̂_f v_k − Ω_f v_k‖_1 max_{i≤p, k≤K} |(1/T) Σ_{t=1}^T f_{kt} u_{it}| = O_p(√(log p log T/T)).
Choosing ζ_1 with ζ_1^2 = O(log p log T/T), we have
P(max_{1≤i≤p} |Σ_{t=1}^T ξ̂_{it}/√T − Σ_{t=1}^T ξ_{it}/√T| > ζ_1) ≤ ζ_2, with ζ_2 = o(1).
Note that
|max_{1≤i≤p} Σ_{t=1}^T ξ̂_{it}/√T − max_{1≤i≤p} Σ_{t=1}^T ξ_{it}/√T| ≤ max_{1≤i≤p} |Σ_{t=1}^T ξ̂_{it}/√T − Σ_{t=1}^T ξ_{it}/√T|;
then the proof is complete. ☐
Lemma A8.
Under the assumptions in Theorem 4, we have
(i)
max_{i,j,m,n} |(1/T) Σ_{t=1}^T u_{it} u_{jt} f_{mt} f_{nt} − σ_{ij} Σ_f(m,n)| = O_p(√(log p/T));
(ii)
max_{i,j,m,n} |(1/T) Σ_{t=1}^T u_{it} f_{jt} f_{mt} f_{nt}| = O_p(√(log p/T)).
Proof of Lemma A8. 
(i) By Assumption 1 and Lemma A1, u_{it} f_{mt} satisfies the exponential tail condition with parameter 2r_1 r_2/(3r_1 + 3r_2), as shown in Lemma A1. Thus, u_{it} u_{jt} f_{mt} f_{nt} satisfies the exponential tail condition with parameter γ_3 = r_1 r_2/(4r_1 + 4r_2). It follows from 1.5(r_1^{−1} + r_2^{−1}) > 1 that γ_3 < 1. Therefore, by Bernstein's inequality [23], there exist constants C_i, i = 1, …, 5, such that for any s > 0,
max_{i,j,m,n} P(|(1/T) Σ_{t=1}^T u_{it} u_{jt} f_{mt} f_{nt} − σ_{ij} Σ_f(m,n)| ≥ s) ≤ T exp(−(Ts)^{γ_3}/C_1) + exp(−T^2 s^2/(C_2 (1 + T C_3))) + exp(−((Ts)^2/(C_4 T)) exp((Ts)^{γ_3 (1−γ_3)}/(C_5 (log Ts)^{γ_3}))). (A1)
Using Bonferroni's method, we have
P(max_{i,j,m,n} |(1/T) Σ_{t=1}^T u_{it} u_{jt} f_{mt} f_{nt} − σ_{ij} Σ_f(m,n)| > s) ≤ (pK)^2 max_{i,j,m,n} P(|(1/T) Σ_{t=1}^T u_{it} u_{jt} f_{mt} f_{nt} − σ_{ij} Σ_f(m,n)| > s).
Let s = C√((log p)/T) for some C > 0. It is not hard to check that, when (log p)^{2/γ_3 − 1} = o(T) (by assumption), for large enough C,
p^2 T exp(−(Ts)^{γ_3}/C_1) + p^2 exp(−((Ts)^2/(C_4 T)) exp((Ts)^{γ_3 (1−γ_3)}/(C_5 (log Ts)^{γ_3}))) = o(1/p^2)
and
p^2 exp(−T^2 s^2/(C_2 (1 + T C_3))) = O(1/p^2).
As K = O(1), this proves (i).
(ii) By Assumption 1 and Lemma A1, u_{it} f_{jt} f_{mt} f_{nt} satisfies the exponential tail condition with parameter r_3 = r_1 r_2/(9r_1 + 3r_2), that is, r_3^{−1} = 3r_1^{−1} + 9r_2^{−1}; it follows from 1.5(r_1^{−1} + r_2^{−1}) > 1 that r_3 < 1. Applying Bernstein's inequality and the Bonferroni method to u_{it} f_{jt} f_{mt} f_{nt} as in (A1), with s = C√(log p/T) for large enough C and K fixed, the term
p K^3 exp(−T^2 s^2/(C_2 (1 + T C_3))) ≤ p^{−2},
and the remaining terms on the right-hand side of the inequality, multiplied by p K^3, are of order o(p^{−2}). Hence, when (log p)^{2/r_3 − 1} = o(T) (by assumption), we have
max_{i,j,m,n} |(1/T) Σ_{t=1}^T u_{it} f_{jt} f_{mt} f_{nt}| = O_p(√(log p/T)),
which completes the proof. ☐
Lemma A9.
Under the assumptions in Theorem 4, we have
(i)
max_{i,j,m,n} |(1/T) Σ_{t=1}^T (û_{it} û_{jt} f_{mt} f_{nt} − u_{it} u_{jt} f_{mt} f_{nt})| = O_p(√(log p/T)).
(ii)
max_{i,j,m,n} |(1/T) Σ_{t=1}^T û_{it} û_{jt} f_{mt} f_{nt} − σ_{ij} Σ_f(m,n)| = O_p(√(log p/T)).
Proof of Lemma A9. 
(i) By the triangle inequality, we have
|(1/T) Σ_{t=1}^T (û_{it} û_{jt} f_{mt} f_{nt} − u_{it} u_{jt} f_{mt} f_{nt})| ≤ |(1/T) Σ_{t=1}^T (û_{it} û_{jt} − û_{it} u_{jt}) f_{mt} f_{nt}| + |(1/T) Σ_{t=1}^T (û_{it} u_{jt} − u_{it} u_{jt}) f_{mt} f_{nt}| := I + II.
For I, since u_{jt} − û_{jt} = (b̂_j − b_j)′ f_t, we have
I = |(1/T) Σ_{t=1}^T û_{it} (b̂_j − b_j)′ f_t f_{mt} f_{nt}| ≤ |(1/T) Σ_{t=1}^T (b̂_i − b_i)′ f_t (b̂_j − b_j)′ f_t f_{mt} f_{nt}| + |(1/T) Σ_{t=1}^T u_{it} (b̂_j − b_j)′ f_t f_{mt} f_{nt}| := i + ii.
By Lemma 3.1 of [4], we have max_{i≤p} ‖b̂_i − b_i‖ = O_p(√(log p/T)). It is straightforward to see that
max_{m,n≤K} ‖(1/T) Σ_{t=1}^T f_t f_t′ f_{mt} f_{nt}‖ = O_p(1).
Then we have
i ≤ max_{i≤p} ‖b̂_i − b_i‖^2 max_{m,n≤K} ‖(1/T) Σ_{t=1}^T f_t f_t′ f_{mt} f_{nt}‖ = O_p(log p/T).
By Lemma A8 (ii), we have ‖(1/T) Σ_{t=1}^T u_{it} f_t f_{mt} f_{nt}‖ = O_p(√(log p/T)), which implies that
ii ≤ max_{j≤p} ‖b̂_j − b_j‖ ‖(1/T) Σ_{t=1}^T u_{it} f_t f_{mt} f_{nt}‖ = O_p(log p/T).
Part II is similar to ii; thus, we have
II = |(1/T) Σ_{t=1}^T (b̂_i − b_i)′ f_t u_{jt} f_{mt} f_{nt}| = O_p(log p/T).
Then the proof is complete.
(ii) By the triangle inequality, we have
max_{i,j,m,n} |(1/T) Σ_{t=1}^T û_{it} û_{jt} f_{mt} f_{nt} − σ_{ij} Σ_f(m,n)| ≤ max_{i,j,m,n} |(1/T) Σ_{t=1}^T (û_{it} û_{jt} f_{mt} f_{nt} − u_{it} u_{jt} f_{mt} f_{nt})| + max_{i,j,m,n} |(1/T) Σ_{t=1}^T u_{it} u_{jt} f_{mt} f_{nt} − σ_{ij} Σ_f(m,n)| = O_p(√(log p/T)),
which proves the result. ☐
Proof of Theorem 1. 
Without loss of generality, we set G = {1, 2, …, p}. First, we prove the following fact:
sup_{α∈(0,1)} |P(T_1 > c_{W_0}(α)) − α| = o(1). (A2)
For ϑ > 0, let π(ϑ) := C_2 ϑ^{1/3} (1 ∨ log(p/ϑ))^{2/3} with C_2 > 0. In addition, let κ_1(ϑ) := c_{Z_0}(α − ζ_2 − π(ϑ)) and κ_2(ϑ) := c_{Z_0}(α + ζ_2 + π(ϑ)). For every α ∈ (0, 1), note that
P({T_1 ≤ c_{W_0}(α)} △ {T_0 ≤ c_{Z_0}(α)}) ≤_{(1)} P(κ_1(ϑ) − 2ζ_1 < T_0 ≤ κ_2(ϑ) + 2ζ_1) + P(Δ > ϑ) + ζ_2 ≤_{(2)} P(κ_1(ϑ) − 2ζ_1 < Z_0 ≤ κ_2(ϑ) + 2ζ_1) + P(Δ > ϑ) + ρ + ζ_2 ≤_{(3)} π(ϑ) + P(Δ > ϑ) + ρ + ζ_1 √(1 ∨ log(p/ζ_1)) + ζ_2,
where (1) follows from Lemmas A6 and A7, (2) follows from Lemma A4, and (3) follows from Lemma 2.1 in [25] and the fact that Z_0 has no point masses. Then, by the definition of ρ in Lemma A4, we have
sup_{α∈(0,1)} |P(T_1 > c_{W_0}(α)) − α| ≤ ρ + ρ′,
where ρ′ = sup_{α∈(0,1)} P({T_1 ≤ c_{W_0}(α)} △ {T_0 ≤ c_{Z_0}(α)}). The right-hand side of the above inequality is o(1), which proves (A2). Since max_{i∈G} √T |b̂_{ik} − b_{ik}| = √T max_{i∈G} max{b̂_{ik} − b_{ik}, b_{ik} − b̂_{ik}}, similar arguments imply that
sup_{α∈(0,1)} |P(max_{i∈G} √T |b̂_{ik} − b_{ik}| > c_{W_{T,k}}(α)) − α| = o(1),
which completes the proof. ☐
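Theorem 1 licenses computing the critical value c_{W_0}(α) by a Gaussian multiplier bootstrap: conditionally on the data, W_0 = max_i |Σ_t e_t ξ̂_{it}|/√T with iid standard normal multipliers e_t, and c_{W_0}(α) is the (1 − α)-quantile over bootstrap draws. A minimal sketch with synthetic scores (the array `xi` merely stands in for the estimated ξ̂_{it}; the sizes and iid N(0,1) data are illustrative assumptions, not the paper's factor-model construction):

```python
import math
import random

random.seed(1)
p, T, B, alpha = 40, 100, 200, 0.05  # illustrative sizes

# Synthetic scores standing in for the estimated xi_hat_it; in the paper these
# are built from residuals and factors, here iid N(0,1) for illustration.
xi = [[random.gauss(0.0, 1.0) for _ in range(T)] for _ in range(p)]

def multiplier_draw():
    # One bootstrap draw: W0 = max_i |sum_t e_t xi_it| / sqrt(T),
    # with multipliers e_t iid N(0,1) drawn independently of the data.
    e = [random.gauss(0.0, 1.0) for _ in range(T)]
    return max(
        abs(sum(e[t] * xi[i][t] for t in range(T))) / math.sqrt(T)
        for i in range(p)
    )

draws = sorted(multiplier_draw() for _ in range(B))
c_alpha = draws[math.ceil((1 - alpha) * B) - 1]  # empirical (1 - alpha)-quantile
print(round(c_alpha, 2))
```

The test then rejects when the observed maximum statistic exceeds c_alpha; with these sizes the critical value lands near the 95% quantile of a maximum of 40 roughly standard normal magnitudes.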
Proof of Theorem 2. 
From the arguments in the proof of Lemma A6, we have
Δ = O_p(|Ω̂_f(k,k) − Ω_f(k,k)| + √(log p/T)),
which implies that max_{1≤i≤p} |ω̂_{ii} − ω_{ii}| = O_p(|Ω̂_f(k,k) − Ω_f(k,k)| + √(log p/T)). We then have
P(ω_{ii}/2 < ω̂_{ii} < 2ω_{ii} for all 1 ≤ i ≤ p) → 1. (A3)
Define T̄_1 = max_{i∈G} Σ_{t=1}^T ξ̂_{it}/√(T ω̂_{ii}) and T̄_0 = max_{i∈G} Σ_{t=1}^T ξ_{it}/√(T ω_{ii}). Note that
|T̄_1 − T̄_0| ≤ max_{1≤i≤p} |Σ_{t=1}^T ξ̂_{it}/√(T ω̂_{ii}) − Σ_{t=1}^T ξ_{it}/√(T ω_{ii})| ≤ max_{1≤i≤p} |Σ_{t=1}^T ξ̂_{it}/√(T ω̂_{ii}) − Σ_{t=1}^T ξ̂_{it}/√(T ω_{ii})| + max_{1≤i≤p} |Σ_{t=1}^T ξ̂_{it}/√(T ω_{ii}) − Σ_{t=1}^T ξ_{it}/√(T ω_{ii})| ≤ C max_{1≤i≤p} |Σ_{t=1}^T ξ̂_{it}/√T| max_{1≤i≤p} |√(ω_{ii}/ω̂_{ii}) − 1| + C′ max_{1≤i≤p} |Σ_{t=1}^T (ξ̂_{it} − ξ_{it})/√T| := I_1 + I_2,
where C, C′ > 0.
On the event {ω_{ii}/2 < ω̂_{ii} < 2ω_{ii} for all 1 ≤ i ≤ p},
max_{1≤i≤p} |√(ω_{ii}/ω̂_{ii}) − 1| ≤ max_{1≤i≤p} |√ω_{ii} − √ω̂_{ii}| max_{1≤i≤p} √(2/ω_{ii}) = max_{1≤i≤p} |ω_{ii} − ω̂_{ii}|/(√ω_{ii} + √ω̂_{ii}) max_{1≤i≤p} √(2/ω_{ii}) ≤ √2 max_{1≤i≤p} |ω_{ii} − ω̂_{ii}| max_{1≤i≤p} (1/ω_{ii}) = O_p(|Ω̂_f(k,k) − Ω_f(k,k)| + √((log p)/T)).
On the other hand,
max_{1≤i≤p} |Σ_{t=1}^T ξ̂_{it}/√T| ≤ max_{1≤i≤p} |Σ_{t=1}^T (ξ̂_{it} − ξ_{it})/√T| + max_{1≤i≤p} |Σ_{t=1}^T ξ_{it}/√T| = O_P(√(log p log T/T) + √(log p)) = O_P(√(log p)).
Therefore, on the above event, I_1 ≤ O_p(√(log p) |Ω̂_f(k,k) − Ω_f(k,k)| + log p/√T). By Lemma A5, we can find ζ_1 such that P(I_1 > ζ_1) = o(1) and ζ_1 √(1 ∨ log(p/ζ_1)) = o(1). Thus, by Lemma A7 and (A3), we have
P(|T̄_1 − T̄_0| > ζ_1) ≤ P(I_1 + I_2 > ζ_1) < ζ_2,
for ζ_1 √(1 ∨ log(p/ζ_1)) = o(1) and ζ_2 = o(1).
Let Δ̄ = max_{1≤j,k≤p} |ω_{jk}/√(ω_{jj} ω_{kk}) − ω̂_{jk}/√(ω̂_{jj} ω̂_{kk})|. Note that
|√(ω_{jj} ω_{kk}) − √(ω̂_{jj} ω̂_{kk})| = |ω_{jj} ω_{kk} − ω̂_{jj} ω̂_{kk}|/(√(ω_{jj} ω_{kk}) + √(ω̂_{jj} ω̂_{kk})).
On the event {ω_{ii}/2 < ω̂_{ii} < 2ω_{ii} for all 1 ≤ i ≤ p}, we have
|ω_{jj} ω_{kk} − ω̂_{jj} ω̂_{kk}|/(√(ω_{jj} ω_{kk}) + √(ω̂_{jj} ω̂_{kk})) ≤ |ω_{jj} ω_{kk} − ω̂_{jj} ω̂_{kk}|/(√(ω_{jj} ω_{kk}) + √(ω_{jj} ω_{kk}/4)) ≤ (2/3) |ω_{jj} ω_{kk} − ω̂_{jj} ω̂_{kk}| max_{1≤j≤p} (1/ω_{jj}),
which implies that
max_{1≤j,k≤p} |√(ω_{jj} ω_{kk})/√(ω̂_{jj} ω̂_{kk}) − 1| ≤ max_{1≤j,k≤p} |√(ω_{jj} ω_{kk}) − √(ω̂_{jj} ω̂_{kk})| max_{1≤j≤p} (2/ω_{jj}) ≤ (4/3) max_{1≤j,k≤p} |ω_{jj} ω_{kk} − ω̂_{jj} ω̂_{kk}| max_{1≤j≤p} (1/ω_{jj})^2 = O_p(|Ω̂_f(k,k) − Ω_f(k,k)| + √((log p)/T)).
Choosing ϑ = 1/(α_T (log p)^2), we can show that P(Δ̄ > ϑ) = o(1). The rest of the proof is similar to that of Theorem 1; we omit the details. ☐
Proof of Theorem 3. 
Let Z = (Z_1, …, Z_p)′ ~ N(0, Θ). Following the arguments in the proof of Theorem 2, we can show that the distribution of max_{i∈G} √T |b̂_{ik} − b_{ik}|/√ω̂_{ii} can be approximated by that of max_{i∈G} |Z_i|. Under Assumption 5, by Lemma 6 of [15], we have for any x ∈ ℝ, as |G| → ∞,
P(max_{i∈G} |Z_i|^2 − 2 log(|G|) + log log(|G|) ≤ x) → F(x) := exp(−(1/√π) exp(−x/2)).
It implies that
P(max_{i∈G} T |b̂_{ik} − b_{ik}|^2/ω̂_{ii} ≤ 2 log(|G|) − log log(|G|)/2) → 1.
The bootstrap consistency result implies that
|(c*_{W_{T,k}}(α))^2 − 2 log(|G|) + log log(|G|) − q_α| = o_P(1), (A5)
where q_α is the (1 − α)-quantile of F(x). Consider any i ∈ G such that |b_{ik}^{null} − b_{ik}|/√ω_{ii} > (2 + ε_0) √(log|G|/T). Using the inequality 2 a_1 a_2 ≤ δ^{−1} a_1^2 + δ a_2^2 for any δ > 0, we have
T |b_{ik}^{null} − b_{ik}|^2/ω̂_{ii} ≤ (1 + δ^{−1}) T |b̂_{ik} − b_{ik}|^2/ω̂_{ii} + (1 + δ) T |b̂_{ik} − b_{ik}^{null}|^2/ω̂_{ii}, (A6)
where T |b̂_{ik} − b_{ik}|^2/ω̂_{ii} = o_p(log|G|) as i is fixed and |G| grows. From the proof of Theorem 2, we know the difference between T |b_{ik}^{null} − b_{ik}|^2/ω̂_{ii} and T |b_{ik}^{null} − b_{ik}|^2/ω_{ii} is asymptotically negligible. Thus, by (A6) and the fact that B_k ∈ U_G(2 + ε_0), we have
max_{i∈G} T |b̂_{ik} − b_{ik}^{null}|^2/ω̂_{ii} ≥ (1/(1 + δ)) (2 + ε_0)^2 log(|G|) − o_p(log|G|). (A7)
The conclusion thus follows from (A7) and (A5) provided that δ is small enough. ☐
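The Gumbel-type limit F(x) in the proof sets in quickly. A small Monte Carlo (iid standard normal coordinates and sizes of our own choosing, purely for illustration) compares the empirical probability P(max_i Z_i^2 − 2 log|G| + log log|G| ≤ 0) with F(0):

```python
import math
import random

random.seed(2)
G, N = 500, 2000  # |G| coordinates per replication, N replications (illustrative)

def F(x):
    # Limiting distribution appearing in the proof of Theorem 3.
    return math.exp(-math.exp(-x / 2.0) / math.sqrt(math.pi))

center = 2.0 * math.log(G) - math.log(math.log(G))
hits = sum(
    1
    for _ in range(N)
    if max(random.gauss(0.0, 1.0) ** 2 for _ in range(G)) <= center
)
emp = hits / N
print(round(emp, 3), round(F(0.0), 3))  # the two values should be close
```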
Proof of Theorem 4. 
Without loss of generality, we set G* = M. Define
Γ̂ = (1/T) Σ_{t=1}^T û_{it} û_{jt} f_t f_t′ and Γ = σ_{ij} Σ_f.
Let
Δ* := max_{i,j,m,n} |(Ω̂_f v_m)′ Γ̂ (Ω̂_f v_n) − σ_{ij} Ω_f(m,n)|
denote the maximum discrepancy between the empirical and population quantities. By the triangle inequality, we have
|(Ω̂_f v_m)′ Γ̂ (Ω̂_f v_n) − σ_{ij} Ω_f(m,n)| = |(Ω̂_f v_m)′ Γ̂ (Ω̂_f v_n) − (Ω_f v_m)′ Γ (Ω_f v_n)| ≤ |(Ω̂_f v_m)′ (Γ̂ − Γ) (Ω̂_f v_n)| + |(Ω̂_f v_m − Ω_f v_m)′ Γ (Ω̂_f v_n)| + |(Ω_f v_m)′ Γ (Ω̂_f − Ω_f) v_n| := I + II + III.
Note that ‖Ω̂_f‖ = O_p(1); by Lemma A9 (ii), we have
I ≤ ‖Ω̂_f v_m‖^2 ‖Γ̂ − Γ‖ = O_p(√(log p/T)).
By Lemma A2 (iii) and ‖Γ‖ = O(1), we have
II ≤ ‖Ω̂_f v_m − Ω_f v_m‖ ‖Γ‖ ‖Ω̂_f v_n‖ = O_p(√(log T/T)).
Since ‖Ω_f‖ = O(1), we have
III ≤ ‖Ω_f v_m‖ ‖Γ‖ ‖Ω̂_f − Ω_f‖ ‖v_n‖ = O_p(√(log T/T)).
The above results hold uniformly over i, j, m, n; thus, we have
Δ* = O_p(√(log p/T) + √(log T/T)).
The rest of the proof is similar to that in the proof of Theorem 2. We omit the details. ☐
Proof of Theorem 5. 
For a proof, see the proof of Theorem 3. ☐
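Theorems 3 and 5 underlie the FWER-controlling procedures (NST, ST) compared in Tables 2 and 3 against the Benjamini–Hochberg (BH) step-up rule of [20]. For reference, a minimal sketch of the BH rule on a vector of p-values (the p-values below are illustrative, not taken from the paper's data):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up rule: find the largest k with
    p_(k) <= k * alpha / m and reject the k hypotheses with the
    smallest p-values. Returns indices of rejected hypotheses."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_star = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank * alpha / m:
            k_star = rank
    return sorted(order[:k_star])

# Illustrative p-values: thresholds k * 0.05 / 5 are 0.01, 0.02, 0.03, 0.04, 0.05.
rejected = benjamini_hochberg([0.001, 0.012, 0.030, 0.040, 0.200])
print(rejected)  # -> [0, 1, 2, 3]
```

Unlike the bootstrap FWER procedures, BH controls the expected proportion of false rejections, which is why its empirical power in Tables 2 and 3 is reported alongside both FWER and FDR.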

References

  1. Bai, J.; Liao, Y. Efficient estimation of approximate factor models via penalized maximum likelihood. J. Econom. 2016, 191, 1–18.
  2. Heinemann, A. Efficient estimation of factor models with time and cross-sectional dependence. J. Appl. Econom. 2017, 32, 1107–1122.
  3. Fan, J.; Fan, Y.; Lv, J. High dimensional covariance matrix estimation using a factor model. J. Econom. 2008, 147, 186–197.
  4. Fan, J.; Liao, Y.; Mincheva, M. High-dimensional covariance matrix estimation in approximate factor models. Ann. Stat. 2011, 39, 3320–3356.
  5. Fan, J.; Liao, Y.; Mincheva, M. Large covariance estimation by thresholding principal orthogonal complements. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2013, 75, 603–680.
  6. Fan, J.; Wang, W.; Zhong, Y. Robust covariance estimation for approximate factor models. J. Econom. 2019, 208, 5–22.
  7. Dickhaus, T.; Pauly, M. Simultaneous statistical inference in dynamic factor models. In Time Series Analysis and Forecasting; Springer: Berlin, Germany, 2016; pp. 27–45.
  8. Dickhaus, T.; Sirotko-Sibirskaya, N. Simultaneous statistical inference in dynamic factor models: Chi-square approximation and model-based bootstrap. Comput. Stat. Data Anal. 2019, 129, 30–46.
  9. Lucas, J.; Carvalho, C.; Wang, Q.; Bild, A.; Nevins, J.R.; West, M. Sparse statistical modelling in gene expression genomics. Bayesian Inference Gene Expr. Proteom. 2006, 1, 155–176.
  10. Carvalho, C.M.; Chang, J.; Lucas, J.E.; Nevins, J.R.; Wang, Q.; West, M. High-dimensional sparse factor modeling: Applications in gene expression genomics. J. Am. Stat. Assoc. 2008, 103, 1438–1456.
  11. Reis, R.; Watson, M.W. Relative goods’ prices, pure inflation, and the Phillips correlation. Am. Econ. J. Macroecon. 2010, 2, 128–157.
  12. Amengual, D.; Repetto, L. Testing a Large Number of Hypotheses in Approximate Factor Models; Technical Report; CEMFI: Madrid, Spain, 2014.
  13. Candès, E.; Tao, T. The Dantzig selector: Statistical estimation when p is much larger than n. Ann. Stat. 2007, 35, 2313–2351.
  14. Cai, T.; Liu, W.; Xia, Y. Two-sample covariance matrix testing and support recovery in high-dimensional and sparse settings. J. Am. Stat. Assoc. 2013, 108, 265–277.
  15. Cai, T.T.; Liu, W.; Xia, Y. Two-sample test of high dimensional means under dependence. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2014, 76, 349–372.
  16. Zhang, X.; Cheng, G. Simultaneous inference for high-dimensional linear models. J. Am. Stat. Assoc. 2017, 112, 757–768.
  17. Romano, J.P.; Wolf, M. Exact and approximate stepdown methods for multiple hypothesis testing. J. Am. Stat. Assoc. 2005, 100, 94–108.
  18. Zhu, Y.; Yu, Z.; Cheng, G. High dimensional inference in partially linear models. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), Naha, Okinawa, Japan, 16–18 April 2019; Volume 89, pp. 2760–2769.
  19. Zhang, X.; Cheng, G. Bootstrapping high dimensional time series. arXiv 2014, arXiv:1406.1037.
  20. Benjamini, Y.; Hochberg, Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B (Methodol.) 1995, 57, 289–300.
  21. Fama, E.F.; French, K.R. The cross-section of expected stock returns. J. Financ. 1992, 47, 427–465.
  22. Bickel, P.; Levina, E. Some theory for Fisher’s linear discriminant function, “naive Bayes”, and some alternatives when there are many more variables than observations. Bernoulli 2004, 10, 989–1010.
  23. Merlevède, F.; Peligrad, M.; Rio, E. A Bernstein type inequality and moderate deviations for weakly dependent sequences. Probab. Theory Relat. Fields 2011, 151, 435–474.
  24. Kosorok, M.R. Introduction to Empirical Processes and Semiparametric Inference; Springer: New York, NY, USA, 2008.
  25. Chernozhukov, V.; Chetverikov, D.; Kato, K. Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors. Ann. Stat. 2013, 41, 2786–2819.
  26. Ingster, Y.I.; Tsybakov, A.B.; Verzelen, N. Detection boundary in sparse regression. Electron. J. Stat. 2010, 4, 1476–1526.
Figure 1. Empirical powers of the NST, ST, and EX methods. The figures in the left panels are based on Model 1, while those in the right panels are for Model 2. The red solid line corresponds to the nominal level. (a) p = 10 , (b) p = 10 , (c) p = 200 , (d) p = 200 , (e) p = 600 , (f) p = 600 .
Table 1. Empirical sizes of tests, α = 0.05 , T = 400 , and 500 replications.
         p = 50   p = 100   p = 200   p = 400   p = 600
Model 1
NST      0.076    0.060     0.058     0.050     0.058
ST       0.074    0.064     0.064     0.058     0.078
EX       0.046    0.046     0.038     0.046     0.058
Model 2
NST      0.050    0.052     0.056     0.060     0.038
ST       0.070    0.058     0.064     0.068     0.048
EX       0.038    0.030     0.024     0.018     0.016
Table 2. Empirical family-wise error rate (FWER) and false discovery rate (FDR) with power in the brackets of multiple testing based on Model 1, α = 0.05 , and 500 replications.
T     s_0   Method      p = 50          p = 200         p = 500         p = 600
200   3     NST-FWER    0.058 (0.551)   0.062 (0.405)   0.048 (0.309)   0.056 (0.291)
            ST-FWER     0.074 (0.554)   0.082 (0.431)   0.086 (0.337)   0.090 (0.324)
            BH-FWER     0.054 (0.528)   0.070 (0.409)   0.074 (0.319)   0.068 (0.300)
            BH-FDR      0.061 (0.635)   0.064 (0.470)   0.086 (0.380)   0.069 (0.353)
      15    NST-FWER    0.056 (0.569)   0.050 (0.412)   0.040 (0.306)   0.046 (0.303)
            ST-FWER     0.066 (0.583)   0.086 (0.430)   0.074 (0.334)   0.084 (0.327)
            BH-FWER     0.060 (0.561)   0.066 (0.410)   0.056 (0.310)   0.068 (0.309)
            BH-FDR      0.043 (0.810)   0.064 (0.655)   0.060 (0.518)   0.061 (0.509)
400   3     NST-FWER    0.050 (0.935)   0.058 (0.889)   0.062 (0.839)   0.058 (0.808)
            ST-FWER     0.070 (0.937)   0.062 (0.885)   0.078 (0.842)   0.066 (0.813)
            BH-FWER     0.052 (0.931)   0.054 (0.873)   0.062 (0.834)   0.052 (0.795)
            BH-FDR      0.057 (0.957)   0.056 (0.924)   0.064 (0.889)   0.068 (0.863)
      15    NST-FWER    0.058 (0.947)   0.054 (0.881)   0.040 (0.819)   0.070 (0.815)
            ST-FWER     0.052 (0.946)   0.066 (0.881)   0.058 (0.825)   0.084 (0.882)
            BH-FWER     0.050 (0.942)   0.056 (0.871)   0.050 (0.809)   0.060 (0.806)
            BH-FDR      0.035 (0.989)   0.052 (0.968)   0.056 (0.946)   0.059 (0.941)
Table 3. Empirical FWER and FDR with power in the brackets of multiple testing based on Model 2, α = 0.05 , and 500 replications.
T     s_0   Method      p = 50          p = 200         p = 500         p = 600
200   3     NST-FWER    0.044 (0.805)   0.052 (0.692)   0.066 (0.622)   0.056 (0.609)
            ST-FWER     0.058 (0.807)   0.066 (0.701)   0.084 (0.638)   0.066 (0.621)
            BH-FWER     0.030 (0.759)   0.042 (0.620)   0.024 (0.517)   0.024 (0.505)
            BH-FDR      0.039 (0.819)   0.046 (0.691)   0.038 (0.592)   0.030 (0.570)
      15    NST-FWER    0.042 (0.805)   0.060 (0.697)   0.058 (0.626)   0.048 (0.618)
            ST-FWER     0.050 (0.809)   0.080 (0.708)   0.080 (0.637)   0.072 (0.630)
            BH-FWER     0.028 (0.757)   0.042 (0.621)   0.034 (0.530)   0.038 (0.519)
            BH-FDR      0.035 (0.922)   0.044 (0.822)   0.046 (0.746)   0.040 (0.717)
400   3     NST-FWER    0.060 (0.989)   0.052 (0.985)   0.052 (0.971)   0.050 (0.970)
            ST-FWER     0.064 (0.989)   0.054 (0.985)   0.068 (0.970)   0.072 (0.973)
            BH-FWER     0.046 (0.983)   0.022 (0.975)   0.026 (0.951)   0.024 (0.945)
            BH-FDR      0.045 (0.995)   0.034 (0.990)   0.040 (0.972)   0.035 (0.973)
      15    NST-FWER    0.066 (0.992)   0.072 (0.986)   0.056 (0.975)   0.056 (0.975)
            ST-FWER     0.072 (0.992)   0.076 (0.986)   0.056 (0.975)   0.064 (0.975)
            BH-FWER     0.046 (0.988)   0.036 (0.973)   0.024 (0.952)   0.022 (0.950)
            BH-FDR      0.043 (0.999)   0.042 (0.998)   0.031 (0.993)   0.044 (0.992)
Table 4. Results of sparse testing.
                       β = 10   β = 30   β = 50   β = 70   β = 90
1st loading            R        R        R        R        R
2nd loading            A        A        A        A        A
3rd loading            A        A        A        A        R
2nd and 3rd loading    A        A        A        R        R
Note: “A” means accepting the null hypothesis; “R” denotes rejecting the null hypothesis.
Wang, Y.; Guo, X. Simultaneous Inference for High-Dimensional Approximate Factor Model. Entropy 2020, 22, 1258. https://0-doi-org.brum.beds.ac.uk/10.3390/e22111258
