Article

Testing for the Presence of the Leverage Effect without Estimation

1 Department of Mathematics, Faculty of Science and Technology, University of Macau, Avenida da Universidade, Taipa, Macau SAR, China
2 Zhuhai-UM Science and Technology Research Institute, Zhuhai 519072, China
Submission received: 22 June 2022 / Revised: 9 July 2022 / Accepted: 12 July 2022 / Published: 19 July 2022
(This article belongs to the Section Financial Mathematics)

Abstract

Problem: The leverage effect plays an important role in finance, yet a statistical test for its presence is still lacking. Approach: In this paper, using high-frequency data, we propose a novel procedure to test whether the driving Brownian motion of an Itô semi-martingale is correlated with its volatility (referred to as the leverage effect in financial econometrics) over a long time period. The asymptotic setting is based on observations within a long time interval with the mesh of the observation grid shrinking to zero. We construct a test statistic by forming a sequence of Studentized statistics, each asymptotically normal over a block of fixed time span, and then aggregating the sequence over the whole data set. Result: The asymptotic behaviour of the Studentized statistics is obtained from the cubic variation of the underlying semi-martingale. We establish the asymptotic distribution of the proposed test statistic under the null hypothesis that the leverage effect is absent, and we show that the test has an asymptotic power of one against the alternative hypothesis that the leverage effect is present. Implications: Extensive simulation studies assess the finite sample performance of the test statistics, and the results show a satisfactory performance of the test. Finally, we apply the proposed test procedure to a dataset of the SP500 index. The null hypothesis of the absence of the leverage effect is rejected for most of the time period, which provides strong evidence that the leverage effect is a necessary ingredient in modelling high-frequency data.
MSC:
60F05; 60G5; 37M10; 62G20

1. Introduction

The leverage effect in financial economics refers to the correlation between the driving process of an asset price and its volatility. It is widely accepted that the leverage effect is a common stylized fact of financial data, such as skewed return distributions, volatility asymmetry, etc. Moreover, empirical analyses have provided evidence that the returns of equities are usually negatively correlated with near-future volatility. Ang and Chen [1] found that volatility rises when stock prices go down and decreases when stock prices go up; see also Black [2], Christie [3].
The existence of the leverage effect was first explained from an economic point of view. For example, Modigliani and Miller [4] linked the leverage effect to the financial leverage of a firm, namely, the debt–equity ratio, which reflects a firm's capital structure. The idea of financial leverage explains well the negative leverage effect usually found in financial data. Precisely, with an increase in the stock price, the value of equity increases more than the value of debt because the claims of the debt holders are limited in a short time interval; thus, the firm's leverage (debt-to-equity ratio) decreases; hence, the firm becomes less risky, which results in a drop in volatility. The same logic applies to falling stock prices, which should lead to an increase in future volatility. However, financial leverage is not the only driver of the price–volatility relation; there is also evidence that the leverage effect observed in financial time series is not fully explained by a firm's leverage. Modigliani and Miller [4] found a strong leverage effect for falling stock prices but a very weak, even nonexistent, leverage effect for positive returns; Hens and Steude [5] and Hasanhodzic and Lo [6] found the leverage effect in many financial markets even though the underlying asset did not exhibit any financial leverage at all.
The leverage effect characterizes the correlation between the latent price process and the volatility process. Since the volatility process is unobservable, it has to be estimated from the observed data. For low-frequency data, this is hard to do without assuming some stationarity condition; for high-frequency data, the volatility can be estimated more accurately. Therefore, the leverage effect can be measured more accurately with high-frequency data than with low-frequency data. Recently, with the help of widely available high-frequency financial data, the leverage effect has been quantitatively modelled and its statistical estimation has been investigated. For example, Aït-Sahalia et al. [7] found that a natural estimator of the leverage effect using high-frequency data yields a "zero" estimate whatever the true value of the leverage effect is, based on Heston's stochastic volatility model, and they constructed a bias-corrected consistent estimator; Wang and Mykland [8] considered the estimation of the leverage effect in the presence of microstructure noise; Aït-Sahalia et al. [9] further split the leverage effect into two parts, the continuous and the discontinuous leverage effect, which are the quadratic co-variations of the continuous diffusion parts and the jump parts of the volatility process and the underlying-price process, respectively. Both leverage effects were consistently estimated, and empirical evidence of the existence of both kinds was provided. Kalnina and Xiu [10] proposed a nonparametric, model-free estimator of the leverage effect. Their study provided two estimators: the first uses only the discretely observed stock-price data, as usual, while the second also employs VIX data as observations of the volatility process to improve estimation efficiency.
Despite the economic rationale and the non-zero measured leverage effect in empirical analyses of most financial markets and equities, there is as yet no theoretically valid test of whether it indeed exists at a given nominal level. To test for a non-zero leverage effect, a natural approach is to estimate the leverage effect first and then use the asymptotic normality (Studentized version) of the estimator to construct confidence intervals. As the volatility is unobservable, we must estimate it in advance and then, based on these "estimated data" of the volatility and the observed price data, compute the estimator of the leverage effect, as in the literature above. This procedure, although theoretically feasible, suffers a very slow convergence rate, the price paid for the two estimation steps performed before the test.
Mathematically speaking, we consider a continuous-time semi-martingale:
$$dX_t = b_t\,dt + \sigma_t\,dW_t,$$
where W is a standard Brownian motion defined on an appropriate probability space, and b and σ are two processes satisfying some regularity conditions (specified later), so that the stochastic differential equation is well-defined. The quantity of interest is the correlation between the two processes σ and X, precisely, the quadratic co-variation $\langle X,\sigma^2\rangle_t$. In this paper, we provide a simple and easy-to-implement testing procedure in which we do not need to estimate the leverage effect. Precisely, we assume that the volatility process itself follows a continuous semi-martingale:
$$d\sigma_t = a_t\,dt + L_t\,dW_t + H_t\,dB_t,$$
where W is the driving Brownian motion of X and B is another standard Brownian motion, independent of W. Under this framework, $L_t\equiv0$ indicates the absence of the leverage effect. Therefore, it is necessary to find statistics whose limit is an explicit function of L. For instance, one could first estimate L by the method of either Wang and Mykland [8] or Aït-Sahalia et al. [9], and then build the test statistics from a related central limit theorem. Here, instead of estimating first and testing afterwards, we create a direct test procedure. Kinnebrock and Podolskij [11] derived the following limiting result for the cubic variation of a semi-martingale:
$$\frac{1}{\Delta_n}\sum_{i=1}^{n}(\Delta_i^nX)^3 \;\xrightarrow{\;\mathcal S\;}\; 3\int_0^T L_s\sigma_s^2\,ds + 3\int_0^T \sigma_s^2\,dX_s + \sqrt{6}\int_0^T|\sigma_s|^3\,d\widetilde W_s.$$
From this limiting result, if $L\equiv0$ (the null hypothesis), the limit on the right-hand side is a conditionally normal random variable. Moreover, the "conditional mean" $3\int_0^T\sigma_s^2\,dX_s$ can be consistently estimated, and so can the conditional variance. That is, after Studentization, the statistic is asymptotically standard normal. Our analysis shows that this de-biased and Studentized statistic, although it indeed performs satisfactorily under the null hypothesis, does not provide unit power under the alternative hypothesis. To improve the power, we consider high-frequency data over a long time period $[0,T]$, and replicate the procedure in each time interval $[k-1,k]$ for $k=1,\dots,T$, to obtain a sequence of asymptotically independent standard normal random variables. The final test statistic is constructed globally from these random variables. We show that this overall test procedure provides unit power under the alternative hypothesis, provided that T is large. We establish the related asymptotic theory under both the null and alternative hypotheses. Simulation studies assess the finite sample performance of the proposed test and show that it delivers satisfactory finite sample size and power. Finally, we implement the procedure to test for the presence of the leverage effect in modelling the SP500 index. The high-frequency dataset consists of observations of the SP500 index over the years 2000–2019, a total of 240 months. We consider the null hypothesis of the absence of the monthly leverage effect. The results show that the null hypothesis of a zero monthly leverage effect is rejected for most months. This supports the claim that the leverage effect is a necessary component in modelling high-frequency data.
The remainder of this paper is organized as follows. Section 2 contains the model setup and assumptions. In Section 3, we introduce the statistics and derive the related central limit theorem, based on which the test procedure is presented. Section 4 presents the simulation studies and Section 5 the application to a real high-frequency dataset. All technical proofs are collected in Appendix A.

2. Setting and Assumptions

Let $\{X_t, t\ge0\}$ denote the efficient log-price process defined on a filtered probability space $(\Omega,\mathcal F,(\mathcal F_t)_{t\ge0},P)$, of the form:
$$dX_t = b_t\,dt + \sigma_t\,dW_t,$$
where $b_t$ is a locally bounded and $\mathcal F_t$-progressively measurable real-valued process, $\sigma_t$ is a càdlàg $\mathcal F_t$-adapted real-valued process, and $W_t$ is an $\mathcal F_t$-adapted Wiener process. This is the most popular model for log-price processes due to no-arbitrage considerations; see Delbaen and Schachermayer [12,13,14]. Some further literature on stochastic volatility models can be found in Stojkoski et al. [15], Zheng and Wang [16], Bouchaud and Potters [17], and Fouque et al. [18], and references therein.
We assume that the volatility process $\{\sigma_t, t\ge0\}$ is of the form:
$$d\sigma_t = a_t\,dt + L_t\,dW_t + H_t\,dB_t,$$
where a, L and H are locally bounded and progressively measurable with respect to $\mathcal F_t$, and B is another $\mathcal F_t$-adapted Wiener process, independent of W.
Assumption 1.
The system satisfies the two equations above, where X and σ are continuous processes. All the coefficients $b_t$, $a_t$, $L_t$ and $H_t$ are locally bounded in absolute value.
The problem of interest is to test:
$$H_0:\ L_t=0,\ \mu\text{-a.s.}\qquad\text{vs.}\qquad H_1:\ L_t\neq0,\ \mu\text{-a.s.},$$
where μ denotes the Lebesgue measure.
Remark 1.
Our test is equivalent to testing the presence of the leverage effect in Wang and Mykland [8]. They defined the contemporaneous leverage effect as
$$\langle X, F(\sigma^2)\rangle_T = 2\int_0^T F'(\sigma_t^2)\,\sigma_t^2\,L_t\,dt,$$
where F is a twice continuously differentiable and monotonic function on $(0,\infty)$. In a stochastic volatility model, $L_t$ can generally be written as $\delta_t\rho_t$, where $\rho_t$ is the correlation between the driving Wiener process of $X_t$ and that of $\sigma_t$. Thus, $\int_0^T\sigma_t^2L_t\,dt=0$ is essentially equivalent to $\rho_t\equiv0$ and, therefore, to $\int_0^TF'(\sigma_t^2)\sigma_t^2L_t\,dt=0$ for any such F in Wang and Mykland [8].

3. Test

Over the time interval $[0,T]$, we assume that the process X is observed discretely at the time points $i\Delta_n$, $i=0,\dots,n$, where $n=\lfloor T/\Delta_n\rfloor$. Let $\Delta_i^nX := X_{i\Delta_n}-X_{(i-1)\Delta_n}$. We start from an auxiliary result of Kinnebrock and Podolskij [11], which gives the asymptotic behaviour of the cubic power variation of X:
$$\frac{1}{\Delta_n}\sum_{i=1}^{n}(\Delta_i^nX)^3 \;\xrightarrow{\;\mathcal S\;}\; 3\int_0^T L_s\sigma_s^2\,ds + 3\int_0^T \sigma_s^2\,dX_s + \sqrt{6}\int_0^T|\sigma_s|^3\,d\widetilde W_s,$$
where $\widetilde W$ is a standard Brownian motion defined on an extension of the original probability space, independent of W and B, and $\xrightarrow{\mathcal S}$ denotes stable convergence. (A sequence of random variables $\xi_n$, $n\ge1$, is said to converge stably in law to ξ, defined on an appropriate extension of the original probability space $(\Omega,\mathcal F,P)$, if, for any $\mathcal F$-measurable bounded random variable υ and any bounded continuous function f, we have the convergence
$$E[\upsilon f(\xi_n)] \to E[\upsilon f(\xi)],$$
where E is the expectation defined on the extended probability space. Stable convergence is slightly stronger than weak convergence (take $\upsilon=1$); we write it as $\xi_n\xrightarrow{\mathcal L(s)}\xi$. We refer to Renyi [19] and Aldous and Eagleson [20] for a more detailed discussion of stable convergence. The extension of stable convergence to stochastic processes is discussed in Jacod and Shiryayev [21] (Chap. IX.7).) The first two terms in the limit come from the process X. Under $H_0$, the first term vanishes; if we could consistently estimate the second term, then only a mixed normal variable would remain in the limit. We now illustrate this standardization procedure.
In the following, $\xrightarrow{\mathcal L}$ denotes convergence in distribution if the limit is a random variable and weak convergence if the limit is a distribution, and $\xrightarrow{P}$ denotes convergence in probability.
Observing the convergence above, under the null hypothesis $H_0$, namely $L_t\equiv0$, we have
$$\frac{1}{\Delta_n}\sum_{i=1}^{n}(\Delta_i^nX)^3 - 3\int_0^T\sigma_s^2\,dX_s \;\xrightarrow{\;\mathcal S\;}\; \sqrt{6}\int_0^T|\sigma_s|^3\,d\widetilde W_s.$$
If we have a consistent estimator for the “bias” term 3 0 T σ s 2 d X s , then the quantity on the left-hand side will behave asymptotically as a mixed normal random variable. This idea results in the following proposition. We define the local volatility estimator as
$$\hat\sigma_{i\Delta_n}^2 := \begin{cases}\dfrac{1}{k_n\Delta_n}\displaystyle\sum_{j=i-k_n+1}^{i}(\Delta_j^nX)^2, & i=k_n,\dots,n,\\[2mm] \dfrac{1}{k_n\Delta_n}\displaystyle\sum_{j=1}^{k_n}(\Delta_j^nX)^2, & i=0,\dots,k_n-1,\end{cases}$$
where we take $k_n=\lfloor\theta\Delta_n^{-1/2}\rfloor$ for some $\theta>0$.
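As a concrete illustration, the rolling estimator above can be sketched in a few lines of NumPy. The function name and array layout are our own conventions, not the paper's; only the windowing (previous $k_n$ squared increments, with the first $k_n$ increments reused for the initial indices) follows the definition above.

```python
import numpy as np

def local_volatility(dx, k_n, dt):
    """Rolling spot-volatility estimates sigma_hat^2_{i*dt}, i = 0..n.

    dx[j-1] holds the increment Delta_j^n X; for i >= k_n the estimate
    averages the previous k_n squared increments, while for i < k_n the
    first k_n increments are reused, as in the piecewise definition.
    """
    n = len(dx)
    csum = np.concatenate(([0.0], np.cumsum(dx ** 2)))
    sig2 = np.empty(n + 1)
    for i in range(n + 1):
        if i >= k_n:
            sig2[i] = (csum[i] - csum[i - k_n]) / (k_n * dt)
        else:
            sig2[i] = csum[k_n] / (k_n * dt)
    return sig2
```

For constant-volatility data the estimates fluctuate around the true $\sigma^2$, with the usual averaging error of order $1/\sqrt{k_n}$.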
Proposition 1.
Suppose that Assumption 1 holds. If $L_t\equiv0$, then we have
$$T_n = \frac{\sum_{i=1}^n(\Delta_i^nX)^3 - 3\Delta_n\sum_{i=1}^n\hat\sigma^2_{(i-1)\Delta_n}\,\Delta_i^nX}{\sqrt{\frac{2}{5}\sum_{i=1}^n(\Delta_i^nX)^6}} \;\xrightarrow{\;\mathcal L\;}\; Z,$$
where Z is a standard normal variable.
Here, we apply an appropriate Studentization to obtain a standard normal limit, which is necessary for a valid test procedure. The denominator is simply an estimate of the standard deviation of the normal variable in the numerator.
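For intuition, the Studentized statistic of Proposition 1 can be computed from a single block of increments as follows. This is a sketch under our own conventions (left-endpoint rolling volatility window, function names); it is not the authors' code.

```python
import numpy as np

def t_stat(dx, k_n, dt):
    """Studentized cubic-variation statistic T_n for one block of increments.

    Numerator: sum(dx^3) minus the estimated bias 3*dt*sum(sigma_hat^2 * dx),
    with sigma_hat^2 taken at the left endpoint of each increment;
    denominator: sqrt((2/5) * sum(dx^6)).
    """
    n = len(dx)
    c = np.concatenate(([0.0], np.cumsum(dx ** 2)))
    sig2 = np.empty(n)
    for i in range(n):  # sigma_hat^2 at time i*dt, used for increment dx[i]
        if i >= k_n:
            sig2[i] = (c[i] - c[i - k_n]) / (k_n * dt)
        else:
            sig2[i] = c[k_n] / (k_n * dt)
    num = np.sum(dx ** 3) - 3.0 * dt * np.sum(sig2 * dx)
    return num / np.sqrt(0.4 * np.sum(dx ** 6))
```

Simulated under a constant volatility (so $L\equiv0$), repeated draws of this statistic are approximately standard normal.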
Remark 2.
The asymptotic result above provides a guarantee on the type I error. However, $T_n$ is not a consistent test statistic. Under the alternative hypothesis $L_t\neq0$, we have
$$T_n \;\xrightarrow{\;\mathcal L\;}\; \frac{3\int_0^TL_s\sigma_s^2\,ds}{\sqrt{6\int_0^T\sigma_s^6\,ds}} + \frac{\int_0^T|\sigma_s|^3\,d\widetilde W_s}{\sqrt{\int_0^T\sigma_s^6\,ds}}.$$
Here, the limit on the right-hand side is a mixed normal random variable with conditional mean
$$\frac{3\int_0^TL_s\sigma_s^2\,ds}{\sqrt{6\int_0^T\sigma_s^6\,ds}}.$$
Therefore, the asymptotic power of this test is not 1. The reason is that we only use data from a fixed period $[0,T]$; the bias under the alternative is not large enough to push the power to 1. A natural way to increase the power is therefore to use data from a long time period, with $T\to\infty$.
Following this idea, we may overcome the power issue by considering data from a long time span ($T\to\infty$). We denote by $m=\lfloor\vartheta/\Delta_n\rfloor$ the sample size of one period, for some $\vartheta>0$, and define
$$T_n(k) := \frac{\sum_{i=1}^m\big(\Delta_{i+(k-1)m}^nX\big)^3 - 3\Delta_n\sum_{i=1}^m\hat\sigma^2_{(i-1+(k-1)m)\Delta_n}\,\Delta_{i+(k-1)m}^nX}{\Big(\frac{2}{5}\sum_{i=1}^m\big(\Delta_{i+(k-1)m}^nX\big)^6\Big)^{1/2}},$$
for $k=1,\dots,K_n$, with $K_n:=\lfloor n/m\rfloor$.
Theorem 1.
Suppose X and σ satisfy Assumption 1 and $L_t\equiv0$. If $K_n\to\infty$ and $K_n\Delta_n^{1/4}\to0$ as $\Delta_n\to0$, then, for a continuous function g with bounded first derivative, $\|g'\|_\infty<\infty$, and with $\rho_g=E[g(Z)]<\infty$ and $\nu_g=V[g(Z)]<\infty$, we have
$$\mathcal T_n := \frac{1}{\sqrt{\nu_gK_n}}\sum_{k=1}^{K_n}\big(g(T_n(k))-\rho_g\big) \;\xrightarrow{\;\mathcal L\;}\; Z,$$
where Z denotes a standard normal variable.
Now, under the alternative hypothesis, since the new test statistic $\mathcal T_n$ aggregates $K_n$ conditionally independent, non-central normal random variables, it tends to infinity with $K_n$. This is demonstrated by the following corollary.
Corollary 1.
Suppose Assumption 1 holds. Denote by $z_\alpha$ the α-quantile of the standard normal distribution. Let g be a continuous function with bounded first derivative, $\|g'\|_\infty<\infty$, and with $\rho_g=E[g(Z)]<\infty$ and $\nu_g=V[g(Z)]<\infty$. If $K_n\to\infty$ and $K_n\Delta_n^{1/4}\to0$ as $\Delta_n\to0$, then we have
$$P\big(|\mathcal T_n|\le z_{1-\alpha/2}\big) \to \begin{cases}1-\alpha, & \text{under } H_0,\\ 0, & \text{under } H_1.\end{cases}$$
Under the null, the limiting distribution of $\mathcal T_n$ coincides with that of a zero-mean normal random variable with unit variance while, under the alternative, the test statistic diverges in probability, thus delivering unit power.
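The block construction and the aggregation can be sketched end to end; the function names, the within-block volatility windows, and the identity default for g are our own choices, not the paper's implementation.

```python
import numpy as np

def block_t_stats(dx, m, k_n, dt):
    """Per-block Studentized statistics T_n(k), k = 1..K_n, K_n = len(dx)//m."""
    K = len(dx) // m
    tk = np.empty(K)
    for k in range(K):
        b = dx[k * m:(k + 1) * m]
        c = np.concatenate(([0.0], np.cumsum(b ** 2)))
        # left-endpoint rolling volatility within the block
        sig2 = np.array([(c[i] - c[i - k_n]) / (k_n * dt) if i >= k_n
                         else c[k_n] / (k_n * dt) for i in range(m)])
        num = np.sum(b ** 3) - 3.0 * dt * np.sum(sig2 * b)
        tk[k] = num / np.sqrt(0.4 * np.sum(b ** 6))
    return tk

def overall_stat(tk, g=lambda x: x, rho_g=0.0, nu_g=1.0):
    """Aggregated statistic of Theorem 1; defaults correspond to g(x) = x."""
    return np.sum(g(tk) - rho_g) / np.sqrt(nu_g * len(tk))
```

Under a constant volatility (no leverage), the aggregated statistic is approximately standard normal; under a genuine leverage effect its non-centrality accumulates across blocks, which is exactly the power mechanism described above.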
Remark 3. 
Some possible choices of g are given in the following:
  • $g(x)=x$: $\ \frac{1}{\sqrt{K_n}}\sum_{k=1}^{K_n}T_n(k)\xrightarrow{\mathcal L}N(0,1)$;
  • $g(x)=|x|$: $\ \frac{1}{\sqrt{(1-\frac{2}{\pi})K_n}}\sum_{k=1}^{K_n}\big(|T_n(k)|-\sqrt{\tfrac{2}{\pi}}\big)\xrightarrow{\mathcal L}N(0,1)$;
  • $g(x)=x^2$: $\ \frac{1}{\sqrt{2K_n}}\sum_{k=1}^{K_n}\big((T_n(k))^2-1\big)\xrightarrow{\mathcal L}N(0,1)$;
  • $g(x)=\log(|x|)$: $\ \frac{1}{\sqrt{\sigma_{\log}K_n}}\sum_{k=1}^{K_n}\big(\log|T_n(k)|-\mu_{\log}\big)\xrightarrow{\mathcal L}N(0,1)$, where $\sigma_{\log}=V[\log|Z|]\approx1.24$ and $\mu_{\log}=E[\log|Z|]\approx-0.64$.
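The centering and scaling constants for these four choices can be collected in one table. The closed forms $E[|Z|]=\sqrt{2/\pi}$, $V[|Z|]=1-2/\pi$, $E[\log|Z|]=-(\gamma+\log 2)/2$ and $V[\log|Z|]=\pi^2/8$ are standard normal-moment facts; the dictionary keys and function names below are ours.

```python
import math
import numpy as np

# (g, rho_g = E[g(Z)], nu_g = V[g(Z)]) for the four choices listed above.
G_CHOICES = {
    "identity": (lambda x: x, 0.0, 1.0),
    "abs": (np.abs, math.sqrt(2.0 / math.pi), 1.0 - 2.0 / math.pi),
    "square": (np.square, 1.0, 2.0),
    "logabs": (lambda x: np.log(np.abs(x)),
               -(np.euler_gamma + math.log(2.0)) / 2.0,  # ~ -0.64
               math.pi ** 2 / 8.0),                      # ~ 1.23
}

def aggregate(tk, name):
    """Aggregated test statistic for a chosen g applied to the block stats."""
    g, rho, nu = G_CHOICES[name]
    tk = np.asarray(tk, dtype=float)
    return np.sum(g(tk) - rho) / math.sqrt(nu * len(tk))
```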
We can also use the extreme value distribution to construct the test. That is, we have
$$\sqrt{2\log K_n}\left(\max_{1\le k\le K_n}T_n(k)-\sqrt{2\log K_n}+\frac{\log\log K_n+\log4\pi}{2\sqrt{2\log K_n}}\right) \;\xrightarrow{\;\mathcal L\;}\; G,$$
where G is the standard Gumbel distribution, with cumulative distribution function $\exp\{-\exp(-x)\}$.
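A sketch of this extreme-value version follows; the normalizing constants mirror the display above, while the p-value helper is our own addition.

```python
import math
import numpy as np

def gumbel_max_stat(tk):
    """Normalized maximum of the block statistics; the limit law is Gumbel."""
    K = len(tk)
    a = math.sqrt(2.0 * math.log(K))
    b = a - (math.log(math.log(K)) + math.log(4.0 * math.pi)) / (2.0 * a)
    return a * (np.max(tk) - b)

def gumbel_pvalue(m):
    """Upper-tail p-value under the standard Gumbel law exp(-exp(-x))."""
    return 1.0 - math.exp(-math.exp(-m))
```

Convergence of normal maxima to the Gumbel law is known to be slow, so this variant is best viewed as a complement to the sum-type statistics above.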
Remark 4.
From the conditions of Theorem 1, $K_n\Delta_n^{1/2}\to0$ (with $K_n\asymp T$) is required to obtain the desired asymptotic results. Due to this restriction, $K_n$ (or T) cannot be too large. For example, if we consider the daily trading session (6.5 h) with a one-minute sampling frequency, then $\Delta_n=1/390$; hence, a reasonable choice for $K_n$ is 10–20. This may yield low power, although the test is asymptotically consistent. To obtain a larger sample size $K_n$, we can consider a shorter local time span, namely, $m=\lfloor\vartheta/\Delta_n\rfloor$ for $0<\vartheta<1$. It is easy to see that the asymptotic results remain true in this case as long as ϑ is fixed. For instance, if we consider half-day blocks, $K_n$ is doubled. Moreover, we may consider the extreme case of $\vartheta\to0$ (but $m\to\infty$). However, the asymptotic behaviour would then be totally different from that obtained in the current setting, and it will be investigated in future work.

4. Simulation

In this section, we conduct simulation studies to assess the finite sample performance of our proposed test statistics. We generate the high-frequency data from the following one-factor stochastic volatility model (see Andersen et al. [22]):
$$dS_t = r\,dt + \sigma_t\,dW_t,\qquad dV_t = a\,dt + \eta\rho\,dW_t + \kappa\sqrt{1-\rho^2}\,dB_t,\qquad \sigma_t = \exp(\mu+\gamma V_t),$$
where W and B are two standard Brownian motions, independent of each other, and ρ represents the level of the leverage effect. The parameters of the model are calibrated to real financial data. The parameter values used in the generating process under the null and alternative hypotheses are displayed in Table 1, where r = 0.0317 represents the monthly interest rate, corresponding to an annual interest rate of about 8%.
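An Euler-scheme sketch of this data-generating process is given below. The parameter defaults are illustrative placeholders of our own, not the calibrated values of Table 1; only the model structure follows the display above.

```python
import numpy as np

def simulate_sv(n, dt, r=0.0, a=-0.1, eta=0.5, kappa=0.5, rho=-0.5,
                mu=-1.0, gamma=0.5, v0=0.0, seed=0):
    """Euler discretization of the one-factor stochastic volatility model above.

    Returns (x, v): log-price and volatility-factor paths on the grid i*dt.
    rho = 0 switches the leverage effect off; all default parameter values
    are illustrative assumptions, not the paper's Table 1 calibration.
    """
    rng = np.random.default_rng(seed)
    dW = np.sqrt(dt) * rng.standard_normal(n)
    dB = np.sqrt(dt) * rng.standard_normal(n)
    x = np.empty(n + 1)
    v = np.empty(n + 1)
    x[0], v[0] = 0.0, v0
    for i in range(n):
        sig = np.exp(mu + gamma * v[i])          # sigma_t = exp(mu + gamma V_t)
        x[i + 1] = x[i] + r * dt + sig * dW[i]
        v[i + 1] = (v[i] + a * dt + eta * rho * dW[i]
                    + kappa * np.sqrt(1.0 - rho ** 2) * dB[i])
    return x, v
```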
The observation scheme is similar to that of our empirical application. We set $\Delta_n=1/78$, $\vartheta=1$, and m = 78, which corresponds to sampling every 5 min in a 6.5 h trading day. In the estimation of the local volatility, we let $k_n=13$. We tested for the presence of the leverage effect on intervals of length from T = 1 (one day) to T = 66 (one quarter), by aggregating the test statistics over the T days. For the choice of the function g (used in Theorem 1), we selected the following functions:
  • g ( x ) = x , which corresponds to the test statistic W n ;
  • g ( x ) = | x | , which corresponds to the test statistic S n ;
  • g ( x ) = | x | 4 , which corresponds to the test statistic U n ;
  • g ( x ) = log ( | x | ) , which corresponds to the test statistic V n ;
These four functions all have bounded derivatives and, therefore, satisfy the condition of Theorem 1. From the first to the last, the derivative becomes gradually smaller.
We computed the finite sample size and power for all cases. The results from the Monte Carlo, which is based on 10,000 replications, are reported in two tables (Table 2 and Table 3) and two figures (Figure 1 and Figure 2). Table 2 and Figure 1 display the finite sample size and Table 3 and Figure 2 exhibit the finite sample power, respectively.
From Table 2, we see that the test statistic $W_n$ performs relatively stably with respect to the length of the time interval, T. The coverage rates of the other three test statistics tend to decrease more markedly as T increases. Figure 1 shows the same results pictorially. This is consistent with our theoretical results: Theorem 1 and Corollary 1 indicate that $K_n$ (hence T) cannot be too large if a valid convergence rate is to be guaranteed under the null hypothesis $H_0$. Table 3 shows that all four test statistics tend to perform uniformly better when the time interval becomes longer under the alternative hypothesis $H_1$, which is also consistent with our theoretical results. In fact, the only condition needed for the test statistics to tend to infinity under the alternative hypothesis $H_1$ is $T\to\infty$. Therefore, an ideal choice for the length of the test period should be determined via a tradeoff between size and power. To this end, we computed the averaged errors of sizes and powers, namely, the differences between the finite sample coverage rates and the asymptotic coverage rates, of the four test statistics. The simulation results indicate that, for high-frequency data with a sampling frequency of five minutes, the best choice for the time interval of the test is between one and two months.

5. Real Data Analysis

In this section, we implement the proposed test procedures on a real high-frequency financial dataset. The dataset consists of 5-min close prices of the SP500 index from 1997 to 2000. To clean the data, we first removed the largest 3% of returns to avoid the possible presence of jumps; no further adjustment was applied to the data set. Finally, we obtained a total of 250,669 data points. We computed the test statistics over several periods, namely, a day, a week, a month, a quarter, half a year and a year, respectively. More precisely, for the daily test, we took n = 78 (n = 390, 1716, 5418, 10,296 and 20,592 for the weekly, monthly, quarterly, semiannual and annual tests, respectively) and $m=k_n=\sqrt{n}$, using the following statistic with $K_n=\lfloor n/m\rfloor$:
$$\mathcal T_n := \frac{1}{\sqrt{\nu_gK_n}}\sum_{k=1}^{K_n}\big(g(T_n(k))-\rho_g\big),\qquad T_n(k) = \frac{\sum_{i=1}^m\big(\Delta_{i+(k-1)m}^nX\big)^3 - 3\Delta_n\sum_{i=1}^m\hat\sigma^2_{(i-1+(k-1)m)\Delta_n}\,\Delta_{i+(k-1)m}^nX}{\sqrt{\frac{2}{5}\sum_{i=1}^m\big(\Delta_{i+(k-1)m}^nX\big)^6}},$$
where we used the same test functions g as in the simulation.
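The 3% trimming rule described above can be sketched as follows. Whether the cut is applied to signed or absolute returns is not specified, so the absolute-value convention here is our assumption, and the function name is ours.

```python
import numpy as np

def trim_largest(dx, frac=0.03):
    """Drop the largest fraction `frac` of returns by absolute value,
    a crude jump filter mirroring the 3% rule described above."""
    cutoff = np.quantile(np.abs(dx), 1.0 - frac)
    return dx[np.abs(dx) < cutoff]
```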
Table 4 displays the values of the test statistics and related rejection rates of the tests based on these six different time spans.
From Table 4, for time periods longer than a day, all four test statistics reject the null hypothesis of the absence of the leverage effect. For the daily test, we see that $W_n$ shows the highest rejection rate across all time periods, followed by $S_n$ and $U_n$; $V_n$ has the lowest rejection rate. This is the same as the ordering of the empirical powers in the simulation study; hence, we may conclude that the leverage effect is a necessary component in modelling the high-frequency data of the SP500 index.

6. Discussion

In the current paper, we did not consider other common stylized facts of high-frequency data, such as jumps and microstructure noise. Here, we outline some possible ways to deal with their effects. Suppose that the process X contains jumps, namely,
$$dX_t = b_t\,dt + \sigma_t\,dW_t + \delta_t\,dJ_t,$$
where J is a pure jump process. In this case, one possible way to remove the effect of the jump is to use the threshold technique proposed by Mancini [23]:
$$\frac{1}{\Delta_n}\sum_{i=1}^{n}(\Delta_i^nX)^3\,\mathbf 1_{\{|\Delta_i^nX|\le\alpha\Delta_n^{\varpi}\}} \;\xrightarrow{\;\mathcal S\;}\; 3\int_0^T L_s\sigma_s^2\,ds + 3\int_0^T \sigma_s^2\,dX_s + \sqrt{6}\int_0^T|\sigma_s|^3\,d\widetilde W_s,$$
where $0<\varpi<1/2$ is a constant. The theory established in this paper could then be extended to the case with jumps. For microstructure noise, we may apply a de-noising approach, such as the pre-averaging method proposed in Jacod et al. [24] or the realized kernel method suggested by Barndorff-Nielsen et al. [25], to reduce the level of microstructure noise, and then use the idea of this paper to construct a test statistic for the presence of the leverage effect. Another direction concerns the future implications of the results obtained in this paper. As implied by the empirical study, the leverage effect is a necessary feature of high-frequency modelling; therefore, we may consider how to use the leverage effect to improve the usefulness of high-frequency data; for instance, we may add the leverage effect as a factor in models predicting volatility and asset prices, such as the HAR model.
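A sketch of the truncation step is given below. The threshold constant `alpha` and the exponent `varpi` are tuning choices of our own; only the requirement $\varpi<1/2$ is dictated by the display above.

```python
import numpy as np

def truncated_cubic(dx, dt, alpha=3.0, varpi=0.49):
    """Normalized cubic variation with Mancini-type truncation:
    increments with |dx| > alpha * dt**varpi are treated as jumps and dropped."""
    keep = np.abs(dx) <= alpha * dt ** varpi
    return np.sum(dx[keep] ** 3) / dt
```

Because continuous increments are of order $\sqrt{dt}$ while the threshold shrinks more slowly (since $\varpi<1/2$), large jump increments are removed asymptotically while diffusive increments are kept.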

7. Conclusions

In this paper, we propose a class of test statistics for the presence of the leverage effect in modelling high-frequency financial data. The test procedure is valid under shrinking sampling intervals. In the real-data application, it proved a valid and reasonable test for the presence of the leverage effect within a moderately long time span, from a week to a quarter, for instance. Extensive simulation studies further confirmed the performance of the proposed test statistics. The proposed test method was also applied to a dataset of the SP500 index; the results show that the leverage effect is a necessary factor when the time span is longer than a week.
Some limitations of the present study remain. The first is that jumps are not incorporated into the current setting, and the second is that the test is not robust to the possible presence of microstructure noise. We listed some possible ways to deal with these two issues in the previous section. In view of the importance of both jumps and microstructure noise in financial high-frequency data, the extension of the current research in this direction deserves future investigation.

Funding

This research is funded by The Science and Technology Development Fund, Macau SAR (File no. 0041/2021/ITP) and NSFC (No. 11971507).

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Technical Proofs

By a standard localization procedure, as described in Jacod and Protter [26], we may assume that b, a, H and σ are bounded; we also assume that $\sigma^2$ is bounded from below uniformly on $[0,T]$ and, under the alternative hypothesis, that L is bounded from below uniformly on $[0,T]$ as well. Throughout the proofs, C denotes a generic constant. We write $t_i^n = i\Delta_n$ and $Y_i := Y_{t_i^n}$ for a process Y.
Proof of Proposition 1.
Observe that
$$\frac{1}{\Delta_n}\sum_{i=1}^n(\Delta_i^nX)^3 = \frac{1}{\Delta_n}\sum_{i=1}^n\Big(\int_{t_{i-1}^n}^{t_i^n}b_s\,ds\Big)^3 + \frac{1}{\Delta_n}\sum_{i=1}^n\Big(\int_{t_{i-1}^n}^{t_i^n}\sigma_s\,dW_s\Big)^3 + \frac{3}{\Delta_n}\sum_{i=1}^n\Big(\int_{t_{i-1}^n}^{t_i^n}b_s\,ds\Big)^2\int_{t_{i-1}^n}^{t_i^n}\sigma_s\,dW_s + \frac{3}{\Delta_n}\sum_{i=1}^n\int_{t_{i-1}^n}^{t_i^n}b_s\,ds\Big(\int_{t_{i-1}^n}^{t_i^n}\sigma_s\,dW_s\Big)^2 =: I_1^n+I_2^n+I_3^n+I_4^n.$$
Thus,
$$T_n = \frac{(I_1^n+I_2^n+I_3^n+I_4^n)-I_5^n}{\Big(\frac{2}{5\Delta_n^2}\sum_{i=1}^n(\Delta_i^nX)^6\Big)^{1/2}},$$
where $I_5^n := 3\sum_{i=1}^n\hat\sigma_{i-1}^2\,\Delta_i^nX$. Boundedness of b and σ together with the Itô isometry yields
$$E[|I_1^n|]\le C\Delta_n,\qquad E[(I_3^n)^2]\le C\Delta_n^2.$$
Next, we analyse $I_2^n$. By Itô's formula, we have the following decomposition:
$$\begin{aligned}
I_2^n &= \frac{1}{\Delta_n}\sum_{i=1}^n\Big[2\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}\sigma_u\sigma_s\,dW_u\,dW_s + \int_{t_{i-1}^n}^{t_i^n}\sigma_s^2\,ds\Big]\cdot\int_{t_{i-1}^n}^{t_i^n}\sigma_s\,dW_s\\
&= \frac{1}{\Delta_n}\sum_{i=1}^n 2\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}\sigma_u\sigma_s\,dW_u\,dW_s\cdot\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}d\sigma_u\,dW_s\\
&\quad+\frac{1}{\Delta_n}\sum_{i=1}^n 2\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}\sigma_u\sigma_s\,dW_u\,dW_s\cdot\sigma_{i-1}\Delta_i^nW\\
&\quad+\frac{1}{\Delta_n}\sum_{i=1}^n\int_{t_{i-1}^n}^{t_i^n}(\sigma_s^2-\sigma_{i-1}^2)\,ds\cdot\int_{t_{i-1}^n}^{t_i^n}\sigma_s\,dW_s + \sum_{i=1}^n\sigma_{i-1}^2\int_{t_{i-1}^n}^{t_i^n}\sigma_s\,dW_s\\
&=: I_{2,1}^n+I_{2,2}^n+I_{2,3}^n+I_{2,4}^n.
\end{aligned}$$
Recalling Assumption 1, σ itself is a semi-martingale; hence,
$$\begin{aligned}
I_{2,1}^n &= \frac{2}{\Delta_n}\sum_{i=1}^n\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}\sigma_u\sigma_s\,dW_u\,dW_s\cdot\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}d\sigma_u\,dW_s\\
&= \frac{2}{\Delta_n}\sum_{i=1}^n\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}\sigma_u\sigma_s\,dW_u\,dW_s\cdot\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}a_u\,du\,dW_s\\
&\quad+\frac{2}{\Delta_n}\sum_{i=1}^n\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}\sigma_u\sigma_s\,dW_u\,dW_s\cdot\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}H_u\,dB_u\,dW_s\\
&\quad+\frac{2}{\Delta_n}\sum_{i=1}^n\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}\sigma_u\sigma_s\,dW_u\,dW_s\cdot\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}L_u\,dW_u\,dW_s\\
&=: I_{2,1}^n(1)+I_{2,1}^n(2)+I_{2,1}^n(3).
\end{aligned}$$
If $L\equiv0$, then $I_{2,1}^n(3)=0$. It is easy to see that
$$E[|I_{2,1}^n(1)|]\le C\Delta_n^{1/2},$$
and
$$\begin{aligned}
I_{2,1}^n(2) &= \frac{2}{\Delta_n}\sum_{i=1}^n\int_{t_{i-1}^n}^{t_i^n}\Big(\int_{t_{i-1}^n}^{s}\sigma_u\sigma_s\,dW_u\cdot\int_{t_{i-1}^n}^{s}\!\int_{t_{i-1}^n}^{u}H_v\,dB_v\,dW_u\Big)dW_s\\
&\quad+\frac{2}{\Delta_n}\sum_{i=1}^n\int_{t_{i-1}^n}^{t_i^n}\Big(\int_{t_{i-1}^n}^{s}\!\int_{t_{i-1}^n}^{u}\sigma_v\sigma_u\,dW_v\,dW_u\cdot\int_{t_{i-1}^n}^{s}H_u\,dB_u\Big)dW_s\\
&\quad+\frac{2}{\Delta_n}\sum_{i=1}^n\int_{t_{i-1}^n}^{t_i^n}\Big(\int_{t_{i-1}^n}^{s}\sigma_s\sigma_u\,dW_u\cdot\int_{t_{i-1}^n}^{s}H_u\,dB_u\Big)ds\\
&=: I_{2,1}^n(2,1)+I_{2,1}^n(2,2)+I_{2,1}^n(2,3).
\end{aligned}$$
Note that
$$\begin{aligned}
I_{2,1}^n(2,3) &= \frac{2}{\Delta_n}\sum_{i=1}^n\int_{t_{i-1}^n}^{t_i^n}\Big(\int_{t_{i-1}^n}^{s}\!\int_{t_{i-1}^n}^{u}\sigma_s\sigma_uH_v\,dB_v\,dW_u+\int_{t_{i-1}^n}^{s}\!\int_{t_{i-1}^n}^{u}\sigma_u\sigma_vH_u\,dW_v\,dB_u\Big)ds\\
&= \frac{2}{\Delta_n}\sum_{i=1}^n\Big(\int_{t_{i-1}^n}^{t_i^n}\!\int_{v}^{t_i^n}\!\int_{u}^{t_i^n}\sigma_s\sigma_uH_v\,ds\,dW_u\,dB_v+\int_{t_{i-1}^n}^{t_i^n}\!\int_{v}^{t_i^n}\!\int_{u}^{t_i^n}\sigma_u\sigma_vH_u\,ds\,dB_u\,dW_v\Big).
\end{aligned}$$
Boundedness of b, H, a and σ, the Cauchy–Schwarz inequality and the Burkholder inequality together yield
$$E[|I_{2,1}^n(2,1)|^2]\le C\Delta_n,\qquad E[|I_{2,1}^n(2,2)|^2]\le C\Delta_n,\qquad E[|I_{2,1}^n(2,3)|^2]\le C\Delta_n^2.$$
For $I_{2,3}^n$, we have
$$\begin{aligned}
I_{2,3}^n &= \frac{1}{\Delta_n}\sum_{i=1}^n\int_{t_{i-1}^n}^{t_i^n}(\sigma_s^2-\sigma_{i-1}^2)\,ds\cdot\int_{t_{i-1}^n}^{t_i^n}\sigma_s\,dW_s\\
&= \frac{1}{\Delta_n}\sum_{i=1}^n\Big[\int_{t_{i-1}^n}^{t_i^n}\Big(\int_{t_{i-1}^n}^{s}d\sigma_u\Big)^2ds\cdot\int_{t_{i-1}^n}^{t_i^n}\sigma_s\,dW_s + 2\sigma_{i-1}\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}d\sigma_u\,ds\cdot\int_{t_{i-1}^n}^{t_i^n}\sigma_s\,dW_s\Big]\\
&=: I_{2,3}^n(1)+I_{2,3}^n(2).
\end{aligned}$$
Again, Assumption 1, the Cauchy–Schwarz inequality and the Burkholder inequality together yield
$$E[|I_{2,3}^n(1)|]\le C\Delta_n^{1/2},$$
and
$$\begin{aligned}
I_{2,3}^n(2) &= \frac{2}{\Delta_n}\sum_{i=1}^n\sigma_{i-1}\Big[\int_{t_{i-1}^n}^{t_i^n}\Big(\int_{t_{i-1}^n}^{s}d\sigma_u\cdot\int_{t_{i-1}^n}^{s}\sigma_u\,dW_u\Big)ds+\int_{t_{i-1}^n}^{t_i^n}\Big(\sigma_s\int_{t_{i-1}^n}^{s}\!\int_{t_{i-1}^n}^{u}d\sigma_v\,du\Big)dW_s\Big]\\
&= \frac{2}{\Delta_n}\sum_{i=1}^n\sigma_{i-1}\int_{t_{i-1}^n}^{t_i^n}\Big(\int_{t_{i-1}^n}^{s}a_u\,du\cdot\int_{t_{i-1}^n}^{s}\sigma_u\,dW_u\Big)ds\\
&\quad+\frac{2}{\Delta_n}\sum_{i=1}^n\sigma_{i-1}\int_{t_{i-1}^n}^{t_i^n}\Big(\int_{t_{i-1}^n}^{s}H_u\,dB_u\cdot\int_{t_{i-1}^n}^{s}\sigma_u\,dW_u\Big)ds\\
&\quad+\frac{2}{\Delta_n}\sum_{i=1}^n\sigma_{i-1}\int_{t_{i-1}^n}^{t_i^n}\Big(\int_{t_{i-1}^n}^{s}L_u\,dW_u\cdot\int_{t_{i-1}^n}^{s}\sigma_u\,dW_u\Big)ds\\
&\quad+\frac{2}{\Delta_n}\sum_{i=1}^n\sigma_{i-1}\int_{t_{i-1}^n}^{t_i^n}\Big(\sigma_s\int_{t_{i-1}^n}^{s}\!\int_{t_{i-1}^n}^{u}d\sigma_v\,du\Big)dW_s\\
&=: I_{2,3}^n(2,1)+I_{2,3}^n(2,2)+I_{2,3}^n(2,3)+I_{2,3}^n(2,4).
\end{aligned}$$
Note that $I_{2,3}^n(2,3) = 0$ if $L \equiv 0$, that is, under the null hypothesis that the leverage effect is absent. It is easy to check that
$$\mathbb{E}\big[|I_{2,3}^n(2,1)|^2\big] \le C\Delta_n, \qquad \mathbb{E}\big[|I_{2,3}^n(2,2)|\big] \le C\Delta_n^{1/2}, \qquad \mathbb{E}\big[|I_{2,3}^n(2,3)|^2\big] \le C\Delta_n, \qquad \mathbb{E}\big[|I_{2,3}^n(2,4)|^2\big] \le C\Delta_n.$$
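For orientation, the three stochastic pieces above come from the semimartingale dynamics of the volatility, which the splitting of $d\sigma_u$ into $a_u\,du$, $H_u\,dB_u$ and $L_u\,dW_u$ presupposes; the following LaTeX sketch summarizes the decomposition being used (our paraphrase of how a, H and L enter Assumption 1):

```latex
% Dynamics of the volatility process assumed in the splitting above:
%   a_t dt    -- drift part,
%   H_t dB_t  -- component orthogonal to the driving motion W of X,
%   L_t dW_t  -- component driven by the same Brownian motion as X.
d\sigma_t \;=\; a_t\,dt \;+\; H_t\,dB_t \;+\; L_t\,dW_t .
% The leverage effect is the dependence of \sigma on W, i.e. L \not\equiv 0.
% Substituting the three pieces of d\sigma_u into \int_{t_{i-1}^n}^{s} d\sigma_u
% produces I_{2,3}^n(2,1), I_{2,3}^n(2,2) and I_{2,3}^n(2,3), and the last of
% these vanishes identically when L \equiv 0.
```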
Now, we return to the consideration of the two main terms, $I_{2,2}^n$ and $I_4^n$. By employing Itô's product formula repeatedly, we obtain
$$\begin{aligned}
I_{2,2}^n &= \frac{1}{\Delta_n}\sum_{i=1}^n 2\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}\sigma_u\sigma_s\,dW_u\,dW_s \cdot \sigma_{i-1}\,\Delta_i^n W \\
&= \frac{2}{\Delta_n}\sum_{i=1}^n 3\sigma_{i-1}\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}\!\int_{t_{i-1}^n}^{u}\sigma_u\sigma_v\,dW_v\,dW_u\,dW_s + \frac{2}{\Delta_n}\sum_{i=1}^n \sigma_{i-1}\int_{t_{i-1}^n}^{t_i^n}\sigma_s\,dW_s \cdot \int_{t_{i-1}^n}^{t_i^n}\sigma_u\,du \\
&= \frac{6}{\Delta_n}\sum_{i=1}^n \sigma_{i-1}^3\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}\!\int_{t_{i-1}^n}^{u}dW_v\,dW_u\,dW_s + 2\sum_{i=1}^n \sigma_{i-1}^2\int_{t_{i-1}^n}^{t_i^n}\sigma_s\,dW_s \\
&\quad + \frac{6}{\Delta_n}\sum_{i=1}^n \sigma_{i-1}\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}\!\int_{t_{i-1}^n}^{u}\Big(\int_{t_{i-1}^n}^{v}\sigma_u\,d\sigma_q\Big)dW_v\,dW_u\,dW_s + \frac{6}{\Delta_n}\sum_{i=1}^n \sigma_{i-1}\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}\!\int_{t_{i-1}^n}^{u}\Big(\int_{t_{i-1}^n}^{u}\sigma_v\,d\sigma_q\Big)dW_v\,dW_u\,dW_s \\
&\quad + \frac{2}{\Delta_n}\sum_{i=1}^n \sigma_{i-1}\int_{t_{i-1}^n}^{t_i^n}\sigma_s\,dW_s \cdot \int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{u}d\sigma_v\,du \\
&=: I_{2,2}^n(1) + I_{2,2}^n(2) + I_{2,2}^n(3) + I_{2,2}^n(4) + I_{2,2}^n(5).
\end{aligned}$$
The boundedness of b, H, a and σ, the Cauchy–Schwarz inequality and the Burkholder inequality together give
$$\mathbb{E}\big[|I_{2,2}^n(3)|^2\big] \le C\Delta_n, \qquad \mathbb{E}\big[|I_{2,2}^n(4)|^2\big] \le C\Delta_n, \qquad \mathbb{E}\big[|I_{2,2}^n(5)|\big] \le C\Delta_n^{1/2}.$$
For $I_4^n$, by Itô's formula, we obtain
$$I_4^n = \frac{3}{\Delta_n}\sum_{i=1}^n \int_{t_{i-1}^n}^{t_i^n}b_s\,ds \cdot \Big(2\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}\sigma_u\sigma_s\,dW_u\,dW_s + \int_{t_{i-1}^n}^{t_i^n}\sigma_s^2\,ds\Big) = \frac{6}{\Delta_n}\sum_{i=1}^n \int_{t_{i-1}^n}^{t_i^n}b_s\,ds \cdot \int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}\sigma_u\sigma_s\,dW_u\,dW_s + \frac{3}{\Delta_n}\sum_{i=1}^n \int_{t_{i-1}^n}^{t_i^n}b_s\,ds \cdot \int_{t_{i-1}^n}^{t_i^n}\sigma_s^2\,ds =: I_{4,1}^n + I_{4,2}^n.$$
Again, by the boundedness of b and σ, and by Hölder's inequality together with the Itô isometry, we obtain
$$\mathbb{E}\big[|I_{4,1}^n|\big] \le C\Delta_n^{3/2}.$$
Writing I 4 , 2 n as
$$I_{4,2}^n = \frac{3}{\Delta_n}\sum_{i=1}^n \int_{t_{i-1}^n}^{t_i^n}b_s\,ds \cdot \int_{t_{i-1}^n}^{t_i^n}\big(\sigma_s^2 - \sigma_{i-1}^2\big)\,ds + 3\sum_{i=1}^n \sigma_{i-1}^2 \int_{t_{i-1}^n}^{t_i^n}b_s\,ds =: I_{4,2}^n(1) + I_{4,2}^n(2).$$
It is easy to see that
$$\mathbb{E}\big[|I_{4,2}^n(1)|\big] \le C\Delta_n^2.$$
Considering $I_5^n$, we first have
σ ^ i 1 2 = 1 k n Δ n j = ( i k n ) 1 ( i 1 ) k n ( Δ j n X ) 2 = σ i 1 2 + 1 k n Δ n j = ( i k n ) 1 ( i 1 ) k n ( t j 1 n t j n b s d s ) 2 + 1 k n Δ n j = ( i k n ) 1 ( i 1 ) k n t j 1 n t j n b s d s t j 1 n t j n σ s d W s + 1 k n Δ n j = ( i k n ) 1 ( i 1 ) k n t j 1 n t j n ( σ s 2 σ i 1 2 ) d s + 2 k n Δ n j = ( i k n ) 1 ( i 1 ) k n t j 1 n t j n t j 1 n s σ u σ s d W u d W s = : σ i 1 2 + e i , 1 n + e i , 2 n + e i , 3 n + e i . 4 n .
Hence, we have
$$I_5^n = 3\sum_{i=1}^n \hat\sigma_{i-1}^2\,\Delta_i^n X = 3\sum_{i=1}^n \sigma_{i-1}^2\,\Delta_i^n X + 3\sum_{i=1}^n \Delta_i^n X\,\big(e_{i,1}^n + e_{i,2}^n + e_{i,3}^n + e_{i,4}^n\big).$$
It is easy to see that
$$\mathbb{E}\Big[\Big|\sum_{i=1}^n \Delta_i^n X \cdot e_{i,1}^n\Big|\Big] \le C\Delta_n^{1/2}, \qquad \mathbb{E}\Big[\Big|\sum_{i=1}^n \Delta_i^n X \cdot e_{i,3}^n\Big|^2\Big] \le C\Delta_n.$$
Moreover,
$$\begin{aligned}
\sum_{i=1}^n \Delta_i^n X \cdot e_{i,2}^n &= \sum_{i=1}^{k_n}\Delta_i^n X \cdot \frac{1}{k_n\Delta_n}\sum_{j=1}^{k_n}\int_{t_{j-1}^n}^{t_j^n}b_s\,ds\int_{t_{j-1}^n}^{t_j^n}\sigma_s\,dW_s + \sum_{i=k_n+1}^{n}\int_{t_{i-1}^n}^{t_i^n}b_s\,ds \cdot \frac{1}{k_n\Delta_n}\sum_{j=i-k_n}^{i-1}\int_{t_{j-1}^n}^{t_j^n}b_s\,ds\int_{t_{j-1}^n}^{t_j^n}\sigma_s\,dW_s \\
&\quad + \sum_{i=k_n+1}^{n}\int_{t_{i-1}^n}^{t_i^n}\sigma_s\,dW_s \cdot \frac{1}{k_n\Delta_n}\sum_{j=i-k_n}^{i-1}\int_{t_{j-1}^n}^{t_j^n}b_s\,ds\int_{t_{j-1}^n}^{t_j^n}\sigma_s\,dW_s \\
&=: o_{2,1}^n + o_{2,2}^n + o_{2,3}^n.
\end{aligned}$$
We have the following estimates:
$$\mathbb{E}\big[|o_{2,1}^n|\big] \le C k_n^{1/2}\Delta_n, \qquad \mathbb{E}\big[|o_{2,2}^n|\big] \le C\Delta_n^{1/2}, \qquad \mathbb{E}\big[|o_{2,3}^n|^2\big] \le C\Delta_n/k_n.$$
Similarly,
$$\begin{aligned}
\sum_{i=1}^n \Delta_i^n X \cdot e_{i,4}^n &= \sum_{i=1}^{k_n}\Delta_i^n X \cdot \frac{2}{k_n\Delta_n}\sum_{j=1}^{k_n}\int_{t_{j-1}^n}^{t_j^n}\!\int_{t_{j-1}^n}^{s}\sigma_u\sigma_s\,dW_u\,dW_s + \sum_{i=k_n+1}^{n}\int_{t_{i-1}^n}^{t_i^n}b_s\,ds \cdot \frac{2}{k_n\Delta_n}\sum_{j=i-k_n}^{i-1}\int_{t_{j-1}^n}^{t_j^n}\!\int_{t_{j-1}^n}^{s}\sigma_u\sigma_s\,dW_u\,dW_s \\
&\quad + \sum_{i=k_n+1}^{n}\int_{t_{i-1}^n}^{t_i^n}\sigma_s\,dW_s \cdot \frac{2}{k_n\Delta_n}\sum_{j=i-k_n}^{i-1}\int_{t_{j-1}^n}^{t_j^n}\!\int_{t_{j-1}^n}^{s}\sigma_u\sigma_s\,dW_u\,dW_s \\
&=: o_{4,1}^n + o_{4,2}^n + o_{4,3}^n,
\end{aligned}$$
and
$$\mathbb{E}\big[|o_{4,1}^n|\big] \le C(k_n\Delta_n)^{1/2}, \qquad \mathbb{E}\big[|o_{4,2}^n|^2\big] \le C\Delta_n, \qquad \mathbb{E}\big[|o_{4,3}^n|^2\big] \le C/k_n \le C\Delta_n^{1/2}.$$
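The local spot-variance estimator $\hat\sigma_{i-1}^2$ analysed above is simply a backward rolling average of squared increments over a window of $k_n$ steps; a minimal sketch (our illustration with hypothetical variable names, not the paper's code):

```python
# Minimal sketch (our illustration, not the paper's code) of the local
# spot-variance estimator analysed above:
#     sigma2_hat[i-1] = (1/(k_n * dt)) * sum_{j=i-k_n}^{i-1} (dX_j)**2,
# a backward rolling average of squared increments over a window of k_n steps.

def spot_variance(dx, kn, dt):
    """Return sigma2_hat for each i = kn+1, ..., n, given increments dx."""
    out = []
    for i in range(kn, len(dx)):
        window = dx[i - kn:i]  # increments j = i-kn, ..., i-1
        out.append(sum(v * v for v in window) / (kn * dt))
    return out

# With constant increments dx_j = c, the estimator returns c**2/dt exactly:
print(spot_variance([0.1] * 10, kn=5, dt=0.01))  # five values, each 1.0
```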
Observing that $I_{2,4}^n + I_{2,2}^n(2) + I_{4,2}^n(2) = 3\sum_{i=1}^n \sigma_{i-1}^2\,\Delta_i^n X$, and recalling the decomposition (A1), we have
$$T_n = \frac{\big(I_1^n + I_2^n + I_3^n + I_4^n\big) - I_5^n}{\Big(\frac{2}{5\Delta_n^2}\sum_{i=1}^n(\Delta_i^n X)^6\Big)^{1/2}} = \frac{I_{2,2}^n(1) + O_p(\Delta_n^{1/4})}{\Big(\frac{2}{5\Delta_n^2}\sum_{i=1}^n(\Delta_i^n X)^6\Big)^{1/2}}.$$
By a standard martingale central limit theorem, we obtain
$$I_{2,2}^n(1) = \frac{6}{\Delta_n}\sum_{i=1}^n \sigma_{i-1}^3\int_{t_{i-1}^n}^{t_i^n}\!\int_{t_{i-1}^n}^{s}\!\int_{t_{i-1}^n}^{u}dW_v\,dW_u\,dW_s \xrightarrow{\;\mathcal{L}\text{-}s\;} MN\Big(0,\;6\int_0^T\sigma_s^6\,ds\Big),$$
where the convergence is stable in law.
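As a sanity check on the constant 6 (a verification sketch of ours, not part of the original proof), the asymptotic variance can be recovered from the second moment of a third-order iterated Wiener integral:

```latex
% For an iterated Wiener integral of order 3 over an interval of length \Delta_n,
\mathbb{E}\Big[\Big(\int_{t_{i-1}^n}^{t_i^n}\!\!\int_{t_{i-1}^n}^{s}\!\!\int_{t_{i-1}^n}^{u}
      dW_v\,dW_u\,dW_s\Big)^{2}\Big] \;=\; \frac{\Delta_n^{3}}{3!},
% so each summand of I_{2,2}^n(1) has conditional variance
\Big(\frac{6}{\Delta_n}\Big)^{2}\sigma_{i-1}^{6}\,\frac{\Delta_n^{3}}{6}
   \;=\; 6\,\sigma_{i-1}^{6}\,\Delta_n,
% and summing over i yields the Riemann approximation of 6\int_0^T\sigma_s^6\,ds.
```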
Now, because
$$\frac{1}{\Delta_n^2}\sum_{i=1}^n\big(\Delta_i^n X\big)^6 \xrightarrow{\;P\;} 15\int_0^T\sigma_s^6\,ds,$$
thus, Slutsky’s theorem yields
$$T_n \xrightarrow{\;\mathcal{L}\;} N(0,1).$$
This completes the proof of Proposition 1. □
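The sixth-moment convergence used in the last step can be checked numerically; the following Monte Carlo sketch (our illustration, not from the paper) takes the simplest case of constant volatility and no drift, for which the limit is $15\sigma^6 T$:

```python
import numpy as np

# Monte Carlo check (illustrative) of the law-of-large-numbers step
#     (1/Delta_n^2) * sum_i (Delta_i X)^6  ->  15 * int_0^T sigma_s^6 ds,
# in the simplest case sigma_s = sigma constant and b = 0, so the limit
# equals 15 * sigma**6 * T (using E[N(0,1)^6] = 15).
rng = np.random.default_rng(0)
T, n = 10.0, 100_000
dt = T / n
sigma = 0.4
dX = sigma * np.sqrt(dt) * rng.standard_normal(n)  # increments of X = sigma*W

stat = (dX**6).sum() / dt**2
limit = 15 * sigma**6 * T

print(stat, limit)  # the two values should agree to within a few percent
```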
Before the proof of Theorem 1, we prove an auxiliary lemma. We have the following decomposition for $T_n(k)$:
$$T_n(k) = \frac{\frac{1}{\Delta_n}\Big[\big(I_1^{n,k} + I_2^{n,k} + I_3^{n,k} + I_4^{n,k}\big) - I_5^{n,k}\Big]}{\Big(\frac{2}{5}\sum_{i=1}^m\big(\Delta_{i+(k-1)m}^n X\big)^6\Big)^{1/2}},$$
and we denote
$$\xi_k^n := \frac{I_{2,2}^{n,k}(1)}{\Big(6\sum_{i=1}^m\sigma_{(k-1)m+i-1}^6\,\Delta_n\Big)^{1/2}}.$$
Lemma A1.
Under the conditions in Theorem 1, we have
$$\frac{1}{\sqrt{v_g K_n}}\sum_{k=1}^{K_n}\big(g(\xi_k^n) - \rho_g\big) \xrightarrow{\;\mathcal{L}\;} N(0,1).$$
Proof. 
Conditionally on σ, the $\xi_k^n$'s are independent for $k = 1, \dots, K_n$. Then, by the standard central limit theorem, we obtain
$$\frac{1}{\sqrt{K_n}}\sum_{k=1}^{K_n}\frac{g(\xi_k^n) - \mathbb{E}\big[g(\xi_k^n)\big]}{\sqrt{\mathbb{V}\big[g(\xi_k^n)\big]}} \xrightarrow{\;\mathcal{L}\;} N(0,1).$$
Moreover, in view of (A8), Slutsky's theorem and the dominated convergence theorem together imply that $\sqrt{K_n}\,\big(\mathbb{E}[g(\xi_k^n)] - \rho_g\big) \to 0$. Finally, observing that
$$\mathbb{V}\big[g(\xi_k^n)\big] \xrightarrow{\;P\;} v_g,$$
as $\Delta_n \to 0$, uniformly in $k = 1, \dots, K_n$; the desired result therefore follows from Slutsky's theorem. The proof is complete. □
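The Studentized block average in Lemma A1 is straightforward to compute once the block statistics are in hand; here is a small sketch (our illustration, not the paper's code), where we ASSUME the choice $g(x) = x^2$ purely for concreteness, so that $\rho_g = \mathbb{E}[g(N(0,1))] = 1$ and $v_g = \mathbb{V}[g(N(0,1))] = 2$:

```python
import math

# Sketch (our illustration) of the Studentized block average of Lemma A1:
# given block statistics xi_1, ..., xi_K that are approximately i.i.d.
# N(0,1) under the null, form
#     (1 / sqrt(v_g * K)) * sum_k (g(xi_k) - rho_g).
# We ASSUME g(x) = x**2, for which rho_g = 1 and v_g = 2; the paper only
# requires a smooth g with known rho_g and v_g.

def block_test_statistic(xi, g=lambda x: x * x, rho_g=1.0, v_g=2.0):
    K = len(xi)
    return sum(g(x) - rho_g for x in xi) / math.sqrt(v_g * K)

# With xi_k = +/-1 we have g(xi_k) = 1 = rho_g exactly, so the statistic is 0:
print(block_test_statistic([1.0, -1.0, 1.0, -1.0]))  # 0.0
```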
Proof of Theorem 1. 
Using the notation above, we have
$$T_n = \frac{1}{\sqrt{v_g K_n}}\sum_{k=1}^{K_n}\big(g(\xi_k^n) - \rho_g\big) + R_n,$$
where,
$$R_n = \frac{1}{\sqrt{v_g K_n}}\sum_{k=1}^{K_n}\big(g(T_n(k)) - g(\xi_k^n)\big).$$
From the approximations in the previous proof and Taylor's expansion, we have $\mathbb{E}[|R_n|] \le C\sqrt{K_n}\,\Delta_n^{1/4}$. Thus, the required result follows from Lemma A1. □
Proof of Corollary 1. 
Under $H_0$, the result in Theorem 1 holds when $K_n \to \infty$ and $K_n\Delta_n^{1/4} \to 0$; thus we have
$$P\big(T_n \ge z_{1-\alpha}\big) \to \alpha.$$
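The resulting decision rule is a one-sided normal test; a minimal sketch (our illustration, not code from the paper), using only the standard library:

```python
from statistics import NormalDist

# Minimal sketch (our illustration) of the resulting test decision: reject the
# null "no leverage effect" at level alpha when the Studentized statistic T_n
# exceeds the standard normal quantile z_{1-alpha}.

def reject_null(t_n: float, alpha: float = 0.05) -> bool:
    z = NormalDist().inv_cdf(1.0 - alpha)  # z_{1-alpha}, roughly 1.645 for 5%
    return t_n >= z

print(reject_null(2.0))  # True
print(reject_null(1.0))  # False
```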
Under H 1 , we have
$$T_n = \frac{1}{\sqrt{v_g K_n}}\sum_{k=1}^{K_n}\big(g(\xi_k^n) - \rho_g\big) + R_n.$$
However, now,
$$\begin{aligned}
R_n &= \frac{1}{\sqrt{v_g K_n}}\sum_{k=1}^{K_n}\big(g(T_n(k)) - g(\xi_k^n)\big) \\
&= \frac{1}{\sqrt{v_g K_n}}\sum_{k=1}^{K_n}\Bigg[g\Bigg(\frac{I_{2,1}^{n,k}(3) + I_{2,3}^{n,k}(2,3) + I_{2,2}^{n,k}(1) + O_p(\Delta_n^{1/4})}{\big(6\sum_{i=1}^m\sigma_{(k-1)m+i-1}^6\,\Delta_n\big)^{1/2}}\big(1 + O_P(\Delta_n^{1/2})\big)\Bigg) - g(\xi_k^n)\Bigg] \\
&= \frac{1}{\sqrt{v_g K_n}}\sum_{k=1}^{K_n}g'(\xi_k^{n,*})\,\frac{I_{2,1}^{n,k}(3) + I_{2,3}^{n,k}(2,3) + O_p(\Delta_n^{1/4})}{\big(6\sum_{i=1}^m\sigma_{(k-1)m+i-1}^6\,\Delta_n\big)^{1/2}}\big(1 + O_P(\Delta_n^{1/2})\big),
\end{aligned}$$
where $\xi_k^{n,*}$ is a value between $\xi_k^n$ and $T_n(k)$. Note that
$$\frac{I_{2,1}^{n,k}(3) + I_{2,3}^{n,k}(2,3)}{\big(6\sum_{i=1}^m\sigma_{(k-1)m+i-1}^6\,\Delta_n\big)^{1/2}} \xrightarrow{\;P\;} \frac{3\int_{k-1}^{k}\sigma_s^2 L_s\,ds}{\big(6\int_{k-1}^{k}\sigma_s^6\,ds\big)^{1/2}},$$
for all $k = 1, \dots, K_n$. Therefore, by Lemma A1 and Assumption 1, we obtain
$$R_n \xrightarrow{\;P\;} \infty.$$
Hence, $T_n \xrightarrow{P} \infty$ by Lemma A1, so the test has an asymptotic power of one. This completes the proof of Corollary 1. □

Figure 1. Coverage rates of five test statistics under $H_0$, against the length of the time interval T. (Left): a 90% nominal level. (Right): a 95% nominal level.
Figure 2. Coverage rates of five test statistics under $H_1$, against the length of the time interval T. (Left): a 90% nominal level. (Right): a 95% nominal level.
Table 1. The parameter values used in the simulation.

|       | S_0     | V_0    | r      | a      | η | κ      | μ       | γ      | ρ       |
|-------|---------|--------|--------|--------|---|--------|---------|--------|---------|
| H_0   | log(40) | 0.3989 | 0.0317 | 0.0317 | 5 | 0.0212 | −0.9682 | 0.9123 | −0.2919 |
| H_1   | log(40) | 0.3989 | 0.0317 | 0.0317 | 5 | 0.0212 | −0.9682 | 0.9123 | 0       |
Table 2. Coverage rates of the test statistics under $H_0$. The left four columns report α = 0.05; the right four report α = 0.1.

| T  | W_n    | S_n    | U_n    | V_n    | W_n    | S_n    | U_n    | V_n    |
|----|--------|--------|--------|--------|--------|--------|--------|--------|
| 1  | 0.9746 | 0.9758 | 0.9554 | 0.9504 | 0.9331 | 0.9552 | 0.9144 | 0.9257 |
| 5  | 0.9691 | 0.9728 | 0.9536 | 0.9528 | 0.9304 | 0.9326 | 0.9107 | 0.9163 |
| 10 | 0.9683 | 0.9697 | 0.9499 | 0.9483 | 0.9289 | 0.9196 | 0.9044 | 0.9056 |
| 15 | 0.9671 | 0.9566 | 0.9443 | 0.9413 | 0.9270 | 0.9086 | 0.8995 | 0.9045 |
| 20 | 0.9671 | 0.9553 | 0.9459 | 0.9434 | 0.9261 | 0.9039 | 0.8952 | 0.8969 |
| 22 | 0.9665 | 0.9538 | 0.9456 | 0.9451 | 0.9242 | 0.8937 | 0.8925 | 0.8951 |
| 44 | 0.9662 | 0.9289 | 0.9320 | 0.9329 | 0.9234 | 0.8590 | 0.8758 | 0.8849 |
| 66 | 0.9643 | 0.8969 | 0.9172 | 0.9245 | 0.9218 | 0.8253 | 0.8591 | 0.8733 |
Table 3. Coverage rates of the test statistics under $H_1$. The left four columns report α = 0.05; the right four report α = 0.1.

| T  | W_n    | S_n    | U_n    | V_n    | W_n    | S_n    | U_n    | V_n    |
|----|--------|--------|--------|--------|--------|--------|--------|--------|
| 1  | 0.9995 | 0.9996 | 0.9974 | 0.9968 | 0.9864 | 0.9969 | 0.9959 | 0.9959 |
| 5  | 0.1914 | 0.9887 | 0.9771 | 0.9993 | 0.1234 | 0.9242 | 0.7083 | 0.8737 |
| 10 | 0.0154 | 0.8420 | 0.3552 | 0.4170 | 0.0056 | 0.5565 | 0.1416 | 0.1478 |
| 15 | 0.0006 | 0.5225 | 0.0905 | 0.1040 | 0.0002 | 0.2365 | 0.0298 | 0.0376 |
| 20 | 0      | 0.2599 | 0.0185 | 0.0282 | 0      | 0.0881 | 0.0039 | 0.0091 |
| 22 | 0      | 0.1853 | 0.0093 | 0.0167 | 0      | 0.0573 | 0.0030 | 0.0057 |
| 44 | 0      | 0.0015 | 0      | 0      | 0      | 0      | 0      | 0.0001 |
| 66 | 0      | 0      | 0      | 0      | 0      | 0      | 0      | 0      |
Table 4. Rejection rate of tests.

| Time Span | Nominal Level | T_n  | S_n  | U_n  | V_n  |
|-----------|---------------|------|------|------|------|
| Day       | 0.05          | 0.98 | 0.92 | 0.89 | 0.68 |
| Day       | 0.1           | 0.98 | 0.90 | 0.87 | 0.57 |
| Week      | 0.05          | 1    | 1    | 1    | 1    |
| Week      | 0.1           | 1    | 1    | 1    | 1    |
| Month     | 0.05          | 1    | 1    | 1    | 1    |
| Month     | 0.1           | 1    | 1    | 1    | 1    |
| Quarter   | 0.05          | 1    | 1    | 1    | 1    |
| Quarter   | 0.1           | 1    | 1    | 1    | 1    |
| Half year | 0.05          | 1    | 1    | 1    | 1    |
| Half year | 0.1           | 1    | 1    | 1    | 1    |
| Year      | 0.05          | 1    | 1    | 1    | 1    |
| Year      | 0.1           | 1    | 1    | 1    | 1    |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Liu, Z. Testing for the Presence of the Leverage Effect without Estimation. Mathematics 2022, 10, 2511. https://0-doi-org.brum.beds.ac.uk/10.3390/math10142511
