Article

Fixed and Long Time Span Jump Tests: New Monte Carlo and Empirical Evidence

1
Department of Finance, Lingnan (University) College, Sun Yat-sen University, 135 Xingang West Road, Guangzhou 510275, China
2
Department of Economics, School of Arts and Sciences, Rutgers University, 75 Hamilton Street, New Brunswick, NJ 08901, USA
*
Author to whom correspondence should be addressed.
Submission received: 5 September 2018 / Revised: 19 February 2019 / Accepted: 7 March 2019 / Published: 13 March 2019
(This article belongs to the Special Issue Resampling Methods in Econometrics)

Abstract

Numerous tests designed to detect realized jumps over a fixed time span have been proposed and extensively studied in the financial econometrics literature. These tests differ from “long time span tests” that detect jumps by examining the magnitude of the jump intensity parameter in the data generating process, and which are consistent. In this paper, long span jump tests are compared and contrasted with a variety of fixed span jump tests in a series of Monte Carlo experiments. It is found that both the long time span tests of Corradi et al. (2018) and the fixed span tests of Aït-Sahalia and Jacod (2009) exhibit reasonably good finite sample properties, for time spans both short and long. Various other tests suffer from finite sample distortions, both under sequential testing and under long time spans. The latter finding is new, and confirms the “pitfall” discussed in Huang and Tauchen (2005), of using asymptotic approximations associated with finite time span tests in order to study long time spans of data. An empirical analysis is carried out to investigate the implications of these findings, and “time-span robust” tests indicate that the prevalence of jumps is not as universal as might be expected.

1. Introduction

In this paper, we add to the financial econometrics literature by carrying out an extensive Monte Carlo and empirical analysis comparing different types of jump tests used in the specification process associated with fitting continuous time models of financial variables.1 We focus on tests due to Barndorff-Nielsen and Shephard (2006); Lee and Mykland (2008); Aït-Sahalia and Jacod (2009); Corsi et al. (2010); and Podolskij and Ziggel (2010), who study “fixed time span” jump tests; and tests due to Corradi et al. (2014) and Corradi et al. (2018), who study so-called “long time span” jump tests.2 A “horse race” comparing alternative jump tests of these varieties is of interest because it is well known that tests constructed using observed sample paths of asset returns on a “fixed time span”, such as a day or a week, are not consistent against non-zero jump intensity, and are sensitive to sequential testing bias. On the other hand, the CSS and CSS1 tests that we examine are based on direct evaluation of the data generating process, and are thus consistent and asymptotically correctly sized as the time span T → ∞ and the sampling interval Δ → 0. Put differently, one reason why detecting jumps using long time span tests is of potential interest is that empirical researchers often estimate DGPs after testing for jumps using fixed time span tests. This is problematic if the jump intensity is identically zero, since parameters characterizing jump size are unidentified, and consistent estimation of the rest of the parameters is thus no longer feasible (see Andrews and Cheng (2012)). Moreover, if researchers detect jumps on a particular sample path (say, when using a daily fixed time span jump test), they might conclude that the jump intensity is non-zero. However, if no jumps are found in a sample path, this does not mean that there are no jumps in other sample paths, and hence that a DGP should be estimated with no jump component.3
The CSS and CSS1 tests examined in this paper are based on realized third moments, or tricity, and as discussed above, utilize observations over an increasing time span. Although various tricity-type tests have already been examined in the literature, it is worth noting that only the CSS and CSS1 tests are developed using both in-fill and long-span asymptotics. As discussed above, the use of long-span asymptotics ensures that these tests have power against non-zero jump intensity in the DGP, rather than against realized jumps on a particular sample path with a fixed time span. A key difference between the CSS and CSS1 tests is that the latter test sacrifices power by using a rescaled bootstrap to ensure robustness against leverage, while the former test uses thresholding and requires the use of a longer time span, T⁺, where T⁺/T → ∞, to eliminate the leverage effect. In our experiments, we consider two types of the CSS1 jump test (i.e., the CSS1 and CSS1˜ tests). Both of these build on earlier work of Aït-Sahalia (2002a, 2002b), and are special cases of the CSS test introduced in Corradi et al. (2018). One test assumes no leverage. The other test is robust to leverage, and is a rescaled version of the no leverage test. Both tests are derived under the assumption that E[(Y_{kΔ} − Y_{(k−1)Δ})³] = 0 whenever there are no jumps, where Y_{kΔ} = ln X_{kΔ} − (Δ/T) Σ_{k=2}^n ln X_{kΔ}, and ln X_{kΔ} is the log price observed at the k-th sampling point, with shrinking sampling interval Δ.
In our Monte Carlo analysis, the finite sample properties of three fixed time span tests, as well as the three CSS-type tests (i.e., the CSS1, CSS1˜ and CSS tests), are investigated.4 The three “fixed span” tests include the higher order power variation test of Aït-Sahalia and Jacod (2009) (ASJ), the classic bipower variation test of Barndorff-Nielsen and Shephard (2006) (BNS), and the truncated power variation test of Podolskij and Ziggel (2010) (PZ).5 Our findings can be summarized as follows. First, we show that the finite sample power of daily jump tests against non-zero jump intensity is low, particularly when jumps are infrequent or jump magnitudes are “weak”. For instance, when the jump intensity is 0.4 and the jump size parameter is our largest, rejection rates of the ASJ, BNS and PZ tests at a 0.05 significance level are only around 0.26, 0.38, and 0.36, respectively. Second, sequential testing bias grows rapidly as the time span increases. The size of a joint test based on the strategy of sequentially performing many fixed-T daily tests approaches unity very quickly. Even for the most conservative test (i.e., the ASJ test), empirical size is over 0.95 after 50 days. Importantly, we also show that the empirical sizes of fixed-T jump tests applied to samples with growing time spans also increase in T. However, size distortion accumulates more slowly when using the ASJ test than when using the BNS and PZ tests. Moreover, the power of the ASJ test is very good for all long time spans, as long as jumps are not too rare or too weak. Finally, long time span jump tests are adequately sized, as long as T and Δ are carefully chosen, regardless of the presence of leverage. Additionally, power is reasonably good, even when jumps are infrequent and weak, and increases with an increasing time span, T, for fixed Δ.
In our empirical analysis, we examine 5-min intraday observations between 2006 and 2013 on 12 individual stocks, nine sector ETFs, and one market (SPDR S&P 500) ETF. Our main empirical findings are summarized as follows. First, using daily ASJ, BNS and PZ tests, jumps are widely detected in asset prices over almost all time periods considered. Moreover, in some cases, the annual percentage of jump days seems inconceivably large. For instance, all three tests detect jumps on around 35–40% of the days in 2006 for two of the ETFs that we examine. Second, these jump percentages have diminished over time. Third, long span jump tests tell a different story. Namely, the ASJ test, the CSS1˜ test, and the CSS test indicate far fewer jumps than are found when using daily tests.6 This finding has important implications for both specification and estimation of asset price models.
The rest of the paper is organized as follows. Section 2 outlines the theoretical framework and introduces notation. Section 3 discusses statistical issues associated with testing for jumps. Section 4 discusses the long time span jump tests that we examine, and Section 5 briefly discusses the extant fixed time span tests examined in the sequel. Section 6 reports results from our Monte Carlo experiments. Section 7 presents the results of our empirical analysis of various individual stock and stock index data. Finally, Section 8 contains concluding remarks.

2. Setup

We use the setup of Corradi et al. (2018). Namely, assume that asset (log-)prices are recorded at an equally spaced discrete interval, Δ = 1/m, where m is the total number of observations on each trading day. In our model, we assume that Δ → 0, or equivalently that m → ∞. Log-prices follow a jump diffusion model defined on some filtered probability space (Ω, F, (F_t)_{t≥0}, P), with
d ln X_t = μ dt + V_t^{1/2} dW_{1,t} + Z_t dN_t,
where μ is the drift term, V_t is the spot volatility, and W_{1,t} is an adapted standard Brownian motion (i.e., it is F_t-measurable for each t ≥ 0). Here, V_t is defined according to either (i), (ii), (iii), or (iv), as follows:
(i) a constant:
V_t = v, for all t;
(ii) a measurable function of the state variable:
V_t is X_t-measurable;
(iii) a stochastic volatility process without leverage:
dV_t = μ_{V,t}(θ) dt + g(V_t, θ) dW_{2,t},   E[W_{1,t} W_{2,t}] = 0,
where the vector θ parameterizes μ_{V,t}(·) and g(·);
(iv) a stochastic volatility process with leverage:
dV_t = μ_{V,t}(θ) dt + g(V_t, θ) dW_{2,t},   E[W_{1,t} W_{2,t}] = ρ ≠ 0.
Evidently, the volatility process is quite general, although we do not consider jumps in volatility.
Now, define,
Pr(N_{t+Δ} − N_t = 1 | F_t) = λ_t Δ + o(Δ),
Pr(N_{t+Δ} − N_t = 0 | F_t) = 1 − λ_t Δ + o(Δ),
and
Pr(N_{t+Δ} − N_t > 1 | F_t) = o(Δ),
where λ t characterizes the jump intensity. The jump size, Z t , is identically and independently distributed with density f ( z ; γ ) , where γ parameterizes the jump size distribution. Equation (8) implies that we rule out infinite-activity jumps.
When constructing the fixed time span realized jump tests discussed in the sequel, we remain agnostic about the jump generating process. However, for the case of our long time span jump intensity tests, we must provide a moderate amount of additional structure. This is one of the key trade-offs associated with using either variant of test. In particular, following Corradi et al. (2018), we consider two cases. First, N t , which is the number of jump arrivals up to t, follows a counting process, such as the widely used Poisson process. In this case, λ t = λ , for all t. Second, jumps may be “self-exciting”, in the sense that the jump intensity follows Hawkes diffusion (see Bowsher (2007) and Aït-Sahalia et al. (2015)), with
dλ_t = a(λ − λ_t) dt + β dN_t,
where λ ≥ 0, β ≥ 0, a > 0, and a > β, so that the process is mean reverting with E[λ_t] = aλ/(a − β). As noted in Corradi et al. (2018), if λ = 0, then E[λ_t] = 0, which implies that λ_t = 0, a.s., for all t. This, in turn, implies that N_t = 0, a.s., for all t. As a result, β, a and γ are in this case all unidentified. On the other hand, if λ > 0, then β and γ are identified; but if β = 0, a is not identified. These observations highlight the importance of pretesting for λ = 0 against λ > 0, in order to obtain consistent estimation of parameters in the case of Hawkes diffusions.
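To make the self-exciting case concrete, the following minimal sketch simulates a Hawkes-type intensity of this form on a discrete Euler grid and compares its sample mean with the stationary mean aλ/(a − β); the parameter values and step size are illustrative assumptions, not settings used in the paper.

```python
import numpy as np

# Minimal sketch: simulate d(lambda_t) = a*(lam - lambda_t)*dt + beta*dN_t on an
# Euler grid and compare the sample average of lambda_t with its stationary mean
# a*lam/(a - beta).  All parameter values below are illustrative assumptions.
rng = np.random.default_rng(0)

a, beta, lam = 2.0, 1.0, 0.2          # mean reversion, self-excitation, baseline
T, dt = 5000.0, 1.0 / 78.0            # time span in days, 5-min step
n = int(T / dt)

intensity = np.empty(n)
intensity[0] = a * lam / (a - beta)   # start at the stationary mean
jumps = 0
for k in range(1, n):
    dN = rng.random() < intensity[k - 1] * dt                  # jump arrival indicator
    intensity[k] = intensity[k - 1] + a * (lam - intensity[k - 1]) * dt + beta * dN
    jumps += dN

print("sample mean of lambda_t       :", intensity.mean())
print("stationary mean a*lam/(a-beta):", a * lam / (a - beta))
print("observed jumps per unit time  :", jumps / T)
```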
In light of the above discussion, we are interested in testing
H_0 : λ = 0
versus
H_A : λ > 0,
where λ is the constant jump intensity, in the case of Poisson-type jumps; and is the expectation of the stochastic jump intensity (i.e., λ = E λ t ), in the case of self-exciting jumps.7 This is a nonstandard inference problem because, under H 0 , some parameters are not identified, and a parameter lies on the boundary of the null parameter space.
Before discussing the tests that are compared in our Monte Carlo experiments, we first provide some heuristic motivation for long time span jump testing. This discussion follows Corradi et al. (2014).

3. Heuristic Discussion

In recent years, a large variety of tests for realized jumps have been proposed and studied. One common feature of the preponderance of these tests is that they are carried out using high-frequency observations over a fixed time span and are justified by in-fill asymptotic theorems. Therefore, they have power against realized jumps over fixed time spans, and none are consistent against the alternative of λ > 0. This inconsistency issue has also been pointed out and illustrated by Huang and Tauchen (2005) and Aït-Sahalia and Jacod (2009), among others. Many of the fixed time span tests can be considered as Hausman-type tests in which a comparison of two realized measures of the integrated volatility is made, where one is robust to jumps, and one is not. Under the null of no jumps, both consistently estimate the integrated volatility. Under the alternative of jumps, however, the consistency of the non-robust realized measure fails. Instead, it estimates the total quadratic variation, which contains the contribution from jump components. As a result, these two realized measures differ in the presence of jumps. In general, Hausman-type tests are designed to detect whether Σ_{j=N_t}^{N_{t+1}} c_j² = 0 or Σ_{j=N_t}^{N_{t+1}} c_j² > 0, where N_t denotes the number of jumps up to time t, and c_j is the (random) size of the jumps. However, λ > 0 does not imply that Σ_{j=N_t}^{N_{t+1}} c_j² > 0, given that Pr(N_{t+1} − N_t > 0) < 1. Therefore, such tests have power against realized jumps, but not necessarily against positive jump intensity.
Two techniques are often employed in practice to construct the jump-robust realized measures. The first uses multipower variations, such as bipower variation or tripower variation. Under these measures, the effect of jumps is asymptotically “removed” by using the product of consecutive high-frequency observations. The second uses jump thresholding, which allows for the separation of jump and continuous components, based on the difference between their orders of magnitude (see, e.g., Mancini (2009) and Corsi et al. (2010)). Recent higher order power variation tests are motivated by the fact that for p > 2, Σ_{i=1}^{n−1} |ln X_{t+(i+1)Δ} − ln X_{t+iΔ}|^p converges to Σ_{t≤s≤t+1} |ln X_s − ln X_{s−}|^p, which is strictly positive if there are jumps, and zero otherwise (see, e.g., Aït-Sahalia and Jacod (2009) and Aït-Sahalia et al. (2012)). In this case, however, test power again derives from realized jumps, and not from positive jump intensity.
Additionally, other recent tests related to those discussed above have been proposed that are based on comparisons using pre-averaged volatility measures, in order to obtain tests that are robust to microstructure noise (see, e.g., Podolskij and Vetter (2009a, 2009b) and Aït-Sahalia et al. (2012)).
In the Monte Carlo and empirical experiments reported in this paper, we consider three fixed time span tests based on multipower variation, jump thresholding and higher order power variation, respectively.
Generally, jump tests performed over a fixed time span are designed to distinguish between:
Ω_{t,l}^C = {ω : s ↦ ln X_s(ω) is such that Δ ln X_s ≡ ln X_s(ω) − ln X_{s−}(ω) = 0, for all s ∈ [t, t+l)} and Ω_{t,l}^J = {ω : s ↦ ln X_s(ω) is such that Δ ln X_s ≡ ln X_s(ω) − ln X_{s−}(ω) ≠ 0, for some s ∈ [t, t+l)},
where l indicates a fixed time span. Hence, all of the tests discussed above are dependent upon pathwise behavior. Clearly, one might decide in favor of Ω t , l C , even if λ > 0 , simply because jumps are by coincidence absent over the interval [ t , t + l ) . Now, in order to carry out a consistent test against positive jump intensity, two approaches may be used. First, one may consider the following joint hypothesis:
Ω_T^C = ∩_{t=0}^{T−1} Ω_{t,l}^C, as T → ∞,
versus its negation. Here, the objective is to test the joint null hypothesis that none of the fixed-span sample paths contain jumps. In fact, under mild conditions on the degree of heterogeneity of the process, failure to reject Ω^C = lim_{T→∞} ∩_{t=0}^{T−1} Ω_{t,l}^C implies failure to reject λ = 0. The difficulty lies in how to implement a test for Ω_T^C when T gets large. Needless to say, sequential application of fixed time span jump tests leads to sequential test bias, and for T large, Ω_T^C is rejected with probability approaching unity. This is because the empirical size of the joint hypothesis test based on the sequential strategy is α̂_T = 1 − ∏_{i=1}^T (1 − α̂_i), where α̂_i is the empirical size of the i-th individual fixed time span test. As a result,
lim_{T→∞} α̂_T = lim_{T→∞} [1 − ∏_{i=1}^T (1 − α̂_i)] = 1 − lim_{T→∞} ∏_{i=1}^T (1 − α̂_i) = 1.
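As a quick numerical illustration of this formula, the snippet below evaluates 1 − (1 − α)^T when each daily test has exact size α = 0.05; the resulting figures are pure arithmetic, not the simulation results reported later in the paper.

```python
# Overall size of a sequence of daily tests, each with exact size alpha:
#   alpha_T = 1 - (1 - alpha)**T
alpha = 0.05
for T in (1, 5, 22, 50, 250, 500):
    print(f"T = {T:3d} days: joint size = {1 - (1 - alpha) ** T:.3f}")
```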
In our Monte Carlo simulations, we illustrate this issue under a set of experiments conducted with an increasing time span. One common approach to this problem is based on controlling the overall Family-Wise Error Rate (FWER), which ensures that no single hypothesis is rejected at a level larger than a fixed value, say α . This is typically accomplished by sorting individual p-values, and using a rejection rule which depends on the overall number of hypotheses. For further discussion, see Holm (1979), who develops modified Bonferroni bounds, White (2000), who develops the so-called “reality check”, and Romano and Wolf (2005), who provide a refinement of the reality check. However, when the number of hypotheses in the composite grows with the sample size, the null will (almost) never be rejected. In other words, approaches based on the FWER are quite conservative.
An alternative approach, which allows for the number of hypotheses in the composite to grow to infinity, is based on the Expected False Discovery Rate (E-FDR). When using this approach, one controls the expected number of false discoveries (rejections). For further discussion, see Benjamini and Hochberg (1995) and Storey (2003). Although the E-FDR approach applies to the case of a growing number of hypotheses, it is very hard to implement in the presence of generic dependence across p-values, as in our context.
The above discussion, when coupled with issues of identification and test consistency, provides ample impetus for using long time span jump tests of the variety discussed in the sequel. Still, it should be noted that researchers have shown good performance of fixed time span tests over a day or a week, and we provide further evidence on this front in our Monte Carlo experiments. However, almost no one considers performing the tests over a year or even a decade. The only exception that we are aware of is Huang and Tauchen (2005). They propose using “full-sample statistics” based on BNS test statistics. They show that when the time span is long, the BNS test over-rejects the null of no realized jumps, since the approximation error on a short interval accumulates as the time span increases. Consequently, the empirical size is biased upwards.

4. Long Time Span Jump Intensity Test

Assume the existence of a sample of n observations over an increasing time span, T, and a shrinking discrete interval, Δ, so that n = T/Δ, with T → ∞ and Δ → 0. Our interest lies in the following hypotheses:
H_0 : λ = 0
versus
H_A = H_A^(1) ∪ H_A^(2) : {λ > 0 and E[(Z_t − E[Z_t])³] ≠ 0} ∪ {λ > 0 and E[(Z_t − E[Z_t])³] = 0}.
Notice that the alternative hypothesis is the union of two different alternatives, designed to allow for both symmetric and asymmetric jump size densities. This setup characterizes the CSS jump test of Corradi et al. (2018) and the CSS1 test of Corradi et al. (2014). Moreover, both tests are based on statistics that are functions of realized third moments, or tricity.8 However, we discuss only the CSS1 test in detail in the sequel, as the working paper in which that test appears will not be published, while Corradi et al. (2018) has been published. Their CSS test, however, differs in a key respect: namely, it relies on jump thresholding.
Let Y_{kΔ} = ln X_{kΔ} − (Δ/T) Σ_{k=2}^n ln X_{kΔ}, and Y_{(k−1)Δ} = ln X_{(k−1)Δ} − (Δ/T) Σ_{k=2}^n ln X_{(k−1)Δ}. In addition, let
λ̂_{T,Δ} = (1/T) Σ_{k=2}^n (Y_{kΔ} − Y_{(k−1)Δ})³.
Here, λ̂_{T,Δ} is the demeaned sample third moment. Consider the statistic
CSS1_{T,Δ} = (√T/Δ) λ̂_{T,Δ}.
The asymptotic behavior of CSS1_{T,Δ} can be analyzed under the following assumption.
Assumption A1.
(i) ln X_t is generated by Equation (1) and V_t is defined in Equations (2), (3), or (4). (ii) ln X_t is generated by Equation (1) and V_t is defined in Equation (5). For C a generic constant, (iii) E[V_t^k] ≤ C, for k ≤ 3; (iv) N_t satisfies Equations (6)–(8), and λ_t is either constant, or satisfies Equation (9); (v) the jump size, Z_t, is independently and identically distributed, and E[|Z_t|^k] ≤ C, for k ≤ 6.
Corradi et al. (2014) show that, under Assumptions 1(i) and (iii)–(v), and assuming that as n → ∞, T → ∞ and Δ → 0, then under H_0: CSS1_{T,Δ} →_d N(0, ω_0), with ω_0 = 15E[V_{kΔ}³] + 4E[V_{kΔ}]³ − 12E[V_{kΔ}]E[V_{kΔ}²]. In addition, they prove that under H_A^(1), there exists an ε > 0 such that lim_{T→∞, Δ→0} Pr(|(Δ/√T) CSS1_{T,Δ}| > ε) = 1; and under H_A^(2), there exists an ε > 0 such that lim_{T→∞, Δ→0} Pr(|Δ CSS1_{T,Δ}| > ε) = 1.
It follows immediately that CSS1_{T,Δ} converges to a normal random variable under the null hypothesis, diverges at rate √T/Δ under the alternative of asymmetric jumps, and diverges at the slower rate of 1/Δ under the alternative of symmetric jumps.
Given that the variance is of a different order of magnitude under the null and under each alternative, the “standard” nonparametric bootstrap is not asymptotically valid. This issue arises because the variance of the bootstrap statistic mimics the sample variance. This implies that the bootstrap statistic is of order 1/Δ under the alternative. This would not be a problem under H_A^(1), since the statistic is of order √T/Δ, but it is a problem under H_A^(2), since the actual and bootstrap statistics would be of the same order. To ensure power against H_A^(2), it suffices to ensure that the bootstrap statistic is of a smaller order than the actual statistic. This can be accomplished by resampling observations over a rougher grid, Δ̃, using the same time span, T.
Set the new discrete interval to be Δ̃, such that Δ/Δ̃ → 0, and resample, with replacement, Y*_{kΔ̃} − Y*_{(k−1)Δ̃}, ..., Y*_{ñΔ̃} − Y*_{(ñ−1)Δ̃} from Y_{kΔ̃} − Y_{(k−1)Δ̃}, ..., Y_{ñΔ̃} − Y_{(ñ−1)Δ̃}, where ñ = T/Δ̃. Now, let
λ̃_{T,Δ̃} = (1/T) Σ_{k=2}^{ñ} (Y_{kΔ̃} − Y_{(k−1)Δ̃})³,
and
λ̃*_{T,Δ̃} = (1/T) Σ_{k=2}^{ñ} (Y*_{kΔ̃} − Y*_{(k−1)Δ̃})³.
Further, define the bootstrap statistic as
CSS1*_{T,Δ̃} = (√T/Δ̃) (λ̃*_{T,Δ̃} − λ̃_{T,Δ̃}).
Finally, let c*_{α/2,B,Δ,Δ̃} and c*_{(1−α/2),B,Δ,Δ̃} be the (α/2)-th and (1 − α/2)-th critical values of the empirical distribution of CSS1*_{T,Δ̃}, constructed using B bootstrap replications. Corradi et al. (2014) show that, under Assumptions 1(i) and (iii)–(v), and assuming that as n → ∞, B → ∞, T → ∞, Δ → 0, Δ̃ → 0 and Δ/Δ̃ → 0, then under H_0:
lim_{T,B→∞, Δ,Δ̃→0} Pr(c*_{α/2,B,Δ,Δ̃} ≤ CSS1_{T,Δ} ≤ c*_{(1−α/2),B,Δ,Δ̃}) = 1 − α;
and under H_A^(1) ∪ H_A^(2):
lim_{T,B→∞, Δ,Δ̃→0} Pr(c*_{α/2,B,Δ,Δ̃} ≤ CSS1_{T,Δ} ≤ c*_{(1−α/2),B,Δ,Δ̃}) = 0.
It is immediate to see that rejecting the null whenever (√T/Δ) λ̂_{T,Δ} < c*_{α/2,B,Δ,Δ̃} or (√T/Δ) λ̂_{T,Δ} > c*_{(1−α/2),B,Δ,Δ̃}, and otherwise failing to reject, delivers a test with asymptotic size equal to α and asymptotic power equal to unity. Note that the bootstrap statistic is of P*-probability order 1/Δ̃ under both H_A^(1) and H_A^(2), while the actual statistic is of P-probability order √T/Δ under H_A^(1) and 1/Δ under H_A^(2). Hence, the condition that Δ/Δ̃ → 0 ensures unit asymptotic power under H_A^(2).
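The sketch below illustrates the CSS1 statistic and the coarser-grid bootstrap just described, for a vector of high-frequency returns; the choices of Δ̃, B, and the volatility level in the usage example are illustrative assumptions, and the helper names are ours rather than the authors'.

```python
import numpy as np

def css1_stat(returns, T, delta):
    # CSS1_{T,Delta} = (sqrt(T)/Delta) * lambda_hat, where lambda_hat is the
    # (1/T)-scaled sum of cubed, demeaned high-frequency returns.
    y = returns - returns.mean()
    return np.sqrt(T) / delta * np.sum(y ** 3) / T

def css1_bootstrap_cvs(returns, T, delta, delta_tilde, B=499, alpha=0.10, seed=0):
    # Resample demeaned returns on the coarser grid delta_tilde (with
    # delta/delta_tilde -> 0) and return the (alpha/2, 1 - alpha/2) quantiles
    # of the centered, rescaled bootstrap statistic.
    rng = np.random.default_rng(seed)
    step = int(round(delta_tilde / delta))            # fine returns per coarse interval
    n_tilde = len(returns) // step
    coarse = returns[: n_tilde * step].reshape(n_tilde, step).sum(axis=1)
    y = coarse - coarse.mean()
    lam_tilde = np.sum(y ** 3) / T
    stats = np.empty(B)
    for b in range(B):
        y_star = rng.choice(y, size=n_tilde, replace=True)
        stats[b] = np.sqrt(T) / delta_tilde * (np.sum(y_star ** 3) / T - lam_tilde)
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Illustrative usage on simulated Gaussian 5-min returns (no jumps):
T, delta, delta_tilde = 100, 1 / 78, 1 / 6
rng = np.random.default_rng(1)
rets = rng.normal(0.0, np.sqrt(2e-4 * delta), size=int(round(T / delta)))
stat = css1_stat(rets, T, delta)
lo, hi = css1_bootstrap_cvs(rets, T, delta, delta_tilde)
print("CSS1 =", stat, " reject no-jumps null:", (stat < lo) or (stat > hi))
```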
The CSS1 test is not robust to leverage. In particular, the results presented above rely on the fact that, under the null of no jumps, returns are symmetrically distributed. More precisely, all results are derived under the assumption that E[(Y_{kΔ} − Y_{(k−1)Δ})³] = 0 whenever there are no jumps. However, in the presence of leverage (i.e., when V_t is generated as in Equation (5)), E[(∫_{(k−1)Δ}^{kΔ} V_s^{1/2} dW_{1,s})³] ≠ 0, and is instead of order Δ². For example, if V_t is generated by a square root process (i.e., dV_t = κ(θ − V_t) dt + η V_t^{1/2} dW_{2,t}), then E[(Y_{kΔ} − Y_{(k−1)Δ})³] = λ E[(Z_t − E[Z_t])³] Δ + (ηθρ/(2κ)) Δ² (see Aït-Sahalia et al. (2015)). Although the contribution to the third moment of the asymmetric jump component is of a larger order than that of the leverage component, inference based on the comparison of CSS1_{T,Δ} with the bootstrap critical values c*_{α/2,B,Δ,Δ̃} and c*_{(1−α/2),B,Δ,Δ̃} will lead to rejection of the null of no jumps, even if the null is true. To avoid spurious rejection due to the presence of leverage, use the following modified statistic:
CSS1˜_{T,Δ} = (1/T^{1/2+ε}) CSS1_{T,Δ},
with ε > 0 arbitrarily small. For this test statistic, Corradi et al. (2014) show that, if Assumptions 1(ii)–(v) hold, and assuming that as n → ∞, T → ∞, Δ → 0, Δ̃ → 0, and (T^{1/2+ε} Δ)/Δ̃ → 0, then under H_0:
lim_{T,B→∞, Δ,Δ̃→0} Pr(c*_{α/2,B,Δ,Δ̃} ≤ CSS1˜_{T,Δ} ≤ c*_{(1−α/2),B,Δ,Δ̃}) = 1;
and under H_A^(1) ∪ H_A^(2):
lim_{T,B→∞, Δ,Δ̃→0} Pr(c*_{α/2,B,Δ,Δ̃} ≤ CSS1˜_{T,Δ} ≤ c*_{(1−α/2),B,Δ,Δ̃}) = 0.
It follows that inference based on the comparison of CSS1˜_{T,Δ} with the bootstrap critical values c*_{α/2,B,Δ,Δ̃} and c*_{(1−α/2),B,Δ,Δ̃} delivers a test with zero asymptotic size and unit asymptotic power.
There are two major differences between the CSS test developed in Corradi et al. (2018) and the CSS1-type tests outlined above. First, instead of using an adjustment term as in Equation (11) to account for the leverage effect, Corradi et al. (2018) use a longer time span, T⁺, with κ = T⁺/T, to eliminate the leverage effect. Specifically, now assume the existence of a sample of n⁺ observations over an increasing time span, T⁺, and a shrinking discrete interval Δ, so that n⁺ = T⁺/Δ, with T⁺ → ∞ and Δ → 0. Define n = T/Δ = n⁺ − (T⁺ − T)/Δ, with T⁺/T → ∞. Let
μ̂_{T,Δ} = (1/T) Σ_{k=1}^{n−1} (ln X_{(k+1)Δ} − ln X_{kΔ} − (ln X_{nΔ} − ln X_Δ)/n)³
− (1/T⁺) Σ_{k=1}^{n⁺−1} (ln X_{(k+1)Δ} − ln X_{kΔ} − (ln X_{n⁺Δ} − ln X_Δ)/n⁺)³ 1{|ln X_{(k+1)Δ} − ln X_{kΔ}| ≤ αΔ^ϖ},
where the second term on the right-hand side corrects for the leverage effect.
Define the statistic,
ĈSS_{T,Δ} = (√T/Δ) μ̂_{T,Δ}.
The ĈSS_{T,Δ} statistic is evidently closely related to the CSS1_{T,Δ} statistic in Equation (10).
Second, instead of using a rescaled bootstrap to account for the fact that the variance of the statistic is of a different order of magnitude under the null and under each alternative, Corradi et al. (2018) introduce a threshold variance estimator that consistently estimates the variance of ĈSS_{T,Δ} under the null and remains bounded in probability under the union of the alternatives. Specifically, define
σ̂²_{T,Δ} = (1/(TΔ²)) Σ_{k=1}^{n−1} ((ln X_{(k+1)Δ} − ln X_{kΔ} − (ln X_{nΔ} − ln X_Δ)/n)³)² 1{|ln X_{(k+1)Δ} − ln X_{kΔ}| ≤ αΔ^ϖ};
then the following t-statistic can be used for inference,
t̂_{T,Δ} = ĈSS_{T,Δ} / σ̂_{T,Δ}.
Corradi et al. (2018) show that, under certain assumptions, t̂_{T,Δ} converges to a standard normal variable under the null, diverges at rate √T/Δ under H_A^(1), and diverges at the slower rate of 1/Δ under H_A^(2).
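A minimal sketch of this thresholded t-statistic is given below, assuming a log-price sample over [0, T] that is the initial segment of a longer sample over [0, T⁺]; the threshold calibration and the exact form of the variance estimator follow the reconstruction above and are assumptions rather than the authors' exact implementation.

```python
import numpy as np

def css_tstat(logp, logp_plus, T, T_plus, delta, alpha_thr, varpi=0.49):
    # Sketch of the thresholded CSS t-statistic:
    #   logp      : log prices over the span T (unthresholded tricity term)
    #   logp_plus : log prices over the longer span T_plus (leverage correction)
    #   alpha_thr * delta**varpi should be a few standard deviations of a
    #   continuous increment (a calibration assumption).
    thr = alpha_thr * delta ** varpi

    r = np.diff(logp)
    mu_hat = np.sum((r - r.mean()) ** 3) / T

    rp = np.diff(logp_plus)
    keep_p = np.abs(rp) <= thr                     # continuous increments only
    mu_hat -= np.sum(((rp - rp.mean()) ** 3)[keep_p]) / T_plus

    # Thresholded variance estimator (sketch): squared cubes of the continuous
    # increments, scaled by 1/(T * delta**2) to target the null variance of CSS.
    keep = np.abs(r) <= thr
    sigma2 = np.sum(((r - r.mean()) ** 6)[keep]) / (T * delta ** 2)

    return (np.sqrt(T) / delta * mu_hat) / np.sqrt(sigma2)

# Illustrative usage with T = 50, T_plus = 500 days of simulated Brownian log prices:
delta, T, T_plus = 1 / 78, 50, 500
rng = np.random.default_rng(2)
lp_plus = np.cumsum(rng.normal(0.0, np.sqrt(2e-4 * delta), int(round(T_plus / delta))))
a_thr = 4 * np.sqrt(2e-4 * delta) / delta ** 0.49          # about 4 increment std devs
print("t-stat:", css_tstat(lp_plus[: int(round(T / delta))], lp_plus, T, T_plus, delta, a_thr))
```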

5. Fixed Time Span Realized Jump Tests

In this section, we briefly review three fixed time span realized jump tests that are evaluated in our Monte Carlo and empirical experiments. These tests are the ASJ, BNS and PZ tests discussed above.

5.1. Aït-Sahalia and Jacod (ASJ: 2009) Test

Aït-Sahalia and Jacod (2009) propose a jump test based on calculating the ratio between two realized higher order power variations with different sampling intervals, Δ and qΔ, respectively, where q is an integer chosen prior to test construction. The p-th order realized power variation is defined as follows:
B̂(p, Δ) = Σ_{k=2}^{⌊1/Δ⌋} |ln X_{kΔ} − ln X_{(k−1)Δ}|^p,
where ⌊χ⌋ indicates the integer part of χ.
The ratio between two realized power variations with different sampling intervals is then,
Ŝ(p, q, Δ) = B̂(p, qΔ) / B̂(p, Δ).
The test statistic is defined as,
ASJ = (q^{p/2−1} − Ŝ(p, q, Δ)) / √(V_n^c),
where the denominator, V_n^c, can be estimated using a truncated estimator,
V̂_n^c = Δ Â(2p, Δ) M(p, q) / Â(p, Δ)²,
where
M(p, q) = (1/μ_p²) (q^{p−2}(1 + q) μ_{2p} + q^{p−2}(q − 1) μ_p² − 2 q^{p/2−1} μ_{q,p}),
and Â(p, Δ) is defined as follows:
Â(p, Δ) = (Δ^{1−p/2}/μ_p) Σ_{k=2}^{⌊1/Δ⌋} |ln X_{kΔ} − ln X_{(k−1)Δ}|^p 1{|ln X_{kΔ} − ln X_{(k−1)Δ}| ≤ αΔ^ϖ},
where α > 0, ϖ ∈ (0, 1/2), and αΔ^ϖ functions as a threshold separating the continuous component from the jump component. Alternatively, V_n^c can be estimated using a multipower variation estimator,
Ṽ_n^c = Δ M(p, q) Ã(p/(p+1), 2p+2, Δ) / Ã(p/(p+1), p+1, Δ)²,
where
Ã(r, l, Δ) = (Δ^{1−lr/2}/μ_r^l) Σ_{k=l}^{⌊1/Δ⌋−l+1} ∏_{j=0}^{l−1} |ln X_{(k+j)Δ} − ln X_{(k+j−1)Δ}|^r,
and
μ_r = E(|U|^r) and μ_{q,p} = E(|U|^p |U + √(q−1) V|^p),
for U, V i.i.d. N(0, 1).
In practice, for any significance level α, if ASJ > Z_α, where Z_α is the (1 − α)-th quantile of the standard normal distribution, one rejects the null of no jumps on the fixed interval [0, 1].
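A compact sketch of the ASJ ratio statistic, using the truncated estimator of V_n^c, is given below; the defaults p = 4, q = 2, the return-scale threshold, and the Monte Carlo evaluation of μ_{q,p} are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from math import gamma, sqrt, pi

def asj_stat(logp, delta, p=4, q=2, alpha_thr=None, varpi=0.49, nsim=200_000, seed=0):
    # Sketch of the ASJ ratio statistic with a truncated estimator of V_n^c.
    rets = np.diff(logp)
    r = np.abs(rets)
    thr = alpha_thr * delta ** varpi if alpha_thr is not None else 5.0 * rets.std()

    def mu(k):                      # E|U|^k for U ~ N(0, 1)
        return 2 ** (k / 2) * gamma((k + 1) / 2) / sqrt(pi)

    def B(power, step):             # realized power variation at interval step * delta
        return np.sum(np.abs(np.diff(logp[::step])) ** power)

    def A(power):                   # truncated (continuous-part) power variation
        kept = r[r <= thr]
        return delta ** (1 - power / 2) / mu(power) * np.sum(kept ** power)

    S_hat = B(p, q) / B(p, 1)       # ratio of power variations at q*delta and delta

    rng = np.random.default_rng(seed)
    U, V = rng.standard_normal((2, nsim))
    mu_qp = np.mean(np.abs(U) ** p * np.abs(U + sqrt(q - 1) * V) ** p)
    M = (q ** (p - 2) * (1 + q) * mu(2 * p)
         + q ** (p - 2) * (q - 1) * mu(p) ** 2
         - 2 * q ** (p / 2 - 1) * mu_qp) / mu(p) ** 2

    V_hat = delta * A(2 * p) * M / A(p) ** 2
    return (q ** (p / 2 - 1) - S_hat) / np.sqrt(V_hat)

# One rejects the fixed-span null of no jumps when asj_stat(logp, delta) > Z_alpha.
```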

5.2. Barndorff-Nielsen and Shephard (BNS: 2006) Test

The Barndorff-Nielsen and Shephard (2006) test compares two estimators of integrated volatility, one of which is robust to jumps and one of which is not, in order to test for jumps on a particular sample path. Barndorff-Nielsen and Shephard (2004) introduce the realized bipower variation (BPV), which is a jump-robust estimator of integrated volatility. Namely, they consider
BPV = (π/2)(m/(m−1)) Σ_{k=2}^{⌊1/Δ⌋} |ln X_{(k+1)Δ} − ln X_{kΔ}| |ln X_{kΔ} − ln X_{(k−1)Δ}|,
where m = 1/Δ. Realized BPV is a special case of the following realized multipower variation with p = 2:
MPV(p) = μ_{2/p}^{−p} (m/(m−p+1)) Σ_{k=p}^{⌊1/Δ⌋−p+1} ∏_{j=0}^{p−1} |ln X_{(k+j)Δ} − ln X_{(k+j−1)Δ}|^{2/p}.
In this paper, we analyze the following version of their test statistic:
BNS = Δ^{−1/2} (1 − BPV/RV) / √( ((π/2)² + π − 5) max(1, TPV/(BPV)²) ),
where RV is the realized volatility (i.e., the sum of squared high-frequency returns), and TPV is tripower variation (i.e., MPV(3)).
The authors prove that under the null, BNS →_d N(0, 1). As a result, one rejects the null of no jumps on some fixed interval [0, 1] if the test statistic BNS > Z_α.
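The sketch below computes a ratio statistic of this type from a vector of intraday log prices; following Huang and Tauchen (2005), the studentization here uses tripower quarticity, so the finite-sample scaling may differ slightly from the expression above, and the variable names are ours.

```python
import numpy as np
from math import gamma, sqrt, pi

def bns_stat(logp, delta):
    # Sketch of a BNS/Huang-Tauchen-type ratio statistic: (1 - BPV/RV)
    # studentized with tripower quarticity.  Here delta is the sampling
    # interval as a fraction of the span being tested (e.g., 1/78 for one
    # day of 5-minute returns).
    r = np.diff(logp)
    a = np.abs(r)
    m = len(r)
    mu1 = sqrt(2.0 / pi)                              # E|U|
    mu43 = 2 ** (2 / 3) * gamma(7 / 6) / sqrt(pi)     # E|U|^(4/3)

    rv = np.sum(r ** 2)
    bpv = mu1 ** -2 * (m / (m - 1)) * np.sum(a[1:] * a[:-1])
    tpq = (mu43 ** -3 * (m / (m - 2)) / delta
           * np.sum((a[2:] * a[1:-1] * a[:-2]) ** (4 / 3)))

    theta = (pi / 2) ** 2 + pi - 5
    return (1 - bpv / rv) / np.sqrt(delta * theta * max(1.0, tpq / bpv ** 2))

# One rejects the null of no jumps on the tested span when bns_stat(logp, delta) > Z_alpha.
```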

5.3. Podolskij and Ziggel (PZ: 2010) Test

Podolskij and Ziggel (2010) modify the original truncated power variation statistic proposed in Mancini (2009) by introducing a sequence of positive i.i.d. random variables {η_k}, k = 1, ..., ⌊1/Δ⌋, with expectation one and finite variance. Namely, they consider
T(ln X, p) = Δ^{1−p/2} Σ_{k=2}^{⌊1/Δ⌋} |ln X_{kΔ} − ln X_{(k−1)Δ}|^p (1 − η_k 1{|ln X_{kΔ} − ln X_{(k−1)Δ}| ≤ αΔ^ϖ}).
The test statistic that they propose has the following form,
PZ = T(ln X, p) / √( Var*(η) Â(2p, Δ) ),
where Â(2p, Δ) is the original truncated power variation in Equation (17). The authors prove that, under the null of no jumps, PZ →_d N(0, 1), and that the statistic explodes under the alternative. As a result, one rejects the null if PZ > Z_α.
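Below is a minimal sketch of a perturbed-truncation statistic of this type; the choice of η_k ∈ {1 − τ, 1 + τ}, the threshold, and the studentization by the conditional standard deviation of the perturbation term are illustrative assumptions, and the published statistic's exact normalizing constants may differ.

```python
import numpy as np

def pz_stat(logp, delta, p=4, alpha_thr=None, varpi=0.49, tau=0.1, seed=0):
    # Sketch of a Podolskij-Ziggel-type perturbed truncation statistic.
    rng = np.random.default_rng(seed)
    rets = np.diff(logp)
    r = np.abs(rets)
    thr = alpha_thr * delta ** varpi if alpha_thr is not None else 5.0 * rets.std()

    eta = 1.0 + tau * rng.choice([-1.0, 1.0], size=len(r))   # mean one, variance tau**2
    small = r <= thr                                          # increments kept as "continuous"

    t_num = delta ** (1 - p / 2) * np.sum(r ** p * (1.0 - eta * small))
    cond_sd = np.sqrt(tau ** 2 * delta ** (2 - p) * np.sum((r ** p * small) ** 2))
    return t_num / cond_sd

# One rejects the null of no jumps on the tested span when pz_stat(logp, delta) > Z_alpha.
```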

6. Monte Carlo Simulations

In this section, we report the results of Monte Carlo experiments used to analyze the finite sample properties of the tests introduced above. The simulations are designed to show: (i) the relevance of inconsistency of the fixed time span tests, when tested against non-zero jump intensity in the underlying DGP; (ii) the relevance (or lack thereof) of sequential testing bias when performing daily jump tests, sequentially, along sample paths with a long time span; (iii) the empirical size and power of fixed time span jump tests when applied directly to samples with long time spans; and (iv) the finite sample properties of the CSS1, CSS1˜ and CSS tests.
The DGP under the null hypothesis in all simulations is the following stochastic volatility model,
ln X_t = ln X_0 + ∫_0^t μ̄ ds + ∫_0^t σ_s dW_s,   σ_t² = σ_0² + κ_σ ∫_0^t (σ̄² − σ_s²) ds + ζ ∫_0^t σ_s dB_s,
where the stochastic volatility follows a square root process.9 Leverage effects are characterized by corr(dW_s, dB_s) = ρ, where ρ ∈ {0, −0.5}. Under the alternative, we simulate jumps as a compound Poisson process. Namely, we add Σ_{i=1}^{N_t} J_i to the null DGP, where N_t is a Poisson process characterized by intensity parameter λ, which determines the frequency of jump arrivals, and J_i is independently and identically drawn from a normal distribution, which characterizes the jump size. All parameter values for the various DGP permutations considered are given in Table 1. Of note is that the parameter values used in our experiments are chosen to lie in regions of the parameter space where the tests shift from having strong finite sample properties to having weaker finite sample properties. Thus, for example, while we broadly mimic the parameterizations used in the extant literature (see, e.g., Huang and Tauchen (2005) and Aït-Sahalia and Jacod (2009)), in some cases our parameters are slightly smaller. For example, Huang and Tauchen (2005) have jump magnitude standard deviation parameters ranging from 0.5 to 2.0, while ours range from 0.25 to 1.25. The sampling frequency in our simulations is 5-min (i.e., 78 observations per day).10 Using the Milstein discretization scheme, we simulate log-price sample paths over T = 500 days, so that there are 39,000 observations in total, for each sample path. Simulation results are calculated based on 1000 replications, and tests are implemented using 0.05 and 0.10 significance levels.
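For concreteness, the sketch below simulates one sample path from a DGP of this type (square-root stochastic volatility discretized with the Milstein scheme, leverage via correlated Brownian increments, and compound Poisson jumps with normal sizes); all parameter values are placeholders rather than the Table 1 settings.

```python
import numpy as np

def simulate_jump_diffusion(T_days=500, m=78, mu=0.0, kappa=0.1, vbar=2e-4,
                            zeta=5e-3, rho=-0.5, lam=0.4, jump_std=5e-3, seed=0):
    # Sketch of the simulation design: square-root stochastic volatility
    # (Milstein scheme for the variance, Euler for the log price), leverage via
    # correlated Brownian shocks, and compound Poisson jumps with normal sizes.
    # All parameter values are placeholders, not those of Table 1.
    rng = np.random.default_rng(seed)
    n, h = T_days * m, 1.0 / m                      # 5-min grid, time in days
    logp = np.empty(n + 1)
    logp[0], v = 0.0, vbar
    for k in range(n):
        z1, z2 = rng.standard_normal(2)
        dW = np.sqrt(h) * z1                                        # price shock
        dB = np.sqrt(h) * (rho * z1 + np.sqrt(1 - rho ** 2) * z2)   # volatility shock
        # Milstein step for the square-root variance process
        v_next = (v + kappa * (vbar - v) * h + zeta * np.sqrt(max(v, 0.0)) * dB
                  + 0.25 * zeta ** 2 * (dB ** 2 - h))
        # Euler step for the log price, plus a compound Poisson jump
        jump = rng.normal(0.0, jump_std) if rng.random() < lam * h else 0.0
        logp[k + 1] = logp[k] + mu * h + np.sqrt(max(v, 0.0)) * dW + jump
        v = max(v_next, 0.0)
    return logp

path = simulate_jump_diffusion()       # one 500-day path with jumps and leverage
```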
Table 2 reports the empirical size of daily fixed time span jump tests. In this table, however, the tests are applied in two different ways. For entries under the “Jump Days” header, the empirical size of the daily tests is reported. One can think of these experiments as reporting rejection frequencies of 500,000 tests (since T = 500 and there are 1000 Monte Carlo replications). For entries under the “Sequential Testing Bias” header, sequences of T tests (corresponding to the length of our daily samples) are run, and overall rejection frequencies across all T tests are reported, where T ranges from 1 to 500 days. Thus, these entries indicate the accumulation of sequential testing bias associated with repeated application of the tests across multiple days. Turning first to the “Jump Days” empirical size results, it is evident that the BNS test is least favorably sized, as expected, while the ASJ and PZ tests are very accurately sized, across all DGPs, with the only exception being that the PZ test is undersized when T = 5. Now, consider the “Sequential Testing Bias” results in the table. As expected, sequential testing bias leads to a 1.000 rejection rate as T increases beyond 50 days, and these rejection rates are achieved surprisingly quickly as T increases, although it is interesting to note that the PZ test suffers from slightly less bias for smaller values of T.11
Table 3 reports the empirical power of daily fixed time span jump tests, defined as the rejection rate of daily jump tests across each individual day in each sample path, averaged across all 1000 replications. As in Table 2, one can think of these experiments as reporting rejection frequencies of 500,000 tests. Interestingly, power is often small, even when λ = 0.4, which is a relatively large value for finite-activity jumps. Among the three jump tests, the ASJ test has the lowest power, while the BNS and PZ tests are somewhat better. In interpreting these results, note that, intuitively speaking, the empirical power of daily jump tests against non-zero jump intensity is largely determined by the magnitude of the jump intensity, since this parameter determines the frequency or probability of jump arrivals. Even if these tests have good power against jumps when they occur, for daily intervals without any jumps, it is not surprising to observe that these tests do not reject the null in favor of non-zero jump intensity. Therefore, as long as the intensity is finite, the probability of jumps not occurring on a particular fixed interval is positive, which in turn affects the empirical power of all fixed time span jump tests. However, the tests are also clearly impacted by jump size magnitude. For example, when σ increases from 0.25 to 1.25 (compare DGPs 3 and 4 with DGPs 5 and 6 for symmetric jumps, or DGPs 7 and 8 with DGPs 9 and 10 for asymmetric jumps), empirical power increases by around 30% under symmetric jumps. The exceptions are the BNS and PZ tests, which show little power improvement in the asymmetric jump case. However, there is still a trade-off between the three tests, as the ASJ test has overall less power in the case of symmetric jumps.
Table 4, Table 5, Table 6 and Table 7 report findings from experiments where the “entire” sample of T days was used in a single application of the fixed time span tests. This testing strategy is of interest because there is no reason that fixed time span tests need be implemented using only one day's worth of data; and when they are implemented in this manner, they constitute a direct alternative to the use of our long time span tests. First, turn to Table 4, where empirical size is reported. Among the three tests, ASJ is the clear winner, with size remaining stable even when T = 500. This is an interesting and surprising result, suggesting the broad usefulness of the ASJ test. The BNS and PZ tests perform as expected, on the other hand. For example, the empirical size of the PZ test approaches unity very quickly, and is already approximately 0.5 even for T = 50, indicating the weak ability of this test to control size as the time span increases. As expected, and as can be seen upon inspection of Table 5, Table 6 and Table 7, the empirical power of all three tests approaches unity quickly as T increases. For example, the empirical power of the ASJ test is over 0.9 for almost all DGPs when T = 50. In summary, the ASJ test is well sized and has great power under long time span testing. This test is thus a clear alternative to the long time span tests discussed in the sequel.
Table 8, Table 9, Table 10, Table 11 and Table 12 contain the results of our experiments run using long time span jump tests (i.e., the CSS1, CSS1˜ and CSS tests). As discussed above, the leverage-robust CSS1 test (i.e., CSS1˜) sacrifices power in order to ensure robustness against leverage effects. Specifically, in Equation (11), due to the extra term, 1/T^{1/2+ε}, the leverage-robust test statistic diverges at rate 1/(T^ε Δ) and 1/(T^{1/2+ε} Δ), under the alternatives of asymmetric and symmetric jumps, respectively. In practice, the sampling interval, Δ, is usually small and fixed, and the test statistic under the alternatives shrinks with an increasing time span, particularly when jumps are symmetric, while the bootstrapped critical values are of order 1/Δ̃. As a result, when constructing the leverage-robust test, we propose a rule-of-thumb called our “T-varying” strategy, in order to choose the subsampling interval, Δ̃, used in bootstrapping (see notes to Table 8 for details). This rule-of-thumb results in improved power in our experiments. However, it is an ad hoc data driven method, and further research into its properties remains to be done. Summarizing, we utilize a coarser Δ̃ as T grows. Some choices of Δ̃ in our experiments can be sub-optimal. For instance, consider Δ̃ = 2 and 3.2 when T = 300 and 500, respectively. However, these choices appear sub-optimal only when T is very long. Thus, the subsamples used are still adequate for bootstrap resampling. Since a coarser value for Δ̃ (i.e., a larger subsampling interval) diminishes the magnitude of our bootstrap critical values, this strategy reduces the magnitude of decreases in power that are due to the adjustment term being inversely proportional to T. The effect of this ad hoc data driven method for improving power on empirical size is found to be negligible, and hence the method is utilized in all of our leverage-robust testing experiments, and later in our empirical analysis.
Turning to the results reported in these tables, first consider the empirical size of the CSS1 and CSS1˜ tests (see Table 8). It is immediately apparent that the CSS1 test has good empirical size for DGP 1 (i.e., the “no leverage” case). However, and as expected, size diverges when there is leverage. Again as expected (see Equation (12)), CSS1˜ has zero empirical size, regardless of the presence of leverage, for values of T greater than 5. Interestingly, when T = 5, the test is approximately correctly sized, thus indicating that our long time span test is an alternative to the short time span tests discussed earlier for small values of T. Of course, T should clearly not be equal to one for the application of the long time span tests. In Table 9, we report the empirical size of the CSS test, for different permutations of T and Δ. We find several interesting results. First, rejection frequencies are lower when κ = T⁺/T is closer to unity, especially for shorter T, given Δ.12 This is not surprising because for T⁺ = T the test statistic is degenerate under the null (see Corradi et al. (2018)). Thus, it is advisable to use a reasonably large value for κ, such as κ = 10, given the assumption that T⁺/T → ∞. Next, consider the results for the case where ρ = 0. The CSS test is correctly sized for T = 60, 70, 80, when Δ = 1/78, and is slightly undersized even for T = 220, when Δ = 1/156. This finding suggests that the ratio of T to Δ is crucial to the finite sample performance of the CSS test. Again, this is not surprising given the key assumption that 1/Δ < T < 1/Δ². Finally, we observe that the test becomes oversized even faster when there is leverage (i.e., the case with ρ = −0.5). However, as long as T and Δ are carefully chosen, the finite sample performance of the CSS test is adequate.
Next, notice in Table 10 that the empirical power of the CSS1 test is good across all cases. This result even holds for the case where jumps are symmetric and σ = 0.25 (i.e., DGPs 3 and 4), where power is still reasonable, although the test is naturally less powerful than in the case where jumps are asymmetric. Turn next to Table 11, where the empirical power of the CSS1˜ test is reported. As expected, empirical power is sacrificed, particularly when jumps are symmetric and σ = 0.25 (i.e., DGPs 3 and 4). However, when σ = 1.25 (i.e., DGPs 5 and 6), this sacrifice is substantially reduced, and power is reasonably good in almost all cases, even when λ is small. Finally, the empirical power of the CSS test is reported in Table 12.13 The CSS test is much more powerful than the CSS1˜ test in all cases. For example, when jumps are less frequent, symmetric and very weak (i.e., DGPs 3 and 4 with λ = 0.1), the power of the CSS test is over 47% when T = 50 and Δ = 1/78, at a 10% nominal level, while the power of the CSS1˜ test is less than 20%. Furthermore, the power of the CSS test increases as T grows, for fixed Δ. In contrast, as discussed above, the power of the CSS1˜ test decreases as T grows, for fixed Δ. Coupled with our earlier findings concerning size, we thus have strong evidence that the CSS1˜ and the CSS tests are adequate tests for evaluating the presence of jumps in long time spans.

7. Empirical Examination of Stock Market Data

7.1. Data

We analyze intraday TAQ stock price data sampled at a 5-min frequency, for the period including observations from the beginning of 2006 through 2013.14 In particular, we examine: (i) 12 individual stocks, including American Express Company (AXP), Bank of America Corporation (BAC), Cisco Systems, Inc. (CSCO), Citigroup Inc. (C), The Coca-Cola Company (KO), Intel Corporation (INTC), JPMorgan Chase & Co. (JPM), Merck & Co., Inc. (MRK), Microsoft Corporation (MSFT), The Procter & Gamble Company (PG), Pfizer Inc. (PFE) and Wal-Mart Stores, Inc. (WMT); (ii) nine sector ETFs, including the Materials Select Sector SPDR ETF (XLB), Energy Select Sector SPDR ETF (XLE), Financial Select Sector SPDR ETF (XLF), Industrial Select Sector SPDR ETF (XLI), Technology Select Sector SPDR ETF (XLK), Consumer Staples Select Sector SPDR ETF (XLP), Utilities Select Sector SPDR ETF (XLU), Health Care Select Sector SPDR ETF (XLV) and Consumer Discretionary Select Sector SPDR ETF (XLY); and (iii) the SPDR S&P 500 ETF (SPY). Overnight returns are excluded from our dataset.

7.2. Empirical Findings

Turning our discussion first to Figure 1 and Figure 2, note that the bar charts in these figures depict annual ratios of jump days for all of our stocks and ETFs, based on application of the ASJ, BNS, and PZ tests (see legend to Figure 1). For example, 0.2 indicates that jumps were found on 20% of the trading days in a given year. As expected, jumps are widely detected in asset prices and indexes over almost every year. Sometimes, the annual percentage of jump days even appears to be inconceivably large, at near 50%. Additionally, while the alternative tests often perform similarly (e.g., all three testing methods find jumps during around 40% of the days in 2006 for XLU and XLP), there are substantial differences for some stocks (e.g., in 2013 the PZ test detects jumps twice as frequently as the other fixed time span tests).
As expected, the ASJ test is the most conservative among the three tests. In almost all cases, the ASJ test detects the fewest “jump days”. For instance, in 2008, the ASJ test finds jumps on only 7.5% of days for XLK, while the PZ and BNS tests find jumps on 17.4% and 22.5% of days, respectively. For SPY, the ASJ test finds around 1/3 as many jumps as the other tests, in 2009. This finding is consistent with evidence from our Monte Carlo experiments (see Table 3). However, even with the most conservative test, we regularly detect jumps on over 15% of days for many assets, including XLV for 2006 through 2010, XLB and XLY in 2006, and XLF and XLI for 2006 and 2007. Additionally, jump-day percentages are generally larger for our sector ETFs than for individual stocks. This is not surprising, considering that the individual stocks that we consider are all much larger than the sector ETFs, in terms of trading volume, for the period from 2006 to 2013. Still, it is also apparent, upon inspection of the figures, that the percentage of jumps detected in our ETFs is declining over time, on an annual basis. This pattern does not characterize individual stocks, however. We conjecture that a possible reason for this is that sector ETFs were not as frequently traded in the early years of our sample. For instance, typical daily trading volume for XLP or XLY was around 1 million shares, including pre-market and after-hours trading volumes, between 2006 and 2008. This volume is around 1% to 10% of the trading volume of BAC, and 0.15% to 2.5% of the trading volume of SPY, over the same period.
We now turn to a discussion of the results tabulated in Table 13, Table 14, Table 15, Table 16, Table 17, Table 18 and Table 19, in which jump test results based on the examination of long time spans are reported for the ASJ, CSS1, CSS1˜ and CSS tests. Tabulated entries are test statistics, and entries marked with *, **, and *** indicate rejections of the “no-jump” null at the 0.10, 0.05, and 0.01 significance levels, respectively. The “long span” considered is one year for the ASJ, CSS1 and CSS1˜ tests, corresponding to the period of time for which annual jump-day ratios were reported in Figure 1 and Figure 2, and is one quarter for the CSS test. Consider first the results of the ASJ test reported in Table 13 for our ETFs. Interestingly, there are various ETFs for which no jumps are found. For example, for XLE, no jumps are found in 2006, 2008, 2010, and 2011. In 2011, no jumps are found for seven of the 10 ETFs. Still, in 2007, jumps are found for all nine ETFs, and in 2008, jumps are found for seven ETFs. Thus, the evidence concerning jumps appears much more nuanced when the ASJ test is applied to long time spans. Of course, however, the discussion above concerning trading volume effects during the early years of our sample still applies. Thus, it is difficult to be sure whether the increase in the frequency of jumps found in earlier years for our sector ETFs is an indicator of the ensuing financial collapse of 2008, or whether this finding is simply an artifact of the data. We leave further investigation of this issue to future research.
Turning to Table 14 and Table 15, which again report on ETFs, note that these tables include results for the CSS1 (Table 14) and CSS1˜ (Table 15) tests. As expected, given our Monte Carlo findings, and assuming the presence of leverage, rejections based on the CSS1 test are not only frequent, but are actually more frequent than rejections based on the ASJ test. Indeed, given the presence of leverage, these results carry little weight. However, we know that the CSS1˜ test performs adequately in the presence of leverage. It is perhaps not surprising, then, that the number of years for which jumps are found decreases substantially when the CSS1˜ test is used, relative to when the CSS1 test is used. Indeed, in Table 15, note that there are many ETFs for which no jumps are found across multiple different years. Still, it should be stressed that while the CSS1˜ test is robust to the presence of leverage, the cost of making it so is a reduction in power, as discussed in the previous sections of this paper. Thus, our conjecture is that the “truth” likely lies somewhere between the results reported based on application of the ASJ and the CSS1˜ tests. Still, either way, it is clear that application of long time span tests results in fewer findings of jumps. It is this feature of the tests that is most intriguing, given its implications for the specification and estimation of diffusion models.
Table 16, Table 17 and Table 18 contain results that are analogous to those reported in Table 13, Table 14 and Table 15, except that individual stocks are analyzed. Interestingly, the test rejection patterns that appear upon inspection of the entries in these tables confirm our above discussion based on the ETF analysis. Namely, there are various years for which no jumps are found based on application of the ASJ test, and this incidence of “non-rejections” increases when one utilizes the CSS1˜ test.
Finally, Table 19 includes results based on the application of the CSS test.15 We observe more rejections in this table, which is not surprising, given the results of our Monte Carlo experiments, which indicate that the CSS test is much more powerful than the CSS1˜ test, even when jumps are infrequent and weak. However, there is a possibility that some of these rejections are spurious, due to the fact that when there is leverage, the test bias associated with the CSS test increases with T, for a fixed sampling frequency, at a faster rate than does the bias associated with the CSS1˜ test. Regardless, when the CSS test is used, we once again find multiple quarters that exhibit no evidence of jumps.
In summary, we conclude that the usual “toolbox” used by financial econometricians might be usefully augmented by including in it long time span jump tests. If application of the CSS1˜ test results in rejection of the no-jumps null hypothesis, then we have very strong evidence of jumps in the DGPs. If application of the CSS1˜ test does not result in rejection, then it is advisable to check this result by applying the ASJ and CSS tests, which are more powerful.

8. Concluding Remarks

In this paper, we carry out a Monte Carlo investigation of long time span jump tests, which are designed to indicate whether the jump intensity in the underlying DGP is identically zero. The finite sample performance of these tests is compared with that of various fixed time span jump tests. We find that the long time span tests have good finite sample properties. However, we also find that fixed time span tests suffer not only from sequential testing bias (as is well documented), but are also severely over-sized when they are directly used to test for jumps with long time spans of data. These results confirm the findings of Huang and Tauchen (2005) that using asymptotic approximations associated with finite time span tests in order to study long time spans of data can lead to test failure. The exception to these findings is the ASJ test of Aït-Sahalia and Jacod (2009), which performs favorably when compared with the Corradi et al. (2014, 2018) type long time span jump tests. In an empirical illustration, we show that all of the jump tests that are designed to be consistent as T → ∞ find a lower prevalence of jumps than when fixed time span jump tests are applied using daily data.

Author Contributions

The two authors contributed equally to this work.

Funding

This research received no external funding.

Acknowledgments

We are grateful to Valentina Corradi, Mervyn Silvapulle, George Tauchen, and Xiye Yang for useful comments. We have also benefited from comments made at various seminars, including ones given at the National University of Singapore, Notre Dame University, the University of York, and Pompeu Fabra University. Finally, we are grateful to two anonymous referees for providing useful comments and suggestions on earlier versions of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aït-Sahalia, Yacine. 2002a. Maximum likelihood estimation of discretely sampled diffusions: A closed-form approximation approach. Econometrica 70: 223–62.
  2. Aït-Sahalia, Yacine. 2002b. Telling from discrete data whether the underlying continuous–time model is a diffusion. The Journal of Finance 57: 2075–112.
  3. Aït-Sahalia, Yacine, and Jean Jacod. 2009. Testing for jumps in a discretely observed process. The Annals of Statistics 37: 184–222.
  4. Aït-Sahalia, Yacine, and Jean Jacod. 2011. Testing whether jumps have finite or infinite activity. The Annals of Statistics 39: 1689–719.
  5. Aït-Sahalia, Yacine, Jean Jacod, and Jia Li. 2012. Testing for jumps in noisy high frequency data. Journal of Econometrics 168: 207–22.
  6. Aït-Sahalia, Yacine, Julio Cacho-Diaz, and Roger J. A. Laeven. 2015. Modeling financial contagion using mutually exciting jump processes. Journal of Financial Economics 117: 585–606.
  7. Aït-Sahalia, Yacine, Jianqing Fan, Roger J. A. Laeven, Christina Dan Wang, and Xiye Yang. 2017. Estimation of the continuous and discontinuous leverage effects. Journal of the American Statistical Association 112: 1744–58.
  8. Andersen, Torben G., Tim Bollerslev, and Francis X. Diebold. 2007a. Roughing it up: Including jump components in the measurement, modeling, and forecasting of return volatility. The Review of Economics and Statistics 89: 701–20.
  9. Andersen, Torben G., Tim Bollerslev, and Dobrislav Dobrev. 2007b. No-arbitrage semi-martingale restrictions for continuous-time volatility models subject to leverage effects, jumps and iid noise: Theory and testable distributional implications. Journal of Econometrics 138: 125–80.
  10. Andrews, Donald W. K., and Xu Cheng. 2012. Estimation and inference with weak, semi-strong, and strong identification. Econometrica 80: 2153–211.
  11. Barndorff-Nielsen, Ole E. 2002. Econometric analysis of realized volatility and its use in estimating stochastic volatility models. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 64: 253–80.
  12. Barndorff-Nielsen, Ole E., and Neil Shephard. 2003. Realised power variation and stochastic volatility variance. Bernoulli 9: 243–65.
  13. Barndorff-Nielsen, Ole E., and Neil Shephard. 2004. Power and bipower variation with stochastic volatility and jumps. Journal of Financial Econometrics 2: 1–37.
  14. Barndorff-Nielsen, Ole E., and Neil Shephard. 2006. Econometrics of testing for jumps in financial economics using bipower variation. Journal of Financial Econometrics 4: 1–30.
  15. Benjamini, Yoav, and Yosef Hochberg. 1995. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological) 57: 289–300.
  16. Bowsher, Clive G. 2007. Modelling security market events in continuous time: Intensity based, multivariate point process models. Journal of Econometrics 141: 876–912.
  17. Chernov, Mikhail, Andrew Ronald Gallant, Eric Ghysels, and George Tauchen. 2003. Alternative models for stock price dynamics. Journal of Econometrics 116: 225–57.
  18. Christensen, Kim, Roel C. A. Oomen, and Mark Podolskij. 2014. Fact or friction: Jumps at ultra high frequency. Journal of Financial Economics 114: 576–99.
  19. Corradi, Valentina, Mervyn J. Silvapulle, and Norman R. Swanson. 2014. Consistent Pretesting for Jumps. Working Paper. New Brunswick: Rutgers University.
  20. Corradi, Valentina, Mervyn J. Silvapulle, and Norman R. Swanson. 2018. Testing for jumps and jump intensity path dependence. Journal of Econometrics 204: 248–67.
  21. Corsi, Fulvio, Davide Pirino, and Roberto Reno. 2009. Volatility Forecasting: The Jumps Do Matter. Working Paper. Siena: University of Siena.
  22. Corsi, Fulvio, Davide Pirino, and Roberto Reno. 2010. Threshold bipower variation and the impact of jumps on volatility forecasting. Journal of Econometrics 159: 276–88.
  23. Dumitru, Ana-Maria, and Giovanni Urga. 2012. Identifying jumps in financial assets: A comparison between nonparametric jump tests. Journal of Business & Economic Statistics 30: 242–55.
  24. Holm, Sture. 1979. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6: 65–70.
  25. Huang, Xin, and George Tauchen. 2005. The relative contribution of jumps to total price variance. Journal of Financial Econometrics 3: 456–99.
  26. Kalnina, Ilze, and Dacheng Xiu. 2017. Nonparametric estimation of the leverage effect: A trade-off between robustness and efficiency. Journal of the American Statistical Association 112: 384–96.
  27. Lee, Suzanne S., and Per A. Mykland. 2008. Jumps in financial markets: A new nonparametric test and jump dynamics. Review of Financial Studies 21: 2535–63.
  28. Mancini, Cecilia. 2009. Non-parametric threshold estimation for models with stochastic diffusion coefficient and jumps. Scandinavian Journal of Statistics 36: 270–96.
  29. Patton, Andrew J., and Kevin Sheppard. 2015. Good volatility, bad volatility: Signed jumps and the persistence of volatility. Review of Economics and Statistics 97: 683–97.
  30. Podolskij, Mark, and Mathias Vetter. 2009a. Bipower-type estimation in a noisy diffusion setting. Stochastic Processes and Their Applications 119: 2803–31.
  31. Podolskij, Mark, and Mathias Vetter. 2009b. Estimation of volatility functionals in the simultaneous presence of microstructure noise and jumps. Bernoulli 15: 634–58.
  32. Podolskij, Mark, and Daniel Ziggel. 2010. New tests for jumps in semimartingale models. Statistical Inference for Stochastic Processes 13: 15–41.
  33. Romano, Joseph P., and Michael Wolf. 2005. Stepwise multiple testing as formalized data snooping. Econometrica 73: 1237–82.
  34. Storey, John D. 2003. The positive false discovery rate: A Bayesian interpretation and the q-value. The Annals of Statistics 31: 2013–35.
  35. Theodosiou, Marina, and Filip Zikes. 2011. A Comprehensive Comparison of Alternative Tests for Jumps in Asset Prices. Working Paper. Nicosia: Central Bank of Cyprus.
  36. Todorov, Viktor. 2015. Jump activity estimation for pure-jump semimartingales via self-normalized statistics. The Annals of Statistics 43: 1831–64.
  37. Todorov, Viktor, and George Tauchen. 2011. Volatility jumps. Journal of Business & Economic Statistics 29: 356–71.
  38. White, Halbert. 2000. A reality check for data snooping. Econometrica 68: 1097–126.
1
In risk management and financial engineering, investors and researchers often require knowledge of the data generating process (DGP) that governs asset price movements. For example, asset prices are frequently modeled as continuous-time processes, such as (Itô-)semimartingales (see, e.g., Aït-Sahalia (2002a, 2002b); Chernov et al. (2003); and Andersen et al. (2007b)). At the same time, investors and researchers are also interested in nonparametrically estimable quantities such as spot/integrated volatility (see, e.g., Barndorff-Nielsen (2002); Barndorff-Nielsen and Shephard (2003); Todorov and Tauchen (2011); and Patton and Sheppard (2015)), jump variation (see, e.g., Barndorff-Nielsen and Shephard (2004); Andersen et al. (2007a); and Corsi et al. (2009)), the leverage effect (see, e.g., Kalnina and Xiu (2017) and Aït-Sahalia et al. (2017)), and jump activity (see, e.g., Aït-Sahalia and Jacod (2011) and Todorov (2015)).
2
Specifically, when focusing on long time span tests, we evaluate the CSS test of Corradi et al. (2018), and the CSS1 test of Corradi et al. (2014).
3
In a closely related paper, Huang and Tauchen (2005) discuss issues associated with applying the asymptotic approximations used in fixed time span jump tests over an entire (long time span) sample. They suggest that an appropriate way to solve both the inconsistency and the size distortion problems associated with fixed time span jump tests is to use test statistics that are asymptotically valid under a double asymptotic scheme in which both T → ∞ and Δ → 0. This is the approach taken by Corradi et al. (2018).
4
It is important to note that these tests have a different null hypothesis than fixed time span tests. However, our objective in this paper is not only to examine the finite sample properties of both types of test, but also to compare and contrast the two classes of tests, since both are often used as pre-tests prior to specifying and estimating DGPs involving jumps.
5
For a detailed comparison of additional fixed time span tests, see Theodosiou and Zikes (2011) and Dumitru and Urga (2012), who concisely summarize and compare a large group of existing jump tests via extensive Monte Carlo experiments.
6
The CSS 1 ˜ test is a “leverage robust” variant of the CSS1 test, and is discussed in detail in Section 4.
7
Note that λ = 0 (respectively, λ > 0) if and only if E[λ_t] = 0 (respectively, E[λ_t] > 0).
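This equivalence can be made explicit with a short calculation. The sketch below is ours and assumes only that the jump counter N_t is a counting process with nonnegative (possibly stochastic) intensity λ_t whose mean E[λ_t] = λ does not depend on t:

```latex
% A sketch of the equivalence in footnote 7 (our derivation, not the authors'):
% assume N_t is a counting process with nonnegative intensity \lambda_t and
% constant mean intensity \lambda \equiv E[\lambda_t].
\[
  E[N_T] \;=\; E\!\left[\int_0^T \lambda_t \, dt\right]
         \;=\; \int_0^T E[\lambda_t]\, dt \;=\; \lambda T .
\]
% Since N_T \ge 0, E[N_T] = 0 implies N_T = 0 almost surely. Hence jumps occur
% with positive probability on [0,T] if and only if \lambda = E[\lambda_t] > 0.
```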
8
The key difference between the CSS and CSS1 tests is that the former utilizes thresholding, while the latter does not.
9
Note that in all of our experiments, we do not consider jumps in the volatility process, as mentioned in Section 2.
10
In all experiments, we first simulate asset log-prices using a Milstein approximation scheme with a fine discretization interval, h = 1/312, and we then construct 5-min returns, i.e., returns with Δ = 1/78. Other numerical simulation strategies can also be used. For example, one can first simulate data at the 1-s frequency and then sample the data at a 5-min frequency, as in Christensen et al. (2014). However, considering the substantial computational burden associated with simulating long-span data, this is left to future research.
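For concreteness, the following minimal Python sketch illustrates this simulation strategy for a single trading day. It assumes a Heston-type square-root variance process with the Table 1 parameters and compound Poisson jumps with Gaussian sizes; since Equation (25) is not reproduced in this excerpt, the exact model form, the initial condition, and all function and variable names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def simulate_day(mu=0.05, kappa=5.0, sigma_bar=0.12, zeta=0.5, rho=-0.5,
                 lam=0.4, mu_j=0.0, sigma_j=0.25, h=1/312, n_per_day=312,
                 agg=4, rng=None):
    """Simulate one day of log-prices on a fine grid (h = 1/312) and
    aggregate to 5-min returns (Delta = 1/78), i.e. keep every 4th point.
    The variance follows a square-root process discretized with a Milstein
    correction; jumps are compound Poisson with N(mu_j, sigma_j^2) sizes.
    The Heston-type form and all defaults are assumptions based on Table 1."""
    rng = np.random.default_rng() if rng is None else rng
    v = sigma_bar ** 2                      # spot variance, started at its mean
    logp = np.zeros(n_per_day + 1)
    for i in range(n_per_day):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()
        # Milstein step for dv = kappa*(sigma_bar^2 - v) dt + zeta*sqrt(v) dW2
        v_new = (v + kappa * (sigma_bar ** 2 - v) * h
                 + zeta * np.sqrt(max(v, 0.0)) * np.sqrt(h) * z2
                 + 0.25 * zeta ** 2 * h * (z2 ** 2 - 1.0))
        # Jump component: Poisson(lam*h) arrivals with Gaussian sizes
        n_jumps = rng.poisson(lam * h)
        jump = rng.normal(mu_j, sigma_j, n_jumps).sum() if n_jumps else 0.0
        logp[i + 1] = (logp[i] + mu * h
                       + np.sqrt(max(v, 0.0)) * np.sqrt(h) * z1 + jump)
        v = max(v_new, 0.0)
    # 5-min returns: first differences of every agg-th fine-grid log-price
    return np.diff(logp[::agg])

returns_5min = simulate_day()   # 78 intraday returns for one simulated day
```

Aggregating every fourth fine-grid log-price yields the 78 five-minute returns per day used in the experiments; long-span samples would be obtained by repeating this day-level simulation T times.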
11
It would be interesting to analyze results for the case where fixed time span tests are conducted sequentially while the family-wise error rate (FWER) is controlled, as discussed in Section 3. This is left to future research.
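To illustrate what such a multiplicity correction involves, the sketch below applies Holm's (1979) step-down procedure, which controls the FWER, to a vector of daily jump-test p-values. The function name and the example p-values are hypothetical and are not taken from the paper.

```python
import numpy as np

def holm_reject(pvals, alpha=0.10):
    """Holm (1979) step-down adjustment: controls the family-wise error rate
    when a fixed time span jump test is applied on each of T days.
    Returns a boolean array marking the days on which the 'no jump' null is
    still rejected after the multiplicity correction."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                      # most significant first
    reject = np.zeros(m, dtype=bool)
    for k, idx in enumerate(order):
        if p[idx] <= alpha / (m - k):          # step-down threshold alpha/(m-k)
            reject[idx] = True
        else:
            break                              # stop at the first failure
    return reject

# Example: hypothetical daily p-values from a fixed span test over T = 5 days
daily_p = [0.002, 0.030, 0.210, 0.048, 0.650]
print(holm_reject(daily_p, alpha=0.10))        # only the first day survives
```

Other multiple-testing corrections listed in the bibliography, such as Benjamini and Hochberg (1995) or Romano and Wolf (2005), could be substituted in the same way.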
12
We assume that a sample over an increasing time span, T+, which is used in the construction of the test, is observed, and that we examine a subsample with time span T. For more details, see Corradi et al. (2018).
13
We only report results associated with less frequent and weaker jumps (i.e., λ = 0.1 and 0.4) and with jump sizes that are i.i.d. normally distributed with μ_J = 0 and σ_J = 0.25 or 1.25 (i.e., small and symmetric jumps, corresponding to the “worst” alternatives considered). Complete results are available upon request.
14
It would be interesting to use data sampled at a higher frequency, as suggested in Christensen et al. (2014), and to carry out inference on the relative discovery rates of the jump tests, as recommended by a referee. However, given that the tests considered in this paper do not address microstructure noise, this is left to future research.
15
Only a small subset of our empirical findings is included in the tables. These findings are representative of our complete set of results, which is available upon request. In addition, note that for the CSS test, T and Δ should be carefully chosen to guarantee good finite sample properties, as shown in Section 6. Thus, when our data are sampled at the 5-min frequency, we are mostly interested in the CSS test results based on quarterly data.
Figure 1. Annual Ratios of Jump Days for ETFs. Entries in the above charts denote annual ratios of detected jump days, based on daily applications of ASJ, BNS and PZ fixed time span jump tests. See Section 5 and Section 7 for complete details.
Figure 2. Annual Ratios of Jump Days for Individual Stocks. For more details, see Figure 1.
Table 1. Data Generating Processes (DGPs) used in Monte Carlo experiments.
Panel A: Parameter Values
{κ_σ, σ̄, ζ} = {5, 0.12, 0.5}
μ = 0.05
Δ = 1/78
ρ = {0, −0.5}
λ = {0.1, 0.4, 0.8}
J_i ~ i.i.d. N(μ_J, σ_J²), with {μ_J, σ_J} = {0, 0.25}, {0, 5 × 0.25}, {0.5, 0.25}, or {2.5 × 0.5, 5 × 0.25}
Panel B: Data Generating Processes
DGP 1: Equation (25) with μ = 0.05, ρ = 0, κ_σ = 5, σ̄ = 0.12, ζ = 0.5
DGP 2: Equation (25) with μ = 0.05, ρ = −0.5, κ_σ = 5, σ̄ = 0.12, ζ = 0.5
DGP 3: DGP 1 + J_i ~ i.i.d. N(0, 0.25²)
DGP 4: DGP 2 + J_i ~ i.i.d. N(0, 0.25²)
DGP 5: DGP 1 + J_i ~ i.i.d. N(0, (5 × 0.25)²)
DGP 6: DGP 2 + J_i ~ i.i.d. N(0, (5 × 0.25)²)
DGP 7: DGP 1 + J_i ~ i.i.d. N(0.5, 0.25²)
DGP 8: DGP 2 + J_i ~ i.i.d. N(0.5, 0.25²)
DGP 9: DGP 1 + J_i ~ i.i.d. N(2.5 × 0.5, (5 × 0.25)²)
DGP 10: DGP 2 + J_i ~ i.i.d. N(2.5 × 0.5, (5 × 0.25)²)
* Notes: DGP 1 is a continuous process without leverage effect and DGP 2 is a continuous process with leverage effect. DGPs 3–10 are continuous processes with or without leverage effect plus jumps characterized by various jump size densities. See Section 6 for complete details.
Table 2. Empirical Size of Daily Fixed Time Span Tests and Sequential Testing.
Test | Subject | T = 1 | T = 5 | T = 50 | T = 150 | T = 300 | T = 500
ASJ | DGP 1, Jump Days | 0.112 / 0.058 | 0.115 / 0.058 | 0.118 / 0.059 | 0.120 / 0.059 | 0.119 / 0.059 | 0.120 / 0.060
ASJ | DGP 1, Sequential Testing Bias | 0.112 / 0.058 | 0.458 / 0.256 | 0.999 / 0.953 | 1.000 / 1.000 | 1.000 / 1.000 | 1.000 / 1.000
ASJ | DGP 2, Jump Days | 0.100 / 0.052 | 0.116 / 0.057 | 0.121 / 0.059 | 0.120 / 0.059 | 0.120 / 0.059 | 0.120 / 0.059
ASJ | DGP 2, Sequential Testing Bias | 0.100 / 0.052 | 0.458 / 0.256 | 0.998 / 0.950 | 1.000 / 1.000 | 1.000 / 1.000 | 1.000 / 1.000
BNS | DGP 1, Jump Days | 0.122 / 0.071 | 0.150 / 0.094 | 0.155 / 0.095 | 0.152 / 0.093 | 0.153 / 0.094 | 0.153 / 0.095
BNS | DGP 1, Sequential Testing Bias | 0.122 / 0.071 | 0.557 / 0.380 | 1.000 / 0.992 | 1.000 / 1.000 | 1.000 / 1.000 | 1.000 / 1.000
BNS | DGP 2, Jump Days | 0.141 / 0.098 | 0.153 / 0.094 | 0.156 / 0.096 | 0.154 / 0.095 | 0.154 / 0.095 | 0.154 / 0.095
BNS | DGP 2, Sequential Testing Bias | 0.141 / 0.098 | 0.571 / 0.398 | 1.000 / 0.993 | 1.000 / 1.000 | 1.000 / 1.000 | 1.000 / 1.000
PZ | DGP 1, Jump Days | 0.120 / 0.043 | 0.081 / 0.031 | 0.123 / 0.050 | 0.113 / 0.049 | 0.110 / 0.046 | 0.108 / 0.045
PZ | DGP 1, Sequential Testing Bias | 0.120 / 0.043 | 0.347 / 0.146 | 0.999 / 0.921 | 1.000 / 0.999 | 1.000 / 1.000 | 1.000 / 1.000
PZ | DGP 2, Jump Days | 0.105 / 0.046 | 0.075 / 0.029 | 0.122 / 0.048 | 0.113 / 0.049 | 0.110 / 0.046 | 0.108 / 0.045
PZ | DGP 2, Sequential Testing Bias | 0.105 / 0.046 | 0.331 / 0.140 | 0.998 / 0.916 | 1.000 / 0.999 | 1.000 / 1.000 | 1.000 / 1.000
* Notes: Entries in this table denote rejection frequencies based on applications of the ASJ, BNS and PZ daily fixed time span jump tests. For each entry, rejection frequencies at the 0.1 and 0.05 significance levels are reported (in that order). T denotes the number of days for which daily fixed time span jump tests are applied. “Jump Days” reports the average percentage of detected jump days at the 0.1 and 0.05 significance levels, respectively. “Sequential Testing Bias” reports the probability of finding at least one jump at the 0.1 and 0.05 significance levels, respectively. See Section 5 and Section 6 for complete details.
Table 3. Empirical Power of Daily Fixed Time Span Tests.
Jump IntensityDGP 3DGP 4DGP 5DGP 6DGP 7DGP 8DGP 9DGP 10
ASJ
λ = 0.10.127
0.068
0.124
0.067
0.143
0.077
0.135
0.077
0.160
0.093
0.160
0.093
0.186
0.118
0.185
0.125
λ = 0.40.183
0.104
0.170
0.103
0.244
0.150
0.232
0.143
0.285
0.167
0.284
0.172
0.353
0.262
0.353
0.262
λ = 0.80.232
0.137
0.242
0.138
0.345
0.210
0.356
0.219
0.415
0.240
0.431
0.252
0.548
0.411
0.533
0.425
BNS
λ = 0 . 1 0.201
0.154
0.185
0.124
0.222
0.179
0.208
0.150
0.247
0.208
0.230
0.177
0.250
0.211
0.232
0.179
λ = 0 . 4 0.315
0.264
0.274
0.225
0.379
0.339
0.359
0.313
0.432
0.403
0.404
0.371
0.441
0.413
0.411
0.378
λ = 0 . 8 0.427
0.377
0.392
0.336
0.560
0.526
0.534
0.499
0.625
0.600
0.611
0.583
0.633
0.612
0.615
0.588
PZ
λ = 0.10.191
0.107
0.182
0.101
0.218
0.136
0.211
0.131
0.239
0.159
0.234
0.157
0.241
0.163
0.235
0.158
λ = 0.40.295
0.217
0.287
0.210
0.377
0.311
0.369
0.298
0.425
0.366
0.416
0.352
0.431
0.372
0.424
0.360
λ = 0.80.424
0.357
0.397
0.339
0.560
0.510
0.548
0.498
0.634
0.589
0.624
0.587
0.640
0.595
0.628
0.591
* Notes: See notes to Table 2. Rejection frequencies are given based on repeated daily applications of jump tests across T = 500 days, for each Monte Carlo replication. Thus, one can think of these experiments as reporting rejection frequencies of 500,000 tests (since T = 500 and there are 1000 Monte Carlo replications). Note that DGPs 3–10 are all jump diffusions, i.e., with λ > 0 . However, as λ is finite, jumps may not occur on some days. For more details, see Section 6.
Table 4. Empirical Size of Fixed Time Span Tests when Applied to Long Samples.
Test | DGP | T = 5 | T = 25 | T = 50 | T = 150 | T = 300 | T = 500
ASJ | DGP 1 | 0.113 / 0.051 | 0.109 / 0.057 | 0.119 / 0.067 | 0.114 / 0.066 | 0.135 / 0.072 | 0.147 / 0.072
ASJ | DGP 2 | 0.106 / 0.045 | 0.131 / 0.075 | 0.148 / 0.069 | 0.136 / 0.065 | 0.132 / 0.072 | 0.145 / 0.080
BNS | DGP 1 | 0.132 / 0.070 | 0.136 / 0.071 | 0.142 / 0.075 | 0.150 / 0.081 | 0.194 / 0.109 | 0.215 / 0.127
BNS | DGP 2 | 0.127 / 0.081 | 0.122 / 0.065 | 0.142 / 0.080 | 0.162 / 0.095 | 0.192 / 0.104 | 0.227 / 0.132
PZ | DGP 1 | 0.116 / 0.069 | 0.288 / 0.279 | 0.504 / 0.484 | 0.849 / 0.846 | 0.962 / 0.950 | 0.994 / 0.993
PZ | DGP 2 | 0.119 / 0.071 | 0.290 / 0.265 | 0.504 / 0.482 | 0.869 / 0.856 | 0.974 / 0.964 | 0.995 / 0.995
* Notes: See notes to Table 2. Entries are rejection frequencies based on a single application of the ASJ, BNS and PZ tests using long time span samples with T days, for each Monte Carlo replication. For all values of T, 1000 replications are run.
Table 5. Empirical Power of the ASJ Jump Test when Applied to Long Samples.
Jump IntensityDGP 3DGP 4DGP 5DGP 6DGP 7DGP 8DGP 9DGP 10
T = 5
λ = 0.10.214
0.151
0.227
0.169
0.341
0.285
0.352
0.286
0.409
0.345
0.408
0.351
0.436
0.395
0.447
0.405
λ = 0.40.498
0.401
0.482
0.400
0.750
0.678
0.731
0.668
0.838
0.769
0.846
0.784
0.876
0.856
0.872
0.854
λ = 0.80.670
0.565
0.653
0.563
0.901
0.839
0.899
0.833
0.941
0.897
0.936
0.895
0.947
0.934
0.939
0.923
T = 25
λ = 0.10.544
0.480
0.510
0.454
0.835
0.812
0.803
0.779
0.913
0.907
0.913
0.900
0.924
0.921
0.924
0.918
λ = 0.40.908
0.872
0.898
0.870
0.993
0.993
0.996
0.994
0.992
0.991
0.991
0.991
0.986
0.986
0.984
0.984
λ = 0.80.988
0.985
0.988
0.977
0.995
0.995
0.995
0.994
0.979
0.978
0.984
0.980
0.989
0.986
0.985
0.978
T = 50
λ = 0.10.711
0.663
0.714
0.655
0.960
0.954
0.956
0.949
0.989
0.985
0.986
0.985
0.991
0.990
0.990
0.990
λ = 0.40.987
0.979
0.989
0.983
0.997
0.997
0.998
0.997
0.992
0.992
0.996
0.994
0.993
0.993
0.994
0.992
λ = 0.80.995
0.995
0.997
0.994
0.996
0.995
0.996
0.995
0.990
0.989
0.990
0.988
0.997
0.995
0.996
0.994
T = 150
λ = 0.10.965
0.954
0.961
0.947
0.999
0.999
0.999
0.999
0.998
0.997
0.997
0.997
0.996
0.996
0.996
0.996
λ = 0.40.998
0.998
1.000
0.999
1.000
1.000
1.000
1.000
0.998
0.998
0.999
0.998
1.000
1.000
1.000
1.000
λ = 0.80.999
0.999
0.999
0.999
0.999
0.999
0.999
0.999
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
T = 300
λ = 0.10.996
0.995
0.996
0.995
0.999
0.999
0.999
0.999
0.998
0.998
0.998
0.998
0.999
0.999
1.000
1.000
λ = 0.41.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.81.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
T = 500
λ = 0.10.998
0.998
0.999
0.999
0.999
0.999
0.999
0.999
0.999
0.999
0.999
0.999
1.000
1.000
1.000
1.000
λ = 0.41.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.81.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
* Notes: See notes to Table 3 and Table 4.
Table 6. Empirical Power of the BNS Jump Test when Applied to Long Samples.
Jump IntensityDGP 3DGP 4DGP 5DGP 6DGP 7DGP 8DGP 9DGP 10
T = 5
λ = 0.10.264
0.197
0.270
0.207
0.396
0.340
0.390
0.344
0.452
0.408
0.446
0.410
0.464
0.423
0.458
0.423
λ = 0.40.588
0.533
0.599
0.540
0.802
0.771
0.798
0.768
0.885
0.871
0.883
0.872
0.889
0.880
0.887
0.878
λ = 0.80.817
0.768
0.815
0.776
0.948
0.942
0.961
0.953
0.983
0.979
0.981
0.978
0.986
0.984
0.984
0.981
T = 25
λ = 0.10.537
0.471
0.550
0.476
0.824
0.796
0.833
0.804
0.915
0.906
0.917
0.907
0.928
0.923
0.930
0.923
λ = 0.40.945
0.923
0.943
0.928
0.998
0.998
0.997
0.997
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.81.000
1.000
0.999
0.999
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
T = 50
λ = 0.10.707
0.625
0.720
0.650
0.950
0.946
0.958
0.944
0.987
0.983
0.988
0.985
0.993
0.993
0.995
0.994
λ = 0.40.997
0.994
0.996
0.995
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.81.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
T = 150
λ = 0.10.947
0.917
0.958
0.934
1.000
0.999
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.41.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.81.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
T = 300
λ = 0.10.994
0.993
0.996
0.994
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.41.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.81.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
T = 500
λ = 0.11.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.41.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.81.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
* Notes: See notes to Table 3 and Table 4.
Table 7. Empirical Power of the PZ Jump Test when Applied to Long Samples.
Jump IntensityDGP 3DGP 4DGP 5DGP 6DGP 7DGP 8DGP 9DGP 10
T = 5
λ = 0.10.316
0.277
0.317
0.284
0.418
0.385
0.413
0.387
0.465
0.433
0.456
0.433
0.471
0.439
0.464
0.441
λ = 0.40.682
0.656
0.661
0.645
0.829
0.818
0.810
0.801
0.888
0.881
0.879
0.874
0.890
0.883
0.880
0.875
λ = 0.80.874
0.864
0.865
0.851
0.962
0.960
0.965
0.960
0.981
0.979
0.984
0.980
0.982
0.980
0.985
0.981
T = 25
λ = 0.10.788
0.780
0.777
0.766
0.911
0.907
0.900
0.896
0.936
0.934
0.935
0.933
0.939
0.937
0.939
0.937
λ = 0.40.993
0.993
0.992
0.992
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.81.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
T = 50
λ = 0.10.960
0.956
0.954
0.950
0.991
0.991
0.987
0.987
0.997
0.997
0.993
0.993
0.998
0.998
0.994
0.994
λ = 0.41.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.81.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
T = 150
λ = 0.11.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.41.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.81.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
T = 300
λ = 0.11.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.41.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.81.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
T = 500
λ = 0.11.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.41.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.81.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
* Notes: See notes to Table 3 and Table 4.
Table 8. Empirical Size of the CSS1 and CSS 1 ˜ Jump Tests.
Test Statistic | Leverage | T = 5 | T = 25 | T = 50 | T = 150 | T = 300 | T = 500
CSS1 | No | 0.202 / 0.145 | 0.151 / 0.094 | 0.129 / 0.072 | 0.123 / 0.076 | 0.106 / 0.055 | 0.112 / 0.057
CSS1 | Yes | 0.264 / 0.185 | 0.316 / 0.221 | 0.461 / 0.341 | 0.858 / 0.775 | 0.987 / 0.968 | 0.999 / 0.997
CSS 1 ˜ | No | 0.075 / 0.048 | 0.007 / 0.003 | 0.001 / 0.000 | 0.000 / 0.000 | 0.000 / 0.000 | 0.000 / 0.000
CSS 1 ˜ | Yes | 0.089 / 0.053 | 0.002 / 0.001 | 0.000 / 0.000 | 0.000 / 0.000 | 0.000 / 0.000 | 0.000 / 0.000
* Notes: As in Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7, jump test rejection frequencies are reported. As discussed in Section 6, the subsampling interval, Δ ˜ , used in constructing critical values for the tests has been selected using a simple rule. Namely, {T = 25 and 50, Δ ˜ n 1 = 1/3, Δ ˜ n 2 = 1/26}; {T = 150, Δ ˜ n 1 = 1, Δ ˜ n 2 = 1/13}; {T = 300, Δ ˜ n 1 = 2, Δ ˜ n 2 = 1/13}; {T = 500, Δ ˜ n 1 = 3.2, Δ ˜ n 2 = 1/10}, where Δ ˜ n 1 is the subsampling interval used in bootstrapping critical values for CSS 1 ˜ and Δ ˜ n 2 is the subsampling interval used in bootstrapping critical values for CSS1.
Table 9. Empirical Size of the CSS Jump Test.
T = 40T = 50T = 60T = 70T = 80T = 90
No Leverage
κ = 20.049
0.019
0.057
0.032
0.114
0.057
0.105
0.060
0.147
0.079
0.169
0.103
Δ κ = 50.055
0.025
0.078
0.042
0.089
0.048
0.117
0.053
0.111
0.059
0.153
0.097
κ = 100.061
0.021
0.076
0.033
0.091
0.042
0.113
0.067
0.123
0.067
0.152
0.089
Leverage
κ = 20.047
0.021
0.117
0.059
0.176
0.099
0.269
0.189
0.341
0.242
0.459
0.361
κ = 50.085
0.046
0.136
0.077
0.203
0.128
0.275
0.182
0.350
0.249
0.437
0.330
κ = 100.085
0.046
0.128
0.062
0.180
0.116
0.284
0.171
0.386
0.253
0.436
0.332
T = 120T = 140T = 160T = 180T = 200T = 220
No Leverage
κ = 20.030
0.008
0.024
0.004
0.046
0.015
0.057
0.020
0.068
0.033
0.083
0.045
κ = 50.033
0.015
0.047
0.020
0.038
0.021
0.068
0.029
0.070
0.037
0.083
0.040
κ = 100.046
0.015
0.056
0.022
0.067
0.028
0.075
0.032
0.079
0.038
0.089
0.048
Δ / 2 Leverage
κ = 20.041
0.017
0.075
0.042
0.123
0.066
0.160
0.086
0.234
0.151
0.270
0.174
κ = 50.062
0.032
0.087
0.043
0.127
0.064
0.170
0.086
0.227
0.139
0.290
0.185
κ = 100.071
0.027
0.101
0.050
0.156
0.091
0.167
0.084
0.246
0.138
0.283
0.187
* Notes: Rejection frequencies are reported based on application of CSS test. Sampling intervals are either Δ = 1/78 or Δ / 2 = 1/156. κ = T + /T, where T + is the longer time span. Different combinations of Δ , T and T + are specified in order to find a suitable set of parameters for which adequate finite sample properties characterize the test. For complete details, see Section 6.
Table 10. Empirical Power of CSS1 Jump Test.
Jump IntensityDGP 3DGP 4DGP 5DGP 6DGP 7DGP 8DGP 9DGP 10
T = 5
λ = 0.10.346
0.274
0.361
0.291
0.435
0.374
0.463
0.396
0.487
0.436
0.517
0.463
0.507
0.461
0.540
0.491
λ = 0.40.572
0.498
0.585
0.514
0.717
0.665
0.712
0.650
0.895
0.879
0.883
0.865
0.901
0.890
0.895
0.887
λ = 0.80.664
0.600
0.677
0.615
0.790
0.752
0.784
0.737
0.968
0.962
0.973
0.964
0.969
0.964
0.976
0.970
T = 25
λ = 0.10.472
0.400
0.535
0.447
0.713
0.656
0.733
0.670
0.913
0.895
0.921
0.896
0.926
0.917
0.941
0.928
λ = 0.40.630
0.560
0.634
0.537
0.690
0.604
0.676
0.598
0.997
0.996
0.998
0.997
1.000
0.998
1.000
1.000
λ = 0.80.650
0.583
0.645
0.569
0.647
0.580
0.650
0.572
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
T = 50
λ = 0.10.559
0.459
0.587
0.492
0.719
0.644
0.736
0.657
0.982
0.977
0.981
0.975
0.990
0.989
0.990
0.988
λ = 0.40.641
0.557
0.656
0.561
0.642
0.548
0.628
0.559
1.000
1.000
1.000
1.000
0.999
0.999
0.999
0.999
λ = 0.80.623
0.534
0.604
0.527
0.628
0.531
0.627
0.539
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
T = 150
λ = 0.10.763
0.722
0.786
0.732
0.847
0.804
0.828
0.788
1.000
1.000
1.000
0.999
1.000
1.000
1.000
1.000
λ = 0.40.762
0.716
0.768
0.717
0.807
0.759
0.802
0.764
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.80.761
0.715
0.750
0.704
0.805
0.756
0.795
0.751
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
T = 300
λ = 0.10.766
0.709
0.784
0.738
0.819
0.784
0.825
0.786
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.40.789
0.742
0.768
0.713
0.780
0.733
0.791
0.755
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.80.759
0.711
0.756
0.702
0.808
0.764
0.802
0.757
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
T = 500
λ = 0.10.791
0.755
0.819
0.790
0.865
0.833
0.855
0.825
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.40.819
0.771
0.808
0.774
0.838
0.804
0.827
0.795
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.80.786
0.753
0.776
0.739
0.823
0.784
0.836
0.795
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
* Notes: See notes to Table 3, Table 4 and Table 8.
Table 11. Empirical Power of CSS 1 ˜ Jump Test.
Jump IntensityDGP 3DGP 4DGP 5DGP 6DGP 7DGP 8DGP 9DGP 10
T = 5
λ = 0.10.181
0.145
0.221
0.162
0.300
0.272
0.337
0.287
0.374
0.348
0.401
0.355
0.406
0.390
0.438
0.407
λ = 0.40.428
0.371
0.443
0.388
0.651
0.597
0.642
0.585
0.856
0.835
0.837
0.819
0.871
0.865
0.866
0.863
λ = 0.80.596
0.515
0.586
0.513
0.761
0.717
0.738
0.710
0.950
0.943
0.951
0.944
0.951
0.948
0.952
0.948
T = 25
λ = 0.10.262
0.228
0.278
0.233
0.670
0.644
0.680
0.656
0.887
0.875
0.879
0.863
0.915
0.915
0.918
0.915
λ = 0.40.567
0.505
0.565
0.492
0.824
0.798
0.811
0.783
0.998
0.996
0.999
0.998
0.998
0.998
0.998
0.998
λ = 0.80.646
0.602
0.643
0.579
0.793
0.766
0.801
0.766
0.997
0.997
0.998
0.996
1.000
1.000
1.000
1.000
T = 50
λ = 0.10.169
0.129
0.192
0.149
0.694
0.630
0.687
0.624
0.946
0.933
0.959
0.947
0.983
0.978
0.983
0.978
λ = 0.40.436
0.349
0.416
0.342
0.708
0.632
0.696
0.634
0.998
0.998
0.999
0.997
0.998
0.995
0.997
0.996
λ = 0.80.489
0.411
0.469
0.386
0.654
0.584
0.635
0.586
0.996
0.995
0.993
0.990
0.999
0.997
0.999
0.997
T = 150
λ = 0.10.163
0.120
0.149
0.099
0.766
0.726
0.740
0.695
0.997
0.996
1.000
0.998
1.000
1.000
1.000
1.000
λ = 0.40.320
0.245
0.307
0.215
0.743
0.701
0.742
0.709
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.80.390
0.311
0.353
0.286
0.709
0.659
0.712
0.657
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
T = 300
λ = 0.10.090
0.058
0.093
0.064
0.769
0.728
0.753
0.719
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.40.223
0.158
0.216
0.154
0.733
0.696
0.744
0.708
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.80.274
0.199
0.248
0.188
0.702
0.659
0.708
0.646
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
T = 500
λ = 0.10.054
0.026
0.063
0.038
0.757
0.710
0.751
0.700
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.40.153
0.091
0.158
0.106
0.717
0.650
0.700
0.651
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
λ = 0.80.188
0.126
0.171
0.129
0.667
0.607
0.649
0.585
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
* Notes: See notes to Table 3, Table 4 and Table 8.
Table 12. Empirical Power of the CSS Jump Test.
T = 40T = 50T = 60T = 70T = 80T = 90
λ = 0.1
DGP 30.450
0.411
0.473
0.429
0.514
0.455
0.571
0.513
0.633
0.565
0.617
0.559
Δ DGP 40.465
0.412
0.485
0.438
0.550
0.499
0.647
0.594
0.652
0.593
0.702
0.642
DGP 50.821
0.810
0.872
0.863
0.922
0.917
0.952
0.947
0.965
0.964
0.975
0.968
DGP 60.829
0.817
0.876
0.867
0.928
0.918
0.953
0.948
0.974
0.968
0.978
0.972
λ = 0.4
DGP 30.806
0.762
0.835
0.812
0.843
0.822
0.847
0.819
0.868
0.846
0.872
0.859
DGP 40.792
0.753
0.831
0.807
0.828
0.804
0.862
0.829
0.866
0.831
0.887
0.865
DGP 50.997
0.996
0.999
0.999
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
DGP 60.996
0.994
0.997
0.997
0.997
0.997
1.000
1.000
1.000
1.000
1.000
1.000
T = 120T = 140T = 160T = 180T = 200T = 220
λ = 0.1
DGP 30.863
0.838
0.890
0.872
0.884
0.862
0.899
0.889
0.919
0.905
0.917
0.894
Δ /2DGP 40.865
0.835
0.892
0.869
0.907
0.895
0.903
0.885
0.913
0.898
0.918
0.903
DGP 51.000
1.000
0.999
0.999
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
DGP 60.997
0.997
1.000
1.000
0.999
0.998
0.998
0.998
0.998
0.998
0.999
0.999
λ = 0.4
DGP 30.946
0.939
0.943
0.938
0.944
0.938
0.958
0.950
0.962
0.954
0.965
0.959
DGP 40.964
0.958
0.954
0.949
0.961
0.952
0.957
0.949
0.959
0.948
0.967
0.958
DGP 51.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
DGP 61.000
1.000
0.999
0.999
1.000
1.000
1.000
1.000
1.000
1.000
1.000
1.000
* Notes: See notes to Table 9.
Table 13. ASJ Jump Test Results for ETFs.
ETF | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013
SPY | 3.264 (***) | 1.694 (**) | 0.002 | 2.579 (***) | 5.213 (***) | 0.745 | 0.874 | 1.745 (**)
XLB | 5.011 (***) | 2.528 (***) | 1.941 (**) | 3.207 (***) | 4.161 (***) | 0.870 | 2.682 (***) | 0.986
XLE | 0.952 | 4.862 (***) | 0.019 | 7.023 (***) | 1.061 | 0.058 | 5.665 (***) | 1.772 (**)
XLF | 3.207 (***) | 4.128 (***) | 1.825 (**) | 1.774 (**) | 0.843 | 0.822 | 1.286 (*) | 1.663 (**)
XLI | 4.233 (***) | 5.903 (***) | 1.625 (*) | 1.827 (**) | 7.367 (***) | 1.486 (*) | 0.951 | 1.813 (**)
XLK | 7.909 (***) | 4.214 (***) | 0.991 | 1.766 (**) | 1.384 (*) | 0.759 | 0.922 | 2.492 (***)
XLP | 3.180 (***) | 8.996 (***) | 7.979 (***) | 4.212 (***) | 0.373 | 1.493 (*) | 6.078 (***) | 1.565 (*)
XLU | 7.066 (***) | 2.730 (***) | 1.481 (*) | 10.000 (***) | 0.528 | 0.642 | 2.769 (***) | 3.324 (***)
XLV | 7.030 (***) | 6.084 (***) | 1.828 (**) | 2.386 (***) | 2.352 (***) | 1.729 (**) | 0.866 | 2.530 (***)
XLY | 3.368 (***) | 1.845 (**) | 3.279 (***) | 4.399 (***) | 0.457 | 0.735 | 0.022 | 2.721 (***)
* Notes: See notes to Table 4 and Table 5. Entries are jump test statistics, and (***), (**), and (*) indicate rejections of the “no jump” null hypothesis at 0.01, 0.05 and 0.1 significance levels, respectively.
Table 14. CSS1 Jump Test Results for ETFs.
20062007200820092010201120122013
SPY2.04 × 10 7 −3.88 × 10 6 4.87 × 10 4 (***)2.65 × 10 5 1.65 × 10 6 −1.19 × 10 5 −3.15 × 10 7 −2.90 × 10 6 (**)
XLB−1.40 × 10 5 −8.45 × 10 5 (***)1.35 × 10 3 (***)−1.90 × 10 4 (***)−9.21 × 10 5 (***)9.39 × 10 6 −1.33 × 10 6 1.87 × 10 6
XLE2.38 × 10 5 (***)−2.43 × 10 5 (**)1.07 × 10 3 (**)−9.77 × 10 5 (**)4.74 × 10 5 (***)−4.76 × 10 5 −6.35 × 10 6 (**)−4.45 × 10 5
XLF−1.11 × 10 5 (***)−2.58 × 10 4 (***)2.00 × 10 3 (***)−2.26 × 10 6 −5.04 × 10 5 (**)−3.69 × 10 5 3.76 × 10 6 −6.79 × 10 6 (**)
XLI−2.03 × 10 5 (***)9.96 × 10 5 (***)7.23 × 10 4 (***)−9.39 × 10 5 (**)3.24 × 10 4 (***)9.48 × 10 6 1.98 × 10 6 −2.93 × 10 6 (**)
XLK−1.99 × 10 4 (***)−6.68 × 10 5 (***)1.86 × 10 3 (***)−2.68 × 10 5 −1.45 × 10 4 (***)−9.44 × 10 6 −2.63 × 10 6 −3.39 × 10 6 (***)
XLP5.92 × 10 6 (***)1.83 × 10 5 (***)−2.02 × 10 3 (***)−5.91 × 10 5 (***)−2.19 × 10 5 (**)−4.69 × 10 6 −4.30 × 10 5 (***)8.25 × 10 7
XLU−9.42 × 10 5 (***)1.67 × 10 4 (***)−8.17 × 10 5 −1.04 × 10 1 (***)1.75 × 10 4 (***)−1.91 × 10 5 7.89 × 10 6 (***)−7.47 × 10 6
XLV−1.17 × 10 4 (***)9.52 × 10 5 (***)4.08 × 10 4 (***)−4.11 × 10 5 (***)−1.71 × 10 4 (***)1.24 × 10 5 −8.03 × 10 7 −2.53 × 10 6
XLY−8.05 × 10 5 (***)−1.06 × 10 4 (***)−2.97 × 10 3 (***)−2.72 × 10 4 (***)1.62 × 10 5 −1.56 × 10 5 −5.33 × 10 6 (**)2.57 × 10 7
* Notes: See notes to Table 8 and Table 13.
Table 15. CSS 1 ˜ Jump Test Results for ETFs.
20062007200820092010201120122013
SPY1.28 × 10 8 −2.43 × 10 7 (*)3.05 × 10 5 1.66 × 10 6 1.03 × 10 7 −7.46 × 10 7 −1.98 × 10 8 (*)−1.82 × 10 7
XLB−8.78 × 10 7 (*)−5.31 × 10 6 (**)8.41 × 10 5 −1.19 × 10 5 −5.77 × 10 6 5.88 × 10 7 −8.38 × 10 8 (*)1.17 × 10 7
XLE1.49 × 10 6 −1.53 × 10 6 6.71 × 10 5 −6.12 × 10 6 2.97 × 10 6 −2.98 × 10 6 −3.99 × 10 7 −2.79 × 10 7
XLF−6.97 × 10 7 −1.62 × 10 5 (**)1.25 × 10 4 −1.42 × 10 6 −3.16 × 10 6 −2.31 × 10 6 2.36 × 10 7 (*)−4.26 × 10 7
XLI−1.27 × 10 6 6.25 × 10 6 4.52 × 10 5 −5.88 × 10 6 2.03 × 10 5 (***)5.94 × 10 7 1.24 × 10 7 (*)−1.83 × 10 7
XLK−1.25 × 10 5 (***)−4.19 × 10 6 (**)1.16 × 10 4 −1.68 × 10 6 −9.06 × 10 6 (**)−5.91 × 10 7 −1.66 × 10 7 −2.12 × 10 7
XLP3.71 × 10 7 (*)1.15 × 10 6 −1.26 × 10 4 (***)−3.70 × 10 6 (**)−1.37 × 10 6 (*)−2.94 × 10 7 −2.70 × 10 6 (***)5.17 × 10 8
XLU−5.92 × 10 6 1.05 × 10 5 (***)−5.11 × 10 6 (*)−6.49 × 10 3 (***)1.10 × 10 5 (***)−1.19 × 10 6 4.96 × 10 7 −4.68 × 10 7
XLV−7.37 × 10 6 (***)5.98 × 10 6 2.55 × 10 5 −2.57 × 10 6 −1.07 × 10 5 (***)7.76 × 10 7 −5.05 × 10 8 (*)−1.58 × 10 7
XLY−5.05 × 10 6 −6.68 × 10 6 (**)−1.86 × 10 4 (**)−1.71 × 10 5 1.01 × 10 6 −9.79 × 10 7 −3.35 × 10 7 (*)1.61 × 10 8
* Notes: See notes to Table 8 and Table 13.
Table 16. ASJ Jump Test Results for Individual Stocks.
Stock | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013
American Express | 3.115 (***) | 4.248 (***) | 2.044 (**) | 2.672 (***) | 0.996 | 2.343 (***) | 3.555 (***) | 2.464 (***)
Bank of America | 2.914 (***) | 3.014 (***) | 1.529 (*) | 3.576 (***) | 0.819 | 3.171 (***) | 1.784 (**) | 0.803
Cisco | 3.811 (***) | 5.997 (***) | 0.180 | 1.818 (**) | 2.865 (***) | 1.665 (**) | 2.581 (***) | 0.922
Citigroup | 3.215 (***) | 0.802 | 0.520 | 3.302 (***) | 6.023 (***) | 0.752 | 0.197 | 0.895
Coca-Cola | 8.039 (***) | 10.005 (***) | 6.134 (***) | 4.563 (***) | 0.826 | 1.774 (**) | 2.551 (***) | 3.476 (***)
Intel | 2.632 (***) | 4.142 (***) | 3.332 (***) | 0.914 | 5.627 (***) | 6.425 (***) | 1.821 (**) | 1.772 (**)
JPMorgan | 3.446 (***) | 2.532 (***) | 3.232 (***) | 0.962 | 1.733 (**) | 3.461 (***) | 0.987 | 0.983
Merck & Co. | 5.700 (***) | 8.016 (***) | 1.909 (**) | 3.184 (***) | 0.051 | 1.559 (*) | 2.648 (***) | 0.246
Microsoft | 2.982 (***) | 6.997 (***) | 0.909 | 0.811 | 0.652 | 3.434 (***) | 4.579 (***) | 0.370
Procter & Gamble | 3.285 (***) | 4.933 (***) | 1.814 (**) | 9.998 (***) | 2.387 (***) | 3.218 (***) | 9.003 (***) | 1.745 (**)
Pfizer | 2.356 (***) | 4.024 (***) | 0.845 | 0.890 | 7.436 (***) | 1.960 (**) | 1.740 (**) | 2.516 (***)
Wal-Mart | 0.823 | 4.011 (***) | 1.187 | 1.730 (**) | 0.799 | 2.439 (***) | 0.131 | 3.461 (***)
* Notes: See notes to Table 8 and Table 13.
Table 17. CSS1 Jump Test Results for Individual Stocks.
20062007200820092010201120122013
American Express1.45 × 10 4 (***)−8.03 × 10 5 (***)4.57 × 10 3 (***)1.30 × 10 3 (**)−7.80 × 10 5 −1.34 × 10 4 (**)−6.61 × 10 5 (***)6.24 × 10 5 (***)
Bank of America−8.20 × 10 4 (***)−1.58 × 10 4 (***)4.16 × 10 3 (***)−2.26 × 10 2 (**)−1.29 × 10 4 −1.39 × 10 3 (***)4.47 × 10 5 −3.16 × 10 5
Cisco6.94 × 10 5 (***)−7.64 × 10 2 (***)9.32 × 10 4 (***)9.61 × 10 5 −2.02 × 10 4 (***)1.31 × 10 4 (***)−1.85 × 10 5 8.56 × 10 6
Citigroup−3.93 × 10 5 (***)−7.93 × 10 5 −1.13 × 10 2 −1.60 × 10 2 −1.64 × 10 3 (***)−2.61 × 10 4 6.52 × 10 5 (**)−2.26 × 10 5
Coca-Cola8.49 × 10 5 (***)5.56 × 10 5 (***)−2.05 × 10 3 (***)1.06 × 10 4 (***)−3.79 × 10 5 −5.11 × 10 6 −2.31 × 10 5 (***)−1.46 × 10 5 (***)
Intel5.55 × 10 5 (***)−1.49 × 10 4 (***)1.57 × 10 3 (***)2.10 × 10 4 (**)1.72 × 10 4 (***)−3.69 × 10 5 −1.74 × 10 5 4.64 × 10 5 (***)
JPMorgan4.15 × 10 5 −1.83 × 10 4 (***)1.98 × 10 3 −2.43 × 10 4 3.88 × 10 5 3.54 × 10 4 (***)7.16 × 10 5 1.96 × 10 5
Merck & Co.1.39 × 10 4 (***)2.12 × 10 4 (***)−6.85 × 10 3 (***)−5.17 × 10 4 (***)3.81 × 10 4 (***)4.54 × 10 6 2.80 × 10 5 (***)6.49 × 10 6
Microsoft9.93 × 10 6 (**)−7.46 × 10 2 (***)1.76 × 10 4 6.35 × 10 5 −4.09 × 10 5 −1.23 × 10 5 −6.93 × 10 5 (***)1.34 × 10 5
Procter & Gamble2.76 × 10 5 (***)8.05 × 10 5 (***)1.37 × 10 4 −1.29 × 10 3 (***)2.33 × 10 3 (***)−2.88 × 10 5 (***)2.47 × 10 5 (***)5.72 × 10 6
Pfizer2.08 × 10 4 (***)−2.74 × 10 3 (***)1.11 × 10 4 6.97 × 10 5 3.96 × 10 6 8.69 × 10 5 (**)9.01 × 10 6 1.78 × 10 5 (***)
Wal-Mart6.75 × 10 5 8.67 × 10 5 (***)8.60 × 10 4 (***)5.27 × 10 5 (***)−9.57 × 10 6 3.19 × 10 5 (**)6.87 × 10 6 −9.51 × 10 6
* Notes: See notes to Table 8 and Table 13.
Table 18. CSS 1 ˜ Jump Test Results for Individual Stocks.
20062007200820092010201120122013
American Express9.09 × 10 6 (***)−5.04 × 10 6 (*)2.86 × 10 4 8.13 × 10 5 −4.88 × 10 6 −8.42 × 10 6 −4.16 × 10 6 3.91 × 10 6 (**)
Bank of America−5.15 × 10 5 (***)−9.94 × 10 6 (**)2.60 × 10 4 (*)−1.42 × 10 3 −8.10 × 10 6 −8.69 × 10 5 2.81 × 10 6 (**)−1.98 × 10 6
Cisco4.36 × 10 6 (*)−4.79 × 10 1 (***)5.82 × 10 5 6.02 × 10 6 −1.27 × 10 5 (*)8.23 × 10 6 −1.17 × 10 6 5.36 × 10 7
Citigroup−2.47 × 10 6 −4.98 × 10 6 (*)−7.08 × 10 4 (**)−1.00 × 10 3 −1.03 × 10 4 (***)−1.64 × 10 5 4.10 × 10 6 (*)−1.42 × 10 6
Coca-Cola5.33 × 10 6 (***)3.49 × 10 6 (**)−1.28 × 10 4 (**)6.64 × 10 6 −2.37 × 10 6 (**)−3.20 × 10 7 −1.45 × 10 6 (**)−9.15 × 10 7
Intel3.48 × 10 6 (*)−9.33 × 10 6 9.83 × 10 5 (**)1.32 × 10 5 1.08 × 10 5 (*)−2.31 × 10 6 −1.10 × 10 6 2.91 × 10 6
JPMorgan2.61 × 10 6 (*)−1.15 × 10 5 (**)1.24 × 10 4 −1.52 × 10 5 2.43 × 10 6 2.22 × 10 5 (*)4.50 × 10 6 (*)1.23 × 10 6
Merck & Co.8.72 × 10 6 (*)1.33 × 10 5 (***)−4.28 × 10 4 (***)−3.24 × 10 5 2.39 × 10 5 (***)2.84 × 10 7 1.76 × 10 6 (**)4.07 × 10 7
Microsoft6.23 × 10 7 (*)−4.68 × 10 1 (***)1.10 × 10 5 (*)3.98 × 10 6 −2.56 × 10 6 −7.71 × 10 7 −4.36 × 10 6 (**)8.42 × 10 7
Procter & Gamble1.73 × 10 6 (*)5.05 × 10 6 (*)8.58 × 10 6 −8.06 × 10 5 (***)1.46 × 10 4 (***)−1.81 × 10 6 1.55 × 10 6 (**)3.58 × 10 7
Pfizer1.30 × 10 5 (**)−1.72 × 10 4 (***)6.94 × 10 6 (*)4.37 × 10 6 2.48 × 10 7 5.44 × 10 6 5.67 × 10 7 (*)1.11 × 10 6
Wal-Mart4.24 × 10 6 (**)5.44 × 10 6 5.38 × 10 5 (***)3.30 × 10 6 −6.00 × 10 7 2.00 × 10 6 4.32 × 10 7 (*)−5.96 × 10 7
* Notes: See notes to Table 8 and Table 13.
Table 19. CSS Jump Test Results for Individual Stocks and The Market ETF.
Quarter | Bank of America | Coca-Cola | Intel | JPMorgan | Merck & Co. | Pfizer | Wal-Mart | SPY
2007-Q1 | 10.877 *** | 23.301 *** | 3.148 *** | −2.86 *** | 26.875 *** | −831.206 *** | 16.069 *** | 1.169
2007-Q2 | −14.713 *** | 79.895 *** | −1.112 | −6.327 *** | 17.705 *** | 0.065 | 33.324 *** | −2.671 ***
2007-Q3 | 14.278 *** | −12.575 *** | 9.296 *** | 2.648 *** | −5.986 *** | −2.877 *** | 5.97 *** | 5.733 ***
2007-Q4 | −7.383 *** | −6.614 *** | 10.763 *** | −7.326 *** | 14.327 *** | 2.291 ** | 4.186 *** | −7.235 ***
2008-Q1 | 11.617 *** | −7.086 *** | 4.034 *** | 5.499 *** | −279.804 *** | 8.011 *** | 9.386 *** | 3.354 ***
2008-Q2 | 0.412 | −65.435 *** | −0.965 | 1.722 * | −3.815 *** | −11.948 *** | 4.083 *** | −0.623
2008-Q3 | 3.122 *** | −171.387 *** | 2.249 ** | 1.184 | −41.319 *** | 1.016 | 0.694 | −2.105 **
2008-Q4 | 5.546 *** | −15.662 *** | −2.057 ** | 4.072 *** | 11.769 *** | 1.692 * | 11.442 *** | 8.688 ***
2012-Q1 | 1.875 * | −6.077 *** | −2.785 *** | 8.731 *** | −9.687 *** | −0.516 | −1.131 | −2.851 ***
2012-Q2 | −2.154 ** | −10.595 *** | −4.255 *** | 1.547 | 8.977 *** | 8.513 *** | 19.704 *** | 0.5
2012-Q3 | 2.442 ** | −7.287 *** | 6.378 *** | 6.126 *** | 26.943 *** | 3.99 *** | 6.503 *** | 2.551 **
2012-Q4 | 3.051 *** | −6.527 *** | −7.281 *** | −2.976 *** | −0.216 | −7.034 *** | −13.85 *** | −0.427
2013-Q1 | −5.007 *** | −2.223 ** | 0.194 | −1.537 | −24.992 *** | 5.515 *** | −11.386 *** | −3.096 ***
2013-Q2 | −4.247 *** | −0.084 | 11.553 *** | −0.671 | 4.179 *** | 0.318 | −3.669 *** | −6.975 ***
2013-Q3 | 0.853 | −1.13 | −1.009 | 4.574 *** | 17.83 *** | 1.873 * | 3.665 *** | 0.826
2013-Q4 | 8.199 *** | −12.131 *** | 4.526 *** | 11.233 *** | −0.141 | 10.736 *** | 8.329 *** | 6.538 ***
* Notes: See notes to Table 9 and Table 13.
