Article

Modeling I(2) Processes Using Vector Autoregressions Where the Lag Length Increases with the Sample Size

Faculty of Business Administration and Economics, Bielefeld University, Universitätsstrasse 25, D-33615 Bielefeld, Germany
* Author to whom correspondence should be addressed.
Submission received: 4 May 2018 / Revised: 4 September 2020 / Accepted: 8 September 2020 / Published: 17 September 2020
(This article belongs to the Special Issue Celebrated Econometricians: Katarina Juselius and Søren Johansen)

Abstract

In this paper the theory on the estimation of vector autoregressive (VAR) models for I(2) processes is extended to the case of long VAR approximations of more general processes. Here the order of the autoregression is allowed to tend to infinity at a certain rate depending on the sample size. We deal with unrestricted OLS estimators (in the model formulated in levels as well as in vector error correction form) as well as with two step estimation (2SI2) in the vector error correction model (VECM) formulation. Our main results are analogous to the I(1) case: We show that the long VAR approximation leads to consistent estimates of the long and short run dynamics. Furthermore, tests on the autoregressive coefficients follow standard asymptotics. The pseudo likelihood ratio tests on the cointegrating ranks (using the Gaussian likelihood) used in the 2SI2 algorithm show under the null hypothesis the same distributions as in the case of data generating processes following finite order VARs. The same holds true for the asymptotic distribution of the long run dynamics both in the unrestricted VECM estimation and the reduced rank regression in the 2SI2 algorithm. Building on these results we show that if the data is generated by an invertible VARMA process, the VAR approximation can be used in order to derive a consistent initial estimator for subsequent pseudo likelihood optimization in the VARMA model.

1. Introduction

Many macroeconomic variables have been found to exhibit trend-like behaviour that can be modelled by using vector autoregressions (VARs). Katarina Juselius (2006) states that empirical modelling led to the development of I(1) and I(2) models since certain features of the datasets considered required including first and second differences in order to obtain stationary time series. Additionally, cointegrating relations were found in the corresponding analyses. Similar findings have reoccurred numerous times in the literature, for example related to money demand (Johansen 1992b; Juselius 1994), inflation (Banerjee et al. 2001; Georgoutsos and Kouretas 2004), and interest rates and real exchange rates (Johansen et al. 2007; Juselius and Assenmacher 2017; Juselius and Stillwagon 2018; Stillwagon 2018), to mention only a few sources.
The predominant methodological approach to modeling integration and cointegration in the I(1) and the I(2) case in the vector autoregressive (VAR) framework has been established mainly by Søren Johansen and Katarina Juselius together with a number of coauthors (see the lists of references in Johansen (1995); Juselius (2006) for details), building on vector error correction models (see Engle and Granger (1987) for early comments on the history of using error correction models for co-integrated processes). Extending the main ideas of cointegration modeling for the I(1) setting (see, e.g., Johansen (1992a)), Johansen (1997) suggested a representation for the I(2) case and established asymptotic distributions for the suggested two step I(2) estimator (2SI2) as an approximation to pseudo maximum likelihood estimation involving numerical optimization. Asymptotics for the corresponding likelihood ratio tests have been developed in Paruolo (1994, 1996); their asymptotic equivalence to pseudo likelihood (using the Gaussian distribution) optimization (and hence in a certain sense statistical efficiency) is shown in Paruolo (2000). However, Nielsen and Rahbek (2007) show that in finite samples the likelihood ratio test has size advantages. The testing of restrictions on the parameters has been investigated by Boswijk and Doornik (2004); Boswijk and Paruolo (2017); Johansen and Lütkepohl (2005). Due to the implicit vector error correction (VECM) modeling, deterministic terms in the VECM produce complex deterministic terms in the solution processes. In the I(2) context Nielsen and Rahbek (2007); Paruolo (1994, 2006); Rahbek et al. (1999); Kurita et al. (2011) discuss the impacts of deterministic terms.
As the VECM representation includes the representation of reduced rank matrices by a product of two matrices, identification conditions are of particular importance, see Juselius (2006); Mosconi and Paruolo (2013, 2017). In this context also weak exogeneity has been studied Kurita (2012); Paruolo and Rahbek (1999).
The main idea underlying the VECM approach for estimating VAR models in the I(2) context is to reparameterize the problem such that integration and cointegration properties relate to the rank of two matrices. Assuming the data generating process to be a VAR of known finite order, the rank of matrices can be tested using (pseudo) likelihood ratio tests.
Sometimes the assumption of a known order is not justified. For example, it is known that a subset of variables generated by a finite order VAR in general cannot be described by a finite order VAR, but instead requires a vector autoregressive moving average (VARMA) model. However, the class of VARs provides flexibility in the sense that a VAR of infinite order can represent a large set of linear dynamical systems including all invertible VARMA systems. For stationary processes Berk (1974) and Lewis and Reinsel (1985) show that by letting the order of the VAR tend to infinity at a suitable rate depending on the sample size, consistent estimation of the underlying transfer function can be achieved for data generating processes that can be described by a VAR($\infty$), subject to mild assumptions on the summability of the VAR coefficients. Additionally, Lewis and Reinsel (1985) also establish asymptotic normality (in a very specific sense) of linear combinations of the estimated autoregressive coefficients. Hannan and Deistler (1988) make the concepts operational by showing that in the case of a VARMA process generating the dataset the required rate of letting the order tend to infinity can be estimated using BIC model selection.
In the case of I(1) processes the estimation theory for long VAR approximations to VARMA processes has been extended based on the techniques in the stationary case of Lewis and Reinsel in a series of papers by Saikkonen and coauthors Saikkonen (1991, 1992); Lütkepohl and Saikkonen (1997); Saikkonen and Lütkepohl (1996); Saikkonen and Luukkonen (1997). Additionally also the Johansen framework of rank restricted estimation in the VECM model has been extended to the long VAR approximations by Saikkonen and Luukkonen (1997). Bauer and Wagner (2004) provide extensions to the multi frequency I(1) case where unit roots may occur at the seasonal frequencies.
For the I(2) case no such extensions are currently known. This is the research gap this paper tries to fill: First, we establish consistency and asymptotic normality of estimated autoregressive coefficients (in the sense of Lewis and Reinsel) for unrestricted ordinary least squares (OLS) estimation in the VECM representation. This can be used in order to derive Wald type tests of linear restrictions on the autoregressive parameters. Second, we extend the rank restricted regression techniques in the I(2) case to the long VAR approximations, showing that the asymptotics (for estimated cointegrating relations, likelihood ratio tests and the two step estimation procedures) are identical in the case of long VAR approximations and VARs of finite known order. Third, we show that if the data generating process is an invertible VARMA process, the long VAR system estimator can be used in order to obtain consistent initial estimators for subsequent pseudo likelihood maximization in the VARMA model class. In all results we limit ourselves to the case of no deterministic terms being included in the VECM representation. The inclusion of deterministic terms requires changing the test distribution, compare the theory contained for example in Rahbek et al. (1999).
The paper is organized as follows: In the next section the data generating process and the main assumptions are described. Section 3 then provides the results for the unrestricted estimation. Section 4 deals with rank restricted regression in the 2SI2 procedure, while Section 5 investigates the initial guess in the VARMA setting for subsequent pseudo likelihood maximization. Finally Section 6 concludes the paper. Proofs are relegated to an appendix.
Throughout the paper we will use the notation introduced by Johansen (1997): For a matrix $C \in \mathbb{R}^{p\times s}$, $s < p$, of full column rank we use the notation $\bar{C} = C(C'C)^{-1}$. Furthermore, $C_{\perp}$ denotes a full column rank matrix of dimension $p \times (p-s)$ such that $C'C_{\perp} = 0$. Whenever this notation is used the particular choice of $C_{\perp}$ is not of importance. For a matrix $C = (C_{i,j}) \in \mathbb{R}^{p\times s}$ we let $\|C\|$ denote the Frobenius norm $\|C\| = \sqrt{\sum_{i=1}^{p}\sum_{j=1}^{s} C_{i,j}^2}$.

2. Data Generating Process and Assumptions

In this paper we use the following assumptions on the data generating process:
Assumption 1 (DGP).
The process $(y_t)_{t\in\mathbb{Z}}$, $y_t\in\mathbb{R}^p$, is generated from the difference equation for $t\in\mathbb{Z}$:
$$\Delta^2 y_t = \alpha\beta' y_{t-1} + \Gamma\Delta y_{t-1} + \sum_{j=1}^{\infty}\Pi_j\Delta^2 y_{t-j} + \varepsilon_t \tag{1}$$
where $\alpha, \beta \in \mathbb{R}^{p\times r}$, $0 \le r < p$, are full column rank matrices, and $\Delta = (1-L)$ with $L$ denoting the backward shift operator such that $L(y_t)_{t\in\mathbb{Z}} = (y_{t-1})_{t\in\mathbb{Z}}$. The matrix function $A(z) = (1-z)^2I_p - \alpha\beta'z - \Gamma z(1-z) - \sum_{j=1}^{\infty}\Pi_j(1-z)^2z^j$ fulfills the special marginal stability condition that
$$|A(z)| = 0 \quad\text{implies that}\quad |z| > 1 \ \text{or}\ z = 1.\tag{2}$$
Furthermore, there exists a real $\delta > 0$ such that the power series defining $A(z)$ converges absolutely for $|z| < 1+\delta$. Define $\beta_2 = \beta_\perp\eta_\perp$, $\alpha_2 = \alpha_\perp\zeta_\perp$ where $\alpha_\perp'\Gamma\beta_\perp = \zeta\eta'$, $\eta,\zeta\in\mathbb{R}^{(p-r)\times s}$ are of full column rank $s < p-r$. Then it is assumed that the matrix
$$\alpha_2'\Big(I_p + \Gamma\bar\beta\bar\alpha'\Gamma - \sum_{j=1}^{\infty}\Pi_j\Big)\beta_2\tag{3}$$
is nonsingular.
Furthermore, the process $(\varepsilon_t)_{t\in\mathbb{Z}}$ denotes independent identically distributed (iid) white noise with mean zero and variance $\Sigma_\epsilon > 0$.
It is well known that the conditions (2) and (3) are necessary and sufficient for the existence of solutions to the difference equation that are I(2) processes, see for example Johansen (1992a). Moreover, note that the assumption of absolute convergence of $A(z)$ for $|z| < 1+\delta$ implies that $\sum_{j=0}^{\infty}j^k\|\Pi_j\| < \infty$ for every $k\in\mathbb{N}$. In particular $\sum_{j=0}^{\infty}j^2\|\Pi_j\| < \infty$ follows, as will be used frequently below.
Every vector autoregressive function $A(z)$ corresponding to the autoregression $A(L)y_t = \varepsilon_t$ that fulfills Assumption 1 allows a representation as $A(z) = (1-z)^2I_p - \alpha\beta'z - \Gamma z(1-z) - \sum_{j=1}^{\infty}\Pi_j(1-z)^2z^j = \tilde g(z)\tilde B(z)$, where $\tilde B(z) = (1-z)^2I_p - \tilde\Pi z - \tilde\Gamma z(1-z)$ and $\tilde g(z) = I_p + \sum_{j=1}^{\infty}G_jz^j$. This can be seen as follows:
$$
\begin{aligned}
\varepsilon_t &= A(L)y_t = \big(A(1) - \dot A(1)\Delta + A^*(L)\Delta^2\big)y_t = \big(A(1) - \dot A(1)\Delta + A^*(L)\Delta^2\big)BB'y_t\\
&= \Big([-\alpha,0,0] + \big([\alpha,0,0] - \Gamma B\big)\Delta + A^*(L)B\Delta^2\Big)B'y_t\\
&= \Big([-\alpha,\,-\Gamma\beta_1,\,-\Gamma\beta_2] + [\alpha-\Gamma\beta,\ A_1^*(L),\ A_2^*(L)]\Delta + [A_0^*(L),0,0]\Delta^2\Big)\begin{pmatrix}\beta'\\ \beta_1'\Delta\\ \beta_2'\Delta\end{pmatrix}y_t\\
&= \Big([-\alpha,\,-\Gamma\beta_1,\,-\Gamma\beta_2+\alpha\bar\alpha'\Gamma\beta_2] + [\alpha-\Gamma\beta,\ A_1^*(L),\ \tilde A_2^*(L)]\Delta + [A_0^*(L),0,\,-A_0^*(L)\bar\alpha'\Gamma\beta_2]\Delta^2\Big)\begin{pmatrix}\beta'+\bar\alpha'\Gamma\beta_2\beta_2'\Delta\\ \beta_1'\Delta\\ \beta_2'\Delta\end{pmatrix}y_t\\
&= \Big([-\alpha,\,-\Gamma\beta_1,\ \tilde A_2^*(L)] + [\alpha-\Gamma\beta,\ A_1^*(L),\ -A_0^*(L)\bar\alpha'\Gamma\beta_2]\Delta + [A_0^*(L),0,0]\Delta^2\Big)\begin{pmatrix}\beta'+\bar\alpha'\Gamma\beta_2\beta_2'\Delta\\ \beta_1'\Delta\\ \beta_2'\Delta^2\end{pmatrix}y_t = g(L)B(L)y_t
\end{aligned}
$$
where $B = [\beta,\beta_1,\beta_2]$, $\beta_1 = \beta_\perp\eta$, is without restriction of generality assumed to be an orthonormal matrix, $A^*(L)B = [A_0^*(L), A_1^*(L), A_2^*(L)]$, $A(1) = -\alpha\beta'$, $\dot A(1) = -\alpha\beta' + \Gamma$, and where we use that
$$\Gamma\beta_2 - \alpha\bar\alpha'\Gamma\beta_2 = (I_p - \alpha\bar\alpha')\Gamma\beta_2 = \bar\alpha_\perp\alpha_\perp'\Gamma\beta_\perp\eta_\perp = 0.$$
Here
$$B(L) = \begin{pmatrix}\beta' + \bar\alpha'\Gamma\beta_2\beta_2'\Delta\\ \beta_1'\Delta\\ \beta_2'\Delta^2\end{pmatrix}.$$
In this representation
$$g(1) = \big[-\alpha,\ -\Gamma\beta_1,\ \tilde A_2^*(1)\big]$$
is nonsingular due to assumption (3). Furthermore, $g(z) = \sum_{j=0}^{\infty}G_jz^j$ is a transfer function with $\sum_{j=0}^{\infty}\|G_j\|j^2 < \infty$ since $\sum_{j=1}^{\infty}\|\Pi_j\|j^2 < \infty$, and thus the same holds for the power series coefficients of $A^*(L)$. Since $|B(z)|\ne0$ for $z\ne1$ it follows that $|g(z)|\ne0$ for $|z|\le1$. Therefore
$$B(L)y_t = u_t,\qquad g(L)u_t = \varepsilon_t\tag{4}$$
is a VAR process. Note, however, that $g(0) = G_0\ne I_p$ in general. This constitutes a triangular representation of the process: denoting $y_{1,t} = \beta'y_t\in\mathbb{R}^{p_1}$, $y_{2,t} = \beta_1'y_t\in\mathbb{R}^{p_2}$, $y_{3,t} = \beta_2'y_t\in\mathbb{R}^{p_3}$, we have
$$y_{1,t} = -\bar\alpha'\Gamma\beta_2\,\Delta y_{3,t} + u_{1,t} = A\Delta y_{3,t} + u_{1,t},\quad A: p_1\times p_3,\qquad \Delta y_{2,t} = u_{2,t},\qquad \Delta^2y_{3,t} = u_{3,t}$$
where $u_t = [u_{1,t}',u_{2,t}',u_{3,t}']'$ has a VAR($\infty$) representation. Furthermore, defining
$$\tilde B(L) = B\begin{pmatrix}I_{p_1}&0&A\\0&I_{p_2}&0\\0&0&I_{p_3}\end{pmatrix}B(L) = \Delta^2I_p + \beta\beta'L + \big(\beta\beta' + \beta\bar\alpha'\Gamma\beta_2\beta_2' + \beta_1\beta_1'\big)L\Delta,\qquad \tilde g(L) = g(L)\left(B\begin{pmatrix}I_{p_1}&0&A\\0&I_{p_2}&0\\0&0&I_{p_3}\end{pmatrix}\right)^{-1}$$
we obtain $A(L) = g(L)B(L) = \tilde g(L)\tilde B(L)$ such that
$$\tilde B(L)y_t = \Delta^2y_t + \tilde\Pi y_{t-1} + \tilde\Gamma\Delta y_{t-1} = v_t,\qquad \tilde g(L)v_t = \varepsilon_t$$
is another representation of the process $(y_t)_{t\in\mathbb{Z}}$ with $\tilde B(0) = I_p$. It follows that the triangular representation can be seen as a special case where one has partial information on the matrices $\beta,\beta_1,\beta_2$. For estimation the VECM representation is approximated using a finite order $h$:
$$\Delta^2y_t = \Phi y_{t-1} + \Psi\Delta y_{t-1} + \sum_{j=1}^{h-2}\Pi_j\Delta^2y_{t-j} + e_t\tag{5}$$
where $e_t = \varepsilon_t + e_{1t}$, $e_{1t} = \sum_{j=h-1}^{\infty}\Pi_j\Delta^2y_{t-j}$. As in the VECM representation the dimensions of $\beta,\beta_1,\beta_2$ are linked to the ranks of the matrices $\Phi$ and $\alpha_\perp'\Psi\beta_\perp$. Restricting these matrices to be of particular rank is simpler than imposing the equivalent restrictions in the VAR($h$) representation directly.
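To fix ideas, the following minimal numpy sketch simulates a process with the triangular structure above. The dimensions $p_1 = p_2 = p_3 = 1$, the value of the matrix $A$ and the choice of iid $u_t$ (the simplest admissible VAR($\infty$) error) are illustrative assumptions on our part, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
T, p1, p2, p3 = 500, 1, 1, 1        # hypothetical dimensions
A = np.array([[0.5]])               # hypothetical multicointegration coefficient

u = rng.standard_normal((T, p1 + p2 + p3))   # iid u_t for illustration
u1, u2, u3 = u[:, :p1], u[:, p1:p1 + p2], u[:, p1 + p2:]

y3 = np.cumsum(np.cumsum(u3, axis=0), axis=0)            # Delta^2 y_3 = u_3: I(2)
y2 = np.cumsum(u2, axis=0)                               # Delta   y_2 = u_2: I(1)
dy3 = np.vstack([np.zeros((1, p3)), np.diff(y3, axis=0)])
y1 = dy3 @ A.T + u1                                      # y_1 = A*Delta y_3 + u_1
y = np.hstack([y1, y2, y3])                              # triangular I(2) system
```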
In the following we will first investigate the unrestricted ordinary least squares estimator in the VECM representation without taking rank restrictions into account. In the second step the 2SI2 procedure as presented in Paruolo (2000) for imposing the two rank restrictions in two steps is investigated.
For both procedures the selection of the order h is of importance. In this respect the following assumption will be used:
Assumption 2 (Lag order h).
The order h is chosen subject to the following restrictions:
  • $h = o(T^{1/5})$.
  • $T^{1/2}\sum_{j=h+1}^{\infty}\|\Pi_j\| \to 0$ as $T\to\infty$, $h\to\infty$.
This condition defines an upper bound for the order which is usually directly assured during order selection using for example information criteria. The upper bound is smaller than the usual rate $o(T^{1/3})$ for technical reasons. The stronger bound is not needed for all results. However, the implications for practical applications are minor as for example in the range $1\le T\le 950$ we have $2.5\,T^{1/5} > T^{1/3}$. The second condition of Assumption 2 implies a lower bound for the increase of $h$ as a function of the sample size. Clearly $\sum_{j=h+1}^{\infty}\|\Pi_j\|\to0$ for $h\to\infty$. The bound implies that for $h = h(T)$ this convergence needs to be fast enough such that $T^{1/2}\sum_{j=h(T)+1}^{\infty}\|\Pi_j\|$ still converges to zero. The lower bound depends on the underlying true parameters. For invertible VARMA processes, which can be seen as the leading case, $\|\Pi_j\|\le C\rho_0^j$ for some $0\le\rho_0<1$. Hannan and Deistler (1988) show that for an invertible stationary VARMA process the lower bound (in this case proportional to $\log T$) can be achieved asymptotically by using BIC as the order selection procedure. Thus in this case also the stronger condition $h = o(T^{1/5})$ is satisfied. Bauer and Wagner (2004) extend this result to the multi frequency I(1) setting. For the I(2) case no analogous result is known, although the developments of Bauer and Wagner (2004) suggest that a similar result holds also there. This is left for future research.
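The numerical claim about the two rates is easily verified; equating $2.5\,T^{1/5}$ and $T^{1/3}$ gives the crossing point $2.5^{15/2}\approx965$:

```python
import numpy as np

T = np.arange(1, 951)
assert np.all(2.5 * T**(1/5) > T**(1/3))   # holds on the whole range 1 <= T <= 950
print(2.5**(15/2))                          # crossing point, approximately 965
```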
Therefore the differences between the 'usual' rates and the ones assumed above are deemed to be of minor practical consequence. Thus we are not explicit in the main text as to which results hold true under the less restrictive set of assumptions and which do not. In the appendix, we will comment on this point, however.

3. Unrestricted Estimation

In this section the results of Lewis and Reinsel (1985) and Saikkonen and Lütkepohl (1996) are extended to the I(2) case. To simplify notation define $\langle a_t,b_t\rangle = \sum_{t=h+1}^{T}a_tb_t'$ for sequences $a_t,b_t$, $t = 1,\ldots,T$. Then the unrestricted least squares estimator in the finite VECM model uses the regressor vector $Z_{t,h} = [y_{t-1}',\Delta y_{t-1}',\Delta^2y_{t-1}',\ldots,\Delta^2y_{t-h+2}']'\in\mathbb{R}^{ph}$. The corresponding ordinary least squares estimator is given as
$$\big[\hat\Phi,\ \hat\Psi,\ \hat\Pi_1,\ \ldots,\ \hat\Pi_{h-2}\big] = \big[\langle\Delta^2y_t,y_{t-1}\rangle,\ \langle\Delta^2y_t,\Delta y_{t-1}\rangle,\ \langle\Delta^2y_t,\Delta^2y_{t-1}\rangle,\ \ldots,\ \langle\Delta^2y_t,\Delta^2y_{t-h+2}\rangle\big]\langle Z_{t,h},Z_{t,h}\rangle^{-1} = \langle\Delta^2y_t,Z_{t,h}\rangle\langle Z_{t,h},Z_{t,h}\rangle^{-1}.$$
The noise covariance is estimated from the residuals as usual as
$$\hat\Sigma_\epsilon = N^{-1}\langle\hat e_t,\hat e_t\rangle,\qquad \hat e_t = \Delta^2y_t - \hat\Phi y_{t-1} - \hat\Psi\Delta y_{t-1} - \sum_{j=1}^{h-2}\hat\Pi_j\Delta^2y_{t-j}$$
where $N = T-h$ denotes the effective sample size.
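For concreteness, a compact numpy sketch of this unrestricted OLS step (the function name and index conventions are ours; the data matrix y has rows $y_t'$):

```python
import numpy as np

def vecm_ols(y, h):
    """Unrestricted OLS in the VECM approximation with h-2 lagged second
    differences; returns Phi, Psi, Pi_1..Pi_{h-2} and Sigma_eps (sketch)."""
    T, p = y.shape
    dy = np.diff(y, axis=0)        # dy[t-1]  corresponds to Delta y_t
    d2y = np.diff(y, n=2, axis=0)  # d2y[t-2] corresponds to Delta^2 y_t
    ts = np.arange(h, T)           # usable time indices t
    Y = d2y[ts - 2]                # regressand Delta^2 y_t
    Z = np.hstack([y[ts - 1], dy[ts - 2]] +
                  [d2y[ts - j - 2] for j in range(1, h - 1)])  # Z_{t,h}
    coef = np.linalg.lstsq(Z, Y, rcond=None)[0].T
    Phi, Psi = coef[:, :p], coef[:, p:2 * p]
    Pis = [coef[:, (2 + j) * p:(3 + j) * p] for j in range(h - 2)]
    resid = Y - Z @ coef.T
    Sigma = resid.T @ resid / len(ts)      # residual covariance, N = T - h
    return Phi, Psi, Pis, Sigma
```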

3.1. Estimation in the Triangular VECM Representation

As is typical for the cointegration framework, analysis is easier in the triangular representation which separates stationary components from I(1) and I(2) processes: Let $y_t = [y_{1,t}',y_{2,t}',y_{3,t}']'\in\mathbb{R}^p$ where $y_{i,t}\in\mathbb{R}^{p_i}$ is such that
$$y_{1,t} = A\Delta y_{3,t} + u_{1,t},\qquad \Delta y_{2,t} = u_{2,t},\qquad \Delta^2y_{3,t} = u_{3,t}$$
where $u_t = [u_{1,t}',u_{2,t}',u_{3,t}']'$ has a VAR($\infty$) representation $g(L)u_t = \varepsilon_t$ where
$$g(0) = \begin{pmatrix}I&0&A\\0&I&0\\0&0&I\end{pmatrix}.$$
Note, however, that using the triangular representation implies that the matrix $B(L)$ is known up to the value of the matrix $A$. In applications this is seldom the case.
Thus letting $g(z) = g(1) + g^*(z)\Delta$ we obtain
$$
\begin{aligned}
\varepsilon_t &= g(L)\begin{pmatrix}y_{1,t}-A\Delta y_{3,t}\\ \Delta y_{2,t}\\ \Delta^2y_{3,t}\end{pmatrix} = g(L)\begin{pmatrix}\Delta^2y_{1,t}+\Delta y_{1,t-1}+y_{1,t-1}-A\Delta^2y_{3,t}-A\Delta y_{3,t-1}\\ \Delta^2y_{2,t}+\Delta y_{2,t-1}\\ \Delta^2y_{3,t}\end{pmatrix}\\
&= g(L)\begin{pmatrix}I&0&-A\\0&I&0\\0&0&I\end{pmatrix}\Delta^2y_t + g(L)\begin{pmatrix}y_{1,t-1}\\0\\0\end{pmatrix} + g(L)\begin{pmatrix}\Delta y_{1,t-1}-A\Delta y_{3,t-1}\\ \Delta y_{2,t-1}\\0\end{pmatrix}\\
&= \tilde g(L)\Delta^2y_t + \big[g(1)+g^*(L)\Delta\big]\begin{pmatrix}y_{1,t-1}\\0\\0\end{pmatrix} + g(1)\begin{pmatrix}\Delta y_{1,t-1}-A\Delta y_{3,t-1}\\ \Delta y_{2,t-1}\\0\end{pmatrix}\\
&= \pi(L)\Delta^2y_t + g(1)\begin{pmatrix}y_{1,t-1}\\0\\0\end{pmatrix} + \big[G_1+G_1^*,\ G_2,\ -G_1A\big]\begin{pmatrix}\Delta y_{1,t-1}\\ \Delta y_{2,t-1}\\ \Delta y_{3,t-1}\end{pmatrix}\\
&= \pi(L)\Delta^2y_t + [G_1,0,0]\,y_{t-1} + [G_1+G_1^*,\ G_2,\ -G_1A]\,\Delta y_{t-1}
\end{aligned}
$$
with $\pi(L) = I_p - \sum_{j=1}^{\infty}\Pi_jL^j$, which leads to the corresponding VECM representation:
$$\Delta^2y_t = \Phi y_{t-1} + \Psi\Delta y_{t-1} + \sum_{j=1}^{\infty}\Pi_j\Delta^2y_{t-j} + \varepsilon_t.$$
Here $G := g(1) = \sum_{j=0}^{\infty}G_j = [G_1,G_2,G_3]$, where $G_i$ is $p\times p_i$ for $i = 1,2,3$. Similarly, $G^* := g^*(1) = -\sum_{j=0}^{\infty}jG_j = [G_1^*,G_2^*,G_3^*]$, where $G_i^*$ is $p\times p_i$ for $i = 1,2,3$. The sums exist since $\sum_{j=1}^{\infty}\|G_j\|j^2 < \infty$ by assumption. Similarly, we partition $\Phi$, $\Psi$ and $\Pi_j$ into $[\Phi_1,\Phi_2,\Phi_3]$, $[\Psi_1,\Psi_2,\Psi_3]$ and $[\Pi_{j,1},\Pi_{j,2},\Pi_{j,3}]$, respectively. The analogous partitioning is used for estimates.
Then $\Phi = [-G_1,0,0]$, $\Psi = [-(G_1+G_1^*),\ -G_2,\ G_1A]$. Therefore $\Psi_3 = -\Phi_1A$. Note that in this notation the I(2) components on the right hand side are $y_{3,t-1}$, the I(1) components are $y_{1,t-1}$, $y_{2,t-1}$, $\Delta y_{3,t-1}$, where $y_{1,t-1}-A\Delta y_{3,t-1}$ is stationary. Thus in order to separate regressors of different integration orders in the proofs (as is usually done in the literature) we use a transformation involving the unknown matrix $A$ such that the regressor $y_{1,t-1}$ is replaced by $y_{1,t-1}-A\Delta y_{3,t-1}$. Consequently the estimate $\hat\Psi_3$ of $\Psi_3$ is replaced by the estimate $\hat\Theta = \hat\Psi_3 + \hat\Phi_1A$ of $\Theta = \Psi_3 + \Phi_1A = 0$.
Based on the estimates $\hat\Psi$ and $\hat\Phi$, the matrix $A$ can then be estimated as
$$\hat A = -(\hat\Phi_1'\hat\Sigma_\epsilon^{-1}\hat\Phi_1)^{-1}\hat\Phi_1'\hat\Sigma_\epsilon^{-1}\hat\Psi_3.\tag{6}$$
Here the insertion of $\hat\Sigma_\epsilon^{-1}$ appears somewhat arbitrary. A motivation for this choice in the I(1) case can be found in Saikkonen (1992), equation (12). However, any other positive definite matrix could be used as well. Currently nothing is known about the optimality of the choice suggested above.
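A direct implementation of (6), together with the estimator of $\Omega_{1.c}$ defined below, is immediate; a minimal sketch (any positive definite weighting could replace Sigma, as noted above):

```python
import numpy as np

def estimate_A(Phi1, Psi3, Sigma):
    """Weighted projection estimator (6): A_hat = -(Phi1' S^-1 Phi1)^-1 Phi1' S^-1 Psi3.
    The inverse factor doubles as the estimator of Omega_{1.c}."""
    Si = np.linalg.inv(Sigma)
    Omega1c = np.linalg.inv(Phi1.T @ Si @ Phi1)
    return -Omega1c @ Phi1.T @ Si @ Psi3, Omega1c
```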
In the asymptotic distribution of the estimation error Brownian motions occur relating to the process $(u_t)_{t\in\mathbb{Z}}$: Under Assumption 1 we have
$$\frac{1}{\sqrt T}\sum_{t=1}^{\lfloor rT\rfloor}u_t \Rightarrow B(r) = [B_1(r)',B_c(r)']' = [B_1(r)',B_2(r)',B_3(r)']'$$
where $B(r)$, $0\le r\le1$, denotes a Brownian motion with corresponding variance
$$\Omega = \begin{pmatrix}\Omega_{11}&\Omega_{1c}\\ \Omega_{c1}&\Omega_{cc}\end{pmatrix} = \begin{pmatrix}\Omega_{11}&\Omega_{12}&\Omega_{13}\\ \Omega_{21}&\Omega_{22}&\Omega_{23}\\ \Omega_{31}&\Omega_{32}&\Omega_{33}\end{pmatrix} = g(1)^{-1}\Sigma_\epsilon\,(g(1)')^{-1},$$
and $B_{1.c}(r) = B_1(r) - \Omega_{1c}\Omega_{cc}^{-1}B_c(r)$ is a $p_1$-dimensional Brownian motion, which is independent of $B_c(r)$, with covariance
$$\Omega_{1.c} = \Omega_{11} - \Omega_{1c}\Omega_{cc}^{-1}\Omega_{c1}.$$
An estimator of $\Omega_{1.c}$ is given by
$$\hat\Omega_{1.c} = (\hat\Phi_1'\hat\Sigma_\epsilon^{-1}\hat\Phi_1)^{-1}.$$
With these definitions we can state our first result of the paper (which is proved in Appendix B):
Theorem 1.
Under Assumptions 1 and 2 for the triangular VECM representation we have:
(A) Consistency:
$$(i)\ \hat\Phi\to_p\Phi;\quad (ii)\ \hat\Sigma_\epsilon\to_p\Sigma_\epsilon;\quad (iii)\ \hat\Omega_{1.c}\to_p\Omega_{1.c};\quad (iv)\ \hat\Psi\to_p\Psi;\quad (v)\ \hat\Theta\to_p0;\quad (vi)\ \hat A\to_pA.$$
(B) Asymptotic distribution of coefficients to nonstationary regressors: Under Assumptions 1 and 2 we have ($N = T-h$):
$$(i)\ \big[N\hat\Phi_2,\ N\hat\Theta,\ N^2\hat\Phi_3\big]\to_d g(1)\int_0^1dB\,F'\Big(\int_0^1FF'\Big)^{-1},\qquad (ii)\ N(\hat A-A)\to_d\int_0^1dB_{1.c}\,L'\Big(\int_0^1LL'\Big)^{-1}$$
where $F(u) = \big[B_c(u)',\ \big(\int_0^uB_3(v)dv\big)'\big]'$, $F_a(u) = \big[B_2(u)',\ \big(\int_0^uB_3(v)dv\big)'\big]'$ and $L(u) = B_3(u) - \int_0^1B_3F_a'\big(\int_0^1F_aF_a'\big)^{-1}F_a(u)$.
(C) Asymptotic distribution of coefficients to stationary regressors: Let $L_h$ be a sequence of $(p^2(h-2)+p(2p_1+p_2))\times J$ matrices such that $L_h'(\Gamma_{ECM}^{-1}\otimes\Sigma_\epsilon)L_h\to M>0$ where $\Gamma_{ECM} = E(X_tX_t')$ with $X_t := [u_{1,t-1}',\Delta y_{1,t-1}',\Delta y_{2,t-1}',\Delta^2y_{t-1}',\ldots,\Delta^2y_{t-h+2}']'$.
Let
$$\underline\Pi = \big[\Phi_1,\ \Psi_1,\ \Psi_2,\ \Pi_1,\ \ldots,\ \Pi_{h-2}\big].$$
Then
$$N^{\frac12}L_h'\,\mathrm{vec}(\hat\Pi-\underline\Pi)\to_dN(0,M).$$
(D) Asymptotic distribution of Wald type tests: Finally, letting
$$\hat\Gamma_{ECM} = N^{-1}\big(\langle\tilde X_t,\tilde X_t\rangle - \langle\tilde X_t,\Delta y_{3,t-1}\rangle\langle\Delta y_{3,t-1},\Delta y_{3,t-1}\rangle^{-1}\langle\Delta y_{3,t-1},\tilde X_t\rangle\big)$$
where $\tilde X_t = [y_{1,t-1}',\Delta y_{1,t-1}',\Delta y_{2,t-1}',\Delta^2y_{t-1}',\ldots,\Delta^2y_{t-h+2}']'$, the Wald test for the null hypothesis $H_0: L_h'\mathrm{vec}(\underline\Pi) = l_h$ is given by
$$\hat\lambda_{Wald} = N\big(L_h'\mathrm{vec}(\hat\Pi)-l_h\big)'\big(L_h'(\hat\Gamma_{ECM}^{-1}\otimes\hat\Sigma_\epsilon)L_h\big)^{-1}\big(L_h'\mathrm{vec}(\hat\Pi)-l_h\big).$$
Then if $L_h$ is such that $L_h'(\Gamma_{ECM}^{-1}\otimes\Sigma_\epsilon)L_h\to M>0$, under the null hypothesis $\hat\lambda_{Wald}\to_d\chi^2(J)$.
The theorem provides the asymptotic distributions of the OLS estimates in the triangular system. Note that in this somewhat special case the properties of the regressor components (stationary or not) are known such that for each entry the convergence speed is known. Correspondingly the definition of the regressor vector X ˜ t involves only lags of y t but omits all nonstationary regressors except the ones cointegrated with Δ y 3 , t 1 .
The assumptions on $L_h$ are more restrictive than needed. Lewis and Reinsel (1985) and Saikkonen and Lütkepohl (1996) only require that $L_h$ has full column rank when deriving the normalized convergence to a normal distribution with unit variance as the limit for
$$N^{\frac12}\big(L_h'(\Gamma_{ECM}^{-1}\otimes\Sigma_\epsilon)L_h\big)^{-1/2}L_h'\,\mathrm{vec}(\hat\Pi-\underline\Pi).$$
Similar arguments could be used here.
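In practice the Wald statistic of part (D) is a few lines of linear algebra; the following sketch (names are ours) uses column-major vec to match the Kronecker form of the asymptotic variance:

```python
import numpy as np

def wald_stat(Pi_hat, l_h, L, Gamma_hat, Sigma_hat, N):
    """Wald statistic of Theorem 1 (D) for H0: L' vec(Pi) = l_h;
    asymptotically chi^2(J) with J = L.shape[1] under the null."""
    d = L.T @ Pi_hat.flatten(order="F") - l_h          # column-major vec
    V = L.T @ np.kron(np.linalg.inv(Gamma_hat), Sigma_hat) @ L
    return N * d @ np.linalg.solve(V, d)
```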

3.2. Estimation in the General VECM Representation

The previous section dealt with the special case that a triangular representation is used and hence knowledge of the matrices $[\beta,\beta_1,\beta_2]$ is given. This section provides a result for the general case, which, however, is limited to the coefficients of the stationary components. Since a general process generated according to Assumption 1 can be rewritten into a triangular representation using the knowledge of $[\beta,\beta_1,\beta_2]$, some asymptotic properties of the unrestricted OLS estimators can be derived from Theorem 1 for the general case (the following theorem is proved in Appendix C):
Theorem 2.
Let the regressor vector $Z_{t,h} = [y_{t-1}',\Delta y_{t-1}',\Delta^2y_{t-1}',\ldots,\Delta^2y_{t-h+2}']'$ and define
$$\underline\Lambda = \big[\Phi,\ \Psi,\ \Pi_1,\ \ldots,\ \Pi_{h-2}\big],\qquad \tilde\Lambda = \langle\Delta^2y_t,Z_{t,h}\rangle\langle Z_{t,h},Z_{t,h}\rangle^{-1},\qquad \tilde\Gamma_{ECM} = N^{-1}\langle Z_{t,h},Z_{t,h}\rangle.$$
Then under Assumptions 1 and 2 it follows that $\tilde\Lambda - \underline\Lambda = o_P(1)$.
Furthermore, let $L_h\in\mathbb{R}^{p^2(h+2)\times J}$ be such that $L_h'(\tilde\Gamma_{ECM}^{-1}\otimes\Sigma_\epsilon)L_h\to M>0$. Then
$$N^{\frac12}L_h'\,\mathrm{vec}(\tilde\Lambda-\underline\Lambda)\to_dN(0,M).$$
Besides consistency the theorem implies that linear combinations of OLS estimators are asymptotically normal and hence allow standard inference, if the asymptotic variance is nonsingular. One application of such results is the so called 'surplus lag' formulation in the context of Granger causality testing, see Bauer and Maynard (2012); Dolado and Lütkepohl (1996).
Finally note that this section does not contain results with regard to the cointegrating rank or the cointegrating space. The theorem above merely allows testing coefficients corresponding to stationary regressors. Therefore the usage is limited to somewhat special situations like the surplus-lag causality tests. However, it is also relevant for impulse response analysis, compare Inoue and Kilian (2020).

4. Rank Restricted Regression

The previous sections show that for the estimators discussed in those sections full inference on all coefficients is only possible when information on the matrices $\beta$, $\beta_1$ and $\beta_2$ exists. The dimensions of these matrices relate to the rank of $\Phi = \alpha\beta'$ and, conditional on this, to the rank of $\bar\alpha_\perp'\Psi\bar\beta_\perp$. The two rank restrictions make estimation and specification more complex than in the I(1) case.
Johansen (1995) provides the two-step approach 2SI2 that can be used for estimation and specification of the two integer-valued parameters $p_1$ and $p_2$. Paruolo and Rahbek (1999) extend the 2SI2 procedure suggested in Section 8 of Johansen (1997). Paruolo (2000) shows that this 2SI2 procedure achieves the same asymptotic distribution as pseudo maximum likelihood estimation which could be performed subsequent to 2SI2 estimation. This makes the procedure attractive from a practical point of view. In this section we show that these approaches extend naturally to the long VAR case. The main focus here lies on the derivation of the asymptotic properties of the rank tests.
Recall the long VAR approximation given as
$$\Delta^2y_t = \Phi y_{t-1} + \Psi\Delta y_{t-1} + \sum_{j=1}^{h-2}\Pi_j\Delta^2y_{t-j} + e_t\tag{9}$$
where $\Phi = \alpha\beta'$ has reduced rank $r < p$ and $\bar\alpha_\perp'\Psi\bar\beta_\perp = \zeta\eta'$ has reduced rank $s < p-r$. In this notation the 2SI2 procedure works as follows: In the first step the rank constraint on $\bar\alpha_\perp'\Psi\bar\beta_\perp$ is neglected, estimating $\alpha$ and $\beta$ by using reduced-rank regression (RRR). Then in the second step the reduced rank of $\bar\alpha_\perp'\Psi\bar\beta_\perp$ is imposed using RRR in a transformed equation.
In more detail, using the Johansen notation we denote by $R_{0t}$, $R_{1t}$ and $R_{2t}$ the residuals of regressing $\Delta^2y_t$, $\Delta y_{t-1}$ and $y_{t-1}$ on $\Delta^2y_{t-1},\ldots,\Delta^2y_{t-h+2}$, respectively; then we can rewrite (9) as
$$R_{0t} = \alpha\beta'R_{2t} + \Psi R_{1t} + \tilde e_t.\tag{10}$$
Concentrating out $R_{1t}$ and denoting the residuals as $R_{0.1t}$ and $R_{2.1t}$ we obtain, with $S_{ij.1} = \langle R_{it},R_{jt}\rangle - \langle R_{it},R_{1t}\rangle\langle R_{1t},R_{1t}\rangle^{-1}\langle R_{1t},R_{jt}\rangle$, the solution to the RRR problem from solving the eigenvalue problem
$$\big|\lambda S_{22.1} - S_{20.1}S_{00.1}^{-1}S_{02.1}\big| = 0,$$
with solutions $1>\hat\lambda_1\ge\ldots\ge\hat\lambda_p>0$ ordered by decreasing size and corresponding vectors $V = (v_1,\ldots,v_p)$. Then as usual the trace statistic for testing the model $H_r$ with $\mathrm{rank}(\Phi)\le r$, $r<p$, in the model $H_p$ with $\mathrm{rank}(\Phi)\le p$, is given as
$$Q_r = -2\log Q(H_r\,|\,H_p) = -T\sum_{i=r+1}^p\log(1-\hat\lambda_i).$$
The optimizers for $\alpha,\beta$ are given by
$$\hat\beta = (v_1,\ldots,v_r),\qquad \hat\alpha = S_{02.1}\hat\beta,\qquad \hat\Sigma_\epsilon = S_{00.1} - \hat\alpha\hat\alpha'.$$
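The first step is a standard reduced-rank regression; a minimal sketch via the generalized symmetric eigenvalue problem (names are ours; scipy's eigh normalizes the eigenvectors so that $\hat\beta'S_{22.1}\hat\beta = I$, matching the usual convention):

```python
import numpy as np
from scipy.linalg import eigh

def rrr_step1(R0, R1, R2, r):
    """2SI2 step 1: RRR of R0 on R2 after concentrating out R1 (sketch).
    Rows of R0, R1, R2 hold the residual vectors R_0t, R_1t, R_2t."""
    def conc(a):                      # concentrate out R1
        return a - R1 @ np.linalg.solve(R1.T @ R1, R1.T @ a)
    R0c, R2c = conc(R0), conc(R2)
    S00, S02, S22 = R0c.T @ R0c, R0c.T @ R2c, R2c.T @ R2c
    lam, V = eigh(S02.T @ np.linalg.solve(S00, S02), S22)   # ascending order
    lam, V = lam[::-1], V[:, ::-1]                          # decreasing order
    Q_r = -len(R0) * np.sum(np.log(1.0 - lam[r:]))          # trace statistic
    beta = V[:, :r]
    alpha = S02 @ beta
    return Q_r, alpha, beta, lam
```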
In the second step, given $\alpha$ and $\beta$ known, we obtain by multiplying (10) by $\bar\alpha_\perp'$ that
$$\bar\alpha_\perp'R_{0t} = \bar\alpha_\perp'\Psi(\bar\beta\beta'+\bar\beta_\perp\beta_\perp')R_{1t} + \bar\alpha_\perp'\tilde e_t = \zeta\eta'(\beta_\perp'R_{1t}) + C(\beta'R_{1t}) + \bar\alpha_\perp'\tilde e_t.$$
Note that $\beta'R_{1t}$ is stationary. Thus, concentrating out $C$ and denoting the residuals as $R_{\bar\alpha_\perp.\beta,t}$ and $R_{\beta_\perp.\beta,t}$, respectively, we can define $S_{ab.\beta} := \langle R_{a.\beta,t},R_{b.\beta,t}\rangle$ for $a,b = \bar\alpha_\perp$ or $\beta_\perp$. Then the likelihood ratio test of the model $H_{r,s}$ with $\mathrm{rank}(\zeta\eta')\le s$, $s<p-r$, in the model $H_{r0}$ with $\mathrm{rank}(\bar\alpha_\perp'\Psi\bar\beta_\perp) = p-r$ is given by
$$Q_{r,s} = -2\log Q(H_{r,s}\,|\,H_{r0}) = -T\sum_{i=s+1}^{p-r}\log(1-\hat\rho_i),$$
where $1>\hat\rho_1\ge\ldots\ge\hat\rho_{p-r}>0$ are the solutions of the eigenvalue problem
$$\big|\rho S_{\beta_\perp\beta_\perp.\beta} - S_{\beta_\perp\bar\alpha_\perp.\beta}S_{\bar\alpha_\perp\bar\alpha_\perp.\beta}^{-1}S_{\bar\alpha_\perp\beta_\perp.\beta}\big| = 0,$$
and the corresponding eigenvectors are $W = (w_1,\ldots,w_{p-r})$. Estimators of $\zeta$ and $\eta$ are given by
$$\hat\eta = (w_1,\ldots,w_s),\qquad \hat\zeta = S_{\bar\alpha_\perp\beta_\perp.\beta}\hat\eta.$$
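The second step has the same structure on transformed data; a sketch (null_space returns orthonormal bases, so $\bar\alpha_\perp = \alpha_\perp$ and $\bar\beta_\perp = \beta_\perp$ here; in 2SI2 the first-step estimates are plugged in for alpha and beta):

```python
import numpy as np
from scipy.linalg import eigh, null_space

def rrr_step2(R0, R1, alpha, beta, s):
    """2SI2 step 2: RRR of abar_perp' R_0t on beta_perp' R_1t after
    concentrating out the stationary direction beta' R_1t (sketch)."""
    a_perp = null_space(alpha.T)          # orthonormal basis of alpha_perp
    b_perp = null_space(beta.T)
    Y, X, Z = R0 @ a_perp, R1 @ b_perp, R1 @ beta
    def conc(a):                          # concentrate out beta' R_1t
        return a - Z @ np.linalg.solve(Z.T @ Z, Z.T @ a)
    Yc, Xc = conc(Y), conc(X)
    Saa, Sab, Sbb = Yc.T @ Yc, Yc.T @ Xc, Xc.T @ Xc
    rho, W = eigh(Sab.T @ np.linalg.solve(Saa, Sab), Sbb)
    rho, W = rho[::-1], W[:, ::-1]        # decreasing order
    Q_rs = -len(R0) * np.sum(np.log(1.0 - rho[s:]))
    eta = W[:, :s]
    zeta = Sab @ eta
    return Q_rs, zeta, eta, rho
```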
For the 2SI2 procedure, in this second step the first step estimates $\hat\alpha$ and $\hat\beta$ are used in place of the unknown true quantities. Then we obtain the following analogue of the results in the finite order VAR framework (the proof is given in Appendix D):
Theorem 3.
Let the data be generated according to Assumption 1 and let the VAR order fulfill Assumption 2. Then the following asymptotic results hold:
(A) The asymptotic distribution of the likelihood ratio statistic $Q_r$ under the null hypothesis $H_r$ is given by
$$Q_r\to_d\mathrm{tr}\Big\{\int_0^1dW^*F'\Big(\int_0^1FF'du\Big)^{-1}\int_0^1F\,dW^{*\prime}\Big\},$$
where $W^* = (\alpha_\perp'\Sigma_\epsilon\alpha_\perp)^{-1/2}\alpha_\perp'W$, $F_a(u) = \big[B_2(u)',\ \big(\int_0^uB_3(v)dv\big)'\big]'$ and $F(u) = F_a(u) - \int_0^1F_aB_3'\big(\int_0^1B_3B_3'\big)^{-1}B_3(u)$. This is identical to the distribution obtained in the finite VAR case.
(B) The asymptotic distribution of the likelihood ratio statistic $Q_{r,s}$ under the null hypothesis $H_{r,s}$ is given by
$$Q_{r,s}\to_d\mathrm{tr}\Big\{\int_0^1dW_2B_3'\Big(\int_0^1B_3B_3'du\Big)^{-1}\int_0^1B_3\,dW_2'\Big\},$$
where $W_2(u) = (\alpha_2'\Sigma_\epsilon\alpha_2)^{-1/2}\alpha_2'W(u)$.
(C) The asymptotic distribution of the test statistic $S_{r,s} = Q_r + Q_{r,s}$ under the null hypothesis $H_{r,s}$ is given by
$$S_{r,s}\to_d\mathrm{tr}\Big\{\int_0^1dW^*F'\Big(\int_0^1FF'du\Big)^{-1}\int_0^1F\,dW^{*\prime}\Big\} + \mathrm{tr}\Big\{\int_0^1dW_2B_3'\Big(\int_0^1B_3B_3'du\Big)^{-1}\int_0^1B_3\,dW_2'\Big\}.$$
(D) Using suitable normalizations all estimators are consistent: $\hat\alpha(c_\alpha'\hat\alpha)^{-1}\to_p\alpha$, $\hat\beta(c_\beta'\hat\beta)^{-1}\to_p\beta$, $\hat\zeta(c_\zeta'\hat\zeta)^{-1}\to_p\zeta$, $\hat\eta(c_\eta'\hat\eta)^{-1}\to_p\eta$, $\hat\Psi\to_p\Psi$, $\hat\Pi_j\to_p\Pi_j$, where for example $c_\alpha'\alpha = I_r$.
(E) The asymptotic distributions of the coefficients to the nonstationary regressors are identical to the ones in the finite order VAR case stated in Paruolo (2000). The asymptotic distributions of the coefficients $\hat\Pi_j$ are identical to the ones in Theorem 1.
The main message of the theorem is that the 2SI2 procedure shows the same asymptotic properties including the rank tests as in the finite order VAR case. As usual also restricting the coefficients for the non-stationary regressors does not influence the asymptotics for the coefficients corresponding to the stationary regressors.
Note that Paruolo (2000) shows that in the finite VAR case 2SI2 estimates have the same asymptotic distribution as pseudo maximum likelihood (pML) estimates maximizing the Gaussian likelihood. The first order conditions for the pML estimates of the coefficients to the non-stationary regressors provided in the first display on p. 548 in Paruolo (2000) depend on the data only via the matrices $S_{ij}$ defined above. These matrices depend on the lag length of the VECM only via the concentration step. The proof of our Theorem 3 shows that these terms have the same asymptotic distributions for the finite order VAR and the long VAR. Theorem 4.3 of Paruolo (2000) shows that the asymptotic distribution of the coefficients corresponding to stationary regressors does not depend on the distribution of the coefficients corresponding to the non-stationary regressors as long as the latter are estimated super-consistently. Thus our results imply that also in the long VAR case the asymptotic distributions of all estimates for the 2SI2 and the pML approach are identical.

5. Initial Guess for VARMA Estimation

One usage of long VAR approximations is as a preliminary estimator for VARMA model estimation. Hannan and Kavalieris (1986) provide properties of such an approach in the stationary case; Lütkepohl and Claessen (1997) extend the procedure to the I(1) case. Here we extend this idea to the I(2) case.
The goal is to provide a consistent initial guess for the estimation of a VARMA model for I(2) processes. In this respect we assume the following data generating process:
Assumption 3 (VARMA dgp).
The process $(y_t)_{t\in\mathbb{Z}}$ is generated as the solution to the state space equations
$$y_t = Cx_t + \varepsilon_t,\qquad x_{t+1} = Ax_t + B\varepsilon_t$$
where $(\varepsilon_t)_{t\in\mathbb{Z}}$ denotes white noise subject to the same assumptions as in Assumption 1.
Here $x_t\in\mathbb{R}^n$ is the unobserved state process. The system $(A,B,C)$ is assumed to be minimal and in the canonical form of Bauer and Wagner (2012), that is
$$A = \begin{pmatrix}I_c&I_c&0&0\\0&I_c&0&0\\0&0&I_d&0\\0&0&0&A_\bullet\end{pmatrix},\qquad B = \begin{pmatrix}B_1\\B_2\\B_3\\B_\bullet\end{pmatrix},\qquad C = (C_1,\ C_2,\ C_3,\ C_\bullet),$$
where $|\lambda_{max}(A_\bullet)|<1$ (the matrix $A_\bullet$ is stable), $C_1'C_1 = I_c$, $C_3'C_3 = I_d$, $C_1'C_3 = 0$, $C_1'C_2 = 0$, $C_2'C_3 = 0$. Furthermore, the system is strictly minimum-phase, that is $\rho_0 = |\lambda_{max}(A-BC)| < 1$. Finally the matrix $\bar A = A-BC$ is nonsingular.
At time $t = 0$ the state $x_0 = [x_{0,u}',x_{0,\bullet}']'$, $x_{0,u}\in\mathbb{R}^{2c+d}$, is such that $x_{0,u}$ is deterministic and $x_{0,\bullet} = \sum_{j=1}^{\infty}A_\bullet^{j-1}B_\bullet\varepsilon_{-j}$ denotes the stationary solution to the stable part of the system.
In this situation it follows that $(y_t)_{t\in\mathbb{Z}}$ is an I(2) process in the definition of Bauer and Wagner (2012), that is, its second difference is a stationary VARMA process. The integers $c$ and $d$ are connected to the integers $p_1,p_2,p_3$ via $c = p_3$, $d = p_2$ such that $p_1 = p-c-d$. It can furthermore be shown that a process generated under Assumption 3 possesses a VAR($h$) approximation:
$$y_t + \sum_{j=1}^hA_jy_{t-j} = \varepsilon_t + C(A-BC)^hx_{t-h}$$
where $A_j = -C(A-BC)^{j-1}B$, $\|A_j\|\le\mu\rho^j$ ($0\le\rho_0<\rho<1$), converges to zero exponentially fast for $j\to\infty$ due to the strict minimum-phase condition. Letting $h\to\infty$ then implies the existence of a VAR($\infty$) representation. It follows that for such systems $A(z)$ converges absolutely for $|z|<\rho^{-1}$ where $1<\rho^{-1}$.
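The geometric decay of the VAR coefficients is easy to illustrate numerically; a toy check with a hypothetical one-dimensional system (purely illustrative, not the canonical form above):

```python
import numpy as np

A = np.array([[0.6]]); B = np.array([[0.4]]); C = np.array([[1.0]])
Abar = A - B @ C                      # here rho_0 = |0.2| < 1
Aj = [-(C @ np.linalg.matrix_power(Abar, j - 1) @ B) for j in range(1, 11)]
print([round(float(np.linalg.norm(M)), 6) for M in Aj])  # decays like 0.2**(j-1)
```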
From the autoregressive representation the VECM representation can be obtained:
$$a(z) = I_p + \sum_{j=1}^{\infty}A_jz^j = I_p - \sum_{j=1}^{\infty}C\bar A^{j-1}Bz^j = (1-z)^2I_p - \Phi z - \Psi z(1-z) - (1-z)^2\sum_{j=1}^{\infty}\Pi_jz^j$$
where $\bar A = A-BC$, such that
$$\Delta^2y_t = \Phi y_{t-1} + \Psi\Delta y_{t-1} + \sum_{j=1}^{\infty}\Pi_j\Delta^2y_{t-j} + \varepsilon_t.$$
A comparison of power series coefficients provides the identities:
$$\Phi = -I_p + C(I-\bar A)^{-1}B,\qquad \Psi = -I_p - C(I-\bar A)^{-2}\bar AB,\qquad \Pi_j = \big[C\bar A^2(I-\bar A)^{-2}\big]\bar A^{j-1}B = D\bar A^{j-1}B,\quad j = 1,2,\ldots
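These identities can be verified by comparing power series coefficients numerically. The following sketch draws a random (hypothetical) triple and checks, coefficient by coefficient in $z^j$, that the autoregressive and the VECM form of $a(z)$ agree:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 3, 2
Abar = 0.5 * rng.standard_normal((n, n))   # hypothetical Abar with I - Abar invertible
B, C = rng.standard_normal((n, p)), rng.standard_normal((p, n))
I, M = np.eye(p), np.linalg.inv(np.eye(n) - Abar)
Phi = -I + C @ M @ B
Psi = -I - C @ M @ M @ Abar @ B
D = C @ Abar @ Abar @ M @ M
for j in range(1, 6):
    Aj = -C @ np.linalg.matrix_power(Abar, j - 1) @ B   # coefficient of z^j in a(z)
    Pi = [D @ np.linalg.matrix_power(Abar, k - 1) @ B for k in range(1, j + 1)]
    # coefficient of z^j in (1-z)^2 I - Phi z - Psi z(1-z) - (1-z)^2 sum_k Pi_k z^k
    rhs = -Pi[j - 1]
    if j >= 2:
        rhs = rhs + 2 * Pi[j - 2]
    if j >= 3:
        rhs = rhs - Pi[j - 3]
    if j == 1:
        rhs = rhs - 2 * I - Phi - Psi
    if j == 2:
        rhs = rhs + I + Psi
    assert np.allclose(Aj, rhs)
```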
It follows that the coefficients $\Pi_j$, $j = 1,2,\ldots$ form the impulse response of a rational transfer function of order smaller than or equal to $n$. If $\bar A$ is nonsingular then the order equals $n$ and the system $(\bar A,B,D)$ is minimal. Furthermore, it follows that for arbitrary $\Phi$ and $\Psi$ the transfer function
$$a(z) = (1-z)^2I_p - \Phi z - \Psi z(1-z) - (1-z)^2zD(I-z\bar A)^{-1}B$$
is a rational transfer function with the additional property that
$$a(1) = -\Phi = -\alpha\beta',\qquad \bar\alpha_\perp'\dot a(1)\bar\beta_\perp = \bar\alpha_\perp'(\Psi-\Phi)\bar\beta_\perp = -\bar\alpha_\perp'C(I-\bar A)^{-2}B\bar\beta_\perp = \zeta\eta'.$$
Consequently Φ and Ψ determine the integration properties of processes generated using a ( z ) .
Conversely, whenever the constraints
$$-I_p + C(I-\bar A)^{-1}B = \alpha\beta',\qquad -\bar\alpha_\perp'C(I-\bar A)^{-2}B\bar\beta_\perp = \zeta\eta'$$
hold, the corresponding triple $(A,B,C)$ corresponds to an I(2) process (if the eigenvalues of $A$ are in the closed unit disc). Defining $C^* = \bar\alpha'C$, $C^\perp = \bar\alpha_\perp'C$ we obtain
$$-\bar\alpha' + C^*(I-\bar A)^{-1}B = \beta',\qquad -\bar\alpha_\perp' + C^\perp(I-\bar A)^{-1}B = 0,\qquad -C^\perp(I-\bar A)^{-2}B\bar\beta_\perp = \zeta\eta'.\tag{22}$$
The third equation does not have a solution in $C^\perp$ for fixed $B\bar\beta_\perp$, $\zeta$, $\eta$, if the row space of $B\bar\beta_\perp$ does not contain the space spanned by the rows of $\eta'$. In this case row-wise projection of $\eta'$ onto the space spanned by the rows of $B\bar\beta_\perp$ allows for (not necessarily unique) solutions in $C^\perp$. In the limit no projection is needed. Consequently for large enough $T$ the projected matrix will have full row rank. The second equation then determines $\bar\alpha_\perp'$, which in turn determines $\bar\alpha'$ up to the choice of the basis, such that $\bar\alpha' = T_C\bar\alpha_o'$ for some full row rank matrix $\bar\alpha_o'\in\mathbb{R}^{r\times p}$, $\bar\alpha_o'\bar\alpha_\perp = 0$. The first equation then can be rewritten as
$$[-T_C,\ C^*]\underbrace{\begin{pmatrix}\bar\alpha_o'\\ (I-\bar A)^{-1}B\end{pmatrix}}_{=:R_1} = \beta'.$$
The second equation shows that the row space of $(I-\bar A)^{-1}B$ contains the row space of $\bar\alpha_\perp'$. Thus the matrix $R_1$ has full row rank. It follows that this equation has solutions.
Having obtained a solution for $C^*$, $C^\perp$, $\bar\alpha$, $\bar\alpha_\perp$, then $C$ is obtained from
$$C = [\alpha,\ \alpha_\perp]\begin{pmatrix}C^*\\ C^\perp\end{pmatrix}.$$
A unique solution can then be obtained by adding the restrictions $\Pi_j = C(I-\bar A)^{-2}\bar A^{j+1}B$, $j = 1,2,\ldots,2n$, which for the estimates are to be solved in a least squares sense among all solutions to equations (22).
It then follows that for the true matrices $\Phi,\Psi,\Pi_j$ the only solution for given $\bar A,B$ consists in the corresponding true $C$. These facts can therefore be used in order to develop an initial guess for subsequent pseudo likelihood maximization using the parameterization of I(2) processes in state space representation: Given the integer-valued parameters $n$, $c$ and $d$:
  • Obtain a long VAR approximation $\hat\Phi,\hat\Psi,\hat\Pi_j$, $j = 1,2,\ldots$, including $\hat\Phi = \hat\alpha\hat\beta'$ and $\hat\zeta\hat\eta' = \hat{\bar\alpha}_\perp'\hat\Psi\hat{\bar\beta}_\perp$, using the 2SI2 approach.
  • Choose an integer $f\ge n$. Use the algorithm described in Appendix F to obtain estimates $(\hat{\bar A},\hat B,\hat D)$ realizing the impulse response $\hat\Pi_j$, $j = 1,\ldots,2f$, from the Hankel matrix with $f$ block columns and $f$ block rows (a generic sketch of such a realization step is given below).
  • Project the rows of $\hat\eta'$ onto the space spanned by the rows of $\hat B\hat{\bar\beta}_\perp$ to obtain $\tilde\eta$.
  • Obtain a unique solution $\hat C$ solving (22) such that the matrices $\tilde\Pi_j = \hat C(I-\hat{\bar A})^{-2}\hat{\bar A}^{j+1}\hat B$, $j = 1,2,\ldots,2n$, have minimal Euclidean distance to $\hat\Pi_j$, $j = 1,2,\ldots,2n$.
  • Transform the corresponding system $(\hat{\bar A}+\hat B\hat C,\ \hat B,\ \hat C)$ to the canonical form of Bauer and Wagner (2012) to obtain the estimate $(\tilde A,\tilde B,\tilde C)$.
The algorithm obtains a minimal state space system of order $n$ in the canonical form for I(2) processes given in Bauer and Wagner (2012) and hence can be used as an initial guess for subsequent pseudo-likelihood optimization in the set $M_n(r,s)$ of all order $n$ rational transfer functions corresponding to I(2) processes with state space unit root structure $((0,(c,c+d)))$.
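For the realization step the paper refers to an algorithm in Appendix F; the following is a generic Ho-Kalman-type sketch (an assumption on our part, not necessarily the cited algorithm) that recovers a system of order n from the impulse response sequence via an SVD of the block Hankel matrix:

```python
import numpy as np

def realize_from_impulse(Pi, n, f):
    """Ho-Kalman-type realization sketch: recover (Abar, B, D) of order n
    from impulse responses Pi[k] = Pi_{k+1}; uses Pi_1 .. Pi_{2f-1}."""
    p = Pi[0].shape[0]
    H = np.block([[Pi[i + j] for j in range(f)] for i in range(f)])  # block Hankel
    U, s, Vt = np.linalg.svd(H)
    U, s, Vt = U[:, :n], s[:n], Vt[:n]          # rank-n truncation
    O = U * np.sqrt(s)                           # observability factor
    Ctrl = np.sqrt(s)[:, None] * Vt              # controllability factor
    D = O[:p]                                    # first block row of O
    B = Ctrl[:, :p]                              # first block column of Ctrl
    Abar = np.linalg.lstsq(O[:-p], O[p:], rcond=None)[0]   # shift invariance
    return Abar, B, D
```

With $(\hat{\bar A},\hat B,\hat D)$ in hand, the remaining steps of the list above solve (22) for $\hat C$ and transform $(\hat{\bar A}+\hat B\hat C,\hat B,\hat C)$ to the canonical form.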
Theorem 4 (Consistent initial guess).
Let $(y_t)_{t\in\mathbb{Z}}$ denote a process generated using the system $(A_0,B_0,C_0)$ according to Assumption 3 and let the system $(\tilde A,\tilde B,\tilde C)$ be estimated based on the long VAR approximation with lag order chosen according to Assumption 2. Then $(\tilde A,\tilde B,\tilde C)$ is a weakly consistent estimator of the data generating system $(A_0,B_0,C_0)$ in the sense that $\tilde C\tilde A^j\tilde B\to_pC_0A_0^jB_0$, $j = 0,1,\ldots$, and hence the corresponding transfer functions converge in pointwise topology.
The proof of this theorem can be found in Appendix E.

6. Conclusions

In this paper the theory on long VAR approximation of general linear dynamic processes is extended to the case of I(2) processes. We find that we need slightly tighter upper and lower bounds on the lag order in the approximations. The tighter bounds are not needed for all results and appear not very restrictive for applications.
The main results are completely analogous to the I(1) case: The asymptotics are in many respects identical to the finite order VAR case. Asymptotic distributions for the coefficients of non-stationary variables are the same as in the finite order VAR case. This holds true both for unrestricted OLS estimates as well as the 2SI2 approach in the Johansen framework. Tests on cointegrating ranks show identical asymptotic distributions under the null as in the finite order VAR case and hence do not require other tables. In this respect the main conclusion is that the usual procedure of estimating the lag order in the first step and then applying the Johansen procedure for the estimated lag order is justified also for processes generated from a VAR($\infty$) that is approximated with a choice of the lag order lying within the prescribed bounds.
Additionally in the VARMA case the long VAR approximation can be used in order to derive consistent initial guesses that can be used in subsequent pseudo likelihood estimation.
Thus the paper provides both a full extension of the results that have been achieved in the I(1) case and a useful starting point for subsequent VARMA modeling. The latter may be preferable in situations which require a high VAR order or involve a large number of variables to be modeled, where VARMA models can be more parsimonious than VAR models.

Author Contributions

The two authors of the paper have contributed equally, via joint efforts, regarding both ideas, research, and writing. Conceptualization, Y.L. and D.B.; methodology, Y.L. and D.B.; software, not applicable; validation, not applicable; formal analysis, Y.L. and D.B.; investigation, Y.L. and D.B.; resources, not applicable; writing–original draft preparation, Y.L. and D.B.; writing–review and editing, Y.L. and D.B.; visualization, not applicable; supervision, not applicable; project administration, D.B.; funding acquisition, D.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, Projektnummer 276051388), which is gratefully acknowledged. We acknowledge support for the publication costs by the Deutsche Forschungsgemeinschaft and the Open Access Publication Fund of Bielefeld University.

Acknowledgments

The reviewers and in particular the two guest editors provided significant comments that helped in improving the paper, which is gratefully acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Preliminaries

The theory in this paper follows closely the arguments in Lewis and Reinsel (1985) and its extension to the I(1) case in Saikkonen and Lütkepohl (1996). To this end consider the finite order VECM approximation:
$$\Delta^2y_t = \Phi y_{t-1} + \Psi\Delta y_{t-1} + \sum_{j=1}^{h}\Pi_j\Delta^2y_{t-j} + e_t.\tag{A1}$$
The properties of the various estimators heavily use the following rewriting of the approximation using the triangular representation of y t :
$$
\begin{aligned}
\Delta^2y_t &= [\Phi_1,\Phi_2,\Phi_3]\begin{pmatrix}A\Delta y_{3,t-1}+u_{1,t-1}\\ y_{2,t-1}\\ y_{3,t-1}\end{pmatrix} + [\Psi_1,\Psi_2,\Psi_3]\begin{pmatrix}Au_{3,t-1}+\Delta u_{1,t-1}\\ u_{2,t-1}\\ \Delta y_{3,t-1}\end{pmatrix} + \sum_{j=1}^h[\Pi_{j,1},\Pi_{j,2},\Pi_{j,3}]\begin{pmatrix}A\Delta u_{3,t-j}+\Delta^2u_{1,t-j}\\ \Delta u_{2,t-j}\\ u_{3,t-j}\end{pmatrix} + e_t\\
&= \Phi_2y_{2,t-1} + \Phi_3y_{3,t-1} + \Theta\Delta y_{3,t-1} + \sum_{j=1}^h\Xi_ju_{t-j} + [\Xi_{h+1,1},\Xi_{h+1,2}]\begin{pmatrix}u_{1,t-h-1}\\ u_{2,t-h-1}\end{pmatrix} + \Xi_{h+2,1}\tilde u_{1,t-h-2} + e_t,
\end{aligned}\tag{A2}
$$
where $\tilde u_{1,t-h-2} := u_{1,t-h-2} - Au_{3,t-h-1}$ and $\Phi_2 = \Phi_3 = 0$, $\Theta = \Phi_1A + \Psi_3 = 0$, and
$$
\begin{aligned}
\Xi_1 &= [\Phi_1+\Psi_1+\Pi_{1,1},\ \Psi_2+\Pi_{1,2},\ (\Psi_1+\Pi_{1,1})A+\Pi_{1,3}],\\
\Xi_2 &= [-\Psi_1+\Pi_{2,1}-2\Pi_{1,1},\ \Pi_{2,2}-\Pi_{1,2},\ (\Pi_{2,1}-\Pi_{1,1})A+\Pi_{2,3}],\\
\Xi_j &= [\Pi_{j,1}-2\Pi_{j-1,1}+\Pi_{j-2,1},\ \Pi_{j,2}-\Pi_{j-1,2},\ (\Pi_{j,1}-\Pi_{j-1,1})A+\Pi_{j,3}],\quad j = 3,\ldots,h,\\
\Xi_{h+1,1} &= -2\Pi_{h,1}+\Pi_{h-1,1},\qquad \Xi_{h+1,2} = -\Pi_{h,2},\qquad \Xi_{h+2,1} = \Pi_{h,1}.
\end{aligned}
$$
Furthermore, we can see that $\sum_{j=1}^{h+2}\Xi_{j,1} = \Phi_1$, $\sum_{j=1}^{h+1}\Xi_{j,2} = \Psi_2$, and $\sum_{j=1}^{h}\Xi_{j,3} = \Psi_1A + \sum_{j=1}^h\Pi_{j,3}$. Finally $\Psi_1 = -\sum_{j=2}^{h+2}(j-1)\Xi_{j,1}$.
Note that in the reparametrization (A2), the I(1) components, $y_{c,t} := (y_{2,t}',\Delta y_{3,t}')'$, as well as the I(2) components, $y_{3,t-1}$, are isolated from the stationary ones, $u_{t-j}$, and have coefficients equal to zero, which facilitates the derivation of the asymptotic properties.
In the reparameterized setting define
$$\underline\Xi := [\Xi_1,\ldots,\Xi_h,\Xi_{h+1,1},\Xi_{h+1,2},\Xi_{h+2,1}]\in\mathbb{R}^{p\times(ph+2p_1+p_2)},$$
$$U_t := [u_{t-1}',\ldots,u_{t-h}',u_{1,t-h-1}',u_{2,t-h-1}',\tilde u_{1,t-h-2}']'\in\mathbb{R}^{(ph+2p_1+p_2)\times1},$$
$$\underline\Lambda := [\underline\Xi,\Phi_2,\Theta,\Phi_3] = [\underline\Xi,0]\in\mathbb{R}^{p\times p(h+2)},$$
$$W_t := [U_t',y_{c,t-1}',y_{3,t-1}']'\in\mathbb{R}^{p(h+2)\times1}.$$
Then we have
$$\Delta^2y_t = \underline\Lambda W_t + e_t,$$
and correspondingly,
$$\Delta^2y_t = \hat\Lambda W_t + \tilde e_t$$
where
$$\hat\Lambda = [\hat\Xi,\hat\Phi_2,\hat\Theta,\hat\Phi_3] = \langle\Delta^2y_t,W_t\rangle\langle W_t,W_t\rangle^{-1}$$
is the OLS estimator of $\underline\Lambda$. Here $\langle X_t,Z_t\rangle := \sum_{t=h+3}^TX_tZ_t'$.
Note that $W_t$ and the regressors in (A1) are in one-to-one correspondence. In the original Equation (A1), besides the nonstationary regressors $y_{c,t-1}$ and $y_{3,t-1}$, the regressor vector
$$\tilde X_t = [y_{1,t-1}',\Delta y_{1,t-1}',u_{2,t-1}',\Delta^2y_{t-1}',\ldots,\Delta^2y_{t-h}']'\in\mathbb{R}^{2p_1+p_2+ph}$$
occurs, which cointegrates with $\Delta y_{3,t-1}$ such that
$$X_t = \tilde X_t - [A',0,\ldots,0]'\Delta y_{3,t-1} = T_hU_t$$
is stationary. Here the nonsingular matrix $T_h\in\mathbb{R}^{(ph+2p_1+p_2)\times(ph+2p_1+p_2)}$ is block banded: writing $\Delta^2y_{t-j} = M_1u_{t-j} + M_2u_{t-j-1} + M_3u_{t-j-2}$ with
$$M_1 = \begin{pmatrix}I_{p_1}&0&A\\0&I_{p_2}&0\\0&0&I_{p_3}\end{pmatrix},\qquad M_2 = \begin{pmatrix}-2I_{p_1}&0&-A\\0&-I_{p_2}&0\\0&0&0\end{pmatrix},\qquad M_3 = \begin{pmatrix}I_{p_1}&0&0\\0&0&0\\0&0&0\end{pmatrix},$$
the rows of $T_h$ corresponding to $u_{1,t-1}$, $\Delta y_{1,t-1}$ and $u_{2,t-1}$ are $[I_{p_1},0,0]$, $[I_{p_1},0,A,-I_{p_1},0,0]$ and $[0,I_{p_2},0]$ (followed by zeros), while the rows corresponding to $\Delta^2y_{t-1},\ldots,\Delta^2y_{t-h}$ contain the band $M_1,M_2,M_3$ shifted by one block per lag; for the last lags the contributions of $u_{t-h-1}$ and $u_{t-h-2}$ are expressed in the coordinates $u_{1,t-h-1}$, $u_{2,t-h-1}$ and $\tilde u_{1,t-h-2}$.
Let $\underline\Pi := [\Phi_1,\Psi_1,\Psi_2,\Pi_1,\Pi_2,\ldots,\Pi_h]$, so that we have
$$\underline\Xi = \underline\Pi T_h.$$
It can be verified that $T_h$ is invertible. The asymptotic properties of $\hat\Lambda - \underline\Lambda$ are clarified in the next lemma:
Lemma A1.
Under the assumptions of Theorem 1, using $N = T-h-2$ as the effective sample size,
$$N^{\frac12}(\hat\Xi-\underline\Xi) = N^{-\frac12}\langle\varepsilon_t,U_t\rangle\big(EU_tU_t'\big)^{-1} + o_P(h^{-\frac12}),\qquad \big[N\hat\Phi_2,\ N\hat\Theta,\ N^2\hat\Phi_3\big]\Rightarrow g(1)\Big[\int_0^1dB\,B_c',\ \int_0^1dB\,H_3'\Big]\begin{pmatrix}\int_0^1B_cB_c'&\int_0^1B_cH_3'\\ \int_0^1H_3B_c'&\int_0^1H_3H_3'\end{pmatrix}^{-1}$$
where $H_3(u) = \int_0^uB_3(s)ds$.
Proof. 
The proof essentially shows that the coefficients corresponding to the stationary regressors and the ones corresponding to the integrated regressors can asymptotically be dealt with separately. Let $D_T := \mathrm{diag}\big[N^{-\frac12}I_{ph+2p_1+p_2},\ N^{-1}I_{p_2+p_3},\ N^{-2}I_{p_3}\big]$. Note that $N^{\frac12}(\hat\Xi-\underline\Xi)$, $N[\hat\Phi_2,\hat\Theta]$, and $N^2\hat\Phi_3$ are the 1st, 2nd and 3rd column blocks of $(\hat\Lambda-\underline\Lambda)D_T^{-1}$, respectively. Moreover, we have
$$(\hat\Lambda-\underline\Lambda)D_T^{-1} = \langle e_t,W_t\rangle D_T\big(D_T\langle W_t,W_t\rangle D_T\big)^{-1}.$$
Let $\hat R := D_T\langle W_t,W_t\rangle D_T$, and define $R := \mathrm{diag}\{\Gamma_u,R_2\}$, where $\Gamma_u = E[U_tU_t']$, and
$$R_2 := \begin{pmatrix}N^{-2}\langle y_{c,t-1},y_{c,t-1}\rangle & N^{-3}\langle y_{c,t-1},y_{3,t-1}\rangle\\ N^{-3}\langle y_{3,t-1},y_{c,t-1}\rangle & N^{-4}\langle y_{3,t-1},y_{3,t-1}\rangle\end{pmatrix}.$$
Note that each block of the matrix $R_2$ is of order $O_P(1)$, and moreover, both $R_2$ and its limit are almost surely invertible, as there is no cointegration between $y_{c,t-1}$ and $y_{3,t-1}$ (see Lemma 3.1.1 in Chan and Wei (1988), and Sims et al. (1990)). Note that
$$(\hat\Lambda-\underline\Lambda)D_T^{-1} - \langle\varepsilon_t,W_t\rangle D_TR^{-1} = \underbrace{\langle e_{1t},W_t\rangle D_TR^{-1}}_{=:E_1} + \underbrace{\langle e_{1t},W_t\rangle D_T(\hat R^{-1}-R^{-1})}_{=:E_2} + \underbrace{\langle\varepsilon_t,W_t\rangle D_T(\hat R^{-1}-R^{-1})}_{=:E_3}.$$
Here $\langle\varepsilon_t,W_t\rangle D_TR^{-1}$ has the limits stated in the lemma since:
$$N^{-1}\langle\varepsilon_t,y_{c,t-1}\rangle\Rightarrow g(1)\int_0^1dB\,B_c',\qquad N^{-2}\langle\varepsilon_t,y_{3,t-1}\rangle\Rightarrow g(1)\int_0^1dB\,H_3',$$
$$\begin{pmatrix}N^{-2}\langle y_{c,t-1},y_{c,t-1}\rangle & N^{-3}\langle y_{c,t-1},y_{3,t-1}\rangle\\ N^{-3}\langle y_{3,t-1},y_{c,t-1}\rangle & N^{-4}\langle y_{3,t-1},y_{3,t-1}\rangle\end{pmatrix}\Rightarrow\begin{pmatrix}\int_0^1B_cB_c'&\int_0^1B_cH_3'\\ \int_0^1H_3B_c'&\int_0^1H_3H_3'\end{pmatrix}.$$
The lemma therefore holds if $E_1 = [o_P(h^{-1/2}),o_P(1),o_P(1)]$, $E_2 = o_P(1)$, $E_3 = o_P(1)$ can be shown (where the blocks in $E_1$ correspond to the partitioning of $W_t$ into stationary, I(1) and I(2) components). For this it is sufficient to show:
(I)
$$\|\hat R^{-1}-R^{-1}\|_1 = O_P(h/N^{\frac12});$$
(II)
$$\langle e_{1t},W_t\rangle D_T = o_P(h^{-1/2}),\ \text{where}\ N^{-1}\langle e_{1t},y_{c,t-1}\rangle = o_P(1)\ \text{and}\ N^{-2}\langle e_{1t},y_{3,t-1}\rangle = o_P(1);$$
(III)
$$\|\langle\varepsilon_t,W_t\rangle D_T\| = O_P(h^{1/2}).$$
Here $\|\cdot\|_1$ denotes the spectral norm of a matrix while $\|\cdot\|$ denotes the Frobenius norm.
(I) To see $\|\hat R^{-1}-R^{-1}\|_1 = O_P(h/N^{\frac12})$, according to Lewis and Reinsel (1985) it is sufficient to show $\|\hat R-R\|_1 = O_P(h/N^{\frac12})$ and $\|R^{-1}\|_1 = O_P(1)$. Note that
$$\hat R-R = \begin{pmatrix}N^{-1}\langle U_t,U_t\rangle-\Gamma_u & N^{-\frac32}\langle U_t,y_{c,t-1}\rangle & N^{-\frac52}\langle U_t,y_{3,t-1}\rangle\\ N^{-\frac32}\langle y_{c,t-1},U_t\rangle & 0 & 0\\ N^{-\frac52}\langle y_{3,t-1},U_t\rangle & 0 & 0\end{pmatrix} =: \begin{pmatrix}\hat Q&\hat P_{12}&\hat P_{13}\\ \hat P_{21}&0&0\\ \hat P_{31}&0&0\end{pmatrix},$$
then we have $E\|\hat R-R\|_1^2 \le E\|\hat R-R\|^2 = E\|\hat Q\|^2 + 2\big(E\|\hat P_{12}\|^2 + E\|\hat P_{13}\|^2\big)$.
Now let $U_t^o := [u_{t-1}',\ldots,u_{t-h-2}']'$; then there exists a transformation $T_u$ of full row rank such that $U_t = T_uU_t^o$, where $T_u\in\mathbb{R}^{(ph+2p_1+p_2)\times p(h+2)}$ is given, in the column partition corresponding to $(u_{t-1}',\ldots,u_{t-h}',u_{1,t-h-1}',u_{2,t-h-1}')'$, $u_{3,t-h-1}$, $u_{1,t-h-2}$ and $(u_{2,t-h-2}',u_{3,t-h-2}')'$, by
$$T_u = \begin{pmatrix}I_{ph+p_1+p_2}&0&0&0\\ 0&-A&I_{p_1}&0\end{pmatrix}.$$
Then we have $\hat Q = T_u\hat Q^oT_u'$, where $\hat Q^o = \frac1N\langle U_t^o,U_t^o\rangle - E[U_t^oU_t^{o\prime}]$; moreover, $\hat P_{1i} = T_u\hat P_{1i}^o$ for $i = 2,3$, where $\hat P_{12}^o = N^{-\frac32}\langle U_t^o,y_{c,t-1}\rangle$, $\hat P_{13}^o = N^{-\frac52}\langle U_t^o,y_{3,t-1}\rangle$. Since $\|T_u\|_1 = O(1)$, $\hat Q$ and $\hat P_{1i}$ have the same rates of convergence as $\hat Q^o$ and $\hat P_{1i}^o$, respectively. From Saikkonen (1991) Lemma A.2 we know $E\|\hat Q^o\|^2 = O(h^2/N)$, and $E\|\hat P_{12}^o\|^2 = O(h/N)$ follows by direct calculation.
For $\hat P_{13}^o$ note that
$$E\|y_{3,t-1}\|^2 = E\Big\|\sum_{j=1}^{t-1}\sum_{i=1}^ju_{3,i}\Big\|^2 = E\Big\|\sum_{i=1}^{t-1}i\,u_{3,t-i}\Big\|^2 = O(t^3).$$
Then analogous calculations as for $\hat P_{12}^o$ show that $E\|\hat P_{13}^o\|^2 = O(h/N)$. Concluding, we obtain $E\|\hat R-R\|_1^2 = O(h^2/N)$ such that $\|\hat R-R\|_1 = O_P(h/N^{\frac12})$.
To show $\|R^{-1}\|_1 = O_P(1)$ note that $R^{-1} = \mathrm{diag}\{\Gamma_u^{-1},R_2^{-1}\}$ where $\|\Gamma_u^{-1}\|_1 = O(1)$ (see Lewis and Reinsel (1985), p. 397) and $\|R_2^{-1}\|_1 = O_P(1)$, since $R_2$ is a.s. invertible and converges in distribution to an almost surely nonsingular random matrix.
(II) With respect to $\langle e_{1t},W_t\rangle D_T$ note that
$$\|\langle e_{1t},W_t\rangle D_T\| \le \|N^{-\frac12}\langle e_{1t},U_t\rangle\| + \|N^{-1}\langle e_{1t},y_{c,t-1}\rangle\| + \|N^{-2}\langle e_{1t},y_{3,t-1}\rangle\|.$$
From Saikkonen (1991) Lemma A.5 we have $\|N^{-\frac12}\langle e_{1t},U_t\rangle\| = o_P(h^{-\frac12})$ and $N^{-1}\langle e_{1t},y_{c,t-1}\rangle = o_P(1)$. Then $E\|y_{3,t-1}\|^2 = O(t^3)$ and $E\|e_{1t}\|^2 = o(N^{-1})$ imply
$$E\|N^{-2}\langle e_{1t},y_{3,t-1}\rangle\| \le N^{-2}\sum_{t=h+3}^T\big(E\|e_{1t}\|^2E\|y_{3,t-1}\|^2\big)^{\frac12} = o\big(N^{-2}\cdot N\cdot N^{-1/2}\cdot N^{3/2}\big) = o(1).$$
(III) To show $\|\langle\varepsilon_t,W_t\rangle D_T\| = O_P(h^{1/2})$ note that $\|N^{-\frac12}\langle\varepsilon_t,U_t\rangle\| = O_P(h^{1/2})$ and $N^{-1}\langle\varepsilon_t,y_{c,t-1}\rangle = O_P(1)$ according to (A.7) of Saikkonen (1992). Moreover $N^{-2}\sum_{t=h+3}^T\varepsilon_ty_{3,t-1}'\Rightarrow g(1)\int_0^1dB\,H_3'$ implies $N^{-2}\langle\varepsilon_t,y_{3,t-1}\rangle = O_P(1)$. □
Note that for the lemma to hold we only need $h^3/N\to0$ and $N^{1/2}\sum_{j=h+1}^{\infty}\|\Pi_j\| = o(1)$.

Appendix B. Proof of Theorem 1

Appendix B.1. (A) Consistency

(i) Lemma A1 implies $\hat\Phi_2\to_p0 = \Phi_2$, $\hat\Phi_3\to_p0 = \Phi_3$. Furthermore, the reparameterization implies $\Phi_1 = \sum_{j=1}^{h+2}\underline\Xi_{j,1}$ and thus $\hat\Phi_1 = \sum_{j=1}^{h+2}\hat\Xi_{j,1}$, leading to
$$\|\hat\Phi_1-\Phi_1\| = \Big\|\sum_{j=1}^{h+2}\hat\Xi_{j,1}-\sum_{j=1}^{h+2}\underline\Xi_{j,1}\Big\| \le \sum_{j=1}^{h+2}\|\hat\Xi_{j,1}-\underline\Xi_{j,1}\| \le \sqrt{h+2}\,\|\hat\Xi-\underline\Xi\| = O_P(h^{3/2}/N^{1/2})$$
where the last equality holds due to $\langle\varepsilon_t,u_{t-j}\rangle = O_P(N^{1/2})$ in combination with Lemma A1.
(ii) Note that
$$\hat\Sigma_\epsilon = N^{-1}\langle\Delta^2y_t-\hat\Lambda W_t,\ \Delta^2y_t-\hat\Lambda W_t\rangle = N^{-1}\langle e_t+(\underline\Lambda-\hat\Lambda)W_t,\ e_t+(\underline\Lambda-\hat\Lambda)W_t\rangle.$$
Now
$$\langle(\underline\Lambda-\hat\Lambda)W_t,(\underline\Lambda-\hat\Lambda)W_t\rangle = (\underline\Lambda-\hat\Lambda)D_T^{-1}\,\big(D_T\langle W_t,W_t\rangle D_T\big)\,D_T^{-1}(\underline\Lambda-\hat\Lambda)'$$
where $\hat R = D_T\langle W_t,W_t\rangle D_T$ is such that $\|\hat R\|_1 = O_P(1)$ and $(\underline\Lambda-\hat\Lambda)D_T^{-1} = O_P(h^{1/2})$. Consequently
$$N^{-1}\langle(\underline\Lambda-\hat\Lambda)W_t,(\underline\Lambda-\hat\Lambda)W_t\rangle = O_P(h/N)\to0.$$
Next, from the definition of $e_t$, we can show that
$$N^{-1}\langle\varepsilon_t+e_{1t},\varepsilon_t+e_{1t}\rangle = N^{-1}\langle\varepsilon_t,\varepsilon_t\rangle + o_P(1) = \Sigma_\epsilon + o_P(1),$$
where the last equality follows from the law of large numbers and the first equality is implied by the facts that $E\|e_{1t}\|^2 = o(T^{-1})$ and $E\|\varepsilon_t\|^2 = O(1)$.
(iii) From (i) and (ii), $\hat\Omega_{1.c} = (\hat\Phi_1'\hat\Sigma_\epsilon^{-1}\hat\Phi_1)^{-1} = (\Phi_1'\Sigma_\epsilon^{-1}\Phi_1)^{-1} + o_P(1) = \Omega_{1.c} + o_P(1)$ directly follows.
(iv) With respect to $\hat\Psi$ recall that
$$\Psi_1 = -\sum_{j=2}^{h+2}(j-1)\Xi_{j,1},\qquad \Psi_2 = \sum_{j=1}^{h+1}\Xi_{j,2}.$$
Lemma A1 shows that each entry of $\hat\Xi-\underline\Xi$ is of order $O_P(h^{1/2}/N^{1/2})$. Then
$$\|\hat\Psi_1-\Psi_1\| \le \sum_{j=2}^{h+2}(j-1)\|\hat\Xi_{j,1}-\Xi_{j,1}\| = O_P\Big(\sum_{j=2}^{h+2}(j-1)h^{1/2}/N^{1/2}\Big) = O_P(h^{5/2}/N^{1/2})$$
which converges to zero for $h^5/T\to0$. Similarly $\hat\Psi_2-\Psi_2 = O_P(h^{3/2}/N^{1/2})$.
For $\hat\Psi_3$ note that $\Theta = \Phi_1A+\Psi_3$. Thus $\hat\Psi_3 = \hat\Theta-\hat\Phi_1A$ such that $\hat\Psi_3\to_p\Psi_3$ from (i) and Lemma A1.
(v) is contained in Lemma A1.
(vi) From (6) and the definition $\hat\Omega_{1.c} = (\hat\Phi_1'\hat\Sigma_\epsilon^{-1}\hat\Phi_1)^{-1}$, we have
$$\hat A-A = -(\hat\Phi_1'\hat\Sigma_\epsilon^{-1}\hat\Phi_1)^{-1}\hat\Phi_1'\hat\Sigma_\epsilon^{-1}\hat\Psi_3-A = -\hat\Omega_{1.c}\hat\Phi_1'\hat\Sigma_\epsilon^{-1}\hat\Psi_3-\hat\Omega_{1.c}\hat\Omega_{1.c}^{-1}A = -\hat\Omega_{1.c}\hat\Phi_1'\hat\Sigma_\epsilon^{-1}\hat\Psi_3-\hat\Omega_{1.c}\hat\Phi_1'\hat\Sigma_\epsilon^{-1}\hat\Phi_1A = -\hat\Omega_{1.c}\hat\Phi_1'\hat\Sigma_\epsilon^{-1}(\hat\Psi_3+\hat\Phi_1A) = -\hat\Omega_{1.c}\hat\Phi_1'\hat\Sigma_\epsilon^{-1}\hat\Theta.$$
Then (i)-(iii) and (v) show the result.

Appendix B.2. (B) Asymptotic Distribution of Coefficients to Nonstationary Regressors

(i) The distribution of the coefficients due to the nonstationary components is contained in Lemma A1.
(ii) With respect to the cointegrating relation note that from the proof of Theorem 1 we have
$$N(\hat A-A) = -N\hat\Omega_{1.c}\hat\Phi_1'\hat\Sigma_\epsilon^{-1}\hat\Theta = -\Omega_{1.c}\Phi_1'\Sigma_\epsilon^{-1}\cdot N\hat\Theta + o_P(1).$$
Note that $N\hat\Theta = [N\hat\Phi_2,N\hat\Theta,N^2\hat\Phi_3]\,\eta_0$, where $\eta_0 = [0_{p_3\times p_2},I_{p_3},0_{p_3\times p_3}]'$. Then by Lemma A1, we have
$$N(\hat A-A)\Rightarrow-\Omega_{1.c}\Phi_1'\Sigma_\epsilon^{-1}\cdot g(1)\int_0^1dB\,F'\Big(\int_0^1FF'\Big)^{-1}\eta_0 = -\Omega_{1.c}\Phi_1'\Sigma_\epsilon^{-1}\cdot g(1)\int_0^1dB\,L'\Big(\int_0^1LL'\Big)^{-1}.$$
Note that $\Phi_1 = -g(1)[I_{p_1},0,0]'$, and by the definition $\Omega = \Omega_{11,1c,c1,cc} = g(1)^{-1}\Sigma_\epsilon(g(1)')^{-1}$ we have
$$-\Omega_{1.c}\Phi_1'\Sigma_\epsilon^{-1}g(1)B = \Omega_{1.c}[I_{p_1},0,0]\,g(1)'\Sigma_\epsilon^{-1}g(1)B = \Omega_{1.c}[I_{p_1}\ 0]\,\Omega^{-1}B = \Omega_{1.c}\big[(\Omega^{-1})_{11}\ (\Omega^{-1})_{1c}\big]B = \Omega_{1.c}\big[\Omega_{1.c}^{-1}\ \ -\Omega_{1.c}^{-1}\Omega_{1c}\Omega_{cc}^{-1}\big]B = \big[I_{p_1}\ \ -\Omega_{1c}\Omega_{cc}^{-1}\big]\begin{pmatrix}B_1\\B_c\end{pmatrix} = B_1-\Omega_{1c}\Omega_{cc}^{-1}B_c = B_{1.c}.$$
Therefore, we have
$$N(\hat A-A)\Rightarrow\int_0^1dB_{1.c}\,L'\Big(\int_0^1LL'\Big)^{-1}.$$

Appendix B.3. (C) Asymptotic Distribution of Coefficients to Stationary Regressors

Since the regressor vector $U_t$ is stationary, the asymptotic distribution of $N^{1/2}L_h'\mathrm{vec}(\hat\Xi-\underline\Xi)$ follows from Lewis and Reinsel (1985) in combination with the uniform boundedness of the maximal and minimal eigenvalues of $\Gamma_u = EU_tU_t'$, see above. Analogously the result for the coefficients corresponding to the regressor vector $X_t$ is shown, as $X_t = T_hU_t$ for the nonsingular matrix $T_h$.

Appendix B.4. (D) Asymptotic Distribution of Wald Type Tests

For the Wald test, in addition to (C) note that the variance $\Gamma_{ECM}$ is replaced by an estimate $\hat\Gamma_{ECM}$. For
$$L_h'(\Gamma_{ECM}^{-1}\otimes\Sigma_\epsilon)L_h - L_h'(\hat\Gamma_{ECM}^{-1}\otimes\hat\Sigma_\epsilon)L_h$$
note that $\hat\Sigma_\epsilon-\Sigma_\epsilon = o_P(1)$ due to (A)(ii). The regressor vectors $\tilde X_t$ and $X_t$ differ only in the first block, where $y_{1,t-1} = u_{1,t-1}+A\Delta y_{3,t-1}$ replaces $u_{1,t-1}$. Regressing out $\Delta y_{3,t-1}$ eliminates this difference. Then $\|\hat\Gamma_{ECM}-\Gamma_{ECM}\|_1 = O_P(h/N^{1/2})$ according to Saikkonen and Lütkepohl (1996, p. 835, l. 3). There also the invertibility of $\Gamma_{ECM}$ is shown. Using Lemma A.2 of Saikkonen and Lütkepohl (1996) this implies $\|\hat\Gamma_{ECM}^{-1}-\Gamma_{ECM}^{-1}\|_1 = O_P(h/N^{1/2})$.
The rest then follows as in the proof of Theorem 4 in Saikkonen and Lütkepohl (1996).

Appendix C. Proof of Theorem 2

Consistency follows directly from Theorem 1 as the general representation can be transformed into a triangular representation using the matrix $B = [\beta,\beta_1,\beta_2]$, see (4).
With respect to the asymptotic distribution, following the proof of Theorem 1 there exists a nonsingular transformation matrix $S_h$ such that $W_t = S_hZ_{t,h}$. From $\|\hat R^{-1}-R^{-1}\|_1 = O_P(h/N^{1/2})$ it follows that
$$\big(N^{-1}\langle W_t,W_t\rangle\big)^{-1} = \begin{pmatrix}\Gamma_u^{-1}&0\\0&0\end{pmatrix} + o_P(h/N^{1/2}).$$
Therefore it follows that the blocks corresponding to the nonstationary regressors do not contribute to the asymptotic distribution. Then standard arguments for the stationary part of the regressor vector can be used.

Appendix D. Proofs for Theorem 3

The proof combines the ideas of Saikkonen and Luukkonen (1997) (in the following S&L) with the asymptotics of 2SI2 of Paruolo (2000) (in the following P). In the proof we will work without restriction of generality with the triangular representation.
The key to the asymptotic properties of the estimators obtained from the 2SI2 algorithm lies in the results of P, Lemma A.4 and Lemma A.5 in the appendix. These lemmas deal with the limits of various moment matrices of the form $N^{-a}\langle R_{it},R_{jt}\rangle$ corrected for the stationary components $\Delta^2y_{t-j}$, $j = 1,\ldots,h-2$. The correction involves a regressor vector growing in dimension with the sample size. This is dealt with in S&L.
In this respect let $S_t = [\Delta^2y_{t-1}',\ldots,\Delta^2y_{t-h+2}']'$ which, according to (A4), is a linear function of $U_t$ such that $S_t = T_sU_t$. The definition of $U_t$ implies $\|\hat Q\| = \|N^{-1}\langle U_t,U_t\rangle - EU_tU_t'\| = O_P(h/N^{1/2})$. On p. 543 in P the matrices $\Sigma_{ij}$, $i,j\in\{Y,U,0\}$ are defined as limits of second moment matrices. Here $U$ refers to $\beta_1'\Delta y_{t-1} = u_{2,t-1}$ in the triangular representation, $Y$ refers to $\beta'y_{t-1}+\delta'\beta_2'\Delta y_{t-1} = y_{1,t-1}-A\Delta y_{3,t-1} = u_{1,t-1}$ and $0$ refers to $\Delta^2y_t$. These are all stationary processes and linear functions of $u_t,u_{t-1},u_{t-2},\ldots$. Additional to $S_t$, also $\beta'\Delta y_{t-1} = \Delta u_{1,t-1}+Au_{3,t-1}$ is corrected for in the second stage.
The arguments on pp. 114 and 115 of S&L deal with terms of the form
$$N^{-1}\langle u_{1,t-1}, u_{1,t-1}\rangle - N^{-1}\langle u_{1,t-1}, S_t\rangle\langle S_t, S_t\rangle^{-1}\langle S_t, u_{1,t-1}\rangle.$$
Arguments analogous to S&L (A.12) show that this equals, up to terms of order $o_P(1)$,
$$C_{11} = \mathbb{E}\, u_{1,t-1}u_{1,t-1}' - \mathbb{E}\, u_{1,t-1}S_t'\left(\mathbb{E}\, S_t S_t'\right)^{-1}\mathbb{E}\, S_t u_{1,t-1}'.$$
S&L state that this is bounded from above and bounded away from zero. The second claim actually is wrong: if $(u_{1,t})_{t\in\mathbb{Z}}$ is univariate white noise with unit variance, then $C_{11} = \frac{1}{h}$ is achieved by predicting $u_{1,t-1}$ by
$$\sum_{j=1}^{h}\frac{h-j}{h}\Delta u_{1,t-j} = u_{1,t-1} - \frac{1}{h}\sum_{j=1}^{h} u_{1,t-j},$$
that is, by integrating the regressors in the form of the summation. This does not change the remaining arguments in S&L; it only implies that the separation between the eigenvalues corresponding to the stationary regressors and those corresponding to the non-stationary ones is weaker.
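To make this concrete, the following minimal numerical check (our addition, not part of S&L or of the proof; it uses only the white noise setup just described) computes $C_{11}$ from the population moments and confirms the $1/h$ rate:

```python
import numpy as np

# Numerical check of the counterexample: for univariate white noise u_t with
# unit variance, regress u_{t-1} on S_t = (Du_{t-1}, ..., Du_{t-h}).
# E[S_t S_t'] is the tridiagonal Toeplitz matrix with 2 on the diagonal and
# -1 beside it, and u_{t-1} is only correlated with the first entry Du_{t-1}.
for h in (5, 20, 100, 500):
    Sigma_SS = 2 * np.eye(h) - np.eye(h, k=1) - np.eye(h, k=-1)
    c = np.zeros(h)
    c[0] = 1.0                                    # E[u_{t-1} S_t'] = (1, 0, ..., 0)
    C11 = 1.0 - c @ np.linalg.solve(Sigma_SS, c)  # residual variance
    print(f"h = {h:4d}   C11 = {C11:.5f}   1/h = {1 / h:.5f}")
```

The exact optimal value is $1/(h+1)$, marginally below the $1/h$ achieved by the explicit predictor above, so $C_{11}$ indeed tends to zero at rate $1/h$.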
In the current case one can show that for
$$N^{-1}\langle u_{1,t-1}, u_{1,t-1}\rangle - N^{-1}\langle u_{1,t-1}, S_t\rangle\langle S_t, S_t\rangle^{-1}\langle S_t, u_{1,t-1}\rangle,$$
where $S_t$ contains $\Delta u_{1,t-1}$ and $\Delta^2 u_{1,t-j}$, $j = 1, \dots, h$, the corresponding limit $C_{11}$ obeys the lower bound $h C_{11} \geq cI$ for some $c > 0$. The order of the lower bound is achieved by including a double integration of the regressors. For
$$N^{-1}\langle \Delta u_{1,t-1}, \Delta u_{1,t-1}\rangle - N^{-1}\langle \Delta u_{1,t-1}, S_t\rangle\langle S_t, S_t\rangle^{-1}\langle S_t, \Delta u_{1,t-1}\rangle = C_{\Delta\Delta} + o_P(1)$$
we have $h^3 C_{\Delta\Delta} \geq cI$. Here the arguments from above can be applied to the process $(\Delta u_t)_{t\in\mathbb{Z}}$: for a differenced process the smallest eigenvalue of the matrix
$$\mathbb{E}\,\delta U_t\,\delta U_t', \qquad \delta U_t = [\Delta u_t', \Delta u_{t-1}', \dots, \Delta u_{t-h}']',$$
is of order $h^{-2}$; compare Theorem 2 of Palma and Bondon (2003).
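The order $h^{-2}$ can be checked numerically in the same way (again a sketch under the white noise assumption used above, for which the covariance matrix is tridiagonal Toeplitz with known eigenvalues $2 - 2\cos(k\pi/(h+2))$):

```python
import numpy as np

# Sketch: the smallest eigenvalue of E[dU_t dU_t'] for
# dU_t = (Du_t, ..., Du_{t-h}) with u_t univariate unit-variance white noise
# decays at the rate h^{-2}, in line with Palma and Bondon (2003), Theorem 2.
for h in (10, 50, 250):
    Sigma = 2 * np.eye(h + 1) - np.eye(h + 1, k=1) - np.eye(h + 1, k=-1)
    lam_min = np.linalg.eigvalsh(Sigma)[0]        # smallest eigenvalue
    print(f"h = {h:4d}   lam_min = {lam_min:.6f}   "
          f"pi^2/(h+2)^2 = {np.pi**2 / (h + 2)**2:.6f}")
```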
Since $N^{-1}\langle S_t, y_{c,t-1}\rangle = O_P(h^{1/2})$ and $N^{-2}\langle S_t, y_{3,t-1}\rangle = O_P(h^{1/2})$ it follows that
$$N^{-1}\big(\langle u_{1,t-1}, y_{c,t-1}\rangle - \langle u_{1,t-1}, S_t\rangle\langle S_t, S_t\rangle^{-1}\langle S_t, y_{c,t-1}\rangle\big) = O_P(h^{1/2}),$$
$$N^{-2}\big(\langle y_{c,t-1}, y_{c,t-1}\rangle - \langle y_{c,t-1}, S_t\rangle\langle S_t, S_t\rangle^{-1}\langle S_t, y_{c,t-1}\rangle\big) = N^{-2}\langle y_{c,t-1}, y_{c,t-1}\rangle + o_P((h/N)^{1/2}),$$
as well as
$$N^{-2}\big(\langle u_{1,t-1}, y_{3,t-1}\rangle - \langle u_{1,t-1}, S_t\rangle\langle S_t, S_t\rangle^{-1}\langle S_t, y_{3,t-1}\rangle\big) = O_P(h^{1/2}),$$
$$N^{-3}\big(\langle y_{c,t-1}, y_{3,t-1}\rangle - \langle y_{c,t-1}, S_t\rangle\langle S_t, S_t\rangle^{-1}\langle S_t, y_{3,t-1}\rangle\big) = N^{-3}\langle y_{c,t-1}, y_{3,t-1}\rangle + o_P((h/N)^{1/2}),$$
$$N^{-4}\big(\langle y_{3,t-1}, y_{3,t-1}\rangle - \langle y_{3,t-1}, S_t\rangle\langle S_t, S_t\rangle^{-1}\langle S_t, y_{3,t-1}\rangle\big) = N^{-4}\langle y_{3,t-1}, y_{3,t-1}\rangle + o_P((h/N)^{1/2}).$$
Therefore, even for $h \to \infty$, the limits of the moment matrices $M_{ij}$ are not affected by the correction using stationary terms, except for the terms involving the orders $O_P(h^{1/2})$. For all stationary terms we find convergence to the corresponding limits denoted $\Sigma_{ij}$ in P.
The first step of the 2SI2 procedure then uses RRR in the equation
$$\Delta^2 y_t = \Psi\Delta y_{t-1} + \alpha\beta' y_{t-1} + \underline{\Pi} S_t + e_t.$$
Then $R_{0t}$ denotes $\Delta^2 y_t$ corrected for $S_t$, $R_{1t}$ denotes $\Delta y_{t-1}$ corrected for $S_t$ and $R_{2t}$ denotes $y_{t-1}$ corrected for $S_t$. Lemma A.4 of P derives the limits of different directions of $M_{ij.k}$ defined as
$$M_{ij.k} = M_{ij} - M_{ik}M_{kk}^{-1}M_{kj}, \qquad M_{ij} = N^{-1}\langle R_{i,t}, R_{j,t}\rangle,$$
where $i, j \in \{0, 1, 2, \varepsilon, \beta\}$. Here $R_{\varepsilon,t}$ equals $e_t$ corrected for $S_t$, and $R_{\beta,t} = \beta' R_{1,t}$. Further, P uses the notation $A_T = [\bar{\beta}_1, T^{-1}\bar{\beta}_2]$ and $\bar{\beta}_{2,T} = \bar{\beta}_2$. Here and below we assume without loss of generality that $[\beta, \beta_1, \beta_2]$ is an orthonormal matrix; consequently $\bar{\beta} = \beta$, $\bar{\beta}_1 = \beta_1$, $\bar{\beta}_2 = \beta_2$. Then the results above imply all results of Lemma A.4 of P, except that now $A_T' M_{20.1} = O_P(h^{1/2})$.
In particular we obtain the following limits:
$$A_T' M_{2\varepsilon.1} \xrightarrow{d} \int_0^1 F\,(dW)', \qquad A_T' M_{22.1} A_T \xrightarrow{d} \int_0^1 F F', \qquad \beta_2' M_{1\varepsilon.\beta} \xrightarrow{d} \int_0^1 B_3\,(dW)',$$
$$T^{-1}\beta_2' M_{11.\beta}\beta_2 \xrightarrow{d} \int_0^1 B_3 B_3', \qquad \beta_2' M_{1\varepsilon.b} \xrightarrow{d} \int_0^1 L\,(dW)', \qquad T^{-1}\beta_2' M_{11.b}\beta_2 \xrightarrow{d} \int_0^1 L L'.$$
Here $W = g(1)B$ denotes the Brownian motion corresponding to $(\varepsilon_t)_{t\in\mathbb{Z}}$, and $F$ denotes the Brownian motion corresponding to $R_{2t}$ (equaling $y_{t-1}$ corrected for $S_t$) corrected for $R_{1t}$ ($\Delta y_{t-1}$, whose only nonstationary component equals $\Delta y_{3,t-1}$ with corresponding Brownian motion $B_3$). Thus we obtain the following definitions (where $L$ is as in Theorem 1):
$$F_a(u) = B_2(u) - \int_0^u B_3(v)\,dv, \qquad F(u) = F_a(u) - \int_0^1 F_a B_3'\Big(\int_0^1 B_3 B_3'\Big)^{-1} B_3(u),$$
$$L(u) = B_3(u) - \int_0^1 B_3 F_a'\Big(\int_0^1 F_a F_a'\Big)^{-1} F_a(u).$$
The above arguments show that in the current setting $U_{t-1} = u_{2,t-1}$ and $Y_{t-1} = u_{1,t-1}$ are contained in the space spanned by $S_t$ for $h \to \infty$. Therefore $\Sigma_{ij} = 0$ for $i, j \in \{U, Y\}$. The subscript 'b' refers to correcting for $\beta' R_{2t}$, as used in the second stage of 2SI2.
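For concreteness, the first-stage computations described above — correcting for the stationary regressors and solving the RRR eigenvalue problem — can be sketched as follows (our illustration only; the function names, the interface and the plain OLS corrections are ours, not taken from P):

```python
import numpy as np

def correct_for(A, B):
    """Correct A for B: residuals of the OLS regression of A on B
    (rows are observations)."""
    coef, *_ = np.linalg.lstsq(B, A, rcond=None)
    return A - B @ coef

def first_stage_rrr(d2y, dy1, y1, S, r):
    """Sketch of the first-stage RRR: solve the eigenvalue problem
    |lam * M22.1 - M20.1 M00.1^{-1} M02.1| = 0 after correcting
    Delta^2 y_t and y_{t-1} for S_t and Delta y_{t-1}."""
    Z = np.hstack([dy1, S])
    R0 = correct_for(d2y, Z)            # R_{0t}, corrected also for R_{1t}
    R2 = correct_for(y1, Z)             # R_{2t}, corrected also for R_{1t}
    N = d2y.shape[0]
    M00, M02, M22 = R0.T @ R0 / N, R0.T @ R2 / N, R2.T @ R2 / N
    # transform to an ordinary symmetric eigenproblem via a Cholesky factor
    L = np.linalg.cholesky(M22)
    Q = np.linalg.solve(L, M02.T) @ np.linalg.solve(M00, M02) @ np.linalg.inv(L).T
    lam, V = np.linalg.eigh((Q + Q.T) / 2)   # eigenvalues in ascending order
    lam, V = lam[::-1], V[:, ::-1]           # reorder to descending
    beta = np.linalg.solve(L.T, V[:, :r])    # eigenvectors of the original problem
    alpha = M02 @ beta @ np.linalg.inv(beta.T @ M22 @ beta)
    return alpha, beta, lam
```

The returned eigenvalues play the role of the $\lambda$ in (11), and alpha matches the expression $\tilde{\alpha} = M_{02.1}\tilde{\beta}(\tilde{\beta}' M_{22.1}\tilde{\beta})^{-1}$ used in the second stage below.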
Let $\tilde{\Sigma}_{YY}$ denote the limit of $h N^{-1}\langle Y_{t-1}, Y_{t-1}\rangle$ and analogously define $\tilde{\Sigma}_{YU}$, $\tilde{\Sigma}_{UU}$, $\tilde{\Sigma}_{0Y}$ and $\tilde{\Sigma}_{0U}$. For the latter two note that $\tilde{\Sigma}_{0Y}$ denotes the limit of
$$h N^{-1}\langle \Delta^2 y_t, Y_{t-1}\rangle = h N^{-1}\big(\alpha\langle Y_{t-1}, Y_{t-1}\rangle + \zeta\langle U_{t-1}, Y_{t-1}\rangle + \zeta_2\langle \beta'\Delta y_{t-1}, Y_{t-1}\rangle + \underline{\Pi}\langle S_t, Y_{t-1}\rangle + \langle e_t, Y_{t-1}\rangle\big)$$
corrected for $S_t$ and $\beta'\Delta y_{t-1}$. Since $Y_{t-1}$ is stationary, the last term is of order $O_P((h^3/N)^{1/2}) = o_P(1)$. Therefore it follows that $\tilde{\Sigma}_{0Y} = \alpha\tilde{\Sigma}_{YY} + \zeta\tilde{\Sigma}_{UY}$. Then the results of Lemma A.5 of P hold, where in (A.11) and (A.14) $\Sigma_{ij}$ can be replaced by $\tilde{\Sigma}_{ij}$.
The asymptotic analysis below heavily uses the Johansen approach of investigating the solutions of eigenvalue problems in order to maximize the pseudo-likelihood corresponding to the reduced rank regression problem. In order to use the corresponding local analysis one first has to clarify consistency of the various estimators as well as the rates of convergence.
The main tool in this respect is Theorem A.1 of Johansen (1997), which establishes, in the I(2) setting for the regression $y_t = \theta Z_t + \varepsilon_t$ ($Z_t$ being composed of stationary, I(1) and I(2) components) where $D_T\langle Z_t, Z_t\rangle D_T' = O_P(1)$ and $D_T\langle Z_t, \varepsilon_t\rangle = o_P(1)$, that $D_T^{-1}(\hat{\theta} - \theta) = o_P(1)$, where $\hat{\theta}$ denotes the pseudo likelihood estimator over some closed parameter set $\Theta$.
It is straightforward to see that analogous results hold in the present setting when first concentrating out the stationary components: consider $y_t = \theta_1 z_t + \theta_2 Z_t + e_t$. Then $\hat{\theta}_2(\theta_1)$ is obtained from the concentration step and the pseudo likelihood involves $\langle R_{t,y} - \theta_1 R_{t,z}, R_{t,y} - \theta_1 R_{t,z}\rangle$, where the processes $R_{t,y}$ and $R_{t,z}$ denote the processes $y_t$ and $z_t$ with the corresponding stationary regressors $Z_t$ regressed out. These concentrated quantities can now be used in the proof of Theorem A.1 of Johansen (1997), essentially without changes, to show consistency of $\hat{\theta}_1$. Consistency of $\hat{\theta}_2(\hat{\theta}_1)$ then follows from the unrestricted estimation as contained in Theorem 2. As shown above, the rates of convergence as well as the limits for the coefficients corresponding to the non-stationary components of the regressors are unchanged in the long VAR case compared to the finite VAR case.
Note that these results hold for a general closed parameter space $\Theta$, thus including the unrestricted as well as the rank-restricted problem. This shows that we can always reduce the asymptotic analysis of the eigenvalue problems to a neighborhood of the true value, as is done in P.
The first step in the proof of Theorem 4.1 of P consists in the investigation of the solutions of the equation (writing $\tilde{\beta} = \beta H + \beta_1 H_1 + \beta_2 H_2$ and letting $B_T = [\beta, T^{-1/2}\beta_1, T^{-3/2}\beta_2]$)
$$B_T' M_{22.1} B_T \begin{bmatrix} H \\ T^{1/2} H_1 \\ T^{3/2} H_2 \end{bmatrix}\Lambda = B_T' M_{20.1} M_{00.1}^{-1} M_{02.1} B_T \begin{bmatrix} H \\ T^{1/2} H_1 \\ T^{3/2} H_2 \end{bmatrix}.$$
Now Lemma A.4 implies that the matrix $B_T' M_{22.1} B_T$ on the left hand side converges to $\operatorname{diag}\big(\Sigma_{YY.U}, \int_0^1 F F'\big)$, while $B_T' M_{20.1} = [\Sigma_{Y0.U}', 0]' + O_P(h^{1/2} T^{-1/2})$ and $M_{00.1} = \Sigma_{00.U} + O_P(T^{-1/2})$. Multiplying the equation by $h^2$ we obtain the limiting eigenvalue problem
$$\begin{bmatrix} \tilde{\Sigma}_{YY.U} & O_P(T^{-1/2} h^{3/2}) \\ O_P(T^{-1/2} h^{3/2}) & h\int_0^1 F F' \end{bmatrix}\begin{bmatrix} H \\ T^{1/2} H_1 \\ T^{3/2} H_2 \end{bmatrix} h\Lambda = \begin{bmatrix} \tilde{\Sigma}_{Y0.U}\Sigma_{00.U}^{-1}\tilde{\Sigma}_{0Y.U} & O_P(T^{-1/2} h^{5/2}) \\ O_P(T^{-1/2} h^{5/2}) & O_P(T^{-1} h^{3}) \end{bmatrix}\begin{bmatrix} H \\ T^{1/2} H_1 \\ T^{3/2} H_2 \end{bmatrix}.$$
Therefore asymptotically the first $r$ eigenvalues of $h\Lambda$ are positive, while the remaining $p - r$ tend to zero. Likewise the eigenvectors converge at the same speed as the matrices. Thus $H_1 = O_P(h^{5/2}/T)$ and $H_2 = O_P(h^{5/2}/T^2)$, from which
$$\beta' M_{22.1}\beta\, H\Lambda H^{-1} = \beta' M_{20.1} M_{00.1}^{-1} M_{02.1}\beta + O_P(h^4/T)$$
and thus, using (A.11),
$$H\Lambda H^{-1} = \tilde{\Sigma}_{YY.U}^{-1}\tilde{\Sigma}_{Y0.U}\Sigma_{00.U}^{-1}\tilde{\Sigma}_{0Y.U}/h + O_P(h T^{-1/2}) = \alpha'\Sigma_{00.U}^{-1}\alpha\,\tilde{\Sigma}_{YY.U}/h + O_P(h T^{-1/2})$$
follows.
follows. Then as in P we have4
M 22.1 β ˜ ̲ = M 20.1 ( Σ 00 . U 1 Σ ˜ 0 Y . U ( h H Λ H 1 ) 1 + O P ( h T 1 / 2 ) ) = M 22.1 β + M 2 ε . 1 Σ ϵ 1 α ( α Σ ϵ 1 α ) 1 + a 1
where a 1 = M 20.1 O P ( h 2 T 1 / 2 ) = o P ( 1 ) and β ˜ ̲ = β ˜ H 1 . Then the remaining arguments on p. 546 of P show that the asymptotic distribution of ( T β 1 , T 2 β 2 ) ( β ˜ ̲ β ) is identical for the long VAR case as in the finite VAR case.
From these arguments the distribution of the likelihood ratio test of $H_r$ versus $H_p$ can be derived: define $S_1(\lambda) := \lambda M_{22.1} - M_{20.1} M_{00.1}^{-1} M_{02.1}$, $A_T := (\beta_1, T^{-1}\beta_2)$ and $\tilde{B}_T := (\beta, A_T) = (\beta, \beta_1, T^{-1}\beta_2)$. Since $\tilde{B}_T$ is of full rank, (11) is equivalent to $|\tilde{B}_T' S_1(\lambda)\tilde{B}_T| = 0$; that is,
$$\left|\begin{bmatrix}\beta' \\ \beta_1' \\ T^{-1}\beta_2'\end{bmatrix} S_1(\lambda)\,(\beta, \beta_1, T^{-1}\beta_2)\right| = \left|\beta' S_1(\lambda)\beta\right|\cdot\left|A_T'\big(S_1(\lambda) - S_1(\lambda)\beta(\beta' S_1(\lambda)\beta)^{-1}\beta' S_1(\lambda)\big)A_T\right| = 0.$$
Let $\delta_1 = T\lambda$, so that for every fixed $\delta_1$ we have $\lambda \to 0$ as $T \to \infty$. By the above arguments we have that
$$h^2\beta' S_1(\lambda)\beta = \delta_1\frac{h^2}{T}\beta' M_{22.1}\beta - h^2\beta' M_{20.1} M_{00.1}^{-1} M_{02.1}\beta \xrightarrow{p} -\tilde{\Sigma}_{Y0.U}\Sigma_{00.U}^{-1}\tilde{\Sigma}_{0Y.U} \neq 0,$$
which has no zero root. Moreover, we have
$$h A_T' S_1(\lambda)\beta = h\lambda A_T' M_{22.1}\beta - h A_T' M_{20.1} M_{00.1}^{-1} M_{02.1}\beta = -A_T' M_{20.1}\Sigma_{00.U}^{-1}\tilde{\Sigma}_{0Y.U} + o_P(1),$$
which yields that
$$A_T'\big(S_1(\lambda) - S_1(\lambda)\beta(\beta' S_1(\lambda)\beta)^{-1}\beta' S_1(\lambda)\big)A_T = \delta_1\frac{1}{T}A_T' M_{22.1}A_T - A_T' M_{20.1} M_{00.1}^{-1} M_{02.1}A_T - A_T' S_1(\lambda)\beta\big(\beta' S_1(\lambda)\beta\big)^{-1}\beta' S_1(\lambda)A_T$$
$$= \delta_1\frac{1}{T}A_T' M_{22.1}A_T - A_T' M_{20.1}\Big(M_{00.1}^{-1} - \Sigma_{00.U}^{-1}\tilde{\Sigma}_{0Y.U}\big(\tilde{\Sigma}_{Y0.U}\Sigma_{00.U}^{-1}\tilde{\Sigma}_{0Y.U}\big)^{-1}\tilde{\Sigma}_{Y0.U}\Sigma_{00.U}^{-1} + o_P(1)\Big)M_{02.1}A_T$$
$$= \delta_1\frac{1}{T}A_T' M_{22.1}A_T - A_T' M_{20.1}\Big(\Sigma_{00.U}^{-1} - \Sigma_{00.U}^{-1}\alpha(\alpha'\Sigma_\epsilon^{-1}\alpha)^{-1}\alpha'\Sigma_{00.U}^{-1} + o_P(1)\Big)M_{02.1}A_T$$
$$\xrightarrow{d} \delta_1\int_0^1 F F' - \int_0^1 F\,(dW)'\,\alpha(\alpha'\Sigma_\epsilon\alpha)^{-1}\alpha'\int_0^1 (dW)\,F' = \delta_1\int_0^1 F F' - \Big(\int_0^1 F\, d\bar{W}'\Big)\Big(\int_0^1 d\bar{W}\, F'\Big),$$
where $\bar{W} = (\alpha'\Sigma_\epsilon\alpha)^{-1/2}\alpha' W$. Thus, the smallest $p - r$ solutions of (11) converge in distribution to the solutions of $\big|\delta_1\int_0^1 F F' - (\int_0^1 F\, d\bar{W}')(\int_0^1 d\bar{W}\, F')\big| = 0$, which implies that the test statistic $Q_r$ has the following limiting distribution:
$$Q_r = \sum_{i=r+1}^{p}\delta_{1,i} + o_P(1) \xrightarrow{d} \operatorname{tr}\Big\{\Big(\int_0^1 d\bar{W}\, F'\Big)\Big(\int_0^1 F F'\Big)^{-1}\Big(\int_0^1 F\, d\bar{W}'\Big)\Big\}.$$
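Critical values of this limiting functional are typically obtained by Monte Carlo discretization of the Brownian motions. The following sketch (our illustration only: a generic standard Brownian motion stands in for the corrected process $F$, whose exact construction from $B_2$ and $B_3$ is given above) shows the standard simulation scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def trace_stat_draw(dim_w, dim_f, T=1000):
    """One draw of tr{(int dW F')(int F F')^{-1}(int F dW')} obtained by
    discretizing [0, 1] into T steps.  Sketch only: here F is a generic
    Brownian motion, not the corrected process of the theorem."""
    dW = rng.standard_normal((T, dim_w)) / np.sqrt(T)
    dB = rng.standard_normal((T, dim_f)) / np.sqrt(T)
    F = np.cumsum(dB, axis=0) - dB       # left endpoint values F((t-1)/T)
    SWF = dW.T @ F                       # approximates int dW F'
    SFF = F.T @ F / T                    # approximates int F F'
    return np.trace(SWF @ np.linalg.solve(SFF, SWF.T))

draws = np.array([trace_stat_draw(2, 2) for _ in range(5000)])
print(np.quantile(draws, 0.95))          # simulated 95% critical value
```

The second-stage statistic $Q_{r,s}$ derived below is simulated in exactly the same way, with $B_3$ in place of $F$ and $W_2$ in place of $\bar{W}$.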
For the second stage the arguments are very similar. The eigenvalue problem solved here is the following:
$$\bar{\tilde{\beta}}_\perp' M_{11.\tilde{\beta}}\bar{\tilde{\beta}}_\perp\tilde{\eta}\,\Upsilon = \bar{\tilde{\beta}}_\perp' M_{1\tilde{\alpha}_\perp.\tilde{\beta}} M_{\tilde{\alpha}_\perp\tilde{\alpha}_\perp.\tilde{\beta}}^{-1} M_{\tilde{\alpha}_\perp 1.\tilde{\beta}}\bar{\tilde{\beta}}_\perp\tilde{\eta},$$
with $\Upsilon$ denoting the diagonal matrix of eigenvalues. This formula uses $\tilde{\alpha}_\perp$, the ortho-complement of
$$\tilde{\alpha} = M_{02.1}\tilde{\beta}\,(\tilde{\beta}' M_{22.1}\tilde{\beta})^{-1}.$$
From the above results, noting that $h\tilde{\beta}' M_{22.1}\tilde{\beta} \to \tilde{\Sigma}_{YY.U}$ and $h M_{02.1}\tilde{\beta} \to \alpha\tilde{\Sigma}_{YY.U}$ according to Lemma A.4, we have $\tilde{\alpha} \to \alpha$. Considering the order of convergence we obtain $\tilde{\alpha} - \alpha = O_P(h T^{-1/2})$. As in P this implies $\tilde{\alpha}_\perp - \alpha_\perp = O_P(h T^{-1/2})$. Using $\tilde{\beta} - \beta = O_P(h^{5/2}/T)$ from stage 1, one observes that in the eigenvalue problem the estimates can be replaced by the true quantities, introducing an error of order $o_P(h T^{-1/2})$:
$$\bar{\beta}_\perp' M_{11.\beta}\tilde{\beta}_1\Upsilon = \bar{\beta}_\perp' M_{1\alpha_\perp.\beta} M_{\alpha_\perp\alpha_\perp.\beta}^{-1} M_{\alpha_\perp 1.\beta}\tilde{\beta}_1 + o_P(h T^{-1/2}).$$
Then as in P consider $\tilde{\beta}_1 = \beta H + \beta_1 H_1 + \bar{\beta}_2 H_2$, reusing the symbols $H, H_1, H_2$ here for $\tilde{\beta}_1$ in place of $\tilde{\beta}$ as before. Arguments identical to those around (A6) show that $H_1 = O_P(1)$ and $H_2 = O_P(h^2/T)$. Combining the arguments around (A6) with the developments on pp. 546 and 547 of P, we obtain (A.21) of P:
$$\bar{\beta}_\perp' M_{11.\beta}(\underline{\tilde{\beta}}_1 - \beta_1) = \bar{\beta}_\perp' M_{1\varepsilon.\beta}\,\alpha\,\Sigma_{\alpha\alpha}^{-1}\zeta\,(\zeta'\Sigma_{\alpha\alpha}^{-1}\zeta)^{-1} + o_P(1).$$
The rest of the proof of (4.3a) and (4.3b) of P follows as in P.
With respect to the second likelihood ratio test consider
$$\tilde{S}_2(\rho) = \rho\,\bar{\tilde{\beta}}_\perp' M_{11.\tilde{\beta}}\bar{\tilde{\beta}}_\perp - \bar{\tilde{\beta}}_\perp' M_{1\tilde{\alpha}_\perp.\tilde{\beta}} M_{\tilde{\alpha}_\perp\tilde{\alpha}_\perp.\tilde{\beta}}^{-1} M_{\tilde{\alpha}_\perp 1.\tilde{\beta}}\bar{\tilde{\beta}}_\perp.$$
The results above imply that, uniformly in $|\rho| < C$ (for every $0 < C < \infty$), $\tilde{S}_2(\rho)$ has distance of order $O_P(h T^{-1/2})$ to $S_2(\rho)$, where
$$S_2(\rho) = \rho\,\bar{\beta}_\perp' M_{11.\beta}\bar{\beta}_\perp - \bar{\beta}_\perp' M_{1\alpha_\perp.\beta} M_{\alpha_\perp\alpha_\perp.\beta}^{-1} M_{\alpha_\perp 1.\beta}\bar{\beta}_\perp.$$
Note that since $(\eta, \eta_\perp)$ is of full rank, (16) is equivalent to
$$\left|\begin{bmatrix}\eta' \\ \eta_\perp'\end{bmatrix} S_2(\rho)\,(\eta, \eta_\perp)\right| = \left|\eta' S_2(\rho)\eta\right|\cdot\left|\eta_\perp'\big(S_2(\rho) - S_2(\rho)\eta(\eta' S_2(\rho)\eta)^{-1}\eta' S_2(\rho)\big)\eta_\perp\right| = 0.$$
Let $\delta_2 = T\rho$, so that $\rho \to 0$ as $T \to \infty$. As above it can be seen that
$$h^2\eta' S_2(\delta_2/T)\eta = h^2\frac{\delta_2}{T}\beta_1' M_{11.\beta}\beta_1 - h^2\beta_1' M_{1\bar{\alpha}_\perp.\beta} M_{\bar{\alpha}_\perp\bar{\alpha}_\perp.\beta}^{-1} M_{\bar{\alpha}_\perp 1.\beta}\beta_1 \xrightarrow{p} -\tilde{\Sigma}_{U0}\alpha_\perp(\alpha_\perp'\Sigma_{00}\alpha_\perp)^{-1}\alpha_\perp'\tilde{\Sigma}_{0U} \neq 0.$$
This shows that the $s$ largest roots of $S_2(\rho)$ tend to zero more slowly than $O(1/T)$. Moreover, we have
$$h\eta_\perp' S_2(\delta_2/T)\eta = h\frac{\delta_2}{T}\beta_2' M_{11.\beta}\beta_1 - h\beta_2' M_{1\bar{\alpha}_\perp.\beta} M_{\bar{\alpha}_\perp\bar{\alpha}_\perp.\beta}^{-1} M_{\bar{\alpha}_\perp 1.\beta}\beta_1 = -\beta_2' M_{1\bar{\alpha}_\perp.\beta}(\alpha_\perp'\Sigma_{00}\alpha_\perp)^{-1}\alpha_\perp'\tilde{\Sigma}_{0U} + o_P(1),$$
which yields, using $P_M := (\alpha_\perp'\Sigma_{00}\alpha_\perp)^{-1}\alpha_\perp'\tilde{\Sigma}_{0U}\big(\tilde{\Sigma}_{U0}\alpha_\perp(\alpha_\perp'\Sigma_{00}\alpha_\perp)^{-1}\alpha_\perp'\tilde{\Sigma}_{0U}\big)^{-1}\tilde{\Sigma}_{U0}\alpha_\perp(\alpha_\perp'\Sigma_{00}\alpha_\perp)^{-1}$, that
$$\eta_\perp'\big(S_2(\delta_2/T) - S_2(\delta_2/T)\eta(\eta' S_2(\delta_2/T)\eta)^{-1}\eta' S_2(\delta_2/T)\big)\eta_\perp$$
$$= \delta_2\frac{1}{T}\beta_2' M_{11.\beta}\beta_2 - \beta_2' M_{1\bar{\alpha}_\perp.\beta} M_{\bar{\alpha}_\perp\bar{\alpha}_\perp.\beta}^{-1} M_{\bar{\alpha}_\perp 1.\beta}\beta_2 - h\eta_\perp' S_2(\delta_2/T)\eta\big(h^2\eta' S_2(\delta_2/T)\eta\big)^{-1} h\eta' S_2(\delta_2/T)\eta_\perp$$
$$= \delta_2\frac{1}{T}\beta_2' M_{11.\beta}\beta_2 - \beta_2' M_{1\bar{\alpha}_\perp.\beta}(\alpha_\perp'\Sigma_{00}\alpha_\perp)^{-1} M_{\bar{\alpha}_\perp 1.\beta}\beta_2 + \beta_2' M_{1\bar{\alpha}_\perp.\beta} P_M M_{\bar{\alpha}_\perp 1.\beta}\beta_2 + o_P(1)$$
$$\xrightarrow{d} \delta_2\int_0^1 B_3 B_3' - \int_0^1 B_3\,(dW)'\,\alpha_2(\alpha_2'\Sigma_\epsilon\alpha_2)^{-1}\alpha_2'\int_0^1 (dW)\,B_3' = \delta_2\int_0^1 B_3 B_3' - \Big(\int_0^1 B_3\, dW_2'\Big)\Big(\int_0^1 dW_2\, B_3'\Big),$$
using the results of Lemma A.5 of P and (A.18) of Paruolo (1996) as an expression for
$$(\alpha_\perp'\Sigma_{00}\alpha_\perp)^{-1} - P_M,$$
where $W_2 = (\alpha_2'\Sigma_\epsilon\alpha_2)^{-1/2}\alpha_2' W$.
Thus, the smallest $p - r - s$ solutions of (16) converge in distribution to the solutions of
$$\Big|\delta_2\int_0^1 B_3 B_3' - \Big(\int_0^1 B_3\, dW_2'\Big)\Big(\int_0^1 dW_2\, B_3'\Big)\Big| = 0,$$
which shows that the test statistic $Q_{r,s}$ has the following limiting distribution:
$$Q_{r,s} = \sum_{i=s+1}^{p-r}\delta_{2,i} + o_P(1) \xrightarrow{d} \operatorname{tr}\Big\{\Big(\int_0^1 dW_2\, B_3'\Big)\Big(\int_0^1 B_3 B_3'\Big)^{-1}\Big(\int_0^1 B_3\, dW_2'\Big)\Big\}.$$
It also follows that the sum $S_{r,s} = Q_r + Q_{r,s}$ converges in distribution, showing (C).
The rest of the proof of relations (4.3a, b) of P follows exactly as in P. In (4.4) of P the order of convergence is replaced by $o_P(T^{-1})$, in (4.5) the error term can be shown to be $o_P(T^{-1/2})$, and in (4.6) instead of the term $O_P(T^{-2})$ we obtain $o_P(1)$.
These terms show consistency of $\tilde{\beta}$ and $\tilde{\eta}$. Using the results of Lemma A.4 of P, consistency of $\tilde{\alpha}$ and $\tilde{\zeta}$ then follows.
Following the proof of Theorem 4.2 on pp. 548 and 549 of P we can show consistency of $\tilde{\psi}$ of P. The only changes concern the orders of convergence, where our setting introduces powers of h into the arguments. Jointly this proves consistency of $\tilde{\Psi}$ and $\tilde{\Gamma}$. Consistency of the coefficients corresponding to the stationary terms $\Delta^2 y_{t-j}$ follows as usual from the consistency of the estimates of the coefficients corresponding to the non-stationary regressors. This completes the proof of (D).
With respect to (E), note that the results above show that the two eigenvalue problems to be solved converge to the same quantities as in the finite VAR case. Hence the corresponding results of P also hold in the case of long VARs.
Finally, for the matrices $\Pi_j$ note that Theorem 4.3 of P shows that the asymptotic distribution of all quantities corresponding to stationary regressors is identical for every super-consistent estimator of the coefficients corresponding to the non-stationary components.

Appendix E. Proof of Theorem 4

From Theorem 3 it follows that $\hat{\Phi} = \hat{\alpha}\hat{\beta}' \to \Phi$, $\hat{\Psi} \to \Psi$ and $\hat{\Pi}_j \to \Pi_j$, $j = 1, 2, \dots, 2f-1$. Therefore the Hankel matrix of the impulse response coefficients $\hat{\Pi}_j$ converges to the Hankel matrix corresponding to the $\Pi_j$. As $(\bar{A}, B)$ is controllable, $(A, B, C)$ is minimal and $\bar{A}$ is nonsingular according to the assumptions, this Hankel matrix has rank n. This implies that the stochastic realization algorithm of Appendix F provides consistent estimates $(\hat{\bar{A}}, \hat{B}, \hat{D}) \to (\bar{A}, B, D)$ and hence
$$\hat{a}(z) = (1-z)^2 I_p - \hat{\Phi} z - \hat{\Psi} z(1-z) - (1-z)^2 z\,\hat{D}(I_n - z\hat{\bar{A}})^{-1}\hat{B} \to a(z).$$
For details see Appendix F.
The estimate $\hat{a}(z)$ does not necessarily correspond to a rational transfer function of order n. It does so, however, if the additional restrictions (22) hold; steps 3 and 4 of the proposed algorithm achieve this. Here step 3 ascertains that solutions to the third equation exist. The second equation explicitly provides a solution $\bar{\alpha}$ for given $C$. This solution is not necessarily of full row rank, but as this is the case in the limit, it also holds for large enough T. The first equation always admits solutions. Thus for large enough T the set of all solutions is defined by polynomial restrictions. Adding the least squares distance to the estimated impulse response sequence then leads to a quadratic problem under non-linear differentiable constraints, which in the limit has a unique solution. Thus the solution is unique for large enough T.
Consistency of the estimates in combination with continuity of the solution of step 4 implies consistency for the system $(\hat{\bar{A}}, \hat{B}, \hat{C})$. This implies consistency for the inverse system $(\hat{A}, \hat{B}, \hat{C})$ in the sense of converging impulse response coefficients, and hence consistency for the transfer function estimator in the pointwise topology. The fulfillment of the restrictions (22) ensures the structure of the corresponding matrix $\hat{A}$ according to the state space unit root structure $((0, (c, c+d)))$.
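As a small illustration, the estimated AR transfer function can be evaluated directly from the estimates (a sketch only; the argument names are ours, and the sign convention follows the reconstruction of the formula displayed above):

```python
import numpy as np

def a_hat(z, Phi, Psi, Abar, B, D):
    """Evaluate a_hat(z) = (1-z)^2 I_p - Phi z - Psi z (1-z)
    - (1-z)^2 z D (I_n - z Abar)^{-1} B at a (complex) point z.
    Dimensions: Phi, Psi are p x p; Abar is n x n; D is p x n; B is n x p."""
    p, n = Phi.shape[0], Abar.shape[0]
    tail = D @ np.linalg.solve(np.eye(n) - z * Abar, B)  # order-n tail term
    return ((1 - z) ** 2 * np.eye(p) - z * Phi - z * (1 - z) * Psi
            - (1 - z) ** 2 * z * tail)
```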

Appendix F. Stochastic Realization Using Overlapping Echelon Forms

This section describes the approximate realization of the coefficients $G_j$, $j = 1, \dots, 2f-1$, of an impulse response sequence using a rational transfer function of order n, where $f \geq n$. More details can be found in Section 2.6 of Hannan and Deistler (1988).
Define the Hankel matrix
$$\mathcal{H}_{f,f} = \begin{bmatrix} G_1 & G_2 & G_3 & \cdots & G_f \\ G_2 & G_3 & & & G_{f+1} \\ G_3 & & & & \vdots \\ \vdots & & & & \\ G_f & G_{f+1} & \cdots & & G_{2f-1} \end{bmatrix} = \begin{bmatrix} h(1,1) \\ h(1,2) \\ \vdots \\ h(1,p) \\ h(2,1) \\ \vdots \\ h(f,p) \end{bmatrix}.$$
Here $h(i,j)$ denotes the j-th row in the i-th block row. Let $\alpha = (n_1, \dots, n_p)$ define a nice selection of rows⁵ of $\mathcal{H}$ such that $\mathcal{H}_\alpha \in \mathbb{R}^{n\times fp}$, the submatrix of $\mathcal{H}$ containing the rows $h(i,j)$, $i \leq n_j$, is of full row rank. If the impulse response corresponds to a transfer function of order at least n, such a nice selection $\alpha$ exists. Finally, let $\mathcal{H}_\alpha^{+1} \in \mathbb{R}^{n\times fp}$ denote the matrix $\mathcal{H}_\alpha$ shifted down one block row (that is, in each row where $\mathcal{H}_\alpha$ contains $h(i,j)$, $\mathcal{H}_\alpha^{+1}$ contains $h(i+1,j)$).
Then it is derived in Hannan and Deistler (1988), Theorem 2.6.2, that if $G_j$ corresponds to a transfer function $k(z) = \sum_{j=1}^{\infty} G_j z^j$ of order exactly n such that the corresponding $\mathcal{H}_\alpha$ is formed using a nice selection, then a system $(A, B, C)$ can be defined using the formulas
$$A\mathcal{H}_\alpha = \mathcal{H}_\alpha^{+1}, \qquad B = \mathcal{H}_\alpha\begin{bmatrix} I_p \\ 0 \end{bmatrix}, \qquad C\mathcal{H}_\alpha = [G_1, G_2, \dots, G_f]$$
such that $G_j = C A^{j-1} B$, $j = 1, 2, \dots$.
If the order of the transfer function is larger than n, then the equations for A and C can be solved using least squares. If a sequence of impulse responses satisfies $\hat{G}_j \to G_j$, $j = 1, \dots, 2f-1$, and the limit $G_j$ corresponds to a transfer function for which the rank of $\mathcal{H}_\alpha$ equals n, then obviously the resulting systems satisfy $(\hat{A}, \hat{B}, \hat{C}) \to (A, B, C)$, since in this case the least squares solution depends continuously on the matrix $\mathcal{H}$.
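A compact sketch of this realization step follows (our illustration; for simplicity the nice selection is taken to be the first n rows of $\mathcal{H}$, which is always a nice selection and is valid whenever these rows have full row rank n — the self-check at the end uses hypothetical randomly generated data, not data from the paper):

```python
import numpy as np

def realize(G, n):
    """Recover (A, B, C) with G_j = C A^(j-1) B from impulse responses
    G = [G_1, ..., G_{2f-1}] (each p x p) of true order n.  Sketch using
    the first n rows of the Hankel matrix as the nice selection; A and C
    are obtained by least squares as described in the text."""
    p = G[0].shape[0]
    f = (len(G) + 1) // 2
    H = np.block([[G[i + j] for j in range(f)] for i in range(f)])
    H_sel = H[:n, :]                    # selected rows h(i, j)
    H_shift = H[p:p + n, :]             # the same rows shifted one block row down
    A = np.linalg.lstsq(H_sel.T, H_shift.T, rcond=None)[0].T  # A H_sel = H_sel^{+1}
    B = H_sel[:, :p]                                          # B = H_sel [I_p; 0]
    C = np.linalg.lstsq(H_sel.T, H[:p, :].T, rcond=None)[0].T # C H_sel = (G_1 ... G_f)
    return A, B, C

# self-check on a randomly generated minimal system (hypothetical data)
rng = np.random.default_rng(1)
n_true, p, f = 3, 2, 6
A0 = 0.5 * rng.standard_normal((n_true, n_true))
B0 = rng.standard_normal((n_true, p))
C0 = rng.standard_normal((p, n_true))
G = [C0 @ np.linalg.matrix_power(A0, j) @ B0 for j in range(2 * f - 1)]
A, B, C = realize(G, n_true)
print(all(np.allclose(C @ np.linalg.matrix_power(A, j) @ B, G[j])
          for j in range(2 * f - 1)))   # True: impulse responses reproduced
```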

References

1. Banerjee, Anindya, Lynne Cockerell, and Bill Russell. 2001. An I(2) analysis of inflation and the markup. Journal of Applied Econometrics 16: 221–40.
2. Bauer, Dietmar, and Alex Maynard. 2012. Persistence-robust surplus-lag Granger causality testing. Journal of Econometrics 169: 293–300.
3. Bauer, Dietmar, and Martin Wagner. 2004. Autoregressive Approximations to MFI(1) Processes. Working Paper No. 174, Reihe Ökonomie/Economics Series. Vienna: Institut für Höhere Studien (IHS).
4. Bauer, Dietmar, and Martin Wagner. 2012. A state space canonical form for unit root processes. Econometric Theory 28: 1313–49.
5. Berk, Kenneth N. 1974. Consistent autoregressive spectral estimates. The Annals of Statistics 2: 489–502.
6. Boswijk, H. Peter, and Jurgen A. Doornik. 2004. Identifying, estimating and testing restricted cointegrated systems: An overview. Statistica Neerlandica 58: 440–65.
7. Boswijk, H. Peter, and Paolo Paruolo. 2017. Likelihood ratio tests of restrictions on common trends loading matrices in I(2) VAR systems. Econometrics 5: 28.
8. Chan, Ngai Hang, and Ching Zong Wei. 1988. Limiting distributions of least squares estimates of unstable autoregressive processes. The Annals of Statistics 16: 367–401.
9. Dolado, Juan J., and Helmut Lütkepohl. 1996. Making Wald tests work for cointegrated VAR systems. Econometric Reviews 15: 369–86.
10. Engle, Robert F., and Clive W. J. Granger. 1987. Co-integration and error correction: Representation, estimation, and testing. Econometrica 55: 251–76.
11. Georgoutsos, Dimitris A., and Georgios P. Kouretas. 2004. A multivariate I(2) cointegration analysis of German hyperinflation. Applied Financial Economics 14: 29–41.
12. Hannan, Edward James, and Manfred Deistler. 1988. The Statistical Theory of Linear Systems. New York: John Wiley.
13. Hannan, Edward James, and Laimonis Kavalieris. 1986. Regression, autoregression models. Journal of Time Series Analysis 7: 27–49.
14. Inoue, Atsushi, and Lutz Kilian. 2020. The uniform validity of impulse response inference in autoregressions. Journal of Econometrics 215: 450–72.
15. Johansen, Søren, and Helmut Lütkepohl. 2005. A note on testing restrictions for the cointegration parameters of a VAR with I(2) variables. Econometric Theory 21: 653–58.
16. Johansen, Søren, Katarina Juselius, Roman Frydman, and Michael Goldberg. 2007. Testing hypotheses in an I(2) model with applications to the persistent long swings in the Dmk/$ rate. Journal of Econometrics 158: 1–35.
17. Johansen, Søren. 1992a. A representation of vector autoregressive processes integrated of order 2. Econometric Theory 8: 188–202.
18. Johansen, Søren. 1992b. Testing weak exogeneity and the order of cointegration in UK money demand data. Journal of Policy Modeling 14: 313–34.
19. Johansen, Søren. 1995. Likelihood-Based Inference in Cointegrated Vector Auto-Regressive Models. Oxford: Oxford University Press.
20. Johansen, Søren. 1997. Likelihood analysis of the I(2) model. Scandinavian Journal of Statistics 24: 433–62.
21. Juselius, Katarina, and Katrin Assenmacher. 2017. Real exchange rate persistence and the excess return puzzle: The case of Switzerland versus the US. Journal of Applied Econometrics 32: 1145–55.
22. Juselius, Katarina, and Josh R. Stillwagon. 2018. Are outcomes driving expectations or the other way around? An I(2) CVAR analysis of interest rate expectations in the dollar/pound market. Journal of International Money and Finance 83: 93–105.
23. Juselius, Katarina. 1994. On the duality between long-run relations and common trends in the I(1) versus I(2) model. An application to aggregate money holdings. Econometric Reviews 13: 151–78.
24. Juselius, Katarina. 2006. The Cointegrated VAR Model. Oxford: Oxford University Press.
25. Kurita, Takamitsu, Heino Bohn Nielsen, and Anders Rahbek. 2011. An I(2) cointegration model with piecewise linear trends. Econometrics Journal 14: 131–55.
26. Kurita, Takamitsu. 2012. Likelihood-based inference for weak exogeneity in I(2) cointegrated VAR models. Econometric Reviews 31: 325–60.
27. Lewis, Richard, and Gregory C. Reinsel. 1985. Prediction of multivariate time series by autoregressive model fitting. Journal of Multivariate Analysis 16: 393–411.
28. Lütkepohl, Helmut, and Holger Claessen. 1997. Analysis of cointegrated VARMA processes. Journal of Econometrics 80: 223–29.
29. Lütkepohl, Helmut, and Pentti Saikkonen. 1997. Impulse response analysis in infinite order cointegrated vector autoregressive processes. Journal of Econometrics 81: 127–57.
30. Mosconi, Rocco, and Paolo Paruolo. 2013. Identification of Cointegrating Relations in I(2) Vector Autoregressive Models. Rome: PRIN Workshop Forecasting Economic and Financial Time Series, Research Publications at Politecnico di Milano, pp. 1–35.
31. Mosconi, Rocco, and Paolo Paruolo. 2017. Identification conditions in simultaneous systems of cointegrating equations with integrated variables of higher order. Journal of Econometrics 198: 271–76.
32. Nielsen, Heino Bohn, and Anders Rahbek. 2007. The likelihood ratio test for cointegration ranks in the I(2) model. Econometric Theory 23: 615–37.
33. Palma, Wilfredo, and Pascal Bondon. 2003. On the eigenstructure of generalized fractional processes. Statistics and Probability Letters 65: 93–101.
34. Paruolo, Paolo, and Anders Rahbek. 1999. Weak exogeneity in I(2) VAR systems. Journal of Econometrics 93: 281–308.
35. Paruolo, Paolo. 1994. The role of the drift in I(2) systems. Journal of the Italian Statistical Society 3: 93–123.
36. Paruolo, Paolo. 1996. On the determination of integration indices in I(2) systems. Journal of Econometrics 72: 313–56.
37. Paruolo, Paolo. 2000. Asymptotic efficiency of the two stage estimator in I(2) systems. Econometric Theory 16: 524–50.
38. Paruolo, Paolo. 2006. Common trends and cycles in I(2) VAR systems. Journal of Econometrics 132: 143–68.
39. Rahbek, Anders, Hans Christian Kongsted, and Clara Jørgensen. 1999. Trend stationarity in the I(2) cointegration model. Journal of Econometrics 90: 265–89.
40. Saikkonen, Pentti, and Helmut Lütkepohl. 1996. Infinite-order cointegrated vector autoregressive processes: Estimation and inference. Econometric Theory 12: 814–44.
41. Saikkonen, Pentti, and Ritva Luukkonen. 1997. Testing cointegration in infinite order vector autoregressive processes. Journal of Econometrics 81: 93–126.
42. Saikkonen, Pentti. 1991. Asymptotically efficient estimation of cointegration regressions. Econometric Theory 7: 1–21.
43. Saikkonen, Pentti. 1992. Estimation and testing of cointegrated systems by an autoregressive approximation. Econometric Theory 8: 1–27.
44. Sims, Christopher A., James H. Stock, and Mark W. Watson. 1990. Inference in linear time series models with some unit roots. Econometrica 58: 113–44.
45. Stillwagon, Josh R. 2018. Are risk premia related to real exchange rate swings? Evidence from I(2) CVARs with survey expectations. Macroeconomic Dynamics 22: 255–78.
1. Here somewhat sloppily we use the same symbols for processes and their realizations.
2. Note that $\alpha = [I_{p_1}, 0]'$, and thus $\Omega_{1.c} = ([\Omega^{-1}]_{11})^{-1} = (\alpha'\Omega^{-1}\alpha)^{-1} = \left(\alpha' g(1)'\Sigma_\epsilon^{-1} g(1)\alpha\right)^{-1} = (\Phi_1'\Sigma_\epsilon^{-1}\Phi_1)^{-1}$.
3. In this appendix, processes whose dimension depends on the choice of h are denoted by upper case letters; otherwise the dependence on h is neglected in the notation for simplicity.
4. Contrary to the usual Johansen notation we use $\Sigma_\epsilon$ for the noise covariance and $\Omega$ for the variance of the Brownian motion corresponding to $(u_t)_{t\in\mathbb{Z}}$. Thus some of the formulas in this part take an 'unusual' form.
5. A nice selection is such that if $h(i,j)$ is contained in the selection, then $h(l,j)$ is also contained for all $0 < l < i$.
