Article

Model Validation and DSGE Modeling

1 The School of Arts, Kathmandu University, P. B. No. 6250, Kathmandu 44700, Nepal
2 Department of Economics, Virginia Tech (Virginia Polytechnic Institute and State University), Blacksburg, VA 24060, USA
* Author to whom correspondence should be addressed.
We are most grateful to our colleagues, Byron Tsang, Chetan Dave, and two anonymous referees for several helpful comments that improved the paper considerably.
Submission received: 30 September 2017 / Revised: 30 November 2020 / Accepted: 25 March 2022 / Published: 7 April 2022
(This article belongs to the Special Issue Celebrated Econometricians: David Hendry)

Abstract

The primary objective of this paper is to revisit DSGE models with a view to bringing out their key weaknesses, including statistical misspecification, non-identification of deep parameters, substantive inadequacy, weak forecasting performance, and potentially misleading policy analysis. It is argued that most of these weaknesses stem from failing to distinguish between statistical and substantive adequacy and to secure the former before assessing the latter. The paper untangles the statistical from the substantive premises of inference to delineate the above-mentioned issues and propose solutions. The discussion revolves around a typical DSGE model using US quarterly data. It is shown that this model is statistically misspecified and, when respecified to arrive at a statistically adequate model, gives rise to the Student’s t VAR model. This statistical model is shown to (i) provide a sound basis for testing the DSGE overidentifying restrictions as well as probing the identifiability of the deep parameters, (ii) suggest ways to meliorate its substantive inadequacy, and (iii) give rise to reliable forecasts and policy simulations.

1. Introduction

The Real Business Cycle (RBC) models proposed by (Kydland and Prescott 1982; Prescott 1986) were heralded by Wickens (1995) as “A Needed Revolution in Macroeconometrics”:
“The main failing of most macroeconometric models is in not taking macroeconomic theory seriously enough with the result that little or nothing is learned about key parameter values, a fault no amount of econometric sophistication will compensate for”.
(p. 1637)
The original RBC models were subsequently extended in several directions that eventually led to the broader family of Dynamic Stochastic General Equilibrium (DSGE) models. DSGE models combined the RBC perspective with Lucas’s (1976) call for structural models built on sound microfoundations, with the parameters of interest reflecting primarily the preferences of the decision-makers as well as the relevant technical and institutional constraints. DSGE models are built upon an inter-temporal general equilibrium framework with a well-defined long-run structure and intrinsic dynamics. Lucas (1976) argued that such structural models with deep parameters are likely to be invariant to policy interventions and thus provide a better basis for prediction and policy evaluation. This produced structural models founded on the interdependent intertemporal optimization (e.g., the maximization of lifetime utility) of certain representative rational agents (e.g., household, firm, government, central bank) that integrates their expectations; see (Canova 2007; DeJong and Dave 2011).
From the theory perspective, DSGE modeling has been a success in revolutionizing macroeconomics by providing more cogent microfoundations and introducing intrinsic and extrinsic dynamics through shocks into macroeconomic models. DSGE models are currently dominating both the empirical modeling in macroeconomics as well as the economic policy evaluation; see (Hashimzade and Thornton 2013).
From the empirical perspective, however, DSGE models have been criticized on several grounds. First, DSGE models do not fully account for the probabilistic structure of the data; see (Favero 2001). Second, the use of ‘calibration’ to quantify DSGE models has been called into question; see (Gregory and Smith 1993; Kim and Pagan 1994). Third, the identification of their ‘deep’ parameters remains problematic; see (Canova 2009; Consolo et al. 2009). Fourth, the appropriateness of the Hodrick-Prescott (H-P) filter has been seriously challenged; see (Chang et al. 2007; Harvey and Jaeger 1993; Saijo 2013). Fifth, the forecasting capacity of DSGE models is rather weak; see (Edge and Gurkaynak 2010). In light of that, one can make a case that, despite the current popularity of DSGE models, there is a lot to be done to ensure their empirical adequacy for inference and policy simulation purposes.
On the positive side, there have been several attempts to remedy some of these weaknesses, including a trend toward estimating the parameters of such models (Fernández-Villaverde and Rubio-Ramírez 2007; Ireland 2004; Smets and Wouters 2003), as well as identifying the structural parameters using statistical techniques (Consolo et al. 2009). In addition, questions relating to various forms of possible ‘substantive’ misspecifications of DSGE models have been raised; see (Canova 2009; Del Negro and Schorfeide 2009):
“Over the last 20 years dynamic stochastic general equilibrium (DSGE) models have become more detailed and complex and numerous features have been added to the original real business cycle core. Still, even the best practice DSGE model is likely to be misspecified; either because features, such as heterogeneities in expectations, are missing or because researchers leave out aspects deemed tangential to the issues of interest”.
Unfortunately, the form of expectations and potentially relevant variables omitted from a DSGE model (e.g., Table 1, Table 2 and Table 3) pertain to substantive (structural) misspecifications that will have statistical implications, but the literature has ignored statistical misspecification: invalid probabilistic assumptions imposed on one’s data. The two forms of misspecification are very different and before one can reliably probe for substantive misspecification one needs to secure statistical adequacy to ensure the reliability of the inference procedures used in probing substantive misspecification; see (Spanos 2006a).
The primary aim of the discussion that follows is to propose novel ways to address some of the empirical weaknesses mentioned above by bridging the gap between DSGE models and the relevant data more coherently. In particular, the paper proposes modeling strategies that bring out the statistical model M θ ( z ) implicit in every DSGE model M φ ( z ) , and suggests effective ways to test the validity of the probabilistic assumptions comprising M θ ( z ) vis-a-vis data Z 0 to establish its statistical adequacy. It is argued that a misspecified M θ ( z ) will undermine the reliability of any inference based on the estimated DSGE model, rendering the ensuing evidence untrustworthy. To avoid that, one needs to respecify the original M θ ( z ) to account for all the statistical information (the chance regularity patterns) exhibited by data Z 0 . When a statistically adequate model is secured, one can then proceed to probe the substantive adequacy of the DSGE model by testing the validity of its overidentifying restrictions, as well as any other relevant issues in [b]. Section 2 focuses on the importance of separating the statistical ( M θ ( z ) ) from the substantive ( M φ ( z ) ) model by discussing how a statistically misspecified M θ ( z ) undermines the reliability of all inferences based on M φ ( z ) . This perspective is then used in Section 3 and Section 4 to revisit the Smets and Wouters (2007) DSGE model to: (a) test the statistical adequacy of its implicit statistical model M θ ( z ) , (b) respecify M θ ( z ) to attain statistical adequacy, (c) appraise the empirical validity of the DSGE model M φ ( z ) , (d) evaluate the reliability of its forecasting and impulse response analysis, and (e) propose a procedure to probe the identifiability of its structural parameters φ .

2. Empirical Model Validation

2.1. Macroeconometric Models

Arguably, the single most important weakness of the macroeconometric models of the 1970s (Bodkin et al. 1991; McCarthy 1972) was their unreliable inferences, including poor forecasting performance. When these models were compared on forecasting grounds with data-driven single equation AR(p) models, they were found wanting; (Nelson 1972). In retrospect, the poor forecasting performance of these models can be attributed to several different sources.
The new classical macroeconomics of the 1980s attributed their inadequacy to their ad hoc specification and their lack of proper theoretical microfoundations; see (Lucas 1980). Indeed, these weaknesses were often used to motivate the introduction of calibration in RBC modeling (DeJong and Dave 2011):
“...an important component of Kydland and Prescott’s advocacy of calibration is based on a criticism of the probability approach....In sum, the use of calibration exercises as a means for facilitating the empirical implementation of DSGE models arose in the aftermath of the demise of system of equations analyses”.
(p. 257)
Equally plausible, however, is the argument that traces their predictive failure to their statistical misspecification, in the sense that these empirical models did not account for the statistical regularities in the data; see (Spanos 2010b, 2021). As argued by (Granger and Newbold 1986, p. 280), statistically misspecified models are likely to give rise to untrustworthy empirical evidence and poor predictive performance. Lucas (1987) called attention to the substantive adequacy of macroeconometric models but ignored their statistical misspecification as a source of untrustworthiness of the ensuing empirical evidence.

2.2. Structural vs. Statistical Models

A strong case can be made that the predictive failure of the empirical macroeconometric models of the 1980s can be traced to the questionable modeling strategy of foisting a substantive (structural) model M φ ( z ) on data z 0 and proceeding to draw inferences. Such a strategy, however, will invariably give rise to an empirical model that is both substantively and statistically misspecified. This stems primarily from the fact that the modeler (a) treats the substantive model as established knowledge, and not as a tentative explanation to be evaluated against the data, and/or (b) largely ignores the validity of the probabilistic assumptions imposed (directly or indirectly) on the data via the error term. This raises a serious philosophical problem known as Duhem’s conundrum, where one cannot separate the two sources of misspecification (statistical or substantive) and apportion blame with a view to finding ways to address it; see (Mayo 1996).
A case can be made (Spanos 1986) that the key to addressing this conundrum is to untangle the statistical model, M θ ( z ) , that is implicit in every substantive model, M φ ( z ) , whose generic forms are:
M θ ( z ) = { f ( z ; θ ) , θ ∈ Θ ⊂ ℝ^m } , z ∈ ℝ_Z^n ,   M φ ( z ) = { f ( z ; φ ) , φ ∈ Φ ⊂ ℝ^p } , z ∈ ℝ_Z^n ,
where p < m < n , f ( z ; θ ) denotes the distribution of the sample Z := ( Z 1 , … , Z n ) , and Θ and ℝ_Z^n denote the parameter and sample spaces, respectively. Most importantly, the substantive model constitutes a reparametrization/restriction of the statistical model via restrictions that can be generically specified by G ( φ , θ ) = 0 , where φ ∈ Φ and θ ∈ Θ denote the structural and statistical parameters, respectively. This can be achieved by delimiting the statistical model to comprise solely the probabilistic assumptions imposed on data z 0 , or more accurately on the stochastic process { Z t , t ∈ ℕ := ( 1 , 2 , … , n , … ) } underlying z 0 , by viewing the statistical model as a particular parametrization of the stochastic process { Z t , t ∈ ℕ } without any substantive restrictions imposed; (Spanos 2006a).
Example 1.
Consider the structural model underlying the Simultaneous Equations formulation:
M φ ( z ) : Γ ( φ ) Y t + Δ ( φ ) x t = ε t , ε t ∼ N ( 0 , Ω ( φ ) ) , (1)
that has dominated textbook econometrics since the 1960s. It can be shown that the implicit statistical model, M θ ( z ) , is its (unrestricted) reduced form (Spanos 1986) :
M θ ( z ) : Y t = B ( θ ) x t + u t , u t ∼ N ( 0 , Σ ( θ ) ) . (2)
The two models are related via B ( θ ) = − Γ ( φ )⁻¹ Δ ( φ ) and u t = Γ ( φ )⁻¹ ε t , yielding the identifying restrictions:
G ( φ , θ ) = 0 : Γ ( φ ) B ( θ ) + Δ ( φ ) = 0 , Ω ( φ ) = Γ ( φ ) Σ ( θ ) Γ ( φ )⊤ ,
where the structural parameters φ := ( Γ , Δ , Ω ) are said to be identified if, for a given θ := ( B , Σ ) , there exists a unique solution of G ( φ , θ ) = 0 for φ. The reduced form in (2), when interpreted as an unrestricted parameterization of the stochastic process { Z t := ( Y t , X t ) , t ∈ ℕ } , is the implicit M θ ( z ) , which can be specified in terms of a complete, internally consistent and testable set of probabilistic assumptions [1]–[5] as shown in Table 1; see (Spanos 1990).
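To make the mapping G ( φ , θ ) = 0 in Example 1 concrete, here is a minimal numerical sketch (not from the paper; the matrices below are arbitrary illustrative values) of how the reduced-form parameters θ := ( B , Σ ) are obtained from the structural parameters φ := ( Γ , Δ , Ω ) in a toy two-equation system:

```python
import numpy as np

# Toy two-equation SEM: Gamma * Y_t + Delta * x_t = eps_t
# (illustrative parameter values, not taken from the paper)
Gamma = np.array([[1.0, -0.5],
                  [-0.2, 1.0]])
Delta = np.array([[0.3],
                  [-0.4]])
Omega = np.array([[1.0, 0.1],
                  [0.1, 0.5]])

Gi = np.linalg.inv(Gamma)
B = -Gi @ Delta            # reduced-form coefficients: B(theta) = -Gamma^{-1} Delta
Sigma = Gi @ Omega @ Gi.T  # reduced-form error covariance: Sigma(theta)

# The identifying restrictions G(phi, theta) = 0 hold by construction:
assert np.allclose(Gamma @ B + Delta, np.zeros((2, 1)))
assert np.allclose(Omega, Gamma @ Sigma @ Gamma.T)
```

Identification then amounts to asking whether this mapping can be inverted: whether a given ( B , Σ ) pins down a unique ( Γ , Δ , Ω ).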
This untangling of the statistical and substantive models enables one to distinguish clearly between two different forms of adequacy:
[a] Statistical adequacy: does the statistical model M θ ( z ) account for the chance regularities in z 0 ? Equivalently, does data Z 0 constitute a truly typical realization of the statistical Generating Mechanism (GM) in M θ ( z ) ? The answer to these questions is that the validity of M θ ( z ) can be evaluated using thorough Mis-Specification (M-S) testing; see (Spanos 2006a, 2018).
[b] Substantive adequacy: does the substantive (structural) model M φ ( z ) adequately capture (describe, explain, predict) the phenomenon of interest? Substantive inadequacy arises from errors in narrowing down the relevant aspects of the phenomenon of interest, flawed ceteris paribus clauses, missing crucial variables and/or confounding factors, etc.; see (Spanos 2006b, 2010b). What renders the inference procedures based on the estimated structural model (1) reliable, and the ensuing evidence statistically trustworthy, is the validity of assumptions [1]–[5] for data Z 0 .
In the traditional approach to DSGE modeling, the statistical model M θ ( z ) is specified indirectly by attaching errors (shocks) to the behavioral equations comprising the structural model M φ ( z ) . As a result, the primary concern in the DSGE literature has been on the substantive and not the statistical misspecification; see (Del Negro and Schorfeide 2009; Del Negro et al. 2007). This is an important development but it has a crucial weakness. Probing for substantive misspecifications in M φ ( z ) without ensuring statistical adequacy of M θ ( z ) will undermine any form of substantive probing based on M φ ( z ) ; see (Consolo et al. 2009).
The statistical adequacy of M θ ( z ) needs to be established first because that will ensure the ‘optimality’ and reliability of the inference procedures employed to probe the substantive adequacy of M φ ( z ) . This is because statistical misspecification undermines the optimality and reliability of frequentist inference via:
(i)
Rendering the distribution of the sample f ( z ; θ ) , z ∈ ℝ^n , as well as the likelihood function L ( θ ; z 0 ) ∝ f ( z 0 ; θ ) , θ ∈ Θ , erroneous.
(ii)
Distorting the relevant sampling distribution, f ( y n ; θ ) = d F n ( y ) / d y , y ∈ ℝ , of any statistic (estimator, test, predictor), Y n = g ( Z 1 , Z 2 , … , Z n ) , that underlies the inference in question, since:
F n ( y ) = ℙ ( Y n ≤ y ) = ∫ ⋯ ∫ { z : g ( z ) ≤ y } f ( z ; θ ) d z , y ∈ ℝ .
(iii)
Undermining the reliability of inference procedures by belying their optimality and inducing sizeable discrepancies between the actual and the nominal (the ones assuming the validity of M θ ( z ) ) error probabilities. Applying a 0.05 significance level (nominal) test when the actual type I error is greater than 0.80 will lead an inference astray. This unreliability affects not just testing and estimation but also goodness-of-fit and prediction measures rendering them highly misleading. Statistical adequacy secures the reliability of inference by securing the optimality of inference and ensuring that the actual error probabilities approximate closely the nominal ones. As shown in (Spanos and McGuirk 2001), such discrepancies can easily arise for what are often considered ‘minor’ statistical misspecifications.
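The gap between actual and nominal error probabilities is easy to demonstrate by simulation. The sketch below is a stand-alone illustration (not the paper's own example): a nominal 5% t-test for a zero mean, which assumes IID data, is applied to data that are actually Markov dependent (AR(1)); the actual type I error turns out to be many times the nominal one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, rho = 100, 2000, 0.8
crit = 1.984  # approximate two-sided 5% critical value of t(99)

rej = 0
for _ in range(reps):
    e = rng.standard_normal(n)
    y = np.empty(n)
    y[0] = e[0]
    for t in range(1, n):          # AR(1) data; the t-test wrongly assumes IID
        y[t] = rho * y[t-1] + e[t]
    tstat = np.sqrt(n) * y.mean() / y.std(ddof=1)
    rej += abs(tstat) > crit

actual_size = rej / reps
print(actual_size)  # far above the nominal 0.05
```

With rho = 0.8 the variance of the sample mean is inflated by roughly a factor of (1 + rho)/(1 − rho) = 9, so rejections occur around half the time rather than 5% of the time.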
In relation to statistical misspecification, it is also important to emphasize that all approaches to inference (frequentist, Bayesian, nonparametric), as well as the Akaike-type model selection procedures, invoke statistical models, and thus they are all vulnerable to statistical misspecification. In the case of Bayesian inference, a misspecified model M θ ( z ) gives rise to a false f ( z ; θ ) , leading to an erroneous likelihood function L ( θ ; z 0 ) ∝ f ( z 0 ; θ ) , θ ∈ Θ , and that in turn gives rise to an incorrect posterior: π ( θ | z 0 ) ∝ π ( θ ) · L ( θ ; z 0 ) , θ ∈ Θ , undermining all forms of Bayesian inference. Moreover, no amount of finessing of the prior π ( θ ) can rectify the statistical misspecification problem induced by an invalid L ( θ ; z 0 ) ; see (Spanos 2010a). This is particularly relevant for the recent trend to estimate DSGE models using Bayesian methods; see (Smets and Wouters 2005, 2007; Del Negro et al. 2007; Galí and Wouters 2011).
The importance of the distinction between a substantive (structural), M φ ( z ) , and its implicit statistical model, M θ ( z ) , stems from the fact that the error-reliability of inference stems solely from the validity of the probabilistic assumptions defining M θ ( z ) vis-a-vis data z 0 ; see (Spanos 1986).
At a practical level, one can summarize the proposed modeling process in the form of the following stages.
Stage 1. Untangle the statistical model M θ ( z ) from the substantive model M φ ( z ) , without compromising the integrity of either source of information because the two models are ontologically distinct. From this perspective, the structural model M φ ( z ) derives its statistical meaningfulness from M θ ( z ) and the latter derives its theoretical meaningfulness from the former.
Stage 2. Establish the statistical adequacy of M θ ( z ) using comprehensive M-S testing (Mayo and Spanos 2004; Spanos 2018) by assessing the validity of the probabilistic assumptions comprising M θ ( z ) . Without it one cannot rely on statistical inference to reliably assess any substantive questions of interest, including the adequacy of M φ ( z ) vis-a-vis the phenomenon of interest—the reliability of such inferences will be unknown; see (Spanos 2009b, 2012).
Stage 3. When the original statistical model, M θ ( z ) , is misspecified, one needs to respecify it to account for all the chance (statistical) regularities in the data, i.e., ensure that the respecified model M ϑ ( z ) is statistically adequate for data z 0 .
Stage 4. Armed with a statistically adequate M θ ( z ) , one can proceed to evaluate the substantive adequacy of M φ ( z ) , including testing the overidentifying restrictions stemming from the implicit restrictions G ( φ , θ ) = 0 . If these restrictions do not belie data z 0 , the estimated M φ ( z ) can be considered empirically valid, but it is not substantively adequate until further probing vis-a-vis the phenomenon of interest reveals no serious flaws relating to [b] above; see (Spanos 2006b).
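The logic of Stage 4 is a restricted-versus-unrestricted comparison within the statistically adequate model. A minimal likelihood-ratio sketch (a simple regression stand-in, not the DSGE system itself; all numbers are illustrative) testing a single parameter restriction:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = rng.standard_normal(n)
y = 1.0 + 0.5 * x + rng.standard_normal(n)   # 'data' generated by the unrestricted model

def nll(resid):
    # Normal negative log-likelihood (up to constants), concentrated over sigma^2
    return 0.5 * n * np.log(np.mean(resid**2))

# Unrestricted statistical model: y on (1, x); restricted: slope fixed at 0
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
ll_u = -nll(y - X @ b)
ll_r = -nll(y - y.mean())

LR = 2 * (ll_u - ll_r)       # compare with the chi-square(1) 5% critical value 3.84
reject = LR > 3.84
```

Here the restriction is false by construction, so the LR statistic comfortably exceeds the critical value; testing the DSGE overidentifying restrictions proceeds analogously, with degrees of freedom equal to the number of restrictions in G ( φ , θ ) = 0 .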

3. Revisiting DSGE Modeling

DSGE models aim to describe the behavior of the economy in an equilibrium steady-state stemming from optimal microeconomic decisions associated with several representative agents (households, firms, governments, central banks, etc.). These decisions are based on the intertemporal optimizing behavior of the representative agents, with the first-order conditions of the optimization problem linearized around a constant steady-state using a first-order Taylor approximation; second-order terms raise problems beyond the scope of the present paper; see (DeJong and Dave 2011; Heer and Maussner 2009; Klein 2000). After linearization, the model is specified in terms of log differences, which are thought to be substantively more meaningful.

3.1. Smets and Wouters 2007 DSGE Model

Consider the DSGE model proposed by Smets and Wouters (2007).
Exogenous Shocks: There are 7 exogenous shocks.
( η t p , η t a , η t b , η t i , η t w , η t r , η t g ) ∼ NIID ( 0 , Ω ) , Ω = diag ( σ p 2 , σ a 2 , σ b 2 , σ i 2 , σ w 2 , σ r 2 , σ g 2 )
Parameters:   φ = ( c y , i y , z y , c 1 , c 2 , c 3 , i 1 , i 2 , q 1 , ϕ p , α , z 1 , k 1 , k 2 , π 1 , π 2 , π 3 , λ , σ l , w 1 , w 2 , w 3 , w 4 , ρ , r π , r Y , r Δ y , ρ g , ρ b , ρ i , ρ a , ρ p , ρ w , ρ r , ρ g a , μ p , μ w , σ p , σ a , σ b , σ i , σ w , σ r , σ g )
Deep Structural Parameters:   ψ := ( φ , σ c , h , ξ w , σ l , ξ p , ι w , ι p , Ψ , Φ , r π , ρ , r Y , r Δ Y , π ¯ , β , l ¯ , γ ¯ , α , σ a , σ b , σ g , σ I , σ r , σ p , σ w , ρ a , ρ b , ρ g , ρ I , ρ r , ρ p , ρ w , μ p , μ w , ρ g a , δ , λ w , ε p , ε w , g y ) ,
which relate to the following deeper structural parameters:
c y = 1 − g y − i y ,  z y = R*ᵏ k y ,  c 1 = ( λ / γ ) / ( 1 + λ / γ ) ,  c 2 = [ ( σ c − 1 ) ( W*ʰ L * / C * ) ] / [ σ c ( 1 + λ / γ ) ] ,  c 3 = ( 1 − λ / γ ) / [ σ c ( 1 + λ / γ ) ] ,
i 1 = 1 / ( 1 + β γ^( 1 − σ c ) ) ,  i 2 = 1 / [ ( 1 + β γ^( 1 − σ c ) ) γ ² φ ] ,  q 1 = β γ^( − σ c ) ( 1 − δ ) ,  z 1 = ( 1 − Ψ ) / Ψ ,  k 1 = ( 1 − δ ) / γ ,
i y = ( γ − 1 + δ ) k y ,  k 2 = [ 1 − ( 1 − δ ) / γ ] ( 1 + β γ^( 1 − σ c ) ) γ ² φ ,  π 1 = ι p / ( 1 + β γ^( 1 − σ c ) ι p ) ,  π 2 = β γ^( 1 − σ c ) / ( 1 + β γ^( 1 − σ c ) ι p ) ,
π 3 = [ 1 / ( 1 + β γ^( 1 − σ c ) ι p ) ] [ ( 1 − β γ^( 1 − σ c ) ξ p ) ( 1 − ξ p ) / ( ξ p ( ( ϕ p − 1 ) ϵ p + 1 ) ) ] .
That is, there are 41 deep structural parameters in the S-W model out of which 5 are calibrated, and the rest are estimated using data Z 0 ; see Smets and Wouters (2007).
After linearization, the DSGE model M ψ ( z ; ξ ; ϵ t ) is expressed in terms of three types of variables (Table 2):
(i)
Observables z t = ( c t , i t , y t , w t , π t , l t , r t ) , c t -consumption, i t -investment, y t -output, l t -labor hour, π t -inflation rate, w t -real wage rate, and r t -interest rate.
(ii)
Latent variables ξ t = ( z t , q t , k t s , k t , r t k , μ t p , μ t w ) , z t -capital utilization rate, q t -current value of the capital stock, k t s -current capital services used in production, k t -installed capital, r t k -rental rate of capital, μ t p -price mark-up, μ t w -wage mark-up.
(iii)
Latent shocks (Table 3): ϵ t = ( η t p , η t a , η t b , η t i , η t w , η t r , η t g ) .
The estimable form of M ψ ( z ; ξ ; ϵ t ) , the structural DSGE model M φ ( z ) , is derived by solving a system of linear expectational difference equations and eliminating certain variables. M φ ( z ) is specified in terms of the observables Z t := ( c ^ t , i ^ t , y ^ t , w ^ t , π ^ t , l ^ t , r ^ t ) : the log difference of real GDP ( y ^ t = y t − y t−1 + γ ¯ ), real consumption ( c ^ t = c t − c t−1 + γ ¯ ), real investment ( i ^ t = i t − i t−1 + γ ¯ ) and the real wage ( w ^ t = w t − w t−1 + γ ¯ ), log hours worked ( l ^ t = l t + l ¯ ), the log difference of the GDP deflator ( π ^ t = π t + π ¯ ), and the federal funds rate ( r ^ t = r t + r ¯ ), where γ ¯ = 100 ( γ − 1 ) is the common quarterly growth rate of real GDP, π ¯ = 100 ( Π * − 1 ) is the quarterly steady-state inflation rate, r ¯ = 100 ( β⁻¹ γ^σc Π * − 1 ) is the steady-state nominal interest rate, and l ¯ is steady-state hours worked, which is normalized to equal zero. All three steady-state values ( γ ¯ , π ¯ , and r ¯ ) are evaluated using the observed data as part of the modeling.
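The log-difference transformation behind the observables can be sketched as follows (the GDP levels below are hypothetical, purely for illustration):

```python
import numpy as np

# Hypothetical quarterly real GDP levels (illustrative numbers only)
gdp = np.array([100.0, 100.8, 101.5, 102.1, 103.0])

# y_hat_t: 100 x the log difference, i.e., the quarterly growth rate in percent
y_hat = 100 * np.diff(np.log(gdp))
```

The same transformation applies to consumption, investment, the real wage, and the GDP deflator.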

3.2. Traditional Model Quantification

Smets and Wouters (2007) use the “Dynare” software in Matlab to estimate and solve the structural model by distinguishing four types of endogenous variables (Blanchard and Kahn 1980):
  • Purely backward (or purely predetermined) variables: those that appear only at the current and past periods in the model, but not at any future period.
  • Purely forward variables: those that appear only at the current and future periods in the model, but not at any past period.
  • Mixed variables: those that appear at current, past, and future periods in the model.
  • Static variables: those that appear only at the current period, not at past or future periods.
Using the Dynare software the solution of the structural model (Smets and Wouters 2007) yields the restricted state-space formulation:
X t = A 1 ( ψ ) X t−1 + A 2 ( ψ ) ε t , t ∈ ℕ , (5)
where X t := ( s t : x t ) is a vector of 40 variables consisting of 20 state variables (14 predetermined variables and 6 mixed variables), s t , and 20 control variables (6 purely forward variables and 14 static variables), x t . A 1 ( ψ ) is a ( 40 × 40 ) matrix, A 2 ( ψ ) is a ( 20 × 7 ) matrix, and ε t is a vector of 7 exogenous shocks. The restricted state-space solution (5) of the DSGE model provides the basis for calibration; note that A 1 ( ψ ) and A 2 ( ψ ) are defined by the (Blanchard and Kahn 1980) algorithm. The calibration is accomplished using the following steps.
Step 1. Select substantively meaningful values for the structural parameters φ .
Step 2. Select the sample size, say n , and the initial values x 0 .
Step 3. Use the values in steps 1–2, together with Normal pseudo-random numbers for ε t to simulate N samples of size n.
Step 4. After ‘de-trending’ using the Hodrick-Prescott (H-P) filter, use the simulated data Z 0 s to evaluate the first two moment statistics (mean, variances, covariances) of interest for each run of size n, and their empirical distributions for all N.
Step 5. Compare the relevant moments of the simulated data Z 0 s with those of the actual data Z 0 , finessing the original values of φ to ensure that (i) these moments are close to each other, using the minimization min φ ∈ Φ ‖ Cov ( Z 0 s ; φ ) − Cov ( Z 0 ) ‖ , and (ii) the model gives rise to realistic-looking data, i.e., the simulated data mimic the actual data.
Calibration. In applying this procedure, five parameters are fixed at specific values: δ = 0.025 , g y = 0.18 , λ w = 1.5 , ε p = ε w = 10 .
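The simulate-and-match logic of Steps 1–5 can be sketched for a toy two-state, one-shock version of the state-space recursion; everything here (dimensions, matrices, the single free parameter rho, the omission of H-P de-trending) is an illustrative stand-in, not the Smets–Wouters system:

```python
import numpy as np

def simulate(rho, n=200, seed=0):
    # Steps 2-3: simulate the toy state-space recursion
    # X_t = A1(psi) X_{t-1} + A2(psi) eps_t  (2 states, 1 shock; illustrative only)
    rng = np.random.default_rng(seed)
    A1 = np.array([[rho, 0.1],
                   [0.0, 0.5]])
    A2 = np.array([[1.0],
                   [0.3]])
    X = np.zeros((n, 2))
    for t in range(1, n):
        X[t] = A1 @ X[t-1] + A2 @ rng.standard_normal(1)
    return X

# 'Actual' data: generated here with rho = 0.7 (a stand-in for Z_0)
target = np.cov(simulate(0.7, seed=99).T)

# Step 5: choose rho so the simulated covariances come closest to the data's
grid = np.linspace(0.1, 0.9, 17)
dist = [np.linalg.norm(np.cov(simulate(r).T) - target) for r in grid]
rho_star = grid[int(np.argmin(dist))]
```

Because the matching is done on a finite set of simulated moments, the recovered value is close to, but not exactly, the one that generated the "actual" data; this is the sampling variability that calibration exercises inherit.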

3.3. Confronting the DSGE Model with Data

Smets and Wouters (2007) estimate their model using Bayesian methods, where the reliability of inference depends crucially on the statistical adequacy of the implicit M θ ( z ) , since the posterior π ( θ | z 0 ) ∝ π ( θ ) · L ( θ ; z 0 ) , θ ∈ Θ , invokes its validity via the likelihood function L ( θ ; z 0 ) ∝ f ( z 0 ; θ ) , θ ∈ Θ . The data used for the estimation/calibration of the DSGE model in Table 2 are US quarterly time series for the period 1947:2–2004:4 ( n = 231 ): the log difference of real GDP ( y ^ t ), real consumption ( c ^ t ), real investment ( i ^ t ), and the real wage ( w ^ t ), log hours worked ( l ^ t ), the log difference of the GDP deflator ( π ^ t ), and the federal funds rate ( r ^ t ).
The validation of the DSGE structural model M φ ( z ) will be achieved in three steps. Step 1. Unveil the statistical model M θ ( z ) implicit in the DSGE model M φ ( z ) . Step 2. Secure the statistical adequacy of M θ ( z ) using M-S testing and respecification. Step 3. Test the overidentifying restrictions in the context of a statistically adequate model secured in Step 2.
An obvious form of potential statistical misspecification stems from the fact that most of the data series exhibit non-stationarity that cannot be fully accounted for using log differences as implicitly assumed. As shown below, one needs to add trends to account for the mean-heterogeneity in the data.
The implicit statistical model M θ ( z ) behind the linearized structural model M φ ( z ) in terms of the observables: Z t : = ( y ^ t , c ^ t , i ^ t , π ^ t , w ^ t , l ^ t , r ^ t ) , is a Normal VAR(p) model (Table 4), with p = 2 . For the details connecting the structural and the statistical model in Table 2 and Table 4, see Appendix A.

3.3.1. Evaluating the Validity of the Implicit Statistical Model

As shown in (Fernández-Villaverde et al. 2007), the solution of the Smets and Wouters (2007) structural model gives rise to a Normal, VARMA(2,1) model. However, since the latter imposes unnecessary statistical restrictions due to the MA(1) component, the implicit statistical model is a VAR(p), p 2 , which imposes no such restrictions, and the value of p will be decided on statistical adequacy grounds.
Although Mis-Specification (M-S) testing can take a variety of forms (Lutkepohl 2005), in the case of the Normal VAR(p) [N-VAR(p)] model, the most coherent procedure is to use joint M-S tests based on auxiliary regressions relating to the first two conditional moments; see Spanos (2018, 2019). The auxiliary regressions for testing the validity of assumptions [1]–[5] are written in terms of the standardized residuals of the seven observable variables. For instance, in the case of a single estimated equation based on Y t whose residuals are denoted by u ^ t , the auxiliary regressions take the generic forms:
û t = b 0 + b 1 ŷ t + b 2 ŷ t ² + b 3 û t−1 + b 4 û t−2 + b 5 t + b 6 t ² + v 1 t ,
û t ² = c 0 + c 1 ŷ t + c 2 ŷ t ² + c 3 ŷ t−1 ² + c 4 ŷ t−2 ² + c 5 t + c 6 t ² + v 2 t .
The form of the auxiliary regressions being used for joint M-S testing depends on a number of different factors, and the robustness of its results is evaluated by examining several alternative formulations. The hypotheses being tested for different joint M-S tests are given in Table 5. The M-S test for Normality is the (Anderson and Darling 1952) test because it is more robust to a few outliers than the Skewness-Kurtosis or the Kolmogorov test; see (Spanos 1990). The results of the joint M-S tests in Table 5, reported in Table 6 and Table 7 [p-values in square brackets] indicate that the N-VAR(2) and N-VAR(3) models are statistically misspecified; only the assumptions [2] Linearity and [4] Markov (2) dependence are valid for data Z 0 . Also, the decision to leave out the MA component on statistical adequacy grounds, stems from the fact that, after thorough M-S testing, the estimated VAR(2) model fully accounts for the temporal dependence in data Z 0 . If one needed an MA error term to account for that dependence, then the Markov (2) assumption would have been rejected by data Z 0 . As shown below, however, [1] Normality, [3] homoskedasticity and [5] t-invariance are invalid for the VAR(p) model, for both values p = 2 (Table 6) and p = 3 (Table 7), giving rise to statistically misspecified models.
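The mechanics of such an auxiliary-regression M-S test can be sketched on simulated data (a univariate stand-in, not the paper's seven-equation system): generate residuals whose variance trends with t, regress their squares on trend terms, and test the trend coefficients jointly with an F-test.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
t = np.arange(1, n + 1)

# Heteroskedastic data: the error standard deviation grows with t,
# violating the homoskedasticity/t-invariance assumptions
y = 1.0 + rng.standard_normal(n) * (0.5 + 0.01 * t)

# Fit the 'model' (a constant mean) and form residuals
u = y - y.mean()

# Auxiliary regression for the second moment: u_t^2 on a constant, t and t^2
X = np.column_stack([np.ones(n), t, t**2])
b, *_ = np.linalg.lstsq(X, u**2, rcond=None)
fitted = X @ b
rss1 = np.sum((u**2 - fitted)**2)
rss0 = np.sum((u**2 - (u**2).mean())**2)
q, df = 2, n - 3
F = ((rss0 - rss1) / q) / (rss1 / df)  # joint F-test of the trend terms
# A large F flags a departure from constant variance
```

A valid model would leave nothing systematic in the squared residuals, so F would be close to 1; here the built-in variance trend produces a large F.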
Hence, no reliable inferences can be drawn based on a calibrated or an estimated N-VAR(p), for p = 2 , 3 , which also includes testing the validity of the DSGE restrictions. In light of that, any inference, including forecasting, based on the estimated/calibrated DSGE model will give rise to spurious/untrustworthy results.

3.3.2. Respecifying the Implicit Statistical Model

In light of the above detected statistical misspecification, the next step is to respecify the original N-VAR(2) model to account for the statistical information that lingers on in the residuals. The departures indicating non-Normality, Heteroskedasticity, and second-order temporal dependence in conjunction with the validity of the linearity assumption suggest that the best way to respecify the N-VAR(2) model is to replace the Normality with another distribution from the Elliptically Symmetric (ES) family. This family retains the bell-shape symmetry and the linearity of the autoregression, but allows for heteroskedasticity and second-order temporal dependence. This is because within the ES family, homoskedasticity characterizes the Normal distribution; see (Spanos 2019), chp. 7. Hence, an obvious choice is to assume that the process { Z t , t N } is Student’s t, Markov (p), covariance stationary but mean trending. This gives rise to the Student’s t VAR(p) [St-VAR(p)] model with ν degrees of freedom in Table 8.
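The move from the Normal to the Student's t distribution can be motivated by the scale-mixture representation of the t distribution, which is also a convenient way to simulate heavy-tailed innovations. A minimal univariate sketch (illustrative only; the paper's St-VAR is multivariate):

```python
import numpy as np

rng = np.random.default_rng(5)
nu, n = 5, 100_000

# Student's t draws via the Normal scale mixture: Z = W * X,
# with X ~ N(0, 1) and W = sqrt(nu / chi2_nu) independent of X
x = rng.standard_normal(n)
w = np.sqrt(nu / rng.chisquare(nu, n))
z = w * x

# Excess kurtosis should be clearly positive
# (for nu = 5 the population value is 6/(nu - 4) x ... = 6)
kurt = np.mean(z**4) / np.mean(z**2)**2 - 3.0
```

The random scale W is what generates the bell-shaped but heavy-tailed draws, and the same mixing structure is what allows the St-VAR's conditional variance to depend on the past, unlike the homoskedastic Normal VAR.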
There are two key differences between the N-VAR(2) (Table 4) and St-VAR(2) (Table 8) models. The first is that the St-VAR(2) allows for trends μ(t) to account for the mean heterogeneity in the data series. The second is that the St-VAR(2) is heteroskedastic [Var(Z_t | σ(Z^0_{t−1})) is a function of Z^0_{t−1}] and its conditional variance is heterogeneous [Var(Z_t | σ(Z^0_{t−1})) is a function of t via the unconditional mean μ(t)]. In relation to this distinction, it is important to note that (Primiceri and Justiniano 2008) model the volatility in the context of DSGE models using conditional variance heterogeneity, not heteroskedasticity. As shown below, both play a very important role in accounting for the volatility in the data series. Note that, to reach a statistically adequate model with p = 2 and ν = 5, the interest rate term (r_t) had to be replaced with z_t := Δ ln r_t.
In Table 9 and Table 10, the estimates of the autoregressive functions of the St-VAR(2) and N-VAR(2) models are compared and contrasted. First, there are significant differences between the estimates corresponding to the same coefficients, even though the two autoregressive functions are identical in form; the significant differences are flagged in the tables. Second, the trend polynomials for the St-VAR(2) model are strongly significant, and their absence from the N-VAR(2) model gives rise to misleading results because the coefficients are based on deviations from the 'wrong' mean, calling into question the use of the steady state. In relation to the trend polynomials, it is important to emphasize that filtering the data using the H-P filter does not eliminate potential departures from the t-invariance of the conditional variance; instead, it distorts the mean heterogeneity as well as the temporal dependence in data Z_0. Third, the most crucial difference is that the homoskedasticity assumption of the N-VAR(p) model is clearly invalid (see Table 6 and Table 7).
The inappropriateness of the constant conditional variance-covariance associated with the N-VAR(2) model is illustrated in Figure 1 and Figure 2, where the squared residuals from the N-VAR(2), which exhibit great volatility, are plotted together with the estimated Var(y_t | Z^0_{t−1}) and Var(c_t | Z^0_{t−1}) based on the St-VAR(2) model (Table 11), indicating that the latter capture most of the volatility. Note that all seven conditional variances are scaled versions of each other.

3.3.3. Evaluating the Statistical Adequacy of the St-VAR(2) Model

To take into account the heteroskedastic conditional variance-covariance, one needs to reconsider the notion of what constitutes the 'relevant residuals' for M-S testing purposes. In the case of the St-VAR(p) model, the relevant residuals are the standardized ones defined by:

û_t = L_t^{−1}(Z_t − Ẑ_t),

where L_t L_t^⊤ = V̂ar(Z_t | σ(Z^0_{t−1})) and Ẑ_t = δ̂_0 + δ̂_1 t + Â_1 Z_{t−1} + Â_2 Z_{t−2}. Here, L_t changes with t and Z^0_{t−1}, as opposed to being constant as in the N-VAR(2) model. An indicative pair of auxiliary regressions based on these residuals is:

û_t = a_0 + a_1 ŷ_t + b_3 ŷ_t² + b_1 û_{t−1} + b_2 û_{t−2} + b_4 t² + b_5 t³ + v_{1t},
û_t² = γ_0 + γ_1 σ̂_t² + c_1 ŷ_t² + c_2 σ̂²_{t−1} + c_3 σ̂²_{t−2} + c_4 t + c_5 t² + v_{2t},

where ŷ_t denotes the fitted values and σ̂_t² = V̂ar(y_t | σ(Z^0_{t−1})). The hypotheses being tested are directly analogous to those in Table 5 above.
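The construction of the standardized residuals û_t = L_t^{−1}(Z_t − Ẑ_t) can be sketched with a period-by-period Cholesky factorization; the observed series, fitted means, and fitted conditional covariances below are placeholders, with a time-varying scale factor standing in for the quadratic factor q(Z^0_{t−1}).

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 3                            # periods and variables (illustrative)

Z = rng.standard_normal((n, k))          # observed series (placeholder)
Z_hat = np.zeros((n, k))                 # fitted autoregressive means (placeholder)

# One fitted conditional covariance per period: under the St-VAR it varies
# with t and the conditioning information, unlike the constant N-VAR one.
base = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
scales = 1.0 + 0.5 * np.abs(np.sin(np.arange(n)))  # stand-in for q(Z^0_{t-1})
V = scales[:, None, None] * base                    # shape (n, k, k)

# u_t = L_t^{-1}(Z_t - Zhat_t), with L_t L_t' = V_t (Cholesky factor).
u = np.empty((n, k))
for t in range(n):
    L = np.linalg.cholesky(V[t])
    u[t] = np.linalg.solve(L, Z[t] - Z_hat[t])
```

The resulting u would then enter the auxiliary regressions above in place of raw residuals.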
The results of the M-S testing for the estimated Student's t VAR(2) model, reported in Table 12, indicate no departures from its assumptions [1]–[5]. In assessing these results, it is important to note that p-values decrease with the sample size n, implying that one needs to lower the relevant threshold as n increases. For n = 231, a more appropriate threshold is α = 0.01; see (Spanos 2014).
The statistical adequacy of the St-VAR(2) is also reflected in its residuals in Figure 3, which exhibit constant variation around a constant mean.
This should be contrasted with the N-VAR(2) residuals in Figure 4, which seem to indicate a shift in both the mean and the variance between the period 1983–2000 and afterward. This, however, is misleading, since Figure 4 depicts the residuals of a statistically misspecified model. On the other hand, Figure 3 depicts the residuals of a statistically adequate model and suggests that the lower volatility arises as an inherent chance regularity stemming from { Z_t, t ∈ N } being a Student's t Markov (2) process. Indeed, the sequence of successive periods of large and small volatility represents a chance regularity pattern reflecting second-order temporal dependence, initially noted by (Mandelbrot 1963):
“...large changes tend to be followed by large changes-of either sign-and small changes tend to be followed by small changes”.
(p. 418)
This calls into question the hypothesis known as the 'great moderation' (Stock and Watson 2002), based on Figure 4, since the residuals from the N-VAR(2) model do not account for the second-order dependence. That is, the 'great moderation' hypothesis stems from an erroneous interpretation based on statistical misspecification. The relevant residuals from the statistically adequate St-VAR(2) model in Figure 3 represent a realization of a Student's t Martingale Difference process, as they should.

4. Evaluating the Smets and Wouters DSGE Model

4.1. Testing the Over-Identifying Restrictions

In light of the fact that the Student’s t VAR(2) is a statistically adequate model, one can proceed to probe the empirical adequacy of the DSGE model, knowing that the actual error probabilities provide a close approximation to the nominal (assumed) ones. This includes testing the DSGE over-identifying restrictions:
H_0: G(θ, φ) = 0 vs. H_1: G(θ, φ) ≠ 0, for θ ∈ Θ, φ ∈ Φ.
The relevant test is based on the likelihood ratio statistic:
λ_n(Z) = max_{φ∈Φ} L(φ; Z) / max_{θ∈Θ} L(θ; Z) = L(φ̃; Z) / L(θ̂; Z), with −2 ln λ_n(Z) asymptotically distributed, under H_0, as χ²(m).
For m = 98 and α = 0.01, c_α = 68.4, and the observed test statistic yields:
−2 ln λ_n(Z_0) = 50165.3 [0.00000000].
The tiny p-value provides indisputably strong evidence against the DSGE model.
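The tail probability behind the reported p-value can be reproduced from the χ²(m) reference distribution; the sketch below uses the paper's m = 98 and the reported value of the likelihood ratio statistic.

```python
from scipy import stats

m = 98                 # number of over-identifying restrictions
lr_stat = 50165.3      # reported value of -2 ln(lambda_n)

# Survival function = P(chi-square(m) >= lr_stat): the test's p-value.
p_value = stats.chi2.sf(lr_stat, df=m)
print(p_value)         # numerically indistinguishable from zero
```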
Hence, when a DSGE model M_φ(z) is tested against reliable statistical evidence in the form of a statistically adequate M_θ(z) [Student's t VAR(2)], M_φ(z) is strongly rejected. The natural way forward for DSGE modeling is to find ways to modify DSGE models with a view to accounting for the statistical regularities in the data brought out by the Student's t VAR(2). These regularities include the leptokurtosis as well as the second-order temporal dependence exemplified by the heteroskedastic Var(Y_t | Z^0_{t−1}).
It is important to note that the (Del Negro et al. 2007) substantive misspecification analysis based on the DSGE-VAR(λ) model differs from the above frequentist over-identifying restrictions test; see (Consolo et al. 2009). Apart from the fact that the former uses a Bayesian approach, the key difference is that the probabilistic assumptions underlying the DSGE-VAR(λ) specification are presumed valid and are not tested against the data.

4.2. Forecasting Performance

Typical examples of the out-of-sample forecasting capacity of both the DSGE and the Student's t VAR(2) models, for 8 periods ahead [2003Q1–2004Q4; estimation period 1947Q2–2002Q4], are shown in Figure 5 and Figure 6 for wages and consumption growth, with the actual data denoted by a solid line.
Figure 5 and Figure 6 are typical of the forecasting performance of the Smets and Wouters DSGE model, illustrating a good and a bad case, respectively. In general, the forecast line of the DSGE model tends to be oversmoothed in a way that largely ignores the systematic temporal dependence/heterogeneity in the data. When the forecast line happens to overlap with the actual data, it tracks the trend reasonably well but not the cycles; see Figure 5.
When the forecast line misses the actual data line (see Figure 6), the forecasts are particularly bad because the DSGE model systematically over- or under-predicts, giving rise to systematic (non-white-noise) prediction errors. This pattern is symptomatic of statistical misspecification. In relation to this, it is very important to emphasize that when the forecast errors are statistically systematic (they exhibit over- or under-prediction), the Root Mean Square Error (RMSE) can be highly misleading as a measure of forecasting capacity; the RMSE is a reliable measure only when the forecast errors are statistically non-systematic. In that sense, Figure 5 and Figure 6 show that the performance of the St-VAR(2) model is much better than that of the DSGE model, irrespective of the RMSEs.
Note that in the case of the St-VAR model, statistically non-systematic means that its residuals and forecast errors constitute realizations of martingale difference processes.
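The point about the RMSE can be made concrete with two deliberately simple forecast-error sequences (illustrative numbers, not the paper's forecasts): both have exactly the same RMSE, yet one of them over-predicts systematically, which the mean error exposes and the RMSE hides.

```python
import numpy as np

# Two forecast-error sequences with identical RMSE = 1:
e_nonsystematic = np.array([1.0, -1.0] * 50)  # alternating sign: no systematic bias
e_systematic = np.ones(100)                   # always over-predicts by 1

def rmse(e):
    return np.sqrt(np.mean(e ** 2))

print(rmse(e_nonsystematic), rmse(e_systematic))      # equal RMSEs
print(e_nonsystematic.mean(), e_systematic.mean())    # very different mean errors
```

A martingale-difference error sequence would have a mean error close to zero, as in the first case; the second case is the systematic pattern described in the text.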
Interestingly, the poor forecasting performance of DSGE models is well-known, but it is rendered acceptable by comparing it to that of N-VAR models:
“...we find that the benchmark estimated medium scale DSGE model forecasts inflation and GDP growth very poorly, although statistical and judgemental forecasts do equally poorly”.
(Edge and Gurkaynak 2010)
This claim fails to recognize that the poor forecasting performance stems primarily from the statistical inadequacy of the underlying estimated model; see Table 6 and Table 7.

4.3. Potentially Misleading Impulse Response Analysis

The statistical inadequacy of the underlying statistical model also affects the reliability of its impulse response analysis, giving rise to misleading results about the reaction to exogenous shocks over time. Indeed, the estimated Var(y_t | Z^0_{t−1}) brings out the potential unreliability of any impulse response and variance decomposition analysis based on assuming a constant conditional variance.
Figure 7 compares the impulse responses of a 1% increase in labor hours (l_t) on inflation (π_t) from the Normal and Student's t VAR models. The statistically adequate St-VAR(2) model produces a sharp, large decline and a slow recovery in the rate of inflation. The Normal VAR model produces a different impulse response: the rate of inflation first decreases sharply, then increases sharply, before falling again and rising slowly.
Figure 8 compares the impulse responses of a 1% increase in labor hours (l_t) on output (y_t) from the Normal and Student's t VAR models. The heterogeneous St-VAR model produces a mild decline and a slow recovery in the growth rate of per-capita real GDP, but the effects produced by the stationary Normal VAR model are completely different: the growth rate first increases, then falls and recovers slowly.

4.4. Identification of the ‘Deep’ Structural Parameters

A crucial issue raised in the DSGE literature is the identification of the structural parameters; see (Canova 2007; Iskrev 2010). The problem is that there is often no direct way to relate the statistical parameters ( θ ) to the structural parameters ( φ ) because the implicit function G ( θ , φ ) = 0 is not only highly non-linear, but it also involves algorithms like the Schur decomposition of the structural matrices involved.
An indirect way to probe the identification of the above DSGE model is to use the estimated statistical model, the St-VAR(2), to generate, say, N faithful (true to the probabilistic structure of Z_0) replications of the original data Z_0. The statistical adequacy of the estimated St-VAR(2) ensures that it accounts for the statistical regularities in the data, and thus the simulated data will have the same probabilistic structure as the original observations. This enables the modeler to learn about the identifiability of the deep parameters by using these faithful replicas of the original data to estimate the structural DSGE model.
The N simulated data series can be used to estimate the structural parameters (φ) using the original 'quantification' procedures in (Smets and Wouters 2007). When the histogram of each φ̂_i, for i = 1, 2, …, p, is concentrated around a particular value, with a narrow interval of support, φ_i can be regarded as identifiable. When the histogram exhibits a large range of values and/or multiple modes, it indicates that the parameter in question is non-identifiable.
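The logic of this simulation exercise can be sketched in a drastically simplified setting, with a scalar AR(1) standing in for the St-VAR(2) and OLS standing in for the original quantification procedures; all numbers are illustrative. A tight, unimodal histogram of the re-estimates indicates identifiability; a wide range or multiple modes would flag non-identification.

```python
import numpy as np

rng = np.random.default_rng(42)
a_true, n, N = 0.7, 200, 500   # "fitted" coefficient, sample size, replications

def simulate_ar1(a, n, rng):
    """Generate one faithful replication from the fitted AR(1)."""
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = a * y[t - 1] + rng.standard_normal()
    return y

# Re-estimate the parameter on each replication (OLS on the lagged values).
estimates = np.array([
    (lambda y: np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1]))(simulate_ar1(a_true, n, rng))
    for _ in range(N)
])

# Concentration of the histogram around the fitted value -> identifiable.
print(estimates.mean(), estimates.std())
```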
Out of the 36 parameters, the simulation is applied to 27, keeping the remaining 9 (7 of which are shock variances) fixed. The 27 histograms in Figure 9 and Figure 10 were generated using N = 3000 replications of the original data of sample size n = 230 based on the estimated statistical model, the St-VAR(2); increasing N does not change the results. Looking at these histograms, we can distinguish three different groups of identified/non-identified parameters.
First is the group where the estimated/calibrated value is close to the mode of the histogram. In this case, the parameters π̄, μ_p, σ_c, Φ, ρ_p, ρ_g are potentially identifiable vis-a-vis the data.
Second is the group where the estimated/calibrated values are significantly different from the mode of the histogram. In such a case, the parameters in question, l̄, α, φ, h, ι_w, ι_p, σ_l, r_π, ρ, ρ_a, ρ_r, ρ_w, are likely to be unidentifiable.
Third is the group where the estimated/calibrated value lies outside the actual range of values of the histogram. The parameters in question, ρ_ga, μ_w, ξ_w, ξ_p, r_y, ρ_i, ρ_b, ρ_p, are clearly non-identifiable.
Taken together, only six of the twenty-seven parameters have estimated or calibrated values that are potentially identifiable vis-a-vis the data.
This simulation exercise indicates that, in contrast to the statistical parameters of the St-VAR(2) which are inherently identifiable, the identification and constancy of the ‘deep’ DSGE parameters is called into question. The results also question the appropriateness of the ‘estimation’ of these deep parameters using traditional methods such as the method of moments, maximum likelihood, and Bayesian techniques; see (Smets and Wouters 2003, 2005, 2007; Ireland 2004, 2011). A question that needs to be addressed is whether the Bayesian techniques narrow down the range of values of the deep parameters to render them “artificially” identifiable. Indeed, the broader question which naturally arises when one is dealing with a calibrated model is: what is one calibrating/evaluating when the structural parameters are non-identifiable?

4.5. Substantive vs. Statistical Adequacy

Lucas's (Lucas 1980) argument that “Any model that is well enough articulated to give clear answers to the questions we put to it will necessarily be artificial, abstract, patently ‘unreal’” (p. 696) is highly misleading because it blurs the distinction between substantive and statistical adequacy. There is nothing wrong with constructing a simple, abstract, and idealized theory-model M_φ(z) that aims to capture the key features of the phenomenon of interest, to shed light on (understand, explain, forecast) economic phenomena, and to gain insight concerning alternative policies. Problems of unreliable inference arise when the statistical model M_θ(z) implicitly specified by M_φ(z) is statistically misspecified, and no attempt is made to reliably assess whether M_φ(z) does, indeed, capture the key features of the phenomenon of interest; see (Spanos 2009a). That is, the strategy 'theory or bust' makes no sense in empirical modeling. As argued by Hendry (Hendry 2009):
“This implication is not a tract for mindless modeling of data in the absence of economic analysis, but instead suggests formulating more general initial models that embed the available economic theory as a special case, consistent with our knowledge of the institutional framework, historical record, and the data properties...Applied econometrics cannot be conducted without an economic theoretical framework to guide its endeavors and help interpret its findings. Nevertheless, since economic theory is not complete, correct, and immutable, and never will be, one also cannot justify an insistence on deriving empirical models from theory alone”.
(pp. 56–57)
Statistical misspecification is not the inevitable result of abstraction and simplification but stems from imposing invalid probabilistic assumptions on the data. Moreover, the latter goes a long way toward explaining the poor forecasting performance of the traditional macroeconometric models in the 1970s (Nelson 1972) and can explain the poor forecasting performance of DSGE models.
Unfortunately, the current literature on DSGE modeling adopts the (Kydland and Prescott 1991) view that misspecification is inevitable. For instance, (Canova 2007) goes further by arguing: “DSGE models are misspecified in the sense that they are, in general, too simple to capture the complex probabilistic nature of the data. Hence, it may be fruitless to compare their outcomes with the data...Both academic economists and policy makers use DSGE models to tell stories about how the economy responds to unexpected movements in the exogenous variables”. (p. 160)
There is nothing complicated about the probabilistic nature of economic time series data. The probabilistic assumptions needed to account for the chance regularity patterns in such data come from three broad categories, (D) Distribution, (M) Dependence, and (H) Heterogeneity, with simple generic ways to account for (M) and (H) using lags and trend polynomials. Moreover, when any of the probabilistic assumptions are found wanting, they can easily be replaced with more appropriate ones (respecification); see Spanos (2019). Regarding the use of empirical modeling as 'story-telling', it should be noted that when an estimated DSGE model M_ψ(z) is statistically misspecified, the stories based on it have nothing to do with the economy that gave rise to the data, since the empirical evidence invoked is untrustworthy. It would be better for the scientific reputation of macroeconometrics to skip the 'data' part and just tell the stories associated with simulating M_ψ(z) using parameter values that seem 'appropriate' to a DSGE modeler. Why pretend that these values stem from the data?

5. Summary and Conclusions

The literature on DSGE modeling rightly points out that reliance on chance regularities for statistical inference purposes, as in the case of the traditional VAR(p) model, is not sufficient to represent substantively meaningful (structural) models that can be used to forecast and evaluate different macroeconomic policies. On the other hand, estimating structural models that belie the chance regularities in the data would only give rise to untrustworthy inference results.
Estimating the structural model directly often leads to an impasse, since the estimated model is often both statistically and substantively inadequate. This renders any proposed substantive respecifications of the original structural model (Del Negro and Schorfheide 2009; Del Negro et al. 2007) questionable, since the respecified model is declared 'better' or 'worse' on the basis of untrustworthy evidence when the estimated statistical model is misspecified.
A way to address this quandary is to separate, ab initio, the structural model M_φ(z) from the statistical model M_θ(z), and to establish statistical adequacy before posing any substantive questions of interest. An estimated DSGE model M_φ(z) whose statistical premises M_θ(z) are misspecified constitutes an unreliable basis for any form of inference. From a purely probabilistic perspective, M_θ(z) is viewed as a parameterization of the process { Z_t, t ∈ N } underlying data Z_0, chosen so that it (parametrically) nests M_φ(z) via G(θ, φ) = 0. The crucial distinction between statistical and substantive premises suggests that various traditional conundrums, such as theory-driven vs. data-driven, realistic vs. unrealistic, and policy-oriented vs. non-policy-oriented models, are largely false dilemmas. Statistical adequacy of M_θ(z) is a necessary precondition for securing the reliability of any form of inference.
Using quarterly US data for the period 1947:2–2004:4, the confrontation of the (Smets and Wouters 2007) DSGE model M φ ( z ) with a statistically adequate M θ ( z ) [Student’s t VAR(2)] strongly rejects M φ ( z ) , and calls into question the reliability of any inferences based on it. The Bayesian estimation techniques used by the authors are likely to be equally unreliable because the implicit likelihood function is invalid. Indeed, in light of the unidentifiability of most of the structural parameters shown in Section 4.4, questions arise about the role of the priors in quantifying such parameters. Based on the above discussion, a way forward for DSGE modeling is to engage in the following recommendations.
(a)
The modeler needs to bring out the statistical model M θ ( z ) implicitly specified by the structural model M φ ( z ) , with the former specified in terms of a complete set of testable probabilistic assumptions, as in Table 1, Table 4 and Table 8.
(b)
When a DSGE model M φ ( z ) is estimated directly, the statistical reliability of any inferences drawn is questionable. Before any reliable inferences can be drawn, the modeler needs to test the validity of the assumptions of the statistical model.
(c)
When the statistical model M_θ(z) is found to be misspecified, the modeler needs to respecify it to account for the statistical information in the data. Only when the statistical adequacy of M_θ(z) is established should one proceed to the inference stage.
(d)
The evaluation of the empirical validity of the structural model M φ ( z ) begins with testing the validity of the over-identifying restrictions G ( θ , φ ) = 0 , in the context of a statistically adequate M θ ( z ) .
(e)
In cases where the over-identifying restrictions are rejected, the modeler needs to return to M_φ(z) in order to respecify it substantively, to account for the statistical regularities summarized by the statistically adequate M_θ(z). The misspecification/respecification scenarios proposed by (Del Negro et al. 2007) and (Del Negro and Schorfheide 2009) enter the modeling at this stage, and not before.

Author Contributions

The two authors contributed equally to this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data set used in this paper is the same as that used in (Smets and Wouters 2007).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Miscellaneous Results

Appendix A.1. Derivation of Reduced Form Structural Model

The system in (5) can be decomposed into two subsystems by defining a vector of 7 observables, d_t = (ĉ_t, î_t, ŷ_t, l̂_t, π̂_t, ŵ_t, r̂_t)^⊤, and a vector Y_t of the remaining 33 variables, as follows:

d_t = D Y_{t−1} + E d_{t−1} + F ε_t,
Y_t = G Y_{t−1} + H d_{t−1} + K ε_t,
where D, E, G, H are formed by the partitioning of A_1, and F, K by the partitioning of A_2:

Π = [E, D; H, G], with E: 7×7, D: 7×33, H: 33×7, G: 33×33, so that Π is 40×40.
Assuming that D^{−1} exists, elimination of Y_t from (A1) yields:

d_t = (D G D^{−1} + E) d_{t−1} + D(H − G D^{−1} E) d_{t−2} + D(K − G D^{−1} F) ε_{t−1} + F ε_t.
Note that when the inverse does not exist, the generalized inverse (Rao and Mitra 1972) is used.
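The elimination step can be verified numerically. For the sketch below, d_t and Y_t are given equal (illustrative) dimensions so that D is square and invertible, sidestepping the generalized-inverse case; the matrices are random placeholders, not the structural ones.

```python
import numpy as np

rng = np.random.default_rng(3)
k = 2                                         # illustrative common dimension
E, G, H = (0.3 * rng.standard_normal((k, k)) for _ in range(3))
D = np.eye(k) + 0.3 * rng.standard_normal((k, k))  # near-identity: invertible
F, K = np.eye(k), 0.5 * np.eye(k)
Dinv = np.linalg.inv(D)

T = 10
eps = rng.standard_normal((T, k))
d = np.zeros((T, k))
Y = np.zeros((T, k))

# Simulate the two-subsystem form:
for t in range(1, T):
    d[t] = D @ Y[t - 1] + E @ d[t - 1] + F @ eps[t]
    Y[t] = G @ Y[t - 1] + H @ d[t - 1] + K @ eps[t]

# Check the reduced form obtained by eliminating Y_t:
# d_t = (D G D^-1 + E) d_{t-1} + D(H - G D^-1 E) d_{t-2}
#       + D(K - G D^-1 F) eps_{t-1} + F eps_t
for t in range(2, T):
    rhs = ((D @ G @ Dinv + E) @ d[t - 1]
           + D @ (H - G @ Dinv @ E) @ d[t - 2]
           + D @ (K - G @ Dinv @ F) @ eps[t - 1]
           + F @ eps[t])
    assert np.allclose(d[t], rhs)
```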
By relaxing all the structural restrictions imposed in (9), the statistical model, in the form of the Normal VAR(2) in Table 2, is obtained. From a purely probabilistic construal, the VAR(2) model can now be viewed as a parameterization of a Normal, Markov(2), stationary process { Z_t, t ∈ N } assumed to underlie data Z_0.

Appendix A.2. Multivariate Student’s t

For X ~ St(μ, Σ; ν), where X: p×1, the joint density function is:

f(x; φ) = [Γ((p+ν)/2) / ((νπ)^{p/2} Γ(ν/2))] (det Σ)^{−1/2} {1 + (1/ν)(x − μ)^⊤ Σ^{−1}(x − μ)}^{−(p+ν)/2},

where φ = (μ, Σ), E(X) = μ, and Var(X) = [ν/(ν − 2)] Σ.
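The variance formula Var(X) = [ν/(ν − 2)]Σ can be checked by simulation via the standard scale-mixture representation of the multivariate Student's t (a Normal vector scaled by an independent chi-square); μ, Σ, and ν below are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(7)
nu, n = 10.0, 400_000
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

# Scale-mixture representation: X = mu + Z * sqrt(nu / W), where
# Z ~ N(0, Sigma) and W ~ chi-square(nu), independently of Z.
Z = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
W = rng.chisquare(nu, size=n)
X = mu + Z * np.sqrt(nu / W)[:, None]

# Sample covariance should be close to (nu/(nu-2)) * Sigma = 1.25 * Sigma.
print(np.cov(X, rowvar=False))
```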
Student's t VAR (St-VAR) Model
Let { Z_t, t = 1, 2, … } be a vector Student's t process with ν degrees of freedom, Markov(p) and stationary. The joint distribution of X_t := (Z_t, Z_{t−1}, …, Z_{t−p}) is denoted by:

X_t ~ St(μ, Σ; ν),

where X_t stacks (Z_t, Z_{t−1}, …, Z_{t−p}), μ stacks (p+1) copies of μ_z, and Σ is the symmetric block-Toeplitz matrix whose first block row is (Σ_11, Σ_12, Σ_13, …, Σ_{1,p+1}), so that each diagonal block is Σ_11 and the blocks below the diagonal are the transposes of those above it. Here Z_t: (k×1), Σ_ij: (k×k), μ_z: (k×1), μ: (p*×1), Σ: (p*×p*), with p* = (p+1)k the number of variables in X_t, k the number of variables in Z_t, and p the number of lags.
Joint, Conditional and Marginal Distributions
Let us partition the vectors X_t and μ and the matrix Σ as follows:

X_t = [Z_t (k×1); Z^0_{t−1} (pk×1)], μ = [μ_z (k×1); μ_pk (pk×1)], Σ = [Σ_11 (k×k), Σ_12 (k×pk); Σ_12^⊤ (pk×k), Q (pk×pk)],

where μ_pk is a (pk×1) vector stacking p copies of μ_z. The relevant distributions for all t ∈ N are:

D(Z_t, Z^0_{t−1}; θ) = D(Z_t | Z^0_{t−1}; θ_1) · D(Z^0_{t−1}; θ_2) ~ St(μ, Σ; ν),
D(Z_t | Z^0_{t−1}; θ_1) ~ St(a_0 + A Z^0_{t−1}, Ω q(Z^0_{t−1}); ν + pk),
D(Z^0_{t−1}; θ_2) ~ St(μ_pk, Q; ν),
q(Z^0_{t−1}) = 1 + (1/ν)(Z^0_{t−1} − μ_pk)^⊤ Q^{−1}(Z^0_{t−1} − μ_pk),
A = Σ_12 Q^{−1}, a_0 = μ_z − A μ_pk, Ω = Σ_11 − Σ_12 Q^{−1} Σ_12^⊤,

where Z^0_{t−1} := (Z_{t−1}, …, Z_{t−p}), θ_1 = {a_0, A, Ω, Q, μ}, θ_2 = {μ, Q}. The lack of variation freeness (Spanos 1994) calls for defining the likelihood function in terms of the joint distribution, but reparameterized in terms of the conditional and marginal distribution parameters θ_1 and θ_2, respectively.
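The conditional-moment formulas above can be computed directly from a partitioned scale matrix; the blocks below are illustrative values for k = 2 variables and p = 1 lag (so pk = 2), not estimates from the paper.

```python
import numpy as np

# Illustrative partition of the joint scale matrix:
mu_z = np.array([0.0, 0.0])
mu_pk = np.array([0.0, 0.0])
S11 = np.array([[1.0, 0.4], [0.4, 1.0]])   # scale block of Z_t
S12 = np.array([[0.5, 0.1], [0.1, 0.5]])   # cross block of (Z_t, Z_{t-1})
Q = np.array([[1.0, 0.4], [0.4, 1.0]])     # scale block of Z^0_{t-1}
nu = 5.0

Qinv = np.linalg.inv(Q)
A = S12 @ Qinv                             # autoregressive coefficient matrix
a0 = mu_z - A @ mu_pk                      # intercept
Omega = S11 - S12 @ Qinv @ S12.T           # conditional scale matrix

def q(z_lag):
    """Quadratic factor driving the heteroskedastic conditional variance."""
    dev = z_lag - mu_pk
    return 1.0 + dev @ Qinv @ dev / nu

# The conditional scale Omega * q(z) varies with the conditioning value:
print(q(np.zeros(2)), q(np.array([2.0, -2.0])))
```

At the unconditional mean q(·) = 1, and it grows with the distance of the conditioning value from μ_pk, which is the heteroskedasticity exploited in the main text.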
This can easily be extended to a heterogeneous St-VAR model where the mean is assumed to be μ_z(t) = μ_0 + μ_1 t + μ_2 t². This makes the intercept of the autoregressive function a quadratic function of t: a_0(t) = μ_z(t) − A_1^⊤ μ_z(t−1) − A_2^⊤ μ_z(t−2) − A_3^⊤ μ_z(t−3) = δ_0 + δ_1 t + δ_2 t². One important aspect of this model is that, although heterogeneity is assumed only for the mean of the joint distribution, both the mean and the variance-covariance matrix of the conditional distribution change with t.

Appendix A.3. Software

An R package is available for estimating the St-VAR model by maximum likelihood, where the practitioner can choose the number of variables, the trend polynomial, the highest lag length, and the degrees of freedom. The function in R is:
StVAR(Data, Trend, lag, v, maxiter, meth, hes, init)
where Data is the data matrix with observations in rows; Trend is a matrix whose columns represent deterministic variables such as trends and dummies; lag is the maximum number of lags; v is the degrees of freedom; maxiter is the number of iterations to be performed by the optimization algorithm; meth is the optimization method used by the optim function in R; hes, if "TRUE", calls the Hessian matrix to evaluate the standard errors and the p-values of the estimators; and init gives the initial values.
The function StVAR(.) returns the following inference results: the estimated coefficients of the autoregressive and autoskedastic functions with standard errors and p-values, the conditional variance-covariance matrix, the fitted values, the residuals, and the estimated likelihood value.

References

  1. Anderson, Theodore Wilbur, and Donald Allan Darling. 1952. Asymptotic Theory of Certain “Goodness of Fit” Criteria Based on Stochastic Processes. The Annals of Mathematical Statistics 23: 193–212. [Google Scholar] [CrossRef]
  2. Blanchard, Olivier Jean, and Charles Kahn. 1980. The solution of linear difference models under rational expectations. Econometrica 48: 1305–11. [Google Scholar] [CrossRef]
3. Bodkin, Ronald, Lawrence Robert Klein, and Kanta Marwah. 1991. A History of Macroeconometric Model-Building. Aldershot: Edward Elgar. [Google Scholar]
4. Canova, Fabio. 2007. Methods for Applied Macroeconomic Research. Princeton: Princeton University Press. [Google Scholar]
  5. Canova, Fabio. 2009. How Much Structure in Empirical Models. In Palgrave Handbook of Econometrics. Edited by Terence C. Mills and Kerry D. Patterson. Vol. 2: Applied Econometrics. Basingstoke: Palgrave MacMillan, pp. 68–97. [Google Scholar]
  6. Chang, Yongsung, Taeyoung Doh, and Frank Schorfheide. 2007. Non-stationary Hours in a DSGE Model. Journal of Money, Credit, and Banking 39: 1357–73. [Google Scholar] [CrossRef]
  7. Consolo, Agostino, Carlo A. Favero, and Alessia Paccagnini. 2009. On the statistical identification of DSGE models. Journal of Econometrics 150: 99–115. [Google Scholar] [CrossRef] [Green Version]
8. DeJong, David N., and Chetan Dave. 2011. Structural Macroeconometrics, 2nd ed. Princeton: Princeton University Press. [Google Scholar]
9. Del Negro, Marco, and Frank Schorfheide. 2009. Monetary Policy Analysis with Potentially Misspecified Models. The American Economic Review 99: 1415–50. [Google Scholar] [CrossRef] [Green Version]
  10. Del Negro, Marco, Frank Schorfheide, Frank Smets, and Rafael Wouters. 2007. On the Fit of New Keynesian Models. Journal of Business & Economic Statistics 25: 123–43. [Google Scholar]
  11. Edge, Rochelle M., and Refet S. Gurkaynak. 2010. How Useful are Estimated DSGE Model Forecasts for Central Bankers? Brookings Papers 2010: 209–44. [Google Scholar] [CrossRef] [Green Version]
  12. Favero, Carlo A. 2001. Applied Macroeconometrics. Oxford: Oxford University Press. [Google Scholar]
  13. Fernández-Villaverde, Jesus, and Juan F. Rubio-Ramírez. 2007. Estimating macroeconomic models: A likelihood approach. The Review of Economic Studies 74: 1059–87. [Google Scholar] [CrossRef] [Green Version]
  14. Fernández-Villaverde, Jesus, Juan F. Rubio-Ramírez, Thomas John Sargent, and Mark W. Watson. 2007. ABCs (and Ds) of Understanding VARs. The American Economic Review 97: 1021–26. [Google Scholar] [CrossRef] [Green Version]
  15. Galí, Jordi, Frank Smets, and Rafael Wouters. 2011. Unemployment in an Estimated New Keynesian Model. NBER Macroeconomics Annual 26: 329–60. [Google Scholar] [CrossRef] [Green Version]
  16. Granger, Clive William John, and Paul Newbold. 1986. Forecasting Economic Time Series, 2nd ed. London: Academic Press. [Google Scholar]
  17. Gregory, Allan W., and Gregor W. Smith. 1993. Statistical aspects of calibration in macroeconomics. In Handbook of Statistics. Edited by Gangadharrao Soundalyarao (G.S.) Maddala, Calyampudi Radhakrishna (C.R.) Rao and Hrishikesh D. Vinod. Amsterdam: Elsevier, vol. 2. [Google Scholar]
  18. Harvey, Andrew C., and Albert Jaeger. 1993. Detrending, Stylized Facts and the Business Cycle. Journal of Applied Econometrics 8: 231–47. [Google Scholar] [CrossRef] [Green Version]
19. Hashimzade, Nigar, and Michael Alan Thornton, eds. 2013. Handbook of Research Methods and Applications in Empirical Macroeconomics. Aldershot: Edward Elgar. [Google Scholar]
  20. Heer, Burkhard, and Alfred Maussner. 2009. Dynamic General Equilibrium Modeling, 2nd ed. New York: Springer. [Google Scholar]
  21. Hendry, David Forbes. 2009. The Methodology and Philosophy of Applied Econometrics. In New Palgrave Handbook of Econometrics. Edited by Terence C. Mills and Kerry D. Patterson. vol. 2: Applied Econometrics. London: MacMillan, pp. 3–67. [Google Scholar]
  22. Ireland, Peter N. 2004. Technology Shocks in the New Keynesian Model. The Review of Economics and Statistics 86: 923–36. [Google Scholar] [CrossRef] [Green Version]
  23. Ireland, Peter N. 2011. A new Keynesian perspective on the great recession. Journal of Money, Credit and Banking 43: 31–54. [Google Scholar] [CrossRef] [Green Version]
  24. Iskrev, Nikolay. 2010. Local identification in DSGE models. Journal of Monetary Economics 57: 189–202. [Google Scholar] [CrossRef] [Green Version]
  25. Kim, Kiwhan, and Adrian Rodney Pagan. 1994. The Econometric Analysis of Calibrated Macroeconomic Models. In Handbook of Applied Econometrics. Edited by Hashem Mohammad Pesaran and Michael R. Wickens. vol. I: Macroeconomics. Oxford: Blackwell. [Google Scholar]
  26. Klein, Paul. 2000. Using the generalized Schur form to solve a multivariate linear rational expectations model. Journal of Economic Dynamics and Control 24: 1405–23. [Google Scholar] [CrossRef]
  27. Kydland, Finn Erling, and Edward Christian Prescott. 1982. Time to built and aggregate fluctuations. Econometrica 50: 1345–70. [Google Scholar] [CrossRef]
  28. Kydland, Finn Erling, and Edward Christian Prescott. 1991. The Econometrics of the General Equilibrium Approach to the Business Cycles. The Scandinavian Journal of Economics 93: 161–78. [Google Scholar] [CrossRef]
  29. Lucas, Robert Emerson. 1976. Econometric Policy Evaluation: A Critique. In The Phillips Curve and Labour Markets. Edited by Karl Brunner and Allan M. Metzer. Carnegie-Rochester Conference on Public Policy I. Amsterdam: North-Holland, pp. 19–46. [Google Scholar]
  30. Lucas, Robert Emerson. 1980. Methods and Problems in Business Cycle Theory. Journal of Money, Credit and Banking 12: 696–715. [Google Scholar] [CrossRef]
  31. Lucas, Robert Emerson. 1987. Models of Business Cycles. Oxford: Blackwell. [Google Scholar]
  32. Lutkepohl, Helmut. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer. [Google Scholar]
  33. Mandelbrot, Benoit. 1963. The variation of certain speculative prices. Journal of Business 36: 394–419. [Google Scholar] [CrossRef]
  34. Mayo, Deborah G. 1996. Error and the Growth of Experimental Knowledge. Chicago: The University of Chicago Press. [Google Scholar]
  35. Mayo, Deborah G., and Aris Spanos. 2004. Methodology in Practice: Statistical Misspecification Testing. Philosophy of Science 71: 1007–25. [Google Scholar] [CrossRef]
  36. McCarthy, Michael D. 1972. The Wharton Quarterly Econometric Forecasting Model Mark III. Philadelphia: University of Pennsylvania. [Google Scholar]
  37. Nelson, Charles R. 1972. The Prediction Performance of the F.R.B. -M.I.T.-PENN model of the U.S. Economy. American Economic Review 62: 902–17. [Google Scholar]
  38. Prescott, Edward Christian. 1986. Theory Ahead of Business Cycle Measurement. Federal Reserve Bank of Minneapolis, Quarterly Review 10: 9–22. [Google Scholar]
  39. Primiceri, Alejandro, and Giorgio E. Justiniano. 2008. The Time Varying Volatility of Macroeconomic Fluctuations. The American Economic Review 98: 604–41. [Google Scholar]
  40. Rao, Calyampudi Radhakrishna (C.R.), and Sujit Kumar Mitra. 1972. Generalized Inverse of Matrices and Its Applications. New York: Wiley. [Google Scholar]
  41. Saijo, Hikaru. 2013. Estimating DSGE models using seasonally adjusted and unadjusted data. Journal of Econometrics 173: 22–35. [Google Scholar] [CrossRef]
  42. Smets, Frank, and Rafael Wouters. 2003. An estimated Dynamic Stochastic General Equilibrium Model of the Euro Area. Journal of the European Economic Association 1: 1123–75. [Google Scholar] [CrossRef] [Green Version]
  43. Smets, Frank, and Rafael Wouters. 2005. Comparing Shocks and Frictions in US and Euro Area Business Cycles: A Bayesian DSGE Approach. Journal of Applied Econometrics 20: 161–83. [Google Scholar] [CrossRef] [Green Version]
  44. Smets, Frank, and Rafael Wouters. 2007. Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach. The American Economic Review 97: 586–606. [Google Scholar] [CrossRef] [Green Version]
  45. Spanos, Aris. 1986. Statistical Foundations of Econometric Modelling. Cambridge: Cambridge University Press. [Google Scholar]
  46. Spanos, Aris. 1990. The Simultaneous Equations Model revisited: Statistical adequacy and identification. Journal of Econometrics 44: 87–108. [Google Scholar] [CrossRef]
  47. Spanos, Aris. 1994. On Modeling Heteroskedasticity: The Student’s t and Elliptical Linear Regression Models. Econometric Theory 10: 286–315. [Google Scholar] [CrossRef]
  48. Spanos, Aris. 2006a. Revisiting the omitted variables argument: Substantive vs. statistical adequacy. Journal of Economic Methodology 13: 179–218. [Google Scholar] [CrossRef]
  49. Spanos, Aris. 2006b. Where Do Statistical Models Come From? Revisiting the Problem of Specification. In Optimality: The Second Erich L. Lehmann Symposium. Edited by Javier Rojo. Lecture Notes-Monograph Series; Beachwood: Institute of Mathematical Statistics, vol. 49, pp. 98–119. [Google Scholar]
  50. Spanos, Aris. 2009a. Statistical Misspecification and the Reliability of Inference: The simple t-test in the presence of Markov dependence. The Korean Economic Review 25: 165–213. [Google Scholar]
  51. Spanos, Aris. 2009b. The Pre-Eminence of Theory Versus the European CVAR Perspective in Macroeconometric Modeling. Economics: The Open-Access, Open-Assessment E-Journal 3: 2009–10. Available online: http://www.economics-ejournal.org/economics/journalarticles/2009-10 (accessed on 23 March 2022).
  52. Spanos, Aris. 2010a. Akaike-type Criteria and the Reliability of Inference: Model Selection vs. Statistical Model Specification. Journal of Econometrics 158: 204–20. [Google Scholar] [CrossRef]
  53. Spanos, Aris. 2010b. Theory Testing in Economics and the Error Statistical Perspective. In Error and Inference. Edited by Deborah G. Mayo and Aris Spanos. Cambridge: Cambridge University Press, pp. 202–46. [Google Scholar]
  54. Spanos, Aris. 2012. Philosophy of Econometrics. In Philosophy of Economics. Edited by Uskali Maki, Dov Gabbay, Paul Thagard and Jack Woods. Series of Handbook of Philosophy of Science; Amsterdam: Elsevier, pp. 329–93. [Google Scholar]
  55. Spanos, Aris. 2014. Recurring Controversies about P values and Confidence Intervals Revisited. Ecology 95: 645–51. [Google Scholar] [CrossRef] [Green Version]
  56. Spanos, Aris. 2018. Mis-Specification Testing in Retrospect. Journal of Economic Surveys 32: 541–77. [Google Scholar] [CrossRef]
  57. Spanos, Aris. 2019. Probability Theory and Statistical Inference: Empirical Modeling with Observational Data. Cambridge: Cambridge University Press. [Google Scholar]
  58. Spanos, Aris. 2021. Methodology of Macroeconometrics. In Oxford Research Encyclopedia of Economics and Finance. Edited by Avinash Dixit, Sebastian Edwards and Kenneth Judd. Oxford: Oxford University Press. [Google Scholar]
  59. Spanos, Aris, and Anya McGuirk. 2001. The Model Specification Problem from a Probabilistic Reduction Perspective. Journal of the American Agricultural Association 83: 1168–76. [Google Scholar] [CrossRef]
  60. Stock, James Harold, and Mark W. Watson. 2002. Has the Business Cycle Changed and Why? NBER Macroeconomics Annual 17: 159–230. [Google Scholar] [CrossRef]
  61. Wickens, Michael. 1995. Real Business Cycle Analysis: A Needed Revolution in Macroeconometrics. The Economic Journal 105: 1637–48. [Google Scholar]
Figure 1. N-VAR residuals ($\hat{u}_t^2$) vs. St-VAR $\widehat{Var}(y_t \mid \mathbf{Z}_{t-1}^{0})$.
Figure 2. N-VAR residuals ($\hat{u}_t^2$) vs. St-VAR $\widehat{Var}(c_t \mid \mathbf{Z}_{t-1}^{0})$.
Figure 3. Scaled St-VAR(2) residuals.
Figure 4. Normal-VAR(2) residuals.
Figure 5. Forecasting wage growth.
Figure 6. Forecasting consumption growth.
Figure 7. Effect of a 1% increase in labor hours ($l_t$) on inflation ($\pi_t$).
Figure 8. Effect of a 1% increase in labor hours ($l_t$) on output ($y_t$).
Figure 9. Histograms of the estimated/calibrated key parameters 1.
Figure 10. Histograms of the estimated/calibrated key parameters 2.
Table 1. Multivariate Linear Regression Model.
Statistical GM: $\mathbf{Y}_t=\boldsymbol{\beta}_0+\mathbf{B}_1^{\top}\mathbf{x}_t+\mathbf{u}_t,\ t\in\mathbb{N}$,
[1] Normality: $(\mathbf{Y}_t \mid \mathbf{X}_t=\mathbf{x}_t)\sim \mathsf{N}(\cdot,\cdot)$,
[2] Linearity: $E(\mathbf{Y}_t \mid \mathbf{X}_t=\mathbf{x}_t)=\boldsymbol{\beta}_0+\mathbf{B}_1^{\top}\mathbf{x}_t$, linear in $\mathbf{x}_t$,
[3] Homoskedasticity: $Var(\mathbf{Y}_t \mid \mathbf{X}_t=\mathbf{x}_t)=\boldsymbol{\Sigma}$, free of $\mathbf{x}_t$,
[4] Independence: $\{(\mathbf{Y}_t \mid \mathbf{X}_t=\mathbf{x}_t),\ t\in\mathbb{N}\}$ is an independent process,
[5] t-invariance: $\boldsymbol{\theta}:=(\boldsymbol{\beta}_0,\mathbf{B}_1,\boldsymbol{\Sigma})$ do not change with $t$,
where $\boldsymbol{\beta}_0=E(\mathbf{Y}_t)-\mathbf{B}_1^{\top}E(\mathbf{X}_t)$, $\mathbf{B}_1=Cov(\mathbf{X}_t)^{-1}Cov(\mathbf{X}_t,\mathbf{Y}_t)$, $\boldsymbol{\Sigma}=Cov(\mathbf{Y}_t)-Cov(\mathbf{Y}_t,\mathbf{X}_t)\mathbf{B}_1$.
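The reparameterization at the bottom of Table 1 expresses the model parameters as functions of the joint first two moments of $(\mathbf{Y}_t,\mathbf{X}_t)$. A minimal simulation sketch (all coefficient values are hypothetical, chosen only for illustration) verifying those moment formulas numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for a two-equation multivariate linear regression,
# Y_t = beta0 + B1' x_t + u_t, with hypothetical coefficients.
n = 5000
X = rng.normal(size=(n, 3))
B1 = np.array([[0.5, -0.2], [0.1, 0.4], [0.0, 0.3]])  # k x m
beta0 = np.array([1.0, -1.0])
U = rng.normal(scale=0.3, size=(n, 2))
Y = beta0 + X @ B1 + U

# Table 1 reparameterization: parameters from sample first two moments.
Xc, Yc = X - X.mean(0), Y - Y.mean(0)
cov_XX = Xc.T @ Xc / n
cov_XY = Xc.T @ Yc / n
B1_hat = np.linalg.solve(cov_XX, cov_XY)        # B1 = Cov(X)^{-1} Cov(X, Y)
beta0_hat = Y.mean(0) - B1_hat.T @ X.mean(0)    # beta0 = E(Y) - B1' E(X)
Sigma_hat = Yc.T @ Yc / n - cov_XY.T @ B1_hat   # Sigma = Cov(Y) - Cov(Y,X) B1
```

With a large sample, the moment-based estimates recover the coefficients used to generate the data, which is the content of the reparameterization.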
Table 2. Smets and Wouters (2007) DSGE model—Behavioral equations.
Resource constraint: $y_t=c_y c_t+i_y i_t+z_y z_t+\varepsilon_t^{g}$
Consumption: $c_t=c_1 c_{t-1}+(1-c_1)E_t c_{t+1}+c_2(l_t-E_t l_{t+1})-c_3(r_t-E_t\pi_{t+1}+\varepsilon_t^{b})$
Investment: $i_t=i_1 i_{t-1}+(1-i_1)E_t i_{t+1}+i_2 q_t+\varepsilon_t^{i}$
Arbitrage: $q_t=q_1 E_t q_{t+1}+(1-q_1)E_t r_{t+1}^{k}-(r_t-E_t\pi_{t+1}+\varepsilon_t^{b})$
Production: $y_t=\phi_p(\alpha k_t^{s}+(1-\alpha)l_t+\varepsilon_t^{a})$
Capital services: $k_t^{s}=k_{t-1}+z_t$
Capital utilization: $z_t=z_1 r_t^{k}$
Installed capital: $k_t=k_1 k_{t-1}+(1-k_1)i_t+k_2\varepsilon_t^{i}$
Price mark-up: $\mu_t^{p}=mpl_t-w_t=\alpha(k_t^{s}-l_t)+\varepsilon_t^{a}-w_t$
Phillips curve: $\pi_t=\pi_1\pi_{t-1}+\pi_2 E_t\pi_{t+1}-\pi_3\mu_t^{p}+\varepsilon_t^{p}$
Rental rate of capital: $r_t^{k}=-(k_t-l_t)+w_t$
Wage mark-up: $\mu_t^{w}=w_t-mrs_t=w_t-\left(\sigma_l l_t+\tfrac{1}{1-\lambda/\gamma}\left(c_t-(\lambda/\gamma)c_{t-1}\right)\right)$
Real wage: $w_t=w_1 w_{t-1}+(1-w_1)(E_t w_{t+1}+E_t\pi_{t+1})-w_2\pi_t+w_3\pi_{t-1}-w_4\mu_t^{w}+\varepsilon_t^{w}$
Taylor rule: $r_t=\rho r_{t-1}+(1-\rho)\{r_{\pi}\pi_t+r_{Y}(y_t-y_t^{p})\}+r_{\Delta y}\{(y_t-y_t^{p})-(y_{t-1}-y_{t-1}^{p})\}+\varepsilon_t^{r}$
Table 3. Smets and Wouters (2007) DSGE model: Exogenous Shocks.
Exogenous spending: $\varepsilon_t^{g}=\rho_g\varepsilon_{t-1}^{g}+\eta_t^{g}+\rho_{ga}\eta_t^{a}$
Risk premium: $\varepsilon_t^{b}=\rho_b\varepsilon_{t-1}^{b}+\eta_t^{b}$
Investment-specific technology: $\varepsilon_t^{i}=\rho_i\varepsilon_{t-1}^{i}+\eta_t^{i}$
Total factor productivity: $\varepsilon_t^{a}=\rho_a\varepsilon_{t-1}^{a}+\eta_t^{a}$
Price mark-up: $\varepsilon_t^{p}=\rho_p\varepsilon_{t-1}^{p}+\eta_t^{p}-\mu_p\eta_{t-1}^{p}$
Wage mark-up: $\varepsilon_t^{w}=\rho_w\varepsilon_{t-1}^{w}+\eta_t^{w}-\mu_w\eta_{t-1}^{w}$
Monetary policy: $\varepsilon_t^{r}=\rho_r\varepsilon_{t-1}^{r}+\eta_t^{r}$
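The mark-up shocks above are ARMA(1,1) processes: an AR(1) in the shock with a moving-average term in the innovation. A short simulation sketch (the values of $\rho_w$ and $\mu_w$ below are illustrative, not the paper's or Smets and Wouters' estimates) showing that the simulated first-order autocorrelation matches the ARMA(1,1) theoretical value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate the wage mark-up shock process from Table 3:
#   eps_t = rho_w * eps_{t-1} + eta_t - mu_w * eta_{t-1}
# (rho_w, mu_w are illustrative values, not estimates from the paper).
rho_w, mu_w, n = 0.9, 0.7, 200_000
eta = rng.normal(size=n)
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = rho_w * eps[t - 1] + eta[t] - mu_w * eta[t - 1]

# ARMA(1,1) with MA coefficient theta = -mu_w: theoretical lag-1 ACF.
theta = -mu_w
acf1_theory = (1 + rho_w * theta) * (rho_w + theta) / (1 + theta**2 + 2 * rho_w * theta)
acf1_empirical = np.corrcoef(eps[1:], eps[:-1])[0, 1]
```

The MA term partially offsets the AR persistence, which is why the lag-1 autocorrelation (about 0.32 here) is far below $\rho_w$.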
Table 4. Normal VAR(p) model.
Statistical GM: $\mathbf{Z}_t=\mathbf{a}_0+\sum_{i=1}^{p}\mathbf{A}_i\mathbf{Z}_{t-i}+\mathbf{u}_t,\ t\in\mathbb{N}$,
[1] Normality: $(\mathbf{Z}_t \mid \mathbf{Z}_{t-1}^{0})$ is Normal,
[2] Linearity: $E(\mathbf{Z}_t \mid \sigma(\mathbf{Z}_{t-1}^{0}))=\mathbf{a}_0+\sum_{i=1}^{p}\mathbf{A}_i\mathbf{Z}_{t-i}$,
[3] Homoskedasticity: $Var(\mathbf{Z}_t \mid \sigma(\mathbf{Z}_{t-1}^{0}))=\mathbf{V}$ is free of $\mathbf{Z}_{t-1}^{0}:=(\mathbf{Z}_{t-1},\ldots,\mathbf{Z}_1)$,
[4] Markov: $\{\mathbf{Z}_t,\ t\in\mathbb{N}\}$ is a Markov(p) process,
[5] t-invariance: $\boldsymbol{\theta}:=(\mathbf{a}_0,\mathbf{A}_1,\mathbf{A}_2,\ldots,\mathbf{A}_p,\mathbf{V})$ are t-invariant for all $t\in\mathbb{N}$.
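Under assumptions [1]–[5], the Normal VAR(p) can be estimated by multivariate least squares, regressing $\mathbf{Z}_t$ on a constant and its own lags. A minimal sketch with a bivariate VAR(1) and hypothetical coefficients (the paper's model has seven variables and $p=2$):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a bivariate VAR(1) with hypothetical coefficients.
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
a0 = np.array([0.2, -0.1])
n = 4000
Z = np.zeros((n, 2))
for t in range(1, n):
    Z[t] = a0 + A1 @ Z[t - 1] + rng.normal(scale=0.5, size=2)

# Multivariate least squares: regress Z_t on (1, Z_{t-1}).
X = np.column_stack([np.ones(n - 1), Z[:-1]])
Y = Z[1:]
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)  # rows: intercept, then A1'
a0_hat, A1_hat = coef[0], coef[1:].T
U = Y - X @ coef
V_hat = U.T @ U / (n - 1)                     # error covariance V
```

Equation-by-equation OLS and system least squares coincide here because every equation shares the same regressors.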
Table 5. Joint M-S tests for the N-VAR probabilistic assumptions.
Null Hypotheses
[1] Normality (Anderson–Darling): $\mathbf{Z}_t\sim \mathsf{N}(\cdot,\cdot)$
[2] Linearity, F(228,1): $H_0: b_2=0$
[3] (i) Homoskedasticity, F(228,2): $H_0: c_1=c_2=0$
[3] (ii) Dynamic heteroskedasticity, F(228,2): $H_0: c_3=c_4=0$
[4] Markov(2), F(228,2): $H_0: b_3=b_4=0$
[5] (i) 1st-moment t-invariance, F(228,2): $H_0: b_5=b_6=0$
[5] (ii) 2nd-moment t-invariance, F(228,2): $H_0: c_5=c_6=0$
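The F-tests in Table 5 are auxiliary-regression tests: the residuals are regressed on extra terms (powers, lags, trends), and the joint significance of those terms is assessed. A hedged sketch of the mechanics for the first-moment t-invariance test (the trend terms and simulated residuals below are illustrative, not the paper's exact auxiliary regression):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Sketch of an auxiliary-regression M-S test in the spirit of Table 5:
# regress residuals on trend terms (t, t^2) and F-test their joint
# significance to probe first-moment t-invariance ([5](i)).
# The residuals here are simulated; in practice they come from the VAR.
n = 240
u = rng.normal(size=n)
trend = np.arange(1, n + 1) / n

X_r = np.ones((n, 1))                                   # restricted: constant only
X_u = np.column_stack([np.ones(n), trend, trend ** 2])  # unrestricted: + t, t^2

rss_r = np.sum((u - X_r @ np.linalg.lstsq(X_r, u, rcond=None)[0]) ** 2)
rss_u = np.sum((u - X_u @ np.linalg.lstsq(X_u, u, rcond=None)[0]) ** 2)
q, df = 2, n - 3
F = ((rss_r - rss_u) / q) / (rss_u / df)  # F(q, df) under H0
p_value = 1 - stats.f.cdf(F, q, df)
```

A small p-value indicates detectable trending in the residual means, i.e., a departure from assumption [5].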
Table 6. M-S testing results for N-VAR(2). Test statistics with p-values in square brackets.
                  c_t           i_t           y_t           l_t           π_t           w_t           r_t
Linearity         0.116[0.734]  2.963[0.087]  0.082[0.774]  3.597[0.059]  1.653[0.200]  2.112[0.148]  0.406[0.525]
1st-dependence    0.045[0.956]  0.096[0.909]  0.303[0.739]  1.404[0.248]  0.328[0.721]  0.032[0.968]  5.520[0.005]
1st-t-invariance  0.177[0.838]  0.601[0.549]  0.067[0.935]  1.638[0.197]  0.887[0.413]  0.150[0.861]  0.623[0.537]
Homoskedasticity  5.947[0.003]  8.005[0.000]  1.397[0.249]  2.694[0.070]  0.265[0.767]  1.593[0.206]  0.021[0.979]
2nd-dependence    0.591[0.555]  0.080[0.923]  0.068[0.934]  1.012[0.365]  0.697[0.499]  0.696[0.500]  11.115[0.000]
2nd-t-invariance  3.643[0.028]  3.434[0.034]  11.428[0.000] 2.272[0.106]  8.652[0.000]  0.218[0.804]  0.927[0.397]
A-D test          0.960[0.015]  1.355[0.002]  0.610[0.111]  0.435[0.298]  3.315[0.000]  1.155[0.005]  8.053[0.000]
Table 7. M-S testing results for N-VAR(3). Test statistics with p-values in square brackets.
                  c_t           i_t           y_t           l_t           π_t           w_t           r_t
Linearity         0.251[0.617]  1.994[0.159]  0.107[0.744]  1.905[0.169]  0.699[0.404]  3.293[0.071]  0.067[0.796]
1st-dependence    0.043[0.958]  0.065[0.937]  0.522[0.594]  0.191[0.826]  0.828[0.439]  0.057[0.945]  0.290[0.749]
1st-t-invariance  0.512[0.600]  0.569[0.567]  0.437[0.647]  0.609[0.545]  0.763[0.467]  0.074[0.929]  1.187[0.307]
Homoskedasticity  8.245[0.000]  9.869[0.000]  0.976[0.378]  2.315[0.101]  1.757[0.175]  0.317[0.728]  0.035[0.966]
2nd-dependence    1.090[0.338]  0.242[0.785]  0.054[0.948]  0.960[0.384]  3.080[0.048]  0.637[0.530]  5.940[0.003]
2nd-t-invariance  3.595[0.029]  2.102[0.125]  1.547[0.000]  1.856[0.159]  8.613[0.000]  0.477[0.621]  0.842[0.432]
A-D test          0.780[0.042]  1.113[0.006]  0.631[0.099]  0.142[0.972]  2.423[0.000]  1.222[0.003]  7.316[0.000]
Table 8. Student's t VAR(p) model.
Statistical GM: $\mathbf{Z}_t=\mathbf{a}_0(t)+\sum_{i=1}^{p}\mathbf{A}_i\mathbf{Z}_{t-i}+\mathbf{u}_t,\ t\in\mathbb{N}$,
[1] Student's t: $(\mathbf{Z}_t \mid \mathbf{Z}_{t-1}^{0})$, $\mathbf{Z}_{t-1}^{0}:=(\mathbf{Z}_{t-1},\ldots,\mathbf{Z}_1)$, is Student's t with $\nu+kp$ d.f.,
[2] Linearity: $E(\mathbf{Z}_t \mid \sigma(\mathbf{Z}_{t-1}^{0}))=\mathbf{a}_0(t)+\sum_{i=1}^{p}\mathbf{A}_i\mathbf{Z}_{t-i}$,
[3] Heteroskedasticity: $Var(\mathbf{Z}_t \mid \sigma(\mathbf{Z}_{t-1}^{0}))=q(\mathbf{Z}_{t-1}^{0})$ depends on $\mathbf{Z}_{t-1}^{0}$, where
$q(\mathbf{Z}_{t-1}^{0})=\frac{\nu}{\nu+kp-2}\,\mathbf{V}\left[1+\frac{1}{\nu}\sum_{i=1}^{p}(\mathbf{Z}_{t-i}-\boldsymbol{\mu}(t))^{\top}\mathbf{Q}_i^{-1}(\mathbf{Z}_{t-i}-\boldsymbol{\mu}(t))\right]$,
[4] Markov(p): $\{\mathbf{Z}_t,\ t\in\mathbb{N}\}$ is a Markov(p) process,
[5] t-invariance: $\boldsymbol{\theta}:=(\mathbf{a}_0,\boldsymbol{\mu},\mathbf{A}_1,\ldots,\mathbf{A}_p,\mathbf{V},\mathbf{Q}_1,\ldots,\mathbf{Q}_p)$ are constant for $t\in\mathbb{N}$.
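Assumption [3] is what distinguishes the St-VAR from the Normal VAR: the conditional variance scales the baseline $\mathbf{V}$ by a quadratic form in the conditioning lags, so volatility rises with the magnitude of recent observations. A minimal numerical sketch with illustrative values of $\nu$, $\mathbf{V}$, and $\mathbf{Q}_1$ (k = 2 variables, p = 1 lag, none of these values are from the paper):

```python
import numpy as np

# Conditional variance of the Student's t VAR, per the formula in Table 8,
# with illustrative parameter values (not estimates from the paper).
nu, k, p = 5, 2, 1
V = np.array([[1.0, 0.2], [0.2, 0.5]])
Q1 = np.array([[2.0, 0.0], [0.0, 1.0]])
mu = np.zeros(k)

def cond_var(z_lag):
    """Var(Z_t | Z_{t-1} = z_lag) for the Student's t VAR(1)."""
    quad = (z_lag - mu) @ np.linalg.inv(Q1) @ (z_lag - mu)
    return (nu / (nu + k * p - 2)) * V * (1 + quad / nu)

# Heteroskedasticity: the conditional variance grows with the size of
# the conditioning lag, unlike the Normal VAR's constant V.
v_small = cond_var(np.zeros(k))
v_large = cond_var(np.array([3.0, 3.0]))
```

With these values $\nu/(\nu+kp-2)=1$, so the variance at a zero lag equals $\mathbf{V}$, while a large lag inflates it by the factor $1+\tfrac{1}{\nu}\mathbf{z}^{\top}\mathbf{Q}_1^{-1}\mathbf{z}$.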
Table 9. Estimation results for St-VAR(p = 2).
          c_t            i_t            y_t            l_t            π_t            w_t            z_t
const     0.832[0.000]   1.017[0.001]   0.587[0.000]   0.004[0.954]   −0.001[0.979]  0.356[0.000]   −0.043[0.015]
t         −0.343[0.397]  −2.006[0.051]  −1.583[0.001]  −0.192[0.326]  0.098[0.449]   −1.201[0.001]  −0.291[0.000]
c_{t−1}   −0.227[0.000]  −0.022[0.905]  0.147[0.064]   0.135[0.014]   0.104[0.001]   0.018[0.754]   0.008[0.468]
i_{t−1}   0.025[0.222]   0.382[0.000]   0.111[0.000]   0.040[0.550]   −0.009[0.359]  −0.004[0.837]  0.016[0.000]
y_{t−1}   0.179[0.005]   0.215[0.239]   −0.056[0.457]  0.094[0.072]   −0.042[0.179]  0.044[0.453]   0.012[0.262]
l_{t−1}   −0.041[0.669]  −0.319[0.259]  −0.074[0.533]  1.046[0.000]   0.008[0.867]   0.037[0.672]   0.015[0.386]
π_{t−1}   −0.369[0.001]  0.377[0.188]   0.101[0.451]   0.167[0.096]   0.571[0.000]   0.126[0.246]   0.043[0.036]
w_{t−1}   −0.053[0.415]  0.240[0.214]   −0.067[0.412]  −0.001[0.992]  0.070[0.039]   0.013[0.829]   0.004[0.755]
z_{t−1}   −0.389[0.219]  0.036[0.970]   0.000[0.999]   0.585[0.035]   0.330[0.044]   −0.855[0.003]  0.303[0.000]
c_{t−2}   0.010[0.898]   −0.141[0.553]  0.043[0.667]   −0.052[0.483]  0.037[0.387]   0.041[0.602]   −0.020[0.163]
i_{t−2}   −0.024[0.324]  0.050[0.488]   −0.021[0.456]  −0.010[0.646]  0.026[0.050]   −0.007[0.749]  −0.005[0.235]
y_{t−2}   0.059[0.411]   −0.331[0.074]  0.014[0.861]   −0.039[0.516]  −0.057[0.129]  0.006[0.922]   0.013[0.288]
l_{t−2}   0.007[0.943]   0.210[0.466]   0.065[0.588]   −0.080[0.299]  −0.024[0.621]  −0.003[0.970]  −0.007[0.692]
π_{t−2}   0.030[0.819]   −0.996[0.003]  −0.262[0.092]  −0.248[0.025]  0.308[0.000]   −0.129[0.308]  0.004[0.755]
w_{t−2}   0.040[0.550]   −0.210[0.284]  −0.096[0.240]  −0.062[0.275]  0.054[0.163]   0.120[0.071]   −0.006[0.657]
z_{t−2}   −0.441[0.251]  −1.002[0.375]  −1.532[0.001]  −0.540[0.116]  0.050[0.800]   0.291[0.393]   −0.206[0.003]
Numbers in square brackets are p-values.
Table 10. Estimation results for the N-VAR(2) model. Numbers in square brackets are p-values.
          c_t            i_t            y_t            l_t            π_t            w_t            r_t
const     1.101[0.000]   1.891[0.000]   0.952[0.000]   0.161[0.191]   −0.130[0.146]  0.534[0.000]   −0.003[0.947]
c_{t−1}   −0.398[0.000]  −0.287[0.265]  0.145[0.179]   0.072[0.294]   0.030[0.544]   0.030[0.685]   0.029[0.247]
i_{t−1}   0.025[0.386]   0.327[0.000]   0.053[0.135]   0.027[0.231]   0.018[0.262]   −0.010[0.671]  0.022[0.008]
y_{t−1}   0.182[0.024]   0.245[0.297]   0.001[0.992]   0.167[0.008]   0.003[0.944]   0.004[0.954]   −0.018[0.429]
l_{t−1}   0.196[0.091]   0.492[0.145]   0.218[0.124]   1.083[0.000]   −0.084[0.197]  0.050[0.606]   0.050[0.130]
π_{t−1}   −0.463[0.000]  0.436[0.227]   0.192[0.204]   0.155[0.109]   0.539[0.000]   0.060[0.563]   0.034[0.339]
w_{t−1}   0.018[0.849]   0.360[0.197]   −0.081[0.488]  0.017[0.817]   0.084[0.121]   −0.082[0.303]  0.006[0.837]
r_{t−1}   −0.585[0.018]  −1.047[0.145]  −0.376[0.212]  0.090[0.639]   0.195[0.161]   −0.324[0.115]  1.093[0.000]
c_{t−2}   0.031[0.717]   0.161[0.519]   0.117[0.264]   0.018[0.783]   0.085[0.079]   0.095[0.182]   −0.013[0.581]
i_{t−2}   −0.012[0.650]  0.004[0.958]   −0.030[0.372]  0.004[0.832]   0.026[0.088]   −0.023[0.314]  −0.012[0.121]
y_{t−2}   0.068[0.343]   −0.313[0.137]  0.047[0.591]   0.026[0.641]   −0.047[0.247]  −0.036[0.546]  0.017[0.418]
l_{t−2}   −0.271[0.020]  −0.750[0.027]  −0.281[0.048]  −0.156[0.085]  0.070[0.282]   −0.007[0.940]  −0.047[0.157]
π_{t−2}   −0.078[0.555]  −0.944[0.015]  −0.122[0.450]  −0.233[0.025]  0.243[0.001]   0.008[0.945]   0.044[0.242]
w_{t−2}   0.001[0.986]   −0.337[0.177]  −0.178[0.090]  −0.169[0.012]  0.111[0.022]   0.115[0.108]   −0.008[0.730]
r_{t−2}   0.542[0.031]   0.301[0.681]   −0.043[0.888]  −0.242[0.217]  −0.080[0.572]  0.207[0.323]   −0.149[0.039]
Table 11. The estimated St-VAR(2) conditional variance $\widehat{Var}(c_t \mid \mathbf{Z}_{t-1}^{0})$. p-values are reported in parentheses as subscripts.
$\widehat{Var}(c_t \mid \mathbf{Z}_{t-1}^{0}) = 0.105_{(0.000)} + 0.104_{(0.000)}\tilde{c}_{t-1}^{2} + 0.107_{(0.000)}\tilde{c}_{t-2}^{2} + 0.011_{(0.000)}\tilde{\imath}_{t-1}^{2} + 0.011_{(0.000)}\tilde{\imath}_{t-2}^{2} + 0.092_{(0.000)}\tilde{y}_{t-1}^{2} + 0.080_{(0.000)}\tilde{y}_{t-2}^{2} + 0.138_{(0.000)}\tilde{l}_{t-1}^{2} + 0.140_{(0.000)}\tilde{l}_{t-2}^{2} + 0.271_{(0.000)}\tilde{\pi}_{t-1}^{2} + 0.287_{(0.000)}\tilde{\pi}_{t-2}^{2} + 0.100_{(0.000)}\tilde{w}_{t-1}^{2} + 0.082_{(0.000)}\tilde{w}_{t-2}^{2} + 2.393_{(0.000)}\tilde{r}_{t-1}^{2} + 2.160_{(0.000)}\tilde{r}_{t-2}^{2} - 0.004_{(0.007)}\tilde{c}_{t-1}\tilde{\imath}_{t-1} - 0.023_{(0.508)}\tilde{\pi}_{t-2}\tilde{r}_{t-2} - 0.041_{(0.000)}\tilde{c}_{t-1}\tilde{y}_{t-1} - 0.007_{(0.258)}\tilde{c}_{t-1}\tilde{l}_{t-1} + 0.020_{(0.013)}\tilde{c}_{t-1}\tilde{\pi}_{t-1} - 0.010_{(0.022)}\tilde{c}_{t-1}\tilde{w}_{t-1} - 0.025_{(0.235)}\tilde{c}_{t-1}\tilde{r}_{t-1} + 0.033_{(0.000)}\tilde{c}_{t-1}\tilde{c}_{t-2} + 0.004_{(0.022)}\tilde{c}_{t-1}\tilde{\imath}_{t-2} - 0.018_{(0.000)}\tilde{c}_{t-1}\tilde{y}_{t-2} + 0.011_{(0.081)}\tilde{c}_{t-1}\tilde{l}_{t-2} + 0.015_{(0.066)}\tilde{c}_{t-1}\tilde{\pi}_{t-2} - 0.000_{(0.983)}\tilde{c}_{t-1}\tilde{w}_{t-2} + 0.014_{(0.600)}\tilde{c}_{t-1}\tilde{r}_{t-2} - 0.009_{(0.000)}\tilde{\imath}_{t-1}\tilde{y}_{t-1} - 0.007_{(0.001)}\tilde{\imath}_{t-1}\tilde{l}_{t-1} + 0.001_{(0.710)}\tilde{\imath}_{t-1}\tilde{\pi}_{t-1} - 0.003_{(0.071)}\tilde{\imath}_{t-1}\tilde{w}_{t-1} - 0.016_{(0.015)}\tilde{\imath}_{t-1}\tilde{r}_{t-1} + 0.002_{(0.199)}\tilde{\imath}_{t-1}\tilde{c}_{t-2} - 0.002_{(0.000)}\tilde{\imath}_{t-1}\tilde{\imath}_{t-2} - 0.001_{(0.722)}\tilde{\imath}_{t-1}\tilde{y}_{t-2} + 0.007_{(0.000)}\tilde{\imath}_{t-1}\tilde{l}_{t-2} + 0.001_{(0.697)}\tilde{\imath}_{t-1}\tilde{\pi}_{t-2} - 0.003_{(0.031)}\tilde{\imath}_{t-1}\tilde{w}_{t-2} + 0.007_{(0.314)}\tilde{\imath}_{t-1}\tilde{r}_{t-2} - 0.041_{(0.000)}\tilde{y}_{t-1}\tilde{l}_{t-1} + 0.011_{(0.122)}\tilde{y}_{t-1}\tilde{\pi}_{t-1} - 0.019_{(0.000)}\tilde{y}_{t-1}\tilde{w}_{t-1} - 0.024_{(0.247)}\tilde{y}_{t-1}\tilde{r}_{t-1} - 0.020_{(0.001)}\tilde{y}_{t-1}\tilde{c}_{t-2} - 0.017_{(0.001)}\tilde{y}_{t-1}\tilde{c}_{t-2} - 0.003_{(0.051)}\tilde{y}_{t-1}\tilde{\imath}_{t-2} + 0.020_{(0.000)}\tilde{y}_{t-1}\tilde{y}_{t-2} + 0.040_{(0.000)}\tilde{y}_{t-1}\tilde{l}_{t-2} - 0.012_{(0.148)}\tilde{y}_{t-1}\tilde{\pi}_{t-2} + 0.022_{(0.309)}\tilde{y}_{t-1}\tilde{r}_{t-2} - 0.001_{(0.874)}\tilde{l}_{t-1}\tilde{\pi}_{t-1} + 0.045_{(0.000)}\tilde{l}_{t-1}\tilde{w}_{t-1} - 0.111_{(0.000)}\tilde{l}_{t-1}\tilde{r}_{t-1} - 0.016_{(0.012)}\tilde{l}_{t-1}\tilde{c}_{t-2} + 0.003_{(0.494)}\tilde{y}_{t-1}\tilde{w}_{t-2} - 0.003_{(0.110)}\tilde{l}_{t-1}\tilde{\imath}_{t-2} - 0.016_{(0.003)}\tilde{l}_{t-1}\tilde{y}_{t-2} - 0.136_{(0.000)}\tilde{l}_{t-1}\tilde{l}_{t-2} - 0.005_{(0.630)}\tilde{l}_{t-1}\tilde{\pi}_{t-2} + 0.008_{(0.208)}\tilde{l}_{t-1}\tilde{w}_{t-2} - 0.014_{(0.609)}\tilde{l}_{t-1}\tilde{r}_{t-2} + 0.041_{(0.000)}\tilde{\pi}_{t-1}\tilde{w}_{t-1}$
$- 0.026_{(0.469)}\tilde{\pi}_{t-1}\tilde{r}_{t-1} - 0.019_{(0.022)}\tilde{\pi}_{t-1}\tilde{c}_{t-2} - 0.000_{(0.999)}\tilde{\pi}_{t-1}\tilde{\imath}_{t-2} + 0.000_{(0.957)}\tilde{\pi}_{t-1}\tilde{y}_{t-2} + 0.007_{(0.433)}\tilde{\pi}_{t-1}\tilde{l}_{t-2} - 0.208_{(0.000)}\tilde{\pi}_{t-1}\tilde{\pi}_{t-2} - 0.026_{(0.001)}\tilde{\pi}_{t-1}\tilde{w}_{t-2} - 0.078_{(0.033)}\tilde{\pi}_{t-1}\tilde{r}_{t-2} + 0.017_{(0.415)}\tilde{w}_{t-1}\tilde{r}_{t-1} - 0.011_{(0.033)}\tilde{w}_{t-1}\tilde{c}_{t-2} + 0.001_{(0.545)}\tilde{w}_{t-1}\tilde{\imath}_{t-2} - 0.007_{(0.108)}\tilde{w}_{t-1}\tilde{y}_{t-2} - 0.054_{(0.000)}\tilde{\pi}_{t-1}\tilde{l}_{t-2} - 0.042_{(0.000)}\tilde{w}_{t-1}\tilde{\pi}_{t-2} - 0.002_{(0.644)}\tilde{w}_{t-1}\tilde{w}_{t-2} + 0.021_{(0.353)}\tilde{w}_{t-1}\tilde{r}_{t-2} - 0.026_{(0.283)}\tilde{r}_{t-1}\tilde{c}_{t-2} - 0.022_{(0.006)}\tilde{r}_{t-1}\tilde{\imath}_{t-2} - 0.011_{(0.591)}\tilde{r}_{t-1}\tilde{y}_{t-2} + 0.089_{(0.003)}\tilde{r}_{t-1}\tilde{l}_{t-2} - 0.106_{(0.009)}\tilde{r}_{t-1}\tilde{\pi}_{t-2} - 0.014_{(0.523)}\tilde{r}_{t-1}\tilde{w}_{t-2} - 0.511_{(0.000)}\tilde{r}_{t-1}\tilde{r}_{t-2} - 0.003_{(0.046)}\tilde{c}_{t-2}\tilde{\imath}_{t-2} - 0.041_{(0.000)}\tilde{c}_{t-2}\tilde{y}_{t-2} + 0.019_{(0.003)}\tilde{c}_{t-2}\tilde{l}_{t-2} + 0.051_{(0.000)}\tilde{c}_{t-2}\tilde{\pi}_{t-2} - 0.005_{(0.220)}\tilde{c}_{t-2}\tilde{w}_{t-2} + 0.009_{(0.641)}\tilde{c}_{t-2}\tilde{r}_{t-2} - 0.012_{(0.000)}\tilde{\imath}_{t-2}\tilde{y}_{t-2} - 0.002_{(0.216)}\tilde{\imath}_{t-2}\tilde{l}_{t-2} + 0.003_{(0.216)}\tilde{\imath}_{t-2}\tilde{\pi}_{t-2} + 0.001_{(0.446)}\tilde{\imath}_{t-2}\tilde{w}_{t-2} - 0.028_{(0.000)}\tilde{\imath}_{t-2}\tilde{r}_{t-2} + 0.017_{(0.005)}\tilde{y}_{t-2}\tilde{l}_{t-2} - 0.001_{(0.860)}\tilde{y}_{t-2}\tilde{\pi}_{t-2} - 0.008_{(0.028)}\tilde{y}_{t-2}\tilde{w}_{t-2} - 0.049_{(0.006)}\tilde{y}_{t-2}\tilde{r}_{t-2} + 0.012_{(0.205)}\tilde{l}_{t-2}\tilde{\pi}_{t-2} - 0.008_{(0.145)}\tilde{l}_{t-2}\tilde{w}_{t-2} - 0.012_{(0.647)}\tilde{l}_{t-2}\tilde{r}_{t-2} + 0.036_{(0.000)}\tilde{\pi}_{t-2}\tilde{w}_{t-2} + 0.048_{(0.006)}\tilde{w}_{t-2}\tilde{r}_{t-2}$
Table 12. M-S testing results for St-VAR(2). Test statistics with p-values in square brackets.
                      c_t           i_t           y_t           l_t           π_t           w_t           z_t
[2] Linearity         1.598[0.208]  2.312[0.130]  0.844[0.359]  0.003[0.957]  1.244[0.266]  0.000[0.986]  6.337[0.013]
[4] 1st-dependence    0.316[0.575]  0.249[0.619]  0.610[0.436]  1.888[0.171]  0.627[0.429]  0.146[0.703]  0.905[0.342]
[5] 1st-t-invariance  0.697[0.499]  1.157[0.316]  1.398[0.249]  0.972[0.380]  1.548[0.215]  1.268[0.284]  0.206[0.814]
[3] Heteroskedast.    0.009[0.925]  0.250[0.617]  1.320[0.252]  0.009[0.926]  3.085[0.080]  0.009[0.924]  0.075[0.785]
[4] 2nd-dependence    0.199[0.820]  0.748[0.475]  0.666[0.515]  0.384[0.682]  1.308[0.273]  0.447[0.640]  0.303[0.739]
[5] 2nd-t-invariance  0.388[0.679]  0.153[0.858]  0.537[0.585]  0.010[0.990]  1.706[0.184]  3.339[0.037]  5.142[0.010]
[1] A-D test          0.763[0.508]  0.961[0.378]  0.823[0.465]  0.841[0.452]  1.666[0.141]  1.130[0.296]  1.790[0.120]
Poudyal, N.; Spanos, A. Model Validation and DSGE Modeling. Econometrics 2022, 10, 17. https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics10020017