Econometrics doi: 10.3390/econometrics10020024

Authors: Rocco Mosconi, Paolo Paruolo

This Special Issue collects contributions related to advances in the theory and practice of Econometrics induced by the research of Katarina Juselius and Søren Johansen, whom this Special Issue aims to celebrate [...]

Econometrics doi: 10.3390/econometrics10020023

Authors: Mikio Ito, Akihiko Noda, Tatsuma Wada

A multivariate, non-Bayesian, regression-based, or feasible generalized least squares (GLS)-based approach is proposed to estimate time-varying parameter VAR models. While it is known that, for univariate models, the Kalman-smoothed estimate can alternatively be obtained by GLS, we assess the accuracy of the feasible GLS estimator compared with commonly used Bayesian estimators. Unlike the maximum likelihood estimator often used together with the Kalman filter, the possibility of the pile-up problem occurring is shown to be negligible. In addition, this approach enables us to deal with stochastic volatility models, models with a time-dependent variance–covariance matrix, and models with non-Gaussian errors that allow us to deal with abrupt changes or structural breaks in time-varying parameters.
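
The GLS equivalence the abstract builds on can be illustrated in a stripped-down univariate case: for a random-walk coefficient, stacking the observation and state equations turns smoothing into one penalized regression. A minimal sketch on simulated data (the variance ratio is treated as known, whereas a feasible version would estimate it; this is not the authors' estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
beta_true = np.cumsum(rng.normal(0, 0.1, T))      # random-walk coefficient
x = rng.normal(1.0, 1.0, T)
y = beta_true * x + rng.normal(0, 0.5, T)

# Stack y_t = x_t * b_t + e_t with b_t - b_{t-1} = v_t and apply GLS:
# minimizing ||y - X b||^2 + lam * ||D b||^2 with lam = var(e)/var(v)
# gives the same path as the Kalman smoother for this model.
lam = 0.5**2 / 0.1**2
X = np.diag(x)
D = np.diff(np.eye(T), axis=0)                    # first-difference matrix
beta_hat = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)
```

The solve call is the entire "smoother": one sparse-structured least-squares problem instead of a forward-backward recursion.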

Econometrics doi: 10.3390/econometrics10020022

Authors: Duo Qin, Sophie van Huellen, Qing Chao Wang, Thanos Moraitis

Aggregate financial conditions indices (FCIs) are constructed to fulfil two aims: (i) the FCIs should resemble non-model-based composite indices in that their composition is adequately invariant for concatenation during regular updates; (ii) the concatenated FCIs should outperform financial variables conventionally used as leading indicators in macro models. Both aims are shown to be attainable once an algorithmic modelling route is adopted to combine leading indicator modelling with the principles of partial least-squares (PLS) modelling, supervised dimensionality reduction, and backward dynamic selection. Pilot results using US data confirm the traditional wisdom that financial imbalances are more likely to induce macro impacts than routine market volatilities. They also shed light on why the popular route of principal-component-based factor analysis is ill-suited for the two aims.
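
What makes PLS "supervised", in contrast to principal components, is that the first component weights each predictor by its covariance with the target, so target-irrelevant variation is downweighted. A minimal one-component sketch on simulated data (illustrative only, not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 10
X = rng.normal(size=(n, p))
# the target loads only on the first three predictors
y = X[:, :3] @ np.array([1.0, 0.8, 0.6]) + rng.normal(0, 0.5, n)

Xc = X - X.mean(axis=0)
yc = y - y.mean()

# First PLS component: weights proportional to cov(x_j, y); predictors
# irrelevant to y get near-zero weight, unlike a PCA component.
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w                      # the supervised factor
fit = t * (t @ yc) / (t @ t)    # regress y on the single component

r2 = 1.0 - np.sum((yc - fit) ** 2) / np.sum(yc ** 2)
```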

Econometrics doi: 10.3390/econometrics10020020

Authors: Rocco Mosconi, Paolo Paruolo

This article was prepared for the Special Issue “Celebrated Econometricians: Katarina Juselius and Søren Johansen” of Econometrics. It is based on material recorded on 30–31 October 2018 in Copenhagen. It explores Katarina Juselius’ research, and discusses inter alia the following issues: equilibrium; short- and long-run behaviour; common trends; adjustment; integral and proportional control mechanisms; model building and model comparison; breaks, crises, learning; univariate versus multivariate modelling; mentoring and the gender gap in Econometrics.

Econometrics doi: 10.3390/econometrics10020021

Authors: Rocco Mosconi, Paolo Paruolo

This article was prepared for the Special Issue “Celebrated Econometricians: Katarina Juselius and Søren Johansen” of Econometrics. It is based on material recorded on 30 October 2018 in Copenhagen. It explores Søren Johansen’s research, and discusses inter alia the following issues: estimation and inference for nonstationary time series of the I(1), I(2) and fractional cointegration types; survival analysis; statistical modelling; likelihood; econometric methodology; the teaching and practice of Statistics and Econometrics.

Econometrics doi: 10.3390/econometrics10020019

Authors: Chenglong Ye, Lin Zhang, Mingxuan Han, Yanjia Yu, Bingxin Zhao, Yuhong Yang

This paper aims to better predict highly skewed auto insurance claims by combining candidate predictions. We analyze a version of the Kangaroo Auto Insurance company data and study the effects of combining different methods using five measures of prediction accuracy. The results show the following. First, when there is an outstanding (in terms of Gini Index) prediction among the candidates, the “forecast combination puzzle” phenomenon disappears. The simple average method performs much worse than the more sophisticated model combination methods, indicating that combining different methods could help us avoid performance degradation. Second, the choice of the prediction accuracy measure is crucial in defining the best candidate prediction for “low frequency and high severity” (LFHS) data. For example, mean square error (MSE) does not distinguish well between model combination methods, as the values are close. Third, the performances of different model combination methods can differ drastically. We propose using a new model combination method, named ARM-Tweedie, for such LFHS data; it benefits from an optimal rate of convergence and exhibits a desirable performance in several measures for the Kangaroo data. Fourth, overall, model combination methods improve the prediction accuracy for auto insurance claim costs. In particular, Adaptive Regression by Mixing (ARM), ARM-Tweedie, and constrained Linear Regression can improve forecast performance when there are only weak learners or when no dominant learner exists.
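
The first finding, that equal weights are dragged down by weak candidates while a performance-weighted combination is not, can be sketched on simulated skewed data. The inverse-MSE weights below are an illustrative stand-in for the paper's ARM-Tweedie weighting:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
truth = rng.gamma(shape=0.5, scale=10.0, size=n)   # skewed, claim-like target

# One strong candidate and two weak ones
strong = truth + rng.normal(0, 1.0, n)
weak1 = truth + rng.normal(0, 8.0, n)
weak2 = truth + rng.normal(0, 8.0, n)
preds = np.stack([strong, weak1, weak2])

mse = ((preds - truth) ** 2).mean(axis=1)

simple_avg = preds.mean(axis=0)                    # equal weights
w = (1.0 / mse) / (1.0 / mse).sum()                # inverse-MSE weights
weighted = w @ preds

mse_simple = ((simple_avg - truth) ** 2).mean()
mse_weighted = ((weighted - truth) ** 2).mean()
```

With a dominant candidate, the weighted combination stays close to the best learner while the simple average does not.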

Econometrics doi: 10.3390/econometrics10020018

Authors: Gaetano Perone

The COVID-19 pandemic is a serious threat to all of us. It has caused an unprecedented shock to the world's economy, and it has interrupted the lives and livelihoods of millions of people. In the last two years, a large body of literature has attempted to forecast the main dimensions of the COVID-19 outbreak using a wide set of models. In this paper, I forecast the short- to mid-term cumulative deaths from COVID-19 in 12 hard-hit big countries around the world as of 20 August 2021. The data used in the analysis were extracted from the Our World in Data COVID-19 dataset. Both non-seasonal and seasonal autoregressive integrated moving average (ARIMA and SARIMA) models were estimated. The analysis showed that: (i) ARIMA/SARIMA forecasts were sufficiently accurate in both the training and test set, always outperforming the simple alternative forecasting techniques chosen as benchmarks (Mean, Naïve, and Seasonal Naïve); (ii) SARIMA models outperformed ARIMA models in 46 out of 48 metrics in forecasting future values, i.e., on 95.8% of all the considered forecast accuracy measures (mean absolute error [MAE], mean absolute percentage error [MAPE], mean absolute scaled error [MASE], and root mean squared error [RMSE]), suggesting a clear seasonal pattern in the data; and (iii) the forecasted values from SARIMA models fitted the observed (real-time) data very well for the period 21 August 2021–19 September 2021 for almost all the countries analyzed. This article shows that SARIMA can be safely used for both short- and medium-term predictions of COVID-19 deaths. Thus, this approach can help government authorities monitor and manage the huge pressure that COVID-19 is exerting on national healthcare systems.
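
The four forecast accuracy measures named above are straightforward to compute; a small sketch on toy numbers (MASE is scaled by the in-sample one-step naïve MAE, following the usual Hyndman–Koehler definition):

```python
import numpy as np

def forecast_metrics(train, actual, forecast):
    """MAE, MAPE (in %), MASE and RMSE for a forecast of `actual`;
    MASE is scaled by the one-step naive MAE on the training sample."""
    e = actual - forecast
    mae = np.mean(np.abs(e))
    mape = 100.0 * np.mean(np.abs(e / actual))
    scale = np.mean(np.abs(np.diff(train)))   # in-sample naive MAE
    mase = mae / scale
    rmse = np.sqrt(np.mean(e ** 2))
    return {"MAE": mae, "MAPE": mape, "MASE": mase, "RMSE": rmse}

train = np.array([100.0, 110.0, 120.0, 130.0])
actual = np.array([140.0, 150.0])
naive = np.array([130.0, 130.0])    # last observed value carried forward
m = forecast_metrics(train, actual, naive)
```

A MASE above 1 means the forecast did worse, per period, than the in-sample naïve benchmark.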

Econometrics doi: 10.3390/econometrics10020017

Authors: Niraj Poudyal, Aris Spanos

The primary objective of this paper is to revisit DSGE models with a view to bringing out their key weaknesses, including statistical misspecification, non-identification of deep parameters, substantive inadequacy, weak forecasting performance, and potentially misleading policy analysis. It is argued that most of these weaknesses stem from failing to distinguish between statistical and substantive adequacy and to secure the former before assessing the latter. The paper untangles the statistical from the substantive premises of inference to delineate the above-mentioned issues and propose solutions. The discussion revolves around a typical DSGE model using US quarterly data. It is shown that this model is statistically misspecified and, when respecified to arrive at a statistically adequate model, gives rise to the Student's t VAR model. This statistical model is shown to (i) provide a sound basis for testing the DSGE overidentifying restrictions as well as probing the identifiability of the deep parameters, (ii) suggest ways to ameliorate its substantive inadequacy, and (iii) give rise to reliable forecasts and policy simulations.

Econometrics doi: 10.3390/econometrics10020016

Authors: Katarina Juselius

A theory-consistent CVAR scenario describes a set of testable regularities capturing basic assumptions of the theoretical model. Using this concept, the paper considers a standard model for exchange rate determination with forward-looking expectations and shows that all assumptions about the model's shock structure and steady-state behavior can be formulated as testable hypotheses on common stochastic trends and cointegration. The basic stationarity assumptions of the monetary model failed to obtain empirical support. They were too restrictive to explain the observed long persistent swings in the real exchange rate, the real interest rates, and the inflation and interest rate differentials.

Econometrics doi: 10.3390/econometrics10020015

Authors: Piero C. Kauffmann, Hellinton H. Takada, Ana T. Terada, Julio M. Stern

Most factor-based forecasting models for the term structure of interest rates depend on a fixed number of factor loading functions that have to be specified in advance. In this study, we relax this assumption by building a yield curve forecasting model that learns new factor decompositions directly from data for an arbitrary number of factors, combining a Gaussian linear state-space model with a neural network that generates smooth yield curve factor loadings. In order to control the model complexity, we define prior distributions with a shrinkage effect over the model parameters, and we show how to obtain computationally efficient maximum a posteriori numerical estimates using the Kalman filter and automatic differentiation. An evaluation of the model's performance on 14 years of historical data on the Brazilian yield curve shows that the proposed technique was able to obtain better overall out-of-sample forecasts than traditional approaches, such as the dynamic Nelson and Siegel model and its extensions.
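
The Kalman filter used for the MAP estimates reduces, in the simplest local-level case, to a predict/update recursion; a minimal univariate sketch (the paper's model adds neural-network factor loadings, which are not shown here):

```python
import numpy as np

def kalman_filter(y, a0, P0, sigma_obs2, sigma_state2):
    """Kalman filter for a local-level model:
    y_t = a_t + eps_t,  a_t = a_{t-1} + eta_t."""
    a, P = a0, P0
    filtered = []
    loglik = 0.0
    for yt in y:
        a_pred, P_pred = a, P + sigma_state2        # predict
        F = P_pred + sigma_obs2                     # innovation variance
        v = yt - a_pred                             # innovation
        K = P_pred / F                              # Kalman gain
        a = a_pred + K * v                          # update
        P = (1.0 - K) * P_pred
        loglik += -0.5 * (np.log(2 * np.pi * F) + v ** 2 / F)
        filtered.append(a)
    return np.array(filtered), loglik

rng = np.random.default_rng(3)
level = np.cumsum(rng.normal(0, 0.2, 300))          # latent random-walk level
y = level + rng.normal(0, 1.0, 300)
filt, ll = kalman_filter(y, a0=0.0, P0=10.0, sigma_obs2=1.0, sigma_state2=0.04)
```

The accumulated log-likelihood is what a MAP or maximum likelihood routine would differentiate with respect to the variance parameters.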

Econometrics doi: 10.3390/econometrics10020014

Authors: Vassilios Bazinas, Bent Nielsen

We propose a method to explore the causal transmission of an intervention through two endogenous variables of interest. We refer to the intervention as a catalyst variable. The method is based on the reduced-form system formed from the conditional distribution of the two endogenous variables given the catalyst. The method combines elements from instrumental variable analysis and Cholesky decomposition of structural vector autoregressions. We give conditions for uniqueness of the causal transmission.

Econometrics doi: 10.3390/econometrics10020013

Authors: Jorge González Chapela

Misclassification of a binary response variable and nonrandom sample selection are data issues frequently encountered by empirical researchers. For cases in which both issues feature simultaneously in a data set, we formulate a sample selection model for a misclassified binary outcome in which the conditional probabilities of misclassification are allowed to depend on covariates. Assuming the availability of validation data, the pseudo-maximum likelihood technique can be used to estimate the model. The performance of the estimator accounting for misclassification and sample selection is compared to that of estimators offering partial corrections. An empirical example illustrates the proposed framework.

Econometrics doi: 10.3390/econometrics10010012

Authors: Yiannis Karavias, Elias Tzavalis, Haotian Zhang

Missing data or missing values are a common phenomenon in applied panel data research and of great interest for panel data unit root testing. The standard approach in the literature is to balance the panel by removing units and/or trimming a common time period for all units. However, this approach can be costly in terms of lost information. Instead, existing panel unit root tests could be extended to the case of unbalanced panels, but this is often difficult because the missing observations affect the bias correction which is usually involved. This paper contributes to the literature in two ways: first, it extends two popular panel unit root tests to allow for missing values; second, it employs asymptotic local power functions to analytically study the impact of various missing-value methods on power. We find that zeroing out the missing observations is the method that results in the greatest test power, and that this result holds for all deterministic component specifications, such as intercepts, trends and structural breaks.

Econometrics doi: 10.3390/econometrics10010011

Authors: Andreas Lichtenberger, Joao Paulo Braga, Willi Semmler

The green bond market is emerging as an impactful financing mechanism in climate change mitigation efforts. The effectiveness of the financial market for this transition to a low-carbon economy depends on attracting investors and removing financial market roadblocks. This paper investigates the differential bond performance of green vs. non-green bonds with (1) a dynamic portfolio model that integrates negative as well as positive externality effects and (2) econometric analyses of aggregate green bond and corporate energy time-series indices, as well as a cross-sectional set of individual bonds issued between 1 January 2017 and 1 October 2020. The asset pricing model demonstrates that, in the long run, the positive externalities of green bonds benefit the economy through positive social returns. We use a deterministic and a stochastic version of the dynamic portfolio approach to obtain model-driven results and evaluate those through our empirical evidence using harmonic estimations. The econometric analysis of this study focuses on volatility and the risk–return performance (Sharpe ratio) of green and non-green bonds, and extends recent econometric studies that focused on yield differentials of green and non-green bonds. A modified Sharpe ratio analysis, cross-sectional methods, harmonic estimations, bond pairing estimations, as well as regression tree methodology, indicate that green bonds tend to show lower volatility and deliver superior Sharpe ratios (while the evidence for green premia is mixed). As a result, green bond investment can protect investors and portfolios from oil price and business cycle fluctuations, and stabilize portfolio returns and volatility. Policymakers are encouraged to make use of the financial benefits of green instruments and increase the financial flows towards sustainable economic activities to accelerate a low-carbon transition.

Econometrics doi: 10.3390/econometrics10010010

Authors: David Pacini

This note studies the criterion for identifiability in parametric models based on the minimization of the Hellinger distance and exhibits its relationship to the identifiability criterion based on the Fisher matrix. It shows that the Hellinger distance criterion serves to establish identifiability of parameters of interest, or lack of it, in situations where the criterion based on the Fisher matrix does not apply, such as models where the support of the observed variables depends on the parameter of interest or models with irregular points of the Fisher matrix. Several examples illustrating this result are provided.
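
The note's leading situation, support depending on the parameter, can be made concrete with Uniform(0, θ): the Fisher-matrix criterion is unavailable there, yet the Hellinger distance between any two distinct parameter values is strictly positive, which is the identifiability criterion at work. An illustrative check (not taken from the paper):

```python
import numpy as np

def hellinger2_uniform(theta0, theta1):
    """Squared Hellinger distance between Uniform(0, theta0) and
    Uniform(0, theta1); the closed form is 1 - sqrt(min/max)."""
    lo, hi = min(theta0, theta1), max(theta0, theta1)
    return 1.0 - np.sqrt(lo / hi)

# Numerical check of the closed form on a fine grid
grid = np.linspace(0.0, 2.0, 200001)[1:]
step = grid[1] - grid[0]
f0 = (grid <= 1.0) / 1.0          # density of Uniform(0, 1)
f1 = (grid <= 2.0) / 2.0          # density of Uniform(0, 2)
h2_num = 1.0 - np.sum(np.sqrt(f0 * f1)) * step
```

The distance is zero if and only if the two parameter values coincide, which is exactly the identifiability statement.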

Econometrics doi: 10.3390/econometrics10010009

Authors: Szabolcs Blazsek, Alvaro Escribano

We use data on the following climate variables for the period of the last 798 thousand years: global ice volume (Ice_t), atmospheric carbon dioxide level (CO2_t), and Antarctic land surface temperature (Temp_t). Those variables are cyclical and are driven by the following strongly exogenous orbital variables: eccentricity of the Earth's orbit, obliquity, and precession of the equinox. We introduce score-driven ice-age models which use robust filters of the conditional mean and variance, generalizing the updating mechanism and solving the misspecification of a recent climate–econometric model (benchmark ice-age model). The score-driven models control for omitted exogenous variables and extreme events, using more general dynamic structures and heteroskedasticity. We find that the score-driven models improve the performance of the benchmark ice-age model. We provide out-of-sample forecasts of the climate variables for the last 100 thousand years. We show that during the last 10–15 thousand years of the forecasting period, for which humanity influenced the Earth's climate, (i) the forecasts of Ice_t are above the observed Ice_t, (ii) the forecasts of the CO2_t level are below the observed CO2_t, and (iii) the forecasts of Temp_t are below the observed Temp_t. The forecasts of the benchmark ice-age model are reinforced by the score-driven models.

Econometrics doi: 10.3390/econometrics10010008

Authors: Florian Wozny

This paper studies the performance of machine learning predictions for the counterfactual analysis of air transport. It is motivated by the dynamic and universally regulated international air transport market, where ex post policy evaluations usually lack counterfactual control scenarios. As an empirical example, this paper studies the impact of the COVID-19 pandemic on airfares in 2020 as the difference between predicted and actual airfares. Airfares are important from a policy maker's perspective, as air transport is crucial for mobility. From a methodological point of view, airfares are also of particular interest given their dynamic character, which makes them challenging to predict. This paper adopts a novel multi-step prediction technique with walk-forward validation to increase the transparency of the model's predictive quality. For the analysis, the universe of worldwide airline bookings is combined with detailed airline information. The results show that machine learning with walk-forward validation is powerful for the counterfactual analysis of airfares.
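
Walk-forward validation can be sketched as expanding-window train/test splits, with the model refit at each step; a generic sketch (not the paper's exact setup):

```python
import numpy as np

def walk_forward_splits(n, initial, horizon=1):
    """Expanding-window walk-forward splits: train on observations [0, t),
    test on [t, t + horizon), moving t forward one period at a time."""
    for t in range(initial, n - horizon + 1):
        yield np.arange(t), np.arange(t, t + horizon)

# With 10 periods, 6 initial training points and a 2-step horizon,
# the model is refit and re-evaluated three times.
splits = list(walk_forward_splits(10, initial=6, horizon=2))
```

Unlike random cross-validation, no test observation ever precedes its training window, which is what makes the out-of-sample record honest for time series.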

Econometrics doi: 10.3390/econometrics10010007

Authors: Econometrics Editorial Office

Rigorous peer review is the basis of high-quality academic publishing [...]

Econometrics doi: 10.3390/econometrics10010006

Authors: Gianmaria Niccodemi, Tom Wansbeek

In linear regression analysis, the estimator of the variance of the estimated regression coefficients should take into account the clustered nature of the data, if present, since using the standard textbook formula will in that case lead to a severe downward bias in the standard errors. This idea of a cluster-robust variance estimator (CRVE) generalizes the classical heteroskedasticity-robust estimator to clusters. Its justification is asymptotic in the number of clusters. Although an improvement, a considerable bias can remain when the number of clusters is low, all the more so when regressors are correlated within clusters. In order to address these issues, two improved methods were proposed; one method, which we call CR2VE, was based on bias-reduced linearization, while the other, CR3VE, can be seen as a jackknife estimator. The latter is unbiased under very strict conditions, in particular equal cluster sizes. To relax this condition, we introduce in this paper CR3VE-λ, a generalization of CR3VE where the cluster size is allowed to vary freely between clusters. We illustrate the performance of CR3VE-λ through simulations and show that, especially when cluster sizes vary widely, it can outperform the other commonly used estimators.
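
The basic CRVE sandwich that CR2VE and CR3VE refine can be sketched directly; the simulation below uses a purely cluster-level error, for which the textbook (naive) variance of the intercept is severely downward biased. This is the plain CR0 form only, without the paper's corrections:

```python
import numpy as np

def cluster_robust_vcov(X, resid, clusters):
    """Basic (CR0) cluster-robust sandwich:
    (X'X)^-1 [sum_g X_g' u_g u_g' X_g] (X'X)^-1."""
    bread = np.linalg.inv(X.T @ X)
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(clusters):
        idx = clusters == g
        score = X[idx].T @ resid[idx]    # cluster-level score
        meat += np.outer(score, score)
    return bread @ meat @ bread

rng = np.random.default_rng(4)
G, m = 50, 20                               # 50 clusters of 20 observations
clusters = np.repeat(np.arange(G), m)
x = rng.normal(size=G * m)
u = np.repeat(rng.normal(0.0, 1.0, G), m)   # purely cluster-level error
X = np.column_stack([np.ones(G * m), x])
y = 1.0 + 2.0 * x + u
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

V_cluster = cluster_robust_vcov(X, resid, clusters)
sigma2 = resid @ resid / (G * m - 2)
V_naive = sigma2 * np.linalg.inv(X.T @ X)   # textbook formula, biased here
```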

Econometrics doi: 10.3390/econometrics10010005

Authors: Ron Mittelhammer, George Judge, Miguel Henry

In this paper, we introduce a flexible and widely applicable nonparametric entropy-based testing procedure that can be used to assess the validity of simple hypotheses about a specific parametric population distribution. The testing methodology relies on the characteristic function of the population probability distribution being tested and is attractive in that, regardless of the null hypothesis being tested, it provides a unified framework for conducting such tests. The testing procedure is also computationally tractable and relatively straightforward to implement. In contrast to some alternative test statistics, the proposed entropy test is free from user-specified kernel and bandwidth choices, idiosyncratic and complex regularity conditions, and/or choices of evaluation grids. Several simulation exercises were performed to document the empirical performance of our proposed test, including a regression example that is illustrative of how, in some contexts, the approach can be applied to composite hypothesis-testing situations via data transformations. Overall, the testing procedure exhibits notable promise, with power increasing appreciably with sample size for a number of alternative distributions when contrasted with hypothesized null distributions. Possible general extensions of the approach to composite hypothesis-testing contexts and directions for future work are also discussed.
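
A core ingredient of characteristic-function-based testing is the discrepancy between the empirical characteristic function and the hypothesized one; a minimal sketch against a standard normal null (the paper's entropy statistic is more elaborate than this illustrative distance):

```python
import numpy as np

def ecf_distance(x, t_grid):
    """Mean squared distance between the empirical characteristic function
    of x and the standard normal CF exp(-t^2 / 2) over t_grid."""
    ecf = np.array([np.mean(np.exp(1j * t * x)) for t in t_grid])
    cf0 = np.exp(-t_grid ** 2 / 2)
    return float(np.mean(np.abs(ecf - cf0) ** 2))

rng = np.random.default_rng(5)
t_grid = np.linspace(-3.0, 3.0, 61)
d_null = ecf_distance(rng.normal(size=2000), t_grid)            # H0 true
d_alt = ecf_distance(rng.exponential(size=2000) - 1.0, t_grid)  # H0 false
```

Under the null the distance shrinks at rate 1/n, while under the alternative it converges to a positive constant, which is what gives such tests their power.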

Econometrics doi: 10.3390/econometrics10010004

Authors: Chung-Yim Yiu, Ka-Shing Cheung

The age–period–cohort problem has been studied for decades but without resolution. There have been many suggested solutions to make the three effects estimable, but these solutions mostly exploit non-linear specifications, which may suffer from misspecification or omitted variable bias. This paper is a practice-oriented study that aims to empirically disentangle age–period–cohort effects by providing external information on the actual depreciation of the housing structure rather than taking age as a proxy. It is based on appraisals of the improvement values of properties in New Zealand to estimate the age-depreciation effect. This research method provides a novel means of solving the identification problem of the age, period, and cohort trilemma. Based on about half a million housing transactions from 1990 to 2019 in the Auckland Region of New Zealand, the results show that traditional hedonic price models using age and time dummy variables can result, ceteris paribus, in implausibly positive depreciation rates. The use of the improvement values model can help improve the accuracy of home value assessment and reduce estimation biases. This method also has important practical implications for property valuations.
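
The identification problem rests on the exact identity age = period - cohort, which makes a design matrix containing all three rank-deficient; a short sketch of why external depreciation information is needed (illustrative data):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
cohort = rng.integers(1950, 2000, n)    # year built
period = rng.integers(2000, 2020, n)    # year sold
age = period - cohort                   # exact identity: age = period - cohort

# A hedonic design with intercept, age, period and cohort is rank-deficient,
# so the three effects cannot be separated without outside information.
X = np.column_stack([np.ones(n), age, period, cohort])
rank = np.linalg.matrix_rank(X)
```

Four columns but rank three: any attempt to estimate all three effects by OLS hits a singular normal-equations matrix, which is the trilemma in linear-algebra form.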

Econometrics doi: 10.3390/econometrics10010003

Authors: Philip Hans Franses, Max Welz

We propose a simple and reproducible methodology to create a single equation forecasting model (SEFM) for low-frequency macroeconomic variables. Our methodology is illustrated by forecasting annual real GDP growth rates for 52 African countries, where the data are obtained from the World Bank and start in 1960. The models include lagged growth rates of other countries, as well as a cointegration relationship to capture potential common stochastic trends. With a few selection steps, our methodology quickly arrives at a reasonably small forecasting model per country. Compared with benchmark models, the single equation forecasting models seem to perform quite well.
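
The ingredients named, own lags plus lagged growth rates of other countries, can be sketched as a single OLS equation on simulated data (purely illustrative; the paper adds a cointegration relationship and selection steps):

```python
import numpy as np

rng = np.random.default_rng(9)
T = 60
g_other = rng.normal(2.0, 1.0, T)       # another country's growth rate
g = np.empty(T)
g[0] = 2.0
for t in range(1, T):
    g[t] = 0.5 + 0.4 * g[t - 1] + 0.3 * g_other[t - 1] + rng.normal(0, 0.5)

# Single-equation model: own lag plus the other country's lagged growth
X = np.column_stack([np.ones(T - 1), g[:-1], g_other[:-1]])
coef, *_ = np.linalg.lstsq(X, g[1:], rcond=None)
forecast_next = coef @ np.array([1.0, g[-1], g_other[-1]])
```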

Econometrics doi: 10.3390/econometrics10010002

Authors: Jennifer L. Castle, Jurgen A. Doornik, David F. Hendry

Through its emissions of greenhouse gases, economic activity is the source of climate change, which affects pandemics that in turn can badly impact economies. Across the three highly interacting disciplines in our title, time-series observations are measured at vastly different data frequencies: very low frequency at 1000-year intervals for paleoclimate, through annual, monthly and intra-daily for current climate; weekly and daily for pandemic data; annual, quarterly and monthly for economic data; and seconds or nano-seconds in finance. Nevertheless, there are important commonalities to economic, climate and pandemic time series. First, time series in all three disciplines are subject to non-stationarities from evolving stochastic trends and sudden distributional shifts, as well as data revisions and changes to data measurement systems. Next, all three have imperfect and incomplete knowledge of their data generating processes from changing human behaviour, so must search for reasonable empirical modeling approximations. Finally, all three need forecasts of likely future outcomes to plan and adapt as events unfold, albeit again over very different horizons. We consider how these features shape the formulation and selection of forecasting models to tackle their common data features yet distinct problems.

Econometrics doi: 10.3390/econometrics10010001

Authors: Myoung-Jin Keay

This paper presents a method for estimating the average treatment effect (ATE) in an exponential endogenous switching model where the coefficients of covariates in the structural equation are random and correlated with the binary treatment variable. The estimating equations are derived under some mild identifying assumptions. We find that the ATE is identified, although each coefficient in the structural model may not be. Tests for the endogeneity of treatment and for model selection are provided. Monte Carlo simulations show that, in large samples, the proposed estimator has a smaller bias and a larger variance than methods that do not take the random coefficients into account. The method is applied to Oregon health insurance data.

Econometrics doi: 10.3390/econometrics9040047

Authors: Martin Huber

The estimation of the causal effect of an endogenous treatment based on an instrumental variable (IV) is often complicated by the non-observability of the outcome of interest due to attrition, sample selection, or survey non-response. To tackle the latter problem, the latent ignorability (LI) assumption imposes that attrition/sample selection is independent of the outcome conditional on the treatment compliance type (i.e., how the treatment behaves as a function of the instrument), the instrument, and possibly further observed covariates. As a word of caution, this note formally discusses the strong behavioral implications of LI in rather standard IV models. We also provide an empirical illustration based on the Job Corps experimental study, in which the sensitivity of the estimated program effect to LI and alternative assumptions about outcome attrition is investigated.

Econometrics doi: 10.3390/econometrics9040046

Authors: David H. Bernstein, Andrew B. Martinez

The COVID-19 pandemic resulted in the most abrupt changes in U.S. labor force participation and unemployment since the Second World War, with different consequences for men and women. This paper models the U.S. labor market to help interpret the pandemic's effects. After replicating and extending Emerson's (2011) model of the labor market, we formulate a joint model of male and female unemployment and labor force participation rates for 1980–2019 and use it to forecast into the pandemic to understand the pandemic's labor market consequences. Gender-specific differences were particularly large at the pandemic's outset; lower labor force participation persists.

Econometrics doi: 10.3390/econometrics9040045

Authors: Xin Jin, Jia Liu, Qiao Yang

This paper suggests a new approach to evaluate realized covariance (RCOV) estimators via their predictive power on return density. By jointly modeling returns and RCOV measures under a Bayesian framework, the predictive density of returns and ex-post covariance measures are bridged. The forecast performance of a covariance estimator can be assessed according to its improvement in return density forecasting. Empirical applications to equity data show that several RCOV estimators consistently perform better than others and emphasize the importance of RCOV selection in covariance modeling and forecasting.
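
The baseline realized covariance estimator that the candidate RCOV measures refine is simply the sum of outer products of intraday return vectors; a minimal sketch on simulated intraday data:

```python
import numpy as np

def realized_covariance(intraday_returns):
    """Baseline RCOV estimator: sum over intraday intervals of r_t r_t'."""
    R = np.asarray(intraday_returns)    # shape (M, d)
    return R.T @ R

rng = np.random.default_rng(7)
true_cov = np.array([[1.0, 0.5], [0.5, 2.0]])
L = np.linalg.cholesky(true_cov)
M = 7800                                # number of intraday intervals
r = (rng.normal(size=(M, 2)) @ L.T) / np.sqrt(M)   # daily covariance = true_cov
rcov = realized_covariance(r)
```

With many intervals and no microstructure noise, the estimator converges to the daily covariance; the refined estimators compared in the paper address noise and asynchronous trading, which this sketch omits.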

Econometrics doi: 10.3390/econometrics9040044

Authors: Kimon Ntotsis, Alex Karagrigoriou, Andreas Artemiou

When it comes to variable interpretation, multicollinearity is among the biggest issues that must be surmounted, especially in this new era of Big Data Analytics. Since even moderate multicollinearity can prevent proper interpretation, special diagnostics must be recommended and implemented for identification purposes. Nonetheless, in the areas of econometrics and statistics, among other fields, these diagnostics are controversial concerning their “successfulness”. It has been remarked that they frequently fail to do proper model assessment due to information complexity, resulting in model misspecification. This work proposes and investigates a robust and easily interpretable methodology, termed the Elastic Information Criterion, capable of capturing multicollinearity rather accurately and effectively and thus providing a proper model assessment. The performance is investigated via simulated and real data.
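
A standard diagnostic in this area, to which the proposed criterion is an alternative, is the variance inflation factor, VIF_j = 1 / (1 - R²_j) from regressing predictor j on the others; a short sketch (illustrative, not the paper's Elastic Information Criterion):

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of column j: 1 / (1 - R^2), where R^2 is
    from regressing X[:, j] on the remaining columns plus an intercept."""
    y = X[:, j]
    Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ coef
    r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(8)
n = 400
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(0, 0.1, n)   # nearly collinear with x1
x3 = rng.normal(size=n)           # unrelated to the others
X = np.column_stack([x1, x2, x3])
```

A common rule of thumb flags VIF values above 10 as problematic, which the near-collinear pair here far exceeds while the unrelated column stays near 1.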

Econometrics doi: 10.3390/econometrics9040043

Authors: Zheng Fang, Jianying Xie, Ruiming Peng, Sheng Wang

Climate finance is growing popular in addressing challenges of climate change because it controls the funding and resources to emission entities and promotes green manufacturing. In this study, we take PM2.5, PM10, SO2, NO2, CO, and O3 as the target pollutants in the atmosphere and use a deep neural network to enhance the regression analysis in order to investigate the relationship between air pollution and the stock prices of the targeted manufacturers. We also conduct a time series analysis based on air pollution and heavy industry manufacturing in China, as the country is facing serious air pollution problems. Our study uses two-dimensional Convolutional Long Short-Term Memory (ConvLSTM2D) to extract the features from air pollution and enhance the time series regression in the financial market. The main contribution of our paper is discovering a feature term that impacts the stock price in the financial market, particularly for companies that are highly impacted by the local environment. By considering this environmental factor, we obtain a more accurate stock price prediction model than traditional time series models. The experimental results suggest that there is a negative linear relationship between air pollution and the stock market, which demonstrates that air pollution has a negative effect on the financial market. This pressures manufacturers to improve their emission recycling and encourages them to invest in green manufacturing; otherwise, the drop in stock price will impact the company's funding process.

Econometrics doi: 10.3390/econometrics9040042

Authors: Albert Okunade, Ahmad Reshad Osmani, Toluwalope Ayangbayi, Adeyinka Kevin Okunade

Obesity, as a health and social problem with rising prevalence and soaring economic cost, is increasingly drawing scholarly and public policy attention. While many studies have suggested that infant breastfeeding protects against childhood obesity, empirical evidence on this causal relationship is fragile. Using the health capital development theory, this study exploited multiple data sources from the U.S. and a three-way error components model (ECM) with a jackknife resampling plan to estimate the effect of in-hospital breastfeeding initiation and breastfeeding for durations of 3, 6, and 12 months on the prevalence of obesity during teenage years. The main finding was that a 1% rise in the in-hospital breastfeeding initiation rate reduces the teenage obesity prevalence rate by 1.7% (9.6% of a standard deviation). The magnitude of this effect declines as the infant breastfeeding duration lengthens; for example, the 12-month infant breastfeeding duration rate is associated with a 0.53% (3.7% of a standard deviation) reduction in obesity prevalence in the teenage years (9th to 12th grades). The study findings agree with both the behavioral and physiological theories on the long-term effects of breastfeeding, and have timely implications for public policies promoting infant breastfeeding to reduce the economic burden of teenage and later adult-stage obesity prevalence rates.

]]>Econometrics doi: 10.3390/econometrics9040041

Authors: Mustafa Salamh Liqun Wang

Many financial and economic time series exhibit nonlinear patterns or relationships. However, most statistical methods for time series analysis are developed for mean-stationary processes and require transformation of the data, such as differencing. In this paper, we study a dynamic regression model with a nonlinear, time-varying mean function and autoregressive conditionally heteroscedastic errors. We propose an estimation approach based on the first two conditional moments of the response variable, which does not require specification of the error distribution. Strong consistency and asymptotic normality of the proposed estimator are established under a strong-mixing condition, so that the results apply to both stationary and mean-nonstationary processes. Moreover, the proposed approach is shown to be superior to the commonly used quasi-likelihood approach, and the efficiency gain is significant when the (conditional) error distribution is asymmetric. We demonstrate through a real data example that the proposed method can identify a more accurate model than the quasi-likelihood method.

]]>Econometrics doi: 10.3390/econometrics9040040

Authors: Kjartan Kloster Osmundsen Tore Selland Kleppe Roman Liesenfeld Atle Oglend

We propose a State-Space Model (SSM) for commodity prices that combines the competitive storage model with a stochastic trend. This approach fits into the economic rationality of storage decisions and adds to previous deterministic trend specifications of the storage model. For a Bayesian posterior analysis of the SSM, which is nonlinear in the latent states, we used a Markov chain Monte Carlo algorithm based on the particle marginal Metropolis–Hastings approach. An empirical application to four commodity markets showed that the stochastic trend SSM is favored over deterministic trend specifications. The stochastic trend SSM identifies structural parameters that differ from those for deterministic trend specifications. In particular, the estimated price elasticities of demand are typically larger under the stochastic trend SSM.

]]>Econometrics doi: 10.3390/econometrics9040039

Authors: J. Eduardo Vera-Valdés

This paper used cross-sectional aggregation as the inspiration for a model with long-range dependence that arises in actual data. One of the advantages of our model is that it is less brittle than fractionally integrated processes. In particular, we showed that the antipersistent phenomenon is not present for the cross-sectionally aggregated process. We proved that this has implications for estimators of long-range dependence in the frequency domain, which will be misspecified for nonfractional long-range-dependent processes with negative degrees of persistence. As an application, we showed how we can approximate a fractionally differenced process using theoretically-motivated cross-sectional aggregated long-range-dependent processes. An example with temperature data showed that our framework provides a better fit to the data than the fractional difference operator.
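The benchmark fractional difference operator (1-L)^d admits a simple recursion for its moving-average coefficients, whose hyperbolic (rather than geometric) decay is the signature of long-range dependence. A minimal sketch, independent of the paper's data; the choices of d and truncation lag are arbitrary illustrations:

```python
# Coefficients of (1 - L)^(-d) = sum_k psi_k L^k, with psi_0 = 1 and the
# recursion psi_k = psi_{k-1} * (k - 1 + d) / k. Their slow, hyperbolic
# decay is what distinguishes long memory from short-memory (ARMA) dynamics.

def frac_diff_weights(d, n_lags):
    psi = [1.0]
    for k in range(1, n_lags + 1):
        psi.append(psi[-1] * (k - 1 + d) / k)
    return psi

w = frac_diff_weights(0.4, 5)
print([round(v, 4) for v in w])  # decay ratios psi_k/psi_{k-1} rise toward 1
```

For d = 0.4 the successive ratios are 0.4, 0.7, 0.8, 0.85, 0.88, approaching one; a geometric (short-memory) filter would keep a constant ratio below one.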

]]>Econometrics doi: 10.3390/econometrics9040038

Authors: J. M. Calabuig E. Jiménez-Fernández E. A. Sánchez-Pérez S. Manzanares

One of the main challenges posed by the healthcare crisis generated by COVID-19 is to avoid hospital collapse. The occupation of hospital beds by patients diagnosed with COVID-19 implies diverting or suspending their use for other specialities. It is therefore useful to have information that allows efficient management of future hospital occupancy. This article presents a robust and simple model that captures certain characteristics of the dynamic process of bed occupancy by patients with COVID-19 in a hospital by means of an adaptation of Kaplan-Meier survival curves. To check this model, the evolution of the COVID-19 hospitalization process of two hospitals between 11 March and 15 June 2020 is analyzed. The information provided by the Kaplan-Meier curves allows forecasts of hospital occupancy in subsequent periods. The results show an average deviation of 2.45 patients between predictions and actual occupancy in the period analyzed.
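The adaptation rests on the standard Kaplan-Meier estimator, which handles patients still hospitalised at the end of the sample as censored observations. A minimal sketch with hypothetical lengths of stay, not the hospitals' actual records:

```python
# Kaplan-Meier survival curve for right-censored durations: at each distinct
# discharge time t, multiply the running survival probability by
# (1 - d_t / n_t), where d_t is the number of discharges and n_t the number
# of patients still at risk just before t.

def kaplan_meier(durations, observed):
    """Return (time, survival) pairs; observed=0 marks a censored stay."""
    n_at_risk = len(durations)
    surv, curve = 1.0, []
    events = sorted(zip(durations, observed))
    i = 0
    while i < len(events):
        t, d, c = events[i][0], 0, 0
        while i < len(events) and events[i][0] == t:
            d += events[i][1]          # discharges at time t
            c += 1 - events[i][1]      # censored at time t
            i += 1
        if d > 0:
            surv *= 1.0 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= d + c
    return curve

# Hypothetical stays in days; the 10-day patient is still in hospital (censored).
stays    = [3, 5, 5, 8, 10, 12]
observed = [1, 1, 1, 1, 0, 1]
for t, s in kaplan_meier(stays, observed):
    print(t, round(s, 3))
```

Summing the estimated survival curve over future days for the current stock of patients gives the kind of occupancy forecast the article evaluates.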

]]>Econometrics doi: 10.3390/econometrics9040037

Authors: C. Vladimir Rodríguez-Caballero J. Eduardo Vera-Valdés

This paper tests whether air pollution serves as a carrier for SARS-CoV-2 by measuring the effect of daily exposure to air pollution on its spread, using panel data models that incorporate a possible commonality between municipalities. We show that contemporary exposure to particulate matter is not the main driver behind the increasing number of cases and deaths in the Mexico City Metropolitan Area. Remarkably, we also find that the cross-dependence between municipalities in the Mexican region is highly correlated with public mobility, which plays the leading role behind the rhythm of contagion. Our findings are particularly revealing given that the Mexico City Metropolitan Area did not experience a decrease in air pollution during COVID-19-induced lockdowns.

]]>Econometrics doi: 10.3390/econometrics9040036

Authors: Chad Fulton Kirstin Hubrich

We analyze real-time forecasts of US inflation over 1999Q3–2019Q4 and subsamples, investigating whether and how forecast accuracy and robustness can be improved with additional information such as expert judgment, additional macroeconomic variables, and forecast combination. The forecasts include those from the Federal Reserve Board’s Tealbook, the Survey of Professional Forecasters, dynamic models, and combinations thereof. While simple models remain hard to beat, additional information does improve forecasts, especially after 2009. Notably, forecast combination improves forecast accuracy over simpler models and robustifies against bad forecasts; aggregating forecasts of inflation’s components can improve performance compared to forecasting the aggregate directly; and judgmental forecasts, which may incorporate larger and more timely datasets in conjunction with model-based forecasts, improve forecasts at short horizons.
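The simplest of the combination devices studied, an equal-weight average of competing forecast paths, can be sketched in a few lines. The numbers below are hypothetical, not Tealbook or SPF figures:

```python
# Equal-weight forecast combination: average the competing forecasts at each
# horizon. A single bad forecast is diluted by the others, which is the
# "robustification" property the abstract refers to.

def combine(forecasts):
    """Equal-weight combination of a list of forecast paths."""
    return [sum(col) / len(col) for col in zip(*forecasts)]

model_a = [2.1, 2.4, 2.0]   # e.g. a dynamic model's inflation path
model_b = [1.9, 2.0, 2.2]   # e.g. a judgmental/survey path
print(combine([model_a, model_b]))  # → [2.0, 2.2, 2.1]
```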

]]>Econometrics doi: 10.3390/econometrics9040035

Authors: Michael Creel

This paper studies method of simulated moments (MSM) estimators that are implemented using Bayesian methods, specifically Markov chain Monte Carlo (MCMC). Motivation and theory for the methods is provided by Chernozhukov and Hong (2003). The paper shows, experimentally, that confidence intervals using these methods may have coverage which is far from the nominal level, a result which has parallels in the literature that studies overidentified GMM estimators. A neural network may be used to reduce the dimension of an initial set of moments to the minimum number that maintains identification, as in Creel (2017). When MSM-MCMC estimation and inference is based on such moments, and uses a continuously updating criterion function, confidence intervals have statistically correct coverage in all cases studied. The methods are illustrated by application to several test models, including a small DSGE model, and to a jump-diffusion model for returns of the S&amp;P 500 index.

]]>Econometrics doi: 10.3390/econometrics9030034

Authors: S. Yanki Kalfa Jaime Marquez

The three golden rules of econometrics are “test, test, and test” (Hendry 1980, p. 403). The current paper applies that approach to model the forecasts of the Federal Open Market Committee over 1992–2019 and to forecast those forecasts themselves. Monetary policy is forward-looking, and as part of the FOMC’s effort toward transparency, the FOMC publishes its (forward-looking) economic projections. The overall views on the economy of the FOMC participants, as characterized by the median of their projections for inflation, unemployment, and the Fed’s policy rate, are themselves predictable from information publicly available at the time of the FOMC’s meeting. Their projections also reveal systematic behavior on the part of the FOMC’s participants.

]]>Econometrics doi: 10.3390/econometrics9030033

Authors: Philippe Goulet Coulombe Maximilian Göbel

Stips et al. (2016) use information flows (Liang (2008, 2014)) to establish causality from various forcings to global temperature. We show that the formulas being used hinge on a simplifying assumption that is nearly always rejected by the data. We propose the well-known forecast error variance decomposition based on a Vector Autoregression as an adequate measure of information flow, and find that most results in Stips et al. (2016) cannot be corroborated. Then, we discuss which modeling choices (e.g., the choice of CO2 series and assumptions about simultaneous relationships) may help in extracting credible estimates of causal flows and the transient climate response simply by looking at the joint dynamics of two climatic time series.

]]>Econometrics doi: 10.3390/econometrics9030032

Authors: Dimitrios V. Vougas

No Prais–Winsten algorithm is available for regression with AR(2) or higher-order errors, and the existing algorithm for AR(1) errors is not fully justified or is implemented incorrectly (and is thus inefficient). This paper addresses both issues, providing an accurate, computationally fast, and inexpensive generic zig-zag algorithm.
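For the classical AR(1) case, the zig-zag idea can be illustrated as alternating between GLS on rho-transformed data and re-estimation of rho from the residuals. This is a toy sketch of that textbook iteration on simulated data, not the paper's generic AR(p) algorithm:

```python
# Prais-Winsten-style zig-zag for y_t = a + b*x_t + u_t, u_t = rho*u_{t-1} + e_t.
# Step (i): OLS on the rho-transformed data, keeping the first observation
# scaled by sqrt(1 - rho^2). Step (ii): re-estimate rho from the residuals.
# Repeat until rho converges. All data below are simulated for illustration.
import random

def prais_winsten(x, y, tol=1e-10, max_iter=200):
    n, rho, a, b = len(x), 0.0, 0.0, 0.0
    for _ in range(max_iter):
        w = (1.0 - rho * rho) ** 0.5
        c  = [w] + [1.0 - rho] * (n - 1)                        # intercept column
        xs = [w * x[0]] + [x[t] - rho * x[t - 1] for t in range(1, n)]
        ys = [w * y[0]] + [y[t] - rho * y[t - 1] for t in range(1, n)]
        # two-regressor OLS without intercept via 2x2 normal equations
        a11 = sum(v * v for v in c)
        a12 = sum(u * v for u, v in zip(c, xs))
        a22 = sum(v * v for v in xs)
        b1  = sum(u * v for u, v in zip(c, ys))
        b2  = sum(u * v for u, v in zip(xs, ys))
        det = a11 * a22 - a12 * a12
        a = (a22 * b1 - a12 * b2) / det
        b = (a11 * b2 - a12 * b1) / det
        e = [y[t] - a - b * x[t] for t in range(n)]
        rho_new = (sum(e[t] * e[t - 1] for t in range(1, n))
                   / sum(e[t - 1] ** 2 for t in range(1, n)))
        if abs(rho_new - rho) < tol:
            rho = rho_new
            break
        rho = rho_new
    return a, b, rho

# Simulate y = 1 + 2x + u with AR(1) errors, rho = 0.6.
random.seed(7)
n = 400
x = [random.uniform(0, 10) for _ in range(n)]
u, y = 0.0, []
for t in range(n):
    u = 0.6 * u + random.gauss(0, 1)
    y.append(1.0 + 2.0 * x[t] + u)
a, b, r = prais_winsten(x, y)
print(round(a, 2), round(b, 2), round(r, 2))  # estimates near (1, 2, 0.6)
```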

]]>Econometrics doi: 10.3390/econometrics9030031

Authors: Massimo Franchi Paolo Paruolo

This paper discusses the notion of cointegrating space for linear processes integrated of any order. It first shows that the notions of (polynomial) cointegrating vectors and of root functions coincide. Second, it discusses how the cointegrating space can be defined (i) as a vector space of polynomial vectors over complex scalars, (ii) as a free module of polynomial vectors over scalar polynomials, or finally (iii) as a vector space of rational vectors over rational scalars. Third, it shows that a canonical set of root functions can be used as a basis of the various notions of cointegrating space. Fourth, it reviews results on how to reduce polynomial bases to minimal order—i.e., minimal bases. The application of these results to Vector AutoRegressive processes integrated of order 2 is found to imply the separation of polynomial cointegrating vectors from non-polynomial ones.

]]>Econometrics doi: 10.3390/econometrics9030030

Authors: Fragiskos Archontakis Rocco Mosconi

We showcase the impact of Katarina Juselius and Søren Johansen’s contribution to econometrics using bibliometric data on citations from 1989 to 2017, extracted from the Web of Science (WoS) database. Our purpose is to analyze the impact of KJ and SJ’s ideas on applied and methodological research in econometrics. To this aim, starting from WoS data, we derived two composite indices designed to disentangle the authors’ impact on applied research from their impact on methodological research. As of 2017, the number of applied citing papers per quarter had not yet reached its peak; conversely, the peak in the methodological literature seems to have been reached around 2000, although the shape of the trajectory is very flat after the peak. We analyzed the data using a multivariate dynamic version of the well-known Bass model. Our estimates suggest that the methodological literature is mainly driven by “innovators”, whereas “imitators” are relatively more important in the applied literature: this might explain the different locations of the peaks. We also find that, in the literature referring to KJ and SJ, the “cross-fertilization” between methodological and applied research is statistically significant and bi-directional.

]]>Econometrics doi: 10.3390/econometrics9030029

Authors: Federico Bandi Alex Maynard Hyungsik Roger Moon Benoit Perron

Peter Phillips has had a tremendous impact on econometric theory and practice [...]

]]>Econometrics doi: 10.3390/econometrics9030028

Authors: Vincenzo Candila

Recently, the world of cryptocurrencies has experienced an undoubted increase in interest. Since the first cryptocurrency appeared in 2009 in the aftermath of the Great Recession, the popularity of digital currencies has, year by year, risen continuously. As of February 2021, there are more than 8525 cryptocurrencies with a market value of approximately USD 1676 billion. These particular assets can be used to diversify the portfolio as well as for speculative actions. For this reason, investigating the daily volatility and co-volatility of cryptocurrencies is crucial for investors and portfolio managers. In this work, the interdependencies among a panel of the most traded digital currencies are explored and evaluated from statistical and economic points of view. Taking advantage of the monthly Google queries (which appear to be the factors driving the price dynamics) on cryptocurrencies, we adopted a mixed-frequency approach within the Dynamic Conditional Correlation (DCC) model. In particular, we introduced the Double Asymmetric GARCH–MIDAS model in the DCC framework.

]]>Econometrics doi: 10.3390/econometrics9030027

Authors: Arifatus Solikhah Heri Kuswanto Nur Iriawan Kartika Fithriasari

We generalize the Gaussian Mixture Autoregressive (GMAR) model to the Fisher’s z Mixture Autoregressive (ZMAR) model for modeling nonlinear time series. The model consists of a mixture of K-component Fisher’s z autoregressive models with the mixing proportions changing over time. This model can capture time series with both heteroskedasticity and multimodal conditional distribution, using Fisher’s z distribution as an innovation in the MAR model. The ZMAR model is classified as nonlinearity in the level (or mode) model because the mode of the Fisher’s z distribution is stable in its location parameter, whether symmetric or asymmetric. Using the Markov Chain Monte Carlo (MCMC) algorithm, e.g., the No-U-Turn Sampler (NUTS), we conducted a simulation study to investigate the model performance compared to the GMAR model and Student t Mixture Autoregressive (TMAR) model. The models are applied to the daily IBM stock prices and the monthly Brent crude oil prices. The results show that the proposed model outperforms the existing ones, as indicated by the Pareto-Smoothed Importance Sampling Leave-One-Out cross-validation (PSIS-LOO) minimum criterion.

]]>Econometrics doi: 10.3390/econometrics9030026

Authors: Jennifer L. Castle Jurgen A. Doornik David F. Hendry

We investigate forecasting in models that condition on variables for which future values are unknown. We consider the role of the significance level because it guides the binary decisions whether to include or exclude variables. The analysis is extended by allowing for a structural break, either in the first forecast period or just before. Theoretical results are derived for a three-variable static model, but generalized to include dynamics and many more variables in the simulation experiment. The results show that the trade-off for selecting variables in forecasting models in a stationary world, namely that variables should be retained if their noncentralities exceed unity, still applies in settings with structural breaks. This provides support for model selection at looser than conventional settings, albeit with many additional features explaining the forecast performance, and with the caveat that retaining irrelevant variables that are subject to location shifts can worsen forecast performance.

]]>Econometrics doi: 10.3390/econometrics9020025

Authors: Yuanyuan Deng Hugo Benítez-Silva

Medicare is one of the largest federal social insurance programs in the United States and the secondary payer for Medicare beneficiaries covered by employer-provided health insurance (EPHI). However, an increasing number of individuals are delaying their Medicare enrollment when they first become eligible at age 65. Using administrative data from the Medicare Current Beneficiary Survey (MCBS), this paper estimates the effects of EPHI, employment, and delays in Medicare enrollment on Medicare costs. Given the administrative nature of the data, we are able to disentangle and estimate the Medicare as secondary payer (MSP) effect and the work effects on Medicare costs, as well as to construct delay enrollment indicators. Using Heckman’s sample selection model, we estimate that MSP and being employed are associated with a lower probability of observing positive Medicare spending and a lower level of Medicare spending. This paper quantifies annual savings of $5.37 billion from MSP and being employed. Delays in Medicare enrollment generate additional annual savings of $10.17 billion. Owing to the links between employment, health insurance coverage, and Medicare costs presented in this research, our findings may be of interest to policy makers who should take into account the consequences of reforms on the Medicare system.

]]>Econometrics doi: 10.3390/econometrics9020024

Authors: Hildegart Ahumada Magdalena Cornejo

We analyze the influence of climate change on soybean yields in a multivariate time-series framework for a major soybean producer and exporter—Argentina. Long-run relationships are found in partial systems involving climatic, technological, and economic factors. Automatic model selection simplifies dynamic specification for a model of soybean yields and permits encompassing tests of different economic hypotheses. Soybean yields adjust to disequilibria that reflect technological improvements to seeds and crop practices. Climatic effects include (a) a positive effect from increased CO2 concentrations, which may capture accelerated photosynthesis, and (b) a negative effect from high local temperatures, which could increase with continued global warming.

]]>Econometrics doi: 10.3390/econometrics9020023

Authors: Yixiao Jiang

This paper investigates the incentive of credit rating agencies (CRAs) to bias ratings using a semiparametric, ordered-response model. The proposed model explicitly takes conflicts of interest into account and allows the ratings to depend flexibly on risk attributes through a semiparametric index structure. Asymptotic normality for the estimator is derived after using several bias correction techniques. Using Moody’s rating data from 2001 to 2016, I found that firms related to Moody’s shareholders were more likely to receive better ratings. Such favorable treatments were more pronounced in investment grade bonds compared with high yield bonds, with the 2007–2009 financial crisis being an exception. Parametric models, such as the ordered-probit, failed to identify this heterogeneity of the rating bias across different bond categories.

]]>Econometrics doi: 10.3390/econometrics9020022

Authors: Kajal Lahiri Zulkarnain Pulungan

Following recent econometric developments, we use self-assessed general health on a Likert scale, conditioned on several objective determinants, to measure health disparities between non-Hispanic Whites and minority groups in the United States. A statistical decomposition analysis is conducted to determine the contributions of socio-demographic and neighborhood characteristics in generating disparities. Whereas 72% of the health disparity between Whites and Blacks is attributable to Blacks’ relatively worse socio-economic and demographic characteristics, the corresponding figure is only 50% for Hispanics and 65% for American Indians and Alaska Natives. The role of a number of factors, including per capita income and income inequality, varies across the groups. Interestingly, the “blackness” of a county is associated with better health for all minority groups, but it affects Whites negatively. Our findings suggest that public health initiatives to eliminate health disparities should be targeted differently for different racial/ethnic groups by focusing on the most vulnerable within each group.
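The logic of splitting a group gap into an "explained" (characteristics) part and an "unexplained" (coefficients) part can be sketched in the spirit of an Oaxaca-Blinder two-fold decomposition; the authors' actual procedure may differ in detail, and all numbers below are hypothetical:

```python
# Two-fold decomposition of the gap in mean outcomes x'beta between groups
# A and B: gap = (x_A - x_B)'beta_B  +  x_A'(beta_A - beta_B).
# The first term is explained by characteristics, the second is not.

def decompose(x_a, beta_a, x_b, beta_b):
    explained = sum((xa - xb) * bb for xa, xb, bb in zip(x_a, x_b, beta_b))
    unexplained = sum(xa * (ba - bb) for xa, ba, bb in zip(x_a, beta_a, beta_b))
    return explained, unexplained

# Hypothetical means (intercept, years of schooling) and coefficients.
exp_, unexp = decompose([1, 12], [0.5, 0.2], [1, 10], [0.3, 0.25])
print(round(exp_, 3), round(unexp, 3))  # the two parts sum to the raw gap
```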

]]>Econometrics doi: 10.3390/econometrics9020021

Authors: Manabu Asai Chia-Lin Chang Michael McAleer Laurent Pauwels

This paper derives the statistical properties of a two-step approach to estimating multivariate rotated GARCH-BEKK (RBEKK) models. From the definition of RBEKK, the unconditional covariance matrix is estimated in the first step to rotate the observed variables in order to have the identity matrix for its sample covariance matrix. In the second step, the remaining parameters are estimated by maximizing the quasi-log-likelihood function. For this two-step quasi-maximum likelihood (2sQML) estimator, this paper shows consistency and asymptotic normality under weak conditions. While second-order moments are needed for the consistency of the estimated unconditional covariance matrix, the existence of the finite sixth-order moments is required for the convergence of the second-order derivatives of the quasi-log-likelihood function. This paper also shows the relationship between the asymptotic distributions of the 2sQML estimator for the RBEKK model and variance targeting quasi-maximum likelihood estimator for the VT-BEKK model. Monte Carlo experiments show that the bias of the 2sQML estimator is negligible and that the appropriateness of the diagonal specification depends on the closeness to either the diagonal BEKK or the diagonal RBEKK models. An empirical analysis of the returns of stocks listed on the Dow Jones Industrial Average indicates that the choice of the diagonal BEKK or diagonal RBEKK models changes over time, but most of the differences between the two forecasts are negligible.

]]>Econometrics doi: 10.3390/econometrics9020020

Authors: Antonio Pacifico

This paper improves a standard Structural Panel Bayesian Vector Autoregression model in order to jointly deal with issues of endogeneity, because of omitted factors and unobserved heterogeneity, and volatility, because of policy regime shifts and structural changes. Bayesian methods are used to select the best model solution for examining whether international spillovers come from multivariate volatility, time variation, or contemporaneous relationships. An empirical application among Central-Eastern and Western Europe economies is conducted to describe the performance of the methodology, with particular emphasis on the Great Recession and post-crisis periods. A simulated example is also addressed to highlight the performance of the estimating procedure. Findings from evidence-based forecasting are also addressed to evaluate the impact of an ongoing pandemic crisis on the global economy.

]]>Econometrics doi: 10.3390/econometrics9020019

Authors: Gustavo Canavire-Bacarreza Luis Castro Peñarrieta Darwin Ugarte Ontiveros

Outliers can be particularly hard to detect, creating bias and inconsistency in the semi-parametric estimates. In this paper, we use Monte Carlo simulations to demonstrate that semi-parametric methods, such as matching, are biased in the presence of outliers. Bad and good leverage point outliers are considered. Bias arises in the case of bad leverage points because they completely change the distribution of the metrics used to define counterfactuals; good leverage points, on the other hand, increase the chance of breaking the common support condition and distort the balance of the covariates, which may push practitioners to misspecify the propensity score or the distance measures. We provide some clues to identify and correct for the effects of outliers following a reweighting strategy in the spirit of the Stahel-Donoho (SD) multivariate estimator of scale and location, and the S-estimator of multivariate location (Smultiv). An application of this strategy to experimental data is also implemented.

]]>Econometrics doi: 10.3390/econometrics9020018

Authors: D. Stephen G. Pollock

Much of the algebra that is associated with the Kronecker product of matrices has been rendered in the conventional notation of matrix algebra, which conceals the essential structures of the objects of the analysis. This makes it difficult to establish even the most salient of the results. The problems can be greatly alleviated by adopting an orderly index notation that reveals these structures. This claim is demonstrated by considering a problem that several authors have already addressed without producing a widely accepted solution.
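The orderly index notation can be made concrete in code: with B of size p × q, the (i, j) entry of A ⊗ B is A[i ÷ p][j ÷ q] · B[i mod p][j mod q]. A toy sketch, independent of the specific problem the paper addresses:

```python
# Kronecker product via explicit index bookkeeping:
# (A ⊗ B)[i][j] = A[i // p][j // q] * B[i % p][j % q], B being p x q.

def kron(A, B):
    p, q = len(B), len(B[0])
    rows, cols = len(A) * p, len(A[0]) * q
    return [[A[i // p][j // q] * B[i % p][j % q] for j in range(cols)]
            for i in range(rows)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
for row in kron(A, B):
    print(row)
```

Writing the entry as a product over separate row and column indices, rather than as a single flattened matrix expression, is exactly the structural transparency the index notation is meant to provide.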

]]>Econometrics doi: 10.3390/econometrics9020017

Authors: Konstantinos Gkillas Christoforos Konstantatos Costas Siriopoulos

We study the non-linear causal relation between uncertainty-due-to-infectious-diseases and stock–bond correlation. To this end, we use high-frequency 1-min data to compute daily realized measures of correlation and jumps, and then, we employ a nonlinear Granger causality test with the use of artificial neural networks so as to investigate the predictability of this type of uncertainty on realized stock–bond correlation and jumps. Our findings reveal that uncertainty-due-to-infectious-diseases has significant predictive value on the changes of the stock–bond relation.

]]>Econometrics doi: 10.3390/econometrics9020016

Authors: Liqiong Chen Antonio F. Galvao Suyong Song

This paper studies estimation and inference for linear quantile regression models with generated regressors. We suggest a practical two-step estimation procedure, where the generated regressors are computed in the first step. The asymptotic properties of the two-step estimator, namely, consistency and asymptotic normality are established. We show that the asymptotic variance-covariance matrix needs to be adjusted to account for the first-step estimation error. We propose a general estimator for the asymptotic variance-covariance, establish its consistency, and develop testing procedures for linear hypotheses in these models. Monte Carlo simulations to evaluate the finite-sample performance of the estimation and inference procedures are provided. Finally, we apply the proposed methods to study Engel curves for various commodities using data from the UK Family Expenditure Survey. We document strong heterogeneity in the estimated Engel curves along the conditional distribution of the budget share of each commodity. The empirical application also emphasizes that correctly estimating confidence intervals for the estimated Engel curves by the proposed estimator is of importance for inference.

]]>Econometrics doi: 10.3390/econometrics9020015

Authors: Jau-er Chen Chien-Hsun Huang Jia-Jyun Tien

In this study, we investigate the estimation and inference on a low-dimensional causal parameter in the presence of high-dimensional controls in an instrumental variable quantile regression. Our proposed econometric procedure builds on the Neyman-type orthogonal moment conditions of a previous study (Chernozhukov et al. 2018) and is thus relatively insensitive to the estimation of the nuisance parameters. The Monte Carlo experiments show that the estimator copes well with high-dimensional controls. We also apply the procedure to empirically reinvestigate the quantile treatment effect of 401(k) participation on accumulated wealth.

]]>Econometrics doi: 10.3390/econometrics9010014

Authors: Souvik Banerjee Anirban Basu

We provide evidence on the least biased ways to identify causal effects in situations where multiple outcomes all depend on the same endogenous regressor and where a reasonable but potentially contaminated instrumental variable is available. Simulations provide suggestive evidence on the complementarity of instrumental variable (IV) and latent factor methods and on how this complementarity depends on the number of outcome variables and the degree of contamination in the IV. We apply these causal inference methods to assess the impact of mental illness on work absenteeism and disability, using the National Comorbidity Survey Replication.

]]>Econometrics doi: 10.3390/econometrics9010013

Authors: Christian Leschinski Michelle Voges Philipp Sibbertsen

It is commonly found that the markets for long-term government bonds of Economic and Monetary Union (EMU) countries were integrated prior to the EMU debt crisis. Contrasting this, we show, based on the interrelation between market integration and fractional cointegration, that there were periods of integration and disintegration that coincide with bull and bear market periods in the stock market. An econometric argument about the spectral behavior of long-memory time series leads to the conclusion that there is a stronger differentiation between bonds with different default risks. This implied the possibility of macroeconomic and fiscal divergence between the EMU countries before the crisis periods.

]]>Econometrics doi: 10.3390/econometrics9010012

Authors: Fabian Knorre Martin Wagner Maximilian Grupe

This paper develops residual-based monitoring procedures for cointegrating polynomial regressions (CPRs), i.e., regression models including deterministic variables and integrated processes, as well as integer powers, of integrated processes as regressors. The regressors are allowed to be endogenous, and the stationary errors are allowed to be serially correlated. We consider five variants of monitoring statistics and develop the results for three modified least squares estimators for the parameters of the CPRs. The simulations show that using the combination of self-normalization and a moving window leads to the best performance. We use the developed monitoring statistics to assess the structural stability of environmental Kuznets curves (EKCs) for both CO2 and SO2 emissions for twelve industrialized countries since the first oil price shock.

]]>Econometrics doi: 10.3390/econometrics9010011

Authors: Boriss Siliverstovs

We assess the forecasting performance of the nowcasting model developed at the New York Fed. We show that the observation made earlier in the literature regarding a striking difference in the model’s predictive ability across business cycle phases also applies here. During expansions, the nowcasting model’s forecasts are, at best, as good as those of the historical mean model, whereas during recessionary periods there are very substantial gains, corresponding to a reduction in MSFE of about 90% relative to the benchmark model. We show how the asymmetry in relative forecasting performance can be verified by the use of recursive measures of relative forecast accuracy such as the Cumulated Sum of Squared Forecast Error Difference (CSSFED) and the Recursive Relative Mean Squared Forecast Error based on Rearranged observations (R2MSFE(+R)). Ignoring these asymmetries results in a biased judgement of the relative forecasting performance of the competing models over the sample as a whole, as well as during economic expansions, when the forecasting accuracy of a more sophisticated model relative to naive benchmark models tends to be overstated. Hence, care needs to be exercised when ranking models by their forecasting performance without taking into account the state of the economy.
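The CSSFED measure lends itself to a compact sketch: cumulate the difference in squared forecast errors between a benchmark and a candidate model, and read regime-specific gains off the drift of the resulting path. The toy error series below are hypothetical:

```python
# Cumulated Sum of Squared Forecast Error Differences (CSSFED): a running
# total of (benchmark error)^2 - (candidate error)^2. Flat or falling
# segments mean the candidate adds nothing; steep upward segments mark the
# periods (e.g. recessions) where the candidate beats the benchmark.

def cssfed(bench_err, model_err):
    total, path = 0.0, []
    for eb, em in zip(bench_err, model_err):
        total += eb ** 2 - em ** 2
        path.append(total)
    return path

# Hypothetical errors: the candidate only helps in the last three "recession"
# periods, where the benchmark's errors blow up.
bench = [0.5, -0.4, 0.5, 2.0, -2.5, 3.0]
model = [0.6, -0.5, 0.6, 0.5, -0.5, 0.5]
print([round(v, 2) for v in cssfed(bench, model)])
```

Averaging over the full sample would report a large overall gain even though the path shows it is earned entirely in the final periods, which is the asymmetry the abstract warns about.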

]]>Econometrics doi: 10.3390/econometrics9010010

Authors: Šárka Hudecová Marie Hušková Simos G. Meintanis

This article considers goodness-of-fit tests for bivariate INAR and bivariate Poisson autoregression models. The test statistics are based on an L2-type distance between two estimators of the probability generating function of the observations: one entirely nonparametric, and the other semiparametric, computed under the corresponding null hypothesis. The asymptotic distribution of the proposed test statistics is derived both under the null hypotheses and under alternatives, and consistency is proved. The case of testing bivariate generalized Poisson autoregression and the extension of the methods to dimensions higher than two are also discussed. The finite-sample performance of a parametric bootstrap version of the tests is illustrated via a series of Monte Carlo experiments. The article concludes with applications to real data sets and discussion.

]]>Econometrics doi: 10.3390/econometrics9010009

Authors: J. Eduardo Vera-Valdés

Econometric studies for global heating have typically used regional or global temperature averages to study its long memory properties. One typical explanation behind the long memory properties of temperature averages is cross-sectional aggregation. Nonetheless, formal analysis regarding the effect that aggregation has on the long memory dynamics of temperature data has been missing. Thus, this paper studies the long memory properties of individual grid temperatures and compares them against the long memory dynamics of global and regional averages. Our results show that the long memory parameters in individual grid observations are smaller than those from regional averages. Global and regional long memory estimates are greatly affected by temperature measurements at the Tropics, where the data is less reliable. Thus, this paper supports the notion that aggregation may be exacerbating the long memory estimated in regional and global temperature data. The results are robust to the bandwidth parameter, limit for station radius of influence, and sampling frequency.

]]>Econometrics doi: 10.3390/econometrics9010008

Authors: Paula Simões Sérgio Gomes Isabel Natário

Hospital emergency departments are often overused by patients who do not really need urgent care. These admissions are one of the major factors contributing to hospital costs, which should not be allowed to compromise the response and effectiveness of the National Health Services (SNS). The aim of this study is to perform a detailed spatial health econometrics analysis of the non-urgent emergency situations (classified by Manchester triage) by area, linking them with the efficient use of the national health line, the Saude24 line (S24 line). This is evaluated through the S24 savings calls, using a savings index, and through its spatial effectiveness in solving the non-urgent emergency situations. A savings call is a call by a user whose initial intention was to go to an emergency department, but who, after calling the S24 line, changed his/her mind. Given the spatial nature of the data, and resorting to INLA in a Bayesian paradigm, the number of non-urgent cases in the Portuguese hospital emergency departments is modeled in an autoregressive way. The spatial structure is accounted for by a set of random effects. The model additionally includes regular covariates and a spatially lagged covariate, the savings index, related to the S24 savings calls. Therefore, by means of a Bayesian Poisson spatial Durbin model, the response in a given area depends not only on the (weighted) values of the response in its neighborhood and of the considered covariates, but also on the (weighted) values of the covariate savings index measured in each neighbor.

]]>Econometrics doi: 10.3390/econometrics9010007

Authors: Kevin D. Hoover

The author would like to make the following correction to the article by Kevin D [...]

]]>Econometrics doi: 10.3390/econometrics9010006

Authors: Kyungsik Nam

This study proposes a nonlinear cointegrating regression model based on the well-known energy balance climate model. Specifically, I investigate the nonlinear cointegrating regression of the mean of temperature anomaly distributions on total radiative forcing using estimated spatial distributions of temperature anomalies for the Globe, Northern Hemisphere, and Southern Hemisphere. Further, I provide two types of nonlinear response functions that map the total radiative forcing level to mean temperature anomalies. The proposed statistical model provides a climatological implication that spatially heterogeneous warming effects play a significant role in identifying nonlinear climate sensitivity. Cointegration and specification tests are provided that support the existence of nonlinear effects of total radiative forcing.

]]>Econometrics doi: 10.3390/econometrics9010005

Authors: Katarina Juselius

This survey paper discusses the Cointegrated Vector AutoRegressive (CVAR) methodology and how it has evolved over the past 30 years. It describes major steps in the econometric development, discusses problems to be solved when confronting theory with the data, and, as a solution, proposes a so-called theory-consistent CVAR scenario. A number of early CVAR applications were motivated by the urge to find out why the empirical results did not support Milton Friedman’s concept of monetary inflation. The paper also proposes a method for combining partial CVAR analyses into a large-scale macroeconomic model. It argues that an empirically based approach to macroeconomics should preferably be based on Keynesian disequilibrium economics, where imperfect knowledge expectations replace so-called rational expectations and where the financial sector plays a key role in understanding the long, persistent movements in the data. Finally, the paper argues that the CVAR is potentially a candidate for Haavelmo’s “design of experiment for passive observations” and provides several illustrations.

]]>Econometrics doi: 10.3390/econometrics9010004

Authors: Econometrics Editorial Office

Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that Econometrics maintains its standards for the high quality of its published papers [...]

]]>Econometrics doi: 10.3390/econometrics9010003

Authors: D. Stephen G. Pollock

The effect of the conventional model-based methods of seasonal adjustment is to nullify the elements of the data that reside at the seasonal frequencies and to attenuate the elements at the adjacent frequencies. It may be desirable to nullify some of the adjacent elements instead of merely attenuating them. For this purpose, two alternative sets of procedures are presented that have been implemented in a computer program named SEASCAPE. In the first set of procedures, a basic seasonal adjustment filter is augmented by additional filters that are targeted at the adjacent frequencies. In the second set of procedures, a Fourier transform of the data is exploited to allow the elements in the vicinities of the seasonal frequencies to be eliminated or attenuated at will. The question is raised of whether an estimated trend-cycle trajectory that is devoid of high-frequency noise can serve in place of the seasonally adjusted data.
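The second set of procedures, which operates directly on the Fourier transform of the data, can be sketched compactly. The fragment below is a simplified, hypothetical stand-in for what SEASCAPE does (the function name and the hard-nulling choice are mine, not the program's): it zeroes the Fourier ordinates at each seasonal frequency and at a band of adjacent ordinates, then re-synthesises the series.

```python
import numpy as np

def null_seasonal(x, period=12, width=1):
    """Zero the Fourier ordinates at the seasonal frequencies
    (multiples of 2*pi/period) and at `width` adjacent ordinates,
    then return the re-synthesised series."""
    n = len(x)
    f = np.fft.rfft(x)
    for k in range(1, period // 2 + 1):
        centre = round(k * n / period)        # ordinate of the k-th harmonic
        lo = max(centre - width, 1)           # keep the zero frequency intact
        hi = min(centre + width, len(f) - 1)
        f[lo:hi + 1] = 0.0                    # nullify rather than merely attenuate
    return np.fft.irfft(f, n)

t = np.arange(240)                            # 20 "years" of monthly data
x = 0.05 * t + np.sin(2 * np.pi * t / 12)     # linear trend plus a seasonal cycle
adjusted = null_seasonal(x)
```

Because the seasonal sinusoid here completes an integer number of cycles, nulling its ordinate removes it exactly while the trend passes through largely untouched.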

]]>Econometrics doi: 10.3390/econometrics9010002

Authors: Muhammad Bhatti Jae Kim

As the guest editors of this Special Issue, we feel proud and grateful to write the editorial note of this issue, which consists of seven high-quality research papers [...]

]]>Econometrics doi: 10.3390/econometrics9010001

Authors: N’Golo Koné

The maximum diversification portfolio has been shown in the literature to depend on the vector of asset volatilities and the inverse of the covariance matrix of asset returns. In practice, these two quantities need to be replaced by their sample statistics. The estimation error associated with the use of these sample statistics may be amplified by (near) singularity of the covariance matrix in financial markets with many assets. This, in turn, may lead to the selection of portfolios that are far from optimal with respect to standard portfolio performance measures. To address this problem, we investigate three regularization techniques, including the ridge, the spectral cut-off, and the Landweber–Fridman approaches, in order to stabilize the inverse of the covariance matrix. These regularization schemes involve a tuning parameter that needs to be chosen. In light of this fact, we propose a data-driven method for selecting the tuning parameter. We show that the portfolio selected by regularization is asymptotically efficient with respect to the diversification ratio. In empirical and Monte Carlo experiments, the resulting regularized rules are compared to several strategies, such as the most diversified portfolio, the target portfolio, the global minimum variance portfolio, and the naive 1/N strategy, in terms of in-sample and out-of-sample Sharpe ratio performance, and it is shown that our method yields significant Sharpe ratio improvements.
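The ridge variant of the scheme is simple to sketch. The Python fragment below is illustrative only (the fixed tuning parameter and all names are mine; the paper selects the tuning parameter by a data-driven rule): maximum diversification weights w proportional to (Sigma + lambda*I)^{-1} sigma, and the diversification ratio they attain, in a setting where the sample covariance matrix is singular because there are more assets than observations.

```python
import numpy as np

rng = np.random.default_rng(1)

def mdp_weights(cov, lam):
    """Ridge-regularised maximum diversification weights:
    w proportional to (Sigma + lam * I)^{-1} sigma, normalised to sum to one."""
    sigma = np.sqrt(np.diag(cov))                 # vector of asset volatilities
    w = np.linalg.solve(cov + lam * np.eye(len(cov)), sigma)
    return w / w.sum()

def diversification_ratio(w, cov):
    """DR(w) = (w' sigma) / sqrt(w' Sigma w)."""
    sigma = np.sqrt(np.diag(cov))
    return (w @ sigma) / np.sqrt(w @ cov @ w)

# More assets (40) than observations (30): the sample covariance matrix is
# singular, so the unregularised rule (lam = 0) is unusable.
returns = 0.02 * rng.standard_normal((30, 40))
cov = np.cov(returns, rowvar=False)
w = mdp_weights(cov, lam=1e-3)
dr = diversification_ratio(w, cov)
```

Spectral cut-off and Landweber–Fridman differ only in how they dampen the small eigenvalues of the sample covariance matrix before inversion.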

]]>Econometrics doi: 10.3390/econometrics8040044

Authors: Martin Huber Anna Solovyeva

This paper extends the evaluation of direct and indirect treatment effects, i.e., mediation analysis, to the case that outcomes are only partially observed due to sample selection or outcome attrition. We assume sequential conditional independence of the treatment and the mediator, i.e., the variable through which the indirect effect operates. We also impose missing at random or instrumental variable assumptions on the outcome attrition process. Under these conditions, we derive identification results for the effects of interest that are based on inverse probability weighting by specific treatment, mediator, and/or selection propensity scores. We also provide a simulation study and an empirical application to the U.S. Project STAR data in which we assess the direct impact and indirect effect (via absenteeism) of smaller kindergarten classes on math test scores. The estimators considered are available in the &lsquo;causalweight&rsquo; package for the statistical software &lsquo;R&rsquo;.

]]>Econometrics doi: 10.3390/econometrics8040043

Authors: Michael D. Goldberg Olesia Kozlova Deniz Ozabaci

This paper examines the stability of the Bilson–Fama regression for a panel of 55 developed and developing countries. We find multiple break points for nearly every country in our panel. Subperiod estimates of the slope coefficient show a negative bias during some time periods and a positive bias during other time periods in nearly every country. The subperiod biases display two key patterns that shed light on the literature’s linear regression findings. The results point toward the importance of risk in currency markets. We find that risk is greater for developed country markets. The evidence undercuts the widespread view that currency returns are predictable or that developed country markets are less rational.

]]>Econometrics doi: 10.3390/econometrics8040042

Authors: Dietmar Bauer Lukas Matuschek Patrick de Matos Ribeiro Martin Wagner

We develop and discuss a parameterization of vector autoregressive moving average processes with arbitrary unit roots and (co)integration orders. The detailed analysis of the topological properties of the parameterization&mdash;based on the state space canonical form of Bauer and Wagner (2012)&mdash;is an essential input for establishing statistical and numerical properties of pseudo maximum likelihood estimators as well as, e.g., pseudo likelihood ratio tests based on them. The general results are exemplified in detail for the empirically most relevant cases, the (multiple frequency or seasonal) I(1) and the I(2) case. For these two cases we also discuss the modeling of deterministic components in detail.

]]>Econometrics doi: 10.3390/econometrics8040041

Authors: Eric Hillebrand Søren Johansen Torben Schmith

We study the stability of estimated linear statistical relations of global mean temperature and global mean sea level with regard to data revisions. Using four different model specifications proposed in the literature, we compare coefficient estimates and long-term sea level projections using two different vintages of each of the annual time series, covering the periods 1880&ndash;2001 and 1880&ndash;2013. We find that temperature and sea level updates and revisions have a substantial influence both on the magnitude of the estimated coefficients of influence (differences of up to 50%) and therefore on long-term projections of sea level rise following the RCP4.5 and RCP6 scenarios (differences of up to 40 cm by the year 2100). This shows that in order to replicate earlier results that informed the scientific discussion and motivated policy recommendations, it is crucial to have access to and to work with the data vintages used at the time.

]]>Econometrics doi: 10.3390/econometrics8040040

Authors: Erhard Reschenhofer Manveer K. Mangat

For typical sample sizes occurring in economic and financial applications, the squared bias of estimators for the memory parameter is small relative to the variance. Smoothing is therefore a suitable way to improve the performance in terms of the mean squared error. However, in an analysis of financial high-frequency data, where the estimates are obtained separately for each day and then combined by averaging, the variance decreases with the sample size but the bias remains fixed. This paper proposes a method of smoothing that does not entail an increase in the bias. This method is based on the simultaneous examination of different partitions of the data. An extensive simulation study is carried out to compare it with conventional estimation methods. In this study, the new method outperforms its unsmoothed competitors with respect to the variance and its smoothed competitors with respect to the bias. Using the results of the simulation study for the proper interpretation of the empirical results obtained from a financial high-frequency dataset, we conclude that significant long-range dependencies are present only in the intraday volatility but not in the intraday returns. Finally, the robustness of these findings against daily and weekly periodic patterns is established.

]]>Econometrics doi: 10.3390/econometrics8040039

Authors: Nandana Sengupta Fallaw Sowell

The asymptotic distribution of the linear instrumental variables (IV) estimator with empirically selected ridge regression penalty is characterized. The regularization tuning parameter is selected by splitting the observed data into training and test samples and becomes an estimated parameter that jointly converges with the parameters of interest. The asymptotic distribution is a nonstandard mixture distribution. Monte Carlo simulations show the asymptotic distribution captures the characteristics of the sampling distributions and when this ridge estimator performs better than two-stage least squares. An empirical application on returns to education data is presented.

]]>Econometrics doi: 10.3390/econometrics8030038

Authors: Yuanyuan Li Dietmar Bauer

In this paper, the theory on the estimation of vector autoregressive (VAR) models for I(2) processes is extended to the case of long VAR approximations of more general processes. The order of the autoregression is allowed to tend to infinity at a rate that depends on the sample size. We deal with unrestricted OLS estimators (in the model formulated in levels as well as in vector error correction form) and with two-stage estimation (2SI2) in the vector error correction model (VECM) formulation. Our main results are analogous to the I(1) case: we show that the long VAR approximation leads to consistent estimates of the long- and short-run dynamics. Furthermore, tests on the autoregressive coefficients follow standard asymptotics. The pseudo likelihood ratio tests on the cointegrating ranks (using the Gaussian likelihood) used in the 2SI2 algorithm show, under the null hypothesis, the same distributions as in the case of data generating processes following finite order VARs. The same holds true for the asymptotic distribution of the long-run dynamics, both in the unrestricted VECM estimation and in the reduced rank regression in the 2SI2 algorithm. Building on these results, we show that if the data are generated by an invertible VARMA process, the VAR approximation can be used to derive a consistent initial estimator for subsequent pseudo likelihood optimization in the VARMA model.

]]>Econometrics doi: 10.3390/econometrics8030037

Authors: C. Vladimir Rodríguez-Caballero J. Eduardo Vera-Valdés

This paper studies long economic series to assess the long-lasting effects of pandemics. We analyze whether periods that cover pandemics show a change in trend and persistence in growth, and in level and persistence in unemployment. We find that there is an upward trend in the persistence level of growth across centuries. In particular, shocks originating from pandemics in recent times seem to have a permanent effect on growth. Moreover, our results show that the unemployment rate increases and becomes more persistent after a pandemic. In this regard, our findings support the design and implementation of timely counter-cyclical policies to soften the shock of a pandemic.

]]>Econometrics doi: 10.3390/econometrics8030036

Authors: Jeremy Arkes

Building on Joshua Angrist and J&ouml;rn-Steffen Pischke&rsquo;s arguments for how the teaching of undergraduate econometrics could become more effective, I propose a redesign of graduate econometrics that would better serve most students and help make the field of economics more relevant. The primary basis for the redesign is that conventional methods do not adequately prepare students to recognize biases and to properly interpret significance, insignificance, and p-values, and that there are ethical problems in the search for significance, among other matters. Based on these premises, I recommend that some of Angrist and Pischke&rsquo;s recommendations be adopted for graduate econometrics. In addition, I recommend further shifts in emphasis, new pedagogy, and the addition of important components (e.g., on interpretation and simple ethical lessons) that are largely ignored in current textbooks. An obvious implication of these recommended changes is a confirmation of most of Angrist and Pischke&rsquo;s recommendations for undergraduate econometrics, as well as further reductions in complexity.

]]>Econometrics doi: 10.3390/econometrics8030035

Authors: D. Stephen G. Pollock

The econometric data to which autoregressive moving-average models are commonly applied are liable to contain elements from a limited range of frequencies. If the data do not cover the full Nyquist frequency range of [0,&pi;] radians, then severe biases can occur in estimating their parameters. The recourse should be to reconstitute the underlying continuous data trajectory and to resample it at an appropriate lesser rate. The trajectory can be derived by associating sinc function kernels to the data points. This suggests a model for the underlying processes. The paper describes frequency-limited linear stochastic differential equations that conform to such a model, and it compares them with equations of a model that is assumed to be driven by a white-noise process of unbounded frequencies. The means of estimating models of both varieties are described.
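The sinc-kernel reconstruction step can be sketched directly. A band-limited trajectory through samples x_0, ..., x_{n-1} is x(t) = sum_k x_k * sinc(t - k); the short Python fragment below (an illustration of this one step only, not the paper's estimation machinery) shows that the reconstruction interpolates the data exactly and can then be resampled at a lesser rate.

```python
import numpy as np

def sinc_interpolate(samples, t):
    """Reconstruct the band-limited trajectory x(t) = sum_k x_k * sinc(t - k)
    by attaching a (normalised) sinc kernel to each data point."""
    k = np.arange(len(samples))
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return np.array([np.sum(samples * np.sinc(tt - k)) for tt in t])

x = np.array([0.0, 1.0, 0.5, -0.3, 0.8, 0.1])
at_nodes = sinc_interpolate(x, np.arange(6))     # passes through every sample
resampled = sinc_interpolate(x, np.arange(0, 6, 2))  # resample at half the rate
```

Since np.sinc(0) = 1 and np.sinc(m) = 0 at every other integer m, the trajectory reproduces the observations exactly at the sample points.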

]]>Econometrics doi: 10.3390/econometrics8030034

Authors: Yong Bao Xiaotian Liu Lihong Yang

The ordinary least squares (OLS) estimator for spatial autoregressions may be consistent, as pointed out by Lee (2002), provided that each spatial unit is influenced aggregately by a significant portion of the total units. This paper presents a unified asymptotic distribution result for the properly recentered OLS estimator and proposes a new estimator that is based on the indirect inference (II) procedure. The resulting estimator can always be used regardless of the degree of aggregate influence on each spatial unit from other units and is consistent and asymptotically normal. The new estimator does not rely on distributional assumptions and is robust to unknown heteroscedasticity. Its good finite-sample performance, in comparison with existing estimators that are also robust to heteroscedasticity, is demonstrated by a Monte Carlo study.

]]>Econometrics doi: 10.3390/econometrics8030033

Authors: Stefan Mittnik Willi Semmler Alexander Haider

Recent research in financial economics has shown that rare large disasters have the potential to disrupt financial sectors via the destruction of capital stocks and jumps in risk premia. These disruptions often entail negative feedback effects on the macroeconomy. Research on disaster risks has also actively been pursued in macroeconomic models of climate change. Our paper uses insights from the former work to study disaster risks in the macroeconomics of climate change and to spell out policy needs. Empirically, the link between carbon dioxide emissions and the frequency of climate-related disasters is investigated using a panel data approach. The modeling part then uses a multi-phase dynamic macro model to explore the effects of rare large disasters resulting in capital losses and rising risk premia. Our proposed multi-phase dynamic model, incorporating climate-related disaster shocks and their aftermath as a distressed phase, is suitable for studying mitigation and adaptation policies as well as recovery policies.

]]>Econometrics doi: 10.3390/econometrics8030032

Authors: Katsuto Tanaka Weilin Xiao Jun Yu

This paper estimates the drift parameters in the fractional Vasicek model from a continuous record of observations via maximum likelihood (ML). The asymptotic theory for the ML estimates (MLE) is established in the stationary case, the explosive case, and the boundary case for the entire range of the Hurst parameter, providing a complete treatment of asymptotic analysis. It is shown that changing the sign of the persistence parameter changes the asymptotic theory for the MLE, including the rate of convergence and the limiting distribution. It is also found that the asymptotic theory depends on the value of the Hurst parameter.

]]>Econometrics doi: 10.3390/econometrics8030031

Authors: Kevin D. Hoover

The relation between causal structure, cointegration, and long-run weak exogeneity is explored using some ideas drawn from the literature on graphical causal modeling. It is assumed that the fundamental source of trending behavior is transmitted from exogenous (and typically latent) trending variables to a set of causally ordered variables that would not themselves display nonstationary behavior if the nonstationary exogenous causes were absent. The possibility of inferring the long-run causal structure among a set of time-series variables from an exhaustive examination of weak exogeneity in irreducibly cointegrated subsets of variables is explored and illustrated.

]]>Econometrics doi: 10.3390/econometrics8030030

Authors: Peter C. B. Phillips

We discuss some conceptual and practical issues that arise from the presence of global energy balance effects on station level adjustment mechanisms in dynamic panel regressions with climate data. The paper provides asymptotic analyses, observational data computations, and Monte Carlo simulations to assess the use of various estimation methodologies, including standard dynamic panel regression and cointegration techniques that have been used in earlier research. The findings reveal massive bias in system GMM estimation of the dynamic panel regression parameters, which arises from fixed effect heterogeneity across individual station level observations. Difference GMM and Within Group (WG) estimation have little bias, and WG estimation is recommended for practical implementation of dynamic panel regression with highly disaggregated climate data. Intriguingly, from an econometric perspective and importantly for global policy analysis, it is shown that in this model, despite the substantial differences between the estimates of the regression model parameters, estimates of global transient climate sensitivity (of temperature to a doubling of atmospheric CO2) are robust to the estimation method employed and to the specific nature of the trending mechanism in global temperature, radiation, and CO2.

]]>Econometrics doi: 10.3390/econometrics8030029

Authors: Marit Gjelsvik Ragnar Nymoen Victoria Sparrman

Wage coordination plays an important role in macroeconomic stabilization. Pattern wage bargaining systems have been common in Europe, but in different forms, and with different degrees of success in terms of actual coordination reached. We focus on wage formation in Norway, a small open economy, where it is custom to regard the manufacturing industry as the wage leader. We estimate a model of wage formation in manufacturing and in two other sectors. Deciding the cointegration rank is an important step in the analysis, economically as well as statistically. In combination with simultaneous equation modelling, the cointegration analysis provides evidence that collective wage negotiations in manufacturing have defined wage norms for the rest of the economy over the period 1980(1)&ndash;2014(4).

]]>Econometrics doi: 10.3390/econometrics8030028

Authors: Manveer Kaur Mangat Erhard Reschenhofer

The goal of this paper is to search for conclusive evidence against the stationarity of the global air surface temperature, which is one of the most important indicators of climate change. For this purpose, possible long-range dependencies are investigated in the frequency-domain. Since conventional tests of hypotheses about the memory parameter, which measures the degree of long-range dependence, are typically based on asymptotic arguments and are therefore of limited practical value in case of small or medium sample sizes, we employ a new small-sample test as well as a related estimator for the memory parameter. To safeguard against false positive findings, simulation studies are carried out to examine the suitability of the employed methods and hemispheric datasets are used to check the robustness of the empirical findings against low-frequency natural variability caused by oceanic cycles. Overall, our frequency-domain analysis provides strong evidence of non-stationarity, which is consistent with previous results obtained in the time domain with models allowing for stochastic or deterministic trends.
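For orientation, a standard frequency-domain estimator of the memory parameter, of the conventional asymptotic kind the paper contrasts with its small-sample approach, is the log-periodogram (GPH) regression: the slope of log I(lambda_j) on -2*log(lambda_j) over the first m Fourier frequencies. The fragment below is a textbook sketch, not the paper's estimator or test.

```python
import numpy as np

rng = np.random.default_rng(3)

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of the memory parameter d:
    regress log I(lambda_j) on -2*log(lambda_j) for j = 1, ..., m."""
    n = len(x)
    m = int(n ** 0.5) if m is None else m
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n            # Fourier frequencies
    # Periodogram ordinates at the first m nonzero Fourier frequencies
    periodogram = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    slope = np.polyfit(-2.0 * np.log(lam), np.log(periodogram), 1)[0]
    return slope                                           # estimate of d

# White noise is stationary with memory parameter d = 0,
# so the estimate should be close to zero.
d_hat = gph_estimate(rng.standard_normal(2048))
```

The bandwidth m governs the usual bias-variance trade-off; with small samples the asymptotic normal approximation for this slope becomes unreliable, which is the motivation for the small-sample test employed in the paper.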

]]>Econometrics doi: 10.3390/econometrics8030027

Authors: Céline Cunen Nils Lid Hjort

When using the Focused Information Criterion (FIC) for assessing and ranking candidate models with respect to how well they do for a given estimation task, it is customary to produce a so-called FIC plot. This plot has the different point estimates along the y-axis and the root-FIC scores on the x-axis, these being the estimated root-mean-square scores. In this paper we address the estimation uncertainty involved in each of the points of such a FIC plot. This needs careful assessment of each of the estimators from the candidate models, taking also modelling bias into account, along with the relative precision of the associated estimated mean squared error quantities. We use confidence distributions for these tasks. This leads to fruitful CD&ndash;FIC plots, helping the statistician to judge to what extent the seemingly best models really are better than other models, etc. These efforts also lead to two further developments. The first is a new tool for model selection, which we call the quantile-FIC, which helps overcome certain difficulties associated with the usual FIC procedures, related to somewhat arbitrary schemes for handling estimated squared biases. A particular case is the median-FIC. The second development is to form model averaged estimators with weights determined by the relative sizes of the median- and quantile-FIC scores.

]]>Econometrics doi: 10.3390/econometrics8020026

Authors: Francis Bilson Darku Frank Konietschke Bhargab Chattopadhyay

The Gini index, a widely used economic inequality measure, is computed using data whose designs involve clustering and stratification, generally known as complex household surveys. For complex household surveys, we develop two novel procedures for estimating the Gini index with a pre-specified error bound and confidence level. The two proposed approaches are based on the concept of sequential analysis, which is known to be economical in the sense of obtaining an optimal cluster size that reduces the project cost (that is, the total sampling cost) while achieving the pre-specified error bound and confidence level under reasonable assumptions. Some large-sample properties of the proposed procedures are examined without assuming any specific distribution. Empirical illustrations of both procedures are provided using consumption expenditure data obtained by the National Sample Survey (NSS) Organization in India.
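Setting the sequential, design-based machinery aside, the point estimator at the heart of both procedures is the plain Gini index, which can be sketched via the mean-absolute-difference formula (an illustration only, not the paper's cluster-adjusted estimator):

```python
import numpy as np

def gini(x):
    """Gini index via the mean absolute difference over all ordered pairs:
    G = mean(|x_i - x_j|) / (2 * mean(x))."""
    x = np.asarray(x, dtype=float)
    mad = np.abs(x[:, None] - x[None, :]).mean()   # mean absolute difference
    return mad / (2.0 * x.mean())

equal = gini(np.full(10, 7.0))                     # perfect equality -> 0
skewed = gini(np.array([0.0, 0.0, 0.0, 10.0]))     # one holder -> (n-1)/n = 0.75
```

The sequential procedures wrap an estimator of this kind in a stopping rule that grows the sample (and chooses the cluster size) until the pre-specified error bound and confidence level are met.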

]]>Econometrics doi: 10.3390/econometrics8020025

Authors: Fernanda Valente Márcio Laurini

In this paper, we analyze tornado occurrences in the United States. To perform inference for the spatio-temporal point process, we adopt a dynamic representation of the Log-Gaussian Cox Process. This representation is based on the decomposition of the intensity function into components of trend, cycles, and spatial effects. In this model, spatial effects are also represented by a dynamic functional structure, which allows analyzing possible changes in the spatio-temporal distribution of tornado occurrences due to possible changes in climate patterns. The model was estimated using Bayesian inference through Integrated Nested Laplace Approximations. We use data from the Storm Prediction Center&rsquo;s Severe Weather Database between 1954 and 2018, and the results provide evidence, from new perspectives, that trends in annual tornado occurrences in the United States have remained relatively constant, supporting previously reported findings.

]]>Econometrics doi: 10.3390/econometrics8020024

Authors: Robert C. Jung Andrew R. Tremayne

The paper is concerned with estimation and application of a special stationary integer autoregressive model where multiple binomial thinnings are not independent of one another. Parameter estimation in such models has hitherto been accomplished using method of moments, or nonlinear least squares, but not maximum likelihood. We obtain the conditional distribution needed to implement maximum likelihood. The sampling performance of the new estimator is compared to extant ones by reporting the results of some simulation experiments. An application to a stock-type data set of financial counts is provided and the conditional distribution is used to compare two competing models and in forecasting.
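The binomial-thinning mechanism at the core of such models is easy to sketch. The fragment below simulates a standard INAR(1), X_t = alpha o X_{t-1} + eps_t, where alpha o X ~ Binomial(X, alpha) and the innovations are Poisson. Note this illustrative version uses a single independent thinning per step, whereas the special model in the paper allows multiple binomial thinnings that are not independent of one another.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_inar1(n, alpha, lam, x0=0):
    """Simulate an INAR(1) count series: X_t = alpha o X_{t-1} + eps_t,
    with binomial thinning alpha o X ~ Binomial(X, alpha) and
    eps_t ~ Poisson(lam)."""
    x = np.empty(n, dtype=int)
    prev = x0
    for t in range(n):
        survivors = rng.binomial(prev, alpha)   # binomial thinning step
        prev = survivors + rng.poisson(lam)     # add new arrivals
        x[t] = prev
    return x

path = simulate_inar1(5000, alpha=0.5, lam=1.0)
# The stationary marginal is Poisson with mean lam / (1 - alpha) = 2 here.
```

Maximum likelihood for such models requires the conditional distribution of X_t given X_{t-1}, which is exactly what the paper derives for its dependent-thinning variant.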

]]>Econometrics doi: 10.3390/econometrics8020023

Authors: Virgilio Gómez-Rubio Roger S. Bivand Håvard Rue

The integrated nested Laplace approximation (INLA) for Bayesian inference is an efficient approach to estimate the posterior marginal distributions of the parameters and latent effects of Bayesian hierarchical models that can be expressed as latent Gaussian Markov random fields (GMRF). The representation as a GMRF allows the associated software R-INLA to estimate the posterior marginals in a fraction of the time required by typical Markov chain Monte Carlo algorithms. INLA can be extended by means of Bayesian model averaging (BMA) to increase the number of models that it can fit to conditional latent GMRFs. In this paper, we review the use of BMA with INLA and propose a new example on spatial econometrics models.

]]>Econometrics doi: 10.3390/econometrics8020022

Authors: Alex Lenkoski Fredrik L. Aanes

In economic applications, model averaging has found principal use in examining the validity of various theories related to observed heterogeneity in outcomes such as growth, development, and trade. Though often easy to articulate, these theories are imperfectly captured quantitatively. A number of different proxies are often collected for a given theory and the uneven nature of this collection requires care when employing model averaging. Furthermore, if valid, these theories ought to be relevant outside of any single narrowly focused outcome equation. We propose a methodology which treats theories as represented by latent indices, these latent processes controlled by model averaging on the proxy level. To achieve generalizability of the theory index our framework assumes a collection of outcome equations. We accommodate a flexible set of generalized additive models, enabling non-Gaussian outcomes to be included. Furthermore, selection of relevant theories also occurs on the outcome level, allowing for theories to be differentially valid. Our focus is on creating a set of theory-based indices directed at understanding a country&rsquo;s potential risk of macroeconomic collapse. These Sovereign Risk Indices are calibrated across a set of different &ldquo;collapse&rdquo; criteria, including default on sovereign debt, heightened potential for high unemployment or inflation and dramatic swings in foreign exchange values. The goal of this exercise is to render a portable set of country/year theory indices which can find more general use in the research community.

]]>Econometrics doi: 10.3390/econometrics8020021

Authors: Marcin Błażejowski Jacek Kwiatkowski Paweł Kufel

In this paper, we apply Bayesian averaging of classical estimates (BACE) and Bayesian model averaging (BMA) as automatic modeling procedures for two well-known macroeconometric models: UK demand for narrow money and long-term inflation. Empirical results verify the correctness of the BACE and BMA selection and exhibit similar or better forecasting performance compared with a non-pooling approach. As a benchmark, we use Autometrics&mdash;an algorithm for automatic model selection. Our study is implemented in the easy-to-use gretl packages, which support parallel processing, automate numerical calculations, and allow for efficient computation.

]]>Econometrics doi: 10.3390/econometrics8020020

Authors: Annalisa Cadonna Sylvia Frühwirth-Schnatter Peter Knaus

Time-varying parameter (TVP) models are very flexible in capturing gradual changes in the effect of explanatory variables on the outcome variable. However, particularly when the number of explanatory variables is large, there is a known risk of overfitting and poor predictive performance, since the effect of some explanatory variables may in fact be constant over time. We propose a new prior for variance shrinkage in TVP models, called the triple gamma. The triple gamma prior encompasses a number of priors that have been suggested previously, such as the Bayesian Lasso, the double gamma prior, and the Horseshoe prior. We present the desirable properties of this prior and its relationship to Bayesian model averaging for variance selection. The features of the triple gamma prior are then illustrated in the context of time-varying parameter vector autoregressive models, both for a simulated dataset and for a set of macroeconomic variables in the Euro Area.
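The global&ndash;local structure of such shrinkage priors can be illustrated by drawing from a normal&ndash;gamma&ndash;gamma hierarchy: a normal whose variance has a gamma prior whose rate is itself gamma-distributed. The shape/scale choices below are our reading of one common parameterization and are assumptions, not the paper's exact specification:

```python
import numpy as np

def sample_normal_gamma_gamma(a, c, kappa_B2, size, rng=None):
    """Prior draws from a normal-gamma-gamma (triple-gamma-type) hierarchy.
    a = c = 0.5 gives horseshoe-like behaviour: a spike at zero with
    heavy tails. Parameterization is illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    # Global layer: kappa^2 ~ Gamma(shape=c, scale=kappa_B2 / c)
    kappa2 = rng.gamma(shape=c, scale=kappa_B2 / c, size=size)
    # Local layer: xi^2 | kappa^2 ~ Gamma(shape=a, rate=a * kappa^2 / 2)
    xi2 = rng.gamma(shape=a, scale=2.0 / (a * kappa2), size=size)
    # Conditionally normal draw with variance xi^2
    return rng.normal(0.0, np.sqrt(xi2))
```

Small values of the shape parameters concentrate mass near zero (shrinking effectively constant coefficients) while the heavy tails leave genuinely time-varying effects unshrunk.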

]]>Econometrics doi: 10.3390/econometrics8020019

Authors: Bo Yu Bruce Mizrach Norman R. Swanson

We investigate the marginal predictive content of small versus large jump variation when forecasting one-week-ahead cross-sectional equity returns, building on Bollerslev et al. (2020). We find that sorting on signed small jump variation leads to greater value-weighted return differentials between stocks in our highest- and lowest-quintile portfolios (i.e., high&ndash;low spreads) than sorting on either signed total jump or signed large jump variation. It is shown that the benefit of signed small jump variation investing is driven by stock selection within an industry, rather than industry bets. Investors prefer stocks with a high probability of having positive jumps, but they also tend to overweight safer industries. Also, consistent with the findings in Scaillet et al. (2018), upside (downside) jump variation negatively (positively) predicts future returns. However, signed (large/small/total) jump variation has stronger predictive power than both upside and downside jump variation. One reason large and small (signed) jump variation have differing marginal predictive content is that the predictive content of signed large jump variation is negligible when controlling for either signed total jump variation or realized skewness. By contrast, signed small jump variation has unique information for predicting future returns, even when controlling for these variables. By analyzing earnings announcement surprises, we find that large jumps are closely associated with &ldquo;big&rdquo; news. However, while such news-related information is embedded in large jump variation, the information is generally short-lived and dissipates too quickly to provide marginal predictive content for subsequent weekly returns. Finally, we find that small jumps are more likely to be diversified away than large jumps and tend to be more closely associated with idiosyncratic risks. This indicates that small jumps are more likely to be driven by liquidity conditions and trading activity.
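The sorting variables can be illustrated with a stylised decomposition of realized variation from high-frequency returns into signed small- and large-jump components via a truncation threshold. The simple fixed threshold below is a generic assumption, not the exact estimator of Bollerslev et al. (2020):

```python
import numpy as np

def signed_jump_variation(returns, threshold):
    """Split signed jump variation (positive minus negative semivariance)
    into small- and large-move components using a truncation threshold.
    Stylised sketch; threshold choice is an assumption."""
    r = np.asarray(returns, dtype=float)
    r2 = r ** 2
    sign = np.sign(r)
    small = np.abs(r) <= threshold
    sj_small = np.sum(sign[small] * r2[small])    # signed variation, small moves
    sj_large = np.sum(sign[~small] * r2[~small])  # signed variation, large moves
    return sj_small, sj_large, sj_small + sj_large
```

Portfolio sorts of the kind described in the abstract then rank stocks each week on one of these signed measures.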

]]>Econometrics doi: 10.3390/econometrics8020018

Authors: Andrew B. Martinez

I analyze damage from hurricane strikes on the United States since 1955. Using machine learning methods to select the most important drivers for damage, I show that large errors in a hurricane&rsquo;s predicted landfall location result in higher damage. This relationship holds across a wide range of model specifications and when controlling for ex-ante uncertainty and potential endogeneity. Using a counterfactual exercise, I find that the cumulative reduction in damage from forecast improvements since 1970 is about $82 billion, which exceeds the U.S. government&rsquo;s spending on the forecasts and private willingness to pay for them.

]]>Econometrics doi: 10.3390/econometrics8020017

Authors: Dimitris Fouskakis Ioannis Ntzoufras

This paper focuses on Bayesian model averaging (BMA) using the power&ndash;expected&ndash;posterior prior in objective Bayesian variable selection under normal linear models. We derive a BMA point estimate of a predicted value, and present strategies for computing and evaluating prediction accuracy. We compare the performance of our method with that of similar approaches on a simulated and a real data example from economics.

]]>Econometrics doi: 10.3390/econometrics8020016

Authors: Michael P. Clements

We apply a bootstrap test to determine whether some forecasters are able to make superior probability assessments to others. In contrast to some findings in the literature for point predictions, there is evidence that some individuals really are better than others. The testing procedure controls for the different economic conditions the forecasters may face, given that each individual responds to only a subset of the surveys. One possible explanation for the different findings for point predictions and histograms is explored: newcomers may make less accurate histogram forecasts than experienced respondents, given the greater complexity of the task.
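A minimal version of such a bootstrap comparison, for two forecasters scored on the surveys both answered, might look as follows. The recentring scheme is an illustrative assumption; the paper's procedure additionally controls for the economic conditions each respondent faced:

```python
import numpy as np

def bootstrap_skill_test(score_diff, n_boot=5000, rng=None):
    """One-sided bootstrap test that the mean score differential exceeds
    zero. score_diff holds per-survey differences in probability scores
    between two forecasters on commonly answered surveys. Illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    d = np.asarray(score_diff, dtype=float)
    obs = d.mean()
    centred = d - obs  # impose the null of equal forecasting ability
    idx = rng.integers(0, len(d), size=(n_boot, len(d)))
    boot_means = centred[idx].mean(axis=1)
    p_value = float(np.mean(boot_means >= obs))
    return obs, p_value
```

A small p-value indicates that the observed score advantage is unlikely to arise from resampling noise under equal ability.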

]]>