Econometrics doi: 10.3390/econometrics9030029

Authors: Federico Bandi, Alex Maynard, Hyungsik Roger Moon, Benoit Perron

Peter Phillips has had a tremendous impact on econometric theory and practice [...]

Econometrics doi: 10.3390/econometrics9030028

Authors: Vincenzo Candila

Recently, the world of cryptocurrencies has experienced an undoubted increase in interest. Since the first cryptocurrency appeared in 2009 in the aftermath of the Great Recession, the popularity of digital currencies has risen year by year. As of February 2021, there were more than 8525 cryptocurrencies with a market value of approximately USD 1676 billion. These particular assets can be used to diversify a portfolio as well as for speculation. For this reason, investigating the daily volatility and co-volatility of cryptocurrencies is crucial for investors and portfolio managers. In this work, the interdependencies among a panel of the most traded digital currencies are explored and evaluated from statistical and economic points of view. Taking advantage of monthly Google queries on cryptocurrencies (which appear to be a factor driving the price dynamics), we adopt a mixed-frequency approach within the Dynamic Conditional Correlation (DCC) model. In particular, we introduce the Double Asymmetric GARCH–MIDAS model in the DCC framework.

Econometrics doi: 10.3390/econometrics9030027

Authors: Arifatus Solikhah, Heri Kuswanto, Nur Iriawan, Kartika Fithriasari

We generalize the Gaussian Mixture Autoregressive (GMAR) model to the Fisher’s z Mixture Autoregressive (ZMAR) model for modeling nonlinear time series. The model consists of a mixture of K-component Fisher’s z autoregressive models with mixing proportions that change over time. Using Fisher’s z distribution as the innovation in the MAR model, it can capture time series with both heteroskedasticity and a multimodal conditional distribution. The ZMAR model is classified as a nonlinearity-in-the-level (or mode) model because the mode of the Fisher’s z distribution is stable in its location parameter, whether symmetric or asymmetric. Using a Markov Chain Monte Carlo (MCMC) algorithm, namely the No-U-Turn Sampler (NUTS), we conducted a simulation study to compare the model’s performance with that of the GMAR model and the Student t Mixture Autoregressive (TMAR) model. The models are applied to daily IBM stock prices and monthly Brent crude oil prices. The results show that the proposed model outperforms the existing ones, as indicated by the Pareto-Smoothed Importance Sampling Leave-One-Out cross-validation (PSIS-LOO) minimum criterion.

Econometrics doi: 10.3390/econometrics9030026

Authors: Jennifer L. Castle, Jurgen A. Doornik, David F. Hendry

We investigate forecasting in models that condition on variables for which future values are unknown. We consider the role of the significance level because it guides the binary decisions whether to include or exclude variables. The analysis is extended by allowing for a structural break, either in the first forecast period or just before. Theoretical results are derived for a three-variable static model, but generalized to include dynamics and many more variables in the simulation experiment. The results show that the trade-off for selecting variables in forecasting models in a stationary world, namely that variables should be retained if their noncentralities exceed unity, still applies in settings with structural breaks. This provides support for model selection at looser than conventional settings, albeit with many additional features explaining the forecast performance, and with the caveat that retaining irrelevant variables that are subject to location shifts can worsen forecast performance.

Econometrics doi: 10.3390/econometrics9020025

Authors: Yuanyuan Deng, Hugo Benítez-Silva

Medicare is one of the largest federal social insurance programs in the United States and the secondary payer for Medicare beneficiaries covered by employer-provided health insurance (EPHI). However, an increasing number of individuals are delaying Medicare enrollment when they first become eligible at age 65. Using administrative data from the Medicare Current Beneficiary Survey (MCBS), this paper estimates the effects of EPHI, employment, and delays in Medicare enrollment on Medicare costs. Given the administrative nature of the data, we are able to disentangle and estimate the Medicare as secondary payer (MSP) effect and the work effects on Medicare costs, as well as to construct delayed-enrollment indicators. Using Heckman’s sample selection model, we estimate that MSP status and being employed are associated with a lower probability of observing positive Medicare spending and a lower level of Medicare spending. This paper quantifies annual savings of $5.37 billion from MSP and employment. Delays in Medicare enrollment generate additional annual savings of $10.17 billion. Owing to the links between employment, health insurance coverage, and Medicare costs presented in this research, our findings may be of interest to policy makers, who should take into account the consequences of reforms on the Medicare system.

Econometrics doi: 10.3390/econometrics9020024

Authors: Hildegart Ahumada, Magdalena Cornejo

We analyze the influence of climate change on soybean yields in a multivariate time-series framework for a major soybean producer and exporter—Argentina. Long-run relationships are found in partial systems involving climatic, technological, and economic factors. Automatic model selection simplifies dynamic specification for a model of soybean yields and permits encompassing tests of different economic hypotheses. Soybean yields adjust to disequilibria that reflect technological improvements to seed and crops practices. Climatic effects include (a) a positive effect from increased CO2 concentrations, which may capture accelerated photosynthesis, and (b) a negative effect from high local temperatures, which could increase with continued global warming.

Econometrics doi: 10.3390/econometrics9020023

Authors: Yixiao Jiang

This paper investigates the incentive of credit rating agencies (CRAs) to bias ratings using a semiparametric, ordered-response model. The proposed model explicitly takes conflicts of interest into account and allows the ratings to depend flexibly on risk attributes through a semiparametric index structure. Asymptotic normality for the estimator is derived after using several bias correction techniques. Using Moody’s rating data from 2001 to 2016, I found that firms related to Moody’s shareholders were more likely to receive better ratings. Such favorable treatments were more pronounced in investment grade bonds compared with high yield bonds, with the 2007–2009 financial crisis being an exception. Parametric models, such as the ordered-probit, failed to identify this heterogeneity of the rating bias across different bond categories.

Econometrics doi: 10.3390/econometrics9020022

Authors: Kajal Lahiri, Zulkarnain Pulungan

Following recent econometric developments, we use self-assessed general health on a Likert scale, conditioned by several objective determinants, to measure health disparity between non-Hispanic Whites and minority groups in the United States. A statistical decomposition analysis is conducted to determine the contributions of socio-demographic and neighborhood characteristics in generating disparities. Whereas 72% of the health disparity between Whites and Blacks is attributable to Blacks’ relatively worse socio-economic and demographic characteristics, the corresponding figures are only 50% for Hispanics and 65% for American Indians and Alaska Natives. The roles of a number of factors, including per capita income and income inequality, vary across the groups. Interestingly, the “blackness” of a county is associated with better health for all minority groups, but it affects Whites negatively. Our findings suggest that public health initiatives to eliminate health disparity should be targeted differently for different racial/ethnic groups by focusing on the most vulnerable within each group.

Econometrics doi: 10.3390/econometrics9020021

Authors: Manabu Asai, Chia-Lin Chang, Michael McAleer, Laurent Pauwels

This paper derives the statistical properties of a two-step approach to estimating multivariate rotated GARCH-BEKK (RBEKK) models. From the definition of RBEKK, the unconditional covariance matrix is estimated in the first step to rotate the observed variables in order to have the identity matrix for its sample covariance matrix. In the second step, the remaining parameters are estimated by maximizing the quasi-log-likelihood function. For this two-step quasi-maximum likelihood (2sQML) estimator, this paper shows consistency and asymptotic normality under weak conditions. While second-order moments are needed for the consistency of the estimated unconditional covariance matrix, the existence of the finite sixth-order moments is required for the convergence of the second-order derivatives of the quasi-log-likelihood function. This paper also shows the relationship between the asymptotic distributions of the 2sQML estimator for the RBEKK model and variance targeting quasi-maximum likelihood estimator for the VT-BEKK model. Monte Carlo experiments show that the bias of the 2sQML estimator is negligible and that the appropriateness of the diagonal specification depends on the closeness to either the diagonal BEKK or the diagonal RBEKK models. An empirical analysis of the returns of stocks listed on the Dow Jones Industrial Average indicates that the choice of the diagonal BEKK or diagonal RBEKK models changes over time, but most of the differences between the two forecasts are negligible.
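The first step of the two-step procedure, rotating the observed variables so that their sample covariance matrix is the identity, can be illustrated in a few lines. The sketch below is a minimal numpy illustration (the `rotate_returns` helper and the simulated data are not from the paper): the sample covariance is eigendecomposed and the demeaned observations are multiplied by its inverse square root, so the rotated series has an identity sample covariance by construction.

```python
import numpy as np

def rotate_returns(y):
    """First step of a 2sQML-style procedure: estimate the unconditional
    covariance matrix and rotate the observations so that the rotated
    series has an identity sample covariance matrix."""
    y = np.asarray(y, dtype=float)
    yc = y - y.mean(axis=0)                 # demean the observations
    s = yc.T @ yc / len(yc)                 # sample covariance matrix
    vals, vecs = np.linalg.eigh(s)          # symmetric eigendecomposition
    s_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return yc @ s_inv_sqrt, s               # rotated data, estimated covariance

rng = np.random.default_rng(0)
y = rng.standard_normal((500, 3)) @ np.array([[1.0, 0.3, 0.0],
                                              [0.0, 1.0, 0.5],
                                              [0.0, 0.0, 1.0]])
e, s_hat = rotate_returns(y)
cov_rotated = e.T @ e / len(e)              # identity up to floating-point error
```

The second step, maximizing the quasi-log-likelihood over the remaining BEKK parameters, is then carried out on the rotated series `e`.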

Econometrics doi: 10.3390/econometrics9020020

Authors: Antonio Pacifico

This paper improves a standard Structural Panel Bayesian Vector Autoregression model in order to jointly deal with issues of endogeneity, due to omitted factors and unobserved heterogeneity, and volatility, due to policy regime shifts and structural changes. Bayesian methods are used to select the best model solution for examining whether international spillovers come from multivariate volatility, time variation, or contemporaneous relationships. An empirical application to Central-Eastern and Western European economies is conducted to describe the performance of the methodology, with particular emphasis on the Great Recession and post-crisis periods. A simulated example is also addressed to highlight the performance of the estimation procedure. Findings from evidence-based forecasting are also used to evaluate the impact of an ongoing pandemic crisis on the global economy.

Econometrics doi: 10.3390/econometrics9020019

Authors: Gustavo Canavire-Bacarreza, Luis Castro Peñarrieta, Darwin Ugarte Ontiveros

Outliers can be particularly hard to detect, creating bias and inconsistency in the semi-parametric estimates. In this paper, we use Monte Carlo simulations to demonstrate that semi-parametric methods, such as matching, are biased in the presence of outliers. Bad and good leverage point outliers are considered. Bias arises in the case of bad leverage points because they completely change the distribution of the metrics used to define counterfactuals; good leverage points, on the other hand, increase the chance of breaking the common support condition and distort the balance of the covariates, which may push practitioners to misspecify the propensity score or the distance measures. We provide some clues to identify and correct for the effects of outliers following a reweighting strategy in the spirit of the Stahel-Donoho (SD) multivariate estimator of scale and location, and the S-estimator of multivariate location (Smultiv). An application of this strategy to experimental data is also implemented.

Econometrics doi: 10.3390/econometrics9020018

Authors: D. Stephen G. Pollock

Much of the algebra that is associated with the Kronecker product of matrices has been rendered in the conventional notation of matrix algebra, which conceals the essential structures of the objects of the analysis. This makes it difficult to establish even the most salient of the results. The problems can be greatly alleviated by adopting an orderly index notation that reveals these structures. This claim is demonstrated by considering a problem that several authors have already addressed without producing a widely accepted solution.

Econometrics doi: 10.3390/econometrics9020017

Authors: Konstantinos Gkillas, Christoforos Konstantatos, Costas Siriopoulos

We study the non-linear causal relation between uncertainty-due-to-infectious-diseases and stock–bond correlation. To this end, we use high-frequency 1-min data to compute daily realized measures of correlation and jumps, and then, we employ a nonlinear Granger causality test with the use of artificial neural networks so as to investigate the predictability of this type of uncertainty on realized stock–bond correlation and jumps. Our findings reveal that uncertainty-due-to-infectious-diseases has significant predictive value on the changes of the stock–bond relation.
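As an illustration of the realized measures involved, a daily realized correlation can be computed from intraday returns as the realized covariance normalized by the realized volatilities. The sketch below, with simulated 1-min returns and a hypothetical `realized_correlation` helper, shows only this minimal version of such a measure, not the authors' exact construction.

```python
import numpy as np

def realized_correlation(r1, r2):
    """Daily realized correlation from intraday (e.g., 1-min) returns:
    realized covariance normalized by the two realized volatilities."""
    rcov = np.sum(r1 * r2)                  # realized covariance
    rvol1 = np.sqrt(np.sum(r1 ** 2))        # realized volatility, asset 1
    rvol2 = np.sqrt(np.sum(r2 ** 2))        # realized volatility, asset 2
    return rcov / (rvol1 * rvol2)

rng = np.random.default_rng(1)
common = rng.standard_normal(390)           # one trading day of 1-min shocks
stock = 0.8 * common + 0.6 * rng.standard_normal(390)
bond = -0.5 * common + 0.9 * rng.standard_normal(390)
rc = realized_correlation(stock, bond)      # negative by construction here
```

Computing this quantity day by day yields the daily realized stock–bond correlation series whose predictability is examined in the paper.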

Econometrics doi: 10.3390/econometrics9020016

Authors: Liqiong Chen, Antonio F. Galvao, Suyong Song

This paper studies estimation and inference for linear quantile regression models with generated regressors. We suggest a practical two-step estimation procedure, where the generated regressors are computed in the first step. The asymptotic properties of the two-step estimator, namely, consistency and asymptotic normality are established. We show that the asymptotic variance-covariance matrix needs to be adjusted to account for the first-step estimation error. We propose a general estimator for the asymptotic variance-covariance, establish its consistency, and develop testing procedures for linear hypotheses in these models. Monte Carlo simulations to evaluate the finite-sample performance of the estimation and inference procedures are provided. Finally, we apply the proposed methods to study Engel curves for various commodities using data from the UK Family Expenditure Survey. We document strong heterogeneity in the estimated Engel curves along the conditional distribution of the budget share of each commodity. The empirical application also emphasizes that correctly estimating confidence intervals for the estimated Engel curves by the proposed estimator is of importance for inference.
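A minimal sketch of a two-step procedure of this kind: a first-stage regression produces the generated regressor, which then enters a quantile regression. Everything below is illustrative, not the paper's estimator; the data-generating process is made up, and a direct check-loss minimization stands in for a proper quantile regression routine (and, as the abstract stresses, naive second-step standard errors would still need the correction the paper develops).

```python
import numpy as np
from scipy.optimize import minimize

def quantile_reg(y, x, tau):
    """Linear quantile regression by direct minimization of the check loss."""
    def loss(b):
        u = y - x @ b
        return np.sum(u * (tau - (u < 0)))  # check (pinball) loss
    b0 = np.linalg.lstsq(x, y, rcond=None)[0]   # OLS starting values
    return minimize(loss, b0, method="Nelder-Mead",
                    options={"xatol": 1e-8, "fatol": 1e-8,
                             "maxiter": 20000}).x

rng = np.random.default_rng(6)
n = 300
w = rng.standard_normal(n)                  # first-stage covariate
x_latent = 1.5 * w + rng.standard_normal(n)
# Step 1: generate the regressor as a first-stage OLS fitted value
first_stage = np.polyfit(w, x_latent, 1)
x_hat = np.polyval(first_stage, w)          # generated regressor
# Step 2: quantile regression of y on the generated regressor
y = 2.0 + 1.0 * x_latent + rng.standard_normal(n)
X = np.column_stack([np.ones(n), x_hat])
beta_tau = quantile_reg(y, X, tau=0.5)      # median regression coefficients
```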

Econometrics doi: 10.3390/econometrics9020015

Authors: Jau-er Chen, Chien-Hsun Huang, Jia-Jyun Tien

In this study, we investigate the estimation and inference on a low-dimensional causal parameter in the presence of high-dimensional controls in an instrumental variable quantile regression. Our proposed econometric procedure builds on the Neyman-type orthogonal moment conditions of a previous study (Chernozhukov et al. 2018) and is thus relatively insensitive to the estimation of the nuisance parameters. The Monte Carlo experiments show that the estimator copes well with high-dimensional controls. We also apply the procedure to empirically reinvestigate the quantile treatment effect of 401(k) participation on accumulated wealth.

Econometrics doi: 10.3390/econometrics9010014

Authors: Souvik Banerjee, Anirban Basu

We provide evidence on the least biased ways to identify causal effects in situations where multiple outcomes all depend on the same endogenous regressor and a reasonable but potentially contaminated instrumental variable is available. Simulations provide suggestive evidence on the complementarity of instrumental variable (IV) and latent factor methods and on how this complementarity depends on the number of outcome variables and the degree of contamination in the IV. We apply these causal inference methods to assess the impact of mental illness on work absenteeism and disability, using the National Comorbidity Survey Replication.

Econometrics doi: 10.3390/econometrics9010013

Authors: Christian Leschinski, Michelle Voges, Philipp Sibbertsen

It is commonly found that the markets for long-term government bonds of Economic and Monetary Union (EMU) countries were integrated prior to the EMU debt crisis. Contrasting this, we show, based on the interrelation between market integration and fractional cointegration, that there were periods of integration and disintegration that coincide with bull and bear market periods in the stock market. An econometric argument about the spectral behavior of long-memory time series leads to the conclusion that there is a stronger differentiation between bonds with different default risks. This implied the possibility of macroeconomic and fiscal divergence between the EMU countries before the crisis periods.

Econometrics doi: 10.3390/econometrics9010012

Authors: Fabian Knorre, Martin Wagner, Maximilian Grupe

This paper develops residual-based monitoring procedures for cointegrating polynomial regressions (CPRs), i.e., regression models including deterministic variables and integrated processes, as well as integer powers, of integrated processes as regressors. The regressors are allowed to be endogenous, and the stationary errors are allowed to be serially correlated. We consider five variants of monitoring statistics and develop the results for three modified least squares estimators for the parameters of the CPRs. The simulations show that using the combination of self-normalization and a moving window leads to the best performance. We use the developed monitoring statistics to assess the structural stability of environmental Kuznets curves (EKCs) for both CO2 and SO2 emissions for twelve industrialized countries since the first oil price shock.

Econometrics doi: 10.3390/econometrics9010011

Authors: Boriss Siliverstovs

We assess the forecasting performance of the nowcasting model developed at the New York Fed. We show that an observation made earlier in the literature, regarding a striking difference in the model’s predictive ability across business cycle phases, also applies here. During expansions, the nowcasting model’s forecasts are, at best, as good as those of the historical mean model, whereas during recessionary periods there are very substantial gains, corresponding to a reduction in MSFE of about 90% relative to the benchmark model. We show how this asymmetry in relative forecasting performance can be verified by recursive measures of relative forecast accuracy such as the Cumulated Sum of Squared Forecast Error Difference (CSSFED) and the Recursive Relative Mean Squared Forecast Error based on Rearranged observations (R2MSFE(+R)). Ignoring these asymmetries results in a biased judgement of the relative forecasting performance of competing models over the sample as a whole, as well as during economic expansions, when the forecasting accuracy of a more sophisticated model relative to naive benchmark models tends to be overstated. Hence, care needs to be exercised when ranking models by their forecasting performance without taking the various states of the economy into consideration.
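The CSSFED measure itself is straightforward: it cumulates the difference between the squared forecast errors of the benchmark and of the competing model, so that upward movements of the path mark periods where the model beats the benchmark. A minimal sketch with made-up error series:

```python
import numpy as np

def cssfed(e_bench, e_model):
    """Cumulated Sum of Squared Forecast Error Difference: an upward-sloping
    path flags periods where the model beats the benchmark."""
    return np.cumsum(np.asarray(e_bench) ** 2 - np.asarray(e_model) ** 2)

e_bench = np.array([1.0, 2.0, 0.5, 3.0])    # benchmark forecast errors
e_model = np.array([0.5, 1.0, 1.0, 1.0])    # competing model forecast errors
path = cssfed(e_bench, e_model)             # [0.75, 3.75, 3.0, 11.0]
```

Plotting `path` over time makes the recession-driven gains visible as sharp upward jumps, which is exactly the asymmetry the paper documents.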

Econometrics doi: 10.3390/econometrics9010010

Authors: Šárka Hudecová, Marie Hušková, Simos G. Meintanis

This article considers goodness-of-fit tests for bivariate INAR and bivariate Poisson autoregression models. The test statistics are based on an L2-type distance between two estimators of the probability generating function of the observations: one entirely nonparametric and the other semiparametric, computed under the corresponding null hypothesis. The asymptotic distribution of the proposed test statistics, both under the null hypotheses and under alternatives, is derived, and consistency is proved. The case of testing bivariate generalized Poisson autoregression and the extension of the methods to dimensions higher than two are also discussed. The finite-sample performance of a parametric bootstrap version of the tests is illustrated via a series of Monte Carlo experiments. The article concludes with applications on real data sets and discussion.

Econometrics doi: 10.3390/econometrics9010009

Authors: J. Eduardo Vera-Valdés

Econometric studies of global heating have typically used regional or global temperature averages to study its long memory properties. One typical explanation for the long memory properties of temperature averages is cross-sectional aggregation. Nonetheless, a formal analysis of the effect that aggregation has on the long memory dynamics of temperature data has been missing. Thus, this paper studies the long memory properties of individual grid temperatures and compares them against the long memory dynamics of global and regional averages. Our results show that the long memory parameters in individual grid observations are smaller than those from regional averages. Global and regional long memory estimates are greatly affected by temperature measurements at the Tropics, where the data are less reliable. Thus, this paper supports the notion that aggregation may be exacerbating the long memory estimated in regional and global temperature data. The results are robust to the bandwidth parameter, the limit for the station radius of influence, and the sampling frequency.
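The abstract does not specify the estimator, but long memory parameters of this kind are commonly estimated by log-periodogram (GPH-type) regression, in which the log periodogram is regressed on minus twice the log frequency over a slowly growing band of low frequencies. The sketch below is a generic illustration of that technique, not necessarily the paper's implementation; the helper name and bandwidth choice are assumptions.

```python
import numpy as np

def gph_estimate(x, bandwidth_exp=0.5):
    """Log-periodogram (GPH-type) estimate of the memory parameter d:
    regress log I(lambda_j) on -2*log(lambda_j) over the first m frequencies."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = int(n ** bandwidth_exp)             # number of low frequencies used
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    periodogram = np.abs(dft) ** 2 / (2 * np.pi * n)
    regressor = -2 * np.log(freqs)
    return np.polyfit(regressor, np.log(periodogram), 1)[0]  # slope = d-hat

rng = np.random.default_rng(2)
d_hat = gph_estimate(rng.standard_normal(2000))   # white noise: true d is 0
```

Applying such an estimator grid cell by grid cell, and then to the regional averages, is what allows the comparison of individual and aggregated long memory parameters described above.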

Econometrics doi: 10.3390/econometrics9010008

Authors: Paula Simões, Sérgio Gomes, Isabel Natário

Hospital emergency departments are often overused by patients who do not really need urgent care. These admissions are one of the major factors contributing to hospital costs, which should not be allowed to compromise the response and effectiveness of the National Health Service (SNS). The aim of this study is to perform a detailed spatial health econometrics analysis of non-urgent emergency situations (classified by Manchester triage) by area, linking them with the efficient use of the national health line, the Saude24 line (S24 line). This is evaluated through the S24 savings calls, using a savings index and its spatial effectiveness in solving non-urgent emergency situations. A savings call is a call by a user whose initial intention was to go to an emergency department but who, after calling the S24 line, changed his/her mind. Given the spatial nature of the data, and resorting to INLA in a Bayesian paradigm, the number of non-urgent cases in Portuguese hospital emergency departments is modeled in an autoregressive way. The spatial structure is accounted for by a set of random effects. The model additionally includes regular covariates and a spatially lagged covariate, the savings index, related to the S24 savings calls. Therefore, the response in a given area depends not only on the (weighted) values of the response in its neighborhood and of the considered covariates, but also on the (weighted) values of the covariate savings index measured in each neighbor, by means of a Bayesian Poisson spatial Durbin model.

Econometrics doi: 10.3390/econometrics9010007

Authors: Kevin D. Hoover

The author would like to make the following correction to the article by Kevin D [...]

Econometrics doi: 10.3390/econometrics9010006

Authors: Kyungsik Nam

This study proposes a nonlinear cointegrating regression model based on the well-known energy balance climate model. Specifically, I investigate the nonlinear cointegrating regression of the mean of temperature anomaly distributions on total radiative forcing using estimated spatial distributions of temperature anomalies for the Globe, Northern Hemisphere, and Southern Hemisphere. Further, I provide two types of nonlinear response functions that map the total radiative forcing level to mean temperature anomalies. The proposed statistical model provides a climatological implication that spatially heterogenous warming effects play a significant role in identifying nonlinear climate sensitivity. Cointegration and specification tests are provided that support the existence of nonlinear effects of total radiative forcing.

Econometrics doi: 10.3390/econometrics9010005

Authors: Katarina Juselius

This survey paper discusses the Cointegrated Vector AutoRegressive (CVAR) methodology and how it has evolved over the past 30 years. It describes major steps in the econometric development, discusses problems to be solved when confronting theory with the data, and, as a solution, proposes a so-called theory-consistent CVAR scenario. A number of early CVAR applications were motivated by the urge to find out why the empirical results did not support Milton Friedman’s concept of monetary inflation. The paper also proposes a method for combining partial CVAR analyses into a large-scale macroeconomic model. It argues that an empirically-based approach to macroeconomics should preferably be grounded in Keynesian disequilibrium economics, where imperfect knowledge expectations replace so-called rational expectations and where the financial sector plays a key role in understanding the long persistent movements in the data. Finally, the paper argues that the CVAR is potentially a candidate for Haavelmo’s “design of experiment for passive observations” and provides several illustrations.

Econometrics doi: 10.3390/econometrics9010004

Authors: Econometrics Editorial Office

Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that Econometrics maintains its standards for the high quality of its published papers [...]

Econometrics doi: 10.3390/econometrics9010003

Authors: D. Stephen G. Pollock

The effect of the conventional model-based methods of seasonal adjustment is to nullify the elements of the data that reside at the seasonal frequencies and to attenuate the elements at the adjacent frequencies. It may be desirable to nullify some of the adjacent elements instead of merely attenuating them. For this purpose, two alternative sets of procedures are presented that have been implemented in a computer program named SEASCAPE. In the first set of procedures, a basic seasonal adjustment filter is augmented by additional filters that are targeted at the adjacent frequencies. In the second set of procedures, a Fourier transform of the data is exploited to allow the elements in the vicinities of the seasonal frequencies to be eliminated or attenuated at will. The question is raised of whether an estimated trend-cycle trajectory that is devoid of high-frequency noise can serve in place of the seasonally adjusted data.
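The second set of procedures, in which Fourier ordinates at and around the seasonal frequencies are eliminated, can be illustrated directly. The `nullify_seasonal` helper and the simulated monthly series below are illustrative, not taken from SEASCAPE: the series is transformed with a real FFT, the ordinates at each seasonal frequency (and optionally `width` neighbours on each side) are set to zero, and the inverse transform returns the adjusted series.

```python
import numpy as np

def nullify_seasonal(x, period, width=0):
    """Zero out the Fourier ordinates at the seasonal frequencies and at
    `width` neighbours on each side, then transform back."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    f = np.fft.rfft(x)
    for k in range(1, period // 2 + 1):
        centre = round(k * n / period)      # index of the k-th seasonal frequency
        for j in range(centre - width, centre + width + 1):
            if 0 < j < len(f):
                f[j] = 0.0                  # nullify, rather than attenuate
    return np.fft.irfft(f, n)

t = np.arange(240)                          # 20 years of monthly observations
seasonal = 3.0 * np.sin(2 * np.pi * t / 12)
rng = np.random.default_rng(3)
noise = 0.5 * rng.standard_normal(240)
adjusted = nullify_seasonal(seasonal + noise, period=12)
```

Setting `width > 0` nullifies the adjacent elements as well, which is the alternative to merely attenuating them that the paper advocates.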

Econometrics doi: 10.3390/econometrics9010002

Authors: Muhammad Bhatti, Jae Kim

As the guest editors of this Special Issue, we feel proud and grateful to write the editorial note of this issue, which consists of seven high-quality research papers [...]

Econometrics doi: 10.3390/econometrics9010001

Authors: N’Golo Koné

The maximum diversification portfolio has been shown in the literature to depend on the vector of asset volatilities and the inverse of the covariance matrix of asset returns. In practice, these two quantities need to be replaced by their sample statistics. The estimation error associated with the use of these sample statistics may be amplified by (near) singularity of the covariance matrix in financial markets with many assets. This, in turn, may lead to the selection of portfolios that are far from optimal with respect to standard portfolio performance measures. To address this problem, we investigate three regularization techniques, the ridge, the spectral cut-off, and the Landweber–Fridman approaches, in order to stabilize the inverse of the covariance matrix. These regularization schemes involve a tuning parameter that needs to be chosen, and in light of this fact, we propose a data-driven method for selecting it. We show that the portfolio selected by regularization is asymptotically efficient with respect to the diversification ratio. In empirical and Monte Carlo experiments, the resulting regularized rules are compared to several strategies, such as the most diversified portfolio, the target portfolio, the global minimum variance portfolio, and the naive 1/N strategy, in terms of in-sample and out-of-sample Sharpe ratio performance, and it is shown that our method yields significant Sharpe ratio improvements.
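A minimal sketch of the ridge variant: the ill-conditioned inverse in the maximum diversification weights is stabilized by adding a ridge term to the sample covariance before inversion. The helper name, the simulated data, and the fixed tuning value below are illustrative; the paper's data-driven selection of the tuning parameter is not reproduced here.

```python
import numpy as np

def ridge_max_div_weights(returns, lam):
    """Maximum diversification weights with a ridge-regularized inverse:
    w proportional to (Sigma + lam*I)^{-1} sigma, normalized to sum to one."""
    cov = np.cov(returns, rowvar=False)     # sample covariance matrix
    vols = np.sqrt(np.diag(cov))            # vector of asset volatilities
    w = np.linalg.solve(cov + lam * np.eye(len(vols)), vols)
    return w / w.sum()

rng = np.random.default_rng(4)
r = rng.standard_normal((60, 10)) * 0.01    # 60 periods, 10 assets: noisy S
w = ridge_max_div_weights(r, lam=1e-4)      # lam on the scale of the variances
```

With `lam = 0` and many assets relative to observations, the solve step would amplify estimation error exactly as described above; the ridge term caps the smallest eigenvalues of the matrix being inverted.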

Econometrics doi: 10.3390/econometrics8040044

Authors: Martin Huber, Anna Solovyeva

This paper extends the evaluation of direct and indirect treatment effects, i.e., mediation analysis, to the case that outcomes are only partially observed due to sample selection or outcome attrition. We assume sequential conditional independence of the treatment and the mediator, i.e., the variable through which the indirect effect operates. We also impose missing at random or instrumental variable assumptions on the outcome attrition process. Under these conditions, we derive identification results for the effects of interest that are based on inverse probability weighting by specific treatment, mediator, and/or selection propensity scores. We also provide a simulation study and an empirical application to the U.S. Project STAR data in which we assess the direct impact and indirect effect (via absenteeism) of smaller kindergarten classes on math test scores. The estimators considered are available in the ‘causalweight’ package for the statistical software ‘R’.

Econometrics doi: 10.3390/econometrics8040043

Authors: Michael D. Goldberg, Olesia Kozlova, Deniz Ozabaci

This paper examines the stability of the Bilson–Fama regression for a panel of 55 developed and developing countries. We find multiple break points for nearly every country in our panel. Subperiod estimates of the slope coefficient show a negative bias during some time periods and a positive bias during other time periods in nearly every country. The subperiod biases display two key patterns that shed light on the literature’s linear regression findings. The results point toward the importance of risk in currency markets. We find that risk is greater for developed country markets. The evidence undercuts the widespread view that currency returns are predictable or that developed country markets are less rational.

Econometrics doi: 10.3390/econometrics8040042

Authors: Dietmar Bauer, Lukas Matuschek, Patrick de Matos Ribeiro, Martin Wagner

We develop and discuss a parameterization of vector autoregressive moving average processes with arbitrary unit roots and (co)integration orders. The detailed analysis of the topological properties of the parameterization, based on the state space canonical form of Bauer and Wagner (2012), is an essential input for establishing statistical and numerical properties of pseudo maximum likelihood estimators as well as, e.g., pseudo likelihood ratio tests based on them. The general results are exemplified in detail for the empirically most relevant cases, the (multiple frequency or seasonal) I(1) and the I(2) case. For these two cases we also discuss the modeling of deterministic components in detail.

Econometrics doi: 10.3390/econometrics8040041

Authors: Eric Hillebrand, Søren Johansen, Torben Schmith

We study the stability of estimated linear statistical relations of global mean temperature and global mean sea level with regard to data revisions. Using four different model specifications proposed in the literature, we compare coefficient estimates and long-term sea level projections using two different vintages of each of the annual time series, covering the periods 1880–2001 and 1880–2013. We find that temperature and sea level updates and revisions have a substantial influence both on the magnitude of the estimated coefficients of influence (differences of up to 50%) and therefore on long-term projections of sea level rise following the RCP4.5 and RCP6 scenarios (differences of up to 40 cm by the year 2100). This shows that in order to replicate earlier results that informed the scientific discussion and motivated policy recommendations, it is crucial to have access to and to work with the data vintages used at the time.

Econometrics doi: 10.3390/econometrics8040040

Authors: Erhard Reschenhofer Manveer K. Mangat

For typical sample sizes occurring in economic and financial applications, the squared bias of estimators for the memory parameter is small relative to the variance. Smoothing is therefore a suitable way to improve the performance in terms of the mean squared error. However, in an analysis of financial high-frequency data, where the estimates are obtained separately for each day and then combined by averaging, the variance decreases with the sample size but the bias remains fixed. This paper proposes a method of smoothing that does not entail an increase in the bias. This method is based on the simultaneous examination of different partitions of the data. An extensive simulation study is carried out to compare it with conventional estimation methods. In this study, the new method outperforms its unsmoothed competitors with respect to the variance and its smoothed competitors with respect to the bias. Using the results of the simulation study for the proper interpretation of the empirical results obtained from a financial high-frequency dataset, we conclude that significant long-range dependencies are present only in the intraday volatility but not in the intraday returns. Finally, the robustness of these findings against daily and weekly periodic patterns is established.
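The estimators in question are periodogram-based estimators of the memory parameter. As a point of reference only — this is the standard log-periodogram (GPH) regression, not the authors' partition-based smoother — a minimal sketch might look like:

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of the memory parameter d.

    Regress log I(lambda_j) on z_j = -2*log(lambda_j) over the first m
    Fourier frequencies; the slope estimates d."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))  # common rule-of-thumb bandwidth
    freqs = 2 * np.pi * np.arange(1, m + 1) / n      # Fourier frequencies j = 1..m
    fft = np.fft.fft(x - x.mean())
    periodogram = np.abs(fft[1:m + 1]) ** 2 / (2 * np.pi * n)
    y = np.log(periodogram)
    z = -2 * np.log(freqs)
    z_c = z - z.mean()
    return np.sum(z_c * y) / np.sum(z_c ** 2)        # OLS slope = d_hat

# For white noise (no long memory) the estimate should be near zero.
rng = np.random.default_rng(1)
d_hat = gph_estimate(rng.standard_normal(4096), m=64)
```

The bias/variance trade-off the abstract describes is governed by the bandwidth `m`: larger `m` lowers the variance of the slope but admits more medium-frequency contamination, i.e., more bias.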

Econometrics doi: 10.3390/econometrics8040039

Authors: Nandana Sengupta Fallaw Sowell

The asymptotic distribution of the linear instrumental variables (IV) estimator with an empirically selected ridge regression penalty is characterized. The regularization tuning parameter is selected by splitting the observed data into training and test samples and becomes an estimated parameter that jointly converges with the parameters of interest. The asymptotic distribution is a nonstandard mixture distribution. Monte Carlo simulations show that the asymptotic distribution captures the characteristics of the sampling distributions and indicate when this ridge estimator performs better than two-stage least squares. An empirical application to returns-to-education data is presented.
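A stripped-down sketch of a ridge-penalized IV estimator helps fix ideas — here the penalty `k` is taken as given, whereas the paper's contribution is precisely the train/test selection of `k` and the resulting joint asymptotics:

```python
import numpy as np

def ridge_iv(y, X, Z, k):
    """Ridge-regularized IV estimate b(k) = (X'Pz X + k I)^{-1} X'Pz y,
    where Pz = Z (Z'Z)^{-1} Z' projects onto the instrument space.
    Setting k = 0 recovers two-stage least squares."""
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    A = X.T @ Pz @ X + k * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ Pz @ y)

# Toy check: with a strong instrument and k = 0 we recover 2SLS,
# which is consistent for the structural slope despite endogeneity.
rng = np.random.default_rng(0)
n = 2000
Z = rng.standard_normal((n, 1))
u = rng.standard_normal(n)
x = Z[:, 0] + 0.5 * u + 0.1 * rng.standard_normal(n)  # endogenous regressor
y = 2.0 * x + u
b_2sls = ridge_iv(y, x.reshape(-1, 1), Z, k=0.0)
```

A positive `k` stabilizes the inverse when instruments are weak or regressors are nearly collinear, at the cost of shrinkage bias — the trade-off the data-driven penalty choice is meant to navigate.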

Econometrics doi: 10.3390/econometrics8030038

Authors: Yuanyuan Li Dietmar Bauer

In this paper, the theory on the estimation of vector autoregressive (VAR) models for I(2) processes is extended to the case of long VAR approximations of more general processes. Here, the order of the autoregression is allowed to tend to infinity at a rate depending on the sample size. We deal with unrestricted OLS estimators (in the model formulated in levels as well as in vector error correction form) and with two-stage estimation (2SI2) in the vector error correction model (VECM) formulation. Our main results are analogous to the I(1) case: we show that the long VAR approximation leads to consistent estimates of the long- and short-run dynamics. Furthermore, tests on the autoregressive coefficients follow standard asymptotics. The pseudo likelihood ratio tests on the cointegrating ranks (using the Gaussian likelihood) used in the 2SI2 algorithm have, under the null hypothesis, the same distributions as in the case of data generating processes following finite order VARs. The same holds true for the asymptotic distribution of the long-run dynamics, both in the unrestricted VECM estimation and in the reduced rank regression in the 2SI2 algorithm. Building on these results, we show that if the data are generated by an invertible VARMA process, the VAR approximation can be used to derive a consistent initial estimator for subsequent pseudo likelihood optimization in the VARMA model.

Econometrics doi: 10.3390/econometrics8030037

Authors: C. Vladimir Rodríguez-Caballero J. Eduardo Vera-Valdés

This paper studies long economic series to assess the long-lasting effects of pandemics. We analyze whether periods that cover pandemics show a change in trend and persistence in growth, and in level and persistence in unemployment. We find that there is an upward trend in the persistence level of growth across centuries. In particular, shocks originating from pandemics in recent times seem to have a permanent effect on growth. Moreover, our results show that the unemployment rate increases and becomes more persistent after a pandemic. In this regard, our findings support the design and implementation of timely counter-cyclical policies to soften the shock of the pandemic.

Econometrics doi: 10.3390/econometrics8030036

Authors: Jeremy Arkes

Building on Joshua Angrist and Jörn-Steffen Pischke’s arguments for how the teaching of undergraduate econometrics could become more effective, I propose a redesign of graduate econometrics that would better serve most students and help make the field of economics more relevant. The primary basis for the redesign is that the conventional methods do not adequately prepare students to recognize biases and to properly interpret significance, insignificance, and p-values; and there is an ethical problem in searching for significance and other matters. Based on these premises, I recommend that some of Angrist and Pischke’s recommendations be adopted for graduate econometrics. In addition, I recommend further shifts in emphasis, new pedagogy, and the addition of important components (e.g., on interpretation and simple ethical lessons) that are largely ignored in current textbooks. An obvious implication of these recommended changes is a confirmation of most of Angrist and Pischke’s recommendations for undergraduate econometrics, as well as further reductions in complexity.

Econometrics doi: 10.3390/econometrics8030035

Authors: D. Stephen G. Pollock

The econometric data to which autoregressive moving-average models are commonly applied are liable to contain elements from a limited range of frequencies. If the data do not cover the full Nyquist frequency range of [0, π] radians, then severe biases can occur in estimating their parameters. The recourse should be to reconstitute the underlying continuous data trajectory and to resample it at an appropriate lesser rate. The trajectory can be derived by associating sinc function kernels with the data points. This suggests a model for the underlying processes. The paper describes frequency-limited linear stochastic differential equations that conform to such a model, and it compares them with equations of a model that is assumed to be driven by a white-noise process of unbounded frequencies. The means of estimating models of both varieties are described.
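The sinc-kernel reconstruction step can be illustrated directly. This is a toy sketch that ignores the truncation issues a finite data record raises near its boundaries:

```python
import numpy as np

def sinc_reconstruct(samples, t, dt=1.0):
    """Reconstruct a band-limited trajectory from regularly spaced
    samples by attaching a sinc kernel to each data point:
        x(t) = sum_k x[k] * sinc((t - k*dt) / dt),
    where np.sinc(u) is the normalized sin(pi*u)/(pi*u)."""
    samples = np.asarray(samples, dtype=float)
    k = np.arange(len(samples))
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return np.array([np.sum(samples * np.sinc((ti - k * dt) / dt)) for ti in t])

# A sinusoid well below the Nyquist frequency, sampled at unit intervals.
t_grid = np.arange(200)
x = np.sin(2 * np.pi * 0.1 * t_grid)
at_sample = sinc_reconstruct(x, 50.0)[0]     # reproduces the data point exactly
between = sinc_reconstruct(x, 100.5)[0]      # interpolates between samples
```

At the sample points the interpolant reproduces the data exactly, since the sinc kernel equals one at zero and vanishes at all other integers; between sample points it fills in the band-limited trajectory, which can then be resampled at a lesser rate as the abstract suggests.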

Econometrics doi: 10.3390/econometrics8030034

Authors: Yong Bao Xiaotian Liu Lihong Yang

The ordinary least squares (OLS) estimator for spatial autoregressions may be consistent, as pointed out by Lee (2002), provided that each spatial unit is influenced aggregately by a significant portion of the total units. This paper presents a unified asymptotic distribution result for the properly recentered OLS estimator and proposes a new estimator that is based on the indirect inference (II) procedure. The resulting estimator can always be used regardless of the degree of aggregate influence on each spatial unit from other units and is consistent and asymptotically normal. The new estimator does not rely on distributional assumptions and is robust to unknown heteroscedasticity. Its good finite-sample performance, in comparison with existing estimators that are also robust to heteroscedasticity, is demonstrated by a Monte Carlo study.

Econometrics doi: 10.3390/econometrics8030033

Authors: Stefan Mittnik Willi Semmler Alexander Haider

Recent research in financial economics has shown that rare large disasters have the potential to disrupt financial sectors via the destruction of capital stocks and jumps in risk premia. These disruptions often entail negative feedback effects on the macroeconomy. Research on disaster risks has also been actively pursued in macroeconomic models of climate change. Our paper uses insights from the former work to study disaster risks in the macroeconomics of climate change and to spell out policy needs. Empirically, the link between carbon dioxide emissions and the frequency of climate-related disasters is investigated using a panel data approach. The modeling part then uses a multi-phase dynamic macro model to explore the effects of rare large disasters resulting in capital losses and rising risk premia. Our proposed multi-phase dynamic model, incorporating climate-related disaster shocks and their aftermath as a distressed phase, is suitable for studying mitigation and adaptation policies as well as recovery policies.

Econometrics doi: 10.3390/econometrics8030032

Authors: Katsuto Tanaka Weilin Xiao Jun Yu

This paper estimates the drift parameters in the fractional Vasicek model from a continuous record of observations via maximum likelihood (ML). The asymptotic theory for the ML estimates (MLE) is established in the stationary case, the explosive case, and the boundary case for the entire range of the Hurst parameter, providing a complete treatment of asymptotic analysis. It is shown that changing the sign of the persistence parameter changes the asymptotic theory for the MLE, including the rate of convergence and the limiting distribution. It is also found that the asymptotic theory depends on the value of the Hurst parameter.

Econometrics doi: 10.3390/econometrics8030031

Authors: Kevin D. Hoover

The relation between causal structure and cointegration and long-run weak exogeneity is explored using some ideas drawn from the literature on graphical causal modeling. It is assumed that the fundamental source of trending behavior is transmitted from exogenous (and typically latent) trending variables to a set of causally ordered variables that would not themselves display nonstationary behavior if the nonstationary exogenous causes were absent. The possibility of inferring the long-run causal structure among a set of time-series variables from an exhaustive examination of weak exogeneity in irreducibly cointegrated subsets of variables is explored and illustrated.

Econometrics doi: 10.3390/econometrics8030030

Authors: Peter C. B. Phillips

We discuss some conceptual and practical issues that arise from the presence of global energy balance effects on station level adjustment mechanisms in dynamic panel regressions with climate data. The paper provides asymptotic analyses, observational data computations, and Monte Carlo simulations to assess the use of various estimation methodologies, including standard dynamic panel regression and cointegration techniques that have been used in earlier research. The findings reveal massive bias in system GMM estimation of the dynamic panel regression parameters, which arises from fixed effect heterogeneity across individual station level observations. Difference GMM and Within Group (WG) estimation have little bias, and WG estimation is recommended for practical implementation of dynamic panel regression with highly disaggregated climate data. Intriguingly, from an econometric perspective and importantly for global policy analysis, it is shown that in this model, despite the substantial differences between the estimates of the regression model parameters, estimates of global transient climate sensitivity (of temperature to a doubling of atmospheric CO2) are robust to the estimation method employed and to the specific nature of the trending mechanism in global temperature, radiation, and CO2.

Econometrics doi: 10.3390/econometrics8030029

Authors: Marit Gjelsvik Ragnar Nymoen Victoria Sparrman

Wage coordination plays an important role in macroeconomic stabilization. Pattern wage bargaining systems have been common in Europe, but in different forms and with different degrees of success in terms of the actual coordination reached. We focus on wage formation in Norway, a small open economy, where it is customary to regard the manufacturing industry as the wage leader. We estimate a model of wage formation in manufacturing and in two other sectors. Deciding the cointegration rank is an important step in the analysis, economically as well as statistically. In combination with simultaneous equation modelling, the cointegration analysis provides evidence that collective wage negotiations in manufacturing have defined wage norms for the rest of the economy over the period 1980(1)–2014(4).

Econometrics doi: 10.3390/econometrics8030028

Authors: Manveer Kaur Mangat Erhard Reschenhofer

The goal of this paper is to search for conclusive evidence against the stationarity of the global air surface temperature, which is one of the most important indicators of climate change. For this purpose, possible long-range dependencies are investigated in the frequency domain. Since conventional tests of hypotheses about the memory parameter, which measures the degree of long-range dependence, are typically based on asymptotic arguments and are therefore of limited practical value in the case of small or medium sample sizes, we employ a new small-sample test as well as a related estimator for the memory parameter. To safeguard against false positive findings, simulation studies are carried out to examine the suitability of the employed methods, and hemispheric datasets are used to check the robustness of the empirical findings against low-frequency natural variability caused by oceanic cycles. Overall, our frequency-domain analysis provides strong evidence of non-stationarity, which is consistent with previous results obtained in the time domain with models allowing for stochastic or deterministic trends.

Econometrics doi: 10.3390/econometrics8030027

Authors: Céline Cunen Nils Lid Hjort

When using the Focused Information Criterion (FIC) for assessing and ranking candidate models with respect to how well they do for a given estimation task, it is customary to produce a so-called FIC plot. This plot has the different point estimates along the y-axis and the root-FIC scores on the x-axis, these being the estimated root-mean-square scores. In this paper we address the estimation uncertainty involved in each of the points of such a FIC plot. This needs careful assessment of each of the estimators from the candidate models, taking also modelling bias into account, along with the relative precision of the associated estimated mean squared error quantities. We use confidence distributions for these tasks. This leads to fruitful CD–FIC plots, helping the statistician to judge to what extent the seemingly best models really are better than other models, etc. These efforts also lead to two further developments. The first is a new tool for model selection, which we call the quantile-FIC, which helps overcome certain difficulties associated with the usual FIC procedures, related to somewhat arbitrary schemes for handling estimated squared biases. A particular case is the median-FIC. The second development is to form model averaged estimators with weights determined by the relative sizes of the median- and quantile-FIC scores.

Econometrics doi: 10.3390/econometrics8020026

Authors: Francis Bilson Darku Frank Konietschke Bhargab Chattopadhyay

The Gini index, a widely used economic inequality measure, is computed from data whose designs involve clustering and stratification, generally known as complex household surveys. For complex household surveys, we develop two novel procedures for estimating the Gini index with a pre-specified error bound and confidence level. The two proposed approaches are based on the concept of sequential analysis, which is known to be economical in the sense of obtaining an optimal cluster size that reduces the project cost (that is, the total sampling cost), thereby achieving the pre-specified error bound and confidence level under reasonable assumptions. Some large-sample properties of the proposed procedures are examined without assuming any specific distribution. Empirical illustrations of both procedures are provided using the consumption expenditure data obtained by the National Sample Survey (NSS) Organization in India.
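For reference, the fixed-sample Gini index itself — before any sequential stopping rule or survey-design correction is layered on top — can be computed from the mean absolute difference:

```python
import numpy as np

def gini(x):
    """Gini index via the mean-absolute-difference formula
        G = E|X_i - X_j| / (2 * E[X]),
    with both expectations taken over the empirical distribution.
    G = 0 under perfect equality; G -> (n-1)/n under perfect
    concentration in a single unit."""
    x = np.asarray(x, dtype=float)
    mad = np.abs(x[:, None] - x[None, :]).mean()  # mean pairwise gap
    return mad / (2.0 * x.mean())
```

For example, `gini([5, 5, 5, 5])` is 0, while `gini([0, 0, 0, 1])` is 0.75, the maximum (n−1)/n attainable with n = 4 observations. The O(n²) pairwise form shown here is for clarity; a sorted O(n log n) formula is standard for large samples.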

Econometrics doi: 10.3390/econometrics8020025

Authors: Fernanda Valente Márcio Laurini

In this paper, we analyze tornado occurrences in the United States. To perform inference for the spatio-temporal point process, we adopt a dynamic representation of the Log-Gaussian Cox Process. This representation is based on the decomposition of the intensity function into trend, cycle, and spatial-effect components. In this model, the spatial effects are also represented by a dynamic functional structure, which allows us to analyze possible changes in the spatio-temporal distribution of tornado occurrences due to possible changes in climate patterns. The model was estimated using Bayesian inference through Integrated Nested Laplace Approximations. We use data from the Storm Prediction Center’s Severe Weather Database between 1954 and 2018, and the results provide evidence, from new perspectives, that trends in annual tornado occurrences in the United States have remained relatively constant, supporting previously reported findings.

Econometrics doi: 10.3390/econometrics8020024

Authors: Robert C. Jung Andrew R. Tremayne

The paper is concerned with the estimation and application of a special stationary integer autoregressive model in which multiple binomial thinnings are not independent of one another. Parameter estimation in such models has hitherto been accomplished using the method of moments or nonlinear least squares, but not maximum likelihood. We obtain the conditional distribution needed to implement maximum likelihood. The sampling performance of the new estimator is compared to extant ones by reporting the results of some simulation experiments. An application to a stock-type data set of financial counts is provided, and the conditional distribution is used to compare two competing models and in forecasting.
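The building block of such models is the binomial thinning operator α∘X: a binomial draw with X trials and success probability α. A minimal simulator for the simplest member of the family, a first-order INAR model with Poisson innovations (which need not match the paper's dependent-thinning specification), is:

```python
import numpy as np

def simulate_inar1(alpha, lam, n, seed=0):
    """Simulate X_t = alpha o X_{t-1} + e_t, where 'o' is binomial
    thinning (each of the X_{t-1} units survives independently with
    probability alpha) and e_t ~ Poisson(lam). The stationary mean
    is lam / (1 - alpha)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1.0 - alpha))        # start near the stationary mean
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # alpha o X_{t-1}
        x[t] = survivors + rng.poisson(lam)        # plus new arrivals
    return x

x = simulate_inar1(alpha=0.5, lam=2.0, n=50_000, seed=3)
```

Because thinning can only produce nonnegative integers, the series stays count-valued by construction — the feature that distinguishes INAR models from Gaussian AR(1) discretizations.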

Econometrics doi: 10.3390/econometrics8020023

Authors: Virgilio Gómez-Rubio Roger S. Bivand Håvard Rue

The integrated nested Laplace approximation (INLA) for Bayesian inference is an efficient approach to estimating the posterior marginal distributions of the parameters and latent effects of Bayesian hierarchical models that can be expressed as latent Gaussian Markov random fields (GMRF). The representation as a GMRF allows the associated software R-INLA to estimate the posterior marginals in a fraction of the time required by typical Markov chain Monte Carlo algorithms. INLA can be extended by means of Bayesian model averaging (BMA) to increase the number of models that it can fit to conditional latent GMRFs. In this paper, we review the use of BMA with INLA and propose a new example on spatial econometrics models.

Econometrics doi: 10.3390/econometrics8020022

Authors: Alex Lenkoski Fredrik L. Aanes

In economic applications, model averaging has found principal use in examining the validity of various theories related to observed heterogeneity in outcomes such as growth, development, and trade. Though often easy to articulate, these theories are imperfectly captured quantitatively. A number of different proxies are often collected for a given theory and the uneven nature of this collection requires care when employing model averaging. Furthermore, if valid, these theories ought to be relevant outside of any single narrowly focused outcome equation. We propose a methodology which treats theories as represented by latent indices, these latent processes controlled by model averaging on the proxy level. To achieve generalizability of the theory index our framework assumes a collection of outcome equations. We accommodate a flexible set of generalized additive models, enabling non-Gaussian outcomes to be included. Furthermore, selection of relevant theories also occurs on the outcome level, allowing for theories to be differentially valid. Our focus is on creating a set of theory-based indices directed at understanding a country’s potential risk of macroeconomic collapse. These Sovereign Risk Indices are calibrated across a set of different “collapse” criteria, including default on sovereign debt, heightened potential for high unemployment or inflation and dramatic swings in foreign exchange values. The goal of this exercise is to render a portable set of country/year theory indices which can find more general use in the research community.

Econometrics doi: 10.3390/econometrics8020021

Authors: Marcin Błażejowski Jacek Kwiatkowski Paweł Kufel

In this paper, we apply Bayesian averaging of classical estimates (BACE) and Bayesian model averaging (BMA) as automatic modeling procedures for two well-known macroeconometric models: UK demand for narrow money and long-term inflation. Empirical results verify the correctness of the BACE and BMA selection and exhibit similar or better forecasting performance compared with a non-pooling approach. As a benchmark, we use Autometrics—an algorithm for automatic model selection. Our study is implemented in easy-to-use gretl packages, which support parallel processing, automate numerical calculations, and allow for efficient computation.

Econometrics doi: 10.3390/econometrics8020020

Authors: Annalisa Cadonna Sylvia Frühwirth-Schnatter Peter Knaus

Time-varying parameter (TVP) models are very flexible in capturing gradual changes in the effect of explanatory variables on the outcome variable. However, in particular when the number of explanatory variables is large, there is a known risk of overfitting and poor predictive performance, since the effect of some explanatory variables is constant over time. We propose a new prior for variance shrinkage in TVP models, called the triple gamma. The triple gamma prior encompasses a number of priors that have been suggested previously, such as the Bayesian Lasso, the double gamma prior and the Horseshoe prior. We present the desirable properties of such a prior and its relationship to Bayesian Model Averaging for variance selection. The features of the triple gamma prior are then illustrated in the context of time-varying parameter vector autoregressive models, both for a simulated dataset and for a series of macroeconomic variables in the Euro Area.

Econometrics doi: 10.3390/econometrics8020019

Authors: Bo Yu Bruce Mizrach Norman R. Swanson

We investigate the marginal predictive content of small versus large jump variation, when forecasting one-week-ahead cross-sectional equity returns, building on Bollerslev et al. (2020). We find that sorting on signed small jump variation leads to greater value-weighted return differentials between stocks in our highest- and lowest-quintile portfolios (i.e., high–low spreads) than when either signed total jump or signed large jump variation is sorted on. It is shown that the benefit of signed small jump variation investing is driven by stock selection within an industry, rather than industry bets. Investors prefer stocks with a high probability of having positive jumps, but they also tend to overweight safer industries. Also, consistent with the findings in Scaillet et al. (2018), upside (downside) jump variation negatively (positively) predicts future returns. However, signed (large/small/total) jump variation has stronger predictive power than both upside and downside jump variation. One reason large and small (signed) jump variation have differing marginal predictive contents is that the predictive content of signed large jump variation is negligible when controlling for either signed total jump variation or realized skewness. By contrast, signed small jump variation has unique information for predicting future returns, even when controlling for these variables. By analyzing earnings announcement surprises, we find that large jumps are closely associated with “big” news. However, while such news-related information is embedded in large jump variation, the information is generally short-lived, and dissipates too quickly to provide marginal predictive content for subsequent weekly returns. Finally, we find that small jumps are more likely to be diversified away than large jumps and tend to be more closely associated with idiosyncratic risks. This indicates that small jumps are more likely to be driven by liquidity conditions and trading activity.
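The sorting variables above are built from realized semivariances of high-frequency returns. A minimal sketch of the basic decomposition — upside and downside realized semivariance and their difference, the signed jump variation (the small/large split by a jump-size threshold is omitted here) — is:

```python
import numpy as np

def signed_jump_variation(returns):
    """Split realized variance into semivariances:
        RS+ = sum of squared positive returns,
        RS- = sum of squared negative returns,
    so that RS+ + RS- equals realized variance and the signed jump
    variation is SJ = RS+ - RS-."""
    r = np.asarray(returns, dtype=float)
    rs_pos = np.sum(r[r > 0] ** 2)
    rs_neg = np.sum(r[r < 0] ** 2)
    return rs_pos - rs_neg, rs_pos, rs_neg

# Intraday returns for one stock-day (hypothetical numbers).
sj, rs_pos, rs_neg = signed_jump_variation([0.1, -0.2, 0.0, 0.3])
```

Cross-sectional sorting then ranks stocks on `sj` (or on its small-jump component) each week and compares the extreme quintile portfolios, which is the high–low spread construction the abstract refers to.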

Econometrics doi: 10.3390/econometrics8020018

Authors: Andrew B. Martinez

I analyze damage from hurricane strikes on the United States since 1955. Using machine learning methods to select the most important drivers for damage, I show that large errors in a hurricane’s predicted landfall location result in higher damage. This relationship holds across a wide range of model specifications and when controlling for ex-ante uncertainty and potential endogeneity. Using a counterfactual exercise I find that the cumulative reduction in damage from forecast improvements since 1970 is about $82 billion, which exceeds the U.S. government’s spending on the forecasts and private willingness to pay for them.

Econometrics doi: 10.3390/econometrics8020017

Authors: Dimitris Fouskakis Ioannis Ntzoufras

This paper focuses on Bayesian model averaging (BMA) using the power–expected–posterior prior in objective Bayesian variable selection under normal linear models. We derive a BMA point estimate of a predicted value and present computation and evaluation strategies for the prediction accuracy. We compare the performance of our method with that of similar approaches in a simulated and a real data example from economics.

Econometrics doi: 10.3390/econometrics8020016

Authors: Michael P. Clements

We apply a bootstrap test to determine whether some forecasters are able to make superior probability assessments to others. In contrast to some findings in the literature for point predictions, there is evidence that some individuals really are better than others. The testing procedure controls for the different economic conditions the forecasters may face, given that each individual responds to only a subset of the surveys. One possible explanation for the different findings for point predictions and histograms is explored: that newcomers may make less accurate histogram forecasts than experienced respondents given the greater complexity of the task.

Econometrics doi: 10.3390/econometrics8020015

Authors: Ali Mehrabani Aman Ullah

In this paper, we propose an efficient weighted average estimator in Seemingly Unrelated Regressions. This average estimator shrinks a generalized least squares (GLS) estimator towards a restricted GLS estimator, where the restrictions represent possible parameter homogeneity specifications. The shrinkage weight is inversely proportional to a weighted quadratic loss function. The approximate bias and second moment matrix of the average estimator, derived using large-sample approximations, are provided. We give the conditions under which the average estimator dominates the GLS estimator on the basis of their mean squared errors. We illustrate our estimator by applying it to a cost system for United States (U.S.) commercial banks over the period from 2000 to 2018. Our results indicate that, on average, most of the banks have been operating under increasing returns to scale. We find that over recent years scale economies have been a plausible reason for the growth in the average size of banks, and the tendency toward increasing scale is likely to continue.
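The combination rule has a familiar Stein-like shape: a data-driven weight, inversely proportional to a quadratic loss statistic, pulls the unrestricted estimate toward the restricted one. A schematic version — the paper derives the weight from a specific weighted quadratic loss, whereas `tau` here is a hypothetical tuning constant — is:

```python
import numpy as np

def shrinkage_average(b_unrestricted, b_restricted, loss, tau=1.0):
    """Weighted average of an unrestricted and a restricted estimator.
    The weight on the restricted estimator is inversely proportional
    to the quadratic loss statistic and capped at 1, so a large loss
    (strong evidence against the restrictions) leaves the
    unrestricted estimate almost untouched."""
    w = min(1.0, tau / max(loss, 1e-12))
    b_u = np.asarray(b_unrestricted, dtype=float)
    b_r = np.asarray(b_restricted, dtype=float)
    return w * b_r + (1.0 - w) * b_u
```

When the homogeneity restrictions are nearly correct the loss statistic is small, the weight approaches one, and the average estimator inherits the efficiency of the restricted fit; when they are badly violated the weight vanishes and the GLS estimate dominates, which is the mechanism behind the mean-squared-error dominance conditions mentioned above.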

Econometrics doi: 10.3390/econometrics8020014

Authors: Marta Boczoń Jean-François Richard

In this paper, we propose a hybrid version of Dynamic Stochastic General Equilibrium models with an emphasis on parameter invariance and tracking performance at times of rapid changes (recessions). We interpret hypothetical balanced growth ratios as moving targets for economic agents that rely upon an Error Correction Mechanism to adjust to changes in target ratios driven by an underlying state Vector AutoRegressive process. Our proposal is illustrated by an application to a pilot Real Business Cycle model for the US economy from 1948 to 2019. An extensive recursive validation exercise over the last 35 years, covering 3 recessions, is used to highlight its parameters invariance, tracking and 1- to 3-step ahead forecasting performance, outperforming those of an unconstrained benchmark Vector AutoRegressive model.

Econometrics doi: 10.3390/econometrics8020013

Authors: Kamil Makieła Błażej Mazur

This paper discusses Bayesian model averaging (BMA) in Stochastic Frontier Analysis and investigates inference sensitivity to prior assumptions made about the scale parameter of (in)efficiency. We turn our attention to the “standard” prior specifications for the popular normal-half-normal and normal-exponential models. To facilitate formal model comparison, we propose a model that nests both sampling models and generalizes the symmetric term of the compound error. Within this setup it is possible to develop coherent priors for model parameters in an explicit way. We analyze sensitivity of different prior specifications on the aforementioned scale parameter with respect to posterior characteristics of technology, stochastic parameters, latent variables and—especially—the models’ posterior probabilities, which are crucial for adequate inference pooling. We find that using incoherent priors on the scale parameter of inefficiency has (i) virtually no impact on the technology parameters; (ii) some impact on inference about the stochastic parameters and latent variables and (iii) substantial impact on marginal data densities, which are crucial in BMA.

Econometrics doi: 10.3390/econometrics8020012

Authors: Lynda Khalaf Beatriz Peraza López

A two-stage simulation-based framework is proposed to derive Identification Robust confidence sets by applying Indirect Inference, in the context of Autoregressive Moving Average (ARMA) processes for finite samples. Resulting objective functions are treated as test statistics, which are inverted rather than optimized, via the Monte Carlo test method. Simulation studies illustrate accurate size and good power. Projected impulse-response confidence bands are simultaneous by construction and exhibit robustness to parameter identification problems. The persistence of shocks on oil prices and returns is analyzed via impulse-response confidence bands. Our findings support the usefulness of impulse-responses as an empirically relevant transformation of the confidence set.

Econometrics doi: 10.3390/econometrics8010011

Authors: Richard A. Ashley Christopher F. Parmeter

This work describes a versatile and readily-deployable sensitivity analysis of an ordinary least squares (OLS) inference with respect to possible endogeneity in the explanatory variables of the usual k-variate linear multiple regression model. This sensitivity analysis is based on a derivation of the sampling distribution of the OLS parameter estimator, extended to the setting where some, or all, of the explanatory variables are endogenous. In exchange for restricting attention to possible endogeneity which is solely linear in nature—the most typical case—no additional model assumptions must be made, beyond the usual ones for a model with stochastic regressors. The sensitivity analysis quantifies the sensitivity of hypothesis test rejection p-values and/or estimated confidence intervals to such endogeneity, enabling an informed judgment as to whether any selected inference is “robust” versus “fragile.” The usefulness of this sensitivity analysis—as a “screen” for potential endogeneity issues—is illustrated with an example from the empirical growth literature. This example is extended to an extremely large sample, so as to illustrate how this sensitivity analysis can be applied to parameter confidence intervals in the context of massive datasets, as in “big data”.
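The core quantity behind any such sensitivity analysis is how the OLS slope drifts with the assumed regressor–error correlation. For a single standardized regressor the textbook result is plim(b_OLS) − β = ρ·σ_u/σ_x; a short simulation (a sketch of the idea, not the paper's k-variate procedure) confirms it:

```python
import numpy as np

def ols_endogeneity_bias(rho, sigma_u, sigma_x):
    """Asymptotic bias of the OLS slope with one endogenous regressor:
    plim(b_OLS) - beta = rho * sigma_u / sigma_x."""
    return rho * sigma_u / sigma_x

# Monte Carlo check: with corr(x, u) = 0.5 and unit variances,
# the OLS slope converges to beta + 0.5.
rng = np.random.default_rng(7)
n, beta, rho = 200_000, 2.0, 0.5
x = rng.standard_normal(n)
u = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)  # corr(x, u) = rho
y = beta * x + u
b_ols = np.sum(x * y) / np.sum(x * x)
```

A sensitivity "screen" in this spirit sweeps ρ over a plausible range and reports at which correlation the implied bias would flip the sign of a coefficient or push a p-value across the chosen threshold.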

Econometrics doi: 10.3390/econometrics8010010

Authors: Deliang Dai

A factor-model-based covariance matrix is used to build a new form of Mahalanobis distance. The distribution and related properties of the new Mahalanobis distances are derived. A new type of Mahalanobis distance based on the separated part of the factor model is defined. Contamination effects of outliers detected by the newly defined Mahalanobis distances are also investigated. An empirical example indicates that the newly proposed separated type of Mahalanobis distances outperforms the original sample Mahalanobis distance.
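To make the construction concrete, here is a minimal sketch (my own toy, not the paper's estimator, with made-up loadings): a one-factor covariance Sigma = LL' + Psi for two series, and Mahalanobis distances computed from its explicit 2x2 inverse.

```python
import math

# Toy one-factor covariance: Sigma = L L' + Psi (illustrative values only),
# chosen so that diag(Sigma) = 1 and the implied correlation is 0.63.
L = [0.9, 0.7]        # factor loadings
Psi = [0.19, 0.51]    # idiosyncratic variances
S = [[L[0] * L[0] + Psi[0], L[0] * L[1]],
     [L[0] * L[1], L[1] * L[1] + Psi[1]]]

# explicit 2x2 inverse of the factor-based covariance
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
Sinv = [[ S[1][1] / det, -S[0][1] / det],
        [-S[1][0] / det,  S[0][0] / det]]

def mahalanobis(x, mu=(0.0, 0.0)):
    d = [x[0] - mu[0], x[1] - mu[1]]
    q = sum(d[i] * Sinv[i][j] * d[j] for i in range(2) for j in range(2))
    return math.sqrt(q)

# a point that contradicts the positive factor correlation is flagged as far
print(mahalanobis((0.5, 0.4)), mahalanobis((3.0, -3.0)))
```

The point (3.0, −3.0) violates the positive dependence implied by the common factor, so its distance is much larger than that of the conforming point (0.5, 0.4), which is the intuition behind outlier detection with these distances.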

Econometrics doi: 10.3390/econometrics8010009

Authors: Brendan P. M. McCabe Christopher L. Skeels

The Poisson regression model remains an important tool in the econometric analysis of count data. In a pioneering contribution to the econometric analysis of such models, Lung-Fei Lee presented a specification test for a Poisson model against a broad class of discrete distributions sometimes called the Katz family. Two members of this alternative class are the binomial and negative binomial distributions, which are commonly used with count data to allow for under- and over-dispersion, respectively. In this paper we explore the structure of other distributions within the class and their suitability as alternatives to the Poisson model. Potential difficulties with the Katz likelihood lead us to investigate a class of point optimal tests of the Poisson assumption against the alternative of over-dispersion in both the regression and intercept-only cases. In a simulation study, we compare score tests of 'Poisson-ness' with various point optimal tests, based on the Katz family, and conclude that it is possible to choose a point optimal test which is better in the intercept-only case, although the nuisance parameters arising in the regression case are problematic. One possible cause is poor choice of the point at which to optimize. Consequently, we explore the use of Hellinger distance to aid this choice. Ultimately we conclude that score tests remain the most practical approach to testing for over-dispersion in this context.

Econometrics doi: 10.3390/econometrics8010008

Authors: Ramses Abul Naga Christopher Stapenhurst Gaston Yalonetzky

We examine the performance of asymptotic inference as well as bootstrap tests for the Alphabeta and Kobus–Miłoś family of inequality indices for ordered response data. We use Monte Carlo experiments to compare the empirical size and statistical power of asymptotic inference and the Studentized bootstrap test. In a broad variety of settings, both tests are found to have similar rejection probabilities of true null hypotheses, and similar power. Nonetheless, the asymptotic test remains correctly sized in the presence of certain types of severe class imbalances exhibiting very low or very high levels of inequality, whereas the bootstrap test becomes somewhat oversized in these extreme settings.

Econometrics doi: 10.3390/econometrics8010007

Authors: Haili Zhang Guohua Zou

Functional data are a common and important data type in econometrics and have become ever easier to collect in the big data era. To improve estimation accuracy and reduce forecast risk with functional data, in this paper we propose a novel cross-validation model averaging method for the generalized functional linear model, in which a scalar response variable is related to a random function predictor through a link function. We establish an asymptotic optimality result for the weights selected by our method when the true model is not in the candidate model set. Our simulations show that the proposed method often performs better than the commonly used model selection and averaging methods. We also apply the proposed method to Beijing second-hand house price data.

Econometrics doi: 10.3390/econometrics8010006

Authors: Shahram Amini Christopher F. Parmeter

We provide a general overview of Bayesian model averaging (BMA) along with the concept of jointness. We then describe the relative merits and attractiveness of the newest BMA software package, BMS, available in the statistical language R for implementing a BMA exercise. BMS offers the user a wide range of customizable priors for conducting a BMA exercise, provides ample graphs to visualize results, and supports several alternative model search mechanisms. We also provide an application of the BMS package to equity premia and describe a simple function that can easily ascertain jointness measures of covariates and that integrates with the BMS package.

Econometrics doi: 10.3390/econometrics8010005

Authors: Tahsin Mehdi

Although a wide array of stochastic dominance tests exist for poverty measurement and identification, they assume the income distributions have independent poverty lines or a common absolute (fixed) poverty line. We propose a stochastic dominance test for comparing income distributions up to a common relative poverty line (i.e., some fraction of the pooled median). A Monte Carlo study demonstrates its superior performance over existing methods in terms of power. The test is then applied to some Canadian household survey data for illustration.
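A bare-bones version of the comparison can be sketched as follows (an illustrative reduction of the paper's idea with made-up incomes and no inference step): first-order dominance is checked only at income levels below the common relative poverty line, taken as half the pooled median.

```python
import random, statistics

# Illustrative sketch (no test statistic, made-up lognormal incomes):
# distribution b scales a's incomes up by 20%, so b should dominate a
# below the common relative poverty line z = 0.5 * pooled median.
rng = random.Random(2)
a = [rng.lognormvariate(0.0, 0.5) for _ in range(5000)]
b = [v * 1.2 for v in a]
z = 0.5 * statistics.median(a + b)   # common relative poverty line

def ecdf(sample, x):
    # empirical CDF: share of the sample at or below x
    return sum(v <= x for v in sample) / len(sample)

# first-order dominance of b over a, checked on a grid up to z only
grid = [z * (i + 1) / 20 for i in range(20)]
b_dominates = all(ecdf(b, x) <= ecdf(a, x) for x in grid)
print(b_dominates)
```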

Econometrics doi: 10.3390/econometrics8010004

Authors: David Ardia Lukasz T. Gatarek Lennart Hoogerheide Herman K. Van Dijk

The authors wish to make the following corrections to this paper (Ardia et al [...]

Econometrics doi: 10.3390/econometrics8010003

Authors: Matteo Barigozzi Marco Lippi Matteo Luciani

Large-dimensional dynamic factor models and dynamic stochastic general equilibrium models, both widely used in empirical macroeconomics, deal with singular stochastic vectors, i.e., vectors of dimension r which are driven by a q-dimensional white noise, with q < r. The present paper studies cointegration and error correction representations for an I(1) singular stochastic vector y_t. It is easily seen that y_t is necessarily cointegrated with cointegrating rank c ≥ r − q. Our contributions are: (i) we generalize Johansen's proof of the Granger representation theorem to I(1) singular vectors under the assumption that y_t has rational spectral density; (ii) using recent results on singular vectors by Anderson and Deistler, we prove that for generic values of the parameters the autoregressive representation of y_t has a finite-degree polynomial. The relationship between the cointegration of the factors and the cointegration of the observable variables in a large-dimensional factor model is also discussed.

Econometrics doi: 10.3390/econometrics8010002

Authors: Econometrics Editorial Office Econometrics Editorial Office

The editorial team greatly appreciates the reviewers who have dedicated their considerable time and expertise to the journal's rigorous editorial process over the past 12 months, regardless of whether the papers are finally published or not [...]

Econometrics doi: 10.3390/econometrics8010001

Authors: Krzysztof Piasecki Anna Łyczkowska-Hanćkowiak

The Japanese candlesticks' technique is one of the well-known graphic methods for the dynamic analysis of securities. If we apply Japanese candlesticks to the analysis of high-frequency financial data, then we need a numerical representation of any Japanese candlestick. Kacprzak et al. have proposed representing Japanese candlesticks by ordered fuzzy numbers, introduced by Kosiński and his collaborators. For some formal reasons, Kosiński's theory of ordered fuzzy numbers has been revised. The main goal of our paper is to propose a universal method of representing Japanese candlesticks by revised ordered fuzzy numbers. The discussion also justifies the need for such a revision of the numerical model of Japanese candlesticks. The following main kinds of Japanese candlesticks are considered: White Candle (White Spinning), Black Candle (Black Spinning), Doji Star, Dragonfly Doji, Gravestone Doji, and Four Price Doji. As an example, we apply the numerical model of Japanese candlesticks to financial portfolio analysis.

Econometrics doi: 10.3390/econometrics7040050

Authors: Peter C. B. Phillips Xiaohu Wang Yonghui Zhang

The usual t test, the t test based on heteroskedasticity and autocorrelation consistent (HAC) covariance matrix estimators, and the heteroskedasticity and autocorrelation robust (HAR) test are three statistics that are widely used in applied econometric work. The use of these significance tests in trend regression is of particular interest given the potential for spurious relationships in trend formulations. Following a longstanding tradition in the spurious regression literature, this paper investigates the asymptotic and finite sample properties of these test statistics in several spurious regression contexts, including regressions of stochastic trends on time polynomials and regressions among independent random walks. Concordant with existing theory (Phillips 1986, 1998; Sun 2004, 2014b), the usual t test and the HAC standardized test fail to control size as the sample size n → ∞ in these spurious formulations, whereas HAR tests converge to well-defined limit distributions in each case and therefore have the capacity to be consistent and control size. However, it is shown that when the number of trend regressors K → ∞, all three statistics, including the HAR test, diverge and fail to control size as n → ∞. These findings are relevant to high-dimensional nonstationary time series regressions where machine learning methods may be employed.
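The spurious-regression setting above is easy to reproduce; the sketch below (an illustration of the setting only, not of the paper's HAR analysis; sample sizes and seeds are arbitrary) regresses one random walk on an independent one and shows the usual t statistic growing with the sample size.

```python
import random, statistics

# Classic spurious regression: y and x are independent random walks, yet
# the usual OLS t statistic for the slope tends to diverge as n grows.
def spurious_t(n, seed):
    rng = random.Random(seed)
    x = y = 0.0
    xs, ys = [], []
    for _ in range(n):
        x += rng.gauss(0, 1)
        y += rng.gauss(0, 1)
        xs.append(x)
        ys.append(y)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((u - mx) ** 2 for u in xs)
    b = sum((u - mx) * (w - my) for u, w in zip(xs, ys)) / sxx
    a = my - b * mx
    s2 = sum((w - a - b * u) ** 2 for u, w in zip(xs, ys)) / (n - 2)
    return abs(b) / (s2 / sxx) ** 0.5   # usual |t| statistic for the slope

# |t| tends to grow with n even though x and y are unrelated
print([round(spurious_t(n, 1), 1) for n in (100, 400, 1600)])
```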

Econometrics doi: 10.3390/econometrics7040049

Authors: Jau-er Chen Chen-Wei Hsiang

We propose an econometric procedure based mainly on the generalized random forests method. Not only does this procedure estimate the quantile treatment effect nonparametrically, but it also yields a measure of variable importance in terms of heterogeneity among control variables. We also apply the proposed procedure to reinvestigate the distributional effect of 401(k) participation on net financial assets, and the quantile earnings effect of participating in a job training program.

Econometrics doi: 10.3390/econometrics7040048

Authors: Hiroyuki Kawakatsu

This paper considers observation driven models with conditional mean and variance dynamics for non-negative valued time series. The motivation is to relax the restriction imposed on the higher order moment dynamics in standard multiplicative error models driven only by the conditional mean dynamics. The empirical fit of a zero inflated mixture distribution is assessed with trade duration data with a large fraction of zero observations.

Econometrics doi: 10.3390/econometrics7040047

Authors: Carsten Jentsch Lena Reichmann

The serial dependence of categorical data is commonly described using Markovian models. Such models are very flexible, but they can suffer from a huge number of parameters if the state space or the model order becomes large. To address the problem of a large number of model parameters, the class of (new) discrete autoregressive moving-average (NDARMA) models has been proposed as a parsimonious alternative to Markov models. However, NDARMA models do not allow any negative model parameters, which might be a severe drawback in practical applications. In particular, this model class cannot capture any negative serial correlation. For the special case of binary data, we propose an extension of the NDARMA model class that allows for negative model parameters, and, hence, autocorrelations leading to the considerably larger and more flexible model class of generalized binary ARMA (gbARMA) processes. We provide stationarity conditions, give the stationary solution, and derive stochastic properties of gbARMA processes. For the purely autoregressive case, classical Yule–Walker equations hold that facilitate parameter estimation of gbAR models. Yule–Walker-type equations are also derived for gbARMA processes.
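For readers unfamiliar with Yule–Walker estimation, here is what the moment equations look like in the ordinary Gaussian AR(2) case (my own toy with made-up coefficients; the paper derives the analogous equations for binary gbAR processes): rho1 = a1 + a2·rho1 and rho2 = a1·rho1 + a2 are solved for (a1, a2) using sample autocorrelations.

```python
import random, statistics

# Simulate a stationary Gaussian AR(2) with coefficients (0.5, 0.3), then
# recover them from the first two sample autocorrelations via Yule-Walker.
rng = random.Random(7)
a1, a2, n = 0.5, 0.3, 100_000
x = [0.0, 0.0]
for _ in range(n):
    x.append(a1 * x[-1] + a2 * x[-2] + rng.gauss(0, 1))
x = x[2:]

def acf(s, k):
    # sample autocorrelation at lag k
    m = statistics.fmean(s)
    num = sum((s[t] - m) * (s[t - k] - m) for t in range(k, len(s)))
    return num / sum((v - m) ** 2 for v in s)

r1, r2 = acf(x, 1), acf(x, 2)
det = 1 - r1 * r1
a1_hat = (r1 - r1 * r2) / det  # solves [[1, r1], [r1, 1]] a = [r1, r2]
a2_hat = (r2 - r1 * r1) / det
print(round(a1_hat, 2), round(a2_hat, 2))  # both close to the true (0.5, 0.3)
```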

Econometrics doi: 10.3390/econometrics7040046

Authors: Marek Chudý Erhard Reschenhofer

Previous findings indicate that the inclusion of dynamic factors obtained from a large set of predictors can improve macroeconomic forecasts. In this paper, we explore three possible further developments: (i) using automatic criteria for choosing those factors which have the greatest predictive power; (ii) using only a small subset of preselected predictors for the calculation of the factors; and (iii) utilizing frequency-domain information for the estimation of the factor models. Reanalyzing a standard macroeconomic dataset of 143 U.S. time series and using the major measures of economic activity as dependent variables, we find that (i) is not helpful, whereas focusing on the low-frequency components of the factors and disregarding the high-frequency components can actually improve the forecasting performance for some variables. In the case of the gross domestic product, a combination of (ii) and (iii) yields the best results.

Econometrics doi: 10.3390/econometrics7040045

Authors: John C. Chao Peter C. B. Phillips

This paper considers estimation and inference concerning the autoregressive coefficient (ρ) in a panel autoregression for which the degree of persistence in the time dimension is unknown. Our main objective is to construct confidence intervals for ρ that are asymptotically valid, having asymptotic coverage probability at least that of the nominal level uniformly over the parameter space. The starting point for our confidence procedure is the estimating equation of the Anderson–Hsiao (AH) IV procedure. It is well known that the AH IV estimation suffers from weak instrumentation when ρ is near unity. But it is not so well known that AH IV estimation is still consistent when ρ = 1. In fact, the AH estimating equation is very well-centered and is an unbiased estimating equation in the sense of Durbin (1960), a feature that is especially useful in confidence interval construction. We show that a properly normalized statistic based on the AH estimating equation, which we call the M statistic, is uniformly convergent and can be inverted to obtain asymptotically valid interval estimates. To further improve the informativeness of our confidence procedure in the unit root and near unit root regions and to alleviate the problem that the AH procedure has greater variation in these regions, we use information from unit root pretesting to select among alternative confidence intervals. Two sequential tests are used to assess how close ρ is to unity, and different intervals are applied depending on whether the test results indicate ρ to be near or far away from unity. When ρ is relatively close to unity, our procedure activates intervals whose width shrinks to zero at a faster rate than that of the confidence interval based on the M statistic. Only when both of our unit root tests reject the null hypothesis does our procedure turn to the M statistic interval, whose width has the optimal N^{-1/2}T^{-1/2} rate of shrinkage when the underlying process is stable. Our asymptotic analysis shows this pretest-based confidence procedure to have coverage probability that is at least the nominal level in large samples uniformly over the parameter space. Simulations confirm that the proposed interval estimation methods perform well in finite samples and are easy to implement in practice. A supplement to the paper provides an extensive set of new results on the asymptotic behavior of panel IV estimators in weak instrument settings.

Econometrics doi: 10.3390/econometrics7040044

Authors: John Quiggin

This paper begins with the observation that the constrained maximisation central to model estimation and hypothesis testing may be interpreted as a kind of profit maximisation. The output of estimation is a model that maximises some measure of model fit, subject to costs that may be interpreted as the shadow price of constraints imposed on the model. The replication crisis may be regarded as a market failure in which the price of "significant" results is lower than would be socially optimal.

Econometrics doi: 10.3390/econometrics7040043

Authors: Harry Joe

For modeling count time series data, one class of models is the generalized integer autoregressive model of order p based on thinning operators. It is shown how numerical maximum likelihood estimation is possible by inverting the probability generating function of the conditional distribution of an observation given the past p observations. Two data examples are included and show that thinning operators based on compounding can substantially improve the model fit compared with the commonly used binomial thinning operator.
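The binomial thinning baseline mentioned above can be simulated in a few lines (a textbook sketch of the baseline operator, not of the compounding operators studied in the paper; all parameter values are made up): X_t = alpha ∘ X_{t-1} + eps_t, where alpha ∘ X counts Bernoulli(alpha) survivors of X and eps_t is Poisson(lam).

```python
import math, random, statistics

# INAR(1) with binomial thinning; the stationary mean is lam / (1 - alpha).
rng = random.Random(3)
alpha, lam, n = 0.6, 2.0, 100_000

def poisson_draw(rng, mean):
    # Knuth's multiplication method, fine for small means
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

x, xs = 0, []
for _ in range(n):
    survivors = sum(rng.random() < alpha for _ in range(x))  # alpha o x
    x = survivors + poisson_draw(rng, lam)
    xs.append(x)

print(round(statistics.fmean(xs), 2))  # near lam / (1 - alpha) = 5
```

Note the process stays integer-valued by construction, which is exactly what thinning operators buy over Gaussian ARMA recursions.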

Econometrics doi: 10.3390/econometrics7040042

Authors: Takamitsu Kurita Bent Nielsen

This paper proposes a class of partial cointegrated models allowing for structural breaks in the deterministic terms. Moving-average representations of the models are given. It is then shown that, under the assumption of martingale difference innovations, the limit distributions of partial quasi-likelihood ratio tests for cointegrating rank have a close connection to those for standard full models. This connection facilitates a response surface analysis that is required to extract critical information about moments from large-scale simulation studies. An empirical illustration of the proposed methodology is also provided.

Econometrics doi: 10.3390/econometrics7030041

Authors: Marius Matei Xari Rovira Núria Agell

We propose a methodology for including night volatility estimates in the day volatility modeling problem with high-frequency data in a realized generalized autoregressive conditional heteroskedasticity (GARCH) framework, which takes advantage of the natural relationship between the realized measure and the conditional variance. This improves volatility modeling by adding, in a two-factor structure, information on latent processes that occur while markets are closed, while capturing the leverage effect and maintaining a mathematical structure that facilitates volatility estimation. A class of bivariate models that includes intraday, day, and night volatility estimates is proposed and empirically tested to confirm whether using night volatility information improves day volatility estimation. The results indicate a forecasting improvement from using bivariate models over those that do not include night volatility estimates.

Econometrics doi: 10.3390/econometrics7030040

Authors: Tian Xie

In this paper, we study forecasting problems for Bitcoin realized volatility computed on data from the largest crypto exchange, Binance. Given the unique features of the crypto asset market, we find that conventional regression models exhibit strong model specification uncertainty. To circumvent this issue, we suggest using least squares model-averaging methods to model and forecast Bitcoin volatility. The empirical results demonstrate that least squares model-averaging methods in general outperform many other conventional regression models that ignore specification uncertainty.

Econometrics doi: 10.3390/econometrics7030039

Authors: Wei Qian Craig A. Rolling Gang Cheng Yuhong Yang

It is often reported in the forecast combination literature that a simple average of candidate forecasts is more robust than sophisticated combining methods. This phenomenon is usually referred to as the "forecast combination puzzle". Motivated by this puzzle, we explore its possible explanations, including high variance in estimating the target optimal weights (estimation error), invalid weighting formulas, and model/candidate screening before combination. We show that the existing understanding of the puzzle should be complemented by the distinction of different forecast combination scenarios known as combining for adaptation and combining for improvement. Applying combining methods without considering the underlying scenario can itself cause the puzzle. Based on our new understandings, both simulations and real data evaluations are conducted to illustrate the causes of the puzzle. We further propose a multi-level AFTER strategy that can integrate the strengths of different combining methods and adapt intelligently to the underlying scenario. In particular, by treating the simple average as a candidate forecast, the proposed strategy is shown to reduce the heavy cost of estimation error and, to a large extent, mitigate the puzzle.
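The appeal of the simple average is easy to demonstrate (a stylized sketch of the "combining for improvement" scenario with made-up forecasters, not the paper's AFTER strategy): two unbiased forecasts with independent, equal-variance errors are averaged with no weights estimated at all, which already halves the error variance.

```python
import random, statistics

# Two unbiased forecasts of y with independent N(0, 1) errors; their
# equal-weight average has error variance 0.5 instead of 1.
rng = random.Random(11)
n = 50_000
y  = [rng.gauss(0, 1) for _ in range(n)]
f1 = [v + rng.gauss(0, 1) for v in y]        # forecaster 1
f2 = [v + rng.gauss(0, 1) for v in y]        # forecaster 2
fa = [(a + b) / 2 for a, b in zip(f1, f2)]   # simple average, no estimation

def mse(f):
    return statistics.fmean((fi - yi) ** 2 for fi, yi in zip(f, y))

print(round(mse(f1), 2), round(mse(f2), 2), round(mse(fa), 2))
```

Estimated "optimal" weights would have to beat this benchmark while also paying the estimation-error cost the abstract describes, which is one root of the puzzle.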

Econometrics doi: 10.3390/econometrics7030038

Authors: Qingfeng Liu Andrey L. Vasnev

To avoid the risk of misspecification between homoscedastic and heteroscedastic models, we propose a combination method based on ordinary least-squares (OLS) and generalized least-squares (GLS) model-averaging estimators. To select optimal weights for the combination, we suggest two information criteria and propose feasible versions that work even when the variance-covariance matrix is unknown. The optimality of the method is proven under some regularity conditions. The results of a Monte Carlo simulation demonstrate that the method is adaptive in the sense that it achieves almost the same estimation accuracy as if the homoscedasticity or heteroscedasticity of the error term were known.

Econometrics doi: 10.3390/econometrics7030037

Authors: Richard M. Golden Steven S. Henley Halbert White T. Michael Kashner

Researchers are often faced with the challenge of developing statistical models with incomplete data. Exacerbating this situation is the possibility that either the researcher's complete-data model or the model of the missing-data mechanism is misspecified. In this article, we create a formal theoretical framework for developing statistical models and detecting model misspecification in the presence of incomplete data where maximum likelihood estimates are obtained by maximizing the observable-data likelihood function when the missing-data mechanism is assumed ignorable. First, we provide sufficient regularity conditions on the researcher's complete-data model to characterize the asymptotic behavior of maximum likelihood estimates in the simultaneous presence of both missing data and model misspecification. These results are then used to derive robust hypothesis testing methods for possibly misspecified models in the presence of Missing at Random (MAR) or Missing Not at Random (MNAR) missing data. Second, we introduce a method for the detection of model misspecification in missing data problems using recently developed Generalized Information Matrix Tests (GIMT). Third, we identify regularity conditions for the Missing Information Principle (MIP) to hold in the presence of model misspecification so as to provide useful computational covariance matrix estimation formulas. Fourth, we provide regularity conditions that ensure the observable-data expected negative log-likelihood function is convex in the presence of partially observable data when the amount of missingness is sufficiently small and the complete-data likelihood is convex. Fifth, we show that when the researcher has correctly specified a complete-data model with a convex negative likelihood function and an ignorable missing-data mechanism, then its strict local minimizer is the true parameter value for the complete-data model when the amount of missingness is sufficiently small.
Our results thus provide new robust estimation, inference, and specification analysis methods for developing statistical models with incomplete data.

Econometrics doi: 10.3390/econometrics7030036

Authors: Sophie van Huellen Duo Qin

This paper re-examines the instrumental variable (IV) approach to estimating returns to education through the use of compulsory school law (CSL) in the US. We show that the IV approach amounts to a change in model specification by changing the causal status of the variable of interest. From this perspective, the IV versus OLS (ordinary least squares) choice becomes a model selection issue between non-nested models and is hence testable using cross-validation methods. It also enables us to unravel several logical flaws in the conceptualisation of IV-based models. Using the causal chain model specification approach, we overcome these flaws by carefully distinguishing returns to education from the treatment effect of CSL. We find relatively robust estimates for the first effect, while estimates for the second effect are hindered by measurement errors in the CSL indicators. We find reassurance for our approach in fundamental theories of statistical learning.

Econometrics doi: 10.3390/econometrics7030035

Authors: Richard Kouamé Moussa

This paper introduces an estimation procedure for a random effects probit model in the presence of heteroskedasticity, together with a likelihood ratio test for homoskedasticity. The cases where the heteroskedasticity is due to individual effects, idiosyncratic errors, or both are analyzed. Monte Carlo simulations show that the test performs well in the case of a high degree of heteroskedasticity. Furthermore, the power of the test increases with larger individual and time dimensions. The robustness analysis shows that applying the wrong approach may generate misleading results, except in the case where both individual effects and idiosyncratic errors are modelled as heteroskedastic.

Econometrics doi: 10.3390/econometrics7030034

Authors: Jie Chen Dimitris N. Politis

This paper gives a computer-intensive approach to multi-step-ahead prediction of volatility in financial returns series under an ARCH/GARCH model and also under a model-free setting, namely employing the NoVaS transformation. Our model-based approach only assumes i.i.d. innovations without requiring knowledge/assumption of the error distribution and is computationally straightforward. The model-free approach is formally quite similar, albeit a GARCH model is not assumed. We conducted a number of simulations to show that the proposed approach works well for both point prediction (under L1 and/or L2 measures) and prediction intervals that were constructed using bootstrapping. The performance of GARCH models and the model-free approach for multi-step ahead prediction was also compared under different data generating processes.
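The simulation-based part of the model-based approach can be sketched as follows (a hedged illustration of the idea under a GARCH(1,1) with Gaussian innovations and made-up parameters; the paper itself resamples innovations and also covers the model-free NoVaS route): iterate the variance recursion forward over many simulated paths and average, then check against the analytic multi-step forecast.

```python
import random, statistics

# GARCH(1,1): sigma2_{t+1} = omega + alpha*r_t^2 + beta*sigma2_t.
# Multi-step variance forecast by simulating paths forward from sig2_next.
omega, alpha, beta = 0.1, 0.1, 0.8
sig2_next, horizon, paths = 2.0, 5, 20_000
rng = random.Random(5)

total = 0.0
for _ in range(paths):
    s2 = sig2_next
    for _ in range(horizon - 1):
        r = (s2 ** 0.5) * rng.gauss(0, 1)        # simulated return
        s2 = omega + alpha * r * r + beta * s2   # variance recursion
    total += s2
mc_forecast = total / paths

# analytic h-step forecast: sbar2 + (alpha+beta)**(h-1) * (sig2_next - sbar2)
sbar2 = omega / (1 - alpha - beta)
exact = sbar2 + (alpha + beta) ** (horizon - 1) * (sig2_next - sbar2)
print(round(mc_forecast, 2), round(exact, 2))
```

The simulated paths also yield the quantiles needed for bootstrap-style prediction intervals, which is where the computer-intensive approach pays off beyond the point forecast.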

Econometrics doi: 10.3390/econometrics7030033

Authors: Chuanming Gao Kajal Lahiri

We compare the finite sample performance of a number of Bayesian and classical procedures for limited information simultaneous equations models with weak instruments by a Monte Carlo study. We consider Bayesian approaches developed by Chao and Phillips, Geweke, Kleibergen and van Dijk, and Zellner. Amongst the sampling theory methods, OLS, 2SLS, LIML, Fuller's modified LIML, and the jackknife instrumental variable estimator (JIVE) due to Angrist et al. and Blomquist and Dahlberg are also considered. Since the posterior densities and their conditionals in Chao and Phillips and Kleibergen and van Dijk are nonstandard, we use a novel "Gibbs within Metropolis–Hastings" algorithm, which only requires the availability of the conditional densities from the candidate generating density. Our results show that with very weak instruments, there is no single estimator that is superior to others in all cases. When endogeneity is weak, Zellner's MELO does the best. When the endogeneity is not weak and ρω12 > 0, where ρ is the correlation coefficient between the structural and reduced form errors, and ω12 is the covariance between the unrestricted reduced form errors, the Bayesian method of moments (BMOM) outperforms all other estimators by a wide margin. When the endogeneity is not weak and βρ < 0 (β being the structural parameter), the Kleibergen and van Dijk approach seems to work very well. Surprisingly, the performance of JIVE was disappointing in all our experiments.

Econometrics doi: 10.3390/econometrics7030032

Authors: Maria Felice Arezzo Giuseppina Guagnano

Most empirical work in the social sciences is based on observational data that are often both incomplete, and therefore unrepresentative of the population of interest, and affected by measurement errors. These problems are very well known in the literature, and ad hoc procedures for parametric modeling have been proposed and developed for some time in order to correct estimation bias and obtain consistent estimators. However, to the best of our knowledge, the aforementioned problems have not yet been jointly considered. We try to overcome this by proposing a parametric approach for the estimation of the probabilities of misclassification of a binary response variable, incorporating them in the likelihood of a binary choice model with sample selection.

Econometrics doi: 10.3390/econometrics7030031

Authors: Franz Ramsauer Aleksey Min Michael Lingauer

This article extends the Factor-Augmented Vector Autoregression Model (FAVAR) to mixed-frequency and incomplete panel data. Within the scope of a fully parametric two-step approach, the alternating application of two expectation-maximization algorithms jointly estimates model parameters and missing data. In contrast to the existing literature, we do not require observable factor components to be part of the panel data. For this purpose, we modify the Kalman Filter for factors consisting of latent and observed components, which significantly improves the reconstruction of latent factors according to our simulation study. To identify model parameters uniquely, the loadings matrix is constrained. In our empirical application, the presented framework analyzes US data to measure the effects of monetary policy on the real economy and financial markets. Here, the consequences for quarterly Gross Domestic Product (GDP) growth rates are of particular importance.

Econometrics doi: 10.3390/econometrics7030030

Authors: Annika Homburg Christian H. Weiß Layth C. Alwan Gabriel Frahm Rainer Göb

In forecasting count processes, practitioners often ignore the discreteness of counts and compute forecasts based on Gaussian approximations instead. For both central and non-central point forecasts, and for various types of count processes, the performance of such approximate point forecasts is analyzed. The considered data-generating processes include different autoregressive schemes with varying model orders, count models with overdispersion or zero inflation, counts with a bounded range, and counts exhibiting trend or seasonality. We conclude that Gaussian forecast approximations should be avoided.
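A tiny example in the spirit of the conclusion above (my own numbers, not taken from the paper): for a Poisson(1.6) count, the Gaussian central forecast rounds the mean to 2, yet the true distributional median is 1, so the approximation misses the optimal central point forecast.

```python
import math

# Central point forecast for a Poisson(1.6) count: true median vs the
# rounded Gaussian (mean-based) approximation.
lam = 1.6

def poisson_cdf(k, lam):
    return sum(math.exp(-lam) * lam ** i / math.factorial(i)
               for i in range(k + 1))

# smallest k with CDF >= 0.5 is the distributional median
median = 0
while poisson_cdf(median, lam) < 0.5:
    median += 1

gauss_forecast = round(lam)    # Gaussian approximation, rounded to a count
print(median, gauss_forecast)  # 1 vs 2: the approximation misses the median
```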

Econometrics doi: 10.3390/econometrics7030029

Authors: Emanuela Ciapanna Marco Taboga

This paper deals with instability in regression coefficients. We propose a Bayesian regression model with time-varying coefficients (TVC) that allows us to jointly estimate the degree of instability and the time path of the coefficients. Thanks to the computational tractability of the model and to the fact that it is fully automatic, we are able to run Monte Carlo experiments and analyze its finite-sample properties. We find that the estimation precision and the forecasting accuracy of the TVC model compare favorably to those of other methods commonly employed to deal with parameter instability. A distinguishing feature of the TVC model is its robustness to mis-specification: its performance is also satisfactory when regression coefficients are stable or when they experience discrete structural breaks. As a demonstrative application, we used our TVC model to estimate the exposures of S&P 500 stocks to market-wide risk factors: we found that a vast majority of stocks had time-varying exposures and that the TVC model helped to better forecast these exposures.

Econometrics doi: 10.3390/econometrics7020028

Authors: Fernando Rios-Avila

This paper presents an extension of the Oaxaca–Blinder decomposition to continuous groups using a semiparametric approach known as the varying coefficients model. To account for potential self-selection into the continuum of groups, the use of inverse Mills ratios is extended following the literature on endogenous selection. The flexibility of this methodology may allow detecting heterogeneity when analyzing endogenous dose treatment effects, as well as correcting for endogeneity when analyzing the heterogeneous partial effects across the continuous group variable. For illustration, the methodology is used to revisit the impact of body weight on wages, using the body mass index (BMI) as the continuum of groups, finding evidence that body weight has a negative but decreasing impact on wages for both white men and women.
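
The varying-coefficient idea can be sketched with kernel-weighted least squares at a target value of the continuous group variable. Everything here is a hypothetical illustration (simulated data, a made-up coefficient path, no selection correction), not the paper's estimator.

```python
import math
import random

# Hypothetical DGP: the slope on x varies smoothly with a continuous group g (e.g. BMI).
random.seed(2)

def beta_of(g):
    return 1.0 - 0.02 * g       # illustrative coefficient path

data = []
for _ in range(5000):
    g = random.uniform(18, 40)  # continuous group variable
    x = random.gauss(0, 1)
    y = beta_of(g) * x + random.gauss(0, 0.5)
    data.append((g, x, y))

def local_beta(g0, h=2.0):
    """Gaussian-kernel-weighted least-squares slope at group value g0."""
    num = den = 0.0
    for g, x, y in data:
        w = math.exp(-0.5 * ((g - g0) / h) ** 2)
        num += w * x * y
        den += w * x * x
    return num / den

print(round(local_beta(25.0), 2))  # close to beta(25) = 0.5
```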

Econometrics doi: 10.3390/econometrics7020027

Authors: Zhengyuan Gao Christian M. Hafner

Filtering has had a profound impact as a device for extracting information and forming agent expectations in dynamic economic models. For an abstract economic system, this paper shows that the foundation of applying the filtering method corresponds to the existence of a conditional expectation as an equilibrium process. Agents' rational behavior of looking backward and looking forward is generalized to a conditional expectation process in which the economic system is approximated by a class of models that can be represented and estimated without information loss. The proposed framework elucidates the range of applications of a general filtering device and is not limited to a particular model class such as rational expectations.

Econometrics doi: 10.3390/econometrics7020026

Authors: David Trafimow

There has been much debate about null hypothesis significance testing, p-values without null hypothesis significance testing, and confidence intervals. The first major section of the present article addresses some of the main reasons these procedures are problematic. The conclusion is that none of them are satisfactory. However, there is a new procedure, termed the a priori procedure (APP), that validly aids researchers in obtaining sample statistics that have acceptable probabilities of being close to their corresponding population parameters. The second major section provides a description and review of APP advances. Not only does the APP avoid the problems that plague other inferential statistical procedures, but it is also easy to perform. Although the APP can be performed in conjunction with other procedures, the present recommendation is that it be used alone.
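
The normal-based version of the procedure for a sample mean can be sketched as follows: choose the sample size n so that the sample mean falls within f population standard deviations of the population mean with probability c, giving n = (z/f)^2 with z the critical value for c. The hard-coded z-value for c = 0.95 is an illustrative assumption.

```python
import math

def app_n(f, z=1.96):
    """Sample size so that the sample mean lies within f standard deviations
    of the population mean with the probability implied by z (here 0.95)."""
    return math.ceil((z / f) ** 2)

print(app_n(0.1))   # 385 observations for f = 0.1 and c = 0.95
print(app_n(0.2))   # 97 observations for f = 0.2 and c = 0.95
```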

Econometrics doi: 10.3390/econometrics7020025

Authors: Kyoo il Kim

It is well known that efficient estimation of average treatment effects can be obtained by the method of inverse propensity score weighting, using the estimated propensity score, even when the true one is known. When the true propensity score is unknown but parametric, it is conjectured from the literature that we still need nonparametric propensity score estimation to achieve efficiency. We formalize this argument and further identify the source of the efficiency loss arising from parametric estimation of the propensity score. We also provide an intuition for why this apparent overfitting is necessary. Our finding suggests that, even when we know that the true propensity score belongs to a parametric class, we still need to estimate the propensity score by a nonparametric method in applications.
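
A minimal simulation sketch of inverse propensity score weighting with a "nonparametric" cell-based propensity estimate. The data-generating process, with a single binary covariate and a true average treatment effect of 1, is a hypothetical illustration, not the paper's setting.

```python
import random

random.seed(0)
n = 20000
data = []
for _ in range(n):
    x = random.random() < 0.5            # binary covariate
    p_true = 0.7 if x else 0.3           # true propensity score
    d = random.random() < p_true         # treatment indicator
    y = float(d) + (0.5 if x else 0.0) + random.gauss(0, 1)  # true ATE = 1
    data.append((x, d, y))

# "Nonparametric" propensity estimate: empirical treatment share within each x cell
p_hat = {}
for cell in (True, False):
    treated = [d for (x, d, _) in data if x == cell]
    p_hat[cell] = sum(treated) / len(treated)

# Inverse-propensity-weighted estimate of the average treatment effect
ate = sum(d * y / p_hat[x] - (1 - d) * y / (1 - p_hat[x])
          for (x, d, y) in data) / n
print(round(ate, 2))  # close to the true ATE of 1
```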

Econometrics doi: 10.3390/econometrics7020024

Authors: Jan R. Magnus

The t-ratio has not one but two uses in econometrics, which should be carefully distinguished. It is used as a test and also as a diagnostic. I emphasize that the commonly used estimators are in fact pretest estimators, and argue in favor of an improved (continuous) version of pretesting, called model averaging.
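
The contrast between the discontinuous pretest estimator and a continuous model-averaging alternative can be sketched as follows. The smooth weight t^2/(1 + t^2) is an illustrative choice, not the specific weighting scheme of the paper.

```python
def pretest(b, t, c=1.96):
    """Discontinuous pretest: keep the estimate only if |t| exceeds the critical value."""
    return b if abs(t) > c else 0.0

def averaged(b, t):
    """Continuous alternative: shrink toward zero with a smooth, data-dependent weight."""
    w = t * t / (1.0 + t * t)
    return w * b

# A t-ratio just below the 5% critical value: the pretest estimator jumps to zero,
# while the averaged estimator shrinks the estimate smoothly.
print(pretest(0.8, 1.9), round(averaged(0.8, 1.9), 3))  # 0.0 vs 0.626
```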
