Special Issue "Recent Advances in Theory and Methods for the Analysis of High Dimensional and High Frequency Financial Data"

A special issue of Econometrics (ISSN 2225-1146).

Deadline for manuscript submissions: closed (30 September 2020).

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors

Norman R. Swanson
Guest Editor
Department of Economics, School of Arts and Sciences, Rutgers University, 75 Hamilton Street, New Brunswick, NJ 08901, USA
Interests: financial econometrics; macroeconometrics; time series analysis; forecasting
Xiye Yang
Guest Editor
Department of Economics, Rutgers University, USA
Interests: econometric theory; financial econometrics; asset pricing; empirical finance

Special Issue Information

Dear Colleagues,

There have been numerous econometric advances in empirical and theoretical finance in recent years. Many were initially spurred by innovations in technology, computing, and data collection. In particular, as computing power and dataset sizes have increased, both empiricists and theoreticians have focused considerable attention on key unresolved issues of estimation and inference in the study of the large datasets used in financial economics. Examples of topics in which important advances have been made include nonparametric and parametric estimation of models (e.g., simulated method of moments, indirect inference, and nonparametric simulated maximum likelihood, among others), and estimation and inference based on point and density estimators of possibly latent variables (e.g., realized measures of integrated volatility, and estimation and accuracy testing of predictive densities or conditional distributions, among others). Recently, considerable attention has also been devoted to the development and application of tools for analyzing the high-dimensional and/or high-frequency datasets that now dominate the landscape, including machine learning, dimension reduction, and shrinkage-based methods. The purpose of this Special Issue is to collect both methodological and empirical papers that develop and utilize state-of-the-art econometric techniques for the analysis of such data.

Prof. Norman R. Swanson
Prof. Xiye Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Econometrics is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Empirical and theoretical financial econometrics
  • Big data
  • High-dimensional and high-frequency data

Published Papers (7 papers)


Research

Article
Regularized Maximum Diversification Investment Strategy
Econometrics 2021, 9(1), 1; https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics9010001 - 29 Dec 2020
Abstract
The maximum diversification portfolio has been shown in the literature to depend on the vector of asset volatilities and the inverse of the covariance matrix of asset returns. In practice, these two quantities must be replaced by their sample counterparts. The resulting estimation error may be amplified by the (near) singularity of the covariance matrix in financial markets with many assets, which in turn may lead to the selection of portfolios that are far from optimal with respect to standard portfolio performance measures. To address this problem, we investigate three regularization techniques, namely the ridge, the spectral cut-off, and the Landweber–Fridman approaches, in order to stabilize the inverse of the covariance matrix. These regularization schemes involve a tuning parameter that needs to be chosen, and we propose a data-driven method for selecting it. We show that the portfolio selected by regularization is asymptotically efficient with respect to the diversification ratio. In empirical and Monte Carlo experiments, the resulting regularized rules are compared with several strategies, such as the most diversified portfolio, the target portfolio, the global minimum variance portfolio, and the naive 1/N strategy, in terms of in-sample and out-of-sample Sharpe ratio, and our method is shown to yield significant Sharpe ratio improvements.
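The ridge variant of this idea can be sketched in a few lines. The function names, the fixed tuning parameter, and the synthetic data below are illustrative assumptions, not the paper's implementation or its data-driven selection rule:

```python
import numpy as np

def diversification_ratio(w, Sigma):
    """Ratio of the weighted average volatility to portfolio volatility."""
    vols = np.sqrt(np.diag(Sigma))
    return (w @ vols) / np.sqrt(w @ Sigma @ w)

def max_div_weights_ridge(Sigma, alpha):
    """Maximum-diversification weights with a ridge-stabilized inverse.

    Unnormalized weights are proportional to inv(Sigma + alpha*I) @ vols;
    they are then scaled to sum to one (short positions allowed).
    """
    vols = np.sqrt(np.diag(Sigma))
    inv = np.linalg.inv(Sigma + alpha * np.eye(Sigma.shape[0]))
    w = inv @ vols
    return w / w.sum()

# 60 days of returns on 100 assets: the sample covariance is singular,
# so the unregularized inverse does not even exist.
rng = np.random.default_rng(0)
R = rng.standard_normal((60, 100)) * 0.01
Sigma_hat = np.cov(R, rowvar=False)
w = max_div_weights_ridge(Sigma_hat, alpha=1e-4)
```

The ridge term shifts every eigenvalue of the sample covariance up by alpha, so the inverse stays well conditioned even when the number of assets exceeds the number of observations.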

Article
Reducing the Bias of the Smoothed Log Periodogram Regression for Financial High-Frequency Data
Econometrics 2020, 8(4), 40; https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics8040040 - 10 Oct 2020
Abstract
For the sample sizes typically encountered in economic and financial applications, the squared bias of estimators of the memory parameter is small relative to the variance. Smoothing is therefore a suitable way to improve performance in terms of mean squared error. However, in the analysis of financial high-frequency data, where estimates are obtained separately for each day and then combined by averaging, the variance decreases with the sample size but the bias remains fixed. This paper proposes a method of smoothing that does not entail an increase in the bias, based on the simultaneous examination of different partitions of the data. An extensive simulation study compares it with conventional estimation methods; the new method outperforms its unsmoothed competitors with respect to the variance and its smoothed competitors with respect to the bias. Using the simulation results to interpret the empirical findings from a financial high-frequency dataset, we conclude that significant long-range dependence is present only in the intraday volatility, not in the intraday returns. Finally, the robustness of these findings against daily and weekly periodic patterns is established.
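For context, the conventional (unsmoothed) log-periodogram regression of Geweke and Porter-Hudak, the baseline this line of work improves on, can be sketched as follows. The bandwidth choice m = sqrt(n) is a common convention, not the paper's:

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke-Porter-Hudak log-periodogram estimate of the memory parameter d.

    Regress log I(lambda_j) on -2*log(lambda_j) over the first m Fourier
    frequencies; the OLS slope estimates d.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(n ** 0.5)                      # common bandwidth convention
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    # Periodogram at the first m nonzero Fourier frequencies.
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    X = -2 * np.log(freqs)                     # regressor: -2 log(lambda_j)
    Y = np.log(I)
    Xc = X - X.mean()
    return (Xc @ (Y - Y.mean())) / (Xc @ Xc)   # OLS slope = estimate of d
```

For white noise the true memory parameter is zero, so the estimate should be close to zero in large samples; for long-memory volatility series it is positive.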

Article
New Evidence of the Marginal Predictive Content of Small and Large Jumps in the Cross-Section
Econometrics 2020, 8(2), 19; https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics8020019 - 19 May 2020
Abstract
We investigate the marginal predictive content of small versus large jump variation when forecasting one-week-ahead cross-sectional equity returns, building on Bollerslev et al. (2020). We find that sorting on signed small jump variation leads to greater value-weighted return differentials between stocks in our highest- and lowest-quintile portfolios (i.e., high–low spreads) than sorting on either signed total jump or signed large jump variation. It is shown that the benefit of signed small jump variation investing is driven by stock selection within an industry rather than by industry bets. Investors prefer stocks with a high probability of having positive jumps, but they also tend to overweight safer industries. Also, consistent with the findings in Scaillet et al. (2018), upside (downside) jump variation negatively (positively) predicts future returns. However, signed (large/small/total) jump variation has stronger predictive power than both upside and downside jump variation. One reason large and small (signed) jump variation have differing marginal predictive content is that the predictive content of signed large jump variation is negligible when controlling for either signed total jump variation or realized skewness. By contrast, signed small jump variation has unique information for predicting future returns, even when controlling for these variables. By analyzing earnings announcement surprises, we find that large jumps are closely associated with “big” news. However, while such news-related information is embedded in large jump variation, it is generally short-lived and dissipates too quickly to provide marginal predictive content for subsequent weekly returns. Finally, we find that small jumps are more likely to be diversified away than large jumps and tend to be more closely associated with idiosyncratic risks, indicating that small jumps are more likely to be driven by liquidity conditions and trading activity.
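A minimal sketch of decomposing signed jump variation into small and large components from intraday returns. The volatility-scaled truncation threshold below is an illustrative simplification; this literature typically builds thresholds from bipower variation:

```python
import numpy as np

def signed_jump_variation(r, k=3.0):
    """Split signed jump variation into small and large components.

    r: vector of intraday returns. Signed jump variation is proxied by the
    difference between upside and downside realized semivariance; returns
    beyond k times a simple volatility scale (an illustrative choice) are
    classified as large jumps.
    """
    rv_pos = np.sum(r[r > 0] ** 2)              # upside realized semivariance
    rv_neg = np.sum(r[r < 0] ** 2)              # downside realized semivariance
    sjv = rv_pos - rv_neg                       # total signed jump variation proxy
    thresh = k * np.sqrt(np.mean(r ** 2))       # crude truncation threshold
    large = np.abs(r) > thresh
    sjv_large = (np.sum(r[large & (r > 0)] ** 2)
                 - np.sum(r[large & (r < 0)] ** 2))
    sjv_small = sjv - sjv_large                 # remainder: small jump variation
    return sjv, sjv_small, sjv_large
```

By construction the small and large components sum to the total, so a cross-sectional sort can be run on any of the three.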

Article
Representation of Japanese Candlesticks by Oriented Fuzzy Numbers
Econometrics 2020, 8(1), 1; https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics8010001 - 18 Dec 2019
Abstract
The Japanese candlestick technique is a well-known graphical method for the dynamic analysis of securities. Applying Japanese candlesticks to the analysis of high-frequency financial data requires a numerical representation of each candlestick. Kacprzak et al. proposed representing Japanese candlesticks by the ordered fuzzy numbers introduced by Kosiński and his co-workers. For formal reasons, Kosiński's theory of ordered fuzzy numbers has since been revised. The main goal of our paper is to propose a universal method of representing Japanese candlesticks by revised ordered fuzzy numbers. The discussion also justifies the need for such a revision of the numerical model of Japanese candlesticks. The following main kinds of Japanese candlestick are considered: White Candle (White Spinning), Black Candle (Black Spinning), Doji Star, Dragonfly Doji, Gravestone Doji, and Four Price Doji. As an example, we apply the numerical model of Japanese candlesticks to financial portfolio analysis.
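A minimal sketch of classifying an OHLC bar into the candlestick kinds listed above. The orientation of the (open, close) pair is what an oriented fuzzy number encodes; the tolerance parameter and function name are illustrative assumptions, not the paper's representation:

```python
def candle_kind(o, h, l, c, tol=1e-9):
    """Classify an OHLC bar into basic Japanese candlestick kinds.

    The sign of (close - open) determines the candle's orientation:
    upward for a White Candle, downward for a Black Candle. Bars with
    open == close are the Doji variants, distinguished by where the
    body sits relative to the high and the low.
    """
    if abs(c - o) > tol:
        return "White Candle" if c > o else "Black Candle"
    if h == l:
        return "Four Price Doji"        # all four prices coincide
    if abs(o - h) <= tol:
        return "Dragonfly Doji"         # open = close = high, long lower shadow
    if abs(o - l) <= tol:
        return "Gravestone Doji"        # open = close = low, long upper shadow
    return "Doji Star"                  # open = close strictly inside [low, high]
```

In the fuzzy-number representation the degenerate Doji cases correspond to numbers with an empty core orientation, which is one reason the revised theory is needed.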

Article
Bivariate Volatility Modeling with High-Frequency Data
Econometrics 2019, 7(3), 41; https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics7030041 - 15 Sep 2019
Abstract
We propose a methodology to include night volatility estimates in the day volatility modeling problem with high-frequency data in a realized generalized autoregressive conditional heteroskedasticity (GARCH) framework, which exploits the natural relationship between the realized measure and the conditional variance. This improves volatility modeling by adding, in a two-factor structure, information on latent processes that occur while markets are closed, while capturing the leverage effect and maintaining a mathematical structure that facilitates volatility estimation. A class of bivariate models that includes intraday, day, and night volatility estimates is proposed and empirically tested to determine whether using night volatility information improves day volatility estimation. The results indicate a forecasting improvement for bivariate models over those that do not include night volatility estimates.
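For context, the conditional-variance recursion of a basic realized GARCH(1,1) can be sketched as follows. This is a textbook simplification, not the bivariate day/night model proposed in the paper, and the parameters are taken as given rather than estimated:

```python
import numpy as np

def realized_garch_variance(x, omega, beta, gamma, h0):
    """Conditional-variance recursion of a simple realized GARCH(1,1).

    h_t = omega + beta * h_{t-1} + gamma * x_{t-1}, where x is a realized
    measure (e.g., daily realized variance). A night-volatility measure
    could enter this recursion as an additional term.
    """
    h = np.empty(len(x) + 1)
    h[0] = h0
    for t in range(len(x)):
        h[t + 1] = omega + beta * h[t] + gamma * x[t]
    return h
```

With a constant realized measure x, the recursion converges geometrically (at rate beta) to the steady state (omega + gamma * x) / (1 - beta), which makes the role of each parameter easy to check.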
Article
Covariance Prediction in Large Portfolio Allocation
Econometrics 2019, 7(2), 19; https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics7020019 - 09 May 2019
Abstract
Many financial decisions, such as portfolio allocation, risk management, option pricing, and hedging strategies, are based on forecasts of the conditional variances, covariances, and correlations of financial returns. This paper presents an empirical comparison of several methods for predicting one-step-ahead conditional covariance matrices. These matrices are used as inputs to obtain out-of-sample minimum variance portfolios based on stocks in the S&P 500 index from 2000 to 2017 and sub-periods. The analysis is conducted using several metrics, including standard deviation, turnover, net average return, information ratio, and Sortino ratio. We find that no method is best in all scenarios; performance depends on the criterion, the period of analysis, and the rebalancing strategy.
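A minimal sketch of the global minimum variance portfolio that such covariance forecasts feed into. This is the standard closed-form solution, not any particular prediction method from the paper:

```python
import numpy as np

def min_variance_weights(Sigma):
    """Global minimum-variance weights: w is proportional to inv(Sigma) @ 1,
    rescaled so the weights sum to one."""
    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(Sigma, ones)   # solve instead of explicit inverse
    return w / w.sum()
```

For a diagonal covariance matrix the weights reduce to inverse-variance weighting, which gives a quick sanity check: with variances 0.04 and 0.01 the second asset gets four times the weight of the first.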
Article
Using the Entire Yield Curve in Forecasting Output and Inflation
Econometrics 2018, 6(3), 40; https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics6030040 - 29 Aug 2018
Abstract
In forecasting a variable (the forecast target) using many predictors, a factor model with principal components (PC) is often used. When the predictors form a yield curve (a set of many yields), the Nelson–Siegel (NS) factor model is used in place of the PC factors. These PC or NS factors combine information (CI) in the predictors (yields). However, CI factors are not "supervised" for a specific forecast target, in that they are constructed using only the predictors and not the forecast target. To "supervise" factors for a forecast target, we follow Chan et al. (1999) and Stock and Watson (2004) and compute PC or NS factors of many forecasts (rather than of the predictors), with each forecast computed using one predictor at a time. These PC or NS factors of forecasts are combining forecasts (CF), and the CF factors are supervised for a specific forecast target. We demonstrate the advantage of the supervised CF factor models over the unsupervised CI factor models via simple numerical examples and Monte Carlo simulation. In out-of-sample forecasting of monthly US output growth and inflation, the CF factor models outperform the CI factor models, especially at longer forecast horizons.
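For context, the NS factor loadings and a least-squares extraction of the level, slope, and curvature factors from a yield cross-section can be sketched as follows. Fixing the decay parameter at 0.0609 per month follows the common Diebold-Li convention and is an assumption here, as are the function names:

```python
import numpy as np

def nelson_siegel_loadings(maturities, lam):
    """Nelson-Siegel loadings (level, slope, curvature) at given maturities.

    Level loads as 1, slope as (1 - exp(-lam*tau)) / (lam*tau), and
    curvature as the slope loading minus exp(-lam*tau).
    """
    tau = np.asarray(maturities, dtype=float)
    x = lam * tau
    level = np.ones_like(tau)
    slope = (1 - np.exp(-x)) / x
    curvature = slope - np.exp(-x)
    return np.column_stack([level, slope, curvature])

def ns_factors(yields, maturities, lam=0.0609):
    """Extract the three NS factors from one yield cross-section by OLS,
    holding the decay parameter lam fixed (Diebold-Li convention)."""
    L = nelson_siegel_loadings(maturities, lam)
    beta, *_ = np.linalg.lstsq(L, yields, rcond=None)
    return beta
```

Because the loading matrix has full column rank for distinct maturities, factors generated from known coefficients are recovered exactly, which is a convenient correctness check.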
