Quantitative Methods for Economics and Finance

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Financial Mathematics".

Deadline for manuscript submissions: closed (30 September 2020) | Viewed by 71559

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors


Guest Editor
Department of Economics and Business, University of Almería, La Cañada de San Urbano, Almería, Spain
Interests: long memory; portfolio theory; fractal dimension

Guest Editor
Department of Mathematics, University of Almería, 04120 Almería, Spain
Interests: fractal structures; fractal dimension; Hurst exponent; finance; asymmetric topology

Special Issue Information

Dear Colleagues,

Since the mid-twentieth century, it has been clear that classical mathematical models are not enough to explain the complexity of financial and economic series. Since then, a remarkable effort has been made to develop new mathematical tools and models for application to economics and finance. Nevertheless, it is still necessary to keep developing new tools, and to keep studying the most recently developed ones, for the analysis of financial and economic series. These tools can come from techniques and models borrowed from physics, from branches of mathematics such as fractals and dynamical systems, or from new statistical approaches such as big data.

The purpose of this Special Issue is to gather a collection of articles reflecting the latest developments in different fields of economics and finance where mathematics plays an important role.

Prof. Dr. J.E. Trinidad-Segovia
Prof. Dr. Miguel Ángel Sánchez-Granero
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Financial series
  • Portfolio theory
  • Factor models
  • Volatility modeling
  • Quantitative methods
  • Long memory
  • Computational finance
  • Statistical arbitrage

Published Papers (19 papers)


Research

20 pages, 1022 KiB  
Article
US Policy Uncertainty and Stock Market Nexus Revisited through Dynamic ARDL Simulation and Threshold Modelling
by Muhammad Asif Khan, Masood Ahmed, József Popp and Judit Oláh
Mathematics 2020, 8(11), 2073; https://0-doi-org.brum.beds.ac.uk/10.3390/math8112073 - 20 Nov 2020
Cited by 13 | Viewed by 2708
Abstract
Since the introduction of the economic policy uncertainty index, businesses, policymakers, and academic scholars have closely monitored its momentum because of its expected economic implications. The US is the world’s top-ranked equity market by size, and the prior literature on policy uncertainty and stock prices in the US is conflicting. In this study, we reexamine the policy uncertainty and stock price nexus from the US perspective, using a novel dynamically simulated autoregressive distributed lag (ARDL) setting introduced in 2018, which appears superior to traditional models. The empirical findings document a negative response of stock prices to a 10% positive or negative shock in policy uncertainty in the short run; in the long run, a 10% increase in policy uncertainty reduces stock prices, while a decrease of the same magnitude raises them. Moreover, we empirically identified two significant thresholds: (1) a policy score of 4.89 (original score 132.39), above which policy uncertainty explains stock prices negatively with high magnitude, and (2) a policy score of 4.48 (original score 87.98), which explains stock prices negatively with relatively low magnitude; interestingly, policy changes below the second threshold become irrelevant for explaining stock prices in the United States. It is worth noting that not all indices are equally exposed to unfavorable policy changes. The overall findings are robust to alternative measures of policy uncertainty and stock prices and offer useful policy input. The limitations of the study and future lines of research are also highlighted. All in all, policy uncertainty is an indicator that shall remain ever-important due to its nature and implications for the various sectors of the economy (the equity market in particular). Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)
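As a rough illustration of the distributed-lag mechanics behind the abstract above, the sketch below simulates the response of an ARDL(1,1) process to a sustained 10% shock and computes its long-run multiplier. The coefficients are purely illustrative assumptions, not estimates from the paper, and this is not the authors' dynamic simulation procedure.

```python
# Minimal sketch: step response of an ARDL(1,1) model to a sustained 10%
# shock in the regressor. All coefficients are illustrative assumptions.
def ardl_step_response(a1, b0, b1, shock, periods):
    """Response of y to a permanent `shock` in x under
    y_t = a1*y_{t-1} + b0*x_t + b1*x_{t-1}, starting from y = x = 0."""
    y_prev, x_prev = 0.0, 0.0
    path = []
    for _ in range(periods):
        x_t = shock  # x jumps to `shock` at t = 0 and stays there
        y_t = a1 * y_prev + b0 * x_t + b1 * x_prev
        path.append(y_t)
        y_prev, x_prev = y_t, x_t
    return path

def long_run_multiplier(a1, b0, b1):
    # Cumulative long-run effect of a permanent unit change in x
    return (b0 + b1) / (1.0 - a1)

path = ardl_step_response(a1=0.6, b0=-0.3, b1=-0.1, shock=0.10, periods=200)
lrm = long_run_multiplier(0.6, -0.3, -0.1)
# The simulated path converges to shock * long-run multiplier
```

With these toy coefficients the long-run multiplier is −1, so a permanent +10% shock drives y down to −0.10 in the limit.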

32 pages, 1694 KiB  
Article
Do Trade and Investment Agreements Promote Foreign Direct Investment within Latin America? Evidence from a Structural Gravity Model
by Marta Bengoa, Blanca Sanchez-Robles and Yochanan Shachmurove
Mathematics 2020, 8(11), 1882; https://0-doi-org.brum.beds.ac.uk/10.3390/math8111882 - 30 Oct 2020
Cited by 7 | Viewed by 2808
Abstract
Latin America has experienced a surge in foreign direct investment (FDI) in the last two decades, in parallel with the ratification of major regional trade agreements (RTAs) and bilateral investment treaties (BITs). This paper uses the latest developments in the structural gravity model theory to study if the co-existence of BITs and two major regional agreements, Mercosur and the Latin American Integration Association (ALADI), exerts enhancing or overlapping effects on FDI for eleven countries in Latin America over the period 1995–2018. The study is novel as it accounts for variations in the degree of investment protection across BITs within Latin America by computing a quality index of BITs. It also explores the nature of interactions (enhancing/overlapping effects) between RTAs and BITs. The findings reveal that belonging to a well-established regional trade agreement, such as Mercosur, is significantly more effective than BITs in fostering intra-regional FDI. Phasing-in effects are large and significant and there is evidence of enhancing effects. Results within the bloc are heterogeneous: BITs exert a positive, but small effect, for middle income countries. However, BITs are not effective in attracting FDI in the case of middle to low income countries, unless these countries ratify BITs with a high degree of investment protection. Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)
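The basic gravity equation underlying the abstract above can be sketched as a log-linear regression. The example below generates noiseless synthetic data from assumed coefficients and recovers them by least squares; the paper's structural gravity estimation additionally uses fixed effects and a BIT quality index, none of which are reproduced here.

```python
import numpy as np

# Minimal sketch of a log-linear gravity equation for bilateral FDI:
#   ln FDI_od = b0 + b1*ln GDP_o + b2*ln GDP_d + b3*ln dist_od + b4*RTA_od
# Synthetic data from assumed coefficients; the paper's structural gravity
# model adds origin/destination fixed effects and treaty-quality measures.
rng = np.random.RandomState(42)
n = 200
ln_gdp_o = rng.uniform(8, 12, n)          # origin-country log GDP
ln_gdp_d = rng.uniform(8, 12, n)          # destination-country log GDP
ln_dist = rng.uniform(5, 9, n)            # log bilateral distance
rta = rng.randint(0, 2, n).astype(float)  # regional trade agreement dummy

true_b = np.array([1.0, 0.8, 0.7, -1.1, 0.5])  # b0..b4 (assumptions)
X = np.column_stack([np.ones(n), ln_gdp_o, ln_gdp_d, ln_dist, rta])
ln_fdi = X @ true_b  # noiseless, so OLS recovers true_b exactly

b_hat, *_ = np.linalg.lstsq(X, ln_fdi, rcond=None)
```

The negative distance coefficient and positive RTA dummy reflect the usual gravity-model signs.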

28 pages, 1267 KiB  
Article
An Application of the SRA Copulas Approach to Price-Volume Research
by Pedro Antonio Martín Cervantes, Salvador Cruz Rambaud and María del Carmen Valls Martínez
Mathematics 2020, 8(11), 1864; https://0-doi-org.brum.beds.ac.uk/10.3390/math8111864 - 26 Oct 2020
Viewed by 1882
Abstract
The objective of this study was to apply the Sadegh, Ragno, and AghaKouchak (SRA) approach to the field of quantitative finance by analyzing, for the first time, the relationship between the price and trading volume of securities using four stock market indices: DJIA, FOOTSIE100, NIKKEI225, and IBEX35. This procedure is a completely new methodology in finance, consisting of the application of a Bayesian framework and the development of a hybrid evolution algorithm of the Markov Chain Monte Carlo (MCMC) method to analyze a large number (26) of parametric copulas. With respect to the DJIA, Joe’s copula is the one that most efficiently models its succinct dependence structures. One of the copulas included in the SRA approach, Tawn’s copula, is fitted jointly to the FOOTSIE100, NIKKEI225, and IBEX35 indices to analyze the asymmetric relationship between price and trading volume. This fit can be considered almost perfect for the NIKKEI225, while a relatively different characterization for the IBEX35 seems to indicate the existence of endogenous patterns in price and volume. Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)
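To give a flavor of copula calibration without the Bayesian MCMC machinery of the paper, the sketch below uses the standard moment-style shortcut for one family, the Clayton copula, whose parameter relates to Kendall's tau by theta = 2·tau/(1 − tau). The toy rank data are an assumption for illustration.

```python
# Minimal sketch: calibrating a Clayton copula to price-volume data via
# Kendall's tau (theta = 2*tau / (1 - tau)), a standard moment-style
# shortcut. The paper instead fits 26 copula families within a Bayesian
# MCMC framework; this is only the simplest possible alternative.
def kendall_tau(x, y):
    """Sample Kendall's tau (no tie handling, for illustration only)."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def clayton_theta(tau):
    # Clayton copula parameter implied by Kendall's tau
    return 2.0 * tau / (1.0 - tau)

prices  = [1, 2, 3, 4]   # toy ranks standing in for price changes
volumes = [2, 1, 4, 3]   # toy ranks standing in for trading volume
tau = kendall_tau(prices, volumes)   # 1/3 for this toy data
theta = clayton_theta(tau)           # 1.0
```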

15 pages, 965 KiB  
Article
On the Relationship of Cryptocurrency Price with US Stock and Gold Price Using Copula Models
by Jong-Min Kim, Seong-Tae Kim and Sangjin Kim
Mathematics 2020, 8(11), 1859; https://0-doi-org.brum.beds.ac.uk/10.3390/math8111859 - 23 Oct 2020
Cited by 26 | Viewed by 5696
Abstract
This paper examines the relationship of the leading financial assets, Bitcoin, Gold, and the S&P 500, with GARCH-Dynamic Conditional Correlation (DCC), Nonlinear Asymmetric GARCH-DCC (NA-DCC), Gaussian copula-based GARCH-DCC (GC-DCC), and Gaussian copula-based Nonlinear Asymmetric-DCC (GCNA-DCC). Under high-volatility financial conditions such as the COVID-19 pandemic, there is a computational difficulty in applying the traditional DCC method to the selected cryptocurrencies. To overcome this limitation, GC-DCC and GCNA-DCC are applied to investigate the time-varying relationship among Bitcoin, Gold, and the S&P 500. In terms of log-likelihood, we show that GC-DCC and GCNA-DCC are better models than DCC and NA-DCC for describing the relationship of Bitcoin with Gold and the S&P 500. We also consider the relationships of the time-varying conditional correlation with Bitcoin volatility and S&P 500 volatility using a Gaussian Copula Marginal Regression (GCMR) model. The empirical findings show that the S&P 500 and the Gold price are statistically significant for Bitcoin in terms of log-return and volatility. Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)
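Full DCC estimation is involved; as a much simpler stand-in for the time-varying correlation the abstract above models, the sketch below computes a RiskMetrics-style EWMA correlation between two return series. The decay value 0.94 is the conventional RiskMetrics choice, not a parameter from the paper.

```python
# Minimal sketch: RiskMetrics-style EWMA time-varying correlation between
# two return series, a crude stand-in for the GARCH-DCC machinery in the
# paper (lambda = 0.94 is the conventional RiskMetrics decay).
def ewma_correlation(r1, r2, lam=0.94):
    cov = r1[0] * r2[0]
    var1 = r1[0] ** 2
    var2 = r2[0] ** 2
    rho = [cov / (var1 * var2) ** 0.5]
    for a, b in zip(r1[1:], r2[1:]):
        cov = lam * cov + (1 - lam) * a * b
        var1 = lam * var1 + (1 - lam) * a * a
        var2 = lam * var2 + (1 - lam) * b * b
        rho.append(cov / (var1 * var2) ** 0.5)
    return rho

# Sanity check: two perfectly co-moving series keep a correlation of 1.
r = [0.01, -0.02, 0.015, 0.003, -0.007] * 10
rho_path = ewma_correlation(r, [2 * x for x in r])
```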

19 pages, 2008 KiB  
Article
Predicting Primary Energy Consumption Using Hybrid ARIMA and GA-SVR Based on EEMD Decomposition
by Yu-Sheng Kao, Kazumitsu Nawata and Chi-Yo Huang
Mathematics 2020, 8(10), 1722; https://0-doi-org.brum.beds.ac.uk/10.3390/math8101722 - 07 Oct 2020
Cited by 23 | Viewed by 2743
Abstract
Forecasting energy consumption is not easy because of the nonlinear nature of energy-consumption time series, which cannot be accurately predicted by traditional forecasting methods. Therefore, a novel hybrid forecasting framework is proposed, based on the ensemble empirical mode decomposition (EEMD) approach and a combination of individual forecasting models. The hybrid models include the autoregressive integrated moving average (ARIMA), support vector regression (SVR), and the genetic algorithm (GA). The integrated framework, the so-called EEMD-ARIMA-GA-SVR, is used to predict the primary energy consumption of an economy. An empirical case study based on Taiwanese energy consumption is used to verify the feasibility of the proposed forecasting framework. According to the empirical results, the proposed hybrid framework is feasible. Compared with predictions derived from other forecasting mechanisms, it demonstrates better precision, and such a hybrid system can also serve as a basis for energy management and policy definition. Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)
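The hybrid framework above follows a generic decompose/forecast/recombine pattern. The sketch below shows only that pattern: a toy moving-average split stands in for EEMD, and naive rules stand in for ARIMA and the GA-tuned SVR, so nothing here reproduces the paper's actual components.

```python
# Minimal sketch of the decompose/forecast/recombine pattern behind hybrid
# frameworks like EEMD-ARIMA-GA-SVR: split the series into a smooth trend
# and a residual, forecast each component separately, then add the
# component forecasts back together. (A toy moving-average split stands in
# for EEMD; naive rules stand in for ARIMA and the GA-tuned SVR.)
def moving_average(series, window):
    half = window // 2
    return [
        sum(series[i - half : i + half + 1]) / window
        for i in range(half, len(series) - half)
    ]

def hybrid_forecast(series, window=5):
    trend = moving_average(series, window)
    half = window // 2
    residual = [series[i + half] - trend[i] for i in range(len(trend))]
    # Toy component forecasts: linear extrapolation of the trend,
    # last-value carry-forward for the residual.
    trend_fc = trend[-1] + (trend[-1] - trend[-2])
    resid_fc = residual[-1]
    return trend_fc + resid_fc

# On a purely linear series the residual vanishes and the combined
# forecast simply continues the line.
series = [2.0 * t for t in range(20)]
fc = hybrid_forecast(series)
```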

22 pages, 937 KiB  
Article
Dispersion Trading Based on the Explanatory Power of S&P 500 Stock Returns
by Lucas Schneider and Johannes Stübinger
Mathematics 2020, 8(9), 1627; https://0-doi-org.brum.beds.ac.uk/10.3390/math8091627 - 20 Sep 2020
Cited by 1 | Viewed by 6692
Abstract
This paper develops a dispersion trading strategy based on a statistical index subsetting procedure and applies it to the S&P 500 constituents from January 2000 to December 2017. In particular, our selection process determines appropriate subset weights by exploiting a principal component analysis to specify the individual index explanatory power of each stock. In the following out-of-sample trading period, we trade the most suitable stocks using a hedged and unhedged approach. Within the large-scale back-testing study, the trading frameworks achieve statistically and economically significant returns of 14.52 and 26.51 percent p.a. after transaction costs, as well as a Sharpe ratio of 0.40 and 0.34, respectively. Furthermore, the trading performance is robust across varying market conditions. By benchmarking our strategies against a naive subsetting scheme and a buy-and-hold approach, we find that our statistical trading systems possess superior risk-return characteristics. Finally, a deep dive analysis shows synchronous developments between the chosen number of principal components and the S&P 500 index. Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)
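A very rough proxy for the "index explanatory power" idea in the abstract above is to rank stocks by their loading on the first principal component of the return correlation matrix. The toy example below uses deterministic square-wave factors so the answer is known; the paper's actual subsetting procedure is more elaborate.

```python
import numpy as np

# Minimal sketch: rank stocks by their |loading| on the first principal
# component of the return correlation matrix — a crude proxy for the
# index-explanatory-power selection idea (not the paper's full procedure).
def pc1_ranking(returns):
    """returns: (T, N) array; stock indices sorted by |PC1 loading|."""
    corr = np.corrcoef(returns, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)  # eigenvalues ascending
    pc1 = eigvecs[:, -1]                     # loadings of the largest PC
    return np.argsort(-np.abs(pc1))

# Toy data: stocks 0 and 1 follow one factor, stock 2 an independent one.
t = np.arange(100)
f1 = np.where(t % 2 == 0, 1.0, -1.0)   # period-2 square wave
f2 = np.where(t % 4 < 2, 1.0, -1.0)    # period-4 square wave, orthogonal
returns = np.column_stack([f1, f1, f2])
ranking = pc1_ranking(returns)         # stocks 0 and 1 lead the ranking
```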

27 pages, 1492 KiB  
Article
Non-Parametric Analysis of Efficiency: An Application to the Pharmaceutical Industry
by Ricardo F. Díaz and Blanca Sanchez-Robles
Mathematics 2020, 8(9), 1522; https://0-doi-org.brum.beds.ac.uk/10.3390/math8091522 - 07 Sep 2020
Cited by 5 | Viewed by 3036
Abstract
Increases in the cost of research, specialization, and reductions in public expenditure on health are changing the economic environment for the pharmaceutical industry. Gains in productivity and efficiency are increasingly important for firms to succeed in this environment. We analyze empirically the efficiency of the pharmaceutical industry over the period 2010–2018. We work with microdata from a large sample of European firms with different characteristics regarding size, main activity, country of origin, and other idiosyncratic features. We compute efficiency scores for the firms in the sample on a yearly basis by means of non-parametric data envelopment analysis (DEA) techniques. Basic results show a moderate average level of efficiency for the firms in the sample. Efficiency is higher for companies which engage in manufacturing and distribution than for firms focusing on research and development (R&D) activities. Large firms display higher levels of efficiency than medium-sized and small firms. Our estimates point to a decreasing pattern of average efficiency over the years 2010–2018. Furthermore, we explore the potential correlation of efficiency with particular aspects of the firms’ performance. Profit margins and financial solvency are positively correlated with efficiency, whereas employee costs display a negative correlation. Institutional aspects of the countries of origin also influence efficiency levels. Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)
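The DEA efficiency scores mentioned above come from solving one linear program per firm. The sketch below implements the textbook input-oriented CCR envelopment LP with SciPy for a two-firm toy example; the paper's analysis of course involves many firms, inputs, and outputs.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal sketch: input-oriented CCR DEA efficiency of one decision-making
# unit (DMU) as a linear program, in the spirit of the paper's DEA scores.
def dea_efficiency(inputs, outputs, k):
    """inputs: (m, n), outputs: (s, n) for n DMUs; returns theta for DMU k."""
    m, n = inputs.shape
    s = outputs.shape[0]
    # Decision variables: [theta, lambda_1 .. lambda_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    # sum_j lambda_j * x_ij - theta * x_ik <= 0   (input constraints)
    a_in = np.hstack([-inputs[:, [k]], inputs])
    # -sum_j lambda_j * y_rj <= -y_rk             (output constraints)
    a_out = np.hstack([np.zeros((s, 1)), -outputs])
    A_ub = np.vstack([a_in, a_out])
    b_ub = np.concatenate([np.zeros(m), -outputs[:, k]])
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Two firms, one input, one output: firm 0 produces the same output with
# half the input, so firm 1 is only 50% efficient.
x = np.array([[2.0, 4.0]])
y = np.array([[2.0, 2.0]])
eff0 = dea_efficiency(x, y, 0)   # 1.0
eff1 = dea_efficiency(x, y, 1)   # 0.5
```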

32 pages, 1738 KiB  
Article
The Net Worth Trap: Investment and Output Dynamics in the Presence of Financing Constraints
by Jukka Isohätälä, Alistair Milne and Donald Robertson
Mathematics 2020, 8(8), 1327; https://0-doi-org.brum.beds.ac.uk/10.3390/math8081327 - 10 Aug 2020
Viewed by 2149
Abstract
This paper investigates investment and output dynamics in a simple continuous time setting, showing that financing constraints substantially alter the relationship between net worth and the decisions of an optimizing firm. In the absence of financing constraints, net worth is irrelevant (the 1958 Modigliani–Miller irrelevance proposition applies). When incorporating financing constraints, a decline in net worth leads the firm to reduce investment and also output (when this reduces risk exposure). This negative relationship between net worth and investment has already been examined in the literature. The contribution here is providing new intuitive insights: (i) showing how large and long-lasting the resulting non-linearity of firm behaviour can be, even with linear production and preferences; and (ii) highlighting the economic mechanisms involved—the emergence of shadow prices creating both corporate prudential saving and induced risk aversion. The emergence of such pronounced non-linearity, even with linear production and preference functions, suggests that financing constraints can have a major impact on investment and output, and this should be allowed for in empirical modelling of economic and financial crises (for example, the Great Depression of the 1930s, the global financial crisis of 2007–2008 and the crash following the COVID-19 pandemic of 2020). Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)

27 pages, 992 KiB  
Article
Financial Distress Prediction and Feature Selection in Multiple Periods by Lassoing Unconstrained Distributed Lag Non-linear Models
by Dawen Yan, Guotai Chi and Kin Keung Lai
Mathematics 2020, 8(8), 1275; https://0-doi-org.brum.beds.ac.uk/10.3390/math8081275 - 03 Aug 2020
Cited by 15 | Viewed by 3290
Abstract
In this paper, we propose a new framework for a financial early warning system, combining the unconstrained distributed lag model (DLM) and widely used financial distress prediction models such as the logistic model and the support vector machine (SVM), for the purpose of improving the performance of an early warning system for listed companies in China. We introduce simultaneously the 3~5-period-lagged financial ratios and macroeconomic factors in the consecutive time windows t − 3, t − 4 and t − 5 to the prediction models; thus, the influence of early continued changes within and outside the company on its financial condition is detected. Further, by introducing a lasso penalty into the logistic-distributed lag and SVM-distributed lag frameworks, we implement feature selection and exclude potentially redundant factors, considering that an originally long list of accounting ratios is used in the financial distress prediction context. We conduct a series of comparison analyses to test the predictive performance of the proposed models. The results show that our models outperform logistic, SVM, decision tree and neural network (NN) models in a single time window, which implies that models incorporating indicator data in multiple time windows convey more information for financial distress prediction than the existing single-time-window models. Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)
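The key mechanism above — a lasso penalty zeroing out uninformative lagged features — can be shown with a minimal L1-penalized logistic regression trained by proximal gradient descent. This is a generic sketch, not the paper's lasso-DLM framework, and the toy data are constructed so feature 1 is orthogonal noise.

```python
import math

# Minimal sketch: L1-penalised logistic regression via proximal gradient
# descent, illustrating how a lasso penalty can zero out an uninformative
# (e.g. redundant lagged) feature. Toy data; no bias term needed because
# the classes are symmetric.
def soft_threshold(v, t):
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def lasso_logistic(X, y, lam=0.01, lr=0.5, iters=2000):
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p_i = 1.0 / (1.0 + math.exp(-z))       # predicted probability
            for j in range(p):
                grad[j] += (p_i - yi) * xi[j] / n  # logistic-loss gradient
        # Gradient step on the smooth loss, then the L1 proximal step
        w = [soft_threshold(w[j] - lr * grad[j], lr * lam) for j in range(p)]
    return w

# Feature 0 determines the label; feature 1 is orthogonal noise.
X = [[ 1,  1], [ 1, -1], [ 1,  1], [ 1, -1],
     [-1,  1], [-1, -1], [-1,  1], [-1, -1]]
y = [1, 1, 1, 1, 0, 0, 0, 0]
w = lasso_logistic(X, y)   # w[0] > 0; w[1] is shrunk to exactly 0
```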

13 pages, 2024 KiB  
Article
Deep Learning Methods for Modeling Bitcoin Price
by Prosper Lamothe-Fernández, David Alaminos, Prosper Lamothe-López and Manuel A. Fernández-Gámez
Mathematics 2020, 8(8), 1245; https://0-doi-org.brum.beds.ac.uk/10.3390/math8081245 - 30 Jul 2020
Cited by 33 | Viewed by 7794
Abstract
A precise prediction of the Bitcoin price is an important aspect of digital financial markets because it improves the valuation of an asset belonging to a decentralized market. Numerous studies have examined the accuracy of models built from sets of factors. The previous literature, however, shows that models for predicting Bitcoin suffer from poor performance and do not select the most significant variables, so further progress on predictive models is needed. This paper presents a comparison of deep learning methodologies for forecasting the Bitcoin price and, on that basis, a new prediction model with the ability to estimate accurately. A sample of 29 initial factors was used, which made it possible to apply explanatory factors covering different aspects of the formation of the price of Bitcoin. Different methods were applied to the sample to achieve a robust model, namely deep recurrent convolutional neural networks, which have shown the importance of transaction costs and difficulty in the Bitcoin price, among other factors. Our results have a potentially great impact on the adequacy of asset pricing in the face of the uncertainties derived from digital currencies, providing tools that help to achieve stability in cryptocurrency markets. Our models offer high and stable success rates over a future prediction horizon, which is useful for the valuation of cryptocurrencies like Bitcoin. Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)

15 pages, 637 KiB  
Article
A Novel Methodology to Calculate the Probability of Volatility Clusters in Financial Series: An Application to Cryptocurrency Markets
by Venelina Nikolova, Juan E. Trinidad Segovia, Manuel Fernández-Martínez and Miguel Angel Sánchez-Granero
Mathematics 2020, 8(8), 1216; https://0-doi-org.brum.beds.ac.uk/10.3390/math8081216 - 24 Jul 2020
Cited by 14 | Viewed by 3439
Abstract
One of the main characteristics of cryptocurrencies is the high volatility of their exchange rates. In a previous work, the authors found that a process with volatility clusters displays a volatility series with a high Hurst exponent. In this paper, we provide a novel methodology to calculate the probability of volatility clusters, with a special emphasis on cryptocurrencies. With this aim, we calculate the Hurst exponent of a volatility series by means of the FD4 approach. An explicit criterion to computationally determine whether there exist volatility clusters of a fixed size is described. We found that the probabilities of volatility clusters of an index (S&P500) and a stock (Apple) showed a similar profile, whereas the probability of volatility clusters of a forex pair (Euro/USD) was considerably lower. On the other hand, a similar profile appeared for the Bitcoin/USD, Ethereum/USD, and Ripple/USD cryptocurrencies, with the probabilities of volatility clusters of all such cryptocurrencies being much greater than those of the three traditional assets. Our results suggest that the volatility in cryptocurrencies changes faster than in traditional assets, and much faster than in forex pairs. Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)
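The Hurst exponent at the heart of the abstract above can be estimated in many ways; the sketch below uses the classical aggregated-variance method, not the FD4 approach the paper employs. For white noise the estimate should land near 0.5, the benchmark separating anti-persistent from persistent series.

```python
import numpy as np

# Minimal sketch: aggregated-variance estimator of the Hurst exponent
# (the paper uses the FD4 approach instead). For a series with stationary
# increments, Var(block means of size m) scales like m^(2H - 2), so H
# comes from the slope of a log-log regression.
def hurst_aggregated_variance(x, block_sizes=(4, 8, 16, 32, 64)):
    x = np.asarray(x, dtype=float)
    log_m, log_v = [], []
    for m in block_sizes:
        k = len(x) // m
        means = x[: k * m].reshape(k, m).mean(axis=1)  # block means
        log_m.append(np.log(m))
        log_v.append(np.log(means.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]  # approximately 2H - 2
    return 1.0 + slope / 2.0

rng = np.random.RandomState(7)
white_noise = rng.standard_normal(4096)
h = hurst_aggregated_variance(white_noise)  # close to 0.5 for white noise
```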

13 pages, 277 KiB  
Article
A Proposal to Fix the Number of Factors on Modeling the Dynamics of Futures Contracts on Commodity Prices
by Andrés García-Mirantes, Beatriz Larraz and Javier Población
Mathematics 2020, 8(6), 973; https://0-doi-org.brum.beds.ac.uk/10.3390/math8060973 - 14 Jun 2020
Cited by 1 | Viewed by 1623
Abstract
In the literature on modeling commodity futures prices, we find that the stochastic behavior of the spot price is a response to between one and four factors, including both short- and long-term components. The more factors considered in modeling a spot price process, the better the fit to observed futures prices—but the more complex the procedure can be. With a view to contributing to the knowledge of how many factors should be considered, this study presents a new way of computing the best number of factors to account for when modeling the risk management of energy derivatives. The new method identifies the number of factors one should consider in the model and the type of stochastic process to be followed. This study aims to add value to previous studies which consider principal components, by assuming that the spot price can be modeled as a sum of several factors. When applied to four different commodities (weekly observations corresponding to futures prices traded at the NYMEX for WTI light sweet crude oil, heating oil, unleaded gasoline and Henry Hub natural gas) we find that, while crude oil and heating oil are satisfactorily well-modeled with two factors, unleaded gasoline and natural gas need a third factor to capture seasonality. Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)
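As a baseline for the factor-count question addressed above, the sketch below applies the textbook cumulative-explained-variance rule to a toy futures curve driven by exactly two orthogonal factors. The paper proposes its own, more refined criterion; this is only the standard principal-components rule it builds on.

```python
import numpy as np

# Minimal sketch: the common cumulative-explained-variance rule for
# choosing the number of factors in a futures-curve PCA. The paper
# proposes its own criterion; this is merely the textbook baseline.
def n_factors(returns, threshold=0.95):
    corr = np.corrcoef(returns, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending
    share = np.cumsum(eigvals) / eigvals.sum()         # cumulative share
    return int(np.searchsorted(share, threshold) + 1)

# Toy curve driven by exactly two orthogonal square-wave factors:
t = np.arange(100)
f1 = np.where(t % 2 == 0, 1.0, -1.0)
f2 = np.where(t % 4 < 2, 1.0, -1.0)
returns = np.column_stack([f1, f1, f2])
k = n_factors(returns)   # two factors explain essentially all variance
```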
17 pages, 833 KiB  
Article
Detection of Near-Multicollinearity through Centered and Noncentered Regression
by Román Salmerón Gómez, Catalina García García and José García Pérez
Mathematics 2020, 8(6), 931; https://0-doi-org.brum.beds.ac.uk/10.3390/math8060931 - 07 Jun 2020
Cited by 7 | Viewed by 2781
Abstract
This paper analyzes the diagnosis of near-multicollinearity in a multiple linear regression from auxiliary centered (with intercept) and noncentered (without intercept) regressions. From these auxiliary regressions, the centered and noncentered variance inflation factors (VIFs) are calculated, and an expression relating the two is presented. In addition, this paper analyzes why the VIF is not able to detect the relation between the intercept and the rest of the independent variables of an econometric model. At the same time, an analysis is provided of how the auxiliary regression applied to calculate the VIF can be useful to detect this kind of multicollinearity. Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)
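The centered VIF discussed above is computed from the R² of each auxiliary regression via VIF_j = 1/(1 − R_j²). The sketch below does exactly that on synthetic data with one nearly collinear pair; it covers only the centered case, not the paper's centered/noncentered comparison.

```python
import numpy as np

# Minimal sketch: centered variance inflation factors (VIFs) from the R^2
# of the auxiliary regression of each regressor on the remaining ones,
# using VIF_j = 1 / (1 - R_j^2). Only the centered case is shown here.
def centered_vif(X):
    n, p = X.shape
    vifs = []
    for j in range(p):
        yj = X[:, j]
        # Auxiliary regression of column j on the others (with intercept)
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, yj, rcond=None)
        resid = yj - Z @ beta
        tss = (yj - yj.mean()) @ (yj - yj.mean())
        r2 = 1.0 - (resid @ resid) / tss
        vifs.append(1.0 / (1.0 - r2))
    return vifs

rng = np.random.RandomState(0)
x1 = rng.standard_normal(500)
x2 = rng.standard_normal(500)
x3 = x1 + 0.05 * rng.standard_normal(500)   # nearly collinear with x1
vifs = centered_vif(np.column_stack([x1, x2, x3]))
# vifs[0] and vifs[2] are large; vifs[1] stays near 1
```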
21 pages, 4668 KiB  
Article
Market Volatility of the Three Most Powerful Military Countries during Their Intervention in the Syrian War
by Viviane Naimy, José-María Montero, Rim El Khoury and Nisrine Maalouf
Mathematics 2020, 8(5), 834; https://0-doi-org.brum.beds.ac.uk/10.3390/math8050834 - 21 May 2020
Cited by 6 | Viewed by 2666
Abstract
This paper analyzes the volatility dynamics in the financial markets of the (three) most powerful countries from a military perspective, namely, the U.S., Russia, and China, during the period 2015–2018 that corresponds to their intervention in the Syrian war. As far as we know, there is no literature studying this topic during such an important distress period, which has had very serious economic, social, and humanitarian consequences. The Generalized Autoregressive Conditional Heteroscedasticity (GARCH (1, 1)) model yielded the best volatility results for the in-sample period. The weighted historical simulation produced an accurate value at risk (VaR) for a period of one month at the three considered confidence levels. For the out-of-sample period, the Monte Carlo simulation method, based on the Student's t copula and peaks-over-threshold (POT) extreme value theory (EVT) under the Gaussian kernel and the generalized Pareto (GP) distribution, overstated the risk for the three countries. The comparison of the POT-EVT VaR of the three countries to a portfolio of stock indices pertaining to non-military countries, namely Finland, Sweden, and Ecuador, for the same out-of-sample period, revealed that the intervention in the Syrian war may be one of the pertinent reasons that significantly affected the volatility of the stock markets of the three most powerful military countries. This paper is of great interest to policy makers, central bank leaders, market participants, and all practitioners, given the economic and financial consequences derived from such dynamics. Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)

14 pages, 323 KiB  
Article
An Extension of the Concept of Derivative: Its Application to Intertemporal Choice
by Salvador Cruz Rambaud and Blas Torrecillas Jover
Mathematics 2020, 8(5), 696; https://0-doi-org.brum.beds.ac.uk/10.3390/math8050696 - 02 May 2020
Cited by 3 | Viewed by 2083
Abstract
The framework of this paper is the concept of derivative from the point of view of abstract algebra and differential calculus. The objective is to introduce a novel concept of derivative arising in certain economic problems, specifically in intertemporal choice when trying to characterize moderately and strongly decreasing impatience. To do this, we have employed the usual tools and magnitudes of financial mathematics with an algebraic nomenclature. The main contribution of this paper is twofold. On the one hand, we propose a novel framework and a different approach to the concept of relative derivative, which satisfies the so-called generalized Leibniz rule. On the other hand, although this approach can be applied to other disciplines, we present the mathematical characterization of the two main types of decreasing impatience in the field of behavioral finance, based on a previous characterization involving the proportional increase of the variable “time”. Finally, this paper points out other patterns of variation which could be applied in economics and other scientific disciplines. Full article
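For context (generic notation, not the authors'), decreasing impatience is usually phrased through the discount function F(t) and its instantaneous discount rate, a logarithmic-derivative construction of the kind a relative derivative generalizes:

```latex
\delta(t) \;=\; -\frac{\mathrm{d}}{\mathrm{d}t}\ln F(t) \;=\; -\frac{F'(t)}{F(t)}.
```

Exponential discounting F(t) = e^{-kt} gives the constant rate δ(t) = k (constant impatience), whereas hyperbolic discounting F(t) = 1/(1 + kt) gives δ(t) = k/(1 + kt), which decreases in t (decreasing impatience).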
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)

28 pages, 490 KiB  
Article
The VIF and MSE in Raise Regression
by Román Salmerón Gómez, Ainara Rodríguez Sánchez, Catalina García García and José García Pérez
Mathematics 2020, 8(4), 605; https://0-doi-org.brum.beds.ac.uk/10.3390/math8040605 - 16 Apr 2020
Cited by 30 | Viewed by 4295
Abstract
The raise regression has been proposed as an alternative to ordinary least squares estimation when a model presents collinearity. In order to analyze whether the problem has been mitigated, it is necessary to develop measures to detect collinearity after the application of the raise regression. This paper extends the concept of the variance inflation factor to be applied in a raise regression. The relevance of this extension is that it can be applied to determine the raising factor that allows an optimal application of this technique. The mean square error is also calculated, since the raise regression provides a biased estimator. The results are illustrated by two empirical examples in which the raise estimator is compared to the ridge and Lasso estimators, which are commonly used to estimate models with multicollinearity as alternatives to ordinary least squares. Full article
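As a rough illustration of the idea, the sketch below computes ordinary VIFs before and after a raise-type transform (replacing a regressor x_j by x_j + λ·e_j, where e_j is its residual on the remaining regressors). The simulated data, the raising factor λ, and the implementation details are illustrative assumptions, not the paper's empirical examples.

```python
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R_j^2), where R_j^2 is
    the R-squared of regressing column j on the remaining columns
    (with an intercept)."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1 / (1 - r2))
    return np.array(out)

def raise_column(X, j, lam):
    """Raise-type transform (sketch): replace x_j by x_j + lam * e_j,
    where e_j is the residual of x_j on the other columns."""
    n = X.shape[0]
    Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
    e = X[:, j] - Z @ beta
    X_raised = X.copy()
    X_raised[:, j] = X[:, j] + lam * e
    return X_raised

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)   # nearly collinear with x1
X = np.column_stack([x1, x2])
print(vif(X))                                # large VIFs before raising
print(vif(raise_column(X, 1, lam=2.0)))      # smaller VIFs after raising
```

Because e_j is orthogonal to the other regressors, raising inflates the part of x_j not explained by them, which is exactly what lowers the VIF.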
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)

17 pages, 468 KiB  
Article
Discounted and Expected Utility from the Probability and Time Trade-Off Model
by Salvador Cruz Rambaud and Ana María Sánchez Pérez
Mathematics 2020, 8(4), 601; https://0-doi-org.brum.beds.ac.uk/10.3390/math8040601 - 15 Apr 2020
Cited by 3 | Viewed by 4348
Abstract
This paper shows the interaction between probabilistic and delayed rewards. In decision-making processes, the Expected Utility (EU) model has been employed to assess risky choices, whereas the Discounted Utility (DU) model has been applied to intertemporal choices. Despite both models being different, they are based on the same theoretical principle: the rewards are assessed by taking into account the sum of their utilities, and some similar anomalies have been revealed in both models. The aim of this paper is to characterize and consider particular cases of the Probability and Time Trade-Off (PTT) model and show that they correspond to the EU and DU models. Additionally, we will try to build a PTT model, starting from a discounted and an expected utility model, able to overcome the limitations pointed out by Baucells and Heukamp. Full article
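For orientation, a PTT-style valuation in the spirit of Baucells and Heukamp combines a probability weight, exponential time discounting, and a utility of the outcome, with EU- and DU-like valuations as the t = 0 and p = 1 special cases. The sketch below uses illustrative power-function forms and an assumed discount rate, not the specification developed in the paper.

```python
import math

def ptt_value(x, p, t, v=lambda x: x ** 0.88,
              w=lambda q: q ** 0.65, r=0.1):
    """PTT-style valuation: V(x, p, t) = w(p * exp(-r * t)) * v(x).
    The power forms of v and w and the rate r are illustrative assumptions."""
    return w(p * math.exp(-r * t)) * v(x)

# Expected-utility-like special case: immediate reward (t = 0)
eu = ptt_value(100, p=0.5, t=0)
# Discounted-utility-like special case: certain reward (p = 1)
du = ptt_value(100, p=1.0, t=2)
print(eu, du)
```

With t = 0 the value reduces to w(p)·v(x), a probability-weighted utility; with p = 1 it reduces to w(e^{-rt})·v(x), a discounted utility, so one functional form nests both models.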
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)

17 pages, 2381 KiB  
Article
Some Notes on the Formation of a Pair in Pairs Trading
by José Pedro Ramos-Requena, Juan Evangelista Trinidad-Segovia and Miguel Ángel Sánchez-Granero
Mathematics 2020, 8(3), 348; https://0-doi-org.brum.beds.ac.uk/10.3390/math8030348 - 05 Mar 2020
Cited by 13 | Viewed by 5913
Abstract
The main goal of the paper is to introduce different models to calculate the amount of money that must be allocated to each stock in a statistical arbitrage technique known as pairs trading. The traditional allocation strategy is based on an equal weight methodology. However, we will show how, with an optimal allocation, the performance of pairs trading increases significantly. Four methodologies are proposed to set up the optimal allocation. These methodologies are based on distance, correlation, cointegration, and the Hurst exponent (mean reversion). It is shown that the new methodologies improve the results obtained with an equally weighted strategy. Full article
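To make the weighting question concrete, the toy sketch below picks the pair weights by minimizing the variance of the weighted log-price spread in closed form. This is in the same spirit as the correlation and cointegration allocations mentioned in the abstract, but it is not any of the paper's four exact methodologies; the simulated prices are illustrative assumptions.

```python
import numpy as np

def pair_weights(log_a, log_b):
    """Choose b in [0, 1] minimizing Var(b * d_log_a - (1 - b) * d_log_b),
    where d_log_* are log-price increments. Setting the derivative of the
    quadratic in b to zero gives b = (vb + c) / (va + vb + 2c)."""
    da, db = np.diff(log_a), np.diff(log_b)
    cov_mat = np.cov(da, db)
    va, vb, c = cov_mat[0, 0], cov_mat[1, 1], cov_mat[0, 1]
    b = (vb + c) / (va + vb + 2 * c)
    b = float(np.clip(b, 0.0, 1.0))
    return b, 1.0 - b

# synthetic pair: a shared random walk plus asset-specific noise
rng = np.random.default_rng(2)
common = rng.normal(0, 0.01, 500).cumsum()
log_a = common + rng.normal(0, 0.002, 500)
log_b = common + rng.normal(0, 0.004, 500)
w_a, w_b = pair_weights(log_a, log_b)
print(w_a, w_b)
```

The minimum-variance solution puts slightly more weight on the less noisy leg, so the common component cancels as far as possible and the spread is easier to trade on mean reversion.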
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)

16 pages, 502 KiB  
Article
Exploring the Link between Academic Dishonesty and Economic Delinquency: A Partial Least Squares Path Modeling Approach
by Elena Druică, Călin Vâlsan, Rodica Ianole-Călin, Răzvan Mihail-Papuc and Irena Munteanu
Mathematics 2019, 7(12), 1241; https://0-doi-org.brum.beds.ac.uk/10.3390/math7121241 - 15 Dec 2019
Cited by 12 | Viewed by 4173
Abstract
This paper advances the study of the relationship between the attitude towards academic dishonesty and other types of dishonest and even fraudulent behavior, such as tax evasion and piracy. It proposes a model in which the attitudes towards two types of cheating and fraud are systematically analyzed in connection with a complex set of latent construct determinants and control variables. It attempts to predict the tolerance towards tax evasion, social insurance fraud, and piracy, using academic cheating as the main predictor. The study surveys 504 student respondents, uses a partial least squares path modeling (PLS-PM) analysis, and employs two subsets of latent constructs to account for context and disposition. The relationship between the outcome variable and the subset of predictors that account for context is mediated by yet another latent construct, Preoccupation about Money, which has been shown to strongly influence people’s attitude towards a whole range of social and economic behaviors. The results show that academic dishonesty is a statistically significant predictor of the acceptance of an entire range of unethical and fraudulent behavior, and confirm the role played by both contextual and dispositional variables; moreover, they show that dispositional and contextual variables tend to be segregated according to how they impact the outcome. They also show that money priming does not act as a mediator, in spite of its stand-alone impact on the outcome variables. The most important result, however, is that the effect size of the main predictor is large. The contribution of this paper is twofold: it advances a line of research previously sidestepped, and it proposes a comprehensive and robust model with a view to establishing a hierarchy of significance and effect size in predicting deviance and fraud.
Most of all, this research highlights the central role played by academic dishonesty in predicting the acceptance of any type of dishonest behavior, be it in the workplace, at home, or when discharging one’s responsibilities as a citizen. The results presented here give important clues as to where to start intervening in order to discourage the acceptance of deviance and fraud. Educators, university professors, and academic administrators should be at the forefront of targeted campaigns and policies aimed at fighting and reducing academic dishonesty. Full article
(This article belongs to the Special Issue Quantitative Methods for Economics and Finance)
