Special Issue "Towards a New Paradigm for Statistical Evidence"

A special issue of Econometrics (ISSN 2225-1146).

Deadline for manuscript submissions: closed (30 September 2019).

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors

Jae H. (Paul) Kim
Guest Editor
Department of Economics and Finance, La Trobe Business School, La Trobe University, Melbourne 3086, Australia
Interests: econometrics; empirical finance; statistical inference
Muhammad Ishaq Bhatti
Guest Editor
Department of Economics and Finance, La Trobe Business School, La Trobe University, Melbourne 3086, Australia
Interests: oil prices; stocks; forecasting; copula; DCC models; wavelets; financial econometrics

Special Issue Information

Dear Colleagues,

In many areas of science including business disciplines, statistical decisions are often made almost exclusively using the “p-value < 0.05” criterion, regardless of sample size, statistical power, or expected loss.

Serious concerns about this practice have grown, with warnings that it has made false or distorted scientific findings widespread. These include the statement issued by the American Statistical Association (Wasserstein and Lazar, 2016) and the Presidential Address to the American Finance Association (Harvey, 2017). Studies documenting empirical evidence on this practice include Keuzenkamp and Magnus (1995) and McCloskey and Ziliak (1996) for economics, Kim et al. (2018) for accounting, and Kim and Ji (2015) for finance, among others.

The problem has become particularly serious in recent times, with the increasing availability of large or massive data sets. On this point, Rao and Lovric (2016) propose that 21st-century researchers work towards a “paradigm shift” in testing statistical hypotheses. There are also calls for researchers to conduct more extensive exploratory data analysis before inferential statistics are considered for decision-making (see, for example, Leek and Peng, 2015; Soyer and Hogarth, 2012).
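The sample-size sensitivity at the heart of this concern is easy to demonstrate numerically. The sketch below (illustrative only; the standardized effect size of 0.02 and the two sample sizes are hypothetical, and a normal approximation replaces Student-t quantiles for simplicity) computes the two-sided p-value for the same negligible effect at different sample sizes:

```python
from math import erf, sqrt

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a standard-normal test statistic."""
    phi = 0.5 * (1.0 + erf(abs(z) / sqrt(2.0)))  # Phi(|z|), standard-normal CDF
    return 2.0 * (1.0 - phi)

effect = 0.02  # a negligible standardized effect (hypothetical)
for n in (100, 1_000_000):
    z = effect * sqrt(n)  # test statistic for a one-sample test of mean zero
    print(f"n = {n:>9,}: z = {z:6.2f}, p = {two_sided_p(z):.4f}")
```

At n = 100 the effect is nowhere near “significant” (p ≈ 0.84), yet at n = 1,000,000 the very same trivial effect is significant at any conventional level, which is why a fixed “p < 0.05” criterion becomes uninformative for massive data sets.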

In light of these concerns and proposals, it is important that new research efforts be directed to

  • developing a new criterion for statistical evidence;
  • providing modifications to the p-value criterion; and
  • adopting more sensible alternatives to the p-value criterion including graphical methods.

This special issue invites theoretical or empirical papers on these issues. Possible topics include, but are not limited to:

  1. New or alternative methods of hypothesis testing, such as estimation-based methods (e.g., confidence intervals), predictive inference, and equivalence testing;
  2. Application of adaptive or optimal levels of significance to business decisions;
  3. Decision-theoretic approaches to hypothesis testing and their applications;
  4. Compromises between the classical and Bayesian methods of hypothesis testing;
  5. Exploratory data analysis for large or massive data sets.

Critical review papers on the current practice of hypothesis testing and future directions in the business or related disciplines may also be considered.
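As a concrete illustration of topic 1, the two one-sided tests (TOST) procedure for equivalence can be sketched in a few lines. The numbers below are hypothetical, and a standard-normal approximation is used in place of Student-t quantiles purely for simplicity:

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard-normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def tost_p(xbar: float, se: float, delta: float) -> float:
    """TOST p-value for H0: |mu| >= delta vs H1: |mu| < delta (normal approx.)."""
    p_lower = 1.0 - phi((xbar + delta) / se)  # one-sided test of mu <= -delta
    p_upper = phi((xbar - delta) / se)        # one-sided test of mu >= delta
    return max(p_lower, p_upper)              # reject H0 only if both reject

# Hypothetical numbers: sample mean 0.01, standard error 0.01, margin 0.05
xbar, se, delta = 0.01, 0.01, 0.05
p_point = 2.0 * (1.0 - phi(abs(xbar) / se))  # conventional point-null p-value
p_equiv = tost_p(xbar, se, delta)
print(f"point-null p = {p_point:.3f}, equivalence (TOST) p = {p_equiv:.5f}")
```

Here the conventional point-null test fails to reject (p ≈ 0.317), yet TOST supports practical equivalence to zero within the stated margin (p ≈ 0.00003): the two criteria answer different, and differently useful, questions.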

References:

Harvey, C. R. (2017). Presidential address: The scientific outlook in financial economics. Journal of Finance, 72(4), 1399–1440.

Kim, J. H., Ji, P., and Ahmed, K. (2018). Significance testing in accounting research: A critical evaluation based on evidence. Abacus: A Journal of Accounting, Finance and Business Studies, forthcoming.

Kim, J. H., and Choi, I. (2017). Unit roots in economic and financial time series: A re-evaluation at the decision-based significance levels. Econometrics, 5(3), 41 (Special Issue “Celebrated Econometricians: Peter Phillips”).

Kim, J. H., and Ji, P. (2015). Significance testing in empirical finance: A critical review and assessment. Journal of Empirical Finance, 34, 1–14.

Keuzenkamp, H. A., and Magnus, J. (1995). On tests and significance in econometrics. Journal of Econometrics, 67(1), 103–128.

Leek, J. T., and Peng, R. D. (2015). Statistics: P values are just the tip of the iceberg. Nature, 520(7549), 612. doi:10.1038/520612a

McCloskey, D., and Ziliak, S. (1996). The standard error of regressions. Journal of Economic Literature, 34, 97–114.

Rao, C. R., and Lovric, M. M. (2016). Testing point null hypothesis of a normal mean and the truth: 21st century perspective. Journal of Modern Applied Statistical Methods, 15(2), 2–21.

Soyer, E., and Hogarth, R. M. (2012). The illusion of predictability: How regression statistics mislead experts. International Journal of Forecasting, 28, 695–711.

Wasserstein, R. L., and Lazar, N. A. (2016). The ASA's statement on p-values: Context, process, and purpose. The American Statistician, 70(2), 129–133.

Prof. M. Ishaq Bhatti
Prof. Jae H. Kim
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Econometrics is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (8 papers)


Editorial

Editorial
Towards a New Paradigm for Statistical Evidence in the Use of p-Value
Econometrics 2021, 9(1), 2; https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics9010002 - 31 Dec 2020
Abstract
As the guest editors of this Special Issue, we feel proud and grateful to write the editorial note of this issue, which consists of seven high-quality research papers [...]
(This article belongs to the Special Issue Towards a New Paradigm for Statistical Evidence)

Research

Article
Teaching Graduate (and Undergraduate) Econometrics: Some Sensible Shifts to Improve Efficiency, Effectiveness, and Usefulness
Econometrics 2020, 8(3), 36; https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics8030036 - 07 Sep 2020
Abstract
Building on Joshua Angrist and Jörn-Steffen Pischke's arguments for how the teaching of undergraduate econometrics could become more effective, I propose a redesign of graduate econometrics that would better serve most students and help make the field of economics more relevant. The primary basis for the redesign is that the conventional methods do not adequately prepare students to recognize biases and to properly interpret significance, insignificance, and p-values; and there is an ethical problem in searching for significance and other matters. Based on these premises, I recommend that some of Angrist and Pischke's recommendations be adopted for graduate econometrics. In addition, I recommend further shifts in emphasis, new pedagogy, and adding important components (e.g., on interpretations and simple ethical lessons) that are largely ignored in current textbooks. An obvious implication of these recommended changes is a confirmation of most of Angrist and Pischke's recommendations for undergraduate econometrics, as well as further reductions in complexity.

Article
The Replication Crisis as Market Failure
Econometrics 2019, 7(4), 44; https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics7040044 - 24 Nov 2019
Abstract
This paper begins with the observation that the constrained maximisation central to model estimation and hypothesis testing may be interpreted as a kind of profit maximisation. The output of estimation is a model that maximises some measure of model fit, subject to costs that may be interpreted as the shadow price of constraints imposed on the model. The replication crisis may be regarded as a market failure in which the price of “significant” results is lower than would be socially optimal.
Article
A Frequentist Alternative to Significance Testing, p-Values, and Confidence Intervals
Econometrics 2019, 7(2), 26; https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics7020026 - 04 Jun 2019
Abstract
There has been much debate about null hypothesis significance testing, p-values without null hypothesis significance testing, and confidence intervals. The first major section of the present article addresses some of the main reasons these procedures are problematic. The conclusion is that none of them are satisfactory. However, there is a new procedure, termed the a priori procedure (APP), that validly aids researchers in obtaining sample statistics that have acceptable probabilities of being close to their corresponding population parameters. The second major section provides a description and review of APP advances. Not only does the APP avoid the problems that plague other inferential statistical procedures, but it is easy to perform too. Although the APP can be performed in conjunction with other procedures, the present recommendation is that it be used alone.
Short Note
On Using the t-Ratio as a Diagnostic
Econometrics 2019, 7(2), 24; https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics7020024 - 29 May 2019
Abstract
The t-ratio has not one but two uses in econometrics, which should be carefully distinguished. It is used as a test and also as a diagnostic. I emphasize that the commonly-used estimators are in fact pretest estimators, and argue in favor of an improved (continuous) version of pretesting, called model averaging.
Article
Interval-Based Hypothesis Testing and Its Applications to Economics and Finance
Econometrics 2019, 7(2), 21; https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics7020021 - 15 May 2019
Abstract
This paper presents a brief review of interval-based hypothesis testing, widely used in bio-statistics, medical science, and psychology, namely, tests for minimum-effect, equivalence, and non-inferiority. We present the methods in the contexts of a one-sample t-test and a test for linear restrictions in a regression. We present applications in testing for market efficiency, validity of asset-pricing models, and persistence of economic time series. We argue that, from the point of view of economics and finance, interval-based hypothesis testing provides more sensible inferential outcomes than those based on point-null hypothesis. We propose that interval-based tests be routinely employed in empirical research in business, as an alternative to point null hypothesis testing, especially in the new era of big data.

Article
Important Issues in Statistical Testing and Recommended Improvements in Accounting Research
Econometrics 2019, 7(2), 18; https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics7020018 - 08 May 2019
Abstract
A great deal of the accounting research published in recent years has involved statistical tests. Our paper proposes improvements to both the quality and execution of such research. We address the following limitations in current research that appear to us to be ignored or used inappropriately: (1) unaddressed situational effects resulting from model limitations and what has been referred to as “data carpentry,” (2) limitations and alternatives to winsorizing, (3) necessary improvements to relying on a study’s calculated “p-values” instead of on the economic or behavioral importance of the results, and (4) the information loss incurred by under-valuing what can and cannot be learned from replications.
Article
Not p-Values, Said a Little Bit Differently
Econometrics 2019, 7(1), 11; https://0-doi-org.brum.beds.ac.uk/10.3390/econometrics7010011 - 13 Mar 2019
Abstract
As a contribution toward the ongoing discussion about the use and mis-use of p-values, numerical examples are presented demonstrating that a p-value can, as a practical matter, give you a really different answer than the one that you want.