
Inductive Statistical Methods

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Statistical Physics".

Deadline for manuscript submissions: closed (31 May 2015) | Viewed by 57858

Special Issue Editors

Prof. Dr. Carlos A. de B. Pereira
Institute of Mathematics and Statistics, University of São Paulo, Rua do Matão, 1010, São Paulo 05508-900, Brazil
Interests: Bayesian statistics; controversies and paradoxes in probability and statistics; Bayesian reliability; Bayesian analysis of discrete data (BADD); applied statistics
Prof. Dr. Adriano Polpo
Federal University of São Carlos, São Carlos, São Paulo, Brazil
Interests: Bayesian inference; foundations of statistics; significance tests; reliability and survival analysis; model selection; biostatistics

Special Issue Information

Dear Colleagues,

The importance of using statistical methods in the development of research is widely recognized by the scientific community. This special issue focuses on the foundations of statistics and their proper application to practical problems.

Interest in the foundations of inductive statistics has grown together with the increasing availability of new Bayesian methodological alternatives. Scientists must cope with the growing difficulty of choosing optimal methods for research problems that come ever closer to reality. Examination and discussion of foundations prevent the merely ad hoc application of Bayesian methods to scientific problems.

Specific areas of interest include (but are not limited to):

  • Formal applications of inductive statistical methods to novel applied problems.
  • Philosophical accounts of inductive statistical methods (including contributions arguing for or against them).
  • Methodological developments in inductive statistics (taking into account their practical usability).
  • Surveys of the state of the art in any of the above areas.

Prof. Dr. Carlos A. de B. Pereira
Prof. Dr. Adriano Polpo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Published Papers (12 papers)


Research

Article
Bayesian Inference on the Memory Parameter for Gamma-Modulated Regression Models
by Plinio Andrade, Laura Rifo, Soledad Torres and Francisco Torres-Avilés
Entropy 2015, 17(10), 6576-6597; https://doi.org/10.3390/e17106576 - 25 Sep 2015
Cited by 2 | Viewed by 4695
Abstract
In this work, we propose a Bayesian methodology to make inferences for the memory parameter and other characteristics under non-standard assumptions for a class of stochastic processes. This class generalizes the Gamma-modulated process, with trajectories that exhibit long memory behavior, as well as decreasing variability as time increases. Different values of the memory parameter influence the speed of this decrease, making this heteroscedastic model very flexible. Its properties are used to implement an approximate Bayesian computation and MCMC scheme to obtain posterior estimates. We test and validate our method through simulations and real data from the large earthquake that occurred in Chile in 2010.
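
The scheme described above combines approximate Bayesian computation (ABC) with MCMC. As a rough illustration of the ABC ingredient only (a minimal sketch, not the authors' scheme), the following Python fragment implements plain ABC rejection sampling; `simulate`, `summary` and `prior_sampler` are hypothetical user-supplied callables.

```python
import numpy as np

def abc_rejection(observed, simulate, summary, prior_sampler,
                  n_draws=10_000, eps=0.1):
    """Keep parameter draws whose simulated summary statistics land
    within eps of the observed summaries (plain ABC rejection)."""
    s_obs = np.asarray(summary(observed))
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()                 # e.g., memory-parameter prior
        s_sim = np.asarray(summary(simulate(theta)))
        if np.linalg.norm(s_sim - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)                   # approximate posterior sample
```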

Article
A Bayesian Decision-Theoretic Approach to Logically-Consistent Hypothesis Testing
by Gustavo Miranda Da Silva, Luis Gustavo Esteves, Victor Fossaluza, Rafael Izbicki and Sergio Wechsler
Entropy 2015, 17(10), 6534-6559; https://doi.org/10.3390/e17106534 - 24 Sep 2015
Cited by 8 | Viewed by 4900
Abstract
This work addresses an important issue regarding the performance of simultaneous test procedures: the construction of multiple tests that are at the same time optimal from a statistical perspective and that also yield logically-consistent results that are easy to communicate to practitioners of statistical methods. For instance, if hypothesis A implies hypothesis B, is it possible to create optimal testing procedures that reject A whenever they reject B? Unfortunately, several standard testing procedures fail to have such logical consistency. Although this has been deeply investigated under a frequentist perspective, the literature lacks analyses under a Bayesian paradigm. In this work, we contribute to the discussion by investigating three rational relationships from a Bayesian decision-theoretic standpoint: coherence, invertibility and union consonance. We characterize and illustrate through simple examples optimal Bayes tests that fulfill each of these requisites separately. We also explore how far one can go by putting these requirements together. We show that although fairly intuitive tests satisfy both coherence and invertibility, no Bayesian testing scheme meets the desiderata as a whole, strengthening the understanding that logical consistency cannot in general be combined with statistical optimality. Finally, we associate Bayesian hypothesis testing with Bayes point estimation procedures. We prove that the performance of logically-consistent hypothesis testing by means of a Bayes point estimator is optimal only under very restrictive conditions.
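
As a toy illustration of the coherence property discussed above (a sketch, not the authors' decision-theoretic construction): a test that rejects a hypothesis whenever its posterior probability falls below a fixed threshold is automatically coherent, because A implying B forces P(A | data) ≤ P(B | data).

```python
def reject(posterior_prob: float, c: float = 0.05) -> bool:
    """Reject a hypothesis when its posterior probability is below c."""
    return posterior_prob < c

# If A implies B, then P(A | data) <= P(B | data); hence rejecting B
# entails rejecting A, which is exactly the coherence requirement.
p_B = 0.04  # hypothetical posterior probability of the broader hypothesis B
p_A = 0.01  # A implies B, so its posterior probability cannot exceed p_B
assert (not reject(p_B)) or reject(p_A)
```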

Article
A Bayesian Predictive Discriminant Analysis with Screened Data
by Hea-Jung Kim
Entropy 2015, 17(9), 6481-6502; https://doi.org/10.3390/e17096481 - 21 Sep 2015
Viewed by 4790
Abstract
In the application of discriminant analysis, a situation sometimes arises where individual measurements are screened by a multidimensional screening scheme. For this situation, a discriminant analysis with screened populations is considered from a Bayesian viewpoint, and an optimal predictive rule for the analysis is proposed. In order to establish a flexible method to incorporate prior information about the screening mechanism, we propose a hierarchical screened scale mixture of normal (HSSMN) model, which makes provision for flexible modeling of the screened observations. A Markov chain Monte Carlo (MCMC) method, using the Gibbs sampler with the Metropolis–Hastings algorithm embedded within it, is used to perform Bayesian inference on the HSSMN models and to approximate the optimal predictive rule. A simulation study is given to demonstrate the performance of the proposed predictive discrimination procedure.
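
The sampler described above embeds Metropolis–Hastings steps inside a Gibbs scan. A generic sketch of that pattern for an arbitrary log-posterior (not the HSSMN-specific sampler) might look as follows.

```python
import numpy as np

def metropolis_within_gibbs(log_post, theta0, step=0.5, n_iter=5_000,
                            seed=0):
    """Cycle through the coordinates of theta, updating each one with a
    random-walk Metropolis-Hastings step given the others."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    lp = log_post(theta)
    draws = np.empty((n_iter, theta.size))
    for t in range(n_iter):
        for j in range(theta.size):
            prop = theta.copy()
            prop[j] += step * rng.standard_normal()   # propose one coordinate
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
                theta, lp = prop, lp_prop
        draws[t] = theta
    return draws
```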

Article
Approximate Methods for Maximum Likelihood Estimation of Multivariate Nonlinear Mixed-Effects Models
by Wan-Lun Wang
Entropy 2015, 17(8), 5353-5381; https://doi.org/10.3390/e17085353 - 29 Jul 2015
Cited by 6 | Viewed by 6125
Abstract
Multivariate nonlinear mixed-effects models (MNLMM) have received increasing use due to their flexibility for analyzing multi-outcome longitudinal data following possibly nonlinear profiles. This paper presents and compares five different iterative algorithms for maximum likelihood estimation of the MNLMM. These algorithmic schemes include the penalized nonlinear least squares coupled to the multivariate linear mixed-effects (PNLS-MLME) procedure, Laplacian approximation, the pseudo-data expectation conditional maximization (ECM) algorithm, the Monte Carlo EM algorithm and the importance sampling EM algorithm. When fitting the MNLMM, it is rather difficult to exactly evaluate the observed log-likelihood function in a closed-form expression, because it involves complicated multiple integrals. To address this issue, the corresponding approximations of the observed log-likelihood function under the five algorithms are presented. An expected information matrix of parameters is also provided to calculate the standard errors of model parameters. A comparison of computational performances is investigated through simulation and a real data example from an AIDS clinical study.
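
Of the five schemes compared above, the Monte Carlo EM algorithm is perhaps the easiest to sketch. The skeleton below, assuming user-supplied `log_complete` and `sample_latent` callables (hypothetical names), approximates the E-step expectation by averaging over draws of the latent random effects and then maximizes numerically.

```python
import numpy as np
from scipy.optimize import minimize

def mcem(y, log_complete, sample_latent, theta0, n_iter=50, m=200):
    """Monte Carlo EM skeleton: Monte Carlo E-step plus numerical M-step."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        # E-step: draw m samples of the latent effects given current theta
        b_draws = [sample_latent(y, theta) for _ in range(m)]

        def q(th):  # Monte Carlo estimate of Q(th | theta), negated
            return -np.mean([log_complete(y, b, th) for b in b_draws])

        # M-step: maximize the Monte Carlo Q-function
        theta = minimize(q, theta, method="Nelder-Mead").x
    return theta
```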

Article
Statistical Evidence Measured on a Properly Calibrated Scale across Nested and Non-nested Hypothesis Comparisons
by Veronica J. Vieland and Sang-Cheol Seok
Entropy 2015, 17(8), 5333-5352; https://doi.org/10.3390/e17085333 - 29 Jul 2015
Cited by 2 | Viewed by 4710
Abstract
Statistical modeling is often used to measure the strength of evidence for or against hypotheses about given data. We have previously proposed an information-dynamic framework in support of a properly calibrated measurement scale for statistical evidence, borrowing some mathematics from thermodynamics, and showing how an evidential analogue of the ideal gas equation of state could be used to measure evidence for a one-sided binomial hypothesis comparison (“coin is fair” vs. “coin is biased towards heads”). Here we take three important steps forward in generalizing the framework beyond this simple example, albeit still in the context of the binomial model. We: (1) extend the scope of application to other forms of hypothesis comparison; (2) show that doing so requires only the original ideal gas equation plus one simple extension, which has the form of the Van der Waals equation; (3) begin to develop the principles required to resolve a key constant, which enables us to calibrate the measurement scale across applications, and which we find to be related to the familiar statistical concept of degrees of freedom. This paper thus moves our information-dynamic theory substantially closer to the goal of producing a practical, properly calibrated measure of statistical evidence for use in general applications.
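
For orientation, the one-sided binomial comparison mentioned above (“coin is fair” vs. “coin is biased towards heads”) can be given a bare-bones likelihood-ratio form; note that this toy computation is not the authors' calibrated evidence measure.

```python
from scipy.stats import binom

# Toy one-sided comparison: likelihood at the best p >= 1/2 versus
# likelihood at p = 1/2 exactly (hypothetical coin-flip data).
n, k = 100, 62
p_hat = max(k / n, 0.5)   # MLE restricted to "biased towards heads"
lr = binom.pmf(k, n, p_hat) / binom.pmf(k, n, 0.5)
print(f"one-sided likelihood ratio: {lr:.2f}")
```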

Article
Escalation with Overdose Control is More Efficient and Safer than Accelerated Titration for Dose Finding
by André Rogatko, Galen Cook-Wiens, Mourad Tighiouart and Steven Piantadosi
Entropy 2015, 17(8), 5288-5303; https://doi.org/10.3390/e17085288 - 27 Jul 2015
Cited by 6 | Viewed by 5762
Abstract
The standard 3 + 3 or “modified Fibonacci” up-and-down (MF-UD) method of dose escalation is by far the most used design in dose-finding cancer trials. However, MF-UD has always shown inferior performance when compared with its competitors regarding the number of patients treated at optimal doses. A consequence of using less effective designs is that more patients are treated with doses outside the therapeutic window. In June 2012, the U.S. Food and Drug Administration (FDA) rejected the proposal to use Escalation with Overdose Control (EWOC), an established dose-finding method which has been extensively used in FDA-approved first-in-human trials, and imposed a variation of the MF-UD, known as the accelerated titration (AT) design. This event motivated us to perform an extensive simulation study comparing the operating characteristics of AT and EWOC. We show that the AT design has poor operating characteristics relative to three versions of EWOC under several practical scenarios. From the clinical investigator’s perspective, lower bias and mean square error make EWOC designs preferable to AT designs without compromising safety. From a patient’s perspective, a uniformly higher proportion of patients receiving doses within an optimal range of the true MTD makes EWOC designs preferable to AT designs.
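
The defining rule of EWOC can be illustrated in a few lines: each new patient is assigned the dose at the α-quantile of the current posterior for the maximum tolerated dose (MTD), so the posterior probability of overdosing is capped at α. The posterior sample below is hypothetical, purely for illustration.

```python
import numpy as np

# Hypothetical posterior sample for the MTD (in mg); in a real trial
# this would come from the Bayesian model updated after each cohort.
rng = np.random.default_rng(1)
mtd_posterior = rng.gamma(shape=9.0, scale=10.0, size=10_000)

alpha = 0.25                                   # feasibility bound
next_dose = np.quantile(mtd_posterior, alpha)  # overdose-control rule
print(f"next dose: {next_dose:.1f} mg (posterior P(dose > MTD) <= {alpha})")
```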

Article
Averaged Extended Tree Augmented Naive Classifier
by Aaron Meehan and Cassio P. De Campos
Entropy 2015, 17(7), 5085-5100; https://doi.org/10.3390/e17075085 - 21 Jul 2015
Cited by 6 | Viewed by 4628
Abstract
This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN), which is based on combining the advantageous characteristics of Extended Tree Augmented Naive Bayes (ETAN) and Averaged One-Dependence Estimator (AODE) classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.
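
The comparison above is in terms of zero-one classification accuracy and log loss. For reference, a generic sketch of these two criteria (not the paper's evaluation harness):

```python
import numpy as np

def zero_one_accuracy(y_true, y_pred):
    """Fraction of correctly classified instances."""
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))

def log_loss(y_true, probs):
    """Mean negative log-probability assigned to the true class;
    probs has one row per instance and one column per class."""
    probs = np.asarray(probs)
    return -np.mean(np.log(probs[np.arange(len(y_true)), y_true]))
```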

Article
General and Local: Averaged k-Dependence Bayesian Classifiers
by Limin Wang, Haoyu Zhao, Minghui Sun and Yue Ning
Entropy 2015, 17(6), 4134-4154; https://doi.org/10.3390/e17064134 - 16 Jun 2015
Cited by 10 | Viewed by 4929
Abstract
The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can be constructed at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify the changes of interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implied by each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization in order to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the outputs of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.
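
The final averaging step described above combines the class-posterior estimates of the global KDB model and the instance-specific local KDB model. A minimal sketch of that combination, assuming hypothetical classifier objects with a scikit-learn-style `predict_proba` method:

```python
import numpy as np

def akdb_predict(global_kdb, local_kdb, X):
    """Average the class-posterior estimates of the global and local
    KDB models, then pick the highest-probability class per instance."""
    probs = 0.5 * (global_kdb.predict_proba(X) + local_kdb.predict_proba(X))
    return probs.argmax(axis=1)
```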

Article
A Penalized Likelihood Approach to Parameter Estimation with Integral Reliability Constraints
by Barry Smith, Steven Wang, Augustine Wong and Xiaofeng Zhou
Entropy 2015, 17(6), 4040-4063; https://doi.org/10.3390/e17064040 - 12 Jun 2015
Cited by 7 | Viewed by 4799
Abstract
Stress-strength reliability problems arise frequently in applied statistics and related fields. Often they involve two independent and possibly small samples of measurements on strength and breakdown pressures (stress). The goal of the researcher is to use the measurements to obtain inference on reliability, which is the probability that stress will exceed strength. This paper addresses the case where reliability is expressed in terms of an integral that has no closed-form solution and where the number of observed values on stress and strength is small. We find that the Lagrange approach to estimating the constrained likelihood, necessary for inference, often performs poorly. We introduce a penalized likelihood method, which appears to work well in all cases considered. We use third-order likelihood methods to partially offset the issue of small samples. The proposed method is applied to draw inferences on reliability in stress-strength problems with independent exponentiated exponential distributions. Simulation studies are carried out to assess the accuracy of the proposed method and to compare it with some standard asymptotic methods.
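
The penalty idea can be sketched generically: replace the Lagrange multiplier with a quadratic penalty that pins the reliability functional at its hypothesized value. The `loglik` and `reliability` callables below are hypothetical placeholders, and the third-order refinements used in the paper are omitted.

```python
import numpy as np
from scipy.optimize import minimize

def penalized_fit(loglik, reliability, theta0, r0, lam=1e4):
    """Maximize the log-likelihood subject (softly) to the constraint
    R(theta) = r0, enforced through a quadratic penalty term."""
    def objective(theta):
        return -loglik(theta) + lam * (reliability(theta) - r0) ** 2
    return minimize(objective, np.asarray(theta0, dtype=float),
                    method="Nelder-Mead").x
```
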
Article
Density Regression Based on Proportional Hazards Family
by Wei Dang and Keming Yu
Entropy 2015, 17(6), 3679-3691; https://doi.org/10.3390/e17063679 - 4 Jun 2015
Viewed by 3931
Abstract
This paper develops a class of density regression models based on the proportional hazards family, namely the Gamma-transformation proportional hazard (Gt-PH) model. Exact inference for the regression parameters and hazard ratio is derived. These estimators enjoy some good properties, such as unbiasedness, which may not be shared by other inference methods such as maximum likelihood estimation (MLE). Generalised confidence intervals and hypothesis tests for the regression parameters are also provided. The method itself is easy to implement in practice. The regression method is also extended to Lasso-based variable selection.
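
For background, the proportional hazards family underlying the model above relates a covariate effect r = exp(x'beta) to the baseline density f0 and survival S0 via f(t | x) = r * f0(t) * S0(t)**(r - 1). A toy evaluation with an exponential baseline (an illustrative choice, not the paper's Gamma transformation):

```python
import numpy as np

def ph_density(t, x, beta, lam=1.0):
    """Density of a proportional hazards model with exponential baseline:
    the hazard ratio r = exp(x @ beta) scales the baseline hazard."""
    r = np.exp(np.dot(x, beta))    # hazard ratio for covariates x
    f0 = lam * np.exp(-lam * t)    # baseline exponential density
    S0 = np.exp(-lam * t)          # baseline survival function
    return r * f0 * S0 ** (r - 1.0)
```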

Article
A Robust Bayesian Approach to an Optimal Replacement Policy for Gas Pipelines
by José Pablo Arias-Nicolás, Jacinto Martín, Fabrizio Ruggeri and Alfonso Suárez-Llorens
Entropy 2015, 17(6), 3656-3678; https://doi.org/10.3390/e17063656 - 3 Jun 2015
Cited by 5 | Viewed by 4059
Abstract
In the paper, we address Bayesian sensitivity issues when integrating experts’ judgments with available historical data in a case study about strategies for the preventive maintenance of low-pressure cast iron pipelines in an urban gas distribution network. We are interested in replacement priorities, as determined by the failure rates of pipelines deployed under different conditions. We relax the assumptions, made in previous papers, about the prior distributions on the failure rates and study changes in replacement priorities under different choices of generalized moment-constrained classes of priors. We focus on the set of non-dominated actions, and among them, we propose the least sensitive action as the optimal choice to rank different classes of pipelines, providing a sound approach to the sensitivity problem. Moreover, we are also interested in determining which classes have a failure rate exceeding a given acceptable value, considered as the threshold indicating no need for replacement. Graphical tools are introduced to help decision-makers determine whether pipelines are to be replaced and the corresponding priorities.
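
The non-dominated set mentioned above can be computed directly once the expected loss of each action has been evaluated under every prior in the class. A small sketch (with the loss matrix as hypothetical input):

```python
import numpy as np

def non_dominated(loss):
    """Given a matrix of expected losses (rows = actions, columns =
    priors in the class), return the indices of non-dominated actions:
    those for which no other action does at least as well under every
    prior and strictly better under some prior."""
    loss = np.asarray(loss)
    keep = []
    for a in range(loss.shape[0]):
        dominated = any(
            np.all(loss[b] <= loss[a]) and np.any(loss[b] < loss[a])
            for b in range(loss.shape[0]) if b != a
        )
        if not dominated:
            keep.append(a)
    return keep
```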

Article
Identifying the Most Relevant Lag with Runs
by Úrsula Faura, Matilde Lafuente, Mariano Matilla-García and Manuel Ruiz
Entropy 2015, 17(5), 2706-2722; https://doi.org/10.3390/e17052706 - 28 Apr 2015
Cited by 2 | Viewed by 3526
Abstract
In this paper, we propose a nonparametric statistical tool to identify the most relevant lag in the model description of a time series. It is also shown that it can be used for model identification. The statistic is based on the number of runs obtained when the time series is symbolized according to its empirical quantiles. With a Monte Carlo simulation, we show the size and power performance of our new test statistic under linear and nonlinear data-generating processes. From the theoretical point of view, this is the first time that symbolic analysis and runs have been proposed for identifying characteristic lags and for helping to identify univariate time series models. From a more applied point of view, the results show the power and competitiveness of the proposed tool with respect to other techniques without presuming or specifying a model.
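
A toy version of the ingredients named above (not the authors' exact statistic): symbolize the series by its empirical median, a two-quantile symbolization, and count runs; systematic departures of the run count from its expectation under independence would flag dependence at the lag under study.

```python
import numpy as np

def n_runs(series):
    """Symbolize a series by its empirical median and count the number
    of runs (maximal blocks of identical symbols)."""
    s = (np.asarray(series) > np.median(series)).astype(int)
    return 1 + int(np.count_nonzero(np.diff(s)))

rng = np.random.default_rng(2)
x = np.sin(np.arange(200) / 3.0) + rng.normal(0.0, 0.3, 200)
print("runs in symbolized series:", n_runs(x))
```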
