
New Developments in Statistical Information Theory Based on Entropy and Divergence Measures

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (30 September 2018) | Viewed by 57472

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Prof. Dr. Leandro Pardo
Guest Editor
Department of Statistics and O.R., Complutense University of Madrid, 28040 Madrid, Spain
Interests: minimum divergence estimators: robustness and efficiency; robust test procedures based on minimum divergence estimators; robust test procedures in composite likelihood, empirical likelihood, change point, and time series

Special Issue Information

Dear Colleagues,

The idea of using functionals of Information Theory, such as entropies or divergences, in statistical inference is not new. In fact, the so-called Statistical Information Theory has been the subject of much statistical research over the last fifty years. Minimum divergence estimators, or minimum distance estimators, have been used successfully in models for continuous and discrete data due to their robustness properties. Divergence statistics, i.e., those obtained by replacing one or both arguments in a measure of divergence by suitable estimators, have become a very good alternative to the classical likelihood ratio test in both continuous and discrete models, as well as to the classical Pearson-type statistic in discrete models.

Divergence statistics based on maximum likelihood estimators, as well as Wald's statistics, likelihood ratio statistics and Rao's score statistics, enjoy several optimal asymptotic properties, but they are highly non-robust in the case of model misspecification or in the presence of outlying observations. It is well known that a small deviation from the underlying model assumptions can have a drastic effect on the performance of these classical tests. Thus, the practical importance of robust test procedures is beyond doubt, and such procedures are helpful for solving real-life problems in which the observed sample contains outliers. For this reason, in recent years, robust versions of the classical Wald test statistic, for testing simple and composite null hypotheses in general parametric models, have been introduced and studied for different problems in the statistical literature. These test statistics are based on minimum divergence estimators instead of the maximum likelihood estimator and have been considered in many different statistical problems: censoring, equality of means in normal and lognormal models, logistic regression models in particular and GLMs in general, etc.
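To fix ideas, a standard construction runs as follows (a generic sketch, not the formulation of any single contribution to this issue; here p(θ) = (p_1(θ), …, p_M(θ)) is a discrete parametric model, p̂ the vector of empirical cell proportions, and φ a convex function with φ(1) = 0):

\[
d_\phi\big(\hat{p}, p(\theta)\big) \;=\; \sum_{i=1}^{M} p_i(\theta)\,\phi\!\left(\frac{\hat{p}_i}{p_i(\theta)}\right),
\qquad
\hat{\theta}_\phi \;=\; \arg\min_{\theta}\, d_\phi\big(\hat{p}, p(\theta)\big).
\]

A Wald-type statistic for a composite null hypothesis H_0: m(θ) = 0 (with r restrictions) is then built by plugging the minimum divergence estimator into the classical quadratic form,

\[
W_n \;=\; n\, m(\hat{\theta}_\phi)^{\top}
\Big( M(\hat{\theta}_\phi)^{\top}\, \Sigma(\hat{\theta}_\phi)\, M(\hat{\theta}_\phi) \Big)^{-1}
m(\hat{\theta}_\phi),
\qquad
M(\theta) = \frac{\partial m(\theta)^{\top}}{\partial \theta},
\]

where Σ(θ) denotes the asymptotic covariance matrix of the estimator; under H_0, W_n is asymptotically chi-squared with r degrees of freedom, and the robustness of the test is inherited from that of the estimator.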

The scope of this Special Issue is to present new and original research based on minimum divergence estimators and divergence statistics, from both theoretical and applied points of view, in different statistical problems, with special emphasis on efficiency and robustness. Manuscripts summarizing the most recent state of the art of these topics are also welcome.

Prof. Dr. Leandro Pardo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Divergence measures
  • Entropy measures
  • Minimum divergence estimator (MDE)
  • Testing based on divergence measures
  • Wald-type tests based on MDE
  • Robustness
  • Efficiency
  • Parametric models
  • Complex random sampling
  • Composite likelihood
  • Empirical likelihood
  • Change point
  • Censoring
  • Generalized linear models (GLMs)


Published Papers (16 papers)


Editorial


7 pages, 205 KiB  
Editorial
New Developments in Statistical Information Theory Based on Entropy and Divergence Measures
by Leandro Pardo
Entropy 2019, 21(4), 391; https://0-doi-org.brum.beds.ac.uk/10.3390/e21040391 - 11 Apr 2019
Cited by 10 | Viewed by 2602
Abstract
In recent decades, interest in statistical methods based on information measures, and particularly in pseudodistances or divergences, has grown substantially [...]

Research


40 pages, 1294 KiB  
Article
Robust Inference after Random Projections via Hellinger Distance for Location-Scale Family
by Lei Li, Anand N. Vidyashankar, Guoqing Diao and Ejaz Ahmed
Entropy 2019, 21(4), 348; https://0-doi-org.brum.beds.ac.uk/10.3390/e21040348 - 29 Mar 2019
Cited by 2 | Viewed by 3384
Abstract
Big data and streaming data are encountered in a variety of contemporary applications in business and industry. In such cases, it is common to use random projections to reduce the dimension of the data, yielding compressed data. These data, however, possess various anomalies, such as heterogeneity, outliers, and round-off errors, which are hard to detect due to volume and processing challenges. This paper describes a new robust and efficient methodology, using the Hellinger distance, to analyze the compressed data. Using large-sample methods and numerical experiments, it is demonstrated that routine use of the robust estimation procedure is feasible. The role of double limits in understanding efficiency and robustness is brought out, which is of independent interest.
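To make the Hellinger-distance step concrete, here is a minimal Python sketch (ours, under simplifying assumptions, not the authors' code): a normal location-scale family is fitted to a contaminated sample by minimizing the Hellinger distance to a kernel density estimate; in the paper, the same machinery is applied to compressed data produced by random projections.

import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
# Contaminated sample: about 1% gross outliers far from the bulk.
x = np.concatenate([rng.normal(2.0, 1.0, 5000), rng.normal(25.0, 1.0, 50)])

kde = stats.gaussian_kde(x)                      # nonparametric density estimate
grid = np.linspace(x.min() - 3.0, x.max() + 3.0, 2000)
f_hat = kde(grid)

def hellinger_sq(params):
    # Squared Hellinger distance: 1 - integral of sqrt(f_hat * f_theta),
    # evaluated by trapezoidal quadrature on the grid.
    mu, log_sigma = params
    f_theta = stats.norm.pdf(grid, mu, np.exp(log_sigma))
    return 1.0 - np.trapz(np.sqrt(f_hat * f_theta), grid)

res = optimize.minimize(hellinger_sq, x0=[np.median(x), 0.0], method="Nelder-Mead")
print("MHD fit (mu, sigma):", res.x[0], np.exp(res.x[1]))  # close to (2, 1)

Unlike the maximum likelihood fit, the minimizer is essentially unaffected by the 50 outliers, while remaining efficient when the model holds.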

23 pages, 378 KiB  
Article
Composite Tests under Corrupted Data
by Michel Broniatowski, Jana Jurečková, Ashok Kumar Moses and Emilie Miranda
Entropy 2019, 21(1), 63; https://0-doi-org.brum.beds.ac.uk/10.3390/e21010063 - 14 Jan 2019
Cited by 4 | Viewed by 3766
Abstract
This paper focuses on test procedures under corrupted data. We assume that the observations Z_i are mismeasured, due to the presence of measurement errors. Thus, instead of Z_i for i = 1, …, n, we observe X_i = Z_i + δV_i, with an unknown parameter δ and an unobservable random variable V_i. It is assumed that the random variables Z_i are i.i.d., as are the X_i and the V_i. The test procedure aims at deciding between two simple hypotheses pertaining to the density of the variable Z_i, namely f_0 and g_0. In this setting, the density of the V_i is supposed to be known. The procedure which we propose aggregates likelihood ratios for a collection of values of δ. A new definition of least-favorable hypotheses for the aggregate family of tests is presented, and a relation with the Kullback–Leibler divergence between the families {f_δ}_δ and {g_δ}_δ is presented. Finite-sample lower bounds for the power of these tests are presented, both through analytical inequalities and through simulation under the least-favorable hypotheses. Since no optimality holds for the aggregation of likelihood ratio tests, a similar procedure is proposed, replacing the individual likelihood ratios by divergence-based test statistics. It is shown and discussed that the resulting aggregated test may perform better than the aggregated likelihood ratio procedure.
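The aggregation idea admits a compact illustration when everything is Gaussian, since the corrupted densities are then available in closed form. The following toy sketch (our assumptions, not the authors' procedure) takes f_0 = N(0, 1), g_0 = N(1, 1) and V ~ N(0, 1), so that f_δ = N(0, 1 + δ²) and g_δ = N(1, 1 + δ²), and aggregates the log-likelihood ratios over a grid of candidate δ values by maximization:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, delta_true = 200, 0.7
# Observed corrupted data, here generated under the alternative g_delta.
x = rng.normal(1.0, np.sqrt(1.0 + delta_true**2), n)
deltas = np.linspace(0.0, 2.0, 21)          # candidate values of delta

def agg_stat(sample):
    # Log-likelihood ratio log(g_delta / f_delta) summed over the sample,
    # then aggregated (here: maximized) over the delta grid.
    vals = []
    for d in deltas:
        s = np.sqrt(1.0 + d**2)
        vals.append(np.sum(stats.norm.logpdf(sample, 1.0, s)
                           - stats.norm.logpdf(sample, 0.0, s)))
    return max(vals)

# Calibrate the rejection threshold by simulation under H0 at a reference
# delta; the paper's least-favorable-hypothesis analysis treats the unknown
# delta rigorously instead of fixing it as we do here.
T_null = [agg_stat(rng.normal(0.0, np.sqrt(1.0 + delta_true**2), n))
          for _ in range(2000)]
print("reject H0 at the 5% level:", agg_stat(x) > np.quantile(T_null, 0.95))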

9 pages, 284 KiB  
Article
Likelihood Ratio Testing under Measurement Errors
by Michel Broniatowski, Jana Jurečková and Jan Kalina
Entropy 2018, 20(12), 966; https://0-doi-org.brum.beds.ac.uk/10.3390/e20120966 - 13 Dec 2018
Cited by 6 | Viewed by 3287
Abstract
We consider the likelihood ratio test of a simple null hypothesis (with density f_0) against a simple alternative hypothesis (with density g_0) in the situation that observations X_i are mismeasured due to the presence of measurement errors. Thus, instead of X_i for i = 1, …, n, we observe Z_i = X_i + δV_i with unobservable parameter δ and unobservable random variable V_i. When we ignore the presence of measurement errors and perform the original test, the probability of type I error becomes different from the nominal value, but the test is still the most powerful among all tests at the modified level. Further, we derive the minimax test for some families of misspecified hypotheses and alternatives. The test exploits the concept of pseudo-capacities elaborated by Huber and Strassen (1973) and Buja (1986). A numerical experiment illustrates the principles and performance of the novel test.
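The level distortion mentioned in the abstract is easy to exhibit numerically. In the sketch below (our toy assumptions: f_0 = N(0, 1), g_0 = N(1, 1), V ~ N(0, 1)), the error-free likelihood ratio test rejects when the sample mean exceeds z_{1-α}/√n; feeding it mismeasured data Z_i = X_i + δV_i inflates the type I error above the nominal 5%:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, alpha, delta, reps = 50, 0.05, 0.5, 100_000
c = stats.norm.ppf(1 - alpha) / np.sqrt(n)   # LRT cutoff for error-free data

# Under H0 the sample mean of Z = X + delta*V is N(0, (1 + delta^2)/n).
z_bar = rng.normal(0.0, np.sqrt((1.0 + delta**2) / n), size=reps)
print("nominal level:", alpha)
print("actual level :", np.mean(z_bar > c))          # roughly 0.07, not 0.05
print("closed form  :", 1 - stats.norm.cdf(stats.norm.ppf(1 - alpha)
                                           / np.sqrt(1.0 + delta**2)))

As the abstract notes, the distorted test is still a most powerful test, only at this modified level rather than the nominal one.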
20 pages, 1084 KiB  
Article
Asymptotic Properties for Methods Combining the Minimum Hellinger Distance Estimate and the Bayesian Nonparametric Density Estimate
by Yuefeng Wu and Giles Hooker
Entropy 2018, 20(12), 955; https://0-doi-org.brum.beds.ac.uk/10.3390/e20120955 - 11 Dec 2018
Cited by 1 | Viewed by 2820
Abstract
In frequentist inference, minimizing the Hellinger distance between a kernel density estimate and a parametric family produces estimators that are both robust to outliers and statistically efficient when the parametric family contains the data-generating distribution. This paper seeks to extend these results to the use of nonparametric Bayesian density estimators within disparity methods. We propose two estimators: one replaces the kernel density estimator with the expected posterior density using a random histogram prior; the other transforms the posterior over densities into a posterior over parameters through minimizing the Hellinger distance for each density. We show that it is possible to adapt the mathematical machinery of efficient influence functions from semiparametric models to demonstrate that both our estimators are efficient in the sense of achieving the Cramér–Rao lower bound. We further demonstrate a Bernstein–von Mises result for our second estimator, indicating that its posterior is asymptotically Gaussian. In addition, the robustness properties of classical minimum Hellinger distance estimators continue to hold.
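A minimal sketch of the first estimator's flavor (ours, with a fixed-bin histogram standing in for the paper's random histogram prior): the posterior-mean density under a Dirichlet prior on bin probabilities replaces the kernel density estimate, and the parametric family is then fitted by minimum Hellinger distance.

import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(0.0, 1.0, 900), rng.normal(8.0, 0.5, 100)])

edges = np.linspace(y.min() - 1.0, y.max() + 1.0, 41)   # 40 fixed bins
counts, _ = np.histogram(y, edges)
widths = np.diff(edges)
mids = (edges[:-1] + edges[1:]) / 2.0
# Dirichlet(1,...,1) prior + multinomial counts: posterior-mean probabilities.
post_p = (counts + 1.0) / (counts.sum() + counts.size)
f_bayes = post_p / widths               # piecewise-constant posterior density

def hellinger_sq(params):
    # 1 - sum over bins of sqrt(f_bayes * f_theta) * bin width.
    mu, log_sigma = params
    f_theta = stats.norm.pdf(mids, mu, np.exp(log_sigma))
    return 1.0 - np.sum(np.sqrt(f_bayes * f_theta) * widths)

res = optimize.minimize(hellinger_sq, x0=[np.median(y), 0.0], method="Nelder-Mead")
print("robust fit (mu, sigma):", res.x[0], np.exp(res.x[1]))

The fitted pair stays near the main mode (0, 1) despite the contamination, which is the robustness property the paper makes precise.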

14 pages, 5219 KiB  
Article
Convex Optimization via Symmetrical Hölder Divergence for a WLAN Indoor Positioning System
by Osamah Abdullah
Entropy 2018, 20(9), 639; https://0-doi-org.brum.beds.ac.uk/10.3390/e20090639 - 25 Aug 2018
Cited by 7 | Viewed by 2992
Abstract
Modern indoor positioning services are important technologies that play vital roles in modern life, supporting applications such as dispatching emergency healthcare providers and security. Several large companies, such as Microsoft, Apple, Nokia, and Google, have researched location-based services. Wireless indoor localization is key for pervasive computing applications and network optimization. Different approaches have been developed for this technique using WiFi signals. WiFi fingerprinting-based indoor localization has been widely used due to its simplicity, and algorithms that fingerprint WiFi signals at separate locations can achieve accuracy within a few meters. However, a major drawback of WiFi fingerprinting is the variance in received signal strength (RSS), which fluctuates over time and with the changing environment. As the signal changes, so does the fingerprint database, which can change the distribution of the RSS (a multimodal distribution). Thus, in this paper, we propose using the symmetrized Hölder divergence, a statistical entropy-based model that encapsulates both the skew Bhattacharyya divergence and the Cauchy–Schwarz divergence, closed-form formulas that can be used to measure the statistical dissimilarities between members of the same exponential family for signals with multivariate distributions. The Hölder divergence is asymmetric, so we use both left-sided and right-sided data so that the centroid can be symmetrized to obtain the minimizer of the proposed algorithm. The experimental results showed that the symmetrized Hölder divergence consistently outperformed the traditional k-nearest neighbor and probabilistic neural network approaches. In addition, with the proposed algorithm, the position error accuracy was about 1 m in buildings.
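The Cauchy–Schwarz divergence that the Hölder family encapsulates has a closed form for Gaussians, which is what makes fingerprint comparison cheap. Below is a small sketch (ours; the RSS numbers are made up) computing it for two multivariate normal fingerprint models, using the identity that the integral of a product of two Gaussian densities is itself a Gaussian density evaluated at the mean difference:

import numpy as np
from scipy.stats import multivariate_normal as mvn

def cs_divergence(mu1, S1, mu2, S2):
    # CS(p, q) = -log( int p*q / sqrt(int p^2 * int q^2) ); for Gaussians,
    # int p*q dx = N(mu1; mu2, S1+S2) and int p^2 dx = (4*pi)^(-d/2)/sqrt|S1|.
    d = len(mu1)
    cross = mvn.pdf(mu1, mean=mu2, cov=S1 + S2)
    self1 = (4.0 * np.pi) ** (-d / 2.0) / np.sqrt(np.linalg.det(S1))
    self2 = (4.0 * np.pi) ** (-d / 2.0) / np.sqrt(np.linalg.det(S2))
    return -np.log(cross / np.sqrt(self1 * self2))

# Two access-point RSS fingerprints modeled as 2-D Gaussians (dBm scale).
print(cs_divergence(np.array([-60.0, -70.0]), 4.0 * np.eye(2),
                    np.array([-58.0, -75.0]), 6.0 * np.eye(2)))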

24 pages, 3321 KiB  
Article
Robust Relative Error Estimation
by Kei Hirose and Hiroki Masuda
Entropy 2018, 20(9), 632; https://0-doi-org.brum.beds.ac.uk/10.3390/e20090632 - 24 Aug 2018
Cited by 7 | Viewed by 4232
Abstract
Relative error estimation has recently been used in regression analysis. A crucial issue with existing relative error estimation procedures is that they are sensitive to outliers. To address this issue, we employ the γ-likelihood function, which is constructed through γ-cross entropy while keeping the original statistical model in use. The estimating equation has a redescending property, a desirable property in robust statistics, for a broad class of noise distributions. To find a minimizer of the negative γ-likelihood function, a majorize-minimization (MM) algorithm is constructed. The proposed algorithm is guaranteed to decrease the negative γ-likelihood function at each iteration. We also derive asymptotic normality of the corresponding estimator, together with a simple consistent estimator of the asymptotic covariance matrix, so that approximate confidence sets can readily be constructed. A Monte Carlo simulation is conducted to investigate the effectiveness of the proposed procedure, and a real data analysis illustrates its usefulness.
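To convey the robustness mechanism, here is a stripped-down sketch (ours; the paper works with relative errors in regression, whereas this toy fits a plain normal model) that maximizes a γ-likelihood of Fujisawa–Eguchi type, using the closed-form normalizing integral for the normal density:

import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(1.0, 0.5, 950), rng.normal(10.0, 0.5, 50)])
gamma = 0.5

def neg_gamma_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    # (1/gamma) * log of the empirical mean of f^gamma ...
    term1 = np.log(np.mean(stats.norm.pdf(x, mu, sigma) ** gamma)) / gamma
    # ... minus (1/(1+gamma)) * log of int f^(1+gamma) dx (closed form).
    log_int = (-0.5 * gamma * np.log(2.0 * np.pi * sigma**2)
               - 0.5 * np.log(1.0 + gamma))
    return -(term1 - log_int / (1.0 + gamma))

res = optimize.minimize(neg_gamma_lik, x0=[np.median(x), 0.0],
                        method="Nelder-Mead")
print("gamma-likelihood (mu, sigma):", res.x[0], np.exp(res.x[1]))
print("plain MLE        (mu, sigma):", x.mean(), x.std())

The γ-fit stays near (1, 0.5) while the maximum likelihood fit is dragged toward the outliers; the paper's MM algorithm is a scalable way of carrying out this minimization in the regression setting.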

20 pages, 313 KiB  
Article
Non-Quadratic Distances in Model Assessment
by Marianthi Markatou and Yang Chen
Entropy 2018, 20(6), 464; https://0-doi-org.brum.beds.ac.uk/10.3390/e20060464 - 14 Jun 2018
Cited by 9 | Viewed by 3272
Abstract
One natural way to measure model adequacy is by using statistical distances as loss functions. A related fundamental question is how to construct loss functions that are scientifically and statistically meaningful. In this paper, we investigate non-quadratic distances and their role in assessing the adequacy of a model and/or the ability to perform model selection. We first present the definition of a statistical distance and its associated properties. Three popular distances, total variation, the mixture index of fit and the Kullback–Leibler distance, are studied in detail, with the aim of understanding their properties and potential interpretations that can offer insight into their performance as measures of model misspecification. A small simulation study exemplifies the performance of these measures, and their application to different scientific fields is briefly discussed.
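For reference, two of the three distances have one-line implementations for discrete distributions (a sketch; the mixture index of fit requires an optimization over mixture decompositions and is omitted here):

import numpy as np

def total_variation(p, q):
    # Sup-distance over events = half the L1 distance between the pmfs.
    return 0.5 * np.sum(np.abs(p - q))

def kullback_leibler(p, q):
    # KL(p || q); terms with p_i = 0 contribute zero by convention.
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(total_variation(p, q), kullback_leibler(p, q))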
20 pages, 368 KiB  
Article
Robust Estimation for the Single Index Model Using Pseudodistances
by Aida Toma and Cristinca Fulga
Entropy 2018, 20(5), 374; https://0-doi-org.brum.beds.ac.uk/10.3390/e20050374 - 17 May 2018
Cited by 4 | Viewed by 3933
Abstract
For portfolios with a large number of assets, the single index model allows the large number of covariances between individual asset returns to be expressed through a significantly smaller number of parameters. This avoids the constraint of needing very large samples to estimate the mean and the covariance matrix of the asset returns, which would be unrealistic in practice given the dynamics of market conditions. The traditional way to estimate the regression parameters in the single index model is the maximum likelihood method. Although maximum likelihood estimators have desirable theoretical properties when the model is exactly satisfied, they may give completely erroneous results when outliers are present in the data set. In this paper, we define minimum pseudodistance estimators for the parameters of the single index model and use them to construct new robust optimal portfolios. We prove theoretical properties of the estimators, such as consistency, asymptotic normality, equivariance, and robustness, and illustrate the benefits of the new portfolio optimization method for real financial data.

32 pages, 449 KiB  
Article
A Generalized Relative (α, β)-Entropy: Geometric Properties and Applications to Robust Statistical Inference
by Abhik Ghosh and Ayanendranath Basu
Entropy 2018, 20(5), 347; https://0-doi-org.brum.beds.ac.uk/10.3390/e20050347 - 06 May 2018
Cited by 5 | Viewed by 3637
Abstract
Entropy and relative entropy measures play a crucial role in mathematical information theory. The relative entropies are also widely used in statistics under the name of divergence measures, which link these two fields of science through the minimum divergence principle. Divergence measures are popular among statisticians as many of the corresponding minimum divergence methods lead to robust inference in the presence of outliers in the observed data; examples include the ϕ-divergence, the density power divergence, the logarithmic density power divergence and the recently developed family of logarithmic super divergence (LSD). In this paper, we present an alternative information-theoretic formulation of the LSD measures as a two-parameter generalization of the relative α-entropy, which we refer to as the general (α, β)-entropy. We explore its relation with various other entropies and divergences, which also generates a two-parameter extension of the Rényi entropy measure as a by-product. This paper is primarily focused on the geometric properties of the relative (α, β)-entropy or the LSD measures; we prove their continuity and convexity in both arguments, along with an extended Pythagorean relation under a power-transformation of the domain space. We also derive a set of sufficient conditions under which the forward and reverse projections of the relative (α, β)-entropy exist and are unique. Finally, we briefly discuss potential applications of the relative (α, β)-entropy or the LSD measures in statistical inference, in particular for robust parameter estimation and hypothesis testing. Our results on the reverse projection of the relative (α, β)-entropy establish, for the first time, the existence and uniqueness of the minimum LSD estimators. Numerical illustrations are also provided for the problem of estimating the binomial parameter.
15 pages, 329 KiB  
Article
Minimum Penalized ϕ-Divergence Estimation under Model Misspecification
by M. Virtudes Alba-Fernández, M. Dolores Jiménez-Gamero and F. Javier Ariza-López
Entropy 2018, 20(5), 329; https://0-doi-org.brum.beds.ac.uk/10.3390/e20050329 - 30 Apr 2018
Cited by 4 | Viewed by 2734
Abstract
This paper focuses on the consequences of assuming a wrong model for multinomial data when using minimum penalized ϕ-divergence estimators, also known as minimum penalized disparity estimators, to estimate the model parameters. These estimators are shown to converge to a well-defined limit. An application of the results obtained shows that a parametric bootstrap consistently estimates the null distribution of a certain class of test statistics for model misspecification detection. An illustrative application to the accuracy assessment of the thematic quality in a global land cover map is included.
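The unpenalized minimization step is easy to sketch. Below (our toy example, with the Cressie–Read power-divergence member λ = 1, i.e., Pearson's chi-square divergence) a Binomial(4, θ) model is fitted to multinomial counts; the paper's penalization modifies this objective to cope with, e.g., empty cells, and studies what the minimizer converges to when the model is wrong:

import numpy as np
from scipy import stats, optimize

counts = np.array([18, 35, 30, 12, 5])      # observed cells k = 0, ..., 4
p_hat = counts / counts.sum()
lam = 1.0                                   # lambda = 1: Pearson divergence

def power_divergence(theta):
    p_theta = stats.binom.pmf(np.arange(5), 4, theta)
    return (2.0 / (lam * (lam + 1.0))
            * np.sum(p_hat * ((p_hat / p_theta) ** lam - 1.0)))

res = optimize.minimize_scalar(power_divergence,
                               bounds=(1e-6, 1.0 - 1e-6), method="bounded")
print("minimum phi-divergence estimate of theta:", res.x)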
28 pages, 959 KiB  
Article
Robustness Property of Robust-BD Wald-Type Test for Varying-Dimensional General Linear Models
by Xiao Guo and Chunming Zhang
Entropy 2018, 20(3), 168; https://0-doi-org.brum.beds.ac.uk/10.3390/e20030168 - 05 Mar 2018
Cited by 2 | Viewed by 3434
Abstract
An important issue for robust inference is to examine the stability of the asymptotic level and power of a test statistic in the presence of contaminated data. Most existing results are derived in finite-dimensional settings with particular choices of loss functions. This paper re-examines this issue by allowing for a diverging number of parameters combined with a broader array of robust error measures, called “robust-BD”, for the class of “general linear models”. Under regularity conditions, we derive the influence function of the robust-BD parameter estimator and demonstrate that the robust-BD Wald-type test enjoys robustness of validity and efficiency asymptotically. Specifically, the asymptotic level of the test is stable under a small amount of contamination of the null hypothesis, whereas the asymptotic power is large enough under a contaminated distribution in a neighborhood of the contiguous alternatives, thus lending support to the utility of the proposed robust-BD Wald-type test.

322 KiB  
Article
Composite Likelihood Methods Based on Minimum Density Power Divergence Estimator
by Elena Castilla, Nirian Martín, Leandro Pardo and Konstantinos Zografos
Entropy 2018, 20(1), 18; https://0-doi-org.brum.beds.ac.uk/10.3390/e20010018 - 31 Dec 2017
Cited by 6 | Viewed by 3945
Abstract
In this paper, a robust version of the Wald test statistic for composite likelihood is considered, using the composite minimum density power divergence estimator instead of the composite maximum likelihood estimator. This new family of test statistics will be called Wald-type test statistics. The problem of testing a simple and a composite null hypothesis is considered, and the robustness is studied on the basis of a simulation study. The composite minimum density power divergence estimator is also introduced, and its asymptotic properties are studied.
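The building block, the minimum density power divergence estimator of Basu et al., minimizes an objective that is available in closed form for many models; the composite-likelihood version of the paper replaces the full likelihood by a product of conditional or marginal densities. A plain (non-composite) normal-model sketch, under our own toy setup:

import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0.0, 1.0, 950), rng.normal(12.0, 1.0, 50)])
alpha = 0.3                                 # tuning constant of the DPD

def dpd_objective(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    # int f_theta^(1+alpha) dx for the normal density, in closed form.
    integral = (2.0 * np.pi * sigma**2) ** (-alpha / 2.0) / np.sqrt(1.0 + alpha)
    return integral - (1.0 + 1.0 / alpha) * np.mean(
        stats.norm.pdf(x, mu, sigma) ** alpha)

res = optimize.minimize(dpd_objective, x0=[np.median(x), 0.0],
                        method="Nelder-Mead")
print("MDPDE (mu, sigma):", res.x[0], np.exp(res.x[1]))   # near (0, 1)

Plugging such estimators into the quadratic form of a Wald statistic yields the Wald-type tests of the paper, trading a small loss of efficiency at the model for stability under contamination.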
725 KiB  
Article
Robust-BD Estimation and Inference for General Partially Linear Models
by Chunming Zhang and Zhengjun Zhang
Entropy 2017, 19(11), 625; https://0-doi-org.brum.beds.ac.uk/10.3390/e19110625 - 20 Nov 2017
Cited by 1 | Viewed by 4027
Abstract
The classical quadratic loss for the partially linear model (PLM) and the likelihood function for the generalized PLM are not resistant to outliers. This inspires us to propose a class of “robust-Bregman divergence (BD)” estimators of both the parametric and nonparametric components in the general partially linear model (GPLM), which allows the distribution of the response variable to be partially specified, without being fully known. Using the local-polynomial function estimation method, we propose a computationally efficient procedure for obtaining “robust-BD” estimators and establish the consistency and asymptotic normality of the “robust-BD” estimator of the parametric component β_o. For inference procedures on β_o in the GPLM, we show that the Wald-type test statistic W_n constructed from the “robust-BD” estimators is asymptotically distribution free under the null, whereas the likelihood ratio-type test statistic Λ_n is not. This provides an insight into the distinction from the asymptotic equivalence (Fan and Huang 2005) between W_n and Λ_n in the PLM constructed from profile least-squares estimators using the non-robust quadratic loss. Numerical examples illustrate the computational effectiveness of the proposed “robust-BD” estimators and the robust Wald-type test in the presence of outlying observations.

327 KiB  
Article
Robust and Sparse Regression via γ-Divergence
by Takayuki Kawashima and Hironori Fujisawa
Entropy 2017, 19(11), 608; https://0-doi-org.brum.beds.ac.uk/10.3390/e19110608 - 13 Nov 2017
Cited by 20 | Viewed by 5424
Abstract
Many sparse regression methods have been proposed for high-dimensional data. However, they may not be robust against outliers. Recently, the use of the density power weight has been studied for robust parameter estimation, and the corresponding divergences have been discussed. One such divergence is the γ-divergence, and the robust estimator based on the γ-divergence is known for its strong robustness. In this paper, we extend the γ-divergence to the regression problem, consider robust and sparse regression based on the γ-divergence, and show that it has strong robustness under heavy contamination even when outliers are heterogeneous. The loss function is constructed from an empirical estimate of the γ-divergence with sparse regularization, and the parameter estimate is defined as the minimizer of the loss function. To obtain the robust and sparse estimate, we propose an efficient update algorithm with a monotone decreasing property of the loss function. In particular, we discuss a linear regression problem with L_1 regularization in detail. In numerical experiments and real data analyses, we see that the proposed method outperforms past robust and sparse methods.

Review


12 pages, 294 KiB  
Review
ϕ-Divergence in Contingency Table Analysis
by Maria Kateri
Entropy 2018, 20(5), 324; https://0-doi-org.brum.beds.ac.uk/10.3390/e20050324 - 27 Apr 2018
Cited by 11 | Viewed by 2994
Abstract
The ϕ-divergence association models for two-way contingency tables form a family of models that includes the association and correlation models as special cases. We present this family of models, discussing its features and demonstrating the role of ϕ-divergence in building this family. The most parsimonious member of this family, the model of ϕ-scaled uniform local association, is considered in detail. Its implementation is discussed, and representative examples are commented on.