Article

Bayesian Bandwidths in Semiparametric Modelling for Nonnegative Orthant Data with Diagnostics

1
Laboratoire de Mathématiques de Besançon UMR 6623 CNRS-UBFC, Université Bourgogne Franche-Comté, 16 Route de Gray, 25030 Besançon CEDEX, France
2
Laboratoire d’Analyse Numérique Informatique et de BIOmathématique, Université Joseph KI-ZERBO, Ouagadougou 03 BP 7021, Burkina Faso
*
Author to whom correspondence should be addressed.
Current address: Laboratoire Sciences et Techniques, Université Thomas SANKARA, Ouagadougou 12 BP 417, Burkina Faso.
Academic Editor: Eddy Kwessi
Received: 28 January 2021 / Revised: 19 February 2021 / Accepted: 1 March 2021 / Published: 4 March 2021
(This article belongs to the Special Issue Directions in Statistical Modelling)

Abstract

Multivariate nonnegative orthant data are real vectors bounded to the left by the null vector, and they can be continuous, discrete or mixed. We first review the recent relative variability indexes for multivariate nonnegative continuous and count distributions. As a prelude, two comparable distributions having the same mean vector are classified through under-, equi- and over-variability with respect to the reference distribution. Multivariate associated kernel estimators are then reviewed, with new proposals that can accommodate any nonnegative orthant dataset. We focus on bandwidth matrix selection by adaptive and local Bayesian methods for semicontinuous and counting supports, respectively. We finally introduce a flexible semiparametric approach for estimating all these distributions on nonnegative supports. The corresponding estimator is directed by a given parametric part, and by a nonparametric part which is a weight function to be estimated through multivariate associated kernels. A diagnostic model is also discussed for making an appropriate choice between the parametric, semiparametric and nonparametric approaches; retaining the pure nonparametric approach indicates that the parametric part used in the modelling is unsuitable. Multivariate real data examples in the semicontinuous setting, such as reliability data, are progressively considered to illustrate the proposed approach. Concluding remarks are made with a view to extensions to other multiple functions.
Keywords: associated kernel; Bayesian selector; dispersion index; model diagnostics; multivariate distribution; variation index; weighted distribution

1. Introduction

The d-variate nonnegative orthant data on T_d^+ ⊆ [0, ∞)^d are real d-vectors bounded to the left by the null vector 0_d, and they can be continuous, discrete (e.g., count, categorical) or mixed. For simplicity, we here assume either T_d^+ = [0, ∞)^d for the semicontinuous case or T_d^+ = ℕ^d := {0, 1, 2, …}^d for the counting one; we then omit both the categorical setup and the mixed one, which can be a mix of discrete and continuous data (e.g., [1]) or other time scales (see, e.g., [2]). Modeling such datasets of T_d^+ requires nonnegative orthant distributions which are generally not easy to handle in practical data analysis. The baseline parametric distribution (e.g., [3,4,5]) for the analysis of nonnegative continuous data is the exponential distribution (e.g., in reliability) and that of count data is the Poisson one. However, their intrinsic assumptions on the first two moments are often unrealistic for many applications. The nonparametric topic of associated kernels, which is adaptable to any support T_d^+ of a probability density or mass function (pdmf), has been widely studied in recent years. We can refer to [6,7,8,9,10,11,12,13,14,15] for general results and more specific developments on associated kernel orthant distributions using classical cross-validation and Bayesian methods to select bandwidth matrices. Thus, a natural question of flexible semiparametric modeling now arises for all these multivariate orthant datasets.
Indeed, we first need a review of the recent relative variability indexes for multivariate semicontinuous ([16]) and count ([17]) distributions. The infinite number and complexity of multivariate parametric distributions require the study of different indexes for comparisons and discriminations between them. Simple classifications of two comparable distributions are done through under-, equi- and over-variability with respect to the reference distribution. We refer to [18] and references therein for univariate categorical data which does not yet have its multivariate version. We then survey multivariate associated kernels that can accommodate any nonnegative orthant dataset. Most useful families shall be pointed out, mainly as a product of univariate associated kernels and including properties and constructions. We shall focus on bandwidth matrix selections by Bayesian methods. Finally, we have to introduce a flexible semiparametric approach for estimating multivariate nonnegative orthant distributions. Following Hjort and Glad [19] for classical kernels, the corresponding estimator shall be directed by a given parametric part, and a nonparametric part which is a weight function to be estimated through multivariate associated kernels. What does it mean for a diagnostic model to make an appropriate choice between the parametric, semiparametric and nonparametric approaches in this multivariate framework? Such a discussion is to highlight practical improvements on standard nonparametric methods for multivariate semicontinuous datasets, through the use of a reasonable parametric-start description. See, for instance, [20,21,22] for univariate count datasets.
In this paper, the main goal is to introduce a family of semiparametric estimators with multivariate associated kernels for both semicontinuous and count data. They are meant to be flexible compromises between grueling parametric and fuzzy nonparametric approaches. The rest of the paper is organized as follows. Section 2 presents a brief review of the relative variability indexes for multivariate nonnegative orthant distributions, distinguishing dispersion for counting and variation for semicontinuous supports. Section 3 displays a short panoply of multivariate associated kernels which are useful for semicontinuous and for counting datasets. Properties are reviewed with new proposals, including both appropriate Bayesian methods of bandwidth selection. In Section 4, we introduce the semiparametric kernel estimators with a d-variate parametric start. We also investigate the corresponding diagnostic model. Section 5 is devoted to numerical illustrations, especially for uni- and multivariate semicontinuous datasets. In Section 6, we make some final remarks with a view to extensions to other multiple functions, such as regression. Finally, appendices are provided for technical proofs and illustrations.

2. Relative Variability Indexes for Orthant Distributions

Let X = (X_1, …, X_d)^⊤ be a nonnegative orthant d-variate random vector on T_d^+ ⊆ [0, ∞)^d, d ≥ 1. We use the following notations: √(var X) = (√(var X_1), …, √(var X_d))^⊤ is the elementwise square root of the variance vector of X; diag √(var X) = diag_d(√(var X_j)) is the d × d diagonal matrix with diagonal entries √(var X_j) and 0 elsewhere; and cov X = (cov(X_i, X_j))_{i,j ∈ {1,…,d}} denotes the covariance matrix of X, which is a d × d symmetric matrix with entries cov(X_i, X_j) such that cov(X_i, X_i) = var X_i is the variance of X_i. Then, one has
cov X = (diag √(var X)) (ρ_X) (diag √(var X)),
where ρ_X = ρ(X) is the correlation matrix of X; see, e.g., Equations (2)–(36) of [23]. It is noteworthy that there are many multivariate distributions with exponential (resp., Poisson) margins. Therefore, we denote a generic d-variate exponential distribution by E_d(μ, ρ), given a specific positive mean vector μ^{-1} := (μ_1^{-1}, …, μ_d^{-1}) and correlation matrix ρ = (ρ_{ij})_{i,j ∈ {1,…,d}}. Similarly, a generic d-variate Poisson distribution is given by P_d(μ, ρ), with positive mean vector μ := (μ_1, …, μ_d) and correlation matrix ρ. See, e.g., Appendix A for more extensive exponential and Poisson models with possible behaviours in the negative correlation setup. The uncorrelated or independent d-variate exponential and Poisson distributions will be written as E_d(μ) and P_d(μ), respectively, for ρ = I_d, the d × d identity matrix. Their respective d-variate probability density function (pdf) and probability mass function (pmf) are the product of d univariate ones.
According to [16] and following the recent univariate unification of the well-known (Fisher) dispersion index and the (Jørgensen) variation index by Touré et al. [24], the relative variability index of d-variate nonnegative orthant distributions can be written as follows. Let X and Y be two random vectors on the same support T_d^+ ⊆ [0, ∞)^d and assume m := E X = E Y, Σ_X := cov X and V_{F_Y}(m) := cov Y are fixed; then the relative variability index of X with respect to Y is defined as the positive quantity
RWI_Y(X) := tr[Σ_X W_{F_Y}^+(m)] ⋛ 1,
where "tr(·)" stands for the trace operator and W_{F_Y}^+(m) is the unique Moore–Penrose inverse of the matrix W_{F_Y}(m) := [V_{F_Y}(m)]^{1/2} [V_{F_Y}(m)]^{⊤/2} associated with V_{F_Y}(m). From (2), the over- (equi- and under-variability) of X compared to Y is realized if RWI_Y(X) > 1 (RWI_Y(X) = 1 and RWI_Y(X) < 1, respectively).
The expression (2) of RWI does not appear to be very easy to handle in this general formulation on T_d^+ ⊆ [0, ∞)^d, nor are its empirical version and interpretations. We now detail both multivariate cases of counting ([17]) and of semicontinuous ([16]) distributions. An R package has recently been provided in [25].

2.1. Relative Dispersion Indexes for Count Distributions

For T_d^+ = ℕ^d, let W_{F_Y}(m) = √m √m^⊤ be the d × d matrix of rank 1. Then, Σ_X W_{F_Y}^+(m) of (2) is also of rank 1 and has only one positive eigenvalue, denoted by
GDI(X) := [√(E X)]^⊤ (cov X) √(E X) / (E X^⊤ E X) ⋛ 1
and called the generalized dispersion index of X compared to Y ∼ P_d(E X) with E Y = E X = m by [17]. For d = 1, GDI(X_1) = var X_1 / E X_1 = DI(X_1) is the (Fisher) dispersion index with respect to the Poisson distribution. To derive this interpretation of GDI, we successively decompose the denominator of (3) as
E X^⊤ E X = [√(E X)]^⊤ (diag E X) √(E X) = [(diag √(E X)) √(E X)]^⊤ (I_d) [(diag √(E X)) √(E X)]
and the numerator of (3), by using also (1), as
[√(E X)]^⊤ (cov X) √(E X) = [(diag √(var X)) √(E X)]^⊤ (ρ_X) [(diag √(var X)) √(E X)].
Thus, GDI(X) makes it possible to compare the full variability of X (in the numerator) with respect to its expected uncorrelated Poissonian variability (in the denominator), which depends only on E X. In other words, the count random vector X is over- (equi- and under-dispersed) with respect to P_d(E X) if GDI(X) > 1 (GDI(X) = 1 and GDI(X) < 1, respectively). This is a multivariate generalization of the well-known (univariate) dispersion index by [17]. See, e.g., [17,26] for illustrative examples. We can modify GDI(X) into MDI(X), the marginal dispersion index, by replacing cov X in (3) with diag(var X) to obtain dispersion information coming only from the margins of X.
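For readers wanting to compute this index from data, here is a minimal Python/NumPy sketch of the empirical GDI (the paper's own computations use R; the function name `gdi` and the simulated samples are our illustrative choices):

```python
import numpy as np

def gdi(sample):
    """Empirical generalized dispersion index (3): the sample analogue of
    sqrt(EX)' (cov X) sqrt(EX) / (EX' EX), for an (n, d) count sample."""
    x = np.asarray(sample, dtype=float)
    m = x.mean(axis=0)                           # mean vector E X
    S = np.atleast_2d(np.cov(x, rowvar=False))   # covariance matrix cov X
    s = np.sqrt(m)
    return float(s @ S @ s / (m @ m))

# An uncorrelated Poisson sample is the equi-dispersed reference (GDI = 1),
# whatever its mean vector.
rng = np.random.default_rng(0)
pois = rng.poisson(lam=[2.0, 7.0], size=(20000, 2))
print(gdi(pois))   # close to 1
```

An over-dispersed sample (e.g., negative binomial margins) pushes the index above 1, matching the classification rule stated above.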
More generally, for two count random vectors X and Y on the same support T_d^+ ⊆ ℕ^d with E X = E Y and GDI(Y) > 0, the relative dispersion index is defined by
RDI_Y(X) := GDI(X)/GDI(Y) = { [(diag √(var X)) √(E X)]^⊤ (ρ_X) [(diag √(var X)) √(E X)] } / { [(diag √(var Y)) √(E Y)]^⊤ (ρ_Y) [(diag √(var Y)) √(E Y)] } ⋛ 1;
i.e., the over- (equi- and under-dispersion) of X compared to Y is realized if GDI(X) > GDI(Y) (GDI(X) = GDI(Y) and GDI(X) < GDI(Y), respectively). Obviously, GDI is the particular case of RDI with reference P_d, while RDI allows any general reference. Consequently, many properties of GDI are easily extended to RDI.

2.2. Relative Variation Indexes for Semicontinuous Distributions

Assume here T_d^+ = [0, ∞)^d and W_{F_Y}(m) = m m^⊤, another d × d matrix of rank 1. Then, Σ_X W_{F_Y}^+(m) of (2) is also of rank 1. Similar to (3), the generalized variation index of X compared to E_d(E X) is defined by
GVI(X) := E X^⊤ (cov X) E X / (E X^⊤ E X)² ⋛ 1;
i.e., X is over- (equi- and under-varied) with respect to E_d(E X) if GVI(X) > 1 (GVI(X) = 1 and GVI(X) < 1, respectively); see [16]. Remark that when d = 1, GVI(X_1) = var X_1 / (E X_1)² = VI(X_1) is the univariate (Jørgensen) variation index recently introduced by Abid et al. [27]. From (4), and using again (1) to rewrite the numerator of (6) as
E X^⊤ (cov X) E X = [(diag √(var X)) E X]^⊤ (ρ_X) [(diag √(var X)) E X],
GVI(X) of (6) can be interpreted as the ratio of the full variability of X with respect to its expected uncorrelated exponential E_d(E X) variability, which depends only on E X. Similar to MDI(X), we can define MVI(X) from GVI(X). See [16] for properties, numerous examples and numerical illustrations.
The relative variation index is defined, for two semicontinuous random vectors X and Y on the same support T_d^+ = [0, ∞)^d with E X = E Y and GVI(Y) > 0, by
RVI_Y(X) := GVI(X)/GVI(Y) = { [(diag √(var X)) E X]^⊤ (ρ_X) [(diag √(var X)) E X] } / { [(diag √(var Y)) E Y]^⊤ (ρ_Y) [(diag √(var Y)) E Y] } ⋛ 1;
i.e., the over- (equi- and under-variation) of X compared to Y holds if GVI(X) > GVI(Y) (GVI(X) = GVI(Y) and GVI(X) < GVI(Y), respectively). Of course, RVI generalizes GVI for multivariate semicontinuous distributions. For instance, one refers to [16] for more details on its discriminating power in multivariate parametric models through the first two moments.
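The variation indexes can be prototyped in the same way; this Python/NumPy sketch (function names `gvi` and `rvi` are ours, and the univariate exponential sample is an illustrative choice) computes the empirical GVI and the RVI ratio:

```python
import numpy as np

def gvi(sample):
    """Empirical generalized variation index (6): the sample analogue of
    EX' (cov X) EX / (EX' EX)^2, for an (n, d) semicontinuous sample."""
    x = np.asarray(sample, dtype=float)
    m = x.mean(axis=0)
    S = np.atleast_2d(np.cov(x, rowvar=False))
    return float(m @ S @ m / (m @ m) ** 2)

def rvi(sample_x, sample_y):
    """Relative variation index (7): RVI_Y(X) = GVI(X)/GVI(Y); the two
    samples are assumed to share the same mean vector."""
    return gvi(sample_x) / gvi(sample_y)

# For d = 1, GVI reduces to the (Jorgensen) variation index var X / (E X)^2,
# so a univariate exponential sample gives a value close to 1.
rng = np.random.default_rng(1)
expo = rng.exponential(scale=2.0, size=(30000, 1))
print(gvi(expo))
```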

3. Multivariate Orthant Associated Kernels

Nonparametric techniques through associated kernels represent an alternative approach for multivariate orthant data. Let X_1, …, X_n be independent and identically distributed (iid) nonnegative orthant d-variate random vectors with an unknown joint pdmf f on T_d^+ ⊆ [0, ∞)^d, for d ≥ 1. Then the multivariate associated kernel estimator f̃_n of f is expressed as
f̃_n(x) = (1/n) Σ_{i=1}^n K_{x,H}(X_i),  x = (x_1, …, x_d)^⊤ ∈ T_d^+,
where H is a given d × d bandwidth matrix (i.e., symmetric and positive definite) such that H ≡ H_n → 0_d (the d × d null matrix) as n → ∞, and K_{x,H}(·) is a multivariate (orthant) associated kernel, parameterized by x and H; see, e.g., [10]. More precisely, we have the following refined definition.
Definition 1.
Let T_d^+ be the support of the pdmf to be estimated, x ∈ T_d^+ a target vector and H a bandwidth matrix. A parameterized pdmf K_{x,H}(·) on support S_{x,H} ⊆ T_d^+ is called a "multivariate orthant associated kernel" if the following conditions are satisfied:
x ∈ S_{x,H},  E Z_{x,H} = x + A(x, H) → x  and  cov Z_{x,H} = B(x, H) → 0_d^+,
where Z_{x,H} denotes the corresponding orthant random vector with pdmf K_{x,H}, such that the vector A(x, H) → 0 (the d-dimensional null vector) and the positive definite matrix B(x, H) → 0_d^+ as H → 0_d (the d × d null matrix), and 0_d^+ stands for a symmetric matrix with entries u_{ij}, i, j = 1, …, d, such that u_{ij} ∈ [0, 1).
This definition already exists in the univariate count case of [21,28] and encompasses the multivariate one of [10]. The choice of an orthant associated kernel satisfying lim_{H→0_d} cov(Z_{x,H}) = 0_d ensures the convergence of its corresponding estimator, which is then called a second-order smoother. Otherwise, the convergence of the corresponding estimator is not guaranteed for u_{ij} ∈ (0, 1), a right neighborhood of 0, in Definition 1, and it is said to be a (consistent) first-order smoother; see, e.g., [28] for discrete kernels. In general, d-under-dispersed count associated kernels are appropriate for both small and moderate sample sizes; see, e.g., [28] for univariate cases. As for the selection of the bandwidth matrix H, it is very crucial because it controls the degree of smoothing and the form of orientation of the kernel. As a matter of fact, a simplification can be obtained by considering a diagonal matrix H = diag_d(h_j). Since it is challenging to obtain a full multivariate orthant distribution K_{x,H}(·) for building a smoother, several authors suggest the product of univariate orthant associated kernels,
K_{x,H}(·) = ∏_{j=1}^d K_{x_j,h_j}(·),
where K x j , h j , j = 1 , , d , belong either to the same family or to different families of univariate orthant associated kernels. The following two subsections are devoted to the summary of discrete and semicontinuous univariate associated kernels.
Before showing some main properties of the associated kernel estimator (8), let us recall that the family of d-variate classical (symmetric) kernels K on S_d ⊆ ℝ^d (e.g., [29,30,31]) can also be presented as (classical) associated kernels. Indeed, from (8) and writing, for instance,
K_{x,H}(·) = (det H)^{-1/2} K(H^{-1/2}(x − ·)),
where "det" is the determinant operator, one has S_{x,H} = x − H^{1/2} S_d, A(x, H) = 0 and B(x, H) = H^{1/2} I_d H^{1/2} = H. In general, one uses the classical (associated) kernels for smoothing continuous data or a pdf having support T_d = ℝ^d.
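As a concrete instance of (8) with a diagonal bandwidth matrix and the product form (9), here is a hedged Python/SciPy sketch using univariate gamma kernels (one of the semicontinuous kernels of Section 3.2, with shape x_j/h_j + 1 and scale h_j; the paper's computations use R, and all names and simulated data below are ours):

```python
import numpy as np
from scipy.stats import gamma

def gamma_product_kde(x, data, h):
    """Associated kernel estimate f~_n(x) of (8) with H = diag(h_j) and a
    product of univariate gamma kernels, for semicontinuous d-variate data.
    Each kernel is parameterized by the target x and the bandwidth h, then
    evaluated at the observations X_i."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    h = np.broadcast_to(np.asarray(h, dtype=float), x.shape)
    data = np.atleast_2d(np.asarray(data, dtype=float))   # (n, d) sample
    k = gamma.pdf(data, a=x / h + 1.0, scale=h)           # K_{x_j,h_j}(X_ij)
    return float(np.mean(np.prod(k, axis=1)))             # mean of products

rng = np.random.default_rng(2)
data = rng.exponential(1.0, size=(4000, 2))   # iid unit-exponential margins
est = gamma_product_kde([1.0, 1.0], data, [0.1, 0.1])
# The true pdf at (1, 1) is exp(-2) ~ 0.135; est should land nearby.
```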
The purely nonparametric estimator (8) with a multivariate associated kernel, f̃_n of f, is generally defined up to a normalizing constant C_n. Several simulation studies (e.g., Table 3.1 in [10]) have shown that C_n = C_n(K, H) (depending on samples, associated kernels and bandwidths) is approximately 1. Without loss of generality, one here assumes C_n = 1, as for all classical (associated) kernel estimators of a pdf. The following proposition finally describes its mean behavior and variability through the integrated bias and integrated variance of f̃_n, respectively. In what follows, let us denote by ν the reference measure (Lebesgue or counting) on the nonnegative orthant set T_d^+ and also on any set T_d ⊆ ℝ^d.
Proposition 1.
Let C_n := ∫_{T_d} f̃_n(x) ν(dx) = C_n(K, H). Then, for all n ≥ 1:
E(C_n) = 1 + ∫_{T_d} Bias{f̃_n(x)} ν(dx)  and  var(C_n) = ∫_{T_d} var{f̃_n(x)} ν(dx).
Proof. 
Let n 1 . One successively has
E(C_n) = ∫_{T_d} [ f(x) + E{f̃_n(x)} − f(x) ] ν(dx) = ∫_{T_d} f(x) ν(dx) + ∫_{T_d} [ E{f̃_n(x)} − f(x) ] ν(dx),
which leads to the first result because f is a pdmf on T d . The second result on var ( C n ) is trivial. □
The following general result is easily deduced from Proposition 1. To the best of our knowledge, it appears to be new and interesting in the framework of pdmf (associated) kernel estimators.
Corollary 1.
If C_n = 1 for all n ≥ 1, then: ∫_{T_d} Bias{f̃_n(x)} ν(dx) = 0 and ∫_{T_d} var{f̃_n(x)} ν(dx) = 0.
In particular, Corollary 1 holds for all classical (associated) kernel estimators. The two following properties on the corresponding orthant multivariate associated kernels shall be needed subsequently.
(K1)
There exists the second moment of K_{x,H}:
μ_j^{(2)}(K_{x,H}) := ∫_{S_{x,H} ⊆ T_d^+} u_j² K_{x,H}(u) ν(du) < ∞,  j = 1, …, d.
(K2)
There exist a largest real number r = r(K_{x,H}) > 0 and 0 < c(x) < ∞ such that
||K_{x,H}||₂² := ∫_{S_{x,H} ⊆ T_d^+} {K_{x,H}(u)}² ν(du) ≤ c(x) (det H)^{−r}.
In fact, (K1) is a necessary condition for smoothers to have a finite variance, and (K2) can be deduced from the continuous univariate cases (e.g., [32]) as well as from the discrete ones (e.g., [28]).
We now establish the general asymptotic behaviours of the pointwise bias and variance of the nonparametric estimator (8) on the nonnegative orthant set T_d^+; the proof is given in Appendix B. For that, we need the following assumptions, endowing T_d^+ with the Euclidean norm ||·|| and the associated inner product ⟨·, ·⟩ such that ⟨a, b⟩ = a^⊤ b.
(a1) 
The unknown pdmf f is a bounded function, twice differentiable or finite difference in T_d^+, with ∇f(x) and H_f(x) denoting, respectively, the gradient vector (in the continuous or discrete sense) and the corresponding Hessian matrix of the function f at x.
(a2) 
There exists a positive real number r > 0 such that (det H_n)^r ||K_{x,H_n}||₂² → c₁(x) > 0 as n → ∞.
Note that (a2) is obviously a consequence of (K2).
Proposition 2.
Under the assumption (a1) on f, then the estimator f ˜ n in (8) of f verifies
Bias{f̃_n(x)} = ⟨∇f(x), A(x, H_n)⟩ + ½ tr[ H_f(x) { B(x, H_n) + A(x, H_n) A^⊤(x, H_n) } ] + o( tr B(x, H_n) ),
for any x ∈ T_d^+. Moreover, if (a2) holds, then
var{f̃_n(x)} = (1/n) f(x) ||K_{x,H_n}||₂² + o( 1 / (n (det H_n)^r) ).
For d = 1 and according to the proof of Proposition 2, one can easily write E f̃_n(x) as follows:
E f̃_n(x) = E f(Z_{x,h}) = Σ_{k≥0} (1/k!) E[ (Z_{x,h} − E Z_{x,h})^k ] f^{(k)}(E Z_{x,h}),
where f^{(k)} is the kth derivative or finite difference of the pdmf f, under the existence of the centered moments of order k ≥ 2 of Z_{x,h}.
Concerning bandwidth matrix selection for the multivariate associated kernel estimator (8), one generally uses the cross-validation technique (e.g., [10,20,28,33,34]). However, it is tedious and less precise. Many papers have recently proposed Bayesian approaches (e.g., [6,7,13,14,35,36] and references therein). In particular, they recommend the local Bayesian method for discrete smoothing of a pmf (e.g., [6,7,37]) and the adaptive one for continuous smoothing of a pdf (e.g., [13,35,36]).
Denote by M the set of positive definite [diagonal] matrices [from (9), resp.] and let π be a given suitable prior distribution on M. Under the squared error loss function, the Bayes estimator of H is the mean of the posterior distribution. Then, the local Bayesian bandwidth at the target x ∈ T_d^+ takes the form
H̃(x) := [ ∫_M H π(H) f̃_n(x) dH ] [ ∫_M f̃_n(x) π(H) dH ]^{−1},  x ∈ T_d^+,
and the adaptive Bayesian bandwidth for each observation X_i ∈ T_d^+ of X is given by
H̃_i := [ ∫_M H_i π(H_i) f̃_{n,H_i,−i}(X_i) dH_i ] [ ∫_M f̃_{n,H_i,−i}(X_i) π(H_i) dH_i ]^{−1},  i = 1, …, n,
where f̃_{n,H_i,−i}(X_i) is the leave-one-out associated kernel estimator of f(X_i), deduced from (8) as
f̃_{n,H_i,−i}(X_i) := (1/(n − 1)) Σ_{ℓ=1, ℓ≠i}^n K_{X_i,H_i}(X_ℓ).
Note that the well-known and classical (global) cross-validation bandwidth matrix H̃_CV and the global Bayesian one H̃_B are obtained, respectively, from (14) as
H̃_CV := arg min_{H∈M} [ ∫_{T_d^+} {f̃_n(x)}² ν(dx) − (2/n) Σ_{i=1}^n f̃_{n,H,−i}(X_i) ]
and
H̃_B := [ ∫_M H π(H) { ∏_{i=1}^n f̃_{n,H,−i}(X_i) } dH ] [ ∫_M π(H) { ∏_{i=1}^n f̃_{n,H,−i}(X_i) } dH ]^{−1}.
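To make the cross-validation rule H̃_CV concrete in the univariate case (d = 1, H = h), the following Python/SciPy sketch minimizes the criterion over a grid for a gamma associated kernel; the grid bounds, sample and kernel are our illustrative choices (the paper itself works in R):

```python
import numpy as np
from scipy.stats import gamma
from scipy.integrate import quad

def ucv_score(h, data):
    """Cross-validation criterion of H~_CV for d = 1 with a gamma associated
    kernel: integral of f~_n^2 minus (2/n) times the sum of the leave-one-out
    estimates at the observations."""
    n = len(data)
    def f_tilde(x):
        return np.mean(gamma.pdf(data, a=x / h + 1.0, scale=h))
    # the integrand is negligible beyond the data range, so [0, 50] suffices
    integral = quad(lambda x: f_tilde(x) ** 2, 0.0, 50.0, limit=200)[0]
    # K[i, j] = K_{X_i,h}(X_j); drop the diagonal for the leave-one-out term
    K = gamma.pdf(data[None, :], a=data[:, None] / h + 1.0, scale=h)
    loo = (K.sum(axis=1) - np.diag(K)) / (n - 1.0)
    return integral - 2.0 * loo.mean()

rng = np.random.default_rng(3)
data = rng.exponential(1.0, 200)
grid = np.linspace(0.05, 1.0, 12)
h_cv = grid[np.argmin([ucv_score(h, data) for h in grid])]
```

A Bayesian selector would replace the grid search by posterior averaging over h, as in (13) and (14).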

3.1. Discrete Associated Kernels

We only present three main and useful families of univariate discrete associated kernels for (9) and satisfying (K1) and (K2).
Example 1
(categorical). For a fixed number of categories c ∈ {2, 3, …} and T_1^+ = {0, 1, …, c − 1}, one defines the Dirac discrete uniform (DirDU) kernel by
K^{DirDU}_{x,h}(u) = (1 − h)^{𝟙{u = x}} ( h/(c − 1) )^{1 − 𝟙{u = x}},
for x ∈ {0, 1, …, c − 1} and h ∈ (0, 1], with S_x := {0, 1, …, c − 1} = T_1^+, A(x, h) = h{c/2 − x − x/(c − 1)} and B(x, h) = h{c(2c − 1)/6 + x² − xc + x²/(c − 1)} − h²{c/2 − x − x/(c − 1)}².
It was introduced in the multivariate setup by Aitchison and Aitken [38] and investigated as a discrete associated kernel, symmetric about the target x, by [28] in the univariate case; see [7] for a Bayesian approach in the multivariate setup. Note here that its normalizing constant is always C_n = 1.
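A minimal Python sketch of Example 1 (function names and the toy sample are ours): the DirDU kernel puts mass 1 − h on the target category and spreads h over the others, so the resulting estimator (8) has normalizing constant exactly C_n = 1:

```python
import numpy as np

def dirdu_kernel(u, x, h, c):
    """Dirac discrete uniform (Aitchison-Aitken) kernel on {0, ..., c-1}:
    mass 1 - h at the target x and h/(c-1) on each other category."""
    u = np.asarray(u)
    return np.where(u == x, 1.0 - h, h / (c - 1.0))

def dirdu_pmf_estimate(x, data, h, c):
    """Estimator (8): average of the kernel evaluated at the observations."""
    return float(np.mean(dirdu_kernel(np.asarray(data), x, h, c)))

data = [0, 1, 1, 2, 1, 0, 3, 1, 2, 1]        # toy sample, c = 4 categories
est = [dirdu_pmf_estimate(x, data, h=0.1, c=4) for x in range(4)]
print(sum(est))   # exactly 1 up to rounding: C_n = 1
```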
Example 2
(symmetric count). For fixed m ∈ ℕ and T_1^+ ⊆ ℤ, the symmetric count triangular kernel is expressed as
K^{SCTriang}_{x,h}(u) = [ (m + 1)^h − |u − x|^h ] / P(m, h) × 𝟙_{{x, x±1, …, x±m}}(u),
for x ∈ T_1^+ and h > 0, with S_x := {x, x±1, …, x±m}, P(m, h) = (2m + 1)(m + 1)^h − 2 Σ_{ℓ=0}^m ℓ^h, A(x, h) = 0 and
B(x, h) = [ m(2m + 1)(m + 1)^{h+1}/3 − 2 Σ_{ℓ=0}^m ℓ^{h+2} ] / P(m, h) ≃ h { m(2m² + 3m + 1)/3 × log(m + 1) − 2 Σ_{ℓ=1}^m ℓ² log ℓ } + O(h²),
where ≃ holds for h sufficiently small.
It was first proposed by Kokonendji et al. [33] and then completed in [39] with an asymmetric version for solving the problem of boundary bias in count kernel estimation.
Example 3
(standard count). For T_1^+ ⊆ ℕ, the standard binomial kernel is defined by
K^{Binomial}_{x,h}(u) = [ (x + 1)! / ( u! (x + 1 − u)! ) ] ( (x + h)/(x + 1) )^u ( (1 − h)/(x + 1) )^{x+1−u} 𝟙_{{0,1,…,x+1}}(u),
for x ∈ T_1^+ and h ∈ (0, 1], with S_x := {0, 1, …, x + 1}, A(x, h) = h and B(x, h) = (x + h)(1 − h)/(x + 1) → x/(x + 1) ∈ [0, 1) as h → 0.
Here, B(x, h) tends to x/(x + 1) ∈ [0, 1) as h → 0, so the new Definition 1 holds. This first-order and under-dispersed binomial kernel was introduced in [28] and is very useful for smoothing a count distribution with a small or moderate sample size; see, e.g., [6,7,37] for Bayesian approaches and some references therein. In addition, we have the standard Poisson kernel, where K^{Poisson}_{x,h} follows the equi-dispersed Poisson distribution with mean x + h, S_x := ℕ =: T_1^+, A(x, h) = h and B(x, h) = x + h → x ∈ ℕ as h → 0. Recently, Huang et al. [40] have introduced the Conway–Maxwell–Poisson kernel by exploiting its under-dispersed part and its second-order consistency, which can be improved via the mode-dispersion approach of [41]; see also Section 2.4 in [42].
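Example 3 translates directly into code: the binomial kernel is just the Binomial(x + 1, (x + h)/(x + 1)) pmf evaluated at the observations. A Python/SciPy sketch (names and the simulated sample are ours) for smoothing a count sample:

```python
import numpy as np
from scipy.stats import binom

def binomial_kernel_estimate(x, data, h):
    """Estimator (8) at count x with the standard binomial kernel:
    K_{x,h} = Binomial(x+1, (x+h)/(x+1)) pmf, evaluated at the data."""
    data = np.asarray(data)
    return float(np.mean(binom.pmf(data, n=x + 1, p=(x + h) / (x + 1.0))))

rng = np.random.default_rng(4)
counts = rng.poisson(3.0, 400)   # a moderate count sample
f_tilde = [binomial_kernel_estimate(x, counts, h=0.15) for x in range(30)]
# This first-order kernel has a normalizing constant C_n close to
# (but not exactly) 1, as discussed after Proposition 1.
print(sum(f_tilde))
```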

3.2. Semicontinuous Associated Kernels

Now, we point out eight main and useful families of univariate semicontinuous associated kernels for (9) satisfying (K1) and (K2). These are the gamma (G) kernel of [43] (see also [44]), the inverse gamma (Ig) (see also [45]) and log-normal 2 (LN2) kernels of [41], the inverse Gaussian (IG) and reciprocal inverse Gaussian kernels of [46] (see also [47]), the log-normal 1 (LN1) and Birnbaum–Saunders kernels of [48] (see also [49,50]), and the Weibull (W) kernel of [51] (see also [50]). It is noteworthy that the link between LN2 of [41] and LN1 of [48] is obtained by changing (x, h) to (x exp(h²), 2 log(1 + h)). Several other semicontinuous associated kernels could be constructed by using the mode-dispersion technique of [41] from any semicontinuous distribution which is unimodal and has a dispersion parameter. Recently, one also has the scaled inverse chi-squared kernel of [52].
Table 1 summarizes these eight semicontinuous univariate associated kernels with their ingredients from Definition 1 and an order of preference (O.) obtained graphically. In fact, the heuristic classification (O.) is done through the behavior of the shape and scale of the associated kernel around the target x, at the edge as well as inside the support; see Figure 1 for the edge and Figure 2 for the inside. Among these eight kernels, we thus recommend the first five univariate associated kernels of Table 1 for smoothing semicontinuous data. This approach could be refined for a given dataset; see, e.g., [53] for cumulative functions.

4. Semiparametric Kernel Estimation with d-Variate Parametric Start

We investigate the semiparametric orthant kernel approach which is a compromise between the pure parametric and the nonparametric methods. This concept was proposed by Hjort and Glad [19] for continuous data, treated by Kokonendji et al. [20] for discrete univariate data and, recently, studied by Kokonendji et al. [21] with an application to radiation biodosimetry.
Without loss of generality, we here assume that any d-variate pdmf f can be formulated (e.g., [54] for d = 1 ) as
f(x) = w(x; θ) p_d(x; θ),  x ∈ T_d^+,
where p_d(·; θ) is the non-singular parametric part according to a reference d-variate distribution with corresponding unknown parameters θ = (θ_1, …, θ_k)^⊤, and w(·; θ) := f(·)/p_d(·; θ) is the unknown orthant weight function part, to be estimated with a multivariate orthant associated kernel. The weight function at each point can be considered as a local multiplicative correction factor aimed to accommodate any pointwise departure from the reference d-variate distribution. However, one cannot take the best fit of parametric models as the start distribution in this semiparametric approach, because the corresponding weight function is then close to zero and becomes a noise which is inappropriate to smooth by an associated kernel, especially in the continuous cases.
Let X 1 , , X n be iid nonnegative orthant d-variate random vectors with unknown pdmf f on T d + [ 0 , ) d . The semiparametric estimator of (15) with (9) is expressed as follows:
f̂_n(x) = p_d(x; θ̂_n) (1/n) Σ_{i=1}^n [ K_{x,H}(X_i) / p_d(X_i; θ̂_n) ] = (1/n) Σ_{i=1}^n [ p_d(x; θ̂_n) / p_d(X_i; θ̂_n) ] K_{x,H}(X_i),  x ∈ T_d^+,
where θ ^ n is the estimated parameter of θ . From (16), we then deduce the nonparametric orthant associated kernel estimate
w̃_n(x; θ̂_n) = (1/n) Σ_{i=1}^n [ 1 / p_d(X_i; θ̂_n) ] K_{x,H}(X_i)
of the weight function x ↦ w(x; θ̂_n), which depends on θ̂_n. One can observe that Proposition 1 also holds for f̂_n(·) = p_d(·; θ̂_n) w̃_n(·; θ̂_n). However, we have to prove below the analogue of Proposition 2.
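As an illustration of (16) and (17) in the univariate case, the following Python/SciPy sketch takes an exponential start (with MLE θ̂_n equal to the sample mean) and a gamma associated kernel for the weight part; the start, kernel and simulated data are our illustrative choices, not the paper's applications:

```python
import numpy as np
from scipy.stats import expon, gamma

def semiparametric_estimate(x, data, h):
    """Semiparametric estimator (16) for d = 1: parametric exponential start
    p(.; theta_hat) times the associated kernel estimate (17) of the weight."""
    data = np.asarray(data, dtype=float)
    mu_hat = data.mean()                          # MLE of the exponential mean
    p_data = expon.pdf(data, scale=mu_hat)        # p(X_i; theta_hat)
    K = gamma.pdf(data, a=x / h + 1.0, scale=h)   # K_{x,h}(X_i), gamma kernel
    w_tilde = np.mean(K / p_data)                 # w~_n(x; theta_hat)
    return expon.pdf(x, scale=mu_hat) * w_tilde   # p(x; theta_hat) w~_n(x)

rng = np.random.default_rng(5)
data = rng.exponential(2.0, 2000)
est = semiparametric_estimate(1.0, data, h=0.2)
# The start is well specified here, so the weight is near 1 and est is
# near the true pdf 0.5 * exp(-0.5) ~ 0.303 at x = 1.
```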

4.1. Known d-Variate Parametric Model

Let p_d(·; θ_0) be a fixed orthant distribution in (15) with θ_0 known. Writing f(x) = p_d(x; θ_0) w(x), we estimate the nonparametric weight function w by w̃_n(x) = (1/n) Σ_{i=1}^n K_{x,H}(X_i)/p_d(X_i; θ_0) with an orthant associated kernel method, resulting in the estimator
f̂_n(x) = p_d(x; θ_0) w̃_n(x) = (1/n) Σ_{i=1}^n [ p_d(x; θ_0) / p_d(X_i; θ_0) ] K_{x,H}(X_i),  x ∈ T_d^+.
The following proposition is proven in Appendix B.
Proposition 3.
Under the assumption (a1) on f ( · ) = p d ( · ; θ 0 ) w ( · ) , then the estimator f ^ n ( · ) = p d ( · ; θ 0 ) w ˜ n ( · ) in (18) of f satisfies
Bias{f̂_n(x)} = p_d(x; θ_0) [ ⟨∇w(x), A(x, H_n)⟩ + ½ tr( H_w(x) { B(x, H_n) + A(x, H_n) A^⊤(x, H_n) } ) ] + o( tr B(x, H_n) ), with w(x) = f(x){p_d(x; θ_0)}^{−1},
for any x ∈ T_d^+. Furthermore, if (a2) holds, then one has var{f̂_n(x)} = var{f̃_n(x)} of (11).
It is expected that the bias here is quite different from that of (10).

4.2. Unknown d-Variate Parametric Model

Let us now consider the more realistic and practical semiparametric estimator f ^ n ( · ) = p d ( · ; θ ^ n ) w ˜ n ( · ; θ ^ n ) presented in (16) of f ( · ) = p d ( · ; θ ) w ( · ; θ ) in (15) such that the parametric estimator θ ^ n of θ can be obtained by the maximum likelihood method; see [19] for quite a general estimator of θ . In fact, if the d-variate parametric model p d ( · ; θ ) is misspecified then this θ ^ n converges in probability to the pseudotrue value θ 0 satisfying
θ_0 := arg min_θ ∫_{T_d^+} f(x) log[ f(x)/p_d(x; θ) ] ν(dx)
from the Kullback–Leibler divergence (see, e.g., [55]).
Write p_0(·) := p_d(·; θ_0) for this best d-variate parametric approximant; note that this p_0(·) is not explicitly expressible, unlike the one in (18). According to [19] (see also [20]), we can represent the proposed estimator f̂_n(·) = p_d(·; θ̂_n) w̃_n(·; θ̂_n) in (16) as
f̂_n(x) ≈ (1/n) Σ_{i=1}^n [ p_0(x) / p_0(X_i) ] K_{x,H}(X_i),  x ∈ T_d^+.
Thus, the following result provides approximate bias and variance. We omit its proof since it is analogous to the one of Proposition 3.
Proposition 4.
Let p_0(·) := p_d(·; θ_0) be the best d-variate approximant of the unknown pdmf f(·) = p_d(·; θ) w(·; θ) as in (15) under the Kullback–Leibler criterion, and let w(·) := f(·)/p_0(·) be the corresponding d-variate weight function. As n → ∞ and under the assumption (a1) on f, the estimator f̂_n(·) = p_d(·; θ̂_n) w̃_n(·; θ̂_n) in (16) of f, reformulated in (19), satisfies
Bias{f̂_n(x)} = p_0(x) [ ⟨∇w(x), A(x, H_n)⟩ + ½ tr( H_w(x) { B(x, H_n) + A(x, H_n) A^⊤(x, H_n) } ) ] + o( tr B(x, H_n) + n^{−2} ), with w(x) = f(x){p_0(x)}^{−1},
for any x ∈ T_d^+. Furthermore, if (a2) holds, then we have var{f̂_n(x)} = var{f̃_n(x)} of (11).
Once again, the bias is different from that of (10). Thus, the proposed semiparametric estimator f ^ n in (16) of f can be shown to be better (or not) than the traditional nonparametric one f ˜ n in (8). The following subsection provides a practical solution.

4.3. Model Diagnostics

The estimated weight function w̃_n(x; θ̂_n) given in (17) provides useful information for model diagnostics. The d-variate weight function w(·) is equal to one if the d-variate parametric start model p_d(·; θ_0) is indeed the true pdmf. Hjort and Glad [19] proposed to check this adequacy by examining a plot of the weight function for various potential models, with pointwise confidence bands, to see whether or not w(x) = 1 is reasonable. See also [20,21] for univariate count setups.
Without going into technical details here, we use model diagnostics for verifying the adequacy of the model by examining a plot of x ↦ w̃_n(x; θ̂_n) or
$$\widetilde{W}_n(x) := \log \widetilde{w}_n(x;\widehat{\theta}_n) = \log\big[\widehat{f}_n(x)/p_d(x;\widehat{\theta}_n)\big] \tag{20}$$
for all x = X_i, i = 1, …, n, with a pointwise confidence band of ±1.96 for large n; that is, to see how far it departs from zero. More precisely, the diagnostic percentage W̃_n is below 5% for the pure nonparametric approach, belongs to [5%, 95%] for the semiparametric one, and exceeds 95% for the full parametric model. It is noteworthy that retaining the pure nonparametric approach signals the unsuitability of the parametric part considered in this approach; the orthant dataset is then left free of parametric assumptions.
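As a rough sketch of this decision rule (our reading of the band and thresholds, not the authors' code), one can classify a dataset by the share of diagnostic values W̃_n(X_i) falling inside the pointwise band around zero:

```python
import numpy as np

def diagnose(W, z=1.96):
    """Classify by the share of diagnostic values W_n(X_i) = log w_n(X_i; theta_n)
    lying inside a pointwise band of +/- z around zero (illustrative rule)."""
    pct = float(np.mean(np.abs(np.asarray(W, dtype=float)) <= z))
    if pct < 0.05:
        return "nonparametric"   # parametric start is unsuitable
    if pct <= 0.95:
        return "semiparametric"  # keep the start, corrected by the weight
    return "parametric"          # the start model alone is adequate

print(diagnose([0.1, -0.3, 2.5, 0.0, -2.1]))  # 3 of 5 values inside the band
```

Here 60% of the values lie inside the band, so the rule retains the semiparametric approach.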

5. Semicontinuous Examples of Application with Discussions

For a practical implementation of our approach, we propose to use the popular multiple gamma kernels as in (9), selecting the adaptive Bayesian procedure of [13] to smooth w̃_n(x; θ̂_n). Hence, we shall gradually consider d-variate semicontinuous cases with d = 1, 2, 3 on real datasets. All computations and graphics have been done with the R software [56].

5.1. Adaptive Bayesian Bandwidth Selection for Multiple Gamma Kernels

From Table 1, the function G_{x,h}(·) is the gamma kernel [43] given on the support S_{x,h} = [0, ∞) = T_1^+ with x ≥ 0 and h > 0:
$$G_{x,h}(u) = \frac{u^{x/h}}{\Gamma(1+x/h)\,h^{1+x/h}}\,\exp(-u/h)\,\mathbb{1}_{[0,\infty)}(u),$$
where 𝟙_E denotes the indicator function of any given event E. This gamma kernel G_{x,h}(·) is the pdf of the gamma distribution, denoted by G(1 + x/h, h), with shape parameter 1 + x/h and scale parameter h. The multiple gamma kernel from (9) is written as K_{x,H}(·) = ∏_{j=1}^{d} G_{x_j,h_j}(·) with H = diag(h_1, …, h_d).
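A quick numerical check, sketched in Python under the notations above, confirms that G_{x,h} is a proper density whose mean x + h and variance (x + h)h correspond to the quantities A(x, h) = h and B(x, h) = (x + h)h of Table 1:

```python
import math
import numpy as np

def gamma_kernel(x, h, u):
    # pdf of Gamma(shape 1 + x/h, scale h) evaluated at u
    a = 1.0 + x / h
    return u ** (a - 1.0) * np.exp(-u / h) / (math.gamma(a) * h ** a)

x, h = 2.3, 0.4
u = np.linspace(1e-9, 60.0, 600_000)
du = u[1] - u[0]
w = gamma_kernel(x, h, u)

mass = float(np.sum(w) * du)                   # total mass: should be ~ 1
mean = float(np.sum(u * w) * du)               # should be ~ x + h = 2.7
var = float(np.sum((u - mean) ** 2 * w) * du)  # should be ~ (x + h) * h = 1.08
print(mass, mean, var)
```

So the kernel is centered at x up to the shift A(x, h) = h, which vanishes as h → 0.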
For applying (13) and (14) in the framework of the semiparametric estimator f̂_n in (16), we assume that each component h_{iℓ} = h_{iℓ}(n), ℓ = 1, …, d, of H_i has the univariate inverse gamma prior Ig(α, β_ℓ) distribution with the same shape parameter α > 0 and, possibly, different scale parameters β_ℓ > 0 such that β = (β_1, …, β_d). We here recall that the pdf of Ig(α, β_ℓ) with α, β_ℓ > 0 is defined by
$$Ig_{\alpha,\beta_\ell}(u) = \frac{\beta_\ell^{\alpha}}{\Gamma(\alpha)}\,u^{-\alpha-1}\exp(-\beta_\ell/u)\,\mathbb{1}_{(0,\infty)}(u), \quad \ell = 1,\ldots,d. \tag{21}$$
The mean and the variance of the prior distribution (21) for each component h_{iℓ} of the vector H_i are given by β_ℓ/(α − 1) for α > 1 and β_ℓ²/{(α − 1)²(α − 2)} for α > 2, respectively. Note that, for fixed β_ℓ > 0, ℓ = 1, …, d, if α → ∞ then the distribution of the bandwidth vector H_i concentrates around the null vector 0.
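These two moment formulas can be checked by simulation; the sketch below uses the fact that the reciprocal of a Gamma(α, scale 1/β) variable follows Ig(α, β):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 6.0, 1.0

# if G ~ Gamma(shape alpha, scale 1/beta), then H = 1/G ~ Ig(alpha, beta)
H = 1.0 / rng.gamma(shape=alpha, scale=1.0 / beta, size=1_000_000)

print(H.mean(), beta / (alpha - 1.0))                            # prior mean beta/(alpha-1)
print(H.var(), beta**2 / ((alpha - 1.0) ** 2 * (alpha - 2.0)))   # prior variance
```

With α = 6 and β = 1, both empirical moments match the closed forms 0.2 and 0.01 up to Monte Carlo error.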
From those considerations, the closed form of the posterior density and the Bayesian estimator of H i are given in the following proposition which is proven in Appendix B.
Proposition 5.
For fixed i ∈ {1, 2, …, n}, consider each observation X_i = (X_{i1}, …, X_{id}) with its corresponding H_i = diag(h_{i1}, …, h_{id}) of univariate bandwidths, and define the subset I_i = {k ∈ {1, …, d} : X_{ik} = 0} and its complement I_i^c = {ℓ ∈ {1, …, d} : X_{iℓ} ∈ (0, ∞)}. Using the inverse gamma prior Ig_{α,β_ℓ} of (21) for each component h_{iℓ} of H_i in the multiple gamma estimator, with α > 1/2 and β = (β_1, …, β_d) ∈ (0, ∞)^d, then:
(i) the posterior density is the following weighted sum of inverse gamma densities
$$\pi(H_i \mid X_i) = \frac{p_d(X_i;\widehat{\theta}_n)}{D_i(\alpha,\beta)} \sum_{\substack{j=1\\ j\neq i}}^{n} \frac{1}{p_d(X_j;\widehat{\theta}_n)} \left[\prod_{k\in I_i} C_{jk}(\alpha,\beta_k)\, Ig_{\alpha+1,\,X_{jk}+\beta_k}(h_{ik})\right] \left[\prod_{\ell\in I_i^c} A_{ij\ell}(\alpha,\beta_\ell)\, Ig_{\alpha+1/2,\,B_{ij\ell}(\beta_\ell)}(h_{i\ell})\right],$$
with A_{ijℓ}(α, β_ℓ) = Γ(α + 1/2)/(β_ℓ^{−α} X_{iℓ}^{1/2} √(2π) [B_{ijℓ}(β_ℓ)]^{α+1/2}), B_{ijℓ}(β_ℓ) = X_{iℓ} log(X_{iℓ}/X_{jℓ}) + X_{jℓ} − X_{iℓ} + β_ℓ, C_{jk}(α, β_k) = Γ(α + 1)/[β_k^{−α}(X_{jk} + β_k)^{α+1}], and D_i(α, β) = p_d(X_i; θ̂_n) Σ_{j=1, j≠i}^{n} [∏_{k∈I_i} C_{jk}(α, β_k)][∏_{ℓ∈I_i^c} A_{ijℓ}(α, β_ℓ)]/p_d(X_j; θ̂_n);
(ii) under the quadratic loss function, the Bayesian estimator Ĥ_i = diag(ĥ_{i1}, …, ĥ_{id}) of H_i in (16) is given by
$$\widehat{h}_{im} = \frac{p_d(X_i;\widehat{\theta}_n)}{D_i(\alpha,\beta)} \sum_{\substack{j=1\\ j\neq i}}^{n} \frac{1}{p_d(X_j;\widehat{\theta}_n)} \left[\prod_{k\in I_i} C_{jk}(\alpha,\beta_k)\right] \left[\prod_{\ell\in I_i^c} A_{ij\ell}(\alpha,\beta_\ell)\right] \left(\frac{X_{jm}+\beta_m}{\alpha}\,\mathbb{1}_{\{0\}}(X_{im}) + \frac{B_{ijm}(\beta_m)}{\alpha-1/2}\,\mathbb{1}_{(0,\infty)}(X_{im})\right),$$
for m = 1, 2, …, d, with the previous notations of A_{ijℓ}(α, β_ℓ), B_{ijm}(β_m), C_{jk}(α, β_k) and D_i(α, β).
Following Somé and Kokonendji [13] for the nonparametric approach, we have to select the prior parameters α and β = (β_1, …, β_d) of the multiple inverse gamma Ig(α, β) in (21) as follows: α = α_n = n^{2/5} > 2 and β_ℓ > 0, ℓ = 1, …, d, to obtain convergence of the variable bandwidths to zero at a rate close to that of an optimal bandwidth. For practical use, we here take each β_ℓ = 1.
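For the pump dataset of Section 5.2 (n = 42), this calibration gives the following numbers; the snippet is just a quick check of the rule, not part of the estimation code:

```python
n = 42                       # sample size of the pump dataset of Section 5.2
alpha_n = n ** (2.0 / 5.0)   # alpha_n = n^(2/5), which indeed exceeds 2 here
beta_l = 1.0                 # beta_l = 1 for each component, as in the text

prior_mean = beta_l / (alpha_n - 1.0)  # prior bandwidth mean, shrinking like n^(-2/5)
print(alpha_n, prior_mean)
```

As n grows, the prior mass concentrates near zero, driving the Bayesian bandwidths toward zero at the stated rate.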

5.2. Semicontinuous Datasets

The numerical illustrations are carried out on the dataset of Table 2, which was recently used in [13] for the nonparametric approach, and only in the trivariate setup, as semicontinuous data. It concerns three measurements (with n = 42) on drinking water pumps installed in the Sahel. The first variable X_1 represents the failure times (in months) and was also recently used by Touré et al. [24]. The second variable X_2 refers to the distance (in kilometers) between each water pump and the repair center in the Sahel, while the third one, X_3, stands for the average volume (in m³) of water per day.
Table 3 displays all empirical univariate, bivariate and trivariate variation (6) and dispersion (3) indexes from Table 2. Hence, each X_j, (X_j, X_k) and (X_1, X_2, X_3) is over-dispersed compared to the corresponding uncorrelated Poisson distribution. However, only (X_1, X_3) (resp. X_1) can be considered as bivariate equi-varied (resp. univariate over-varied) with respect to the corresponding uncorrelated exponential distribution; the other X_j, (X_j, X_k) and (X_1, X_2, X_3) are under-varied. In fact, we compute the dispersion indexes only out of curiosity, since all values in Table 2 are positive integers; we now omit the counting point of view in the remainder of the analysis.
Thus, we gradually investigate semiparametric approaches for the three univariate, three bivariate and single trivariate datasets from (X_1, X_2, X_3) of Table 2.
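To fix ideas on these indexes in the univariate case, where the dispersion index is s²/x̄ (Poisson reference) and the variation index is s²/x̄² (exponential reference), a small Python sketch on illustrative numbers (not the pump data) reads:

```python
import numpy as np

x = np.array([3.2, 0.8, 5.1, 2.4, 9.7, 1.3, 4.6, 0.5, 7.9, 2.2])  # illustrative sample
mean, var = x.mean(), x.var(ddof=1)

DI = var / mean      # dispersion index: DI > 1 means over-dispersed vs Poisson
VI = var / mean**2   # variation index: VI < 1 means under-varied vs exponential
print(DI, VI)
```

For this sample DI > 1 while VI < 1, the same qualitative pattern reported for most variables of Table 3.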

5.3. Univariate Examples

For each univariate semicontinuous dataset X_j, j = 1, 2, 3, we have already computed the GVI in Table 3, which lies in (0.01, 1.95) ∋ 1. This allows us to consider our flexible semiparametric estimator f̂_{n,j} with an exponential E_1(μ_j) start in (16), using the adaptive Bayesian bandwidth in the gamma kernel of Proposition 5. Hence, we deduce the corresponding diagnostic percentage W̃_{n,j} from (20) for deciding on an appropriate approach. In addition, we first present the univariate nonparametric estimator f̃_{n,j} with adaptive Bayesian bandwidth in the gamma kernel of [35], and then propose another parametric estimation of X_j by the standard gamma model with shape (a_j) and scale (b_j) parameters.
Table 4 reports the maximum likelihood parameter estimates of the exponential and gamma models, with diagnostic percentages, from the data of Table 2. Figure 3 exhibits the histogram, f̃_{n,j}, f̂_{n,j}, exponential, gamma and diagnostic W̃_{n,j} graphs for each univariate dataset X_j. Differences between f̃_{n,j} and f̂_{n,j} can be observed with the naked eye, although they are very close and follow the same pattern. The diagnostic W̃_{n,j} graphics lead to the semiparametric approach for X_2 and to full parametric models for X_3, and slightly also for X_1. Thus, we have suggested the two-parameter gamma model for improving the starting exponential model; see, e.g., Table 2 in [54] for alternative parametric models.

5.4. Bivariate and Trivariate Examples

For the sake of flexibility and efficiency, we here analyze our proposed semiparametric estimator f̂_n with an uncorrelated exponential start in (16), using the adaptive Bayesian bandwidth in the gamma kernel of Proposition 5. This concerns all bivariate and trivariate datasets from Table 2, for which the GVI lie in (0.01, 1.06) ∋ 1 from Table 3. All computation times are almost instantaneous.
Table 5 reports the main numerical results on the corresponding correlations, MVI, parameter estimates and, finally, diagnostic percentages W̃_n from (20); we intentionally omit graphics in three or four dimensions. However, Figure 4 displays some representative projections of W̃_n. From Table 5, the cross empirical correlations are close to 0 and all MVI are smaller than 1, which allows us to consider uncorrelated exponential start-models. The maximum likelihood method is also used for estimating the parameters μ_j, yielding the same results as in Table 4. Thus, the obtained numerical values of W̃_n indicate semiparametric approaches for all bivariate datasets and the purely nonparametric method for the trivariate one; see [13] for more details on this nonparametric analysis. This progressive semiparametric analysis of the trivariate dataset of Table 2 shows the necessity of a suitable choice of parametric start-models, which may take into account the correlation structure. Hence, the retention of the pure nonparametric approach reflects the unsuitability of the parametric part used in the modelization. Note that we could consider the Marshall–Olkin exponential distributions with nonnegative correlations as start-models; but they are singular. See Appendix A for a brief review.

6. Concluding Remarks

In this paper, we have presented a flexible semiparametric approach for multivariate nonnegative orthant distributions. We have first recalled multivariate variability indexes GVI, MVI, RVI, GDI, MDI and RDI from RWI as a prelude to the second-order discrimination for these parametric distributions. We have then reviewed and provided new proposals to the nonparametric estimators through multivariate associated kernels; e.g., Proposition 1 and Corollary 1. Both effective adaptive and local Bayesian selectors of bandwidth matrices are suggested for semicontinuous and counting data, respectively.
All these previous ingredients were finally used to develop the semiparametric modeling for multivariate nonnegative orthant distributions. Numerical illustrations have been simply done for univariate and multivariate semicontinuous datasets with uncorrelated exponential start-models after examining GVI and MVI. The adaptive Bayesian bandwidth selection (13) with the multiple gamma kernel (Proposition 5) was here required for applications. Finally, the model diagnostics have played a very interesting role in guiding the choice of the appropriate approach, even if it means improving it later.
In the meantime, Kokonendji et al. [37] proposed an in-depth practical analysis of multivariate count datasets, starting with multivariate (un)correlated Poisson models after reviewing GDI and RDI. They also established an equivalent of our Proposition 5 for the local Bayesian bandwidth selection (12) by using the multiple binomial kernels from Example 3. As one of the many perspectives, one could consider the categorical setup with a local Bayesian version of the multivariate associated kernel of Aitchison and Aitken [38] from Example 1 of the univariate case.
At this stage of the analysis, all the main foundations are now available for working in a multivariate setup: variability indexes, associated kernels, Bayesian selectors and model diagnostics. We just have to adapt them to each situation encountered. For instance, there is semiparametric regression modeling; see, e.g., Abdous et al. [57], devoted to count explanatory variables, and [22]. An opportunity also opens up for hazard rate functions (e.g., [51]). Other functional setups, such as categorical and mixed data, can now be considered with objectivity and feasibility.

Author Contributions

Conceptualization (C.C.K., S.M.S.); Formal analysis (C.C.K., S.M.S.); Investigation (C.C.K., S.M.S.); Methodology (C.C.K., S.M.S.); Software (C.C.K., S.M.S.); Supervision (C.C.K.); Writing—original draft preparation (C.C.K., S.M.S.); Writing—review and editing (C.C.K., S.M.S.). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors are grateful to the Associate Editor and two anonymous referees for their constructive comments. We also sincerely thank Mohamed Elmi Assoweh for some interesting discussions.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GDI: Generalized dispersion index
GVI: Generalized variation index
iid: Independent and identically distributed
MDI: Marginal dispersion index
MVI: Marginal variation index
pdf: Probability density function
pdmf: Probability density or mass function
pmf: Probability mass function
RDI: Relative dispersion index
RVI: Relative variation index
RWI: Relative variability index

Appendix A. On Broader d-Variate Parametric Models and the Marshall–Olkin Exponential

According to Cuenin et al. [58], taking p { 1 , 2 } in their multivariate Tweedie models of flexible dependence structure, another way to define the d-variate Poisson and exponential distributions is given by P d ( Λ ) and E d ( Λ ) , respectively. The d × d symmetric variation matrix Λ = ( λ i j ) i , j { 1 , , d } is such that λ i j = λ j i 0 , the mean of the corresponding marginal distribution is λ i i > 0 , and the non-negative correlation terms satisfy
$$\rho_{ij} = \frac{\lambda_{ij}}{\sqrt{\lambda_{ii}\lambda_{jj}}} \in \big[0, \min\{R(i,j),\, R(j,i)\}\big), \tag{A1}$$
with R(i, j) = √(λ_ii/λ_jj) (1 − λ_ii^{−1} Σ_{ℓ∉{i,j}} λ_iℓ) ∈ (0, 1). Their constructions are perfectly defined, having d(d + 1)/2 parameters as in P_d(μ, ρ) and E_d(μ, ρ). Moreover, the exact bounds of the correlation terms in (A1) are attained. Cuenin et al. [58] have pointed out the construction and simulation of the negative correlation structure from the positive one of (A1) by considering the inversion method.
The negativity of a correlation component is crucial for the phenomenon of under-variability in a bivariate/multivariate positive orthant model. Figure A1 (right) plots the limit shape of any bivariate positive orthant distribution with very strong negative correlation (in red), which is not the diagonal line of the upper bound (+1) of positive correlation (in blue); see, e.g., [58] for details on both bivariate orthant (i.e., continuous and count) models. Conversely, Figure A1 (left) represents the classic lower (−1) and upper (+1) bounds of correlations on ℝ² or a finite support.
Figure A1. Support of bivariate distributions with maximum correlations (positive in blue and negative in red): model on ℝ² (left), also with finite support; model on T_2^+ = [0, ∞)² (right), without finite support.
The d-variate exponential X = (X_1, …, X_d) ∼ E_d(μ, μ_0) of Marshall and Olkin [59] is built as follows. Let Y_1, …, Y_d and Z be independent univariate exponential random variables with parameters μ_1 > 0, …, μ_d > 0 and μ_0 ≥ 0, respectively. Then, setting X_j := min(Y_j, Z) for j = 1, …, d, one has E X_j = 1/(μ_j + μ_0) = (var X_j)^{1/2} and cov(X_j, X_ℓ) = μ_0/{(μ_j + μ_0)(μ_ℓ + μ_0)(μ_j + μ_ℓ + μ_0)} for all j ≠ ℓ. Each correlation ρ(X_j, X_ℓ) = μ_0/(μ_j + μ_ℓ + μ_0) lies in [0, 1) if and only if μ_0 ≥ 0. From its survival (or reliability) function
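The moment identities can be verified by simulating the Marshall–Olkin coupling X_j = min(Y_j, Z), which reproduces the survival function S(x; μ, μ_0) below; a minimal Python sketch with illustrative rates:

```python
import numpy as np

rng = np.random.default_rng(2)
mu1, mu2, mu0 = 1.0, 2.0, 1.5
n = 400_000

Y1 = rng.exponential(1.0 / mu1, n)
Y2 = rng.exponential(1.0 / mu2, n)
Z = rng.exponential(1.0 / mu0, n)            # common shock shared by all components
X1, X2 = np.minimum(Y1, Z), np.minimum(Y2, Z)

print(X1.mean(), 1.0 / (mu1 + mu0))          # marginal mean 1/(mu_j + mu_0)
print(np.corrcoef(X1, X2)[0, 1], mu0 / (mu1 + mu2 + mu0))  # correlation mu_0/(mu_1+mu_2+mu_0)
```

The shared shock Z induces the nonnegative correlation μ_0/(μ_1 + μ_2 + μ_0) = 1/3 for these rates, matched by the empirical correlation up to Monte Carlo error.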
$$S(x;\mu,\mu_0) = \exp\Big\{-\mu_0\,\max(x_1,\ldots,x_d) - \sum_{j=1}^{d}\mu_j x_j\Big\},$$
its pdf can be written as
$$p_d(x;\mu,\mu_0) = \begin{cases} S(x;\mu,\mu_0)\,(\mu_0+\mu_\ell)\displaystyle\prod_{j=1,\,j\neq\ell}^{d}\mu_j, & \text{if } x_\ell := \max(x_1,\ldots,x_d) \text{ and } x_\ell \neq x_j \ \forall\, j\neq\ell,\\ S(x;\mu,\mu_0)\,\mu_0\,\mu_{j_1}\cdots\mu_{j_k}, & \text{if } x_{j_1},\ldots,x_{j_k} < x_{j_{k+1}} = \cdots = x_{j_d},\\ S(x;\mu,\mu_0)\,\mu_0, & \text{if } x_1 = \cdots = x_d > 0. \end{cases}$$
It is not absolutely continuous with respect to the Lebesgue measure in T d + and has singularities corresponding to the cases where two or more of the x j ’s are equal. Karlis [60] has proposed a maximum likelihood estimation of parameters via an EM algorithm. Finally, Kokonendji et al. [16] have calculated
$$\mathrm{GVI}(X) = 1 + \frac{\mu_0\displaystyle\sum_{j=1}^{d}(\mu_j+\mu_0)^{-1}\Big\{\sum_{\ell\neq j}(\mu_j+\mu_\ell+\mu_0)^{-1}(\mu_\ell+\mu_0)^{-1}\Big\}}{(\mu_1+\mu_0)^{-2}+\cdots+(\mu_d+\mu_0)^{-2}} \geq 1 \quad (\mu_0 \geq 0)$$
and
$$\mathrm{MVI}(X) = \frac{\displaystyle\sum_{j=1}^{d}(\mu_j+\mu_0)^{-4}}{\displaystyle\sum_{j=1}^{d}(\mu_j+\mu_0)^{-4} + 2\sum_{1\leq j<\ell\leq d}(\mu_j+\mu_0)^{-2}(\mu_\ell+\mu_0)^{-2}} < 1.
$$
Hence, the Marshall–Olkin exponential model X ∼ E_d(μ, μ_0) is always under-varied with respect to the MVI, and over- or equi-varied with respect to the GVI. If μ_0 = 0 then E_d(μ, μ_0) reduces to the uncorrelated E_d(μ) with GVI(Y) = 1. However, the assumption of nonnegative correlations between components is sometimes unrealistic for some analyses.

Appendix B. Proofs of Proposition 2, Proposition 3 and Proposition 5

Proof of Proposition 2.
From Definition 1, we get (see also [10] for more details)
$$E[\widetilde{f}_n(x)] - f(x) = E[K_{x,H_n}(X_1)] - f(x) = \int_{S_{x,H_n}\cap\,\mathbb{T}_d^+} K_{x,H_n}(u)\,f(u)\,\nu(du) - f(x) = E[f(Z_{x,H_n})] - f(x). \tag{A2}$$
Next, using (A2), by a Taylor expansion of the function f ( · ) over the points Z x , H n and E Z x , H n , we get
$$\begin{aligned} f(Z_{x,H_n}) &= f(E Z_{x,H_n}) + \big\langle \nabla f(E Z_{x,H_n}),\, Z_{x,H_n} - E Z_{x,H_n}\big\rangle + \frac{1}{2}\big\langle \mathcal{H}_f(E Z_{x,H_n})(Z_{x,H_n} - E Z_{x,H_n}),\, Z_{x,H_n} - E Z_{x,H_n}\big\rangle + \|Z_{x,H_n} - E Z_{x,H_n}\|^2\, o(1) \\ &= f(E Z_{x,H_n}) + \big\langle \nabla f(E Z_{x,H_n}),\, Z_{x,H_n} - E Z_{x,H_n}\big\rangle + \frac{1}{2}\,\mathrm{tr}\Big[\mathcal{H}_f(E Z_{x,H_n})\,(Z_{x,H_n} - E Z_{x,H_n})(Z_{x,H_n} - E Z_{x,H_n})^{\top}\Big] \\ &\quad + \mathrm{tr}\Big[(Z_{x,H_n} - E Z_{x,H_n})(Z_{x,H_n} - E Z_{x,H_n})^{\top}\Big]\, o(1), \end{aligned} \tag{A3}$$
where o ( 1 ) is uniform in a neighborhood of x . Therefore, taking the expectation in both sides of (A3) and then substituting the result in (A2), we get
$$\begin{aligned} E[\widetilde{f}_n(x)] - f(x) &= f(E Z_{x,H_n}) - f(x) + \frac{1}{2}\,\mathrm{tr}\big[\mathcal{H}_f(E Z_{x,H_n})\,\mathrm{var}(Z_{x,H_n})\big] + o\big(\mathrm{tr}\,\mathrm{var}(Z_{x,H_n})\big) \\ &= f(x + A_{x,H_n}) - f(x) + \frac{1}{2}\,\mathrm{tr}\big[\mathcal{H}_f(x + A_{x,H_n})\,B(x,H_n)\big] + o\big(\mathrm{tr}\,B(x,H_n)\big), \end{aligned}$$
where o(tr B(x, H_n)) is uniform in a neighborhood of x. A second Taylor expansion of the function f(·) over the points x and x + A_{x,H_n} allows us to conclude the bias (10).
Concerning the variance term, since f is bounded, we have E[K_{x,H_n}(X_1)] = O(1). It follows that
$$\begin{aligned} \mathrm{var}[\widetilde{f}_n(x)] &= \frac{1}{n}\,\mathrm{var}[K_{x,H_n}(X_1)] = \frac{1}{n}\left\{\int_{S_{x,H_n}\cap\,\mathbb{T}_d^+} K_{x,H_n}^2(u)\,f(u)\,\nu(du) + O(1)\right\} \\ &= \frac{1}{n}\int_{S_{x,H_n}\cap\,\mathbb{T}_d^+} K_{x,H_n}^2(u)\left\{f(x) + \big\langle \nabla f(x),\, u - x\big\rangle + \frac{1}{2}(u-x)^{\top}\mathcal{H}_f(x)(u-x) + o(\|u-x\|^2)\right\}\nu(du) \\ &= \frac{1}{n}\,f(x)\,\|K_{x,H_n}\|_2^2 + o\big(\{n(\det H_n)^{r}\}^{-1}\big). \quad \square \end{aligned}$$
Proof of Proposition 3.
Since one has Bias[f̂_n(x)] = p_d(x; θ_0) E[w̃_n(x)] − f(x) and var[f̂_n(x)] = {p_d(x; θ_0)}² var[w̃_n(x)], it is enough to calculate E[w̃_n(x)] and var[w̃_n(x)], following Proposition 2 applied to w̃_n(x) = n^{−1} Σ_{i=1}^{n} K_{x,H}(X_i)/p_d(X_i; θ_0) for all x ∈ T_d^+.
Indeed, one successively has
$$\begin{aligned} E[\widetilde{w}_n(x)] &= E\big[K_{x,H_n}(X_1)/p_d(X_1;\theta_0)\big] = \int_{S_{x,H_n}\cap\,\mathbb{T}_d^+} K_{x,H_n}(u)\,\{p_d(u;\theta_0)\}^{-1} f(u)\,\nu(du) = E[w(Z_{x,H_n})] \\ &= w(x) + \big\langle \nabla w(x),\, A_{x,H_n}\big\rangle + \frac{1}{2}\,\mathrm{tr}\Big[\mathcal{H}_w(x)\big\{B(x,H_n) + A_{x,H_n}A_{x,H_n}^{\top}\big\}\Big] + o\big(\mathrm{tr}\,B(x,H_n)\big), \end{aligned}$$
which leads to the announced result for Bias[f̂_n(x)]. As for var[w̃_n(x)], one also writes
$$\mathrm{var}[\widetilde{w}_n(x)] = \frac{1}{n}\,\mathrm{var}\big[K_{x,H_n}(X_1)/p_d(X_1;\theta_0)\big] = \frac{1}{n}\left\{\int_{S_{x,H_n}\cap\,\mathbb{T}_d^+} K_{x,H_n}^2(u)\,\{p_d(u;\theta_0)\}^{-2} f(u)\,\nu(du) + O(1)\right\} = \frac{1}{n}\,f(x)\,\{p_d(x;\theta_0)\}^{-2}\,\|K_{x,H_n}\|_2^2 + o\big(\{n(\det H_n)^{r}\}^{-1}\big),$$
and the desired result of var [ f ^ n ( x ) ] is therefore deduced. □
Proof of Proposition 5.
We have to adapt Theorem 2.1 of Somé and Kokonendji [13] to this semiparametric estimator f ^ n in (16). First, the leave-one-out associated kernel estimator (14) becomes
$$\widehat{f}_{n,H_i,-i}(X_i) := p_d(X_i;\widehat{\theta}_n)\,\frac{1}{n-1}\sum_{\substack{\ell=1\\ \ell\neq i}}^{n} \frac{1}{p_d(X_\ell;\widehat{\theta}_n)}\,K_{X_i,H_i}(X_\ell).$$
Then, the posterior distribution deduced from (13) is expressed as
$$\pi(H_i \mid X_i) := \pi(H_i)\,\widehat{f}_{n,H_i,-i}(X_i)\left[\int_{\mathcal{M}} \widehat{f}_{n,H_i,-i}(X_i)\,\pi(H_i)\,dH_i\right]^{-1},$$
which leads to the result of Part (i); see Theorem 2.1 (i) in [13] for details. Consequently, we similarly deduce the adaptive Bayesian estimator Ĥ_i = diag(ĥ_{i1}, …, ĥ_{id}) of Part (ii). □

References

  1. Somé, S.M.; Kokonendji, C.C.; Ibrahim, M. Associated kernel discriminant analysis for multivariate mixed data. Electr. J. Appl. Statist. Anal. 2016, 9, 385–399. [Google Scholar]
  2. Libengué Dobélé-Kpoka, F.G.B. Méthode Non-Paramétrique par Noyaux Associés Mixtes et Applications. Ph.D. Thesis, Université de Franche-Comté, Besançon, France, 2013. [Google Scholar]
  3. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Discrete Multivariate Distributions; Wiley: New York, NY, USA, 1997. [Google Scholar]
  4. Kotz, S.; Balakrishnan, N.; Johnson, N.L. Continuous Multivariate Distributions–Models and Applications, 2nd ed.; Wiley: New York, NY, USA, 2000. [Google Scholar]
  5. Balakrishnan, N.; Lai, C.-D. Continuous Bivariate Distributions, 2nd ed.; Springer: New York, NY, USA, 2009. [Google Scholar]
  6. Belaid, N.; Adjabi, S.; Kokonendji, C.C.; Zougab, N. Bayesian local bandwidth selector in multivariate associated kernel estimator for joint probability mass functions. J. Statist. Comput. Simul. 2016, 86, 3667–3681. [Google Scholar] [CrossRef]
  7. Belaid, N.; Adjabi, S.; Kokonendji, C.C.; Zougab, N. Bayesian adaptive bandwidth selector for multivariate discrete kernel estimator. Commun. Statist. Theory Meth. 2018, 47, 2988–3001. [Google Scholar] [CrossRef]
  8. Funke, B.; Kawka, R. Nonparametric density estimation for multivariate bounded data using two non-negative multiplicative bias correction methods. Comput. Statist. Data Anal. 2015, 92, 148–162. [Google Scholar] [CrossRef]
  9. Hirukawa, M. Asymmetric Kernel Smoothing—Theory and Applications in Economics and Finance; Springer: Singapore, 2018. [Google Scholar]
  10. Kokonendji, C.C.; Somé, S.M. On multivariate associated kernels to estimate general density functions. J. Korean Statist. Soc. 2018, 47, 112–126. [Google Scholar] [CrossRef]
  11. Ouimet, F. Density estimation using Dirichlet kernels. arXiv 2020, arXiv:2002.06956v2. [Google Scholar]
  12. Somé, S.M.; Kokonendji, C.C. Effects of associated kernels in nonparametric multiple regressions. J. Statist. Theory Pract. 2016, 10, 456–471. [Google Scholar] [CrossRef]
  13. Somé, S.M.; Kokonendji, C.C. Bayesian selector of adaptive bandwidth for multivariate gamma kernel estimator on [0,∞)^d. J. Appl. Statist. 2021. forthcoming. [Google Scholar] [CrossRef]
  14. Zougab, N.; Adjabi, S.; Kokonendji, C.C. Comparison study to bandwidth selection in binomial kernel estimation using Bayesian approaches. J. Statist. Theory Pract. 2016, 10, 133–153. [Google Scholar] [CrossRef]
  15. Zougab, N.; Harfouche, L.; Ziane, Y.; Adjabi, S. Multivariate generalized Birnbaum—Saunders kernel density estimators. Commun. Statist. Theory Meth. 2018, 47, 4534–4555. [Google Scholar] [CrossRef]
  16. Kokonendji, C.C.; Touré, A.Y.; Sawadogo, A. Relative variation indexes for multivariate continuous distributions on [0,∞)^k and extensions. AStA Adv. Statist. Anal. 2020, 104, 285–307. [Google Scholar] [CrossRef]
  17. Kokonendji, C.C.; Puig, P. Fisher dispersion index for multivariate count distributions: A review and a new proposal. J. Multiv. Anal. 2018, 165, 180–193. [Google Scholar] [CrossRef]
  18. Weiß, C.H. On some measures of ordinal variation. J. Appl. Statist. 2019, 46, 2905–2926. [Google Scholar] [CrossRef]
  19. Hjort, N.L.; Glad, I.K. Nonparametric density estimation with a parametric start. Ann. Statist. 1995, 23, 882–904. [Google Scholar] [CrossRef]
  20. Kokonendji, C.C.; Senga Kiessé, T.; Balakrishnan, N. Semiparametric estimation for count data through weighted distributions. J. Statist. Plann. Infer. 2009, 139, 3625–3638. [Google Scholar] [CrossRef]
  21. Kokonendji, C.C.; Zougab, N.; Senga Kiessé, T. Poisson-weighted estimation by discrete kernel with application to radiation biodosimetry. In Biomedical Big Data & Statistics for Low Dose Radiation Research—Extended Abstracts Fall 2015; Ainsbury, E.A., Calle, M.L., Cardis, E., Einbeck, J., Gómez, G., Puig, P., Eds.; Springer Birkhäuser: Basel, Switzerland, 2017; pp. 115–120. [Google Scholar]
  22. Senga Kiessé, T.; Zougab, N.; Kokonendji, C.C. Bayesian estimation of bandwidth in semiparametric kernel estimation of unknown probability mass and regression functions of count data. Comput. Statist. 2016, 31, 189–206. [Google Scholar] [CrossRef]
  23. Johnson, R.A.; Wichern, D.W. Applied Multivariate Statistical Analysis, 6th ed.; Pearson Prentice Hall: New Jersey, NJ, USA, 2007. [Google Scholar]
  24. Touré, A.Y.; Dossou-Gbété, S.; Kokonendji, C.C. Asymptotic normality of the test statistics for the unified relative dispersion and relative variation indexes. J. Appl. Statist. 2020, 47, 2479–2491. [Google Scholar] [CrossRef]
  25. Touré, A.Y.; Kokonendji, C.C. Count and Continuous Generalized Variability Indexes; The R Package GWI, R Foundation for Statistical Computing: Vienna, Austria, 2021. [Google Scholar]
  26. Arnold, B.C.; Manjunath, B.G. Statistical inference for distributions with one Poisson conditional. arXiv 2020, arXiv:2009.01296. [Google Scholar]
  27. Abid, R.; Kokonendji, C.C.; Masmoudi, A. Geometric Tweedie regression models for continuous and semicontinuous data with variation phenomenon. AStA Adv. Statist. Anal. 2020, 104, 33–58. [Google Scholar] [CrossRef]
  28. Kokonendji, C.C.; Senga Kiessé, T. Discrete associated kernels method and extensions. Statist. Methodol. 2011, 8, 497–516. [Google Scholar] [CrossRef]
  29. Scott, D.W. Multivariate Density Estimation—Theory, Practice, and Visualization; Wiley: New York, NY, USA, 1992. [Google Scholar]
  30. Silverman, B.W. Density Estimation for Statistics and Data Analysis; Chapman and Hall: London, UK, 1986. [Google Scholar]
  31. Zougab, N.; Adjabi, S.; Kokonendji, C.C. Bayesian estimation of adaptive bandwidth matrices in multivariate kernel density estimation. Comput. Statist. Data Anal. 2014, 75, 28–38. [Google Scholar] [CrossRef]
  32. Kokonendji, C.C.; Libengué Dobélé-Kpoka, F.G.B. Asymptotic results for continuous associated kernel estimators of density functions. Afr. Diaspora J. Math. 2018, 21, 87–97. [Google Scholar]
  33. Kokonendji, C.C.; Senga Kiessé, T.; Zocchi, S.S. Discrete triangular distributions and non-parametric estimation for probability mass function. J. Nonparam. Statist. 2007, 19, 241–254. [Google Scholar] [CrossRef]
  34. Wansouwé, W.E.; Somé, S.M.; Kokonendji, C.C. Ake: An R package for discrete and continuous associated kernel estimations. R J. 2016, 8, 259–276. [Google Scholar] [CrossRef]
  35. Somé, S.M. Bayesian selector of adaptive bandwidth for gamma kernel density estimator on [0,∞). Commun. Statist. Simul. Comput. 2021. forthcoming. [Google Scholar] [CrossRef]
  36. Ziane, Y.; Zougab, N.; Adjabi, S. Adaptive Bayesian bandwidth selection in asymmetric kernel density estimation for nonnegative heavy-tailed data. J. Appl. Statist. 2015, 42, 1645–1658. [Google Scholar] [CrossRef]
  37. Kokonendji, C.C.; Belaid, N.; Abid, R.; Adjabi, S. Flexible semiparametric kernel estimation of multivariate count distribution with Bayesian bandwidth selection. Statist. Meth. Appl. 2021. forthcoming. [Google Scholar]
  38. Aitchison, J.; Aitken, C.G.G. Multivariate binary discrimination by the kernel method. Biometrika 1976, 63, 413–420. [Google Scholar] [CrossRef]
  39. Kokonendji, C.C.; Zocchi, S.S. Extensions of discrete triangular distribution and boundary bias in kernel estimation for discrete functions. Statist. Probab. Lett. 2010, 80, 1655–1662. [Google Scholar] [CrossRef]
  40. Huang, A.; Sippel, L.; Fung, T. A consistent second-order discrete kernel smoother. arXiv 2020, arXiv:2010.03302. [Google Scholar]
  41. Libengué Dobélé-Kpoka, F.G.B.; Kokonendji, C.C. The mode-dispersion approach for constructing continuous associated kernels. Afr. Statist. 2017, 12, 1417–1446. [Google Scholar] [CrossRef]
  42. Huang, A. On arbitrarily underdispersed Conway-Maxwell-Poisson distributions. arXiv 2020, arXiv:2011.07503. [Google Scholar]
  43. Chen, S.X. Probability density function estimation using gamma kernels. Ann. Inst. Statist. Math. 2000, 52, 471–480. [Google Scholar] [CrossRef]
  44. Hirukawa, M.; Sakudo, M. Family of the generalised gamma kernels: A generator of asymmetric kernels for nonnegative data. J. Nonparam. Statist. 2015, 27, 41–63. [Google Scholar] [CrossRef]
  45. Mousa, A.M.; Hassan, M.K.; Fathi, A. A new non parametric estimator for Pdf based on inverse gamma distribution. Commun. Statist. Theory Meth. 2016, 45, 7002–7010. [Google Scholar] [CrossRef]
  46. Scaillet, O. Density estimation using inverse and reciprocal inverse Gaussian kernels. J. Nonparam. Statist. 2004, 16, 217–226. [Google Scholar] [CrossRef]
  47. Igarashi, G.; Kakizawa, Y. Re-formulation of inverse Gaussian, reciprocal inverse Gaussian, and Birnbaum-Saunders kernel estimators. Statist. Probab. Lett. 2014, 84, 235–246. [Google Scholar] [CrossRef]
  48. Jin, X.; Kawczak, J. Birnbaum-Saunders and lognormal kernel estimators for modelling durations in high frequency financial data. Ann. Econ. Financ. 2003, 4, 103–124. [Google Scholar]
  49. Marchant, C.; Bertin, K.; Leiva, V.; Saulo, H. Generalized Birnbaum–Saunders kernel density estimators and an analysis of financial data. Comput. Statist. Data Anal. 2013, 63, 1–15. [Google Scholar] [CrossRef]
  50. Mombeni, H.A.; Masouri, B.; Akhoond, M.R. Asymmetric kernels for boundary modification in distribution function estimation. REVSTAT 2019. forthcoming. [Google Scholar]
  51. Salha, R.B.; Ahmed, H.I.E.S.; Alhoubi, I.M. Hazard rate function estimation using Weibull kernel. Open J. Statist. 2014, 4, 650–661. [Google Scholar] [CrossRef]
  52. Erçelik, E.; Nadar, M. A new kernel estimator based on scaled inverse chi-squared density function. Am. J. Math. Manag. Sci. 2020. forthcoming. [Google Scholar] [CrossRef]
  53. Lafaye de Micheaux, P.; Ouimet, F. A study of seven asymmetric kernels for the estimation of cumulative distribution functions. arXiv 2020, arXiv:2011.14893. [Google Scholar]
  54. Kokonendji, C.C.; Touré, A.Y.; Abid, R. On general exponential weight functions and variation phenomenon. Sankhyā A 2020. forthcoming. [Google Scholar] [CrossRef]
  55. White, H. Maximum likelihood estimation of misspecified models. Econometrica 1982, 50, 1–26. [Google Scholar] [CrossRef]
  56. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2020; Available online: http://cran.r-project.org/ (accessed on 28 January 2021).
  57. Abdous, B.; Kokonendji, C.C.; Senga Kiessé, T. Semiparametric regression for count explanatory variables. J. Statist. Plann. Inference 2012, 142, 1537–1548. [Google Scholar] [CrossRef]
  58. Cuenin, J.; Jørgensen, B.; Kokonendji, C.C. Simulations of full multivariate Tweedie with flexible dependence structure. Comput. Statist. 2016, 31, 1477–1492. [Google Scholar] [CrossRef]
  59. Marshall, A.W.; Olkin, I. A multivariate exponential distribution. J. Amer. Statist. Assoc. 1967, 62, 30–44. [Google Scholar] [CrossRef]
  60. Karlis, D. ML estimation for multivariate shock models via an EM algorithm. Ann. Inst. Statist. Math. 2003, 55, 817–830. [Google Scholar] [CrossRef]
Figure 1. Comparative graphics of the eight univariate semicontinuous associated kernels of Table 1 on the edge ( x = 0.3 ) with h = 0.1 and h = 0.4 .
Figure 2. Comparative graphics of the eight univariate semicontinuous associated kernels of Table 1 inside ( x = 2.3 ) with h = 0.1 and h = 0.4 .
Figure 3. Comparative graphs of estimates of X 1 , X 2 and X 3 with their corresponding diagnostics.
Figure 4. Univariate projections of diagnostic graphs for bivariate and trivariate models.
Table 1. Eight semicontinuous univariate associated kernels on $S_{x,h} \subseteq [0,\infty)$, classified by "O."
O.  Name      $K_{x,h}(u)$, $A(x,h)$, $B(x,h)$:
1. LN2 [41]: $K_{x,h}(u) = (uh\sqrt{2\pi})^{-1}\exp\big(-[\log u - \log\{x\exp(h^2)\}]^2/(2h^2)\big)$; $A(x,h) = x[\exp(3h^2/2) - 1]$; $B(x,h) = x^2\exp(3h^2)[\exp(h^2) - 1]$.
2. W [51]: $K_{x,h}(u) = [\Gamma(h)/x]\,[u\,\Gamma(1+h)/x]^{1/h-1}\exp\big(-[u\,\Gamma(1+h)/x]^{1/h}\big)$; $A(x,h) = 0$; $B(x,h) = x^2\{\Gamma(1+2h)/\Gamma^2(1+h) - 1\}$.
3. G [43]: $K_{x,h}(u) = h^{-1-x/h}\,u^{x/h}\exp(-u/h)/\Gamma(1+x/h)$; $A(x,h) = h$; $B(x,h) = (x+h)h$.
4. BS [48]: $K_{x,h}(u) = (2\sqrt{2\pi h})^{-1}\big[(xu)^{-1/2} + (x/u^3)^{1/2}\big]\exp\big((2 - u/x - x/u)/(2h)\big)$; $A(x,h) = xh/2$; $B(x,h) = x^2h(2 + 5h/2)/2$.
5. Ig [41]: $K_{x,h}(u) = h^{1-1/(xh)}\,u^{-1/(xh)}\exp(-1/(uh))/\Gamma(1/(xh) - 1)$; $A(x,h) = 2x^2h/(1-2xh)$; $B(x,h) = x^3h/[(1-3xh)(1-2xh)^2]$.
6. RIG [46]: $K_{x,h}(u) = (2\pi uh)^{-1/2}\exp\big((x-h)[2 - (x-h)/u - u/(x-h)]/(2h)\big)$; $A(x,h) = 0$; $B(x,h) = (x+h)h$.
7. IG [46]: $K_{x,h}(u) = (2\pi h u^3)^{-1/2}\exp\big([2 - x/u - u/x]/(2hx)\big)$; $A(x,h) = 0$; $B(x,h) = x^3h$.
8. LN1 [48]: $K_{x,h}(u) = \big(u\sqrt{8\pi\log(1+h)}\big)^{-1}\exp\big(-[\log u - \log x]^2/[8\log(1+h)]\big)$; $A(x,h) = xh(h+2)$; $B(x,h) = x^2(1+h)^4[(1+h)^4 - 1]$.
$\Gamma(v) := \int_0^{\infty} s^{v-1}\exp(-s)\,ds$ is the classical gamma function with $v > 0$.
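As an illustration, the gamma kernel of Table 1 (row 3) is simply a Gamma(1 + x/h, h) density evaluated at u, with mode at the target point x. The sketch below (our illustration, not the authors' implementation; the names `gamma_kernel` and `gamma_kde` are ours) evaluates the kernel and the resulting associated-kernel density estimate:

```python
import math

def gamma_kernel(u, x, h):
    """Gamma associated kernel (Table 1, row 3): a Gamma(1 + x/h, h)
    density evaluated at u, whose mode is the target point x."""
    if u <= 0.0:
        return 0.0
    a = 1.0 + x / h  # shape parameter; the scale is h
    # work on the log scale for numerical stability
    return math.exp((a - 1.0) * math.log(u) - u / h
                    - a * math.log(h) - math.lgamma(a))

def gamma_kde(x, sample, h):
    """Associated-kernel density estimate at x: average of K_{x,h}(X_i)."""
    return sum(gamma_kernel(xi, x, h) for xi in sample) / len(sample)

# As a density in u, K_{x,h} integrates to 1 and has mean x + h,
# matching the bias term A(x,h) = h of Table 1 (checked numerically).
step, x0, h0 = 0.01, 2.3, 0.4
grid = [i * step for i in range(1, 5001)]                     # (0, 50]
mass = sum(gamma_kernel(u, x0, h0) for u in grid) * step      # ≈ 1.0
mean = sum(u * gamma_kernel(u, x0, h0) for u in grid) * step  # ≈ x0 + h0
```

Swapping `gamma_kernel` for any other row of Table 1 changes only the density formula; the averaging line of the estimator is unchanged.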
Table 2. Drinking water pumps trivariate data measured in the Sahel with n = 42 .
X 1 : 23261871012014621547225712024621
X 2 : 979394100988496110121739093103116
X 3 : 2652223923263217103931425226
X 1 : 19422051212017113147111514
X 2 : 1148296947791117103991137910984118
X 3 : 26364336627153695211202537
X 1 : 111690116529510114471420
X 2 : 989394103109110891081019310213810396
X 3 : 251843432438640213415236837
Table 3. Empirical univariate (in diagonal), bivariate (off diagonal) and trivariate (at the corner) variation (6) and dispersion (3) indexes.
$\widehat{\mathrm{GVI}}_3 = 0.0533$:                  $\widehat{\mathrm{GDI}}_3 = 15.1229$:
         X1       X2       X3                          X1        X2        X3
X1     1.9425   0.0557   1.0549               X1    89.5860   14.3223   70.7096
X2     0.0557   0.0167   0.0157               X2    14.3223    1.6623    2.0884
X3     1.0549   0.0157   0.2122               X3    70.7096    2.0884    6.3192
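As the caption of Table 3 indicates, the diagonal entries are the empirical univariate indexes: the Fisher dispersion index $s^2/\bar{x}$ (Poisson benchmark, for counts) and the relative variation index $s^2/\bar{x}^2$ (exponential benchmark, for semicontinuous data). A minimal sketch of these two empirical indexes (our illustration; the off-diagonal GVI/GDI entries additionally involve the sample covariances):

```python
def mean_var(xs):
    """Sample mean and unbiased sample variance."""
    n = len(xs)
    m = sum(xs) / n
    return m, sum((x - m) ** 2 for x in xs) / (n - 1)

def dispersion_index(xs):
    """Fisher dispersion index s^2 / mean: over- (>1), equi- (=1) or
    under-dispersion (<1) with respect to the Poisson benchmark."""
    m, v = mean_var(xs)
    return v / m

def variation_index(xs):
    """Relative variation index s^2 / mean^2: over-, equi- or
    under-variability with respect to the exponential benchmark."""
    m, v = mean_var(xs)
    return v / m ** 2
```

For example, an index above 1 classifies the sample as over-dispersed (or over-varied) relative to the corresponding reference distribution with the same mean.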
Table 4. Parameter estimates of the models and diagnostic percentages for the univariate datasets.
Estimate   $\hat{\mu}_j$   $\widetilde{W}_{n,j}$ (%)   $\hat{a}_j$   $\hat{b}_j$
X1         0.0217          95.2381                     0.7256        63.5618
X2         0.0100          76.1905                     56.9817       1.7470
X3         0.0336          100.0000                    3.7512        7.9403
Table 5. Correlations, MVI, parameter estimates and diagnostic percentages for the bi- and trivariate cases.
Dataset                       (X1, X2)           (X1, X3)           (X2, X3)           (X1, X2, X3)
$\hat{\rho}(X_j, X_k)$        0.3090             0.2597             0.0245             $\det\hat{\rho} = 0.8325$
$\widehat{\mathrm{MVI}}$      0.0720             0.9857             0.0155             0.0634
$(\hat{\mu}_j)$               (0.0217, 0.0100)   (0.0217, 0.0336)   (0.0100, 0.0336)   (0.0217, 0.0100, 0.0336)
$\widetilde{W}_n$ (%)         9.5238             52.3809            26.1005            0.0000
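The quantity $\det\hat{\rho}$ in Table 5 summarizes the overall linear dependence of the trivariate dataset: it equals 1 for uncorrelated components and approaches 0 under collinearity. For three variables the determinant of the correlation matrix has a closed form, sketched below (our illustration; the helper name `det_corr3` is ours):

```python
def det_corr3(r12, r13, r23):
    """Determinant of a 3x3 correlation matrix with unit diagonal:
    det = 1 - r12^2 - r13^2 - r23^2 + 2 * r12 * r13 * r23."""
    return 1.0 - r12 ** 2 - r13 ** 2 - r23 ** 2 + 2.0 * r12 * r13 * r23

# Sanity checks: uncorrelated components give det 1; perfectly
# correlated ones drive the determinant to 0.
print(det_corr3(0.0, 0.0, 0.0))   # 1.0
print(det_corr3(1.0, 1.0, 1.0))   # 0.0
```

The closed form follows from a cofactor expansion of the symmetric matrix $[[1, r_{12}, r_{13}], [r_{12}, 1, r_{23}], [r_{13}, r_{23}, 1]]$; for more variables one would compute the determinant numerically instead.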