Article

Kernel Estimation of Cumulative Residual Tsallis Entropy and Its Dynamic Version under ρ-Mixing Dependent Data

by Muhammed Rasheed Irshad 1, Radhakumari Maya 2, Francesco Buono 3 and Maria Longobardi 4,*
1 Department of Statistics, Cochin University of Science and Technology, Cochin 682 022, India
2 Department of Statistics, Govt. College for Women, Trivandrum 695 014, India
3 Dipartimento di Matematica e Applicazioni “Renato Caccioppoli”, Università degli Studi di Napoli Federico II, 80138 Naples, Italy
4 Dipartimento di Biologia, Università degli Studi di Napoli Federico II, 80138 Naples, Italy
* Author to whom correspondence should be addressed.
Submission received: 19 November 2021 / Revised: 17 December 2021 / Accepted: 19 December 2021 / Published: 21 December 2021
(This article belongs to the Special Issue Measures of Information II)

Abstract:
Tsallis introduced a non-logarithmic generalization of Shannon entropy, namely Tsallis entropy, which is non-extensive. Sati and Gupta proposed a cumulative residual information measure based on this non-extensive entropy, namely the cumulative residual Tsallis entropy (CRTE), and its dynamic version, the dynamic cumulative residual Tsallis entropy (DCRTE). In the present paper, we propose non-parametric kernel-type estimators for the CRTE and DCRTE when the observations exhibit a $\rho$-mixing dependence condition. Asymptotic properties of the estimators are established under suitable regularity conditions. A numerical evaluation of the proposed estimators is exhibited and a Monte Carlo simulation study is carried out.

1. Introduction

Shannon [1] left his mark on statistics by introducing the concept of entropy, a measure of disorder in a probability distribution. Associated with an absolutely continuous random variable $X$ with probability density function (pdf) $f(x)$, cumulative distribution function (cdf) $F(x)$ and survival function (sf) $\bar F(x) = 1 - F(x)$, the Shannon entropy is defined as
$$\zeta(X) = -\int_0^{+\infty} f(x)\,\log f(x)\,dx, \tag{1}$$
where $\log(\cdot)$ is the natural logarithm, with the standard convention $0 \log 0 = 0$. Nowadays, this measure has gained a special place in sciences such as physics, chemistry, computer science, wavelet analysis, image recognition and fuzzy sets. Following the pioneering work of Shannon, a significant literature has grown around this measure, obtained by incorporating additional parameters which make these entropies sensitive to different shapes of probability distributions.
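As a quick numerical illustration of the definition above (our sketch, not part of the paper): for an Exponential($\lambda$) density, $\zeta(X) = 1 - \log\lambda$, and a simple trapezoidal quadrature reproduces this closed form.

```python
import math

def shannon_entropy(pdf, lo, hi, n=200_000):
    """Trapezoidal approximation of zeta(X) = -int pdf(x) * log pdf(x) dx."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        fx = pdf(lo + i * h)
        g = -fx * math.log(fx) if fx > 0.0 else 0.0
        total += g / 2.0 if i in (0, n) else g  # half-weight at the endpoints
    return total * h

lam = 2.0
approx = shannon_entropy(lambda x: lam * math.exp(-lam * x), 0.0, 40.0)
print(approx)  # close to 1 - log(2) ≈ 0.3069
```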
A vital generalization of Shannon entropy is Tsallis entropy, which was first introduced by Havrda and Charvát [2] in the context of cybernetics theory. Tsallis [3] then exploited its non-extensive features and described its paramount importance in physics. In parallel to Shannon entropy, it measures the disorder in macroscopic systems. For an absolutely continuous random variable $X$ with pdf $f(x)$, the Tsallis entropy of order $\alpha$ is defined as
$$\tau_\alpha(X) = \frac{1}{\alpha-1}\Big(1 - E\big[(f(X))^{\alpha-1}\big]\Big) = \frac{1}{\alpha-1}\left(1 - \int_0^{+\infty} (f(x))^{\alpha}\,dx\right), \qquad \alpha \neq 1,\ \alpha > 0. \tag{2}$$
When $\alpha \to 1$, $\tau_\alpha(X) \to \zeta(X)$. Tsallis's idea was to bestow a new formula in place of the classical logarithm used in Shannon entropy. Tsallis entropy is relevant in various fields of science; it is used in a broad range of contexts in physics, such as statistical physics [4], astrophysics [5], turbulence [6], inverse problems [7] and quantum physics [8]. Tsallis entropy is applied in the description of the fluctuations of the magnetic field in the solar wind, in mammograms and in the analysis of magnetic resonance imaging (MRI) (as can be seen in Cartwright [9]). In recent years, this entropy has prompted many authors to define new discrimination measures as well as dual versions of entropy measures (see [10]).
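The $\alpha \to 1$ limit can be seen concretely for an Exponential($\lambda$) law, where $\int f^{\alpha}(x)\,dx = \lambda^{\alpha-1}/\alpha$ by direct integration (a worked example of ours, not from the paper):

```python
import math

def tsallis_exponential(alpha, lam):
    """Tsallis entropy of order alpha for an Exponential(lam) law,
    using int f(x)**alpha dx = lam**(alpha - 1) / alpha."""
    return (1.0 - lam ** (alpha - 1.0) / alpha) / (alpha - 1.0)

lam = 2.0
shannon = 1.0 - math.log(lam)  # zeta(X) for Exponential(lam)
for alpha in (1.5, 1.1, 1.01, 1.001):
    print(alpha, tsallis_exponential(alpha, lam), shannon)
```

As $\alpha$ decreases toward 1, the printed Tsallis values approach the Shannon entropy $1 - \log 2 \approx 0.3069$.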
Rao et al. [11] proposed another measure of uncertainty, called the cumulative residual entropy (CRE), which is obtained by writing the survival function in place of the pdf in (1) and is given by
$$\nu(X) = -\int_0^{+\infty} \bar F(x)\,\log \bar F(x)\,dx. \tag{3}$$
The basic idea behind this choice is that, in many situations, we prefer the cdf over the pdf. Moreover, a cdf exists in situations in which a density does not, such as in the case of a mixture of Gaussian and delta components. The CRE is specifically suited to describing the information in problems related to aging properties in reliability theory based on the mean residual life function. For other variants of the CRE, one may refer to Rao [12], Psarrakos and Toomaj [13] and the references therein.
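For the exponential distribution, the CRE has the closed form $\nu(X) = 1/\lambda$, since $-\bar F(x)\log\bar F(x) = \lambda x\, e^{-\lambda x}$; a short numerical check (our sketch):

```python
import math

def cre_numeric(sf, lo, hi, n=200_000):
    """Trapezoidal approximation of nu(X) = -int sf(x) * log sf(x) dx."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        s = sf(lo + i * h)
        g = -s * math.log(s) if 0.0 < s < 1.0 else 0.0
        total += g / 2.0 if i in (0, n) else g
    return total * h

lam = 2.0
approx = cre_numeric(lambda x: math.exp(-lam * x), 0.0, 40.0)
print(approx)  # close to 1/lam = 0.5
```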
In the same spirit as the CRE, Sati and Gupta [14] introduced the cumulative residual Tsallis entropy (CRTE) of order $\alpha$, which is defined as
$$\eta_\alpha(X) = \frac{1}{\alpha-1}\left(1 - \int_0^{+\infty} (\bar F(x))^{\alpha}\,dx\right), \qquad \alpha \neq 1,\ \alpha > 0. \tag{4}$$
When $\alpha \to 1$, $\eta_\alpha(X) \to \nu(X)$. Since $\eta_\alpha(X)$ is not applicable to a system that has survived for some units of time $t$, Sati and Gupta [14] proposed a dynamic version of the CRTE based on the residual lifetime $X_t = [X - t \mid X > t]$, whose definition is given below.
The dynamic cumulative residual Tsallis entropy (DCRTE) of order $\alpha$ is defined as
$$\eta_\alpha(X;t) = \frac{1}{\alpha-1}\left(1 - \int_t^{+\infty}\left(\frac{\bar F(x)}{\bar F(t)}\right)^{\alpha} dx\right), \qquad \alpha \neq 1,\ \alpha > 0, \tag{5}$$
where $\bar F(\cdot)$ is the sf of $X$. Khammar and Jahanshahi [15] developed a weighted form of the CRTE and DCRTE and discussed many of their reliability properties. Sunoj et al. [16] discussed a quantile-based study of the CRTE and certain characterization results using order statistics. Mohamed [17] studied the CRTE and DCRTE of concomitants of generalized order statistics. Recently, Toomaj and Atabay [18] elucidated certain new results based on the CRTE. The rapid growth in the number of articles on the CRTE and DCRTE shows the remarkable importance of both measures from theoretical and applied perspectives, especially in the physical context. As far as statistical inference is concerned, to the best of our knowledge, no work has been carried out to date in the available literature. Hence, our main objective in this work is to propose non-parametric kernel-type estimators for the CRTE and DCRTE where the observations under consideration exhibit some mode of dependence. Practically, it seems more realistic to replace independence with some mode of dependence.
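A small worked example of the DCRTE definition above (ours, not the authors' code): for the memoryless exponential law, the inner integral equals $1/(\alpha\lambda)$ for every $t$, so the DCRTE is constant in $t$.

```python
import math

def dcrte_exponential(alpha, lam, t, n=200_000, span=40.0):
    """Trapezoidal approximation of the DCRTE of an Exponential(lam)
    random variable: (1 - int_t (sf(x)/sf(t))**alpha dx) / (alpha - 1)."""
    sf_t = math.exp(-lam * t)
    h = span / n
    total = 0.0
    for i in range(n + 1):
        g = (math.exp(-lam * (t + i * h)) / sf_t) ** alpha
        total += g / 2.0 if i in (0, n) else g
    return (1.0 - total * h) / (alpha - 1.0)

# memorylessness: the DCRTE of an exponential law is constant in t,
# equal to (1 - 1/(alpha*lam)) / (alpha - 1)
print(dcrte_exponential(2.0, 1.0, 0.0))  # ≈ 0.5
print(dcrte_exponential(2.0, 1.0, 3.0))  # ≈ 0.5
```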
The study of non-parametric density estimation for dependent data began decades ago. Bradley [19] discussed the weak consistency and asymptotic normality of the kernel density estimator $f_n$ under $\rho$-mixing. Masry [20] established a non-parametric recursive density estimator in the $\alpha$-mixing context and studied some of its properties. Masry and Györfi [21] established the strong consistency of the recursive density estimator under $\rho$-mixing. Boente [22] discussed the strong consistency of the non-parametric density estimator under $\phi$-mixing and $\alpha$-mixing processes. The mixing coefficients $\alpha$-mixing, $\phi$-mixing and $\rho$-mixing were defined by Rosenblatt [23], Ibragimov [24] and Kolmogorov and Rozanov [25], respectively. For more properties of the different mixing coefficients, see Bradley [26].
Rajesh et al. [27] discussed the local linear estimation of the residual entropy function of conditional distributions where the underlying observations are assumed to be $\rho$-mixing. The kernel estimation of the Mathai–Haubold entropy function under $\alpha$-mixing dependence conditions was studied by Maya and Irshad [28]. Recently, non-parametric kernel-type estimation under $\alpha$-mixing dependence conditions of the residual extropy, past extropy and negative cumulative residual extropy functions was studied by Maya and Irshad [29], Irshad and Maya [30] and Maya et al. [31], respectively. Compared to $\alpha$-mixing, $\rho$-mixing is stronger, as can be seen in Kolmogorov and Rozanov [25]. In this work, we propose non-parametric estimators of the CRTE and DCRTE using kernel-type estimation under the assumption that the underlying lifetimes are $\rho$-mixing.
Definition 1.
Let $(\Omega, \mathcal{F}, P)$ be a probability space and $\mathcal{F}_i^k$ be the σ-algebra of events generated by the random variables $\{X_j;\ i \le j \le k\}$. The stationary process $\{X_j\}$ is said to be asymptotically uncorrelated if
$$\sup_{U \in L_2(\mathcal{F}_{-\infty}^{i}),\, V \in L_2(\mathcal{F}_{i+k}^{+\infty})} \frac{|\mathrm{cov}(U,V)|}{\sqrt{\mathrm{var}(U)\,\mathrm{var}(V)}} = \rho(k) \to 0 \tag{6}$$
as $k \to +\infty$, where $L_2(\mathcal{F}_a^b)$ denotes the collection of all second-order random variables measurable with respect to $\mathcal{F}_a^b$; $\rho(k)$ is called the maximal correlation coefficient or $\rho$-mixing coefficient (as can be seen in Kolmogorov and Rozanov [25]).
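A standard example of a ρ-mixing process (ours, not from the paper) is a stationary Gaussian AR(1) chain: for Gaussian processes the maximal correlation coefficient reduces to the ordinary correlation (Kolmogorov and Rozanov [25]), so here $\rho(k) = |\phi|^k \to 0$. The simulation below checks the geometric decay of the lagged correlations.

```python
import math, random

def ar1_sample(n, phi, seed=1):
    """Simulate a stationary Gaussian AR(1) process X_t = phi*X_{t-1} + eps_t."""
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0 / math.sqrt(1.0 - phi * phi))  # stationary start
    out = []
    for _ in range(n):
        out.append(x)
        x = phi * x + rng.gauss(0.0, 1.0)
    return out

def lag_corr(xs, k):
    """Sample autocorrelation at lag k."""
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + k] - m) for i in range(len(xs) - k))
    den = sum((v - m) ** 2 for v in xs)
    return num / den

xs = ar1_sample(200_000, phi=0.5)
for k in (1, 2, 4, 8):
    print(k, lag_corr(xs, k))  # decays roughly like 0.5**k
```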
The rest of the paper is structured as follows. In Section 2, we propose non-parametric kernel-type estimators for the CRTE and DCRTE. Section 3 contains the expressions for the bias and variance of the proposed estimators and examines their consistency; the integrated uniform consistency in quadratic mean and the asymptotic normality of the proposed estimators are also discussed there in the form of several theorems. A numerical study of the asymptotic normality of the proposed estimators is given in Section 4, and Section 5 concludes the paper.

2. Estimation

In this section, we propose non-parametric estimators for the CRTE and DCRTE functions. Let $\{X_i\}$ be a strictly stationary process with univariate probability density function $f(x)$. Note that the $X_i$'s need not be mutually independent; the lifetimes are assumed to be $\rho$-mixing. Wegman and Davies [32] introduced a recursive density estimator of $f(x)$ given by
$$f_n^*(x) = \frac{1}{n\sqrt{b_n}} \sum_{j=1}^{n} b_j^{-1/2}\, K\!\left(\frac{x - X_j}{b_j}\right),$$
where $K(x)$ satisfies the following conditions:
$$\sup_{x}|K(x)| < +\infty, \qquad \int_{-\infty}^{+\infty} |K(x)|\,dx < +\infty, \qquad \lim_{|x|\to+\infty} |x\,K(x)| = 0, \qquad \int_{-\infty}^{+\infty} K(x)\,dx = 1.$$
The bandwidth parameter $b_n$ satisfies $b_n \to 0$ and $n b_n \to +\infty$ as $n \to +\infty$. Let $x$ be a point of continuity of $f$. Suppose $f$ is $(r+1)$ times continuously differentiable at the point $x$, with
$$\sup_{u} |f^{(r+1)}(u)| = M < +\infty.$$
Assume that:
$$\int_{-\infty}^{+\infty} |u|^{j}\, |K(u)|\,du < +\infty, \qquad j = 1, 2, \ldots, r+1,$$
and that the bandwidth parameter $b_n$ satisfies:
$$\frac{1}{n}\sum_{j=1}^{n}\left(\frac{b_j}{b_n}\right)^{l+\frac{1}{2}} \longrightarrow \beta_{l+\frac{1}{2}} < +\infty \quad \text{as } n \to +\infty, \qquad l = 0, 1, 2, \ldots, r+1.$$
Then the mean and variance of $f_n^*(x)$ are given by (see Masry [20])
$$E\big[f_n^*(x)\big] \approx \beta_{0.5}\, f(x) + \frac{b_n^2 c_2}{2}\,\beta_{2.5}\, f^{(2)}(x) \tag{8}$$
and:
$$\mathrm{Var}\big(f_n^*(x)\big) \approx \frac{f(x)}{n b_n}\, C_K,$$
where $c_2 = \int_{-\infty}^{+\infty} u^2 K(u)\,du$ and $C_K = \int_{-\infty}^{+\infty} K^2(u)\,du$. Equation (8) implies that $f_n^*(x)$ is not an asymptotically unbiased estimator of $f(x)$. By simple scaling, we can find an asymptotically unbiased estimator of $f(x)$ given by
$$\hat f_n(x) = \frac{f_n^*(x)}{\beta_{0.5}}.$$
The bias and variance of $\hat f_n(x)$ are given by (see Masry [20])
$$\mathrm{Bias}\big(\hat f_n(x)\big) \approx \frac{b_n^2 c_2}{2\beta_{0.5}}\,\beta_{2.5}\, f^{(2)}(x)$$
and:
$$\mathrm{Var}\big(\hat f_n(x)\big) \approx \frac{f(x)}{n b_n \beta_{0.5}^2}\, C_K.$$
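A minimal sketch of $f_n^*$ and its rescaled version $\hat f_n$ (ours, not the authors' code), assuming a Gaussian kernel and bandwidths $b_j = j^{-1/2}$, the same choices adopted later in Section 4 (for which $\beta_{0.5} = 4/3$):

```python
import math, random

def fn_star(x, data):
    """Wegman–Davies recursive estimator f*_n(x) with a Gaussian kernel
    and bandwidths b_j = j**-0.5 (the choices later used in Section 4)."""
    n = len(data)
    s = 0.0
    for j, xj in enumerate(data, start=1):
        bj = j ** -0.5
        s += bj ** -0.5 * math.exp(-0.5 * ((x - xj) / bj) ** 2) / math.sqrt(2.0 * math.pi)
    return s / (n * math.sqrt(n ** -0.5))

def f_hat(x, data, beta_half=4.0 / 3.0):
    """Asymptotically unbiased rescaling f_hat = f*_n / beta_{1/2};
    beta_{1/2} = 4/3 for b_j = j**-0.5."""
    return fn_star(x, data) / beta_half

rng = random.Random(5)
data = [rng.gauss(0.0, 1.0) for _ in range(5000)]  # N(0, 1) sample
print(f_hat(0.0, data), 1.0 / math.sqrt(2.0 * math.pi))  # estimate vs f(0)
```

The two printed values should agree to within sampling error, illustrating the asymptotic unbiasedness of the rescaled estimator.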
We propose kernel estimators for the CRTE and DCRTE functions, respectively given by
$$\hat\eta_\alpha(X) = \frac{1}{\alpha-1}\left(1 - \int_0^{+\infty} \hat{\bar F}_n^{\alpha}(x)\,dx\right) \tag{12}$$
and:
$$\hat\eta_\alpha(X;t) = \frac{1}{\alpha-1}\left(1 - \frac{\int_t^{+\infty} \hat{\bar F}_n^{\alpha}(x)\,dx}{\hat{\bar F}_n^{\alpha}(t)}\right), \tag{13}$$
where:
$$\hat{\bar F}_n(t) = \int_t^{+\infty} \hat f_n(x)\,dx$$
is the non-parametric estimator of the survival function $\bar F(t)$.
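Because the Gaussian kernel integrates in closed form, $\hat{\bar F}_n(t)$ can be computed without numerically integrating $\hat f_n$; a sketch under the same assumptions as above (Gaussian kernel, $b_j = j^{-1/2}$), ours rather than the authors':

```python
import math, random

def sf_hat(t, data, beta_half=4.0 / 3.0):
    """Kernel survival-function estimator: integrating the Gaussian kernel
    gives normal tails, 1 - Phi(z) = erfc(z / sqrt(2)) / 2, so no numerical
    integration of f_hat is needed.  Bandwidths b_j = j**-0.5."""
    n = len(data)
    s = 0.0
    for j, xj in enumerate(data, start=1):
        bj = j ** -0.5
        s += math.sqrt(bj) * 0.5 * math.erfc((t - xj) / (bj * math.sqrt(2.0)))
    return s / (n * math.sqrt(n ** -0.5) * beta_half)

rng = random.Random(11)
data = [rng.expovariate(1.0) for _ in range(4000)]  # Exponential(1) sample
for t in (0.5, 1.0, 2.0):
    print(t, sf_hat(t, data), math.exp(-t))  # estimate vs true sf
```

For each $t$ the estimated and true survival values should agree to within sampling error; plugging `sf_hat` into a quadrature over $x$ then yields the estimators (12) and (13).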

3. Asymptotic Results

Here, we derive expressions for the bias and variance of the proposed estimators and establish certain asymptotic results.
Theorem 1.
Let $K(x)$ be a kernel satisfying the assumptions given in Section 2. Under $\rho$-mixing dependence conditions, we have:
$$\mathrm{Bias}\big(\hat{\bar F}_n(t)\big) \approx \frac{\beta_{2.5}}{2\beta_{0.5}}\, b_n^2\, c_2 \int_t^{+\infty} f^{(2)}(x)\,dx,$$
and:
$$\mathrm{Var}\big(\hat{\bar F}_n(t)\big) \approx \frac{C_K}{n b_n \beta_{0.5}^2} \int_t^{+\infty} f(x)\,dx.$$
Proof. 
The proof is similar to that of the bias and variance of $\hat f_n(x)$ given in Masry [20] and is hence omitted. □
Theorem 2.
Suppose $\hat\eta_\alpha(X)$ is the non-parametric estimator of the CRTE defined in (12) and $\hat\eta_\alpha(X;t)$ is the non-parametric estimator of the DCRTE defined in (13). Then, for $\alpha > \frac{1}{2}$ and $\alpha \neq 1$:
1. $\hat\eta_\alpha(X)$ is a consistent estimator of $\eta_\alpha(X)$;
2. $\hat\eta_\alpha(X;t)$ is a consistent estimator of $\eta_\alpha(X;t)$.
Proof. 
1. By using Taylor's series expansion, we obtain:
$$\int_0^{+\infty} \hat{\bar F}_n^{\alpha}(x)\,dx \approx \int_0^{+\infty} \bar F^{\alpha}(x)\,dx + \alpha \int_0^{+\infty} \bar F^{\alpha-1}(x)\big(\hat{\bar F}_n(x) - \bar F(x)\big)\,dx.$$
Using the above equation, the bias and the variance of $\int_0^{+\infty} \hat{\bar F}_n^{\alpha}(x)\,dx$ are given by
$$\mathrm{Bias}\left(\int_0^{+\infty} \hat{\bar F}_n^{\alpha}(x)\,dx\right) \approx \frac{\alpha\,\beta_{2.5}}{2\beta_{0.5}}\, b_n^2 c_2 \int_0^{+\infty}\left(\int_x^{+\infty} f^{(2)}(y)\,dy\right)\bar F^{\alpha-1}(x)\,dx \tag{16}$$
and:
$$\mathrm{Var}\left(\int_0^{+\infty} \hat{\bar F}_n^{\alpha}(x)\,dx\right) \approx \frac{\alpha^2\, C_K}{n b_n \beta_{0.5}^2} \int_0^{+\infty} \bar F^{2\alpha-1}(x)\,dx. \tag{17}$$
The corresponding MSE is given by
$$\mathrm{MSE}\left(\int_0^{+\infty} \hat{\bar F}_n^{\alpha}(x)\,dx\right) \approx \left(\frac{\alpha\,\beta_{2.5}}{2\beta_{0.5}}\, b_n^2 c_2 \int_0^{+\infty}\left(\int_x^{+\infty} f^{(2)}(y)\,dy\right)\bar F^{\alpha-1}(x)\,dx\right)^{2} + \frac{\alpha^2\, C_K}{n b_n \beta_{0.5}^2} \int_0^{+\infty} \bar F^{2\alpha-1}(x)\,dx. \tag{18}$$
From (18), as $n \to +\infty$:
$$\mathrm{MSE}\left(\int_0^{+\infty} \hat{\bar F}_n^{\alpha}(x)\,dx\right) \to 0.$$
Therefore:
$$\hat\eta_\alpha(X) = \frac{1}{\alpha-1}\left(1 - \int_0^{+\infty} \hat{\bar F}_n^{\alpha}(x)\,dx\right) \xrightarrow{p} \frac{1}{\alpha-1}\left(1 - \int_0^{+\infty} \bar F^{\alpha}(x)\,dx\right) = \eta_\alpha(X).$$
Hence, $\hat\eta_\alpha(X)$ is a consistent estimator (in the probability sense) of $\eta_\alpha(X)$.
2. By using Taylor's series expansion, the biases of $\int_t^{+\infty} \hat{\bar F}_n^{\alpha}(x)\,dx$ and $\hat{\bar F}_n^{\alpha}(t)$ are given by
$$\mathrm{Bias}\left(\int_t^{+\infty} \hat{\bar F}_n^{\alpha}(x)\,dx\right) \approx \frac{\alpha\,\beta_{2.5}}{2\beta_{0.5}}\, b_n^2 c_2 \int_t^{+\infty}\left(\int_x^{+\infty} f^{(2)}(y)\,dy\right)\bar F^{\alpha-1}(x)\,dx, \tag{19}$$
$$\mathrm{Bias}\big(\hat{\bar F}_n^{\alpha}(t)\big) \approx \frac{\alpha\,\beta_{2.5}}{2\beta_{0.5}}\, b_n^2 c_2\, \bar F^{\alpha-1}(t) \int_t^{+\infty} f^{(2)}(y)\,dy, \tag{20}$$
whereas the variances are given by
$$\mathrm{Var}\left(\int_t^{+\infty} \hat{\bar F}_n^{\alpha}(x)\,dx\right) \approx \frac{\alpha^2\, C_K}{n b_n \beta_{0.5}^2} \int_t^{+\infty} \bar F^{2\alpha-1}(x)\,dx, \tag{21}$$
$$\mathrm{Var}\big(\hat{\bar F}_n^{\alpha}(t)\big) \approx \frac{\alpha^2\, C_K}{n b_n \beta_{0.5}^2}\, \bar F^{2\alpha-1}(t). \tag{22}$$
The corresponding MSEs are given by
$$\mathrm{MSE}\left(\int_t^{+\infty} \hat{\bar F}_n^{\alpha}(x)\,dx\right) \approx \left(\frac{\alpha\,\beta_{2.5}}{2\beta_{0.5}}\, b_n^2 c_2 \int_t^{+\infty}\left(\int_x^{+\infty} f^{(2)}(y)\,dy\right)\bar F^{\alpha-1}(x)\,dx\right)^{2} + \frac{\alpha^2\, C_K}{n b_n \beta_{0.5}^2} \int_t^{+\infty} \bar F^{2\alpha-1}(x)\,dx \tag{23}$$
and:
$$\mathrm{MSE}\big(\hat{\bar F}_n^{\alpha}(t)\big) \approx \left(\frac{\alpha\,\beta_{2.5}}{2\beta_{0.5}}\, b_n^2 c_2\, \bar F^{\alpha-1}(t) \int_t^{+\infty} f^{(2)}(y)\,dy\right)^{2} + \frac{\alpha^2\, C_K}{n b_n \beta_{0.5}^2}\, \bar F^{2\alpha-1}(t). \tag{24}$$
From (23) and (24), as $n \to +\infty$:
$$\mathrm{MSE}\left(\int_t^{+\infty} \hat{\bar F}_n^{\alpha}(x)\,dx\right) \to 0$$
and:
$$\mathrm{MSE}\big(\hat{\bar F}_n^{\alpha}(t)\big) \to 0.$$
Therefore:
$$\hat\eta_\alpha(X;t) = \frac{1}{\alpha-1}\left(1 - \frac{\int_t^{+\infty} \hat{\bar F}_n^{\alpha}(x)\,dx}{\hat{\bar F}_n^{\alpha}(t)}\right) \xrightarrow{p} \frac{1}{\alpha-1}\left(1 - \frac{\int_t^{+\infty} \bar F^{\alpha}(x)\,dx}{\bar F^{\alpha}(t)}\right) = \eta_\alpha(X;t).$$
Hence, $\hat\eta_\alpha(X;t)$ is a consistent estimator (in the probability sense) of $\eta_\alpha(X;t)$. □
Proposition 1.
Let $K(x)$ be a kernel satisfying the conditions given in Section 2. Then, the estimation error of the DCRTE estimator defined in (13) is given by
$$\hat\eta_\alpha(X;t) - \eta_\alpha(X;t) \approx -\frac{1}{(\alpha-1)\, A_\alpha(t)}\left(\hat M_\alpha(t) - \frac{\hat A_\alpha(t)\, M_\alpha(t)}{A_\alpha(t)}\right), \tag{25}$$
where $\hat M_\alpha(t) = \int_t^{+\infty} \hat{\bar F}_n^{\alpha}(x)\,dx$, $M_\alpha(t) = \int_t^{+\infty} \bar F^{\alpha}(x)\,dx$, $\hat A_\alpha(t) = \hat{\bar F}_n^{\alpha}(t)$ and $A_\alpha(t) = \bar F^{\alpha}(t)$.
Proof. 
We have:
$$\frac{\hat M_\alpha(t)}{\hat A_\alpha(t)} - \frac{M_\alpha(t)}{A_\alpha(t)} = \frac{1}{A_\alpha(t)}\left(\hat M_\alpha(t) - \frac{\hat A_\alpha(t)\, M_\alpha(t)}{A_\alpha(t)}\right)\left[\left(\frac{A_\alpha(t)}{\hat A_\alpha(t)} - 1\right) + 1\right] = \frac{1}{A_\alpha(t)}\left(\hat M_\alpha(t) - \frac{\hat A_\alpha(t)\, M_\alpha(t)}{A_\alpha(t)}\right)\big(1 + o_p(1)\big) \tag{26}$$
with $\frac{A_\alpha(t)}{\hat A_\alpha(t)} - 1 = o_p(1)$, since $\hat A_\alpha(t) \xrightarrow{p} A_\alpha(t)$.
Therefore:
$$\frac{\hat M_\alpha(t)}{\hat A_\alpha(t)} - \frac{M_\alpha(t)}{A_\alpha(t)} \approx \frac{1}{A_\alpha(t)}\left(\hat M_\alpha(t) - \frac{\hat A_\alpha(t)\, M_\alpha(t)}{A_\alpha(t)}\right). \tag{27}$$
We have:
$$\hat\eta_\alpha(X;t) - \eta_\alpha(X;t) \approx -\frac{1}{\alpha-1}\left(\frac{\hat M_\alpha(t)}{\hat A_\alpha(t)} - \frac{M_\alpha(t)}{A_\alpha(t)}\right). \tag{28}$$
By substituting (27) into (28), we obtain (25). □
Theorem 3.
Suppose $\hat\eta_\alpha(X)$ is the non-parametric estimator of the CRTE defined in (12) and $\hat\eta_\alpha(X;t)$ is the non-parametric estimator of the DCRTE defined in (13). Then, the biases of $\hat\eta_\alpha(X)$ and $\hat\eta_\alpha(X;t)$ are given as
$$\mathrm{Bias}\big(\hat\eta_\alpha(X)\big) \approx -\frac{\alpha}{\alpha-1}\, b_n^2 c_2\, \frac{\beta_{2.5}}{2\beta_{0.5}} \int_0^{+\infty}\bar F^{\alpha-1}(x)\left(\int_x^{+\infty} f^{(2)}(y)\,dy\right)dx, \tag{29}$$
$$\mathrm{Bias}\big(\hat\eta_\alpha(X;t)\big) \approx \frac{\alpha}{\alpha-1}\,\frac{\beta_{2.5}}{2\beta_{0.5}}\,\frac{b_n^2 c_2}{\bar F^{\alpha}(t)}\left[\frac{\int_t^{+\infty}\bar F^{\alpha}(x)\,dx}{\bar F(t)}\int_t^{+\infty} f^{(2)}(y)\,dy - \int_t^{+\infty}\bar F^{\alpha-1}(x)\left(\int_x^{+\infty} f^{(2)}(y)\,dy\right)dx\right], \tag{30}$$
and the variances are given for $\alpha > \frac{1}{2}$ as
$$\mathrm{Var}\big(\hat\eta_\alpha(X)\big) \approx \frac{\alpha^2\, C_K}{(\alpha-1)^2\, n b_n\, \beta_{0.5}^2} \int_0^{+\infty}\bar F^{2\alpha-1}(x)\,dx, \tag{31}$$
$$\mathrm{Var}\big(\hat\eta_\alpha(X;t)\big) \approx \frac{\alpha^2\, C_K}{(\alpha-1)^2\, n b_n\, \beta_{0.5}^2\, \bar F^{2\alpha}(t)}\left[\int_t^{+\infty}\bar F^{2\alpha-1}(x)\,dx + \frac{\big(\int_t^{+\infty}\bar F^{\alpha}(x)\,dx\big)^2}{\bar F(t)}\right]. \tag{32}$$
Proof. 
By using Equations (16) and (17), we obtain the bias and variance of $\hat\eta_\alpha(X)$; by using Proposition 1 and Equations (19)–(22), we obtain the bias and variance of $\hat\eta_\alpha(X;t)$. □
Theorem 4.
Suppose $\hat\eta_\alpha(X)$ is the non-parametric estimator of the CRTE defined in (12) and $\hat\eta_\alpha(X;t)$ is the non-parametric estimator of the DCRTE defined in (13). Then, for $\alpha > \frac{1}{2}$ and $\alpha \neq 1$:
1. $\hat\eta_\alpha(X)$ is an integratedly uniformly consistent in quadratic mean estimator of $\eta_\alpha(X)$;
2. $\hat\eta_\alpha(X;t)$ is an integratedly uniformly consistent in quadratic mean estimator of $\eta_\alpha(X;t)$.
Proof. 
1. Consider the mean integrated squared error (MISE) of the estimator $\hat\eta_\alpha(X)$, that is:
$$\mathrm{MISE}\big(\hat\eta_\alpha(X)\big) = E\int_0^{+\infty}\big(\hat\eta_\alpha(X) - \eta_\alpha(X)\big)^2\,dx = \int_0^{+\infty}E\big(\hat\eta_\alpha(X) - \eta_\alpha(X)\big)^2\,dx = \int_0^{+\infty}\Big[\mathrm{Var}\big(\hat\eta_\alpha(X)\big) + \big(\mathrm{Bias}(\hat\eta_\alpha(X))\big)^2\Big]\,dx = \int_0^{+\infty}\mathrm{MSE}\big(\hat\eta_\alpha(X)\big)\,dx. \tag{33}$$
Using (29) and (31), we obtain:
$$\mathrm{MSE}\big(\hat\eta_\alpha(X)\big) \approx \left(\frac{\alpha}{\alpha-1}\, b_n^2 c_2\, \frac{\beta_{2.5}}{2\beta_{0.5}} \int_0^{+\infty}\bar F^{\alpha-1}(x)\left(\int_x^{+\infty} f^{(2)}(y)\,dy\right)dx\right)^{2} + \frac{\alpha^2\, C_K}{(\alpha-1)^2\, n b_n\, \beta_{0.5}^2} \int_0^{+\infty}\bar F^{2\alpha-1}(x)\,dx. \tag{34}$$
From (34), as $n \to +\infty$:
$$\mathrm{MSE}\big(\hat\eta_\alpha(X)\big) \to 0.$$
Therefore, from (33), we have:
$$\mathrm{MISE}\big(\hat\eta_\alpha(X)\big) \to 0 \quad \text{as } n \to +\infty. \tag{35}$$
From (35), we can say that $\hat\eta_\alpha(X)$ is an integratedly uniformly consistent in quadratic mean estimator of $\eta_\alpha(X)$ (as can be seen in Wegman [33]).
2. Consider the MISE of the estimator $\hat\eta_\alpha(X;t)$, that is:
$$\mathrm{MISE}\big(\hat\eta_\alpha(X;t)\big) = E\int_0^{+\infty}\big(\hat\eta_\alpha(X;t) - \eta_\alpha(X;t)\big)^2\,dx = \int_0^{+\infty}E\big(\hat\eta_\alpha(X;t) - \eta_\alpha(X;t)\big)^2\,dx = \int_0^{+\infty}\Big[\mathrm{Var}\big(\hat\eta_\alpha(X;t)\big) + \big(\mathrm{Bias}(\hat\eta_\alpha(X;t))\big)^2\Big]\,dx = \int_0^{+\infty}\mathrm{MSE}\big(\hat\eta_\alpha(X;t)\big)\,dx. \tag{36}$$
Using (30) and (32), we obtain:
$$\mathrm{MSE}\big(\hat\eta_\alpha(X;t)\big) \approx \left(\frac{\alpha}{\alpha-1}\,\frac{\beta_{2.5}}{2\beta_{0.5}}\,\frac{b_n^2 c_2}{\bar F^{\alpha}(t)}\left[\frac{\int_t^{+\infty}\bar F^{\alpha}(x)\,dx}{\bar F(t)}\int_t^{+\infty} f^{(2)}(y)\,dy - \int_t^{+\infty}\bar F^{\alpha-1}(x)\left(\int_x^{+\infty} f^{(2)}(y)\,dy\right)dx\right]\right)^{2} + \frac{\alpha^2\, C_K}{(\alpha-1)^2\, n b_n\, \beta_{0.5}^2\, \bar F^{2\alpha}(t)}\left[\int_t^{+\infty}\bar F^{2\alpha-1}(x)\,dx + \frac{\big(\int_t^{+\infty}\bar F^{\alpha}(x)\,dx\big)^2}{\bar F(t)}\right]. \tag{37}$$
From (37), as $n \to +\infty$:
$$\mathrm{MSE}\big(\hat\eta_\alpha(X;t)\big) \to 0.$$
Therefore, from (36), we have:
$$\mathrm{MISE}\big(\hat\eta_\alpha(X;t)\big) \to 0 \quad \text{as } n \to +\infty. \tag{38}$$
From (38), we can say that $\hat\eta_\alpha(X;t)$ is an integratedly uniformly consistent in quadratic mean estimator of $\eta_\alpha(X;t)$ (as can be seen in Wegman [33]). □
Theorem 5.
Suppose that $\hat\eta_\alpha(X)$ is the non-parametric estimator of the CRTE defined in (12) with $\alpha > \frac{1}{2}$ and $\alpha \neq 1$. Then, as $n \to +\infty$:
$$\frac{(n b_n)^{1/2}\big(\hat\eta_\alpha(X) - \eta_\alpha(X)\big)}{\sigma_\eta} \tag{39}$$
has a standard normal distribution, where:
$$\sigma_\eta^2 \approx \frac{\alpha^2\, C_K}{(\alpha-1)^2\, \beta_{0.5}^2} \int_0^{+\infty}\bar F^{2\alpha-1}(x)\,dx. \tag{40}$$
Proof. 
$$(n b_n)^{1/2}\big(\hat\eta_\alpha(X) - \eta_\alpha(X)\big) = -\frac{(n b_n)^{1/2}}{\alpha-1}\left(\int_0^{+\infty}\hat{\bar F}_n^{\alpha}(x)\,dx - \int_0^{+\infty}\bar F^{\alpha}(x)\,dx\right) \approx -\frac{\alpha\,(n b_n)^{1/2}}{\alpha-1}\int_0^{+\infty}\bar F^{\alpha-1}(x)\left(\int_x^{+\infty}\big(\hat f_n(y) - f(y)\big)\,dy\right)dx.$$
By using the asymptotic normality of $\hat f_n(x)$ given in Masry [20], it is immediate that:
$$\frac{(n b_n)^{1/2}\big(\hat\eta_\alpha(X) - \eta_\alpha(X)\big)}{\sigma_\eta}$$
is asymptotically normal with a mean of zero, variance of 1 and $\sigma_\eta^2$ given in (40). □
Theorem 6.
Suppose that $\hat\eta_\alpha(X;t)$ is the non-parametric estimator of the DCRTE $\eta_\alpha(X;t)$ defined in (13) with $\alpha > \frac{1}{2}$ and $\alpha \neq 1$. Then, as $n \to +\infty$:
$$\frac{(n b_n)^{1/2}\big(\hat\eta_\alpha(X;t) - \eta_\alpha(X;t)\big)}{\sigma_{\eta_x}}$$
has a standard normal distribution, where:
$$\sigma_{\eta_x}^2 \approx \frac{\alpha^2\, C_K}{(\alpha-1)^2\, \beta_{0.5}^2\, \bar F^{2\alpha}(t)}\left[\int_t^{+\infty}\bar F^{2\alpha-1}(x)\,dx + \frac{\big(\int_t^{+\infty}\bar F^{\alpha}(x)\,dx\big)^2}{\bar F(t)}\right]. \tag{43}$$
Proof. 
$$(n b_n)^{1/2}\big(\hat\eta_\alpha(X;t) - \eta_\alpha(X;t)\big) = -\frac{(n b_n)^{1/2}}{\alpha-1}\left(\frac{\int_t^{+\infty}\hat{\bar F}_n^{\alpha}(x)\,dx}{\hat{\bar F}_n^{\alpha}(t)} - \frac{\int_t^{+\infty}\bar F^{\alpha}(x)\,dx}{\bar F^{\alpha}(t)}\right) \approx -\frac{\alpha\,(n b_n)^{1/2}}{(\alpha-1)\,\bar F^{\alpha}(t)}\int_t^{+\infty}\bar F^{\alpha-1}(x)\big(\hat{\bar F}_n(x) - \bar F(x)\big)\,dx = -\frac{\alpha\,(n b_n)^{1/2}}{(\alpha-1)\,\bar F^{\alpha}(t)}\int_t^{+\infty}\bar F^{\alpha-1}(x)\left(\int_x^{+\infty}\big(\hat f_n(y) - f(y)\big)\,dy\right)dx.$$
By using the asymptotic normality of $\hat f_n(x)$ given in Masry [20], it is immediate that:
$$\frac{(n b_n)^{1/2}\big(\hat\eta_\alpha(X;t) - \eta_\alpha(X;t)\big)}{\sigma_{\eta_x}}$$
is asymptotically normal with a mean of zero, variance of 1 and $\sigma_{\eta_x}^2$ given in (43). □

4. Numerical Evaluation of η ^ α ( X ) and Monte Carlo Simulation

In this section, a numerical evaluation of $\hat\eta_\alpha(X)$ is given and a Monte Carlo simulation is carried out to support the asymptotic normality of the estimator given in (39). Let $X$ be exponentially distributed with parameter $\lambda$ (mean $1/\lambda$). Then, the CRTE of order $\alpha$ of $X$ is given by
$$\eta_\alpha(X) = \frac{\lambda\alpha - 1}{\lambda\alpha\,(\alpha-1)}.$$
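This closed form follows from $\int_0^{+\infty} e^{-\lambda\alpha x}\,dx = 1/(\lambda\alpha)$ and can be cross-checked by quadrature (our sketch):

```python
import math

def crte_exponential(alpha, lam):
    """Closed-form CRTE of an Exponential(lam) law."""
    return (lam * alpha - 1.0) / (lam * alpha * (alpha - 1.0))

# cross-check against the definition by trapezoidal quadrature of sf**alpha
alpha, lam, h, m = 1.5, 1.0, 0.001, 40_000
total = 0.0
for i in range(m + 1):
    g = math.exp(-lam * i * h) ** alpha
    total += g / 2.0 if i in (0, m) else g
direct = (1.0 - total * h) / (alpha - 1.0)
print(crte_exponential(alpha, lam), direct)  # both ≈ 0.6667
```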
In order to obtain the desired estimator, it is necessary to fix a function $K$ and a sequence $\{b_n\}_{n\in\mathbb{N}}$ which satisfy the assumptions given in Section 2. Here, we consider:
$$K(x) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right), \qquad x \in \mathbb{R},$$
$$b_n = \frac{1}{\sqrt{n}}, \qquad n \in \mathbb{N}.$$
By using these assumptions, it readily follows that:
$$\beta_{0.5} = \frac{4}{3}, \qquad C_K = \frac{1}{2\sqrt{\pi}},$$
$$\sigma_\eta^2 = \frac{\alpha^2\, C_K}{(\alpha-1)^2\, \beta_{0.5}^2\, \lambda\,(2\alpha-1)} = \frac{9\alpha^2}{32\sqrt{\pi}\,(\alpha-1)^2\,\lambda\,(2\alpha-1)},$$
where $\alpha > \frac{1}{2}$ and $\alpha \neq 1$.
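These constants can be checked numerically (our sketch): $\beta_{0.5}$ is the limit of $\frac{1}{n}\sum_j (b_j/b_n)^{1/2}$ for $b_j = j^{-1/2}$, and $C_K = \int K^2(u)\,du$ for the Gaussian kernel.

```python
import math

# beta_{1/2} = lim (1/n) * sum_j (b_j / b_n)**0.5 with b_j = j**-0.5,
# i.e. the average of (n/j)**0.25, which tends to 4/3;
# C_K = int K(u)**2 du = 1/(2*sqrt(pi)) for the Gaussian kernel.
n = 1_000_000
beta_half = sum((n / j) ** 0.25 for j in range(1, n + 1)) / n
ck = 1.0 / (2.0 * math.sqrt(math.pi))
print(beta_half, ck)  # ≈ 1.3333 and ≈ 0.2821
```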
To fix ideas, consider $n = 50$ and $\lambda = 1$. Our goal is to check that:
$$\frac{(n b_n)^{1/2}\big(\hat\eta_\alpha(X) - \eta_\alpha(X)\big)}{\sigma_\eta}$$
has a standard normal distribution. By using the function exprnd of MATLAB, 500 samples of size $n$, whose parent distribution is exponential with parameter 1, are generated. These data satisfy the assumption in (6). Hence, by using the fixed parameters, the functions $\hat f_n$ and $\hat{\bar F}_n$ are obtained for each sample and finally the kernel estimator of the CRTE is computed. This procedure is repeated for $\alpha = 1.5$, 2 and 3. Then, in order to check the asymptotic normality of the estimator in (39), the histogram in Figure 1 is displayed.
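The MATLAB experiment can be replicated in outline; the sketch below is our Python re-implementation with a reduced replication count (200 instead of 500), not the authors' code. It standardizes the kernel CRTE estimates as in (39) and reports their mean and spread, which should be compatible with a standard normal limit.

```python
import math, random

ALPHA, LAM, N, REPS = 1.5, 1.0, 50, 200  # paper: 500 replications of size 50

def crte_hat(data):
    """Kernel CRTE estimator with Gaussian kernel and b_j = j**-0.5 (the
    choices of this section); the survival integral is computed by the
    trapezoidal rule on [0, 12]."""
    n = len(data)
    norm = n * (n ** -0.5) ** 0.5 * (4.0 / 3.0)  # n * sqrt(b_n) * beta_{1/2}
    def sf(t):  # kernel survival estimator via normal tails
        return sum(j ** -0.25 * 0.5 * math.erfc((t - xj) * j ** 0.5 / math.sqrt(2.0))
                   for j, xj in enumerate(data, start=1)) / norm
    h, m = 0.05, 240
    tot = sum(sf(i * h) ** ALPHA * (0.5 if i in (0, m) else 1.0) for i in range(m + 1))
    return (1.0 - tot * h) / (ALPHA - 1.0)

rng = random.Random(3)
eta = (LAM * ALPHA - 1.0) / (LAM * ALPHA * (ALPHA - 1.0))  # true CRTE
sigma = math.sqrt(9.0 * ALPHA ** 2 /
                  (32.0 * math.sqrt(math.pi) * (ALPHA - 1.0) ** 2 * LAM * (2.0 * ALPHA - 1.0)))
scale = (N * N ** -0.5) ** 0.5  # (n * b_n)**0.5
z = [scale * (crte_hat([rng.expovariate(LAM) for _ in range(N)]) - eta) / sigma
     for _ in range(REPS)]
mean = sum(z) / REPS
sd = math.sqrt(sum((v - mean) ** 2 for v in z) / (REPS - 1))
print(mean, sd)  # approaches (0, 1) as n grows
```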

5. Conclusions

In this paper, non-parametric kernel-type estimators for the CRTE and DCRTE were proposed for observations which exhibit ρ-mixing dependence. The bias and variance of the proposed estimators were evaluated. Moreover, it was proven that these estimators are consistent, and a Monte Carlo simulation was carried out to illustrate their asymptotic normality.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Francesco Buono and Maria Longobardi are members of the research group GNAMPA of the Istituto Nazionale di Alta Matematica (INdAM) and are partially supported by MIUR-PRIN 2017, project “Stochastic Models for Complex Systems”, no. 2017JFFHSH. The present work was developed within the activities of the project 000009_ALTRI_CDA_75_2021_FRA_LINEA_B funded by “Programma per il finanziamento della ricerca di Ateneo - Linea B” of the University of Naples Federico II.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
cdf — cumulative distribution function
CRE — cumulative residual entropy
CRTE — cumulative residual Tsallis entropy
DCRTE — dynamic cumulative residual Tsallis entropy
MISE — mean integrated squared error
MSE — mean squared error
pdf — probability density function
sf — survival function

References

1. Shannon, C. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–432.
2. Havrda, J.; Charvát, F. Quantification method of classification processes. Concept of structural α-entropy. Kybernetika 1967, 3, 30–35.
3. Tsallis, C. Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487.
4. Reis, M.S.; Freitas, J.C.C.; Orlando, M.T.D.; Lenzi, E.K.; Oliveira, I.S. Evidences for Tsallis non-extensivity on CMR manganites. Europhys. Lett. 2002, 58, 42–48.
5. Plastino, A.R.; Plastino, A. Stellar polytropes and Tsallis’ entropy. Phys. Lett. A 1993, 174, 384–386.
6. Arimitsu, T.; Arimitsu, N. Analysis of fully developed turbulence in terms of Tsallis statistics. Phys. Rev. E 2000, 61, 3237–3240.
7. de Lima, I.P.; da Silva, S.L.E.F.; Corso, G.; de Araújo, J.M. Tsallis entropy, likelihood, and the robust seismic inversion. Entropy 2020, 22, 464.
8. Batle, J.; Plastino, A.R.; Casas, M.; Plastino, A. Conditional q-entropies and quantum separability: A numerical exploration. J. Phys. A 2002, 35, 10311–10324.
9. Cartwright, J. Roll over, Boltzmann. Phys. World 2014, 27, 31–35.
10. Balakrishnan, N.; Buono, F.; Longobardi, M. On Tsallis extropy with an application to pattern recognition. Stat. Probab. Lett. 2022, 180, 109241.
11. Rao, M.; Chen, Y.; Vemuri, B.C.; Wang, F. Cumulative residual entropy: A new measure of information. IEEE Trans. Inf. Theory 2004, 50, 1220–1228.
12. Rao, M. More on a new concept of entropy and information. J. Theor. Probab. 2005, 18, 967–981.
13. Psarrakos, G.; Toomaj, A. On the generalized cumulative residual entropy with application in actuarial science. J. Comput. Appl. Math. 2017, 309, 186–199.
14. Sati, M.M.; Gupta, N. Some characterization results on dynamic cumulative residual Tsallis entropy. J. Probab. Stat. 2015, 1–8.
15. Khammar, A.H.; Jahanshahi, S.M.A. On weighted cumulative residual Tsallis entropy and its dynamic version. Phys. A Stat. Mech. Appl. 2018, 491, 678–692.
16. Sunoj, S.M.; Krishnan, A.S.; Sankaran, P.G. A quantile-based study of cumulative residual Tsallis entropy measures. Phys. A Stat. Mech. Appl. 2018, 494, 410–421.
17. Mohamed, M.S. On cumulative residual Tsallis entropy and its dynamic version of concomitants of generalized order statistics. Commun. Stat. Theory Methods 2020, 1–18.
18. Toomaj, A.; Atabay, H.A. Some new findings on the cumulative residual Tsallis entropy. J. Comput. Appl. Math. 2022, 400, 113669.
19. Bradley, R.C. Central limit theorems under weak dependence. J. Multivar. Anal. 1981, 11, 1–16.
20. Masry, E. Probability density estimation from sampled data. IEEE Trans. Inf. Theory 1986, 32, 254–267.
21. Masry, E.; Györfi, L. Strong consistency and rates for recursive probability density estimators of stationary processes. J. Multivar. Anal. 1987, 22, 79–93.
22. Boente, G. Consistency of a nonparametric estimate of a density function for dependent variables. J. Multivar. Anal. 1988, 25, 90–99.
23. Rosenblatt, M. A central limit theorem and a strong mixing condition. Proc. Natl. Acad. Sci. USA 1956, 42, 43–47.
24. Ibragimov, I.A. Some limit theorems for stochastic processes stationary in the strict sense. Dokl. Akad. Nauk SSSR 1959, 125, 711–714.
25. Kolmogorov, A.N.; Rozanov, Y.A. On strong mixing conditions for stationary Gaussian processes. Theory Probab. Appl. 1960, 5, 204–208.
26. Bradley, R.C. Basic properties of strong mixing conditions. A survey and some open questions. Probab. Surv. 2005, 2, 107–144.
27. Rajesh, G.; Abdul-Sathar, E.I.; Maya, R. Local linear estimation of residual entropy function of conditional distributions. Comput. Stat. Data Anal. 2015, 88, 1–14.
28. Maya, R.; Irshad, M.R. Kernel estimation of Mathai–Haubold entropy and residual Mathai–Haubold entropy functions under α-mixing dependence condition. Am. J. Math. Manag. Sci. 2021, 1–12.
29. Maya, R.; Irshad, M.R. Kernel estimation of residual extropy function under α-mixing dependence condition. S. Afr. Stat. J. 2019, 53, 65–72.
30. Irshad, M.R.; Maya, R. Nonparametric estimation of past extropy under α-mixing dependence condition. Ric. Mat. 2021.
31. Maya, R.; Irshad, M.R.; Archana, K. Recursive and non-recursive kernel estimation of negative cumulative residual extropy function under α-mixing dependence condition. Ric. Mat. 2021.
32. Wegman, E.J.; Davies, H.I. Remarks on some recursive estimators of a probability density. Ann. Stat. 1979, 7, 316–327.
33. Wegman, E.J. Nonparametric probability density estimation: I. A summary of available methods. Technometrics 1972, 14, 533–546.
Figure 1. Histogram of (39) with parameters given in Section 4 and different choices of α .
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cite as:
Irshad, M.R.; Maya, R.; Buono, F.; Longobardi, M. Kernel Estimation of Cumulative Residual Tsallis Entropy and Its Dynamic Version under ρ-Mixing Dependent Data. Entropy 2022, 24, 9. https://0-doi-org.brum.beds.ac.uk/10.3390/e24010009
