Article

A Generalized Measure of Cumulative Residual Entropy

by Sudheesh Kumar Kattumannil 1,*, E. P. Sreedevi 2 and Narayanaswamy Balakrishnan 3

1 Applied Statistics Unit, Indian Statistical Institute, Chennai 600029, India
2 Department of Statistics, SNGS College, Pattambi 679306, India
3 Department of Mathematics and Statistics, McMaster University, Hamilton, ON L8S 4K1, Canada
* Author to whom correspondence should be addressed.
Submission received: 17 January 2022 / Revised: 19 March 2022 / Accepted: 21 March 2022 / Published: 23 March 2022
(This article belongs to the Special Issue Measures of Information II)

Abstract

In this work, we introduce a generalized measure of cumulative residual entropy and study its properties. We show that several existing measures of entropy, such as cumulative residual entropy, weighted cumulative residual entropy and cumulative residual Tsallis entropy, are all special cases of this generalized cumulative residual entropy. We also propose a measure of generalized cumulative entropy, which includes cumulative entropy, weighted cumulative entropy and cumulative Tsallis entropy as special cases. We discuss a generating function approach, using which we derive different entropy measures. We provide residual and cumulative versions of Sharma–Taneja–Mittal entropy and obtain them as special cases of this generalized measure of entropy. Finally, using the newly introduced entropy measures, we establish some relationships between entropy and extropy measures.

1. Introduction

The uncertainty associated with a random variable can be evaluated using information measures. In many practical situations in lifetime data analysis, experimental physics, econometrics and demography, measuring the uncertainty associated with a random variable is very important. The seminal work on information theory started with the concept of Shannon entropy or differential entropy introduced by Shannon (1948) [1]. For an absolutely continuous non-negative random variable X, the differential entropy is given by
H(X) = E(-\log f(X)) = -\int_0^{\infty} f(x)\log f(x)\,dx,
where f(x) is the probability density function of X and "log" stands for the natural logarithm, with 0 log 0 taken as 0.
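As a quick illustration (ours, not part of the paper), the differential entropy of an exponential distribution with a given rate can be computed by quadrature and compared against the known closed form 1 − log(rate):

```python
# Numerical check of the differential entropy H(X) for X ~ Exp(rate),
# where the closed form is H(X) = 1 - log(rate).
import numpy as np
from scipy.integrate import quad

rate = 2.0
f = lambda x: rate * np.exp(-rate * x)  # density of Exp(rate)
H, _ = quad(lambda x: -f(x) * np.log(f(x)), 0, np.inf)
print(H, 1 - np.log(rate))  # both ~0.3069
```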
Several measures of entropy have been introduced in the literature since then, each one being suitable for some specific situations. The widely used measures of entropy are the cumulative residual entropy (CRE) [2], the cumulative entropy (CE) [3] and the corresponding weighted measures of Mirali et al. (2016) [4] and Mirali and Baratpour (2017) [5]. A unified formulation of entropy has recently been put forward by Balakrishnan et al. (2022) [6]. For a non-negative random variable X with distribution function F(x), the cumulative residual entropy, which measures the uncertainty in the future lifetime of a system, is defined as
\mathcal{E}(X) = -\int_0^{\infty} \bar{F}(x)\log\bar{F}(x)\,dx,
where F̄(x) = 1 − F(x) is the survival function of X. Asadi and Zohrevand (2007) [7] gave a representation of the CRE based on the mean residual life function as
\mathcal{E}(X) = E(r(X)),
where r(t) is the mean residual life function of X at time t, given by
r(t) = E(X - t \mid X > t) = \frac{\int_t^{\infty} \bar{F}(u)\,du}{\bar{F}(t)}.
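For instance (a check of ours, not from the paper), the exponential distribution has constant mean residual life, so the representation ℰ(X) = E(r(X)) gives ℰ(X) = 1/rate; direct quadrature of the defining integral agrees:

```python
# CRE of Exp(rate) two ways: by its defining integral and via E(r(X)) = 1/rate.
import numpy as np
from scipy.integrate import quad

rate = 2.0
Fbar = lambda x: np.exp(-rate * x)  # survival function of Exp(rate)
cre, _ = quad(lambda x: -Fbar(x) * np.log(Fbar(x)), 0, np.inf)
print(cre, 1 / rate)  # both 0.5
```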
Di Crescenzo and Longobardi (2009) [3] introduced cumulative entropy for estimating the uncertainty in the past lifetime of a system as
\mathcal{CE}(X) = -\int_0^{\infty} F(x)\log F(x)\,dx.
The weighted versions of ℰ(X) and CE(X) have been studied in the literature as well. The weighted cumulative residual entropy, introduced by Mirali et al. (2016) [4], is defined as
\mathcal{E}^w(X) = -\int_0^{\infty} x\,\bar{F}(x)\log\bar{F}(x)\,dx.
Mirali and Baratpour (2017) [5] introduced weighted cumulative entropy as
\mathcal{CE}^w(X) = -\int_0^{\infty} x\,F(x)\log F(x)\,dx.
A detailed discussion of weighted entropies has been provided by Suhov and Sekeh (2015) [8]. Some additional recent developments in this area are due to [9,10,11,12].
Recently, Kharazmi and Balakrishnan (2020) [13] proposed the Jensen cumulative residual entropy, an extension of the CRE. Kharazmi and Balakrishnan (2020) [14] then studied cumulative residual and relative cumulative residual Fisher information measures and their properties. More general cumulative residual information generating and relative cumulative residual information generating measures have been introduced and studied by Kharazmi and Balakrishnan (2021) [15]. The fractional generalized cumulative entropy and its dynamic version have been proposed by Di Crescenzo et al. (2021) [16].
Several extensions of Shannon entropy are available in the literature, obtained by introducing additional parameters so that the resulting measures become sensitive to different characteristics and shapes of probability distributions. One important generalization is the Tsallis entropy of order α, due to Tsallis (1988) [17]. For a continuous random variable X, it is defined as [17]
T_\alpha(X) = \frac{1}{\alpha-1}\left(1 - \int_0^{\infty} f^{\alpha}(x)\,dx\right), \quad \alpha \neq 1.
Many extensions and modifications of T_α(X) have also been proposed. Sati and Gupta (2015) [18] proposed a cumulative residual Tsallis entropy of order α, and Rajesh and Sunoj (2019) [19] modified it and defined the cumulative residual Tsallis entropy of order α as
CRT_\alpha(X) = \frac{1}{\alpha-1}\int_0^{\infty}\big(\bar{F}(x) - \bar{F}^{\alpha}(x)\big)\,dx, \quad \alpha>0,\ \alpha\neq 1.
For a non-negative continuous random variable X, Chakraborty and Pradhan (2021) [20] defined weighted cumulative residual Tsallis entropy (WCRTE) of order α as
WCRT_\alpha(X) = \frac{1}{\alpha-1}\int_0^{\infty} x\big(\bar{F}(x) - \bar{F}^{\alpha}(x)\big)\,dx, \quad \alpha>0,\ \alpha\neq 1.
They also introduced dynamic weighted cumulative residual Tsallis entropy of order α .
Calì et al. (2017) [9] introduced cumulative Tsallis past entropy as
CT_\alpha(X) = \frac{1}{\alpha-1}\int_0^{\infty}\big(F(x) - F^{\alpha}(x)\big)\,dx, \quad \alpha>0,\ \alpha\neq 1.
Calì et al. (2021) [21] subsequently introduced a family of mean past weighted entropies of order α , using the concept of mean inactivity time. Chakraborty and Pradhan (2021) [20] defined weighted cumulative Tsallis entropy (WCTE) of order α as
WCT_\alpha(X) = \frac{1}{\alpha-1}\int_0^{\infty} x\big(F(x) - F^{\alpha}(x)\big)\,dx, \quad \alpha>0,\ \alpha\neq 1.
They also studied dynamic weighted cumulative Tsallis entropy of order α . As is evident from the description above, several entropy measures are available in the literature. Recently, Balakrishnan et al. (2022) [22] have provided a unified formulation of entropy and demonstrated its applications.
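To fix ideas, the following small sketch (ours, not part of the paper) evaluates the four cumulative Tsallis measures above for the standard exponential distribution, for which CRT_α(X) = 1/α in closed form:

```python
# The four cumulative Tsallis measures for X ~ Exp(1), computed by quadrature.
import numpy as np
from scipy.integrate import quad

alpha = 2.0
F = lambda x: 1 - np.exp(-x)   # distribution function
Fbar = lambda x: np.exp(-x)    # survival function

crt, _ = quad(lambda x: (Fbar(x) - Fbar(x) ** alpha) / (alpha - 1), 0, np.inf)
wcrt, _ = quad(lambda x: x * (Fbar(x) - Fbar(x) ** alpha) / (alpha - 1), 0, np.inf)
ct, _ = quad(lambda x: (F(x) - F(x) ** alpha) / (alpha - 1), 0, np.inf)
wct, _ = quad(lambda x: x * (F(x) - F(x) ** alpha) / (alpha - 1), 0, np.inf)
print(crt, 1 / alpha)   # 0.5 = 1/alpha
print(wcrt, ct, wct)    # 0.75, 0.5, 0.75
```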
In the present work, we define a generalized cumulative residual entropy and study its properties in Section 2. We show that cumulative residual entropy, weighted cumulative residual entropy, cumulative residual Tsallis entropy and weighted cumulative residual Tsallis entropy are all special cases of the proposed measure. We also propose a new generalized cumulative entropy measure and discuss some of its properties. We use the generating function approach to obtain some new entropy measures. In Section 3, we provide cumulative (residual) versions of Sharma–Taneja–Mittal entropy and obtain them as special cases of the generalized measures of entropy introduced in Section 2. In Section 4, we establish some relationships between entropy and extropy measures. Finally, we make some concluding remarks in Section 5.

2. Generalized Cumulative Entropy

In this section, we introduce generalized cumulative (residual) entropy measures. We then show that several entropy measures are special cases of the proposed entropies. Some generalizations of CRE and CE have been discussed in the literature, and we now review these briefly. Drissi et al. (2008) [23] generalized the definition of CRE, given by Rao et al. (2004) [2], to the case of distributions with general support. They also showed that this generalized CRE can be used as an alternative to differential entropy. Kayal (2016) [24] introduced a generalization of the CE proposed by Di Crescenzo and Longobardi (2009) [3], along with its dynamic version; this definition is related to lower records and the reversed relevation transform. Psarrakos and Navarro (2013) [25] proposed a generalized cumulative residual entropy (GCRE), related to record values from a sequence of independent and identically distributed random variables and to the relevation transform. Some properties and applications of the GCRE in actuarial risk measures have been discussed by Psarrakos and Toomaj (2017) [26]. Under some assumptions, Navarro and Psarrakos (2017) [27] proved that the GCRE function of a fixed order n uniquely determines the distribution function; consequently, some characterizations of particular probability models have been obtained from this general result. Di Crescenzo and Toomaj (2017) [28] obtained some further results associated with the generalized cumulative entropy, such as stochastic orders, bounds and characterization results, and also derived some characterizations for the dynamic generalized cumulative entropy. Recently, Di Crescenzo et al. (2021) [16] proposed the fractional generalized cumulative entropy and its dynamic version.
We introduce here generalized CRE and CE, which encompass most of the existing variations of these measures, as demonstrated below.

2.1. Generalized Cumulative Residual Entropy

Let X be a non-negative random variable with absolutely continuous distribution function F(x) and survival function F̄(x). We assume that the mean μ = E(X) is finite.
Definition 1.
Let X be a non-negative random variable with absolutely continuous distribution function F. Further, let ϕ(·) be a function of X and w(·) be a weight function. Then, the generalized cumulative residual entropy of X is defined as
GE(X) = \int_0^{\infty} w(u)\,E\big(\phi(X) - \phi(u) \mid X > u\big)\,dF(u),
where w and ϕ can be chosen arbitrarily, subject to the existence of the above integral, such that GE(X) becomes concave.
Entropy is defined as a measure of uncertainty associated with a model, and strict concavity implies that entropy increases under averaging. The general definition given here contains two arbitrary functions ϕ and w, and though it is difficult to state general conditions under which concavity of the general measure holds, one can choose these functions so that GE(X) becomes concave.
With different choices of the weight function w(·) and of ϕ(·), we can thus obtain several measures of entropy. First, we show that the new measure reduces to the cumulative residual entropy of Rao et al. (2004) [2] and to the weighted cumulative residual entropy of Mirali et al. (2016) [4] for specific choices of ϕ(·) and w(·).
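To make Definition 1 concrete, here is a small numerical sketch (ours, not part of the paper) that evaluates GE(X) by nested quadrature; with w(u) = 1 and ϕ(x) = x it recovers the CRE of the standard exponential distribution, which equals 1:

```python
# Direct numerical evaluation of GE(X) = int w(u) E[phi(X)-phi(u) | X>u] dF(u).
# The support is truncated at 40, which is ample for Exp(1).
import numpy as np
from scipy.integrate import quad

def GE(f, Fbar, w, phi, upper=40.0):
    def cond_mean(u):  # E[phi(X) - phi(u) | X > u]
        num, _ = quad(lambda x: (phi(x) - phi(u)) * f(x), u, upper)
        return num / Fbar(u)
    return quad(lambda u: w(u) * cond_mean(u) * f(u), 0, upper)[0]

f = lambda x: np.exp(-x)       # density of Exp(1)
Fbar = lambda x: np.exp(-x)    # survival function of Exp(1)
print(GE(f, Fbar, w=lambda u: 1.0, phi=lambda x: x))  # ~1.0, the CRE of Exp(1)
```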
Using integration by parts, from the definition of ℰ(X), we obtain (see [29])
\mathcal{E}(X) = \int_0^{\infty} x\big[-\log\bar{F}(x)\big]\,dF(x) - E(X).
Let us denote the hazard rate of X by
\lambda(x) = \frac{f(x)}{\bar{F}(x)},
where f is the density function of X. Then, the cumulative hazard function Λ(x) can be expressed as
\Lambda(x) = \int_0^x \lambda(u)\,du = -\log\bar{F}(x).
Now, with w(u) = 1 and ϕ(x) = x, the expression in Definition 1 becomes
GE(X) = \int_0^{\infty} E(X-u \mid X>u)\,dF(u)
= \int_0^{\infty} E(X \mid X>u)f(u)\,du - \int_0^{\infty} u\,dF(u)
= \int_0^{\infty} \frac{f(u)}{\bar{F}(u)}\int_u^{\infty} x\,dF(x)\,du - E(X)
= \int_0^{\infty} x\int_0^x \frac{f(u)}{\bar{F}(u)}\,du\,dF(x) - E(X)
= \int_0^{\infty} x\int_0^x \lambda(u)\,du\,dF(x) - E(X)
= \int_0^{\infty} x\big[-\log\bar{F}(x)\big]\,dF(x) - E(X).
Thus, GE(X) reduces in this case to the cumulative residual entropy of Rao et al. (2004) [2].
The weighted cumulative residual entropy ℰ^w(X) defined above can be written as [29]
\mathcal{E}^w(X) = \frac{1}{2}\int_0^{\infty} x^2\big[-\log\bar{F}(x)\big]\,dF(x) - \frac{1}{2}E(X^2).
If we choose w(u) = 1/2 and ϕ(x) = x² and proceed as above, we can show that Definition 1 gives
GE(X) = \frac{1}{2}\int_0^{\infty} E(X^2-u^2 \mid X>u)\,dF(u) = \frac{1}{2}\int_0^{\infty} x^2\big[-\log\bar{F}(x)\big]\,dF(x) - \frac{1}{2}E(X^2).
Thus, GE(X) reduces in this case to the weighted cumulative residual entropy ℰ^w(X).
Next, we show that the cumulative residual Tsallis entropy of order α is a special case of GE(X). An alternative representation of CRT_α(X) is [19]
CRT_\alpha(X) = E\big(r(X)\,\bar{F}^{\alpha-1}(X)\big),
where r(x) = E(X − x | X > x) is the mean residual life function. If we now choose w(u) = F̄^{α−1}(u) and ϕ(x) = x, Definition 1 becomes
GE(X) = \int_0^{\infty} E(X-u \mid X>u)\,\bar{F}^{\alpha-1}(u)\,dF(u) = E\big(r(X)\,\bar{F}^{\alpha-1}(X)\big).
Chakraborty and Pradhan (2021) [20] expressed the weighted cumulative residual Tsallis entropy of order α as
WCRT_\alpha(X) = \int_0^{\infty} \left(\frac{1}{\bar{F}(t)}\int_t^{\infty} x\bar{F}(x)\,dx\right)\bar{F}^{\alpha-1}(t)\,dF(t).
Upon noting that
\frac{1}{\bar{F}(t)}\int_t^{\infty} x\bar{F}(x)\,dx = \frac{1}{\bar{F}(t)}\int_t^{\infty} x\int_x^{\infty} dF(y)\,dx = \frac{1}{\bar{F}(t)}\int_t^{\infty}\left(\int_t^y x\,dx\right)dF(y) = \frac{1}{\bar{F}(t)}\int_t^{\infty} \frac{1}{2}(y^2-t^2)\,dF(y) = \frac{1}{2}E(X^2-t^2 \mid X>t),
the above expression becomes
WCRT_\alpha(X) = \frac{1}{2}\int_0^{\infty} E(X^2-t^2 \mid X>t)\,\bar{F}^{\alpha-1}(t)\,dF(t).
Now, for the choices w(u) = F̄^{α−1}(u) and ϕ(x) = x²/2, Definition 1 yields exactly this expression. Thus, WCRT_α(X) is a special case of GE(X) as well. The special cases of GE(X) discussed here are all listed in Table 1.
Next, we derive expressions for GE(X) for some specific distributions.
Consider the exponential distribution with mean λ. Then, it is well known that the mean residual life is constant and equal to the mean. Thus, with ϕ(x) = x and w(u) = 1, we have
GE(X) = \int_0^{\infty} w(u)\,\lambda\,dF(u) = \lambda.
In general, GE(X) = λE(w(X)) is a constant for any weight function. For the standard exponential distribution, taking ϕ(x) = x², the memoryless property gives E(X² − u² | X > u) = 2(u + 1), and so
GE(X) = \int_0^{\infty} w(u)\,E(X^2 - u^2 \mid X > u)\,e^{-u}\,du = \int_0^{\infty} 2\,w(u)(u + 1)\,e^{-u}\,du.
Thus, GE(X) = 4 when w(u) = 1 and GE(X) = 1.5 when w(u) = F̄(u).
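A quadrature check of these two values (ours, not part of the paper), using E(X² − u² | X > u) = 2(u + 1) for the standard exponential:

```python
# GE(X) for Exp(1) with phi(x) = x^2, for the two weight functions used above.
import numpy as np
from scipy.integrate import quad

g = lambda u: 2 * (u + 1) * np.exp(-u)  # E(X^2 - u^2 | X > u) * f(u)
print(quad(g, 0, np.inf)[0])                            # 4.0, for w(u) = 1
print(quad(lambda u: np.exp(-u) * g(u), 0, np.inf)[0])  # 1.5, for w(u) = Fbar(u)
```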
Next, let us consider the standard uniform distribution, with pdf f(x) = 1 for 0 < x < 1. Then, taking ϕ(x) = x,
GE(X) = \int_0^1 w(u)\,\frac{1}{1-u}\int_u^1 (1-x)\,dx\,du = \int_0^1 w(u)\,\frac{(1-u)^2}{2(1-u)}\,du = \frac{1}{2}\int_0^1 w(u)(1-u)\,du.
For w(u) = 1, we thus obtain the cumulative residual entropy as 1/4. Additionally, when w(u) = 1 and ϕ(x) = x²/2, we get
GE(X) = \frac{1}{6}\int_0^1 (1-u)(2u+1)\,du = \frac{5}{36},
as given in Example 5 of Balakrishnan et al. (2022) [29].
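Both uniform values can be confirmed numerically (our check, not from the paper):

```python
# The two GE(X) values for the standard uniform distribution.
from scipy.integrate import quad

print(quad(lambda u: 0.5 * (1 - u), 0, 1)[0])            # 0.25, w = 1, phi(x) = x
print(quad(lambda u: (1 - u) * (2*u + 1) / 6, 0, 1)[0])  # 5/36 ~ 0.1389
```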

2.2. Generalized Cumulative Entropy

In this sub-section, we introduce a generalized cumulative entropy and discuss some of its properties.
Definition 2.
Let X be a non-negative random variable with absolutely continuous distribution function F. Further, let ϕ(·) be a function of X and w(·) be a weight function. Then, the generalized cumulative entropy is defined as
GCE(X) = \int_0^{\infty} w(u)\,E\big(\phi(u) - \phi(X) \mid X \le u\big)\,dF(u),
where w and ϕ can be chosen arbitrarily, subject to the existence of the above integral, such that GCE(X) becomes concave.
For the choices w(u) = 1 and ϕ(x) = x, Definition 2 reduces to CE(X) [3]. Similarly, GCE(X) reduces to the weighted cumulative entropy of Mirali and Baratpour (2017) [5] for the choices w(u) = 1/2 and ϕ(x) = x².
The reversed hazard rate function of X, denoted by h(·), is defined as
h(x) = \frac{f(x)}{F(x)},
which yields the cumulative reversed hazard rate function as
H(x) = \int_x^{\infty} \frac{f(u)}{F(u)}\,du = -\log F(x).
The cumulative entropy CE(X) can be expressed as (see [29])
CE(X) = \int_0^{\infty} x\log F(x)\,dF(x) + E(X).
Thus, by using the cumulative reversed hazard rate function, we can express
CE(X) = \int_0^{\infty} x\log F(x)\,dF(x) + E(X)
= E(X) - \int_0^{\infty} x\left(\int_x^{\infty} \frac{f(u)}{F(u)}\,du\right)dF(x)
= E(X) - \int_0^{\infty} \frac{1}{F(u)}\left(\int_0^u x\,dF(x)\right)dF(u)
= \int_0^{\infty} E(u-X \mid X\le u)\,dF(u),
which is the special case of the generalized cumulative entropy in Definition 2 for the choices w(u) = 1 and ϕ(x) = x. Proceeding similarly, we can show that GCE(X) reduces to the weighted cumulative entropy [5] for the choices w(u) = 1/2 and ϕ(x) = x².
Next, we show that the cumulative Tsallis past entropy of order α is a special case of GCE(X). The mean inactivity time function of a random variable X, at time x, is defined as
m(x) = E(x-X \mid X\le x) = \frac{1}{F(x)}\int_0^x F(y)\,dy.
Using m(x), CT_α(X) can be expressed as [9]
CT_\alpha(X) = E\big(m(X)\,F^{\alpha-1}(X)\big).
Now, for the choices w(u) = F^{α−1}(u) and ϕ(x) = x, Definition 2 yields
GCE(X) = \int_0^{\infty} F^{\alpha-1}(u)\,E\big(u-X \mid X\le u\big)\,dF(u) = E\big(m(X)\,F^{\alpha-1}(X)\big).
An alternative expression for the weighted cumulative Tsallis entropy of order α is given by [20]
WCT_\alpha(X) = \int_0^{\infty} \left(\frac{1}{F(t)}\int_0^t xF(x)\,dx\right)F^{\alpha-1}(t)\,dF(t).
As in Section 2.1, simple algebraic manipulations yield
WCT_\alpha(X) = \frac{1}{2}\int_0^{\infty} E(t^2-X^2 \mid X\le t)\,F^{\alpha-1}(t)\,dF(t).
Again, for the choices w(u) = F^{α−1}(u) and ϕ(x) = x²/2, Definition 2 yields
GCE(X) = \frac{1}{2}\int_0^{\infty} E\big(u^2-X^2 \mid X\le u\big)\,F^{\alpha-1}(u)\,dF(u).
Thus, WCT_α(X) is a special case of GCE(X). In Table 2, we list the cumulative entropies derived from GCE(X).
Next, we derive expressions for GCE(X) for some specific distributions. Consider the standard exponential distribution, with mean λ = 1. Then, for the choices w(u) = 1 and ϕ(x) = x, we have
GCE(X) = \int_0^{\infty} \frac{1}{1-e^{-u}}\left(\int_0^u (u-x)e^{-x}\,dx\right)e^{-u}\,du = \int_0^{\infty} \frac{(u-1+e^{-u})\,e^{-u}}{1-e^{-u}}\,du = \int_0^{\infty} \frac{u\,e^{-u}}{1-e^{-u}}\,du - 1 = \frac{\pi^2}{6} - 1.
Next, let us consider the standard uniform distribution with pdf f(x) = 1, 0 < x < 1. Then, for the choices w(u) = 1 and ϕ(x) = x, we obtain
GCE(X) = \int_0^1 \frac{1}{u}\int_0^u (u-x)\,dx\,du = \frac{1}{2}\int_0^1 u\,du = \frac{1}{4}.
These two examples have been presented earlier by Balakrishnan et al. (2022) [29].
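Both GCE values can again be confirmed by quadrature (our check, not from the paper):

```python
# GCE(X) for the standard exponential and the standard uniform distributions.
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda u: u * np.exp(-u) / (1 - np.exp(-u)), 1e-12, np.inf)
print(val - 1, np.pi**2 / 6 - 1)       # both ~0.6449 (exponential case)
print(quad(lambda u: u / 2, 0, 1)[0])  # 0.25 (uniform case)
```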

2.3. Generating Function

We now introduce generating functions related to the generalized entropy measures discussed in the preceding sections.
Definition 3.
Let X be a non-negative random variable with absolutely continuous distribution function F. Further, let ϕ(·) be a function of X and w(·) be a weight function. We then define a generating function for the generalized cumulative residual entropy measure as
G_{fre}(t) = \int_0^{\infty} w(u)\,E\big(e^{t\phi(X)} - e^{t\phi(u)} \mid X>u\big)\,dF(u).
Being a function of t, G_{fre}(t) can be interpreted as a generating function for the general entropy measure introduced in Section 2. If we differentiate this expression with respect to t once, we obtain
G'_{fre}(t) = \int_0^{\infty} w(u)\,E\big(\phi(X)e^{t\phi(X)} - \phi(u)e^{t\phi(u)} \mid X>u\big)\,dF(u).
Now, by setting t = 0 in the above expression, we obtain the generalized entropy measure GE(X) of Definition 1. We, therefore, refer to it as the generalized cumulative residual entropy of order 1. Higher-order derivatives with respect to t would similarly give rise to generalized cumulative residual entropies of orders 2, 3 and so on. For example, the generalized cumulative residual entropy of order two is given by
G''_{fre}(0) = \int_0^{\infty} w(u)\,E\big(\phi^2(X) - \phi^2(u) \mid X>u\big)\,dF(u).
For the choices w(u) = 1/2 and ϕ(x) = x, G''_{fre}(0) reduces to the weighted cumulative residual entropy of Mirali et al. (2016) [4].
In a similar manner, we define the generating function for the generalized cumulative entropy as follows.
Definition 4.
Let X be a non-negative random variable with absolutely continuous distribution function F. Further, let ϕ(·) be a function of X and w(·) be a weight function. We then define the generating function for the generalized cumulative entropy as
G_{fce}(t) = \int_0^{\infty} w(u)\,E\big(e^{t\phi(u)} - e^{t\phi(X)} \mid X\le u\big)\,dF(u).
Once again, differentiating G_{fce}(t) with respect to t once and setting t = 0, we obtain the generalized cumulative entropy (of order 1) of Definition 2. We can similarly obtain the generalized cumulative entropy of order 2 to be
G''_{fce}(0) = \int_0^{\infty} w(u)\,E\big(\phi^2(u) - \phi^2(X) \mid X\le u\big)\,dF(u).
The weighted cumulative entropy of Mirali and Baratpour (2017) [5] can be obtained from this expression for the choices w(u) = 1/2 and ϕ(x) = x. Naturally, higher-order derivatives with respect to t would give rise to generalized cumulative entropies of orders 3, 4 and so on.
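As a numerical illustration of the generating-function route (ours, not part of the paper), differentiating G_fre(t) at t = 0 by a central difference recovers GE(X) for the standard exponential with w(u) = 1 and ϕ(x) = x, namely the CRE value 1:

```python
# G_fre(t) for Exp(1), w = 1, phi(x) = x; its derivative at 0 equals GE(X) = 1.
import numpy as np
from scipy.integrate import quad

def G_fre(t, upper=40.0):
    f = lambda x: np.exp(-x)
    Fbar = lambda x: np.exp(-x)
    def inner(u):  # E[exp(t phi(X)) - exp(t phi(u)) | X > u]
        num, _ = quad(lambda x: (np.exp(t * x) - np.exp(t * u)) * f(x), u, upper)
        return num / Fbar(u)
    return quad(lambda u: inner(u) * f(u), 0, upper)[0]

h = 1e-4
print((G_fre(h) - G_fre(-h)) / (2 * h))  # ~1.0
```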

3. Sharma–Taneja–Mittal Entropy

In this section, we introduce the cumulative (residual) versions of Sharma–Taneja–Mittal (STM) entropy. We then show that these are indeed special cases of the generalized cumulative residual and cumulative entropies introduced in Section 2.
Sharma and Taneja (1975) [30] and Mittal (1975) [31] independently introduced an entropy of the form
S_{\alpha,\beta} = \frac{1}{\beta-\alpha}\int_0^{\infty}\big(f^{\alpha}(x) - f^{\beta}(x)\big)\,dx, \quad \alpha,\beta>0,\ (\alpha,\beta)\neq(1,1).
For different choices of α and β, we obtain from this several entropy measures discussed in the literature. In particular, for α = 1−κ and β = 1+κ, we obtain the Kaniadakis entropy [32] as a special case of the STM entropy. For more details, see [33], along with Table 1 of Ilić et al. (2021) [34].

3.1. Sharma–Taneja–Mittal Cumulative Residual Entropy

In this sub-section, we introduce the cumulative residual version of the STM entropy.
Definition 5.
Let X be a non-negative random variable with absolutely continuous survival function F̄. Then, the cumulative residual STM entropy is defined as
SR_{\alpha,\beta} = \frac{1}{\beta-\alpha}\int_0^{\infty}\big(\bar{F}^{\alpha}(x) - \bar{F}^{\beta}(x)\big)\,dx, \quad \alpha,\beta>0,\ (\alpha,\beta)\neq(1,1).
We also introduce the weighted cumulative residual STM entropy as follows.
Definition 6.
Let X be a non-negative random variable with absolutely continuous survival function F̄. Then, the weighted cumulative residual STM entropy is defined as
SRW_{\alpha,\beta} = \frac{1}{\beta-\alpha}\int_0^{\infty} x\big(\bar{F}^{\alpha}(x) - \bar{F}^{\beta}(x)\big)\,dx, \quad \alpha,\beta>0,\ (\alpha,\beta)\neq(1,1).
Next, we show that SR_{α,β} and SRW_{α,β} are indeed special cases of GE(X). Consider
SR_{\alpha,\beta} = \frac{1}{\beta-\alpha}\int_0^{\infty}\big(\bar{F}^{\alpha}(x) - \bar{F}^{\beta}(x)\big)\,dx = \frac{1}{\beta-\alpha}\int_0^{\infty}\Big[\big(\bar{F}(x) - \bar{F}^{\beta}(x)\big) - \big(\bar{F}(x) - \bar{F}^{\alpha}(x)\big)\Big]\,dx.
Now, consider
\int_0^{\infty}\big(\bar{F}(x) - \bar{F}^{\beta}(x)\big)\,dx = \int_0^{\infty} \bar{F}(x)\big(1-\bar{F}^{\beta-1}(x)\big)\,dx
= \int_0^{\infty} \bar{F}(x)\int_0^x (\beta-1)\bar{F}^{\beta-2}(t)\,dF(t)\,dx
= (\beta-1)\int_0^{\infty} \bar{F}^{\beta-2}(t)\left(\int_t^{\infty} \bar{F}(x)\,dx\right)dF(t)
= (\beta-1)\int_0^{\infty} \bar{F}^{\beta-1}(t)\,r(t)\,dF(t)
= (\beta-1)E\big(r(X)\,\bar{F}^{\beta-1}(X)\big).
Similarly, we obtain
\int_0^{\infty}\big(\bar{F}(x) - \bar{F}^{\alpha}(x)\big)\,dx = (\alpha-1)E\big(r(X)\,\bar{F}^{\alpha-1}(X)\big).
Upon substituting these two expressions into the decomposition above, we obtain
SR_{\alpha,\beta} = \frac{1}{\beta-\alpha}E\Big(r(X)\big[(\beta-1)\bar{F}^{\beta-1}(X) - (\alpha-1)\bar{F}^{\alpha-1}(X)\big]\Big).
Now, for the choices w(u) = [(β−1)F̄^{β−1}(u) − (α−1)F̄^{α−1}(u)]/(β−α) and ϕ(x) = x, Definition 1 yields SR_{α,β}.
By following the same steps, the weighted cumulative residual STM entropy can be expressed as
SRW_{\alpha,\beta} = \frac{1}{\beta-\alpha}\int_0^{\infty} \frac{1}{2}E(X^2-x^2 \mid X>x)\big[(\beta-1)\bar{F}^{\beta-1}(x) - (\alpha-1)\bar{F}^{\alpha-1}(x)\big]\,dF(x).
Again, for the choices w(u) = [(β−1)F̄^{β−1}(u) − (α−1)F̄^{α−1}(u)]/(β−α) and ϕ(x) = x²/2, Definition 1 yields this expression.
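As a consistency check (ours, not part of the paper), for the standard exponential the direct integral gives SR_{α,β} = 1/(αβ), and the E(r(X)[⋯]) representation, with r(x) = 1, agrees:

```python
# Two representations of SR_{alpha,beta} for X ~ Exp(1); both equal 1/(alpha*beta).
import numpy as np
from scipy.integrate import quad

a, b = 1.5, 3.0
direct, _ = quad(lambda x: (np.exp(-a * x) - np.exp(-b * x)) / (b - a), 0, np.inf)
Fbar = lambda x: np.exp(-x)
rep, _ = quad(lambda x: ((b - 1) * Fbar(x) ** (b - 1)
                         - (a - 1) * Fbar(x) ** (a - 1)) / (b - a)
                        * np.exp(-x),  # r(x) = 1 for Exp(1), times f(x)
              0, np.inf)
print(direct, rep, 1 / (a * b))  # all ~0.2222
```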

3.2. Sharma–Taneja–Mittal Cumulative Entropy

In this sub-section, we introduce cumulative and weighted cumulative STM entropies, and then show that they are indeed special cases of the generalized entropy.
Definition 7.
Let X be a non-negative random variable with absolutely continuous distribution function F. Then, the cumulative STM entropy is defined as
SP_{\alpha,\beta} = \frac{1}{\beta-\alpha}\int_0^{\infty}\big(F^{\alpha}(x) - F^{\beta}(x)\big)\,dx, \quad \alpha,\beta>0,\ (\alpha,\beta)\neq(1,1).
In this case, the weighted cumulative STM entropy is defined as follows.
Definition 8.
Let X be a non-negative random variable with absolutely continuous distribution function F. Then, the cumulative weighted STM entropy is defined as
SPW_{\alpha,\beta} = \frac{1}{\beta-\alpha}\int_0^{\infty} x\big(F^{\alpha}(x) - F^{\beta}(x)\big)\,dx, \quad \alpha,\beta>0,\ (\alpha,\beta)\neq(1,1).
Of course, in the special cases of α = 1−κ and β = 1+κ, the above definitions would result in the corresponding generalized versions of the Kaniadakis entropy. Following the same steps as those used to obtain the alternative expressions for SR_{α,β} and SRW_{α,β}, we can express SP_{α,β} and SPW_{α,β}, respectively, as
SP_{\alpha,\beta} = \frac{1}{\beta-\alpha}E\Big(m(X)\big[(\beta-1)F^{\beta-1}(X) - (\alpha-1)F^{\alpha-1}(X)\big]\Big),
SPW_{\alpha,\beta} = \frac{1}{\beta-\alpha}\int_0^{\infty} \frac{1}{2}E(x^2-X^2 \mid X\le x)\big[(\beta-1)F^{\beta-1}(x) - (\alpha-1)F^{\alpha-1}(x)\big]\,dF(x).
Now, let w(u) = [(β−1)F^{β−1}(u) − (α−1)F^{α−1}(u)]/(β−α). Then, Definition 2 yields SP_{α,β} and SPW_{α,β} above upon taking ϕ(x) = x and ϕ(x) = x²/2, respectively.

4. Connection between Entropy and Extropy

Apart from entropy, extropy and its properties have also been studied for quantifying the uncertainty associated with a random variable X. Using the new entropy measures introduced in the preceding sections, we establish some relationships between entropy and extropy measures in this section.
For a non-negative random variable X, the extropy is defined as [35]
J(X) = -\frac{1}{2}\int_0^{\infty} f^2(x)\,dx.
Now, we briefly discuss some recent developments associated with extropy measures. Jahanshahi et al. [36] defined the cumulative residual extropy as
CRJ(X) = -\frac{1}{2}\int_0^{\infty} \bar{F}^2(x)\,dx,
and the cumulative extropy is defined as [37]
CJ(X) = -\frac{1}{2}\int_0^{\infty} \big(1 - F^2(x)\big)\,dx.
Sudheesh and Sreedevi (2022) [38] discussed non-parametric estimation of CRJ(X) and CJ(X) for right-censored data.
Recently, Balakrishnan et al. [29], Bansal and Gupta [39] and Sathar and Nair [40,41,42] introduced different weighted versions of extropy. The weighted version of the survival extropy is given by [41]
J(X, w) = -\frac{1}{2}\int_0^{\infty} x\bar{F}^2(x)\,dx.
These authors also introduced the weighted version of the cumulative extropy as
H(X, w) = -\frac{1}{2}\int_0^{\infty} x\big(1 - F^2(x)\big)\,dx,
and Sathar and Nair [42] subsequently defined the dynamic survival extropy as
J_t(X) = -\frac{1}{2\bar{F}^2(t)}\int_t^{\infty} \bar{F}^2(x)\,dx.
For various properties of J_t(X), one may see [36]. Sudheesh and Sreedevi (2022) [38] proposed simple alternative expressions for different extropy measures. Using these expressions, they established relationships between different dynamic and weighted extropy measures and reliability concepts. In particular, they expressed J_t(X) as
J_t(X) = -\frac{1}{2}E\big(\min(X_1,X_2) - t \mid \min(X_1,X_2) > t\big).
Thus, −2J_t(X) is the mean residual life function of a series system having two identical components.
Sathar and Nair [41] defined the weighted dynamic survival extropy as
J_t(X, w) = -\frac{1}{2\bar{F}^2(t)}\int_t^{\infty} x\bar{F}^2(x)\,dx.
Kundu (2021) [43] introduced the dynamic cumulative extropy as
H_t(X) = -\frac{1}{2F^2(t)}\int_0^t F^2(x)\,dx,
while Sathar and Nair [41] defined the weighted dynamic cumulative extropy as
H_t(X, w) = -\frac{1}{2F^2(t)}\int_0^t xF^2(x)\,dx.
Sudheesh and Sreedevi (2022) [38] expressed H_t(X) as
H_t(X) = -\frac{1}{2}E\big(t - \max(X_1,X_2) \mid \max(X_1,X_2) \le t\big).
Thus, −2H_t(X) is the mean past life function of a parallel system having two identical components, where the mean past life function of a random variable X is defined as E(t−X | X ≤ t).
Next, we establish some connections between different entropy and extropy measures. For the choices w(u) = F̄(u) and ϕ(x) = x, Definition 1 gives
GE(X) = \int_0^{\infty} \bar{F}(u)F(u)\,du = \int_0^{\infty} \bar{F}(u)\,du - \int_0^{\infty} \bar{F}^2(u)\,du.
Thus, using the definition of CRJ(X), we have the relationship
GE(X) = 2\,CRJ(X) + E(X).
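This relationship is easy to verify numerically (our check, not from the paper); for the standard exponential, GE(X) = 1/2 and CRJ(X) = −1/4:

```python
# Check GE(X) = 2*CRJ(X) + E(X) for X ~ Exp(1), where E(X) = 1.
import numpy as np
from scipy.integrate import quad

Fbar = lambda x: np.exp(-x)
ge, _ = quad(lambda x: Fbar(x) * (1 - Fbar(x)), 0, np.inf)  # GE(X) = int Fbar*F
crj, _ = quad(lambda x: -0.5 * Fbar(x) ** 2, 0, np.inf)     # CRJ(X)
print(ge, 2 * crj + 1.0)  # 0.5, 0.5
```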
Again, for the choices w(u) = F̄(u) and ϕ(x) = x², Definition 1 gives
GE(X) = \int_0^{\infty} 2u\bar{F}(u)F(u)\,du = \int_0^{\infty} 2u\bar{F}(u)\,du - \int_0^{\infty} 2u\bar{F}^2(u)\,du.
For a non-negative random variable X, we have
E(X^2) = \int_0^{\infty} x^2\,dF(x) = \int_0^{\infty} 2x\bar{F}(x)\,dx.
Thus, in this case, we obtain the relationship, in terms of the weighted survival extropy J(X, w) defined above,
GE(X) = 4\,J(X, w) + E(X^2).
For the choices w(u) = F(u) and ϕ(x) = x, Definition 2 gives
GCE(X) = \int_0^{\infty} \bar{F}(u)F(u)\,du.
Using the identity 1 − F²(x) = F̄(x) + F̄(x)F(x) together with the definition of CJ(X), we have the relationship
GCE(X) = -2\,CJ(X) - E(X).
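Again, a quick numerical check (ours, not from the paper); for the standard exponential, GCE(X) = 1/2 and CJ(X) = −3/4:

```python
# Check GCE(X) = -2*CJ(X) - E(X) for X ~ Exp(1), where E(X) = 1.
import numpy as np
from scipy.integrate import quad

F = lambda x: 1 - np.exp(-x)
gce, _ = quad(lambda x: (1 - F(x)) * F(x), 0, np.inf)      # GCE(X) = int Fbar*F
cj, _ = quad(lambda x: -0.5 * (1 - F(x) ** 2), 0, np.inf)  # CJ(X)
print(gce, -2 * cj - 1.0)  # 0.5, 0.5
```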
Additionally, for the choices w(u) = F(u) and ϕ(x) = x², Definition 2 gives
GCE(X) = \int_0^{\infty} 2u\bar{F}(u)F(u)\,du.
Thus, in this case, we obtain the relationship, in terms of the weighted cumulative extropy H(X, w) defined above,
GCE(X) = -4\,H(X, w) - E(X^2).
Let X_1 and X_2 be two independent random variables having the same distribution function F, and let Z = min(X_1, X_2) be the lifetime of a series system having two identical components. Using Definition 1, we define the generalized residual entropy associated with Z as
GE(Z) = 2\int_0^{\infty} w(u)\,E\big(\phi(Z) - \phi(u) \mid Z > u\big)\,\bar{F}(u)\,dF(u),
where ϕ(·) is a function of Z and w(·) is a weight function. Now, for the choices w(u) = 1/4 and ϕ(z) = z, this gives
GE(Z) = \frac{1}{2}\int_0^{\infty} E\big(\min(X_1,X_2) - u \mid \min(X_1,X_2) > u\big)\,\bar{F}(u)\,dF(u) = -\int_0^{\infty} J_u(X)\,\bar{F}(u)\,dF(u).
Thus, the generalized residual entropy associated with Z is a weighted average of the negative dynamic survival extropy −J_u(X).
Next, let Z = max(X_1, X_2) be the lifetime of a parallel system having two identical components. Then, the generalized cumulative entropy associated with Z is defined as
GCE(Z) = 2\int_0^{\infty} w(u)\,E\big(\phi(u) - \phi(Z) \mid Z \le u\big)\,F(u)\,dF(u).
Again, for the choices w(u) = 1/4 and ϕ(z) = z, we obtain
GCE(Z) = \frac{1}{2}\int_0^{\infty} E\big(u - \max(X_1,X_2) \mid \max(X_1,X_2) \le u\big)\,F(u)\,dF(u) = -\int_0^{\infty} H_u(X)\,F(u)\,dF(u).
Thus, the generalized cumulative entropy associated with Z is a weighted average of the negative dynamic cumulative extropy −H_u(X).

5. Concluding Remarks

In this work, we have introduced two general measures of entropy, viz., generalized cumulative residual entropy and generalized cumulative entropy. Several entropy measures known in the literature were all shown to be special cases of these generalized measures. Cumulative residual entropy, weighted cumulative residual entropy, cumulative residual Tsallis entropy and weighted cumulative residual Tsallis entropy are all special cases of the generalized cumulative residual entropy. Cumulative entropy, weighted cumulative entropy, cumulative Tsallis entropy and weighted cumulative Tsallis entropy are all special cases of the generalized cumulative entropy.
We have presented a generating function approach to obtain generalized measures of higher-order. We have shown that the generalized cumulative residual entropy of order two reduces to the weighted cumulative residual entropy of Mirali et al. (2016) [4]. Moreover, the weighted cumulative entropy of Mirali and Baratpour (2017) [5] is a special case of the generalized cumulative entropy of order two. We have also established some relationships between entropy and extropy measures.
In the information theory literature, conditional entropy is the amount of information required to describe the outcome of one random variable Y, given the value of another random variable X. Conditional entropy, as a measure of information, can be defined through any entropy measure. Defined through the Shannon entropy measure, for example, it is given by
H(Y \mid X) = -\int_0^{\infty}\int_0^{\infty} f(x,y)\log\frac{f(x,y)}{f(x)}\,dy\,dx.
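For instance (an illustration of ours, not from the paper), when X and Y are independent, H(Y | X) reduces to H(Y); for X, Y iid standard exponential, both equal 1:

```python
# H(Y|X) for independent X, Y ~ Exp(1); the integral reduces to H(Y) = 1.
import numpy as np
from scipy.integrate import dblquad

f = lambda x: np.exp(-x)
fxy = lambda y, x: f(x) * f(y)  # joint density under independence
H, _ = dblquad(lambda y, x: -fxy(y, x) * np.log(fxy(y, x) / f(x)),
               0, 30, lambda x: 0, lambda x: 30)  # truncate the tails at 30
print(H)  # ~1.0
```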
In this way, we can define conditional entropy measures even in a generalized form; the generalized versions introduced in the present work can thus be extended to conditional entropy notions. We plan to carry out a detailed study of this in our future work. It will also be of interest to develop some inferential methods for these measures. We are currently working in these directions and hope to report the findings in a future paper.

Author Contributions

Methodology, S.K.K., E.P.S. and N.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Our sincere thanks go to the anonymous reviewers for their valuable suggestions and comments on an earlier version of this manuscript, which resulted in this improved version.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CE  Cumulative entropy
CRE  Cumulative residual entropy
STM  Sharma–Taneja–Mittal
WCRTE  Weighted cumulative residual Tsallis entropy
WCTE  Weighted cumulative Tsallis entropy

References

1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
2. Rao, M.; Chen, Y.; Vemuri, B.; Wang, F. Cumulative residual entropy: A new measure of information. IEEE Trans. Inf. Theory 2004, 50, 1220–1228.
3. Di Crescenzo, A.; Longobardi, M. On cumulative entropies. J. Stat. Plan. Inference 2009, 139, 4072–4087.
4. Mirali, M.; Baratpour, S.; Fakoor, V. On weighted cumulative residual entropy. Commun. Stat.-Theory Methods 2016, 46, 2857–2869.
5. Mirali, M.; Baratpour, S. Some results on weighted cumulative entropy. J. Iran. Stat. Soc. 2017, 16, 21–32.
6. Balakrishnan, N.; Buono, F.; Longobardi, M. A unified formulation of entropy and its application. Phys. A Stat. Mech. Its Appl. 2022, 127214.
7. Asadi, M.; Zohrevand, Y. On the dynamic cumulative residual entropy. J. Stat. Plan. Inference 2007, 137, 1931–1941.
8. Suhov, Y.; Sekeh, S.Y. Weighted cumulative entropies: An extension of CRE and CE. arXiv 2015, arXiv:1507.07051.
9. Calì, C.; Longobardi, M.; Ahmadi, J. Some properties of cumulative Tsallis entropy. Phys. A Stat. Mech. Its Appl. 2017, 486, 1012–1021.
10. Calì, C.; Longobardi, M.; Navarro, J. Properties for generalized cumulative past measures of information. Probab. Eng. Inform. Sci. 2020, 34, 92–111.
11. Tahmasebi, S. Weighted extensions of generalized cumulative residual entropy and their applications. Commun. Stat.-Theory Methods 2020, 49, 5196–5219.
12. Toomaj, A.; Di Crescenzo, A. Connections between weighted generalized cumulative residual entropy and variance. Mathematics 2020, 8, 1072.
13. Kharazmi, O.; Balakrishnan, N. Jensen-information generating function and its connections to some well-known information measures. Stat. Probab. Lett. 2020, 170, 108995.
14. Kharazmi, O.; Balakrishnan, N. Cumulative residual and relative cumulative residual Fisher information and their properties. IEEE Trans. Inf. Theory 2020, 67, 6306–6312.
15. Kharazmi, O.; Balakrishnan, N. Cumulative and relative cumulative residual information generating measures and associated properties. Commun. Stat.-Theory Methods 2021, 1–14.
16. Di Crescenzo, A.; Kayal, S.; Meoli, A. Fractional generalized cumulative entropy and its dynamic version. Commun. Nonlinear Sci. Numer. Simul. 2021, 102, 105899.
17. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487.
18. Sati, M.M.; Gupta, N. Some characterization results on dynamic cumulative residual Tsallis entropy. J. Probab. Stat. 2015, 2015, 1155.
19. Rajesh, G.; Sunoj, S.M. Some properties of cumulative Tsallis entropy of order α. Stat. Pap. 2019, 60, 933–943.
20. Chakraborty, S.; Pradhan, B. On weighted cumulative Tsallis residual and past entropy measures. Commun. Stat.-Simul. Comput. 2021, 1–15.
21. Calì, C.; Longobardi, M.; Psarrakos, G. A family of weighted distributions based on the mean inactivity time and cumulative past entropies. Ric. Mat. 2021, 70, 395–409.
22. Balakrishnan, N.; Buono, F.; Longobardi, M. On cumulative entropies in terms of moments of order statistics. Methodol. Comput. Appl. Probab. 2022, 24, 345–359.
23. Drissi, N.; Chonavel, T.; Boucher, J.M. Generalized cumulative residual entropy for distributions with unrestricted supports. Res. Lett. Signal Process. 2008, 2008, 79060.
24. Kayal, S. On generalized cumulative entropies. Probab. Eng. Inform. Sci. 2016, 30, 640–662.
25. Psarrakos, G.; Navarro, J. Generalized cumulative residual entropy and record values. Metrika 2013, 76, 623–640.
26. Psarrakos, G.; Toomaj, A. On the generalized cumulative residual entropy with applications in actuarial science. J. Comput. Appl. Math. 2017, 309, 186–199.
27. Navarro, J.; Psarrakos, G. Characterizations based on generalized cumulative residual entropy functions. Commun. Stat.-Theory Methods 2017, 46, 1247–1260.
28. Di Crescenzo, A.; Toomaj, A. Further results on the generalized cumulative entropy. Kybernetika 2017, 53, 959–982.
29. Balakrishnan, N.; Buono, F.; Longobardi, M. On weighted extropies. Commun. Stat.-Theory Methods 2020, 1–31.
30. Sharma, B.D.; Taneja, I.J. Entropy of type (α,β) and other generalized measures in information theory. Metrika 1975, 22, 205–215.
31. Mittal, D.P. On some functional equations concerning entropy, directed divergence and inaccuracy. Metrika 1975, 22, 35–45.
32. Kaniadakis, G. Non-linear kinetics underlying generalized statistics. Phys. A Stat. Mech. Its Appl. 2001, 296, 405–425.
33. Lopes, A.M.; Machado, J.A.T. A review of fractional order entropies. Entropy 2020, 22, 1374.
34. Ilić, V.M.; Korbel, J.; Gupta, S.; Scarfone, A.M. An overview of generalized entropic forms (a). EPL (Europhys. Lett.) 2021, 133, 50005.
35. Lad, F.; Sanfilippo, G.; Agro, G. Extropy: Complementary dual of entropy. Stat. Sci. 2015, 30, 40–58.
36. Jahanshahi, S.M.A.; Zarei, H.; Khammar, A.H. On cumulative residual extropy. Probab. Eng. Inf. Sci. 2020, 34, 605–625.
37. Tahmasebi, S.; Toomaj, A. On negative cumulative extropy with applications. Commun. Stat.-Theory Methods 2020, 1–23.
38. Sudheesh, K.K.; Sreedevi, E.P. Non-parametric estimation of cumulative (residual) extropy with censored observations. Stat. Probab. Lett. 2022, 185, 109434.
39. Bansal, S.; Gupta, N. Weighted extropies and past extropy of order statistics and k-record values. Commun. Stat.-Theory Methods 2020, 1–24.
40. Sathar, E.A.; Nair, R.D. On dynamic weighted extropy. J. Comput. Appl. Math. 2021, 393, 113507.
41. Sathar, E.A.; Nair, R.D. A study on weighted dynamic survival and failure extropies. Commun. Stat.-Theory Methods 2021, 1–20.
42. Sathar, E.A.; Nair, R.D. On dynamic survival extropy. Commun. Stat.-Theory Methods 2021, 50, 1295–1313.
43. Kundu, C. On cumulative residual (past) extropy of extreme order statistics. Commun. Stat.-Theory Methods 2021, 1–18.
Table 1. Special cases of the generalized cumulative residual entropy.

Entropy Measure | w(u) | ϕ(x)
Cumulative residual entropy | 1 | x
Weighted cumulative residual entropy | 1/2 | x²
Cumulative residual Tsallis entropy | F̄^{α−1}(u) | x
Weighted cumulative residual Tsallis entropy | F̄^{α−1}(u) | x²/2
Table 2. Special cases of the generalized cumulative entropy.

Entropy Measure | w(u) | ϕ(x)
Cumulative entropy | 1 | x
Weighted cumulative entropy | 1/2 | x²
Cumulative Tsallis entropy | F^{α−1}(u) | x
Weighted cumulative Tsallis entropy | F^{α−1}(u) | x²/2
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
