Article

On Conditional Tsallis Entropy

1 CINTESIS—Centre for Health Technology and Services Research, Faculty of Medicine, University of Porto, 4200-450 Porto, Portugal
2 MEDCIDS—Department of Community Medicine, Information and Decision in Health, Faculty of Medicine, University of Porto, 4200-450 Porto, Portugal
3 ADiT-LAB, Instituto Politécnico de Viana do Castelo, Rua Escola Industrial e Comercial Nun’Álvares, 4900-347 Viana do Castelo, Portugal
4 LASIGE, Faculdade de Ciências da Universidade de Lisboa, Campo Grande, 1749-016 Lisboa, Portugal
5 Departamento de Informática, Faculdade de Ciências da Universidade de Lisboa, Campo Grande, 1749-016 Lisboa, Portugal
6 Instituto de Telecomunicações, Av. Rovisco Pais, n 1, 1049-001 Lisboa, Portugal
7 Computer Science Department, Faculty of Sciences, University of Porto, Rua do Campo Alegre, 4169-007 Porto, Portugal
* Author to whom correspondence should be addressed.
Submission received: 14 September 2021 / Revised: 21 October 2021 / Accepted: 27 October 2021 / Published: 29 October 2021
(This article belongs to the Special Issue Entropy: The Scientific Tool of the 21st Century)

Abstract: There is no generally accepted definition of conditional Tsallis entropy. The standard definition of (unconditional) Tsallis entropy depends on a parameter α and converges to the Shannon entropy as α approaches 1. In this paper, we describe three definitions of conditional Tsallis entropy proposed in the literature, study their properties, and compare their values as a function of α. We also consider another natural proposal for conditional Tsallis entropy and compare it with the existing ones. Lastly, we present an online tool that computes the four conditional Tsallis entropies, given the probability distributions and the value of the parameter α.

1. Introduction

Tsallis entropy [1] (the name Tsallis entropy, used in this paper for the quantity defined in Section 2, is not consensual in the community: as Tsallis himself acknowledges, other authors had already introduced the same quantity before he presented it in 1988 [2,3,4]), a generalization of Shannon entropy [5,6], was extensively studied by Constantino Tsallis in 1988, and provides an alternative way of dealing with several characteristics of nonextensive physical systems, given that the information about the intrinsic fluctuations in the physical system can be characterized by the nonextensivity parameter α. It can be applied to many scientific fields, such as physics [7], economics [8], computer science [9,10], and biology [11]. We refer the reader to Reference [12] for a more extensive bibliography on applications of Tsallis entropy, and to Reference [13] for a survey of the most significant areas of application of the most common entropy measures, including the Shannon [6], Rényi [14], and Tsallis entropies [1,2,3,4].
It is known that, as the parameter α approaches 1, the Tsallis entropy converges to the Shannon entropy. Unlike for the Shannon entropy, but similarly to the Rényi entropy (yet another generalization of Shannon entropy, developed by Alfréd Rényi in 1961 [14], which also depends on a parameter α and converges to the Shannon entropy as α approaches 1), there is no commonly accepted definition for the conditional Tsallis entropy: several versions have been proposed and used in the literature [15,16]. In this work, we revisit the notion of conditional Tsallis entropy by studying some natural and desirable properties of the existing proposals (see, for instance, References [15,16]): when α → 1, the usual conditional Shannon entropy should be recovered; the conditional Tsallis entropy should not exceed the unconditional Tsallis entropy; and the conditional Tsallis entropy should take values between 0 and the maximum value of the unconditional version.
The use of entropies in different fields, especially in the field of information theory and its connection to communication, allowed the development of several useful information measures, such as mutual information, symmetry of information, and information distances. See, for example, References [17,18,19] for some recent work related to the aforementioned information measures.
Depending on the entropy measure used, all of these have been applied in many different areas of knowledge, such as physics [20], information theory [21,22], complexity theory [23,24,25], security [26,27,28], biology [29,30,31,32], finance [33], and medicine [34,35,36], among others. The conditional Tsallis entropy, as suggested in Reference [37], can be directly applied to information theory, especially coding theory. Furthermore, since Tsallis entropy can be applied in many areas (see, for example, Reference [12]), the study of conditional Tsallis entropies is quite promising. This paper analyzes several definitions of conditional Tsallis entropy, with the intent of providing the reader with a description of the properties that each approach satisfies.
Continuing from previous works [37,38], we introduce a new natural definition for conditional Tsallis entropy as a possible alternative to the existing ones. Our new proposal does not intend to be the ultimate version of conditional Tsallis entropy, but an alternative to the existing ones, with its own properties that, in settings such as biomedical applications, might be useful for defining information distances or other significant measurements. None of the known definitions satisfies all of the desired properties for a conditional version. In particular, the one presented here (as it takes an extremum over the conditional distributions P(Y|X = x)) does not converge to the Shannon entropy when α → 1; it behaves like a parameterized entropy and is akin to the one proposed in Reference [38] as an alternative to Rényi's conditional entropy, another generalization of Shannon entropy.
The paper is organized as follows. In the next section, we present the definitions necessary for the rest of the paper, namely the Shannon and Tsallis entropies. In Section 3, we present the definitions of conditional Tsallis entropy existing in the literature, together with our proposal. In Section 4, we establish several results comparing the definitions presented previously. In Section 5, we explore some features of each variant of the conditional Tsallis entropy. Finally, in Section 6, we present the conclusions and future work.

2. Preliminaries

In the remainder of the paper, we use the standard notation for entropies and probability distributions, following Reference [5]. For simplicity, we write log for the logarithm in base 2. We call the reader's attention to the fact that, whenever we say that one entropy converges to another, this is up to a multiplicative constant that depends only on the base of the logarithm.
The Shannon entropy of X is the expectation of the surprise of an occurrence,
$H(X) = -\sum_x P(X=x)\,\log P(X=x).$
The conditional Shannon entropy, H ( Y | X ) , is the expectation over x of the entropy of the distribution P ( Y | X = x ) ,
$H(Y|X) = \mathbb{E}_{x,y}\left[\log \frac{1}{P(Y=y|X=x)}\right].$
It is easy to derive the chain rule  H ( X , Y ) = H ( X ) + H ( Y | X ) : to get the average information contained in ( X , Y ) , we may first get the average information contained in X, and add to it the average information of Y, given X.
The Tsallis entropy [1] was first introduced in [2,3] and is defined, for a random variable X, by
$T_\alpha(X) = \frac{1}{\alpha-1}\left(1 - \sum_x P(X=x)^\alpha\right), \quad \text{for } \alpha > 0,\ \alpha \neq 1.$
It is straightforward to show that, when the parameter α converges to 1, the value of the entropy converges to the Shannon entropy.
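To make this convergence concrete, the following short Python sketch (an illustration of ours, assuming NumPy; the function names are not from the paper) computes the Tsallis entropy of a small distribution for values of α approaching 1 and compares it with the Shannon entropy computed with the natural logarithm, which is the limiting value up to the base factor mentioned above.

```python
import numpy as np

def shannon_entropy_nats(p):
    """Shannon entropy of a probability vector p, using the natural logarithm."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # convention: 0 * log 0 = 0
    return -np.sum(p * np.log(p))

def tsallis_entropy(p, alpha):
    """Tsallis entropy T_alpha(p) = (1 - sum_i p_i**alpha) / (alpha - 1), for alpha > 0, alpha != 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return (1.0 - np.sum(p ** alpha)) / (alpha - 1.0)

p = [0.5, 0.25, 0.25]
print(shannon_entropy_nats(p))                  # ~1.0397 (equal to 1.5 bits times ln 2)
for alpha in (0.9, 0.99, 1.01, 1.1):
    print(alpha, tsallis_entropy(p, alpha))     # approaches ~1.0397 as alpha -> 1
```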

3. Conditional Tsallis Entropy: Four Definitions

We consider three definitions for conditional Tsallis entropy that already exist in the literature and introduce a new proposal. All definitions consider a positive parameter α with α ≠ 1.
Definition 1.
Let Z = (X, Y) be a random vector. One can define the following variants of conditional Tsallis entropy:
1. Definition of $T_\alpha(Y|X)$ from Reference [15]:
$T_\alpha(Y|X) = \sum_x P(X=x)^\alpha\, T_\alpha(Y|X=x) = \frac{1}{\alpha-1}\sum_x P(X=x)^\alpha\left(1 - \sum_y P(Y=y|X=x)^\alpha\right).$
One can easily verify that $T_\alpha(X,Y) = T_\alpha(Y|X) + T_\alpha(X)$ and, therefore, this definition satisfies the chain rule.
2. Definition of $S_\alpha(Y|X)$ from Reference [16] (Definition 2.8):
$S_\alpha(Y|X) = \sum_x P(X=x)\, T_\alpha(Y|X=x) = \sum_x P(X=x)\,\frac{1}{\alpha-1}\left(1 - \sum_y P(Y=y|X=x)^\alpha\right) = \frac{1}{\alpha-1}\sum_x P(X=x)\left(1 - \sum_y P(Y=y|X=x)^\alpha\right).$
3. Definition of $\tilde{S}_\alpha(Y|X)$ from Reference [16] (Definition 2.10):
$\tilde{S}_\alpha(Y|X) = \frac{1}{\alpha-1}\left(1 - \frac{\sum_{x,y} P(X=x,Y=y)^\alpha}{\sum_x P(X=x)^\alpha}\right).$
The first definition weights the Tsallis entropy of each conditional distribution P(Y|X = x) by $P(X=x)^\alpha$, while the second weights it simply by $P(X=x)$. Notice, therefore, that in the first definition the value of α also affects the weights, and so it has a larger influence on the value of the conditional Tsallis entropy. The idea of the third proposal is to distribute the influence of the parameter α evenly over the entire joint distribution.
Next, we present another possible definition of the conditional Tsallis entropy. This definition is based on Definition III.6 of Reference [38] and captures the intuitive idea of defining the conditional entropy by taking an extremum over the conditional distributions P(Y|X = x). Note that this definition is analogous to an existing one for the Rényi entropy; however, as we will show later, this proposal does not satisfy some of the expected basic properties.
Definition 2
(Definition of $\tilde{T}_\alpha(Y|X)$).
$\tilde{T}_\alpha(Y|X) = \frac{1}{\alpha-1}\,\max_x\left(1 - \sum_y P(Y=y|X=x)^\alpha\right).$
We use a different notation for this variant (and, similarly, a tilde for the third variant in Definition 1) to better distinguish the proposals in the rest of the paper. In particular, we follow the same approach as in Reference [38].
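As an illustration of the four definitions, the following Python sketch (our own helper functions, assuming NumPy; this is not the online calculator mentioned in Section 5) computes all four conditional Tsallis entropies from a joint probability matrix P[x, y]. In the example below, X and Y are independent and every conditional distribution P(Y|X = x) is uniform, so $S_\alpha$, $\tilde{S}_\alpha$ and $\tilde{T}_\alpha$ coincide, while $T_\alpha$ is rescaled by $\sum_x P(X=x)^\alpha$.

```python
import numpy as np

def tsallis(p, alpha):
    """Unconditional Tsallis entropy of a probability vector p (alpha > 0, alpha != 1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return (1.0 - np.sum(p ** alpha)) / (alpha - 1.0)

def conditional_tsallis(joint, alpha):
    """Return the four conditional Tsallis entropies (T, S, S_tilde, T_tilde) of Y given X
    for a joint probability matrix joint[x, y]."""
    P = np.asarray(joint, dtype=float)
    px = P.sum(axis=1)                                               # marginal distribution of X
    rows = [P[i] / px[i] for i in range(len(px)) if px[i] > 0]       # conditionals P(Y | X = x)
    w = px[px > 0]
    t_rows = np.array([tsallis(r, alpha) for r in rows])             # T_alpha(Y | X = x) for each x
    T = float(np.sum(w ** alpha * t_rows))                           # T_alpha(Y|X), weights P(x)^alpha
    S = float(np.sum(w * t_rows))                                    # S_alpha(Y|X), weights P(x)
    S_tilde = (1.0 - np.sum(P ** alpha) / np.sum(w ** alpha)) / (alpha - 1.0)    # S~_alpha(Y|X)
    T_tilde = np.max([1.0 - np.sum(r ** alpha) for r in rows]) / (alpha - 1.0)   # T~_alpha(Y|X)
    return T, S, float(S_tilde), float(T_tilde)

# Independent X ~ (1/2, 1/2) and Y uniform on {1, 2}: at alpha = 2 this returns
# approximately (0.25, 0.5, 0.5, 0.5).
print(conditional_tsallis(np.array([[0.25, 0.25], [0.25, 0.25]]), alpha=2.0))
```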
The following expressions will be useful later.
Theorem 1.
Let Z = (X, Y) be a random vector. The following identities hold:
$\tilde{T}_\alpha(Y|X) = \max_x T_\alpha(Y|X=x) \quad (\text{for } \alpha > 1),$
$\tilde{T}_\alpha(Y|X) = \min_x T_\alpha(Y|X=x) \quad (\text{for } \alpha < 1).$

4. Comparison of the Definitions

We now compare the above four definitions of conditional Tsallis entropy, both in terms of the values they take and in terms of whether they satisfy some common properties of an entropy measure. In the next theorem, we state two simple facts with straightforward proofs; we leave the details for the interested reader to check.
Theorem 2.
For any fixed joint probability distribution P(X, Y),
(i) $T_\alpha(Y|X)$, $S_\alpha(Y|X)$, and $\tilde{S}_\alpha(Y|X)$, as functions of α, are continuous and differentiable;
(ii) $\tilde{T}_\alpha(Y|X)$, as a function of α, is continuous for all α ≠ 1.
The following results provide the possible comparisons (in terms of values) between the proposed definitions. For the sake of organization, we split the comparison by pairs of definitions.
First, we compare $T_\alpha(Y|X)$ with $S_\alpha(Y|X)$.
Theorem 3.
For all joint probability distributions P(X, Y) and for every α > 0,
if α < 1: $S_\alpha(Y|X) \leq T_\alpha(Y|X)$;
if α = 1: $S_\alpha(Y|X) = T_\alpha(Y|X) = H(Y|X)$ (understood as limits);
if α > 1: $S_\alpha(Y|X) \geq T_\alpha(Y|X)$.
Proof. 
Consider first the case α < 1. In this case, we have $P(X=x)^\alpha \geq P(X=x)$ and, since $T_\alpha(Y|X=x) \geq 0$,
$P(X=x)^\alpha\, T_\alpha(Y|X=x) \geq P(X=x)\, T_\alpha(Y|X=x)$ for every x. Summing over x,
$\sum_x P(X=x)^\alpha\, T_\alpha(Y|X=x) \geq \sum_x P(X=x)\, T_\alpha(Y|X=x),$
that is, $T_\alpha(Y|X) \geq S_\alpha(Y|X)$.
For the case α = 1, see the proof of Theorem 8.
The case α > 1 is similar, but this time the conclusion follows since, for α > 1, $P(X=x)^\alpha \leq P(X=x)$. □
In the next theorem, we compare $\tilde{T}_\alpha(Y|X)$ with $S_\alpha(Y|X)$.
Theorem 4.
For all joint probability distributions P(X, Y) and for every α > 0 with α ≠ 1,
if α < 1: $\tilde{T}_\alpha(Y|X) \leq S_\alpha(Y|X)$;
if α > 1: $\tilde{T}_\alpha(Y|X) \geq S_\alpha(Y|X)$.
Proof. 
Consider first the case α < 1. In this case, by Theorem 1, $\tilde{T}_\alpha(Y|X) = \min_x T_\alpha(Y|X=x)$. So,
$S_\alpha(Y|X) = \sum_x P(X=x)\, T_\alpha(Y|X=x)$
$\geq \sum_x P(X=x)\,\min_{x'} T_\alpha(Y|X=x')$
$= \left(\min_{x'} T_\alpha(Y|X=x')\right)\sum_x P(X=x)$
$= \min_x T_\alpha(Y|X=x)$
$= \tilde{T}_\alpha(Y|X).$
The proof of the case α > 1 is similar, but this time the conclusion follows from the fact that, for α > 1, $\tilde{T}_\alpha(Y|X) = \max_x T_\alpha(Y|X=x)$. □
As a consequence of the two previous results and the definitions, we can derive the relation between $T_\alpha$ and $\tilde{T}_\alpha$.
Corollary 1.
For all joint probability distributions P(X, Y) and for every α > 0 with α ≠ 1,
if α < 1: $\tilde{T}_\alpha(Y|X) \leq T_\alpha(Y|X)$;
if α > 1: $\tilde{T}_\alpha(Y|X) \geq T_\alpha(Y|X)$.
The proof follows directly from Theorems 3 and 4. Now, we derive the relation between $\tilde{S}_\alpha(Y|X)$ and $\tilde{T}_\alpha(Y|X)$.
Theorem 5.
For all joint probability distributions P(X, Y) and for every α > 0 with α ≠ 1,
if α < 1: $\tilde{S}_\alpha(Y|X) \geq \tilde{T}_\alpha(Y|X)$;
if α > 1: $\tilde{S}_\alpha(Y|X) \leq \tilde{T}_\alpha(Y|X)$.
Proof. 
Consider first the case α < 1. By definition, proving that $\tilde{S}_\alpha(Y|X) \geq \tilde{T}_\alpha(Y|X)$ is the same as proving
$\frac{1}{\alpha-1}\left(1 - \frac{\sum_{x,y} P(X=x,Y=y)^\alpha}{\sum_x P(X=x)^\alpha}\right) \geq \frac{1}{\alpha-1}\,\max_x\left(1-\sum_y P(Y=y|X=x)^\alpha\right).$
Since α < 1, we have $\frac{1}{\alpha-1} < 0$, so the inequality above is equivalent to
$1 - \frac{\sum_{x,y} P(X=x,Y=y)^\alpha}{\sum_x P(X=x)^\alpha} \leq \max_x\left(1-\sum_y P(Y=y|X=x)^\alpha\right)$
$\iff \frac{\sum_{x,y} P(X=x,Y=y)^\alpha}{\sum_x P(X=x)^\alpha} \geq \min_x \sum_y P(Y=y|X=x)^\alpha$
$\iff \sum_{x,y} P(X=x,Y=y)^\alpha \geq \left(\sum_x P(X=x)^\alpha\right)\min_{x'}\sum_y P(Y=y|X=x')^\alpha.$
The last inequality is true since, for every x,
$P(X=x)^\alpha\,\min_{x'}\sum_y P(Y=y|X=x')^\alpha \leq P(X=x)^\alpha\sum_y P(Y=y|X=x)^\alpha = \sum_y P(X=x,Y=y)^\alpha,$
and it suffices to sum over x. The case α > 1 is proved in a similar manner. □
Now, we derive the relation between $T_\alpha(Y|X)$ and $\tilde{S}_\alpha(Y|X)$.
Theorem 6.
For all joint probability distributions P(X, Y) and for every α > 0 with α ≠ 1,
if α < 1: $T_\alpha(Y|X) \geq \tilde{S}_\alpha(Y|X)$;
if α > 1: $T_\alpha(Y|X) \leq \tilde{S}_\alpha(Y|X)$.
Proof. 
Note first that, directly from the definitions,
$T_\alpha(Y|X) = \frac{1}{\alpha-1}\left(\sum_x P(X=x)^\alpha - \sum_{x,y} P(X=x,Y=y)^\alpha\right) = \left(\sum_x P(X=x)^\alpha\right)\tilde{S}_\alpha(Y|X).$
Moreover, $\tilde{S}_\alpha(Y|X) \geq 0$: since $\sum_{x,y} P(X=x,Y=y)^\alpha = \sum_x P(X=x)^\alpha\left(\sum_y P(Y=y|X=x)^\alpha\right)$, and each inner sum is at least 1 when α < 1 and at most 1 when α > 1, the ratio in the definition of $\tilde{S}_\alpha$ is at least 1 (respectively, at most 1), so $\tilde{S}_\alpha(Y|X) \geq 0$ in both cases.
Consider first the case α < 1. Since $P(X=x)^\alpha \geq P(X=x)$ for every x, we have $\sum_x P(X=x)^\alpha \geq 1$ and, therefore, $T_\alpha(Y|X) \geq \tilde{S}_\alpha(Y|X)$.
The case α > 1 is similar: in that case $P(X=x)^\alpha \leq P(X=x)$, hence $\sum_x P(X=x)^\alpha \leq 1$ and $T_\alpha(Y|X) \leq \tilde{S}_\alpha(Y|X)$. □
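The orderings established in Theorems 3–6 (and Corollary 1) can be checked numerically. The sketch below, which reuses the conditional_tsallis helper sketched in Section 3, draws random joint distributions and asserts that, for α < 1, $\tilde{T}_\alpha \leq S_\alpha \leq T_\alpha$ and $\tilde{T}_\alpha \leq \tilde{S}_\alpha \leq T_\alpha$, with all inequalities reversed for α > 1.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-9

for _ in range(1000):
    joint = rng.dirichlet(np.ones(6)).reshape(2, 3)     # random 2 x 3 joint distribution
    for alpha in (0.3, 0.7, 1.5, 3.0):
        T, S, S_t, T_t = conditional_tsallis(joint, alpha)
        if alpha < 1:
            assert T_t <= S + eps and S <= T + eps       # Theorem 4, Theorem 3
            assert T_t <= S_t + eps and S_t <= T + eps   # Theorem 5, Theorem 6
        else:
            assert T <= S + eps and S <= T_t + eps       # Theorem 3, Theorem 4
            assert T <= S_t + eps and S_t <= T_t + eps   # Theorem 6, Theorem 5
print("all orderings verified")
```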
Finally, we show that the values of $S_\alpha$ and $\tilde{S}_\alpha$ are incomparable, in the sense that there are probability distributions for which $S_\alpha$ is greater than $\tilde{S}_\alpha$ and there are probability distributions for which $\tilde{S}_\alpha$ is greater than $S_\alpha$.
Theorem 7.
The values of $S_\alpha(Y|X)$ and $\tilde{S}_\alpha(Y|X)$ are incomparable, i.e., for each n ≥ 2 and α ≠ 1,
there exists P(X, Y) such that $S_\alpha(Y|X) < \tilde{S}_\alpha(Y|X)$, and
there exists P(X, Y) such that $S_\alpha(Y|X) > \tilde{S}_\alpha(Y|X)$.
Proof. 
For the first statement and α < 1, consider the following joint probability distribution:

X \ Y      1        2
1          0.0625   0.0625
2          0.0125   0.8625

for which $S_{0.25}(Y|X) \approx 0.513 < \tilde{S}_{0.25}(Y|X) \approx 0.629$.

For the first statement and α > 1, consider:

X \ Y      1        2
1          0.1125   0.0125
2          0.4375   0.4375

for which $S_{2.5}(Y|X) \approx 0.396 < \tilde{S}_{2.5}(Y|X) \approx 0.429$.

For the second statement and α < 1, consider:

X \ Y      1       2
1          0.125   0
2          0.5     0.375

for which $S_{0.25}(Y|X) \approx 0.792 > \tilde{S}_{0.25}(Y|X) \approx 0.560$.

Finally, for the second statement and α > 1, consider:

X \ Y      1        2
1          0.0625   0.0625
2          0.0125   0.8625

for which $S_{1.25}(Y|X) \approx 0.125 > \tilde{S}_{1.25}(Y|X) \approx 0.099$. □
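The distributions used in the proof above can be checked with a few lines of code, again reusing the conditional_tsallis helper from Section 3 (the returned tuple is ($T_\alpha$, $S_\alpha$, $\tilde{S}_\alpha$, $\tilde{T}_\alpha$)).

```python
import numpy as np

# First statement, alpha < 1: S_0.25 ~ 0.513 < S~_0.25 ~ 0.629
P1 = np.array([[0.0625, 0.0625], [0.0125, 0.8625]])
_, S, S_t, _ = conditional_tsallis(P1, 0.25)
print(round(S, 3), round(S_t, 3))        # 0.513 0.629

# Second statement, alpha < 1: S_0.25 ~ 0.792 > S~_0.25 ~ 0.560
P2 = np.array([[0.125, 0.0], [0.5, 0.375]])
_, S, S_t, _ = conditional_tsallis(P2, 0.25)
print(round(S, 3), round(S_t, 3))        # 0.792 0.56
```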

5. Properties of the Conditional Tsallis Entropies

In this section, we investigate some properties of the proposals considered. In particular, we show that there are probability distributions and values of α ≠ 1 for which the conditional Tsallis entropies are larger than the unconditional Tsallis entropy.
Theorem 8.
For any fixed joint probability distribution P(X, Y),
$\lim_{\alpha\to 1} T_\alpha(Y|X) = H(Y|X),$
$\lim_{\alpha\to 1} S_\alpha(Y|X) = H(Y|X),$
$\lim_{\alpha\to 1} \tilde{S}_\alpha(Y|X) = H(Y|X),$
where H(Y|X) is the conditional Shannon entropy. In general, it is not true that $\lim_{\alpha\to 1} \tilde{T}_\alpha(Y|X) = H(Y|X)$.
Proof. 
The second identity follows directly from the definition of conditional probability and the definition of the conditional Shannon entropy; using the definition of $T_\alpha(Y|X)$, the same derivation also yields the first identity. The third identity was proven in Reference [16].
It remains to prove the last statement of the theorem, i.e., that in general
$\lim_{\alpha\to 1} \tilde{T}_\alpha(Y|X) \neq H(Y|X).$
By Theorem 1, $\tilde{T}_\alpha(Y|X)$ is the maximum (for α > 1) or the minimum (for α < 1) over x of $T_\alpha(Y|X=x)$, whereas $S_\alpha(Y|X)$ is the expectation over x of the same quantities. Since $T_\alpha(Y|X=x)$ depends only on the conditional probabilities P(Y=y|X=x), there are joint probability distributions P(X, Y) (for instance, any distribution for which the entropies of the conditional distributions P(Y|X=x) are not all equal) such that
$\lim_{\alpha\to 1} \tilde{T}_\alpha(Y|X) \neq \lim_{\alpha\to 1} S_\alpha(Y|X) = H(Y|X).$ □
Contrary to what happens with the Shannon entropy, for each of the proposals the value of the conditional Tsallis entropy may exceed the corresponding unconditional Tsallis entropy.
Theorem 9.
There are probability distributions P(X, Y) and values of α such that:
$T_\alpha(Y|X) > T_\alpha(Y),$
$S_\alpha(Y|X) > T_\alpha(Y),$
$\tilde{S}_\alpha(Y|X) > T_\alpha(Y),$
$\tilde{T}_\alpha(Y|X) > T_\alpha(Y).$
Proof. 
Consider the following joint probability distribution:

X \ Y    1      2
1        0.45   0.45
2        0.1    0.0

For this distribution we have:
$T_{0.5}(Y) \approx 0.824,$
$T_{0.5}(Y|X) \approx 1.047,$
$S_{0.5}(Y|X) \approx 0.828,$
$T_{3}(Y) \approx 0.371,$
$\tilde{S}_{3}(Y|X) \approx 0.374,$
$\tilde{T}_{3}(Y|X) \approx 0.375.$ □

Bounds on Conditional Tsallis Entropy

As mentioned in the Introduction, one of the properties of the (conditional) Shannon entropy for discrete variables is that it is bounded by the logarithm of the number of elements in the support of the distribution. Furthermore, it is well known that the unconditional Tsallis entropy always lies between 0 and $\frac{m^{1-\alpha}-1}{1-\alpha}$, where m is the number of elements in the support of the distribution. In this subsection, we derive bounds for the conditional Tsallis entropies based on the number of elements in the support of each distribution.
Theorem 10.
Let Z = (X, Y) be any joint random vector defined over sets of size m each. Then,
$0 \leq S_\alpha(Y|X) \leq \frac{m^{1-\alpha}-1}{1-\alpha},$
$0 \leq \tilde{T}_\alpha(Y|X) \leq \frac{m^{1-\alpha}-1}{1-\alpha}.$
Moreover, all of these lower and upper bounds may be attained by suitable probability distributions P(X, Y).
Proof. 
The bounds for $S_\alpha(Y|X)$ follow from the fact that it is an expectation of unconditional Tsallis entropies, each of which satisfies the same bounds.
For the bounds on $\tilde{T}_\alpha(Y|X)$, recall that, by Theorem 1, for α < 1 it equals $\min_x T_\alpha(Y|X=x)$. Note that, for all x, the values $T_\alpha(Y|X=x)$ are (unconditional) Tsallis entropies of the conditional distributions P(Y|X = x), all of which are defined over a set of cardinality m. So, by definition of $\tilde{T}_\alpha$, for some particular x we have $\tilde{T}_\alpha(Y|X) = T_\alpha(Y|X=x)$. The case α > 1 is similar. So, independently of α, for every joint distribution P(X, Y) defined over sets with m elements we have $0 \leq \tilde{T}_\alpha(Y|X) \leq \frac{m^{1-\alpha}-1}{1-\alpha}$, since the same bounds apply to the unconditional Tsallis entropy of each of the conditional distributions. □
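As a quick sanity check on Theorem 10, still using the conditional_tsallis helper from Section 3: a joint distribution whose conditional distributions P(Y|X = x) are all uniform over m symbols attains the upper bound for both $S_\alpha$ and $\tilde{T}_\alpha$, while a distribution in which Y is a deterministic function of X attains the lower bound 0.

```python
import numpy as np

m, alpha = 3, 0.5
bound = (m ** (1 - alpha) - 1) / (1 - alpha)               # maximum unconditional Tsallis entropy
uniform_joint = np.full((m, m), 1.0 / (m * m))             # every conditional P(Y|X=x) is uniform
_, S, _, T_t = conditional_tsallis(uniform_joint, alpha)
print(round(bound, 4), round(S, 4), round(T_t, 4))         # all ~1.4641

deterministic = np.diag(np.full(m, 1.0 / m))               # Y = X, so every conditional is degenerate
_, S, _, T_t = conditional_tsallis(deterministic, alpha)
print(abs(S), abs(T_t))                                    # 0.0 0.0
```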
Theorem 11.
Let Z = (X, Y) be any joint random vector defined over sets of size m each. Then,
if α > 1: $0 \leq T_\alpha(Y|X) \leq \frac{m^{1-\alpha}-1}{1-\alpha}$.
For α < 1, in general, the upper bound does not hold.
Proof. 
Consider first the case α > 1. The result follows directly from Theorem 3 (for α > 1, $T_\alpha(Y|X) \leq S_\alpha(Y|X)$) and the bounds on $S_\alpha(Y|X)$ in Theorem 10.
In order to prove that the inequality does not hold for all α < 1, consider α = 0.1 and the following joint probability distribution:

X \ Y    1      2      3
1        1/9    1/9    0
2        1/9    1/9    1/9
3        0      1/9    1/3

Notice that $\frac{m^{1-\alpha}-1}{1-\alpha} = \frac{3^{0.9}-1}{0.9} \approx 1.875$, while $T_{0.1}(Y|X) \approx 3.371$. For any other α < 1, one can similarly construct a joint probability distribution for which the inequality is violated. □
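The counterexample above is easy to verify numerically (once more with the conditional_tsallis helper from Section 3):

```python
import numpy as np

alpha, m = 0.1, 3
joint = np.array([[1, 1, 0],
                  [1, 1, 1],
                  [0, 1, 3]]) / 9.0
T, _, _, _ = conditional_tsallis(joint, alpha)
bound = (m ** (1 - alpha) - 1) / (1 - alpha)
print(round(T, 3), round(bound, 3))      # 3.371 1.875 -- the upper bound fails for alpha < 1
```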
Theorem 12.
Let Z = (X, Y) be any joint random vector defined over sets of size m each. Then,
if α > 1: $0 \leq \tilde{S}_\alpha(Y|X) \leq \frac{m^{1-\alpha}-1}{1-\alpha}$.
Proof. 
The result follows directly from Theorem 5 (for α > 1, $\tilde{S}_\alpha(Y|X) \leq \tilde{T}_\alpha(Y|X)$) and the bounds on $\tilde{T}_\alpha(Y|X)$ in Theorem 10. □
We conjecture that the above theorem also holds for α < 1. For example, the inequality holds for every uniform joint probability distribution.
We now show that, for any fixed joint probability distribution P(X, Y), three of the forms of conditional Tsallis entropy studied in this paper are non-increasing functions of α. First, we state a simple lemma.
Lemma 1.
If $f_1(x), \ldots, f_m(x)$ are non-increasing real functions, then the function $\max_i f_i(x)$ is also non-increasing.
Theorem 13.
For every probability distribution P(X, Y),
1. $T_\alpha(Y|X)$ is a non-increasing function of α;
2. $S_\alpha(Y|X)$ is a non-increasing function of α;
3. $\tilde{T}_\alpha(Y|X)$ is a non-increasing function of α.
Proof. 
1. Recall that, for any fixed probability distribution, the unconditional Tsallis entropy is a non-increasing function of α; in particular, for every x, $T_\alpha(Y|X=x)$ is a nonnegative, non-increasing function of α. Moreover, since $P(X=x) \leq 1$, each weight $P(X=x)^\alpha$ is also a nonnegative, non-increasing function of α. Hence each term $P(X=x)^\alpha\, T_\alpha(Y|X=x)$ of $T_\alpha(Y|X)$ is non-increasing in α, and so is the sum.
2. This part follows from the fact that $S_\alpha(Y|X)$ is an expectation of unconditional Tsallis entropies (see Definition 1): the weights P(X = x) do not depend on α and each $T_\alpha(Y|X=x)$ is non-increasing in α.
3. Suppose that α > 1. By Theorem 1, $\tilde{T}_\alpha(Y|X) = \max_x T_\alpha(Y|X=x)$, so the claim is a direct consequence of Lemma 1. The case α < 1 can be proven in a similar way, using the fact that the minimum of non-increasing functions is also non-increasing. □
It is easy to show that $\tilde{S}_\alpha$ does not fulfill the property of the last theorem.
Theorem 14.
There exist probability distributions P(X, Y) and parameters α < α′ for which $\tilde{S}_\alpha(Y|X) < \tilde{S}_{\alpha'}(Y|X)$.
Proof. 
Consider the following joint probability distribution:

X \ Y    1      2
1        0.45   0.45
2        0.1    0.0

We have $\tilde{S}_{0.2}(Y|X) \approx 0.563$ and $\tilde{S}_{0.5}(Y|X) \approx 0.621$. □
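The failure of monotonicity for $\tilde{S}_\alpha$ in this example can also be confirmed numerically with the conditional_tsallis helper from Section 3; the authors' online calculator, mentioned next, provides the same kind of computation.

```python
import numpy as np

joint = np.array([[0.45, 0.45], [0.1, 0.0]])
for alpha in (0.2, 0.5):
    _, _, S_t, _ = conditional_tsallis(joint, alpha)
    print(alpha, round(S_t, 3))          # 0.2 -> 0.563, 0.5 -> 0.621: S~ increases with alpha here
```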
We developed a small application that, given the probability distributions and the value of α, computes the values of all conditional Tsallis entropies considered in the paper. The application is self-contained and its use is extremely simple; two use case examples are provided so that the reader can try the calculator. The interested reader can find it at the following link: http://gloss.di.fc.ul.pt/tryit/Tsallis (accessed on 28 October 2021).

6. Conclusions

In this paper, we studied the definitions of conditional Tsallis entropy existing in the literature and considered a possible alternative definition. The new proposal is a natural candidate: it defines the conditional value through an extremum of the Tsallis entropies of the conditional distributions P(Y|X = x). Since an analogous construction exists for the Rényi entropy, this definition was also analyzed here, although it had never been considered in the literature before. The relationships between the four definitions described in this work are summarized in Figure 1.
In our view, a proposal for conditional Tsallis entropy would be expected to satisfy the following properties:
  • the chain rule;
  • convergence to the conditional Shannon entropy as the parameter α tends to 1;
  • values between 0 and the upper bound of the unconditional version.
In Table 1, we summarize the properties that the four proposals satisfy (we also added the property of being a non-increasing function of α). To conclude, none of the proposals fulfills all of the properties. The definition $T_\alpha(Y|X)$ is the candidate that fulfills the most properties.
For future work, since the definitions focus on possibly different aspects of the entropy, it would be important to carry out a deeper study of this area and its possible applications, aiming to develop a theory that identifies the best proposal for each area or, eventually, presents an ultimate version of the conditional Tsallis entropy satisfying all of the desirable properties.

Author Contributions

Conceptualization, A.T., A.S. and L.A.; methodology, A.T., A.S. and L.A.; validation, A.T., A.S. and L.A.; formal analysis, A.T. and A.S.; investigation, A.T., A.S. and L.A.; writing—original draft preparation, A.T. and A.S.; writing—review and editing A.T., A.S. and L.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by FCT—Fundação para a Ciência e a Tecnologia, within CINTESIS, R&D Unit (reference UIDB/4255/2020), within Instituto de Telecomunicações (IT) Research Unit ref. UIDB/EEA/50008/2020 and within LASIGE Research Unit, ref. UIDB/00408/2020 and ref. UIDP/00408/2020. It was also supported by the projects Predict PTDC/CCI-CIF/29877/2017, QuantumMining POCI-01-0145-FEDER-031826 funded by FCT through national funds, by the European Regional Development Fund (FEDER), through the Competitiveness and Internationalization Operational Programme (COMPETE 2020), from EU H2020-SU-ICT-03-2018 project no. 830929 CyberSec4Europe (cybersec4europe.eu), and also the project “Safe Cities”, reference POCI-01-0247-FEDER-041435, financed by Fundo Europeu de Desenvolvimento Regional (FEDER), through COMPETE 2020 and Portugal 2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487.
  2. Daróczy, Z. Generalized information functions. Inf. Control 1970, 16, 36–51.
  3. Havrda, J.; Charvát, F. Quantification method of classification processes. Concept of structural α-entropy. Kybernetika 1967, 3, 30–35.
  4. Wehrl, A. General properties of entropy. Rev. Mod. Phys. 1978, 50, 221–260.
  5. Cover, T.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley: Hoboken, NJ, USA, 2006.
  6. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656.
  7. Tsallis, C. The Nonadditive Entropy Sq and Its Applications in Physics and Elsewhere: Some Remarks. Entropy 2011, 13, 1765–1804.
  8. Osorio, R.; Borland, L.; Tsallis, C. Distributions of high-frequency stock-market observables. In Nonextensive Entropy—Interdisciplinary Applications; Gell-Mann, M., Tsallis, C., Eds.; Oxford University Press: New York, NY, USA, 2004.
  9. Ibrahim, R.W.; Darus, M. Analytic Study of Complex Fractional Tsallis’ Entropy with Applications in CNNs. Entropy 2018, 20, 722.
  10. Mohanalin, B.; Kalra, P.K.; Kumar, N. A novel automatic microcalcification detection technique using Tsallis entropy and a type II fuzzy index. Comput. Math. Appl. 2010, 60, 2426–2432.
  11. Tamarit, F.A.; Cannas, S.A.; Tsallis, C. Sensitivity to initial conditions in the Bak-Sneppen model of biological evolution. Eur. Phys. J. B 1998, 1, 545–548.
  12. Group of Statistical Physics. Available online: http://tsallis.cat.cbpf.br/biblio.htm (accessed on 8 November 2018).
  13. Ribeiro, M.; Henriques, T.; Castro, L.; Souto, A.; Antunes, L.; Costa-Santos, C.; Teixeira, A. The Entropy Universe. Entropy 2021, 23, 222.
  14. Rényi, A. On measures of information and entropy. Berkeley Symp. Math. Statist. Prob. 1961, 1, 547–561.
  15. Furuichi, S. Information theoretical properties of Tsallis entropies. J. Math. Phys. 2006, 47, 023302.
  16. Manije, S.; Gholamreza, M.; Mohammad, A. Conditional Tsallis Entropy. Cyb. Inf. Technol. 2013, 13, 37–42.
  17. Heinrich, F.; Ramzan, F.; Rajavel, F.A.; Schmitt, A.O.; Gültas, M. MIDESP: Mutual Information-Based Detection of Epistatic SNP Pairs for Qualitative and Quantitative Phenotypes. Biology 2021, 10, 921.
  18. Oggier, F.; Datta, A. Renyi entropy driven hierarchical graph clustering. PeerJ Comput. Sci. 2021, 7, e366.
  19. Tao, M.; Wang, S.; Chen, H.; Wang, X. Information space of multi-sensor networks. Inf. Sci. 2021, 565, 128–245.
  20. Jozsa, R.; Schlienz, J. Distinguishability of states and von Neumann entropy. Phys. Rev. A 2000, 62, 012301.
  21. Hassani, H.; Unger, S.; Entezarian, M. Information content measurement of ESG factors via entropy and its impact on society and security. Information 2021, 12, 391.
  22. Shannon, C.E. Communication theory of secrecy systems. Bell Syst. Tech. J. 1949, 28, 656–715.
  23. Bhotto, M.Z.A.; Antoniou, A. A new normalized minimum-error entropy algorithm with reduced computational complexity. In Proceedings of the 2009 IEEE International Symposium on Circuits and Systems, Taipei, Taiwan, 24–27 May 2009; pp. 2561–2564.
  24. Teixeira, A.; Matos, A.; Souto, A.; Antunes, L. Entropy measures vs. Kolmogorov complexity. Entropy 2011, 13, 595–611.
  25. Teixeira, A.; Souto, A.; Matos, A.; Antunes, L. Entropy measures vs. algorithmic information. In Proceedings of the 2010 IEEE International Symposium on Information Theory, Austin, TX, USA, 13–18 June 2010; pp. 1413–1417.
  26. Edgar, T.; Manz, D. Chapter 2-Science and Cyber Security. In Research Methods for Cyber Security; Syngress: Amsterdam, The Netherlands, 2017; pp. 33–62.
  27. Huang, L.; Shen, Y.; Zhang, G.; Luo, H. Information system security risk assessment based on multidimensional cloud model and the entropy theory. In Proceedings of the 2015 IEEE 5th International Conference on Electronics Information and Emergency Communication, Beijing, China, 14–16 May 2015; pp. 11–15.
  28. Lu, R.; Shen, H.; Feng, Z.; Li, H.; Zhao, W.; Li, X. HTDet: A clustering method using information entropy for hardware Trojan detection. Tsinghua Sci. Technol. 2021, 26, 48–61.
  29. Firman, T.; Balázsi, G.; Ghosh, K. Building Predictive Models of Genetic Circuits Using the Principle of Maximum Caliber. Biophys. J. 2017, 113, 2121–2130.
  30. Jost, L. Entropy and diversity. Oikos 2006, 113, 363–375.
  31. Roach, T.N.F. Use and Abuse of Entropy in Biology: A Case for Caliber. Entropy 2020, 22, 1335.
  32. Simpson, E. Measurement of diversity. Nature 1949, 163, 688.
  33. Yin, Y.; Shang, P. Weighted permutation entropy based on different symbolic approaches for financial time series. Phys. A Stat. Mech. Its Appl. 2016, 443, 137–148.
  34. Castiglioni, P.; Parati, G.; Faini, A. Information-Domain Analysis of Cardiovascular Complexity: Night and Day Modulations of Entropy and the Effects of Hypertension. Entropy 2019, 21, 550.
  35. Polizzotto, N.R.; Takahashi, T.; Walker, C.P.; Cho, R.Y. Wide Range Multiscale Entropy Changes through Development. Entropy 2016, 18, 12.
  36. Prabhu, K.P.; Martis, R.J. Diagnosis of Schizophrenia using Kolmogorov Complexity and Sample Entropy. In Proceedings of the 2020 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, 2–4 July 2020; pp. 1–4.
  37. Fehr, S.; Berens, S. On the Conditional Rényi Entropy. IEEE Trans. Inf. Theory 2014, 60, 6801–6810.
  38. Teixeira, A.; Matos, A.; Antunes, L. Conditional Rényi Entropies. IEEE Trans. Inf. Theory 2012, 58, 4273–4277.
Figure 1. Summary of the relations between the several proposals for the definition of conditional Tsallis entropy.
Table 1. Summary of the proved properties of all proposed conditional entropies. The question mark indicates that the property is not known to be fulfilled.

f(Y|X)                                                          $T_\alpha(Y|X)$   $S_\alpha(Y|X)$   $\tilde{S}_\alpha(Y|X)$   $\tilde{T}_\alpha(Y|X)$
Chain rule                                                      yes               no                no                        no
$\lim_{\alpha\to 1} f(Y|X) = H(Y|X)$                            yes               yes               yes                       no
$0 \leq f(Y|X) \leq \frac{|Y|^{1-\alpha}-1}{1-\alpha}$, α > 1   yes               yes               yes                       yes
$0 \leq f(Y|X) \leq \frac{|Y|^{1-\alpha}-1}{1-\alpha}$, α < 1   no                yes               ?                         yes
f is non-increasing with α                                      yes               yes               no                        yes