
An Improved Total Uncertainty Measure in the Evidence Theory and Its Application in Decision Making

School of Big Data and Software Engineering, Chongqing University, Chongqing 401331, China
* Author to whom correspondence should be addressed.
Submission received: 31 March 2020 / Revised: 19 April 2020 / Accepted: 22 April 2020 / Published: 24 April 2020
(This article belongs to the Section Signal and Data Analysis)

Abstract

Dempster–Shafer evidence theory (DS theory) has some superiorities in uncertain information processing for a large variety of applications. However, the problem of how to quantify the uncertainty of a basic probability assignment (BPA) in the DS theory framework remains unresolved. The goal of this paper is to define a new belief entropy with desirable properties for measuring the uncertainty of a BPA. The new entropy can be helpful for uncertainty management in practical applications such as decision making. The proposed uncertainty measure has two components. The first component is an improved version of Dubois–Prade entropy, which aims to capture the non-specificity portion of uncertainty while taking the number of elements in the frame of discernment (FOD) into account. The second component is adopted from Nguyen entropy, which captures the conflict in a BPA. We prove that the proposed entropy satisfies some desired properties proposed in the literature. In addition, the proposed entropy reduces to Shannon entropy if the BPA is a probability distribution. Numerical examples are presented to show the efficiency and superiority of the proposed measure, as well as an application in decision making.

1. Introduction

Uncertain information processing is a hot topic in both the theory and applications of intelligent information processing [1,2,3]. Many theories have been proposed to address the problem, such as fuzzy set theory [4], probability theory [5], Dempster–Shafer evidence theory (DS theory) [6,7], etc. First proposed by Arthur Dempster and Glenn Shafer, DS theory has been widely used in many fields such as risk evaluation [8,9,10], sensor data fusion [11], decision making [12,13], image recognition [14], classification [15,16], clustering [17,18], and so on [19,20]. However, some open issues remain unresolved. First, information fused with the Dempster combination rule may yield counterintuitive results if the pieces of evidence are highly conflicting with each other [21,22,23]. Second, many properties have been proposed to evaluate different uncertainty measures [24,25], but the rationality of some of these properties is still questionable. Third, during the process of calculating entropy, some characteristics of the basic probability assignment are lost. Fourth, evidence under the open world assumption needs further consideration [26,27]. Uncertainty measures, including belief entropy, can be a promising tool for uncertain information modeling, processing, and conflict management in DS theory [28,29].
Uncertainty measurement in DS theory has attracted much attention [30,31,32,33]. Over the years, many approaches have been developed to quantify the uncertainty in DS theory, such as Deng entropy [28], the method of Pouly et al. [34], Yager's interval-valued entropy [35], the cross entropy in Dempster–Shafer theory [36], an uncertainty measure for D numbers [37], and so on [38,39]. To develop a reasonable measure, several properties for the entropy of BPAs in DS theory were proposed by Jirousek and Shenoy as an improved version of the five axiomatic requirements of Shannon entropy [25,40]. The desired properties are: consistency with DS theory semantics, non-negativity, maximum entropy, monotonicity, probability consistency, additivity, and subadditivity. However, none of the existing measures satisfies all seven properties. The properties of belief entropy are still an open issue.
In this paper, we first analyze the rationality of some properties suggested in [24,40] and list seven reasonable properties, and then propose an improved total uncertainty measure in the evidence theory to quantify the uncertainty of a BPA. The hypotheses are that: (1) a BPA is a generalization of a probability distribution; and (2) Shannon entropy can be adapted to DS theory. Thus, the new belief entropy can be a helpful tool for measuring the uncertain degree of information. The proposed belief entropy has two components. The first component is an improved version of Dubois–Prade entropy [24], which captures the non-specificity portion of uncertainty with a consideration of the size of the frame of discernment (FOD). The second component is adopted from Nguyen entropy [41], which captures the conflict in a BPA. We show that our entropy satisfies six of the seven desired properties. In addition, our entropy reduces to Shannon entropy if a BPA degenerates to a probability distribution. The proposed method is applied in decision making, following works in diverse applications [42,43].
The rest of this paper is outlined as follows. In Section 2, the preliminaries of DS theory and entropy are briefly introduced. The improved entropy is presented in Section 3, as well as the analysis of the properties. In Section 4, some numerical examples are carried out to demonstrate the effectiveness and superiority of the proposed method. In Section 5, the proposed entropy is used in decision making. Finally, the conclusion and a discussion of future work are given in Section 6.

2. Preliminaries

Some basic preliminaries are introduced in this section.

2.1. Dempster–Shafer Evidence Theory

Dempster–Shafer evidence theory is widely applied in addressing uncertainties [44,45]. Some basic concepts are introduced as follows [6,7].
Let X be a set of mutually exclusive and exhaustive events, denoted as:
$X = \{\theta_1, \theta_2, \theta_3, \ldots, \theta_{|X|}\},$
where the set X is named the frame of discernment (FOD). The power set of X is defined as follows:
$2^X = \{\emptyset, \{\theta_1\}, \ldots, \{\theta_{|X|}\}, \{\theta_1, \theta_2\}, \ldots, \{\theta_1, \theta_2, \ldots, \theta_i\}, \ldots, X\}.$
For an FOD $X = \{\theta_1, \theta_2, \theta_3, \ldots, \theta_{|X|}\}$, the mass function is a mapping m from $2^X$ to $[0, 1]$, defined as follows:
$m: 2^X \to [0, 1],$
which satisfies the following condition:
$m(\emptyset) = 0 \quad \text{and} \quad \sum_{A \in 2^X} m(A) = 1.$
In DS theory, the mass function is also referred to as a basic probability assignment (BPA). If a subset $a \in 2^X$ satisfies $m(a) > 0$, then a is called a focal element of m. If a BPA m assigns belief 1 to the focal element X, then m is called a vacuous BPA.

2.2. Dempster’s Rule of Combination

Assume there are two independent BPAs $m_1$ and $m_2$; Dempster's rule of combination, denoted as $m = m_1 \oplus m_2$, is defined as follows [6]:
$m(A) = \begin{cases} \dfrac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C), & A \neq \emptyset, \\ 0, & A = \emptyset, \end{cases}$
where K is a normalization constant defined as follows:
$K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C).$
The factor $1 - K$ is assumed to be non-zero. If $K = 1$, then $m_1$ and $m_2$ are totally conflicting and cannot be combined using Dempster's rule; if $K = 0$, then $m_1$ and $m_2$ are non-conflicting.
Dempster's rule of combination combines two BPAs in such a way that the new BPA represents a consensus of the contributing pieces of evidence. It also tends to focus belief on smaller sets, decreasing the uncertainty in the system, which can be useful in the decision making process.
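To make the rule concrete, the following Python sketch implements Equations (5) and (6), representing a BPA as a dictionary mapping focal elements (frozensets) to masses; the representation and the helper name dempster_combine are our own illustration, not part of the original formulation.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPAs (dicts mapping frozenset -> mass) via Equations (5)-(6)."""
    combined = {}
    conflict = 0.0  # K: total mass assigned to empty intersections, Equation (6)
    for (b, mass_b), (c, mass_c) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mass_b * mass_c
        else:
            conflict += mass_b * mass_c
    if conflict >= 1.0:
        raise ValueError("K = 1: totally conflicting BPAs cannot be combined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Combining the sensor reports m1 and m2 of Table 4 (Section 5) reproduces
# the first column of Table 5: {F2}: 0.8969, {F3}: 0.1031.
m1 = {frozenset({"F1"}): 0.41, frozenset({"F2"}): 0.29, frozenset({"F3"}): 0.30}
m2 = {frozenset({"F2"}): 0.90, frozenset({"F3"}): 0.10}
print(dempster_combine(m1, m2))
```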

2.3. Belief Entropy in Dempster–Shafer Evidence Theory

Several methods have been proposed to solve the problem of uncertainty measure in DS theory. Some previous methods are briefly summarized in Table 1.

3. The Improved Total Uncertainty Measure

3.1. The New Measure

To address the uncertainty in the FOD, an improved total uncertainty measure is proposed in this paper. The improved total uncertainty measure, denoted as Q(m), is defined as follows:
$Q(m) = \sum_{A \subseteq X} \frac{|A|}{|X|}\, m(A) \log_2(|A|) + \sum_{A \subseteq X} m(A) \log_2\left(\frac{1}{m(A)}\right),$
where $|A|$ denotes the cardinality of the focal element A, $|X|$ is the number of elements in X, and m is a BPA on the FOD X. The first component $\sum_{A \subseteq X} \frac{|A|}{|X|} m(A) \log_2(|A|)$ is designed as a weighted measure of the total non-specificity among the focal elements. The second component $\sum_{A \subseteq X} m(A) \log_2(1/m(A))$ can be interpreted as the portion that captures uncertainty in the form of conflict. The improved total uncertainty measure also guarantees that the measure reduces to Shannon entropy in Bayesian probability cases. Notice that the coefficient $|A|/|X|$ is added to take the size of the FOD into account; as shown in Table 1, most existing measures do not consider the number of elements in the FOD.
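Equation (7) is straightforward to compute. Below is a minimal Python sketch using the same {frozenset: mass} representation as above; the function name Q and the fod_size parameter are our own choices.

```python
from math import log2

def Q(m, fod_size):
    """Improved total uncertainty measure of Equation (7).

    m: BPA as a dict mapping frozenset (focal element) -> mass.
    fod_size: |X|, the number of elements in the FOD.
    """
    # Weighted non-specificity: sum of (|A|/|X|) m(A) log2 |A|
    non_specificity = sum(len(a) / fod_size * v * log2(len(a))
                          for a, v in m.items())
    # Conflict (Nguyen) part: sum of m(A) log2 (1/m(A))
    discord = sum(v * log2(1.0 / v) for v in m.values() if v > 0)
    return non_specificity + discord

# For a Bayesian BPA the first sum vanishes and Q reduces to Shannon entropy:
bayesian = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.5}
print(Q(bayesian, 2))  # 1.0 bit
```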

3.2. Desired Properties of Belief Entropy

In this paper, we consider some properties of entropy H ( m ) (m is a BPA) proposed in [40,51].
Let X and Y denote random variables with state spaces Ω X and Ω Y , respectively. Let m X and m Y denote distinct BPAs for X and Y, respectively. Let γ X and γ Y denote the vacuous BPAs for X and Y, respectively.
(1) Consistency with DS theory semantics. If a definition of entropy of m, or a portion of a definition, is based on a transform of the BPA m to a probability mass function (PMF) $p_m$, then the transform must satisfy the following condition: $P_{m_1 \oplus m_2} = P_{m_1} \otimes P_{m_2}$.
(2) Non-negativity. $H(m_X) \geq 0$, with equality if and only if there is an $x \in \Omega_X$ such that the singleton $\{x\}$ satisfies $m_X(\{x\}) = 1$. This is similar to the probabilistic case.
(3) Monotonicity. If $|\Omega_X| < |\Omega_Y|$, then $H(\gamma_X) < H(\gamma_Y)$.
(4) Probability consistency. If $m_X$ is a Bayesian BPA for X, then $H(m_X) = \sum_{x \in \Omega_X} m_X(x) \log_2 \frac{1}{m_X(x)}$. In other words, in the case of a Bayesian BPA, the entropy reduces to Shannon entropy [52].
(5) Maximum entropy. $H(m_X) \leq H(\gamma_X)$, with equality if and only if $m_X = \gamma_X$. The argument in [40] is that the vacuous BPA $\gamma_X$ has the most uncertainty among all cases, since $\gamma_X(\Omega_X) = 1$: one hundred percent of the belief is assigned to $\Omega_X$, which provides no information to help with decision making because the belief of each proposition in the FOD is completely unknown.
(6) Additivity. Given two distinct BPAs m X and m Y for X and Y, we can combine them using Dempster’s rule, denoted as m X m Y . Then, H ( m X m Y ) must satisfy the following equation:
$H(m_X \oplus m_Y) = H(m_X) + H(m_Y).$
(7) Set consistency. $H(m) = \log_2(|a|)$ whenever m is deterministic with focal set a, i.e., $m(a) = 1$.
(8) Range. For any BPA $m_X$ for X, $0 \leq H(m_X) \leq \log_2 |\Omega_X|$.
(9) Subadditivity. Suppose m is a BPA for $(X, Y)$ with marginal BPAs $m_X$ for X and $m_Y$ for Y; then,
$H(m) \leq H(m_X) + H(m_Y).$
It is argued in [40] that, if a definition of entropy of m, or a portion of a definition, is based on a transform of a BPA to a PMF, then the transform must satisfy the condition $P_{m_1 \oplus m_2} = P_{m_1} \otimes P_{m_2}$, where $\otimes$ is the combination rule in probability theory and $\oplus$, as mentioned in Section 2, is Dempster's combination rule. Notice that only if a transform is used must it be consistent with Dempster's rule. Since none of the methods in Table 1, except for the Jirousek–Shenoy equation, uses a transform of a BPA to a PMF, we do not discuss the Consistency with DS theory semantics property further in this article.
The Set consistency property requires that $H(\gamma_X) = \log_2 |\Omega_X|$. The Probability consistency property requires that, for the Bayesian uniform BPA $m_u$, $H(m_u) = \log_2 |\Omega_X|$ as well. Together, these two requirements entail that $H(\gamma_X) = H(m_u)$. On the contrary, the Maximum entropy property indicates that the entropy of the vacuous BPA should be strictly maximal: $H(\gamma_X) > H(m_u)$. Before further analysis of these two properties, first consider the following example.
Example 1.
Suppose there is a race with three bikes. We have two experts who make the following statements.
  • Expert 1: “The three bikes and riders are similar”.
  • Expert 2: “I do not have information about the characteristics of each bike and rider”.
If we represent this information as BPAs, the opinion of Expert 1 produces the uniform distribution $(\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$ and that of Expert 2 a vacuous BPA. The question is: if we must bet on the bike that will win this race, on which should we place our bet following the information of these experts? In this case, we have nothing that allows us to favor one bike, so our final decision will be made randomly. Hence, the information, or uncertainty, should be the same: in both cases, it must reach the maximum uncertainty value. Therefore, in this paper, we adopt the following modified Maximum entropy property instead of the original one:
  • Maximum entropy. $H(m_X) \leq H(\gamma_X)$, with equality if and only if $m_X = \gamma_X$ or $m_X$ is a Bayesian uniform BPA.
The Range property requires that, for all BPAs, the value of entropy be bounded by $\log_2 |\Omega_X|$; we disagree. In Shannon's information theory, the maximum number of bits required to represent the uncertainty of a system with n states is $\log_2(n)$, which is reasonable since the system can only be in one of its n states. In DS theory, however, a BPA may focus on several subsets (up to $2^n - 1$ of them), each of which can have a non-empty intersection with the others. Moreover, because of the non-specificity part of total uncertainty, one cannot even say that the total uncertainty should be bounded by $\log_2(2^n - 1)$. Based on this analysis, we do not adopt the Range property.
Based on the analysis above, we list seven properties that may be satisfied by uncertainty measure in DS theory:
  • Non-negativity
  • Monotonicity
  • Probability consistency
  • Maximum entropy
  • Additivity
  • Set consistency
  • Subadditivity

3.3. Property of the New Measure

The analysis of the property for the proposed measure is presented as follows.
(1) Non-negativity. The first component $\sum_{A \subseteq X} \frac{|A|}{|X|} m(A) \log_2(|A|) = 0$ and the second component $\sum_{A \subseteq X} m(A) \log_2(1/m(A)) = 0$ if and only if there exists an $x \in X$ with $m(\{x\}) = 1$. Since $|A|/|X| > 0$ for every focal element, both components are non-negative, and hence $Q(m) \geq 0$. Therefore, Q(m) satisfies the Non-negativity property.
(2) Monotonicity. For a vacuous BPA $\gamma_X$, $Q(\gamma_X) = \log_2 |X|$, which is monotonically increasing in $|X|$. Thus, Q(m) satisfies the Monotonicity property.
(3) Probability consistency. If m is Bayesian, then every focal element is a singleton and the first component $\sum_{A \subseteq X} \frac{|A|}{|X|} m(A) \log_2(|A|) = 0$. In this case, Q(m) degenerates to Shannon entropy. Thus, Q(m) satisfies the Probability consistency property.
(4) Maximum entropy. Consider a Bayesian uniform BPA $m_u$ on an FOD X with $n = |X|$, so that $m_u(\{x\}) = \frac{1}{n}$ for each singleton, and consider the vacuous BPA $\gamma_X$ on the same FOD. A direct calculation shows that the entropies of both BPAs reach the maximum value $\log_2 n$, thus Q(m) satisfies the (modified) Maximum entropy property.
(5) Additivity. Let $X = \{x_1, x_2, x_3\}$ and $Y = \{y_1, y_2\}$, and let $X \times Y$ be the product space of X and Y. The BPAs for X and Y are listed as follows:
$m_X(\{x_1, x_2\}) = 0.6, \quad m_X(\{x_3\}) = 0.1, \quad m_X(X) = 0.3, \quad m_Y(Y) = 1.$
Now, we build the BPA m = m X × m Y on X × Y . The BPA m has the following masses:
$m(\{z_{11}, z_{12}, z_{21}, z_{22}\}) = 0.6, \quad m(\{z_{31}, z_{32}\}) = 0.1, \quad m(X \times Y) = 0.3,$
where z i j = ( x i , y j ) .
The values of uncertainty via Q ( m ) are:
$Q(m) = 2.9043, \quad Q(m_X) + Q(m_Y) = 3.1710.$
Clearly, Q(m) does not meet the requirement of the Additivity property (a numerical check of this counterexample is sketched at the end of this subsection). The second component $\sum_{A \subseteq X} m(A) \log_2(1/m(A))$ does satisfy additivity, because the log of a product is the sum of the logs: letting $R = A \times B \subseteq X \times Y$, we have $\sum_{R \subseteq X \times Y} m(R) \log_2 \frac{1}{m(R)} = \sum_{A \subseteq X, B \subseteq Y} m_X(A)\, m_Y(B) \log_2 \frac{1}{m_X(A)\, m_Y(B)} = \sum_{A \subseteq X} m_X(A) \log_2 \frac{1}{m_X(A)} + \sum_{B \subseteq Y} m_Y(B) \log_2 \frac{1}{m_Y(B)}$. The first component $\sum_{A \subseteq X} \frac{|A|}{|X|} m(A) \log_2(|A|)$, however, does not meet the requirement of additivity because of the coefficient $|A|/|X|$. Thus, Q(m) does not satisfy the Additivity property. Notice that, in probability cases, Q(m) degenerates to Shannon entropy and additivity holds. In addition, while both Deng entropy and Q(m) fail to satisfy this property, their performances shown in Section 4 and Section 5 are still reasonable.
(6) Set consistency. For the vacuous BPA $\gamma_X$ of X, Q(m) reduces to Dubois–Prade entropy, giving $Q(\gamma_X) = \log_2 |X|$, and, according to Table 1, Dubois–Prade entropy satisfies the Set consistency property; thus, Q(m) satisfies the Set consistency property.
(7) Subadditivity. Consider the BPA m for $(X, Y)$ with $m(\{(x, y), (\bar{x}, \bar{y})\}) = m(\{(\bar{x}, y), (x, \bar{y})\}) = \frac{1}{2}$. For this BPA, $Q(m) = 1.5000$, while the marginals $m_X$ and $m_Y$ are the vacuous BPAs $\gamma_X$ and $\gamma_Y$, giving $Q(m_X) + Q(m_Y) = 2$. The inequality also holds when m reduces to a probability distribution, since Q(m) then degenerates to Shannon entropy, which is subadditive. Therefore, Q(m) satisfies the Subadditivity property.
As shown in Table 1, both Pal et al.'s equation and Q(m) satisfy six of the seven properties listed in Section 3.2; the limitation of Pal et al.'s measure is discussed in Section 4.
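As a numerical check, the additivity counterexample above can be reproduced with the Q sketch from Section 3.1 (the labels z_ij stand for the pairs (x_i, y_j)):

```python
m_X = {frozenset({"x1", "x2"}): 0.6, frozenset({"x3"}): 0.1,
       frozenset({"x1", "x2", "x3"}): 0.3}
m_Y = {frozenset({"y1", "y2"}): 1.0}
m = {frozenset({"z11", "z12", "z21", "z22"}): 0.6,
     frozenset({"z31", "z32"}): 0.1,
     frozenset({"z11", "z12", "z21", "z22", "z31", "z32"}): 0.3}
print(round(Q(m, 6), 4))                # 2.9043
print(round(Q(m_X, 3) + Q(m_Y, 2), 4))  # 3.171 -> additivity fails
```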

4. Numerical Examples

In this section, some typical numerical examples are presented to show the effectiveness of the proposed measure.
Example 2
(Adopted from [28]). Given a frame of discernment X with 15 elements, denoted as Elements 1–15, the BPA is given as follows:
$m(\{3, 4, 5\}) = 0.05, \quad m(\{6\}) = 0.05, \quad m(A) = 0.8, \quad m(X) = 0.1.$
Table 2 and Figure 1 list the values of the proposed entropy Q(m) as A grows. Figure 1 shows that the uncertainty degree measured by the proposed measure increases along with the growing size of A.
This is rational, since more information volume becomes unknown as the size of A rises.
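Table 2 can be reproduced with the Q sketch from Section 3.1; the loop below grows A from {1} to {1, ..., 14} as in Example 2:

```python
X = frozenset(range(1, 16))  # the 15-element FOD
for k in range(1, 15):
    A = frozenset(range(1, k + 1))
    m = {frozenset({3, 4, 5}): 0.05, frozenset({6}): 0.05, A: 0.8, X: 0.1}
    print(f"|A| = {k:2d}: Q(m) = {Q(m, 15):.4f}")  # 1.4285, 1.5351, ...
```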
Figure 2 shows the performance of other uncertainty measures, including Deng entropy [28], Dubois–Prade's entropy [24], Höhle's entropy [49], Yager's entropy [48], Klir–Ramer's entropy [46], Klir–Parviz's entropy [49], and Pal et al.'s entropy [50]. According to Figure 2, only Deng entropy, Dubois–Prade's entropy, Pal et al.'s entropy, and the proposed entropy increase monotonically with the size of A; the other measures either decrease irregularly or fluctuate as the size of A grows. This is because the other methods do not consider the sizes of A and X simultaneously in their definitions, whereas using more of the available information should mean less uncertainty.
Example 3
(Adopted from [53]). Consider two different FODs $\Theta_1 = \{a, b, c, d\}$ and $\Theta_2 = \{a, b, c\}$; the BPAs are given as follows:
$m_1: m_1(\{a, b\}) = 0.4, \; m_1(\{c, d\}) = 0.6; \qquad m_2: m_2(\{a, c\}) = 0.4, \; m_2(\{b, c\}) = 0.6.$
According to the definitions in Table 1, the uncertainty of $m_1$ and $m_2$ can be calculated with different uncertainty measures; the results are shown in Table 3. The results calculated by Deng entropy, Pal et al.'s entropy, and Dubois–Prade entropy fail to show any difference in the uncertain degree between the two bodies of evidence. The FOD of $m_1$ consists of four elements, denoted a, b, c, and d, while the FOD of $m_2$ has only three elements, denoted a, b, and c. It is thus expected that the uncertainties of these two BPAs should differ. Deng entropy, Pal et al.'s entropy, and Dubois–Prade entropy fail to measure this difference, while the proposed method captures it by considering the size of the FOD. The final result seems rational: although there are fewer elements in the second FOD, the overlapping focal elements of $m_2$ contribute more uncertainty, while the focal elements of $m_1$ are disjoint. Therefore, the proposed method appears to be a reasonable way to measure the uncertainty of evidence under such circumstances.
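The last column of Table 3 follows directly from the Q sketch in Section 3.1:

```python
m1 = {frozenset({"a", "b"}): 0.4, frozenset({"c", "d"}): 0.6}
m2 = {frozenset({"a", "c"}): 0.4, frozenset({"b", "c"}): 0.6}
print(round(Q(m1, 4), 4))  # 1.471  (|FOD| = 4)
print(round(Q(m2, 3), 4))  # 1.6376 (|FOD| = 3)
```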

5. Application in Conflict Data Fusion

In this section, the proposed measure is applied to a case study on conflict data-based decision making. The dataset is adopted from [53,54].

5.1. Problem Statement

Suppose the FOD is $\Theta = \{F_1, F_2, F_3\}$, which consists of three fault types of a machine. The diagnosis sensors are denoted as $S = \{S_1, S_2, S_3, S_4, S_5\}$; the five sensors are positioned at different places to collect diagnosis data. The readings, represented as BPAs, are shown in Table 4.

5.2. Decision Making Procedure

The process for decision making based on the improved belief entropy is shown in Figure 3. Six steps are designed as follows.

Step 1

Data from sensors are modeled as BPAs. As shown in Table 4, each piece of evidence is modeled as a BPA.

Step 2

Measure the uncertain degree of each BPA using the improved total uncertainty measure in Equation (7). Generally, the more dispersively the mass is distributed over the power set, the larger the entropy of the BPA. The entropy of each BPA is calculated as follows: $Q(m_1) = 1.5664$, $Q(m_2) = 0.4690$, $Q(m_3) = 1.4878$, $Q(m_4) = 1.5700$, and $Q(m_5) = 1.4955$. Notice that $Q(m_2)$ is much smaller than the others because $m_2$ assigns a belief of 90% to $F_2$, while the other BPAs are more dispersive.

Step 3

Calculate the relative weight based on the uncertain degree of each piece of evidence. It is commonly accepted that the bigger the entropy, the higher the uncertain degree. The relative weight of each BPA is defined according to the new belief entropy: for the ith BPA, the corresponding relative weight among all n BPAs is defined as follows:
$W_{BPA}(m_i) = \dfrac{Q(m_i)}{\sum_{j=1}^{n} Q(m_j)}.$
The relative weight of each BPA in Table 4 can be calculated with Equation (11). The calculation results are: W B P A ( m 1 ) = 0.2377 , W B P A ( m 2 ) = 0.0712 , W B P A ( m 3 ) = 0.2258 , W B P A ( m 4 ) = 0.2383 , and W B P A ( m 5 ) = 0.2270 .

Step 4

Modify the original BPAs using the proposed measure. Using the relative weight of each BPA, we unify the BPAs given by all sensors and generate one weighted-average BPA, which is used in the final data fusion. For a proposition A, the modified BPA is derived according to the following function:
$m(A) = \sum_{i=1}^{n} m_i(A)\, W_{BPA}(m_i).$
According to Equation (12), the BPAs in Table 4 are modified, with the following result: $m(F_1) = 0.4957$, $m(F_2) = 0.1953$, $m(F_3) = 0.0784$, $m(\{F_1, F_3\}) = 0.2305$.

Step 5

Fuse the evidence by combining the modified BPA with itself $(n-1)$ times using Dempster's rule of combination in Equations (5) and (6). The final fusion result is as follows:
$m(F_1) = ((((m \oplus m) \oplus m) \oplus m) \oplus m)(F_1) = 0.9849, \quad m(F_2) = 0.0014, \quad m(F_3) = 0.0105, \quad m(\{F_1, F_3\}) = 0.0032.$
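For reference, the sketch below chains Steps 2–5 end to end, reusing the Q and dempster_combine helpers sketched in Sections 3.1 and 2.2; it reproduces the weights, the modified BPA, and the final fusion result reported above.

```python
# Sensor reports from Table 4, on the FOD {F1, F2, F3}
reports = [
    {frozenset({"F1"}): 0.41, frozenset({"F2"}): 0.29, frozenset({"F3"}): 0.30},
    {frozenset({"F2"}): 0.90, frozenset({"F3"}): 0.10},
    {frozenset({"F1"}): 0.58, frozenset({"F2"}): 0.07, frozenset({"F1", "F3"}): 0.35},
    {frozenset({"F1"}): 0.55, frozenset({"F2"}): 0.10, frozenset({"F1", "F3"}): 0.35},
    {frozenset({"F1"}): 0.60, frozenset({"F2"}): 0.10, frozenset({"F1", "F3"}): 0.30},
]
# Steps 2-3: entropies (Equation (7)) and relative weights (Equation (11))
entropies = [Q(m, 3) for m in reports]
weights = [q / sum(entropies) for q in entropies]
# Step 4: weighted-average BPA (Equation (12))
weighted = {}
for m, w in zip(reports, weights):
    for a, v in m.items():
        weighted[a] = weighted.get(a, 0.0) + w * v
# Step 5: combine the weighted BPA with itself n - 1 = 4 times
fused = weighted
for _ in range(len(reports) - 1):
    fused = dempster_combine(fused, weighted)
for a, v in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(a), round(v, 4))  # F1: 0.9849, F3: 0.0105, {F1,F3}: 0.0032, F2: 0.0014
```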

Step 6

Make a decision based on the data fusion results. From the original data in Table 4, the report from the second sensor is highly conflicting with the other sensors on $F_1$ and $F_2$. Based on all the data in Table 4, intuitively, $F_1$ should be recognized as the potential fault type. The fusion result with five sensors assigns a belief of over 98% to the potential fault type $F_1$.

5.3. Discussion

The results with different information fusion methods are shown in Table 5. The fusion results for fault type identification based on the proposed method with two, three, four, and five sensors are shown in Table 6.
Figure 4 shows the performance of different combination methods with two sensor reports. One cannot make a decision based on Yager's combination rule since, in this case, the universal set m(X) has the highest belief among all the propositions. The other methods assign a belief of more than 80% to $F_2$, although, according to prior knowledge, the report $m_2$ may come from a faulty sensor and $F_2$ cannot be the potential fault. The result of the proposed method has the lowest belief on $F_2$. In this case, $m_1$ and $m_2$ are highly conflicting with each other: $m_1(F_1) = 0.41$, $m_1(F_2) = 0.29$, $m_1(F_3) = 0.30$, $m_1(\{F_1, F_3\}) = 0$, while $m_2(F_1) = 0$, $m_2(F_2) = 0.90$, $m_2(F_3) = 0.10$, $m_2(\{F_1, F_3\}) = 0$. Clearly, $m_2(F_2) = 0.90$ is much more influential than $m_1(F_1) = 0.41$, which leads the decision making procedure to allocate too much weight to $m_2$ in Step 3, resulting in the failure to identify the right target.
Figure 5 shows the fusion results with three sensor reports. The result of Dempster's rule seems counterintuitive, as it assigns the highest belief to $F_2$. The other methods assign the highest belief to the right fault $F_1$; however, the proposed method is the only one that assigns over 90% of the belief to $F_1$, while none of the other methods assigns more than 60%. In this case, the proposed method successfully identifies the right fault with only three sensor reports and with the highest belief degree.
As shown in Figure 6, the fusion result of Dempster's combination rule with four BPAs still fails to identify the right fault type $F_1$ due to the conflicting report given by $m_2$, while the proposed method has a higher belief degree on the right fault type $F_1$ than the other methods.
The fusion results with all five sensors are shown in Figure 7; the proposed method has the highest belief, 98.49%, on the right fault $F_1$.
Table 5 summarizes Figure 4, Figure 5, Figure 6 and Figure 7. For Dempster's rule, the belief assigned to $F_1$ remains zero no matter how many BPAs are fused, although, intuitively, as shown in Table 4, $F_1$ should be identified as the target. For Yager's rule, because of the belief assigned to m(X), the fusion result of five BPAs assigns only a belief of 77.32% to $F_1$, while the other methods except Dempster's perform much better, with beliefs of over 90% on $F_1$.
The fault type identification based on the proposed method with two, three, four, and five sensors is shown in Table 6. As shown in the table and Figure 5, the proposed method is the only one that assigns over 90% to the right fault $F_1$ when only three sensor reports are given. Suppose that the data in Table 4 consist only of $m_1$, $m_2$, and $m_3$; based on our intuition and the provided information, $F_1$ should be the target, and the result shows that the proposed method indeed identifies $F_1$. The later fusion results with four and five BPAs also indicate that $F_1$ is the right fault. From this point of view, a decision made with the proposed method from a limited number of reports has a certain degree of validity.
According to the fusion results, the proposed method has several advantages over the other methods. First, as shown in Section 4, the proposed method can effectively measure the uncertainty degrees of two different BPAs even if the same belief values are assigned over different FODs, where both Deng entropy and Dubois–Prade entropy fail. Second, as presented in Section 3, Q(m) satisfies six of the seven properties, which supports the fusion results in Table 5 being reasonable and consistent with our intuition. Even in special cases such as a vacuous BPA or probability cases, Q(m) still gives rational results. In addition, as shown in Figure 5, the proposed method recognizes the right fault from a limited number of reports, while the other methods cannot make the right decision until there are four reports. The proposed method reduces the interference of conflicting evidence more efficiently than the other methods because it considers not only the uncertainty appearing in the mass function, but also the size of the FOD and the size of each proposition; as shown in Table 1, only the proposed method considers the size of the FOD. Finally, the proposed method is based on the information volume measured by a modified belief entropy, so its physical meaning is clear.
Several factors contribute to the performance of the decision-making procedure. First, all sensor data are preprocessed into BPAs before the decision-making procedure; with the corresponding BPAs, the process successfully identifies the fault and suppresses the conflicting evidence using the proposed measure. The effectiveness and superiority demonstrated in Section 3 and Section 4 also underpin the efficiency of the decision-making approach. Finally, the relative weights are calculated with the proposed measure and the final result is obtained with Dempster's combination rule, which combines the merits of Dempster's rule with the effectiveness of the proposed measure.

6. Conclusions

An improved total uncertainty measure based on total non-specificity and the uncertainty in the form of conflict is proposed in this paper. The rationality of some previous properties is discussed, and seven desired properties are listed for defining a meaningful measure. The new measure satisfies six of the seven desired properties of belief entropy. The proposed entropy not only captures the non-specificity and conflict portions of uncertainty, but also considers the size of the FOD and the size of each proposition with respect to the FOD. Numerical examples show that the proposed entropy can quantify the uncertain degree of a BPA more accurately than the other uncertainty measures when the same belief values are assigned over different FODs. In the cases of a vacuous BPA and a uniform distribution, the proposed method improves on the performance of other measures and provides results that are consistent with our intuition.
A decision making approach based on the proposed measure was applied to a case study. The fusion results show the superiority and effectiveness of the improved method in comparison with the other methods. In future studies, the proposed method will be applied to more real-world applications such as image compression, image recognition, etc.

Author Contributions

Conceptualization, M.Q. and Y.T.; Formal analysis, Y.T. and J.W.; Funding acquisition, J.W.; Methodology, M.Q. and Y.T.; Project administration, Y.T. and J.W.; Validation, M.Q. and Y.T.; Writing—original draft, M.Q.; and Writing—review and editing, M.Q. and Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key Research and Development Project of China (2018YFF0214700) and National Natural Science Foundation of China (61672117).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, X.; Mahadevan, S. Ensemble machine learning models for aviation incident risk prediction. Decis. Support Syst. 2019, 116, 48–63. [Google Scholar] [CrossRef]
  2. Fei, L.; Deng, Y. A new divergence measure for basic probability assignment and its applications in extremely uncertain environments. Int. J. Intell. Syst. 2019, 34, 584–600. [Google Scholar] [CrossRef]
  3. He, Z.; Jiang, W.; Chan, F.T. Evidential supplier selection based on interval data fusion. Int. J. Fuzzy Syst. 2018, 20, 1159–1171. [Google Scholar] [CrossRef] [Green Version]
  4. Zadeh, L.A. Fuzzy Sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef] [Green Version]
  5. Feller, W. An Introduction to Probability Theory and Its Applications II; John Wiley & Sons: Chichester, UK, 1950. [Google Scholar]
  6. Dempster, A.P. Upper and Lower Probabilities Induced by a Multi-Valued Mapping. Ann. Math. Stat. 1967, 38, 325–339. [Google Scholar] [CrossRef]
  7. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976. [Google Scholar]
  8. Vandecasteele, F.; Merci, B.; Verstockt, S. Reasoning on multi-sensor geographic smoke spread data for fire development and risk analysis. Fire Saf. J. 2016, 86, 65–74. [Google Scholar] [CrossRef]
  9. Zheng, H.; Tang, Y. A Novel Failure Mode and Effects Analysis Model Using Triangular Distribution-Based Basic Probability Assignment in the Evidence Theory. IEEE Access 2020, 8, 66813–66827. [Google Scholar] [CrossRef]
  10. Zhang, L.; Wu, X.; Zhu, H.; AbouRizk, S.M. Perceiving safety risk of buildings adjacent to tunneling excavation: An information fusion approach. Autom. Constr. 2017, 73, 88–101. [Google Scholar] [CrossRef]
  11. Xiao, F. Multi-sensor data fusion based on the belief divergence measure of evidences and the belief entropy. Inf. Fusion 2019, 46, 23–32. [Google Scholar] [CrossRef]
  12. Fan, G.; Zhong, D.; Yan, F.; Yue, P. A hybrid fuzzy evaluation method for curtain grouting efficiency assessment based on an AHP method extended by D numbers. Expert Syst. Appl. 2016, 44, 289–303. [Google Scholar] [CrossRef]
  13. Fu, C.; Yang, J.B.; Yang, S.L. A group evidential reasoning approach based on expert reliability. Eur. J. Oper. Res. 2015, 246, 886–893. [Google Scholar] [CrossRef]
  14. Moghaddam, H.A.; Ghodratnama, S. Toward semantic content-based image retrieval using Dempster–Shafer theory in multi-label classification framework. Int. J. Multimed. Inf. Retr. 2017, 6, 317–326. [Google Scholar] [CrossRef]
  15. Liu, Z.G.; Pan, Q.; Dezert, J.; Mercier, G. Hybrid Classification System for Uncertain Data. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 2783–2790. [Google Scholar] [CrossRef] [Green Version]
  16. Liu, Z.; Liu, Y.; Dezert, J.; Cuzzolin, F. Evidence combination based on credal belief redistribution for pattern classification. IEEE Trans. Fuzzy Syst. 2019, 28, 618–631. [Google Scholar] [CrossRef] [Green Version]
  17. Su, Z.G.; Denoeux, T. BPEC: Belief-peaks evidential clustering. IEEE Trans. Fuzzy Syst. 2018, 27, 111–123. [Google Scholar] [CrossRef]
  18. Meng, J.; Fu, D.; Tang, Y. Belief-peaks clustering based on fuzzy label propagation. Appl. Intell. 2020, 50, 1259–1271. [Google Scholar] [CrossRef]
  19. Fu, C.; Xu, D.L. Determining attribute weights to improve solution reliability and its application to selecting leading industries. Ann. Oper. Res. 2016, 245, 401–426. [Google Scholar] [CrossRef]
  20. Li, R.; Li, H.; Tang, Y. An Improved Method to Manage Conflict Data Using Elementary Belief Assignment Function in the Evidence Theory. IEEE Access 2020, 8, 37926–37932. [Google Scholar] [CrossRef]
  21. Song, Y.; Wang, X.; Wu, W.; Quan, W.; Huang, W. Evidence combination based on credibility and non-specificity. Pattern Anal. Appl. 2018, 21, 167–180. [Google Scholar] [CrossRef]
  22. Zhang, W.; Deng, Y. Combining conflicting evidence using the DEMATEL method. Soft Comput. 2019, 23, 8207–8216. [Google Scholar] [CrossRef]
  23. Martin, A. Conflict management in information fusion with belief functions. In Information Quality in Information Fusion and Decision Making; Springer: Berlin/Heidelberg, Germany, 2019; pp. 79–97. [Google Scholar]
  24. Dubois, D.; Prade, H. Properties of measures of information in evidence and possibility theories. Fuzzy Sets Syst. 1987, 24, 161–182. [Google Scholar] [CrossRef]
  25. Jiroušek, R.; Shenoy, P.P. On properties of a new decomposable entropy of Dempster-Shafer belief functions. Int. J. Approx. Reason. 2020, 119, 260–279. [Google Scholar] [CrossRef]
  26. Deng, Y. Generalized evidence theory. Appl. Intell. 2015, 43, 530–543. [Google Scholar] [CrossRef] [Green Version]
  27. Zhou, X.; Tang, Y. A Note on Incomplete Information Modeling in the Evidence Theory. IEEE Access 2019, 7, 166410–166414. [Google Scholar] [CrossRef]
  28. Deng, Y. Deng entropy. Chaos Solitons Fractals 2016, 91, 549–553. [Google Scholar] [CrossRef]
  29. Zheng, H.; Tang, Y. Deng Entropy Weighted Risk Priority Number Model for Failure Mode and Effects Analysis. Entropy 2020, 22, 280. [Google Scholar] [CrossRef] [Green Version]
  30. Deng, X.; Xiao, F.; Deng, Y. An improved distance-based total uncertainty measure in belief function theory. Appl. Intell. 2017, 46, 898–915. [Google Scholar] [CrossRef]
  31. Jiang, W.; Wang, S. An uncertainty measure for interval-valued evidences. Int. J. Comput. Commun. Control 2017, 12, 631–644. [Google Scholar] [CrossRef] [Green Version]
  32. Yang, Y.; Han, D. A new distance-based total uncertainty measure in the theory of belief functions. Knowl.-Based Syst. 2016, 94, 114–123. [Google Scholar] [CrossRef]
  33. Wang, X.; Song, Y. Uncertainty measure in evidence theory with its applications. Appl. Intell. 2018, 48, 1672–1688. [Google Scholar] [CrossRef]
  34. Pouly, M.; Kohlas, J.; Ryan, P.Y. Generalized information theory for hints. Int. J. Approx. Reason. 2013, 54, 228–251. [Google Scholar] [CrossRef]
  35. Yager, R.R. Interval valued entropies for Dempster–Shafer structures. Knowl.-Based Syst. 2018, 161, 390–397. [Google Scholar] [CrossRef]
  36. Khalaj, M.; Tavakkoli-Moghaddam, R.; Khalaj, F.; Siadat, A. New definition of the cross entropy based on the Dempster-Shafer theory and its application in a decision-making process. Commun. Stat. Theory Methods 2020, 49, 909–923. [Google Scholar] [CrossRef]
  37. Deng, X.; Jiang, W. A total uncertainty measure for D numbers based on belief intervals. Int. J. Intell. Syst. 2019, 34, 3302–3316. [Google Scholar] [CrossRef] [Green Version]
  38. Jiang, W. A correlation coefficient for belief functions. Int. J. Approx. Reason. 2018, 103, 94–106. [Google Scholar] [CrossRef] [Green Version]
  39. Gao, X.; Deng, Y. The Pseudo-Pascal Triangle of Maximum Deng Entropy. Int. J. Comput. Commun. Control 2020, 15. [Google Scholar] [CrossRef] [Green Version]
  40. Jiroušek, R.; Shenoy, P.P. A new definition of entropy of belief functions in the Dempster–Shafer theory. Int. J. Approx. Reason. 2018, 92, 49–65. [Google Scholar] [CrossRef] [Green Version]
  41. Nguyen, H.T. On entropy of random sets and possibility distributions. Anal. Fuzzy Inf. 1987, 1, 145–156. [Google Scholar]
  42. Xiao, F. A multiple-criteria decision-making method based on D numbers and belief entropy. Int. J. Fuzzy Syst. 2019, 21, 1144–1153. [Google Scholar] [CrossRef]
  43. Song, Y.; Wang, X.; Zhu, J.; Lei, L. Sensor dynamic reliability evaluation based on evidence theory and intuitionistic fuzzy sets. Appl. Intell. 2018, 48, 3950–3962. [Google Scholar] [CrossRef]
  44. Zhou, K.; Martin, A.; Pan, Q.; Liu, Z. SELP: Semi–supervised evidential label propagation algorithm for graph data clustering. Int. J. Approx. Reason. 2018, 92, 139–154. [Google Scholar] [CrossRef] [Green Version]
  45. Liu, Z.G.; Zhang, Z.; Liu, Y.; Dezert, J.; Pan, Q. A new pattern classification improvement method with local quality matrix based on K-NN. Knowl.-Based Syst. 2019, 164, 336–347. [Google Scholar] [CrossRef]
  46. MacKay, D.J.C. Information Theory, Inference and Learning Algorithms; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  47. Maeda, Y.; Ichihashi, H. An uncertainty measure with monotonicity under the random set inclusion. Int. J. Gener. Syst. 1993, 21, 379–392. [Google Scholar] [CrossRef]
  48. Yager, R.R. Entropy and specificity in a mathematical theory of evidence. Int. J. Gener. Syst. 1983, 9, 249–260. [Google Scholar] [CrossRef]
  49. Klir, G.J.; Parviz, B. A note on the measure of discord. In Uncertainty in Artificial Intelligence; Elsevier: Amsterdam, The Netherlands, 1992; pp. 138–141. [Google Scholar]
  50. Pal, N.R.; Bezdek, J.C.; Hemasinha, R. Uncertainty measures for evidential reasoning I: A review. Int. J. Approx. Reason. 1992, 7, 165–183. [Google Scholar] [CrossRef] [Green Version]
  51. Klir, G.J.; Wierman, M.J. Uncertainty-Based Information: Elements of Generalized Information Theory; Physica: Heidelberg, Germany, 2013; Volume 15. [Google Scholar]
  52. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef] [Green Version]
  53. Zhou, D.; Tang, Y.; Jiang, W. An improved belief entropy and its application in decision-making. Complexity 2017, 2017, 4359195. [Google Scholar] [CrossRef]
  54. Yong, D.; WenKang, S.; ZhenFu, Z.; Qi, L. Combining belief functions based on distance of evidence. Decis. Support Syst. 2004, 38, 489–493. [Google Scholar] [CrossRef]
  55. Yager, R.R. On the Dempster-Shafer framework and new combination rules. Inf. Sci. 1987, 41, 93–137. [Google Scholar] [CrossRef]
  56. Murphy, C.K. Combining belief functions when evidence conflicts. Decis. Support Syst. 2000, 29, 1–9. [Google Scholar] [CrossRef]
  57. Zhang, Z.; Liu, T.; Chen, D.; Zhang, W. Novel algorithm for identifying and fusing conflicting data in wireless sensor networks. Sensors 2014, 14, 9562–9581. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Q(m) changes with the variable A.
Figure 2. Uncertain degree with the proposed measure and other methods.
Figure 3. Decision making process based on the improved total uncertainty measure.
Figure 4. Fusion results with two BPAs.
Figure 5. Fusion results with three BPAs.
Figure 6. Fusion results with four BPAs.
Figure 7. Fusion results with five BPAs.
Table 1. Belief entropy in Dempster–Shafer evidence theory.

| Name | Definition | Set Cons. | Non-Neg. | Max. Ent. | Monoton. | Prob. Cons. | Additivity | Subadditivity |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Dubois–Prade equation [24] | $H_d(m) = \sum_{A \subseteq X} m(A) \log_2 |A|$ | yes | no | no | yes | no | yes | yes |
| Nguyen equation [41] | $H_n(m) = \sum_{A \subseteq X} m(A) \log_2 \frac{1}{m(A)}$ | no | no | no | no | yes | yes | no |
| Klir–Ramer equation [46] | $H_{kr}(m) = -\sum_{A \subseteq X} m(A) \log_2 \sum_{B \subseteq X} m(B) \frac{|A \cap B|}{|B|}$ | no | yes | no | yes | yes | yes | no |
| Klir–Parviz equation [47] | $H_{kp}(m) = -\sum_{A \subseteq X} m(A) \log_2 \sum_{B \subseteq X} m(B) \frac{|A \cap B|}{|A|}$ | no | yes | no | yes | yes | yes | no |
| Deng equation [28] | $H_{De}(m) = \sum_{A \subseteq X} m(A) \log_2 \frac{2^{|A|} - 1}{m(A)}$ | no | yes | no | no | yes | no | no |
| Jirousek–Shenoy equation [40] | $H_{js}(m) = \sum_{x \in X} Pl\_P_m(x) \log_2 \frac{1}{Pl\_P_m(x)} + \sum_{A \subseteq X} m(A) \log_2 |A|$ | yes | yes | no | yes | yes | yes | no |
| Yager equation [48] | $H_y(m) = -\sum_{A \subseteq X} m(A) \log_2 Pl(A)$ | no | no | no | no | yes | yes | no |
| Höhle equation [49] | $H_o(m) = -\sum_{A \subseteq X} m(A) \log_2 Bel(A)$ | no | no | no | no | yes | yes | no |
| Pal et al. equation [50] | $H_b(m) = \sum_{A \subseteq X} m(A) \log_2 \frac{|A|}{m(A)}$ | yes | yes | yes | yes | yes | yes | no |
Table 2. Proposed measure Q(m) with A.

| Cases | Q(m) |
| --- | --- |
| A = {1} | 1.4285 |
| A = {1, 2} | 1.5351 |
| A = {1, 2, 3} | 1.6821 |
| A = {1, ..., 4} | 1.8551 |
| A = {1, ..., 5} | 2.0476 |
| A = {1, ..., 6} | 2.2557 |
| A = {1, ..., 7} | 2.4765 |
| A = {1, ..., 8} | 2.7085 |
| A = {1, ..., 9} | 2.9500 |
| A = {1, ..., 10} | 3.2002 |
| A = {1, ..., 11} | 3.4580 |
| A = {1, ..., 12} | 3.7228 |
| A = {1, ..., 13} | 3.9941 |
| A = {1, ..., 14} | 4.2713 |
Table 3. Results of different uncertainty measures of Example 3.

| BPAs | Deng | Pal et al. | Dubois–Prade | Proposed Measure |
| --- | --- | --- | --- | --- |
| $m_1$ | 2.5559 | 1.9710 | 1 | 1.4710 |
| $m_2$ | 2.5559 | 1.9710 | 1 | 1.6376 |
Table 4. Conflict data from sensors.

| BPAs | $F_1$ | $F_2$ | $F_3$ | $\{F_1, F_3\}$ |
| --- | --- | --- | --- | --- |
| $S_1$: $m_1$ | 0.41 | 0.29 | 0.30 | 0.00 |
| $S_2$: $m_2$ | 0.00 | 0.90 | 0.10 | 0.00 |
| $S_3$: $m_3$ | 0.58 | 0.07 | 0.00 | 0.35 |
| $S_4$: $m_4$ | 0.55 | 0.10 | 0.00 | 0.35 |
| $S_5$: $m_5$ | 0.60 | 0.10 | 0.00 | 0.30 |
Table 5. Fusion results with different combination rules.

| Method | m1, m2 | m1, m2, m3 | m1, m2, m3, m4 | m1, m2, m3, m4, m5 |
| --- | --- | --- | --- | --- |
| Dempster's rule [7] | m(F1) = 0, m(F2) = 0.8969, m(F3) = 0.1031, m({F1, F3}) = 0 | m(F1) = 0, m(F2) = 0.6575, m(F3) = 0.3425, m({F1, F3}) = 0 | m(F1) = 0, m(F2) = 0.3321, m(F3) = 0.6679, m({F1, F3}) = 0 | m(F1) = 0, m(F2) = 0.1422, m(F3) = 0.8578, m({F1, F3}) = 0 |
| Yager's rule [55] | m(F1) = 0, m(F2) = 0.2610, m(F3) = 0.0300, m({F1, F3}) = 0, m(X) = 0.7090 | m(F1) = 0.4112, m(F2) = 0.0679, m(F3) = 0.0105, m({F1, F3}) = 0.2481, m(X) = 0.2622 | m(F1) = 0.6508, m(F2) = 0.0330, m(F3) = 0.0037, m({F1, F3}) = 0.1786, m(X) = 0.1339 | m(F1) = 0.7732, m(F2) = 0.0167, m(F3) = 0.0011, m({F1, F3}) = 0.0938, m(X) = 0.1152 |
| Murphy's rule [56] | m(F1) = 0.0964, m(F2) = 0.8119, m(F3) = 0.0917, m({F1, F3}) = 0 | m(F1) = 0.4619, m(F2) = 0.4497, m(F3) = 0.0794, m({F1, F3}) = 0.0090 | m(F1) = 0.8362, m(F2) = 0.1147, m(F3) = 0.0410, m({F1, F3}) = 0.0081 | m(F1) = 0.9620, m(F2) = 0.0210, m(F3) = 0.0138, m({F1, F3}) = 0.0032 |
| Deng et al.'s method [54] | m(F1) = 0.0964, m(F2) = 0.8119, m(F3) = 0.0917, m({F1, F3}) = 0 | m(F1) = 0.4974, m(F2) = 0.4054, m(F3) = 0.0888, m({F1, F3}) = 0.0084 | m(F1) = 0.9089, m(F2) = 0.0444, m(F3) = 0.0379, m({F1, F3}) = 0.0089 | m(F1) = 0.9820, m(F2) = 0.0039, m(F3) = 0.0107, m({F1, F3}) = 0.0034 |
| Zhang et al.'s method [57] | m(F1) = 0.0964, m(F2) = 0.8119, m(F3) = 0.0917, m({F1, F3}) = 0 | m(F1) = 0.5681, m(F2) = 0.3319, m(F3) = 0.0929, m({F1, F3}) = 0.0084 | m(F1) = 0.9142, m(F2) = 0.0395, m(F3) = 0.0399, m({F1, F3}) = 0.0083 | m(F1) = 0.9820, m(F2) = 0.0034, m(F3) = 0.0115, m({F1, F3}) = 0.0032 |
| The proposed method | m(F1) = 0.1648, m(F2) = 0.7796, m(F3) = 0.0556, m({F1, F3}) = 0 | m(F1) = 0.9417, m(F2) = 0.0250, m(F3) = 0.0322, m({F1, F3}) = 0.0011 | m(F1) = 0.9756, m(F2) = 0.0039, m(F3) = 0.0176, m({F1, F3}) = 0.0029 | m(F1) = 0.9849, m(F2) = 0.0014, m(F3) = 0.0105, m({F1, F3}) = 0.0032 |
Table 6. Fusion results of Q(m) with different numbers of sensor reports.

| Fused BPAs | Fusion result | Identified fault |
| --- | --- | --- |
| m1, m2 | m(F1) = 0.1648, m(F2) = 0.7796, m(F3) = 0.0556, m({F1, F3}) = 0 | F2 |
| m1, m2, m3 | m(F1) = 0.9417, m(F2) = 0.0250, m(F3) = 0.0322, m({F1, F3}) = 0.0011 | F1 |
| m1, m2, m3, m4 | m(F1) = 0.9756, m(F2) = 0.0039, m(F3) = 0.0176, m({F1, F3}) = 0.0029 | F1 |
| m1, m2, m3, m4, m5 | m(F1) = 0.9849, m(F2) = 0.0014, m(F3) = 0.0105, m({F1, F3}) = 0.0032 | F1 |
