Article

A New Hybrid Possibilistic-Probabilistic Decision-Making Scheme for Classification

Image & Information Processing Department (iTi), IMT-Atlantique, Technopôle Brest Iroise CS 83818, 29238 Brest, France
* Author to whom correspondence should be addressed.
Submission received: 13 November 2020 / Revised: 28 December 2020 / Accepted: 30 December 2020 / Published: 3 January 2021

Abstract

Uncertainty is at the heart of decision-making processes in most real-world applications. Uncertainty can be broadly categorized into two types: aleatory and epistemic. Aleatory uncertainty describes the variability in the physical system, where sensors provide information (hard) of a probabilistic type. Epistemic uncertainty appears when the information is incomplete or vague, such as judgments or human expert appreciations in linguistic form. Linguistic information (soft) typically introduces a possibilistic type of uncertainty. This paper is concerned with the problem of classification where the available information, concerning the observed features, may be of a probabilistic nature for some features and of a possibilistic nature for others. In this configuration, most existing studies transform one of the two information types into the other form and then apply either classical Bayesian-based or possibilistic-based decision-making criteria. In this paper, a new hybrid decision-making scheme is proposed for classification when hard and soft information sources are present. A new Possibilistic Maximum Likelihood (PML) criterion is introduced to improve classification rates compared to a classical approach using only information from hard sources. The proposed PML makes it possible to jointly exploit both probabilistic and possibilistic sources within the same probabilistic decision-making framework, without requiring the conversion of the possibilistic sources into probabilistic ones, or vice versa.

1. Introduction

Uncertainty can be categorized into two main kinds [1]: aleatory (or randomness) uncertainty, also known as statistical uncertainty, due to the variability or the natural randomness in a process, and epistemic uncertainty, also known as systematic uncertainty, which is the scientific uncertainty in the model of the process and is due to limited data and knowledge. Epistemic uncertainty calls for methods of representation, propagation, and interpretation of uncertainty other than probability alone. Since the early 1960s, following fruitful cross-fertilization, a convergence has been emerging between physics, engineering, mathematics, and the cognitive sciences to provide new techniques and models, showing a trend of inspiration from human brain mechanisms toward a unified theory for representing knowledge, belief and uncertainty [2,3,4,5,6,7,8,9].
Uncertainty is a natural and unavoidable part of real-world applications. When observing a "real-world situation", decision making is the process of selecting among several alternatives or decisions.
The problem here is to assign to measurements, or other types of observations (data) from sensors or other sources, a label or a class to which the observations are assumed to belong. This is a typical classification process.
As shown in Figure 1, the general classification process can be formulated as follows. An input set of observations $o$ ($o \in \Psi$) is "observed" using a sensor (or a set of sensors) delivering a feature vector $x \in \Theta$ ($\Theta$ is called the features set). This feature vector $x$ is then injected into the decision-making system, or labelling system, in order to recognize the most likely decision (hypothesis, alternative, class) from a given exhaustive set $\Omega = \{C_m, m = 1, \ldots, M\}$ of $M$ exclusive decisions [10].
The development of both classification algorithms and decision-making criteria is governed by several factors, mainly depending on the nature of the feature vector, the nature of the imperfection attached to the observed features, as well as the available knowledge characterizing each decision. Several global constraints also drive the conception of the global classification process: the "physical" nature and quality of the measures delivered by the sensors, the category-discrimination capacity of the computed features, and the nature and the quality of the available knowledge used for the development of the decision-making system.
In much of the literature, the decision-making system is realized by the application of two successive functionalities: the soft labeling and the hard decision (selection) functionalities. The labeling functionality [12] uses the available a priori knowledge in order to perform a mapping $\ell$ between the features set $\Theta$ and the decisions set $\Omega$. For each feature vector $x \in \Theta$, a soft decision label vector $\ell(x) = [\ell_{C_1}(x), \ldots, \ell_{C_m}(x), \ldots, \ell_{C_M}(x)] \in [0,1]^M$ is determined in the light of the available knowledge, where $\ell_{C_m}(x)$ measures the degree of belief or support that we have in the occurrence of the decision $C_m$. For instance, if the available knowledge allows probabilistic computations, the soft decision label vector is given through $\ell_{C_m}(x) = Pr\{C_m|x\}$ [13], where $Pr\{C_m|x\}$ represents the a posteriori probability of the decision $C_m$ given the observed feature vector $x \in \Theta$ [14]. When the available knowledge is expressed in terms of ambiguous information, the possibility theory formalism (developed by L. Zadeh [7] and D. Dubois et al. [15,16,17]) can be used. The soft decision label vector $\ell(x)$ is then expressed with an a posteriori possibility distribution $\pi_x$ defined on the decisions set $\Omega$. In this case, $\ell_{C_m}(x) = \pi_x(C_m)$, where $\pi_x(C_m)$ represents the possibility degree for the decision $C_m$ to occur, given the observed feature vector $x \in \Theta$.
The second functionality performed by the decision-making process is called the hard decision or selection functionality. As the ultimate goal of most classification applications is to select one and only one class (associated with the observation "o" for which the feature vector $x \in \Theta$ is extracted) out of the classes set $\Omega$, a mapping has to be applied in order to transform the soft decision label vector $\ell(x)$ into a hard decision label vector for which one and only one decision is selected. The goal is then to make a choice according to an optimality criterion.
In this paper, we propose a new criterion for the decision-making process in classification systems, called possibilistic maximum likelihood (PML). This criterion is framed within possibility theory, but it uses corresponding notions from Bayesian decision-making. The main motivation behind the development of PML is multisource information fusion, where an object or a pattern may be observed through several channels and where the available information concerning the observed features may be of a probabilistic nature for some features and of an epistemic nature for others.
In the presence of both types of information sources, most existing studies transform one of the two information types into the other form, and then apply either the classical Bayesian or possibilistic decision-making criteria. With the PML decision-making approach, the Bayesian decision-making framework is adopted. The epistemic knowledge is integrated into the decision-making process by defining possibilistic loss values instead of the usual zero-one loss values. A set of possibilistic loss values is proposed and evaluated in the context of pixel-based image classification where a synthetic scene, composed of several thematic classes, is randomly generated using two types of probabilistic sensors, a Gaussian and a Rayleigh sensor, complemented by an expert type of information source. Results obtained with the proposed PML criterion show that the classification recognition rates approach the optimal case, i.e., the case where all the available information is expressed in terms of probabilistic knowledge.
When the sources of information can be modelled by probability theory, the Bayesian approach has sufficient decision-making tools to fuse that information and perform classification. However, in the case where the knowledge available for the decision-making process is ill-defined, in the sense that it is totally or partially expressed in terms of ambiguous information representing limitations in feature values, or encoding a linguistic expert's knowledge about the relationship between the feature values and the different potential decisions, new mathematical tools (i.e., PML) need to be developed. This type of available knowledge can be represented as a conditional possibilistic soft decision label vector $\ell(x)$ defined on the decisions set $\Omega$ such that $\ell_{C_m}(x) = \pi_x(C_m) = \pi(C_m|x)$, where $\pi(C_m|x)$ represents the possibility degree for the decision $C_m$ to occur, given the observed feature vector $x \in \Theta$ and the underlying observation $o$.
Possibility theory constitutes the natural framework for tackling this type of information imperfection (called the epistemic uncertainty type) when one and only one decision (hard decision) must be selected from the exhaustive decisions set $\Omega$, with incomplete, ill-defined or ambiguous available knowledge thus encoded as a possibility distribution over $\Omega$. This paper proposes a joint decision-making criterion which allows such extra possibilistic knowledge to be integrated within a probabilistic decision-making framework, taking into account both types of information: possibilistic and probabilistic. In spite of the fact that possibility theory deals with uncertainty, which means that a unique but unknown elementary decision is to occur and the ultimate goal is to determine this decision, there are relatively few studies that tackle this decision-making issue [18,19,20,21,22,23,24,25]. We must however mention the considerable contributions of Dubois and Prade [26] on possibility theory as well as on the clarification of the various semantics of fuzzy sets [27,28,29,30]. Denoeux et al. [31,32,33] contributed significantly to that topic as well, but they consider epistemic uncertainty as a higher-order uncertainty upon probabilistic models, as in the imprecise probabilities of Walley [34,35] and type-2 fuzzy sets [36,37,38], which is not the case in the present paper.
The paper is organized as follows. A brief recall of the Bayesian decision-making criteria and of possibility theory is given in Section 2 and Section 3. Three major possibilistic decision-making criteria, i.e., maximum possibility, maximum necessity measure and confidence index maximization, are detailed in Section 4. The PML criterion is presented in Section 5, followed by its evaluation in Section 6. The paper closes with a conclusion in Section 7.

2. Hard Decision in the Bayesian Framework

In the Bayesian classification framework, the most widely used hard decision is based on minimizing an overall decision risk function [14]. Assuming $o \in \Psi$ is the pattern for which the feature vector $x \in \Theta$ is observed, let $\lambda_{m,n}$ denote a "predefined" conditional loss, or penalty, incurred for deciding that the observed pattern $o$ is associated with the decision $C_n$ whereas the true decision (class or category) for $o$ is $C_m$ ($n, m \in \{1, \ldots, M\}$). Therefore, the probabilistic expected loss $R(C_n|x)$, also called the conditional risk, associated with the decision $C_n$ given the observed feature vector $x \in \Theta$, is given by:
$$R(C_n|x) = E\{\lambda_{m,n}\} = \sum_{m=1}^{M} \lambda_{m,n}\, Pr\{C_m|x\} \qquad (1)$$
where $E\{\cdot\}$ stands for the mathematical expectation. The Bayes decision criterion consists in minimizing the overall risk $R$, also called the Bayes risk, as defined in (2), by computing the conditional risk for all decisions and then selecting the decision $C_n$ for which $R(C_n|x)$ is minimum:
$$R = E_x\{R(C_n|x)\} = \int R(C_n|x)\, Pr\{x\}\, dx \qquad (2)$$
Therefore, the minimum-risk Bayes decision criterion is based on the selection of the decision $C_n$ which gives the smallest risk $R(C_n|x)$. This rule can thus be formulated as follows:
$$Decision[x(p)] = \arg\min_{n=1,\ldots,M} \left( \sum_{m=1}^{M} \lambda_{m,n}\, Pr\{C_m|x\} \right) \qquad (3)$$
If $Pr\{C_m\}$ denotes the a priori probability of the decision $C_m$ and $Pr\{x|C_m\}$ the likelihood function of the measured feature vector $x$ given the decision $C_m$, then, using Bayes' rule, the minimum-risk Bayes decision criterion (3) can be rewritten as:
$$Decision[x(p)] = \arg\min_{n=1,\ldots,M} \left( \sum_{m=1}^{M} \lambda_{m,n}\, Pr\{C_m\}\, Pr\{x|C_m\} \right) \qquad (4)$$
In the two-category decision case, i.e., $\Omega = \{C_1, C_2\}$, it can easily be shown that the minimum-risk Bayes decision criterion, simply called the Bayes criterion, can be expressed as in (5):
$$LR = \frac{Pr\{x|C_1\}}{Pr\{x|C_2\}} \;\; \underset{Decision[x(p)]=C_2}{\overset{Decision[x(p)]=C_1}{\gtrless}} \;\; \frac{\lambda_{2,2}-\lambda_{2,1}}{\lambda_{1,1}-\lambda_{1,2}} \cdot \frac{Pr\{C_2\}}{Pr\{C_1\}} \equiv \eta \qquad (5)$$
In other words, this decision criterion consists of comparing the likelihood ratio (LR) $Pr\{x|C_1\}/Pr\{x|C_2\}$ to a threshold $\eta$ independent of the observed feature vector $x$. The binary cost, or zero-one loss, assignment is commonly used in classification problems. This rule, expressed in (6), gives $\lambda_{m,n}$ no cost for a correct decision (when the true pattern class/decision $C_m$ is identical to the decided class/decision $C_n$) and a unit cost for a wrong decision (when the true class/decision $C_m$ is different from the decided class/decision $C_n$).
$$\lambda_{m,n} = \begin{cases} 0 & \text{if } C_m = C_n \\ 1 & \text{if } C_m \neq C_n \end{cases} \qquad (6)$$
It should be noticed that this binary cost assignment considers all errors as equally costly. It also leads to expressing the conditional risk as:
$$R(C_n|x) = \sum_{m=1}^{M} \lambda_{m,n}\, Pr\{C_m|x\} = 1 - Pr\{C_n|x\} \qquad (7)$$
A decision minimizing the conditional risk $R(C_n|x)$ thus becomes a decision maximizing the a posteriori probability $Pr\{C_n|x\}$. As shown in (8), this version of the Bayes criterion is called the maximum a posteriori criterion (MAP), since it seeks to determine the decision maximizing the a posteriori probability value. It is also obvious that this decision process corresponds to the minimum-error decision rule, which leads to the best recognition rate that a decision criterion can achieve:
$$Decision_{MAP}[x(p)] = \arg\max_{n=1,\ldots,M} Pr\{C_n|x\} \qquad (8)$$
When the decisions' a priori probabilities $Pr\{C_m\}$ and the likelihood functions $Pr\{x|C_m\}$ are not available, or are simply difficult to obtain, the Minmax Probabilistic Criterion (MPC) can be an interesting alternative to the minimum-risk Bayes decision criterion [39]. As expressed in (9), this hard decision criterion consists in selecting the decision that minimizes the maximum decision cost:
$$Decision_{MPC}[x(p)] = \arg\min_{n=1,\ldots,M} \left[ \max_{m=1,\ldots,M} \left\{ \lambda_{m,n}\, Pr\{C_m|x\} \right\} \right] \qquad (9)$$
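To make these criteria concrete, the following is a minimal NumPy sketch of the minimum-risk Bayes rule (3), its MAP special case (8) and the MPC rule (9). The function names, the example posterior vector and the loss matrix are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bayes_min_risk_decision(posteriors, loss):
    """Minimum-risk Bayes decision, Equation (3).
    posteriors : shape (M,), Pr{C_m | x}
    loss       : shape (M, M), loss[m, n] = lambda_{m,n}
                 (cost of deciding C_n when the true class is C_m)."""
    risks = loss.T @ posteriors          # R(C_n|x) = sum_m lambda_{m,n} Pr{C_m|x}
    return int(np.argmin(risks))

def map_decision(posteriors):
    """MAP decision, Equation (8): the zero-one loss special case."""
    return int(np.argmax(posteriors))

def minmax_decision(posteriors, loss):
    """Minmax Probabilistic Criterion (MPC), Equation (9)."""
    costs = loss * posteriors[:, None]   # costs[m, n] = lambda_{m,n} Pr{C_m|x}
    return int(np.argmin(costs.max(axis=0)))

# With a zero-one loss, the minimum-risk decision coincides with MAP.
post = np.array([0.1, 0.4, 0.1, 0.3, 0.1])
zero_one = 1.0 - np.eye(5)
assert bayes_min_risk_decision(post, zero_one) == map_decision(post)
```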

3. Brief Review of Possibility Theory

Possibility theory is a relatively recent theory devoted to handling uncertainty in contexts where the available knowledge is only expressed in an ambiguous form. This theory was first introduced by Zadeh in 1978 as an extension of fuzzy sets and fuzzy logic theory, to express the intrinsic fuzziness of natural languages as well as uncertain information [7]. It is well established that probabilistic reasoning, based on the use of a probability measure, constitutes the optimal approach for dealing with uncertainty. In the case where the available knowledge is ambiguous and encoded by a membership function, i.e., a fuzzy set, defined over the decisions set, possibility theory transforms the membership function into a possibility distribution $\pi$. The realization of each event (subset of the decisions set) is then bounded by a possibilistic interval defined through a possibility measure, $\Pi$, and a necessity measure, $N$ [16]. The use of these two dual measures is the main difference between possibility theory and probability theory. Besides, possibility theory is not additive in terms of belief combination, and makes sense on ordinal structures [17]. In the following subsections, the basic concepts of a possibility distribution and of the dual possibilistic measures (possibility and necessity measures) are presented. The possibilistic decision rules are detailed in Section 4. Full details can be found in [11].

3.1. Possibility Distribution

Let $\Omega = \{C_1, C_2, \ldots, C_M\}$ be a finite and exhaustive set of $M$ mutually exclusive elementary decisions (e.g., decisions, thematic classes, hypotheses, etc.). Exclusiveness means that one and only one decision may occur at a time, whereas exhaustiveness states that the occurring decision certainly belongs to $\Omega$. Possibility theory is based on the notion of a possibility distribution, denoted by $\pi$, which maps elementary decisions from $\Omega$ to the interval [0, 1], thus encoding "our" state of knowledge, or belief, on the possible occurrence of each class $C_m \in \Omega$. The value $\pi(C_m)$ represents to what extent it is possible for $C_m$ to be the unique occurring decision. In this context, two extreme cases of knowledge are given:
Complete knowledge: $\exists!\, C_m \in \Omega$, $\pi(C_m) = 1$ and $\pi(C_n) = 0$, $\forall C_n \in \Omega$, $C_n \neq C_m$.
Complete ignorance: $\forall C_m \in \Omega$, $\pi(C_m) = 1$ (all elements from $\Omega$ are considered as totally possible). $\pi(\cdot)$ is called a normal possibility distribution if there exists at least one element $C_{m_0}$ from $\Omega$ such that $\pi(C_{m_0}) = 1$.

3.2. Possibility and Necessity Measures

Based on the possibility distribution concept, two dual set measures are derived: a possibility measure, $\Pi$, and a necessity measure, $N$. For every subset (or event) $A \subseteq \Omega$, these measures are defined by:
$$\Pi(A) = \max_{C_m \in A} [\pi(C_m)], \qquad N(A) = 1 - \Pi(A^c) = \min_{C_m \in A^c} [1 - \pi(C_m)] \qquad (10)$$
where $A^c$ denotes the complement of the event $A$ (i.e., $A \cup A^c = \Omega$ and $A \cap A^c = \emptyset$).
The possibility measure $\Pi(A)$ estimates the level of consistency of the occurrence of event $A$ with the available knowledge encoded by the possibility distribution $\pi$. Thus, $\Pi(A) = 0$ means that $A$ is an impossible event, while $\Pi(A) = 1$ means that the event $A$ is totally possible. The necessity measure $N(A)$ evaluates the level of certainty about the occurrence of event $A$ implied by the possibility distribution $\pi$. $N(A) = 0$ means that the certainty about the occurrence of $A$ is null. On the contrary, $N(A) = 1$ means that the occurrence of $A$ is totally certain. In a classification problem, where each decision $C_m$ refers to a given class or category, the case where all events $A$ are composed of a single decision ($A_m = \{C_m\}$, $m = 1, \ldots, M$) is of particular interest. In this case, the possibility $\Pi(\cdot)$ and necessity $N(\cdot)$ measures reduce to:
$$\Pi(A_m) = \Pi(\{C_m\}) = \pi(C_m), \qquad N(A_m) = N(\{C_m\}) = 1 - \Pi(\{C_m\}^c) = 1 - \max_{n \neq m} \pi(C_n) \qquad (11)$$
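The following short sketch is an illustrative Python/NumPy rendering of the dual measures of Equations (10) and (11); the function names and the example distribution are chosen here and are not taken from the paper. A possibility distribution is represented as a vector indexed by the decisions.

```python
import numpy as np

def possibility(pi, A):
    """Possibility measure, Equation (10): Pi(A) = max over C_m in A of pi(C_m).
    A is given as a collection of decision indices."""
    return float(pi[list(A)].max())

def necessity(pi, A):
    """Necessity measure, Equation (10): N(A) = 1 - Pi(A^c)."""
    complement = [m for m in range(len(pi)) if m not in set(A)]
    return 1.0 - (float(pi[complement].max()) if complement else 0.0)

def singleton_measures(pi):
    """Equation (11): possibility and necessity of every singleton {C_m}."""
    M = len(pi)
    Pi = pi.copy()
    N = np.array([1.0 - np.delete(pi, m).max() for m in range(M)])
    return Pi, N

# Example with a normal possibility distribution over five decisions:
pi = np.array([0.2, 1.0, 0.5, 0.3, 0.0])
Pi, N = singleton_measures(pi)   # N is non-zero only for the unique maximal decision
```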

4. Decision-Making in the Possibility Theory Framework

In this section, we investigate existing possibilistic decision-making rules. Two families of rules can be distinguished: rules based on the direct use of the information encapsulated in the possibility distribution, and rules based on the use of uncertainty measures associated with this possibility distribution. Let $\Omega = \{C_1, C_2, \ldots, C_M\}$ be a finite and exhaustive set of $M$ mutually exclusive elementary decisions. Given an observed pattern $o \in \Psi$ for which the feature vector $x \in \Theta$ is observed, let $\pi_x(C_m)$ denote the a posteriori possibility distribution $\pi(C_m|x)$ defined on $\Omega$. The possibility, $\Pi_x(\{C_m\})$, and necessity, $N_x(\{C_m\})$, measures are obtained as expressed in Equation (11), using the possibility distribution $\pi_x(C_m)$.

4.1. Decision Rule Based on the Maximum of Possibility

The decision rule based on the maximum of possibility is certainly the most widely used in possibilistic classification and decision-making applications. Indeed, as shown in (12), this rule is based on the selection of the elementary decision $C_{m_0} \in \Omega$ having the highest possibility degree of occurrence $\Pi_x(\{C_{m_0}\})$:
$$Decision[x(p)] = C_{m_0} \;\text{ if and only if }\; m_0 = \arg\max_{m=1,\ldots,M} \Pi_x(\{C_m\}) \qquad (12)$$
A "first" mathematical justification of this "intuitive" possibilistic decision-making rule can be derived from the Minmax Probabilistic Criterion (MPC), Equation (9), using a binary cost assignment rule. Indeed, 'converting' the a posteriori possibility distribution $\pi_x(\cdot)$ into an a posteriori probability distribution $Pr\{\cdot|x\}$ is assumed to respect the three following constraints [30]: (a) the consistency principle, (b) the preference ordering preservation, and (c) the least commitment principle. The preference ordering preservation, on which we focus attention here, means that if decision $C_{m_1}$ is preferred to decision $C_{m_2}$, i.e., $\pi_x(C_{m_1}) > \pi_x(C_{m_2})$, then the a posteriori probability distribution $Pr\{\cdot|x\}$ obtained from $\pi_x(\cdot)$ should satisfy $Pr\{C_{m_1}|x\} > Pr\{C_{m_2}|x\}$. Equation (13) sums up this preference ordering preservation constraint:
$$\pi_x(C_{m_1}) > \pi_x(C_{m_2}) \;\Longrightarrow\; Pr\{C_{m_1}|x\} > Pr\{C_{m_2}|x\} \qquad (13)$$
Therefore, selecting the decision maximizing the a posteriori probability and selecting the decision maximizing the a posteriori possibility are identical: using the MPC associated with the binary cost assignment rule or using the maximum possibility decision rule leads to an identical result, as expressed in (14).
$$Decision[x(p)] = C_{m_0} \;\text{ iff }\; m_0 = \arg\max_{m=1,\ldots,M} Pr\{C_m|x\} = \arg\max_{m=1,\ldots,M} \pi_x(C_m) \qquad (14)$$
This decision-making criterion is called the naive Bayes style possibilistic criterion [40,41,42], and most ongoing efforts are oriented toward the computation of the a posteriori possibility values using numerical data [43]. An extensive study of the properties and equivalence between possibilistic and probabilistic approaches is presented in [20]. Notice that this decision rule, strongly inspired by probabilistic decision reasoning, does not provide a hard decision mechanism when several elementary decisions have the same maximum possibility measure.

4.2. Decision Rule Based on Maximizing the Necessity Measure

It is worthwhile to notice that the a posteriori possibility measure $\Pi_x$ and necessity measure $N_x$ coming from a normal a posteriori possibility distribution $\pi_x(\cdot)$ constitute a bracketing of the a posteriori probability distribution $Pr\{\cdot|x\}$ [17]:
$$N_x(\{C_m\}) = 1 - \max_{\substack{n=1,\ldots,M \\ n \neq m}} \pi_x(C_n) \;\leq\; Pr\{C_m|x\} \;\leq\; \Pi_x(\{C_m\}) = \pi_x(C_m) \qquad (15)$$
Therefore, the maximum possibility decision criterion can be considered as an optimistic decision criterion, as it maximizes the upper bound of the a posteriori probability distribution. On the contrary, a pessimistic decision criterion, based on maximizing the a posteriori necessity measure, can be considered as a maximization of the lower bound of the a posteriori probability distribution. Equation (16) expresses this pessimistic decision criterion:
$$Decision[x(p)] = C_{m_0} \;\text{ iff }\; m_0 = \arg\max_{m=1,\ldots,M} [N_x(\{C_m\})] \qquad (16)$$
The question that we must raise concerns the "links" between the optimistic and the pessimistic decision criteria. Let us consider the a posteriori possibility distribution $\pi_x(\cdot)$ for which $C_{m_1}$ (resp. $C_{m_2}$) is the "winning decision" obtained using the maximum possibility (resp. maximum necessity measure) decision criterion, as given in (17):
$$\pi_x(C_{m_1}) = \max_{m} \pi_x(C_m) \quad \text{and} \quad N_x(\{C_{m_2}\}) = \max_{m} N_x(\{C_m\}) \qquad (17)$$
The important question can then be formulated as follows: "Is the winning decision $C_{m_1}$ (according to the maximum possibility criterion) the same as the winning decision $C_{m_2}$ according to the maximum necessity measure criterion?"
First, notice that if several elementary decisions share the same maximum possibility value $\upsilon = \pi_x(C_{m_1})$, then the necessity measure becomes a useless decision criterion since:
$$N_x(\{C_m\}) = 1 - \max_{k \neq m} \pi_x(C_k) = 1 - \upsilon \quad \text{for all the elementary decisions.}$$
Now, suppose that only one decision $C_{m_1}$ assumes the maximum possibility value $\upsilon = \pi_x(C_{m_1})$; it is important to ask whether the decision $C_{m_1}$ will (or will not) be the decision assuming the maximum necessity measure value. Let us denote by $\upsilon'$ the possibility value of the "second best" decision according to the possibility value criterion. As $C_{m_1}$ is the unique decision having the maximum possibility value $\upsilon$, we have $\upsilon' < \upsilon$. Therefore, as shown in (18), the necessity measure value $N_x(\{C_m\})$ is maximum only for the decision $C_{m_1}$, since $1 - \upsilon' > 1 - \upsilon$.
$$N_x(\{C_m\}) = 1 - \max_{k \neq m} \pi_x(C_k) = \begin{cases} 1 - \upsilon' & \text{if } m = m_1 \\ 1 - \upsilon & \text{if } m \neq m_1 \end{cases} \qquad (18)$$
As a conclusion, when the maximum necessity measure criterion is usable (i.e., only one elementary decision assumes the maximum possibility value), both decision criteria (maximum possibility and maximum necessity) produce the same winning decision. In order to illustrate the difference between the maximum possibility and the maximum necessity measure criteria, Figure 2 presents an illustrative example.
In the example of Figure 2, four different a posteriori possibility distributions $\pi_1, \pi_2, \pi_3, \pi_4$, all defined on a set of five elementary decisions $\Omega = \{C_1, C_2, C_3, C_4, C_5\}$, are considered. The necessity measures $N_k(\{C_m\})$ have been computed from the corresponding possibility distribution $\pi_k$. The underlined values indicate which decisions result from the maximum possibility decision criterion as well as the maximum necessity measure decision criterion, for the four possibility distributions $\pi_k$. Note that the necessity measure assumes at most two values, whatever the considered possibility distribution. When the a posteriori possibility distribution has one and only one decision with the highest possibility degree, both decision rules produce the same winning decision. This is the case for the normal possibility distribution $\pi_1$ as well as the subnormal possibility distribution $\pi_3$, indicated as cases (a) and (c) in Figure 2.
When several elementary decisions share the same highest possibility degree, the maximum possibility decision criterion can randomly select one of these potential winning decisions. In this case, the maximum necessity measure decision criterion will assign a single necessity degree to all elementary decisions from $\Omega$, and thus, it will be impossible to select any of the potential winning decisions. This behavior can be observed with the normal possibility distribution $\pi_2$ as well as with a subnormal possibility distribution (like $\pi_4$), cases (b) and (d) in Figure 2. This example clearly shows the weakness of the decisional capacity of the maximum necessity measure decision criterion when compared to the maximum possibility decision criterion.

4.3. Decision Rule Based on Maximizing the Confidence Index

Other possibilistic decision rules, based on the use of uncertainty measures, are also encountered in the literature. The most frequently used criterion (proposed by Kikuchi et al. [44]) is based on the maximization of the confidence index $Ind$, defined as a combination of the possibility and the necessity measures for each event $A \subseteq \Omega$, given a possibility distribution $\pi(\cdot)$:
$$Ind: 2^{\Omega} \to [-1, +1]$$
$$A \mapsto Ind(A) = \Pi(A) + N(A) - 1, \quad \forall A \subseteq \Omega \qquad (19)$$
where $2^{\Omega}$ denotes the power set of $\Omega$, i.e., the set of all subsets of $\Omega$.
For an event A, this index ranges from −1 to +1:
- $Ind(A) = -1$, iff $\Pi(A) = N(A) = 0$ (the occurrence of $A$ is totally impossible and uncertain);
- $Ind(A) = +1$, iff $\Pi(A) = N(A) = 1$ (the occurrence of $A$ is totally possible and certain).
Restricting the application of this measure to events $A_m$ having only one decision, $A_m = \{C_m\}$, shows that $Ind(A_m)$ measures the difference between the possibility measure of the event $A_m$ (which is identical to the possibility degree of the decision $C_m$) and the highest possibility degree of all decisions contained in $(A_m)^c$ (the complement of $A_m$ in $\Omega$):
$$Ind(A_m) = \Pi(A_m) + N(A_m) - 1 = \pi(C_m) - \max_{n \neq m} \pi(C_n) \qquad (20)$$
Therefore, if $A_{m_0} = \{C_{m_0}\}$ is the only event having the highest possibility measure value $\pi(C_{m_0})$, then $A_{m_0}$ will be the unique event having a positive confidence index value, whereas all other events will have negative values, as illustrated in Figure 3, where we assume $\pi(C_{m_0}) > \pi(C_m)$, $\forall m \neq m_0$, and $C_{m_1}$ refers to the decision having the second highest possibility degree.
In a classification decision-making problem, the decision criterion associated with this index can be formulated as follows:
$$Decision_{Ind}[x(p)] = A_{m_0} \;\text{ iff }\; Ind(A_{m_0}) = \max_{m=1,\ldots,M} [Ind(A_m)] \qquad (21)$$
The main difference between the maximum possibility and the maximum confidence index decision criteria lies in the fact that the maximum possibility decision criterion is only based on the maximum possibility degree, whereas the maximum confidence index decision criterion is based on the difference between the two highest possibility degrees associated with the elementary decisions. As already mentioned, it is important to notice that the event $A_{m_0} = \{C_{m_0}\}$ having the highest possibility value will be the unique event producing a positive confidence index, measuring the difference with the second highest possibility degree. All other events $A_m = \{C_m\}$, $m \neq m_0$, will produce negative confidence indices.
When several decisions share the same highest possibility degree, their confidence index (the highest one) will be null. This shows the real capacity of this uncertainty measure for the decision-making process. However, this criterion yields the same resulting decisions as the two former ones.
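As a compact illustration of the three rules of this section, the following small Python sketch (function names and the example distribution are ours, not the paper's) implements the maximum possibility rule (12), the maximum necessity rule (16) and the maximum confidence index rule (21); with a unique maximal possibility degree the three rules agree, as argued above.

```python
import numpy as np

def max_possibility_decision(pi):
    """Maximum possibility rule, Equation (12)."""
    return int(np.argmax(pi))

def max_necessity_decision(pi):
    """Maximum necessity rule, Equation (16), with N({C_m}) from Equation (11)."""
    N = np.array([1.0 - np.delete(pi, m).max() for m in range(len(pi))])
    return int(np.argmax(N))

def max_confidence_index_decision(pi):
    """Maximum confidence index rule, Equations (20)-(21):
    Ind({C_m}) = pi(C_m) - max over n != m of pi(C_n)."""
    ind = np.array([pi[m] - np.delete(pi, m).max() for m in range(len(pi))])
    return int(np.argmax(ind))

# Example: a single maximal possibility degree, so the three rules coincide.
pi = np.array([0.3, 0.9, 0.6, 0.2, 0.1])
assert (max_possibility_decision(pi)
        == max_necessity_decision(pi)
        == max_confidence_index_decision(pi))
```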

5. Possibilistic Maximum Likelihood (PML) Decision Criterion

In the formulation of the Bayesian classification approach, all information sources are assumed to have probabilistic uncertainty where the available knowledge describing this uncertainty is expressed, estimated or evaluated in terms of probability distributions. In the possibilistic classification framework, the information sources are assumed to suffer from possibilistic (or epistemic) uncertainty where the available knowledge describing this uncertainty is expressed in terms of possibility distributions. In this section, the Bayesian pattern recognition framework is generalized in order to integrate both probabilistic and epistemic sources of knowledge. A joint probabilistic—possibilistic decision criterion called Possibilistic Maximum Likelihood (PML) is proposed to handle both types of uncertainties.

5.1. Sources with Probabilistic and Possibilistic Types of Uncertainties

In some situations, an object from the observation space is observed through several feature sets. This is the case, for instance, in a multi-sensor environment for classification applications. In such situations, the information available for the description of the feature vectors may be of different natures: probabilistic, epistemic, etc. Yager [24,45,46] addresses the same sort of problem: multi-source uncertain information fusion in the case where the information may come both from hard sensors of a probabilistic type and from a soft, knowledge-expert linguistic source of a possibilistic type. He uses t-norms ('and' operations) to combine possibility and probability measures. As will be explained below, Yager's product of possibilities and probabilities coincides with our 'decision variables' optimized through the proposed PML approach.
Let us consider the example illustrated in Figure 4, where each pattern $o$ (from the patterns set $\Psi$) is "observed" through two channels. Source 1 (resp. Source 2) measures a sub-feature vector $x_1 \in \Theta_1$ (resp. $x_2 \in \Theta_2$). Therefore, the resulting feature vector $x(o)$ is obtained as the concatenation of the two sub-feature vectors: $x(o) = [x_1 \; x_2]$. In this configuration, the available information in the sub-feature vector $x_1$ (resp. $x_2$) undergoes probabilistic (resp. epistemic) uncertainties and is encoded as an a posteriori probability soft decision label vector $\ell^1_{C_n} = Pr\{C_n|x_1\}$, $n = 1, 2, \ldots, M$ (resp. an a posteriori possibility soft decision label vector $\ell^2_{C_n} = \pi_{x_2}(C_n)$, $n = 1, 2, \ldots, M$).
As an example, in a remote sensing system, Source 1 may be considered as a multispectral imaging system, where all potential a posteriori probability distributions, $Pr\{C_n|x_1\}$, $n = 1, 2, \ldots, M$, are assumed to be known and well established. The second sensor, Source 2, could be a radar imaging system where the available information concerning the different thematic classes is expressed by an expert using ambiguous linguistic variables like: "the thematic class $C_n$ is observed as 'Bright', 'Slightly Dark', etc. in the sub-feature set $\Theta_2$". Each linguistic variable can be used to generate an a posteriori possibility distribution associated with each thematic class, $\pi_{x_2}(C_n)$, $n = 1, 2, \ldots, M$.

5.2. Possibilistic Maximum Likelihood (PML) Decision Criterion: A New Hybrid Criterion

In the Bayesian decision framework detailed in the previous sections, the binary cost assignment approach suffers from two constraints. On the one hand, all errors are considered as equally costly: the penalty (or cost) of misclassifying an observed pattern $o$ as being associated with a decision $C_n$, whereas the true decision for "$o$" is $C_m$, is always the same (unit loss). This does not reflect the constraints of real applications. For instance, deciding that an examined patient is healthy whereas he suffers from cancer is much more serious than the other way around. On the other hand, the loss function values $\lambda_{m,n}$, $m, n \in \{1, 2, \ldots, M\}$, are static (or predefined) and do not depend on the feature vectors of the observed patterns. The possibilistic maximum likelihood (PML) criterion proposed in this paper is based on the use of the epistemic source of information (the a posteriori possibility distribution defined on the sub-feature space $\Theta_2$) in order to define possibilistic loss values and to inject, afterwards, these values into the Bayesian decision criterion.
Assume that, for each object $o \in \Psi$, the observed feature vector is given by $x(o) = [x_1 \; x_2] \in \Theta_1 \times \Theta_2$, and denote by $Pr\{\cdot|x_1\}$ (resp. $\pi_{x_2}(\cdot)$) the a posteriori probability (resp. possibility) soft decision label vector defined over the sub-feature set $\Theta_1$ (resp. $\Theta_2$). The proposed PML criterion relies on the use of loss values $\lambda_{m,n}$ ranging from $-1$ (minimum loss) to $+1$ (maximum loss), where $\lambda_{m,n}$ refers to the risk of choosing $C_n$ whereas the real decision for the considered pattern is $C_m$. Depending on the epistemic information available through Source 2, the proposed loss values are given by:
$$\lambda_{m,n} = \begin{cases} \displaystyle\max_{k \neq m} \pi_{x_2}(C_k) = \Pi_{x_2}(\{C_m\}^c) & \text{if } m \neq n \\[2mm] \displaystyle\max_{k \neq m} \pi_{x_2}(C_k) - \pi_{x_2}(C_m) = -Ind_{x_2}(\{C_m\}) & \text{if } m = n \end{cases} \qquad (22)$$
In the case of a wrong decision, the decision penalty values, i.e., $\lambda_{m,n}$ with $m \neq n$, are positive loss values ranging in the interval $0 \leq \lambda_{m,n} = \max_{k \neq m} \pi_{x_2}(C_k) \leq 1$. Thus, the unit cost of a wrong decision in the binary-cost assignment framework is "softened" in this possibilistic approach, and assumes its maximum value, i.e., unit cost, only when the wrong decision $C_n$ has a total possibility degree of occurrence.
When a correct decision $C_m$ is selected, the zero-loss value (used by the binary cost assignment approach) is substituted by $\lambda_{m,m} = \max_{k \neq m} \pi_{x_2}(C_k) - \pi_{x_2}(C_m) = -Ind_{x_2}(\{C_m\})$. If the occurrence possibility degree $\pi_{x_2}(C_m)$ of the true decision $C_m$ is the highest degree, $\pi_{x_2}(C_m) > \max_{k \neq m} \pi_{x_2}(C_k)$, then the resulting loss value $\lambda_{m,m}$ becomes negative. The smallest penalty value, i.e., $\lambda_{m,m} = -1$, is reached when $\pi_{x_2}(C_m) = 1$ (i.e., the true decision $C_m$ has a total possibility degree of occurrence) and all the remaining decisions have a null possibility degree of occurrence (leading to $\max_{k \neq m} \pi_{x_2}(C_k) = 0$). Two special cases are present:
(1) If the true decision $C_m$ shares the same maximum possibility value with at least one other (wrong) decision, then the correct decision loss value becomes null: $\lambda_{m,m} = \max_{k \neq m} \pi_{x_2}(C_k) - \pi_{x_2}(C_m) = 0$;
(2) If the true decision $C_m$ does not have the maximum occurrence possibility degree, i.e., $\pi_{x_2}(C_m) < \max_{k \neq m} \pi_{x_2}(C_k)$, then the loss value $\lambda_{m,m}$ is positive and will increase the conditional risk associated with the true decision $C_m$.
Using the proposed possibilistic loss values, the conditional risk $R(C_k|x_1)$ of choosing decision $C_k$ can thus be computed as follows:
$$R(C_k|x_1) = -Ind_{x_2}(\{C_k\}) \cdot Pr\{C_k|x_1\} + \sum_{\substack{i=1 \\ i \neq k}}^{M} \Pi_{x_2}(\{C_i\}^c) \cdot Pr\{C_i|x_1\} \qquad (23)$$
As already mentioned, the Bayes decision criterion computes the conditional risk for all decisions and then selects the decision $C_n$ for which $R(C_n|x)$ is minimum. Based on Equation (23), and in order to select the minimum conditional risk decision, the comparison of the conditional risks related to two decisions $C_k$ and $C_p$ can be performed straightforwardly, leading to:
$$R(C_k|x_1) \leq R(C_p|x_1) \;\Longleftrightarrow\; \pi_{x_2}(C_k) \cdot Pr\{C_k|x_1\} \;\geq\; \pi_{x_2}(C_p) \cdot Pr\{C_p|x_1\} \qquad (24)$$
Therefore, the application of the PML criterion for the selection of the minimum conditional risk decision (out of $M$ potential elementary decisions) can be simply formulated by the following decision rule:
$$Decision[x(p)] = \arg\max_{n=1,\ldots,M} \pi_{x_2}(C_n) \cdot Pr\{C_n|x_1\} \qquad (25)$$
This "intuitive" decision criterion allows the joint use of both probabilistic and epistemic sources of information in the very same Bayesian minimum-risk framework. As an example, the application of the proposed possibilistic loss values in the two-class decision case, where $\Omega = \{C_1, C_2\}$, leads to the following loss matrix $[\lambda]$:
$$[\lambda] = \begin{bmatrix} \lambda_{1,1} & \lambda_{1,2} \\ \lambda_{2,1} & \lambda_{2,2} \end{bmatrix} = \begin{bmatrix} \pi_{x_2}(C_2) - \pi_{x_2}(C_1) & \pi_{x_2}(C_2) \\ \pi_{x_2}(C_1) & \pi_{x_2}(C_1) - \pi_{x_2}(C_2) \end{bmatrix} \qquad (26)$$
The use of this loss matrix $[\lambda]$ in the minimum-risk Bayes decision approach (as defined in (5)) leads to expressing the PML decision as follows:
$$\frac{Pr\{x_1|C_1\}}{Pr\{x_1|C_2\}} \;\; \underset{Decision[x(p)]=C_2}{\overset{Decision[x(p)]=C_1}{\gtrless}} \;\; \frac{\pi_{x_2}(C_2)}{\pi_{x_2}(C_1)} \cdot \frac{Pr\{C_2\}}{Pr\{C_1\}} \qquad (27)$$
Notice that, when the proposed possibilistic loss values are considered, the PML induces a "weighting adjustment" of the a priori probabilities, where the weighting factors are simply the a posteriori possibility degrees issued from the possibilistic Source 2. In the case of equal a priori probabilities, $Pr\{C_2\} = Pr\{C_1\}$, this decision criterion turns into an intuitive form jointly using probabilistic and epistemic sources of information in the Bayesian minimum-risk framework, as shown by:
$$\pi_{x_2}(C_1) \cdot Pr\{x_1|C_1\} \;\; \underset{Decision[x(p)]=C_2}{\overset{Decision[x(p)]=C_1}{\gtrless}} \;\; \pi_{x_2}(C_2) \cdot Pr\{x_1|C_2\} \qquad (28)$$
It is worthwhile to notice that when the two following conditions prevail:
  • the available probabilistic information (issued from Source 1) is non-informative; and,
  • the only meaningful and available information is reduced to the epistemic expert information on the sub-feature vector issued from Source 2;
then, the proposed PML criterion simply reduces to the maximum possibility decision criterion:
$$\pi_{x_2}(C_1) \;\; \underset{Decision[x(p)]=C_2}{\overset{Decision[x(p)]=C_1}{\gtrless}} \;\; \pi_{x_2}(C_2) \qquad (29)$$
This raises a fundamental interpretation of the maximum possibility decision criterion as being a very special case of the possibilistic Bayesian decision making process under the total ignorance assumption of the probabilistic source of information.
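The following is a minimal NumPy sketch of this machinery, under the assumption of the loss values of Equation (22): it builds the possibilistic loss matrix, evaluates the conditional risk of Equation (23), and checks numerically that minimizing it gives the same decision as the product rule of Equation (25). Function names and the example distributions are illustrative, not taken from the paper.

```python
import numpy as np

def pml_loss_matrix(pi_x2):
    """Possibilistic loss values of Equation (22), built from the a posteriori
    possibility distribution provided by the epistemic source S2."""
    M = len(pi_x2)
    lam = np.empty((M, M))
    for m in range(M):
        max_others = np.delete(pi_x2, m).max()
        for n in range(M):
            lam[m, n] = max_others if m != n else max_others - pi_x2[m]
    return lam

def pml_decision(prob_x1, pi_x2):
    """PML decision rule, Equation (25): argmax_n pi_{x2}(C_n) * Pr{C_n | x1}."""
    return int(np.argmax(pi_x2 * prob_x1))

# Minimizing the conditional risk of Equation (23) with the loss matrix above
# yields the same decision as the simple product rule of Equation (25).
prob = np.array([0.1, 0.4, 0.1, 0.3, 0.1])   # probabilistic source S1
pi   = np.array([0.2, 0.3, 0.5, 1.0, 0.1])   # possibilistic source S2 (illustrative)
risks = pml_loss_matrix(pi).T @ prob          # R(C_n|x1) for every decision
assert int(np.argmin(risks)) == pml_decision(prob, pi)
```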

5.3. PML Decision Criterion Behavior

Let $S_1$ denote a probabilistic source of information measuring a sub-feature vector $x_1 \in \Theta_1$ and attributing to each elementary decision $C_m$, $m = 1, 2, \ldots, M$, an a posteriori probability soft decision label $Pr\{C_m|x_1\}$. Under the assumption of equal a priori probabilities and using the binary-cost assignment, the application of the maximum a posteriori criterion (MAP), Equation (8), turns out to be the "optimal" criterion ensuring the minimum-error decision rate.
Assume that an additional possibilistic source of information, $S_2$ (measuring a sub-feature vector $x_2 \in \Theta_2$), is available, see Figure 4. Based on the use of the sub-feature vector $x_2 \in \Theta_2$, $S_2$ attributes to each elementary decision $C_m$ an a posteriori possibility soft decision label $\pi_{x_2}(C_m)$, $m = 1, 2, \ldots, M$. To obtain a hard decision, the application of the maximum of possibility decision criterion, Equation (12), is considered.
In the previous section, we proposed the possibilistic maximum likelihood (PML) decision criterion, Equation (25), as a hybrid decision criterion allowing the coupled use of both sources of information, $S_1$ and $S_2$, by considering the possibilistic information issued from $S_2$, i.e., $\pi_{x_2}(C_m)$, $m = 1, 2, \ldots, M$, for the definition of the loss values in the framework of the minimum-risk Bayes decision criterion (instead of using the binary-cost assignment approach). In this section, we briefly discuss, from a descriptive point of view and through an illustrative example, the "decisional behavior" of the PML criterion when compared to the decisions obtained with the "individual" application of the MAP and the maximum of possibility decision criteria.
First, it is worthwhile to notice that the "decision variable" to be maximized by the PML criterion is simply the direct product $\upsilon(C_m) = \pi_{x_2}(C_m) \cdot Pr\{C_m|x_1\}$, $m = 1, 2, \ldots, M$, which is a T-norm fusion operator (considering both probabilistic and possibilistic information as two "similar" measures of the degree of truthfulness related to the occurrence of the different elementary decisions; see also p. 101 of Yager [24]). This also means that both sources of information, $S_1$ and $S_2$, are considered as having the same informative level. It is also important to notice that the PML criterion, as a decision fusion operator merging decisional information from both sources $S_1$ and $S_2$, constitutes a coherent decision fusion criterion in the sense that:
- when both sources $S_1$ and $S_2$ are in full agreement (i.e., leading to the same decision $C_{m_0}$), the decision obtained by the application of the PML criterion will be the same decision $C_{m_0}$;
- when one of the two sources $S_1$ and $S_2$ suffers from total ignorance (i.e., producing equal a posteriori probabilities for $S_1$, or equal a posteriori possibilities for $S_2$), the PML criterion will "duplicate" the same elementary decision as the one proposed by the remaining reliable source of information;
- when the two sources $S_1$ and $S_2$ lack decisional agreement, the decision obtained by the application of the PML criterion will be the most "plausible" elementary decision, which may be different from the individual decisions resulting from the MAP (resp. maximum possibility) criterion using the sub-feature vector $x_1 \in \Theta_1$ (resp. sub-feature vector $x_2 \in \Theta_2$).
This decision fusion coherence is illustrated through the examples given in Table 1 (a small numerical sketch follows the list below). The decisions set is formed by five elementary decisions, i.e., $\Omega = \{C_1, C_2, C_3, C_4, C_5\}$, and we assume that, given the observed feature $x_1 \in \Theta_1$, the probabilistic source $S_1$ produces the following a posteriori probability distribution: $Pr\{\cdot|x_1\}$ = [0.1 0.4 0.1 0.3 0.1]. Each example fits in one sub-array, which presents the specific configuration of $S_1$ and $S_2$ ($Pr\{\cdot|x_1\}$, $\pi_{x_2}(\cdot)$ and $\upsilon = Pr\{\cdot|x_1\} \cdot \pi_{x_2}(\cdot)$) with the resulting decision for each decision parameter. The cases presented in Table 1 are explained as follows:
  • Case 1: when both sources $S_1$ and $S_2$ agree, with a winning decision $C_2$, the PML criterion maintains this agreement and yields the same decision, $C_2$.
  • Cases 2 and 3: when one of the two sources presents total ignorance, the PML criterion "duplicates" the elementary decision offered by the remaining reliable source of information.
  • Cases 4, 5 and 6: when sources $S_1$ and $S_2$ lack agreement (i.e., dissonant sources), the resulting decision obtained through the application of the PML criterion is the most reasonable decision, which may not necessarily be one of the winning decisions offered by the two sources (this is specifically shown in Case 6).
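The coherent behavior described above can be reproduced with the PML decision rule of Equation (25); the possibility distributions below are illustrative placeholders chosen for this note, not the values of Table 1.

```python
import numpy as np

def pml_decision(prob_x1, pi_x2):
    """PML decision rule, Equation (25) (same as in the earlier sketch)."""
    return int(np.argmax(pi_x2 * prob_x1))

prob = np.array([0.1, 0.4, 0.1, 0.3, 0.1])        # S1, as given for Table 1

# Agreement (Case 1 style): S2 also favours C2, so PML keeps the common decision.
pi_agree = np.array([0.3, 1.0, 0.2, 0.6, 0.1])     # illustrative values
assert pml_decision(prob, pi_agree) == 1            # decision C2 (index 1)

# Total ignorance of S2 (Cases 2/3 style): PML follows S1, i.e., the MAP decision.
pi_ignorant = np.ones(5)
assert pml_decision(prob, pi_ignorant) == int(np.argmax(prob))

# Dissonant sources (Cases 4-6 style): the PML decision may differ from both
# individual winners; here S1 favours C2, S2 favours C3, but the product favours C4.
pi_dissonant = np.array([0.2, 0.3, 1.0, 0.9, 0.1])
print(pml_decision(prob, pi_dissonant))             # prints 3, i.e., decision C4
```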

6. Experimental and Validation Results

In this section, the proposed PML decision-making criterion is evaluated in a pixel-based image classification context. A synthetic scene composed of five thematic classes, $\Omega = \{C_1, C_2, C_3, C_4, C_5\}$, is assumed to be observed through two independent imaging sensors. Sensor $S_1$ (resp. sensor $S_2$) provides an image $I_1$ (resp. $I_2$) of the simulated scene. The two considered sensors are assumed to be statistically independent. Without loss of generality, pixels from both images $I_1$ and $I_2$ are assumed to have the same spatial resolution; thus, they represent the same observed spatial cell or object $o$. The value of the pixel $I_1(i,j)$ (resp. $I_2(i,j)$) provides the observed feature $x_1$ (resp. $x_2$) delivered by the first (resp. second) sensor. According to the sensors' characteristics, the measured feature $x_1$ (resp. $x_2$) follows a Gaussian $N(m_C, \sigma_C^2)$ (resp. Rayleigh $(\sigma_C^2)$) probability distribution, with parameters $m_C$, $\sigma_C^2$ depending on the thematic class "$C$" of the observed object.
Figure 5 depicts the simulated images $I_1$ and $I_2$ assumed to be delivered at the output of the two sensors. Figure 6 shows the possibility distributions encoding the expert's information for the five thematic classes. The parameter values considered for each thematic class are given in the same figure. This configuration of class parameters is considered a reasonable configuration that may be encountered when real data are observed. Nevertheless, other configurations have been generated, and the obtained results are in full accordance with those obtained with the considered configuration.
In addition to the previously mentioned probabilistic information, we assume that each thematic class is described by an expert using the "simplest" linguistic variable "Close to $v_{s,C_k}$", where $v_{s,C_k}$ denotes the feature mean value of the thematic class $C_k$ observed through sensor $S_s$. Therefore, the only information given by the expert is $v_{1,C_k} = m_{C_k}$ for sensor $S_1$ (underlying Gaussian distributions) and $v_{2,C_k} = \sigma_{C_k}\sqrt{\pi/2}$ for sensor $S_2$ (underlying Rayleigh distributions). For each sensor $S_s$ and thematic class $C_k$, a standard triangular possibility distribution is considered to encode this epistemic knowledge, with its summit positioned at the mean value $v_{s,C_k}$ and its support covering the whole range of the features set. It is clearly seen that these possibility distributions (considered as encoding the expert's knowledge) represent weak knowledge that is less informative than the initial, or even estimated, probability density functions.
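As a sketch of how such expert knowledge might be encoded (the feature range and the class mean below are illustrative assumptions, not the figures' values), a triangular possibility distribution "close to $v$", with its summit at $v$ and its support spanning the whole feature range, can be written as:

```python
import numpy as np

def triangular_possibility(v, feat_min, feat_max, x):
    """Triangular possibility distribution encoding the expert statement
    'close to v': pi = 1 at x = v, decreasing linearly to 0 at the ends
    of the feature range [feat_min, feat_max]."""
    x = np.asarray(x, dtype=float)
    rising  = (x - feat_min) / (v - feat_min)
    falling = (feat_max - x) / (feat_max - v)
    return np.clip(np.minimum(rising, falling), 0.0, 1.0)

# Example with an illustrative class mean of 100 on a feature range [0, 255].
grid = np.linspace(0.0, 255.0, 256)
pi_class = triangular_possibility(100.0, 0.0, 255.0, grid)
```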
To evaluate the efficiency of the proposed possibilistic maximum likelihood decision-making criterion, the adopted procedure consists, first, of the random generation of 1000 statistical realizations of the two synthetic Gaussian and Rayleigh images (with the five considered thematic classes) representing the analyzed scene. Second, the following average pixel-based recognition rates are evaluated:
- $\tau(Pr^G_{C_m})$: Minimum-risk Bayes average pixel-based decision recognition rate, Equation (8), using the zero-one loss assignment, for each thematic class $C_m$, $m = 1, 2, \ldots, 5$, based on the use of the Gaussian feature $x_1$ of sensor $S_1$ only.
- $\tau(\pi^G_{C_m})$: Maximum possibility average pixel-based decision recognition rate, Equation (12), exploiting the epistemic expert knowledge for the description of each considered thematic class $C_m$, $m = 1, 2, \ldots, 5$, in the features set $\Theta_1$ only.
- $\tau(Pr^G_{C_m} \cdot \pi^R_{C_m})$: Possibilistic maximum likelihood average pixel-based decision recognition rate, Equation (24), jointly exploiting the epistemic expert knowledge for the description of each considered thematic class $C_m$, $m = 1, 2, \ldots, 5$, in the features set $\Theta_2$ (sensor $S_2$), and the Gaussian probabilistic knowledge for the description of the same thematic class in the features set $\Theta_1$ (sensor $S_1$).
- $\tau(Pr^R_{C_m})$: Minimum-risk Bayes average pixel-based decision recognition rate, Equation (8), using the zero-one loss assignment, for each thematic class $C_m$, $m = 1, 2, \ldots, 5$, based on the use of the Rayleigh feature $x_2$ of sensor $S_2$ only.
- $\tau(\pi^R_{C_m})$: Maximum possibility average pixel-based decision recognition rate, Equation (12), exploiting the epistemic expert knowledge for the description of each considered thematic class $C_m$, $m = 1, 2, \ldots, 5$, in the features set $\Theta_2$ (sensor $S_2$).
- $\tau(Pr^R_{C_m} \cdot \pi^G_{C_m})$: Possibilistic maximum likelihood average pixel-based decision recognition rate, Equation (24), jointly exploiting the epistemic expert knowledge for the description of each thematic class $C_m$, $m = 1, 2, \ldots, 5$, in the features set $\Theta_1$ of sensor $S_1$, and the Rayleigh probabilistic knowledge for the description of $C_m$, $m = 1, 2, \ldots, 5$, in the features set $\Theta_2$ of sensor $S_2$.
- $\tau(Pr^G_{C_m} \cdot Pr^R_{C_m})$: Minimum-risk Bayes average pixel-based decision recognition rate, Equation (8), using the zero-one loss assignment, for each thematic class $C_m$, $m = 1, 2, \ldots, 5$, based on the joint use of both sensors $S_1$ (associated with the Gaussian probabilistic knowledge in the features set $\Theta_1$) and $S_2$ (associated with the Rayleigh probabilistic knowledge in the features set $\Theta_2$). Sensors $S_1$ and $S_2$ are considered as being statistically independent. This criterion, as well as all the criteria above, has been calculated for the example in Table 2.
The obtained average recognition rates are summarized in Table 2 (last row). As expected, at the global scene level, the average recognition rates when a probabilistic information source is used (for modelling the observed features) are higher than those obtained by the use of epistemic knowledge (i.e., $\tau(Pr^G) \geq \tau(\pi^G)$ and $\tau(Pr^R) \geq \tau(\pi^R)$). Nevertheless, at the thematic class level, this property does not hold for some classes. This is mainly due to the fact that, for "sharp" class probability density functions, i.e., with small variance (for instance, thematic classes $C_2$ and $C_4$), the shape of the possibility distributions used to encode the expert knowledge (i.e., a wide-based triangular possibility shape) may bias the influence of each class, leading to a better recognition rate to the detriment of other neighboring classes (for instance, class $C_3$). In this case, this leads to $\tau(\pi^G_{C_2}) > \tau(Pr^G_{C_2})$ and $\tau(\pi^G_{C_4}) > \tau(Pr^G_{C_4})$.
The poorer recognition performances of the maximum possibility decision criterion clearly come from the "weak epistemic knowledge" produced by the expert (indicating just the mean values) compared to the "strong probabilistic" knowledge involved by the full probability density functions (resulting either from a priori information or from density estimation using some learning data). The most interesting and promising result is the recognition rate improvement obtained when the epistemic knowledge is jointly used with the probabilistic one, as proposed by the PML decision criterion. Indeed, Table 2 (columns 4 and 7, bold numbers) shows that for all classes $C_m$, $m = 1, 2, \ldots, 5$, we have $\tau(Pr^G_{C_m} \cdot \pi^R_{C_m}) \geq \tau(Pr^G_{C_m})$ and $\tau(Pr^R_{C_m} \cdot \pi^G_{C_m}) \geq \tau(Pr^R_{C_m})$.
It is worthwhile to notice (columns 4 and 7, bold numbers) that the level of performance improvement depends on the "informative" capacity of the "additional" knowledge source. For instance, embedding the Gaussian source of knowledge (in its epistemic form) into the decisional process based on the probabilistic Rayleigh source of knowledge improves the performance level much more than the reverse (i.e., embedding the epistemic Rayleigh source of knowledge into the decisional process based on the probabilistic Gaussian source of knowledge): $\tau(Pr^G_{C_m} \cdot \pi^R_{C_m}) \geq \tau(Pr^G_{C_m})$, whereas $\tau(Pr^R_{C_m} \cdot \pi^G_{C_m}) > \tau(Pr^R_{C_m})$.
Finally, it is important to notice that the PML decision performances are bounded from below and from above as follows:
$$\tau(Pr^G_{C_m}) \;\leq\; \tau(Pr^G_{C_m} \cdot \pi^R_{C_m}) \;\leq\; \tau(Pr^G_{C_m} \cdot Pr^R_{C_m})$$
$$\tau(Pr^R_{C_m}) \;\leq\; \tau(Pr^R_{C_m} \cdot \pi^G_{C_m}) \;\leq\; \tau(Pr^R_{C_m} \cdot Pr^G_{C_m}) \qquad (30)$$
Given that the two sources $S_1$ and $S_2$ are assumed to be statistically independent, the joint probability distribution of the augmented feature vector $x = [x_1 \; x_2]$ is the direct product of the marginal ones. This simply means that the upper bounds given in Equation (30) constitute the optimal recognition rate (obtained by considering both probabilistic sources of knowledge). Therefore, the PML criterion improves the performances over the use of a "single" probabilistic source of knowledge and approaches, for some thematic classes, the optimal recognition rate upper bound (last column of Table 2).
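The Monte Carlo evaluation described above can be sketched as follows; the class parameters, the feature range and the number of pixels are illustrative placeholders (not the paper's values), SciPy is assumed to be available, and only the Gaussian-probabilistic/Rayleigh-epistemic combination is shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative class parameters (placeholders, not the values used in the paper).
means   = np.array([40.0, 80.0, 120.0, 160.0, 200.0])   # Gaussian means m_C
stds    = np.array([15.0, 8.0, 20.0, 10.0, 25.0])        # Gaussian standard deviations
ray_sig = np.array([30.0, 60.0, 90.0, 120.0, 150.0])     # Rayleigh scale parameters
M, n_pix, FEAT_MAX = len(means), 10_000, 400.0

def tri(v, lo, hi, x):
    """Triangular possibility 'close to v' over the feature range [lo, hi]."""
    return np.clip(np.minimum((x - lo) / (v - lo), (hi - x) / (hi - v)), 0.0, 1.0)

correct = {"MAP_Gaussian": 0, "PML_Gaussian_x_piRayleigh": 0}
for true_c in range(M):
    x1 = rng.normal(means[true_c], stds[true_c], n_pix)   # sensor S1 features
    x2 = rng.rayleigh(ray_sig[true_c], n_pix)              # sensor S2 features

    # Probabilistic soft labels from S1 (equal priors, so posteriors ~ likelihoods).
    lik_g = np.stack([stats.norm.pdf(x1, means[c], stds[c]) for c in range(M)])
    # Possibilistic soft labels from S2: the expert only gives the Rayleigh mean.
    ray_means = ray_sig * np.sqrt(np.pi / 2.0)
    pi_r = np.stack([tri(ray_means[c], 0.0, FEAT_MAX, x2) for c in range(M)])

    correct["MAP_Gaussian"] += np.sum(np.argmax(lik_g, axis=0) == true_c)
    correct["PML_Gaussian_x_piRayleigh"] += np.sum(np.argmax(lik_g * pi_r, axis=0) == true_c)

rates = {k: v / (M * n_pix) for k, v in correct.items()}
print(rates)   # the PML rate is expected to be at least as good as the Gaussian-only MAP rate
```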

7. Conclusions

In this paper, a new criterion for the decision-making process in classification systems has been proposed. After a brief recall of the Bayesian decision-making criteria, three major possibilistic decision-making criteria, i.e., maximum possibility, maximum necessity measure and confidence index maximization, have been detailed. It was clearly shown that the three considered decision criteria lead, at best, to the same decisions as the maximum possibility decision criterion. However, the maximum possibility criterion has no physical justification. A new criterion, called possibilistic maximum likelihood (PML), framed within possibility theory but using notions from Bayesian decision-making, has been presented and its behavior evaluated. The main motivation behind the development of such a criterion is multisource information fusion, where a pattern may be observed through several channels and where the available knowledge concerning the observed features may be of a probabilistic nature for some features and of an epistemic nature for others.
In this configuration, most encountered studies transform one of the two knowledge types into the other form, and then apply either the classical Bayesian or possibilistic decision-making criteria. In this paper, we have proposed a new approach called the Possibilistic maximum likelihood (PML) decision-making approach, where the Bayesian decision-making framework is adopted and where the epistemic knowledge is integrated into the decision-making process by defining possibilistic loss values instead of the usually used zero-one loss values.
A set of possibilistic loss values has been proposed and evaluated in the context of pixel-based image classification, where a synthetic scene composed of several thematic classes was randomly generated using two types of sensors: a Gaussian and a Rayleigh sensor. The evaluation of the proposed PML criterion has clearly shown the interest of applying PML: the obtained recognition rates approach the optimal rates (i.e., those obtained when all the available knowledge is expressed in terms of probabilistic knowledge). Moreover, the proposed PML decision criterion offers a physical interpretation of the maximum possibility decision criterion as a special case of the possibilistic Bayesian decision-making process when all the available probabilistic information indicates equal decision probabilities.

Author Contributions

Conceptualization, B.S., D.G., S.A. and B.A.; Methodology, B.S., D.G., B.A., and É.B.; Software, S.A. and B.A.; Supervision, B.S. and É.B.; Writing—original draft, B.S.; Writing—review & editing, and É.B. All authors have read and agreed to the final published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Dubois, D. Uncertainty theories: A unified view. In SIPTA School 08—UEE 08; SIPTA: Montpellier, France, 2008.
2. Klir, G.J.; Wierman, M.J. Uncertainty-Based Information: Elements of Generalized Information Theory; Physica-Verlag HD: Heidelberg, Germany, 1999.
3. Denoeux, T. 40 years of Dempster-Shafer theory. Int. J. Approx. Reason. 2016, 79, 1–6.
4. Yager, R.R.; Liu, L. (Eds.) Classic Works of the Dempster-Shafer Theory of Belief Functions; Springer: Berlin, Germany, 2008; Volume 219.
5. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976; Volume 42.
6. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
7. Zadeh, L. Fuzzy Sets as the Basis for a Theory of Possibility. Fuzzy Sets Syst. 1978, 1, 3–28.
8. Zadeh, L.A. Generalized theory of uncertainty (GTU)—principal concepts and ideas. Comput. Stat. Data Anal. 2006, 51, 15–46.
9. Dubois, D.; Prade, H. The legacy of 50 years of fuzzy sets: A discussion. Fuzzy Sets Syst. 2015, 281, 21–31.
10. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification; John Wiley & Sons: Hoboken, NJ, USA, 2012.
11. Solaiman, B.; Bossé, É. Possibility Theory for the Design of Information Fusion Systems; Springer: Berlin, Germany, 2019.
12. Frélicot, C. On unifying probabilistic/fuzzy and possibilistic rejection-based classifiers. In Advances in Pattern Recognition; Springer: Berlin, Germany, 1998; pp. 736–745.
13. Solaiman, B.; Bossé, E.; Pigeon, L.; Guériot, D.; Florea, M.C. A conceptual definition of a holonic processing framework to support the design of information fusion systems. Inf. Fusion 2015, 21, 85–99.
14. Tou, J.T.; Gonzalez, R.C. Pattern Recognition Principles; Addison-Wesley: Boston, MA, USA, 1974.
15. Dubois, D.; Prade, H. Possibility Theory: An Approach to Computerized Processing of Uncertainty; Plenum Press: New York, NY, USA, 1988.
16. Dubois, D.J. Fuzzy Sets and Systems: Theory and Applications; Academic Press: Cambridge, MA, USA, 1980; Volume 144.
17. Dubois, D.; Prade, H. When upper probabilities are possibility measures. Fuzzy Sets Syst. 1992, 49, 65–74.
18. Aliev, R.; Pedrycz, W.; Fazlollahi, B.; Huseynov, O.H.; Alizadeh, A.; Guirimov, B. Fuzzy logic-based generalized decision theory with imperfect information. Inf. Sci. 2012, 189, 18–42.
19. Buntao, N.; Kreinovich, V. How to Combine Probabilistic and Possibilistic (Expert) Knowledge: Uniqueness of Reconstruction in Yager’s (Product) Approach. Int. J. Innov. Manag. Inf. Prod. (IJIMIP) 2011, 2, 1–8.
20. Coletti, G.; Petturiti, D.; Vantaggi, B. Possibilistic and probabilistic likelihood functions and their extensions: Common features and specific characteristics. Fuzzy Sets Syst. 2014, 250, 25–51.
21. Fargier, H.; Amor, N.B.; Guezguez, W. On the complexity of decision making in possibilistic decision trees. arXiv 2012, arXiv:1202.3718.
22. Guo, P. Possibilistic Decision-Making Approaches. In The 2007 International Conference on Intelligent Systems and Knowledge Engineering; Atlantis Press: Beijing, China, 2007; pp. 684–688.
23. Weng, P. Qualitative decision making under possibilistic uncertainty: Toward more discriminating criteria. arXiv 2012, arXiv:1207.1425.
24. Yager, R.R. A measure based approach to the fusion of possibilistic and probabilistic uncertainty. Fuzzy Optim. Decis. Mak. 2011, 10, 91–113.
25. Yager, R.R. On the fusion of possibilistic and probabilistic information in biometric decision-making. In Proceedings of the 2011 IEEE Workshop on Computational Intelligence in Biometrics and Identity Management (CIBIM), Paris, France, 11–15 April 2011; IEEE: Piscataway Township, NJ, USA, 2011; pp. 109–114.
26. Bouyssou, D.; Dubois, D.; Prade, H.; Pirlot, M. Decision Making Process: Concepts and Methods; John Wiley & Sons: Hoboken, NJ, USA, 2013.
27. Dubois, D.; Foulloy, L.; Mauris, G.; Prade, H. Probability-possibility transformations, triangular fuzzy sets, and probabilistic inequalities. Reliab. Comput. 2004, 10, 273–297.
28. Dubois, D.; Prade, H. Possibility theory: Qualitative and quantitative aspects. In Quantified Representation of Uncertainty and Imprecision; Springer: Berlin, Germany, 1998; pp. 169–226.
29. Dubois, D.; Prade, H. The three semantics of fuzzy sets. Fuzzy Sets Syst. 1997, 90, 141–150.
30. Dubois, D.; Prade, H.; Sandri, S. On possibility/probability transformations. In Fuzzy Logic; Springer: Berlin, Germany, 1993; pp. 103–112.
31. Denœux, T. Modeling vague beliefs using fuzzy-valued belief structures. Fuzzy Sets Syst. 2000, 116, 167–199.
32. Denœux, T. Maximum likelihood estimation from fuzzy data using the EM algorithm. Fuzzy Sets Syst. 2011, 183, 72–91.
33. Denoeux, T. Maximum likelihood estimation from uncertain data in the belief function framework. Knowl. Data Eng. 2013, 25, 119–130.
34. Walley, P. Statistical Reasoning with Imprecise Probabilities; Chapman and Hall: London, UK, 1991.
35. Walley, P. Towards a unified theory of imprecise probability. Int. J. Approx. Reason. 2000, 24, 125–148.
36. Linda, O.; Manic, M.; Alves-Foss, J.; Vollmer, T. Towards resilient critical infrastructures: Application of Type-2 Fuzzy Logic in embedded network security cyber sensor. In Proceedings of the 2011 4th International Symposium on Resilient Control Systems (ISRCS), Boise, ID, USA, 9–11 August 2011; IEEE: Piscataway Township, NJ, USA, 2011; pp. 26–32.
37. Mendel, J.M.; John, R.I.B. Type-2 fuzzy sets made simple. Fuzzy Syst. 2002, 10, 117–127.
38. Ozen, T.; Garibaldi, J.M. Effect of type-2 fuzzy membership function shape on modelling variation in human decision making. In Proceedings of the IEEE International Conference on Fuzzy Systems, Budapest, Hungary, 25–29 July 2004.
39. Luce, R.D.; Raiffa, H. Games and Decisions; Wiley: New York, NY, USA, 1957.
40. Haouari, B.; Amor, N.B.; Elouedi, Z.; Mellouli, K. Naïve possibilistic network classifiers. Fuzzy Sets Syst. 2009, 160, 3224–3238.
41. Benferhat, S.; Tabia, K. An efficient algorithm for naive possibilistic classifiers with uncertain inputs. In Scalable Uncertainty Management; Springer: Berlin, Germany, 2008; pp. 63–77.
42. Bounhas, M.; Hamed, M.G.; Prade, H.; Serrurier, M.; Mellouli, K. Naive possibilistic classifiers for imprecise or uncertain numerical data. Fuzzy Sets Syst. 2014, 239, 137–156.
43. Bounhas, M.; Mellouli, K.; Prade, H.; Serrurier, M. Possibilistic classifiers for numerical data. Soft Comput. 2013, 17, 733–751.
44. Kikuchi, S.; Perincherry, V. Handling Uncertainty in Large Scale Systems with Certainty and Integrity. Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.132.7069 (accessed on 30 December 2020).
45. Yager, R.R. Set measure directed multi-source information fusion. Fuzzy Syst. 2011, 19, 1031–1039.
46. Yager, R.R. Hard and soft information fusion using measures. In Proceedings of the 2010 IEEE International Conference on Intelligent Systems and Knowledge Engineering, Hangzhou, China, 15–16 November 2010.
Figure 1. Structure of a multisource classification system (Source: [11]).
Figure 2. Comparative example of the maximum possibility and maximum necessity measures decision criteria using four a posteriori possibility distributions.
Figure 3. Confidence indices associated with different decisions (A_m0: event having the highest possibility degree; A_m1: event with the second highest possibility degree). (Source: [11]).
Figure 4. Multi-source information context for pattern classification.
Figure 5. Two-sensors simulated images representing a scene of five thematic classes. Pixels from I1 (resp. I2) are generated using Gaussian (resp. Rayleigh) density functions.
Figure 6. Triangular-shaped possibility distributions encoding expert’s knowledge, for the five thematic classes and the two sensors.
Table 1. PML decision making behavior for several cases.

     |        Case 1           |        Case 2           |        Case 3
     | Pr{·|x1}  π_x2(·)  υ(·) | Pr{·|x1}  π_x2(·)  υ(·) | Pr{·|x1}  π_x2(·)  υ(·)
C1   |   0.1       0.2    0.02 |   0.1       1.0    0.1  |   0.2       0.8    0.16
C2   |   0.4       0.7    0.28 |   0.4       1.0    0.4  |   0.2       0.2    0.04
C3   |   0.1       0.3    0.03 |   0.1       1.0    0.1  |   0.2       0.1    0.02
C4   |   0.3       0.7    0.21 |   0.3       1.0    0.3  |   0.2       0.1    0.02
C5   |   0.1       0.1    0.10 |   0.1       1.0    0.1  |   0.2       0.2    0.04

     |        Case 4           |        Case 5           |        Case 6
     | Pr{·|x1}  π_x2(·)  υ(·) | Pr{·|x1}  π_x2(·)  υ(·) | Pr{·|x1}  π_x2(·)  υ(·)
C1   |   0.1       1.0    0.1  |   0.1       0.8    0.08 |   0.1       0.8    0.08
C2   |   0.4       0.2    0.08 |   0.4       0.2    0.08 |   0.4       0.3    0.12
C3   |   0.1       0.0    0.0  |   0.1       0.1    0.01 |   0.1       0.4    0.04
C4   |   0.3       0.0    0.0  |   0.3       0.3    0.03 |   0.35      0.5    0.16
C5   |   0.1       0.0    0.0  |   0.1       0.1    0.02 |   0.1       0.5    0.05
Table 2. PML decision average pixel-based recognition rates for the five thematic classes using various configurations of knowledge sources.

Knowledge sources | S1: probabilistic (G: Gaussian); S2: epistemic (expert) | S2: probabilistic (R: Rayleigh); S1: epistemic (expert) | Both sources probabilistic
Source(s) used    |   S1          S2          S1 PML S2                     |   S2          S1          S1 PML S2                     |   S1 and S2
Criterion         |   τ(Pr_Cm^G)  τ(π_Cm^R)   τ(Pr_Cm^G · π_Cm^R)           |   τ(Pr_Cm^R)  τ(π_Cm^G)   τ(Pr_Cm^R · π_Cm^G)           |   τ(Pr_Cm^G · Pr_Cm^R)
C1                |   0.96        0.33        0.96                          |   0.34        0.96        0.64                          |   0.96
C2                |   0.90        0.32        0.91                          |   0.49        0.98        0.68                          |   0.92
C3                |   0.78        0.91        0.81                          |   0.89        0.67        0.90                          |   0.91
C4                |   0.94        0.25        0.95                          |   0.32        0.96        0.61                          |   0.97
C5                |   0.83        0.72        0.86                          |   0.58        0.75        0.71                          |   0.94
Average rate      |   0.88        0.51        0.90                          |   0.52        0.86        0.71                          |   0.94
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
