Article

A Novel Approach to Decision Making Based on Interval-Valued Fuzzy Soft Set

College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
*
Author to whom correspondence should be addressed.
Submission received: 28 October 2021 / Revised: 10 November 2021 / Accepted: 11 November 2021 / Published: 29 November 2021

Abstract

Interval-valued fuzzy soft set theory is a powerful tool that provides the capacity to process uncertain data in an imprecise environment. Two methods for decision making based on this model have been proposed. However, when the datasets described by interval-valued fuzzy soft sets contain extreme values or outliers, the existing methods are neither reasonable nor efficient and may overlook some excellent candidates. To solve this problem, we give a novel approach to decision making based on the interval-valued fuzzy soft set by means of the contrast table. Here, the contrast table has symmetry between the objects. Our proposed algorithm makes decisions based on the number of superior parameter values rather than on score values, which is a new perspective for decision making. The comparison results of the three methods on two real-life cases show that the proposed algorithm is more feasible and efficient than the existing algorithms when the uncertain datasets contain extreme values. Our proposed algorithm can also detect extreme or unbalanced values during decision making if it is regarded as a supplement to the existing algorithms.

1. Introduction

Uncertain and fuzzy data arise in practical and complicated problems in domains as diverse as social science, medical science, economics, and engineering [1,2]. Many classical approaches, such as probability theory, fuzzy sets, and rough sets [3], have been developed to deal with vagueness. However, these solutions have inherent difficulties, which were pointed out in [4]. Molodtsov initiated the concept of soft set theory [4], which is an effective mathematical tool for handling uncertainty. Soft set theory has a rich variety of applications in many fields, including game theory [4], operations research, decision making [5,6,7,8], data mining [9,10], screening of alternatives [11], resource discovery [12], and data filling [13] for incomplete datasets [14]. In addition, scholars have recently developed and studied many combinations of soft set theory with other mathematical models, such as the fuzzy soft set [15,16,17], intuitionistic fuzzy soft set [18,19], belief interval-valued soft set [20], interval-valued intuitionistic fuzzy soft set [21], hesitant N-soft set [22], confidence soft set [23], fault-tolerant enhanced bijective soft set [24], trapezoidal interval type-2 fuzzy soft set [25], soft rough set [26], Z-soft fuzzy rough set [27], and Z-soft rough fuzzy set [28].
In a number of fuzzy applications, the related membership functions are highly individual, so expressing a degree of membership as an interval is more reasonable and rational. As a result, the interval-valued fuzzy soft set [29], constructed by integrating the interval-valued fuzzy set and the soft set, was created. The interval-valued fuzzy soft set (IVFSS) is one of the most successful extended models of the soft set. Because it inherits the merits of the two underlying models, it is robust in the face of fuzzy and unclearly defined datasets. At present, the main applications of this model involve data filling, parameter reduction, evaluation systems, and decision making. Incomplete information in datasets described by IVFSS can be handled by the method of [30]. For the purpose of discarding needless parameters, Ma et al. [31] described and analyzed the merits and demerits of four parameter reduction ideas in the process of decision making. The research of [32] further presented a complete evaluation system framework constructed on this model. Decision making is an important application of this model. The paper [33] presented an adjustable decision making algorithm built on the definition of level soft sets. However, this method transforms interval-valued data into binary data, which loses the original advantage of the interval-valued description in IVFSS. Combined weights for the parameters in IVFSS were considered in [34,35] for stochastic multi-criteria decision making and emergency decision making. However, these two methods are complicated and not easy to implement. In [29], a decision making method based on scores for IVFSS was proposed, and an efficient decision making approach that considers newly added objects was given in [36] in 2021. However, when there are extreme values or outliers in the datasets used for making decisions, the methods proposed in [29,36] are not reasonable and efficient. That is, according to the methods proposed in [29,36], the object with the maximum score value is selected as the best choice; if some objects have outliers or extreme values for specific parameters, the two existing methods are likely to ignore some excellent candidates. Moreover, in some situations we need to find extreme values or objects with outstanding performance on specific parameters. To solve this problem, in this paper we give a novel approach to decision making based on IVFSS. Our contributions are as follows:
(1)
We propose a novel approach to decision making based on IVFSS by means of the contrast table, which considers the extreme values or outliers.
(2)
Our proposed algorithm can find and examine extreme or unbalanced values for decision making when the two existing approaches produce decision results different from those of our algorithm; if the three methods produce the same results, we can infer that the dataset contains no extreme data.
(3)
The comparison results of the three methods on one example and two real-life applications, the 5-star Sydney Hotel Rating Systems and the Scenic Spots Weather Condition Evaluation Systems, show that the proposed algorithm is superior to the two existing algorithms when extreme or outlier values appear in the decision making process.
The remainder of this paper is organized as follows. The basic related terms are introduced in Section 2. Section 3 recalls the two existing algorithms of [29,36] and points out their weaknesses. Section 4 proposes a novel approach to decision making based on IVFSS by means of the contrast table. Section 5 presents the comparison results among the three methods on two real-life cases. Finally, Section 6 concludes the paper.

2. Basic Notions

In this section, we briefly recall some definitions with regard to soft sets and IVFSS.
Definition 1.
([4]). Let $U$ be a non-empty initial universe of objects, $E$ be a set of parameters in relation to objects in $U$, $P(U)$ be the power set of $U$, and $A$ be a subset of $E$. A pair $(F, A)$ is called a soft set over $U$, where $F$ is a mapping given by $F: A \to P(U)$.
Definition 2.
([29]). Let an interval-valued fuzzy set $\hat{X}$ on a universe $U$ be a mapping $\hat{X}: U \to \mathrm{Int}([0,1])$, where $\mathrm{Int}([0,1])$ denotes the set of all closed sub-intervals of $[0,1]$. Suppose that $\hat{X} \in \tilde{\psi}(U)$, where $\tilde{\psi}(U)$ represents the set of all interval-valued fuzzy sets on $U$. Here $\mu_{\hat{X}}^{-}(x)$ and $\mu_{\hat{X}}^{+}(x)$ represent the lower and upper degrees of membership of $x$ to $\hat{X}$, with $0 \le \mu_{\hat{X}}^{-}(x) \le \mu_{\hat{X}}^{+}(x) \le 1$. For every $x \in U$, $\mu_{\hat{X}}(x) = [\mu_{\hat{X}}^{-}(x), \mu_{\hat{X}}^{+}(x)]$ denotes the degree of membership of the element $x$ to $\hat{X}$. Suppose further that $E$ is a set of parameters in relation to objects in $U$. A pair $(\tilde{\omega}, E)$ is defined as an interval-valued fuzzy soft set over $\tilde{\psi}(U)$, where $\tilde{\omega}$ is a mapping given by $\tilde{\omega}: E \to \tilde{\psi}(U)$.
We give the following example for illustration of IVFSS.
Example 1.
A movie critic wants to express his impressions of four movies along four dimensions. We can apply the model of IVFSS to describe this fuzzy expression. Note that the universe U represents the set of the four different movies, U = {h1, h2, h3, h4}, and A represents the set of four parameters, A = {e1, e2, e3, e4} = {deliciously clever story, tight and witty plots, transporting visual beauty, high-profile releases}. The IVFSS (F, A) on U is then described by Table 1, which displays the lower and upper degrees of membership of the assessment of the four movies along the four aspects. For instance, movie h1 has tight and witty plots on a level of at least 0.3 and at most 0.7.
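As an illustration of how such a dataset can be represented in practice, the following minimal Python sketch encodes the IVFSS of Table 1 as a nested dictionary; the layout and all names are illustrative choices only, not notation from the paper.

```python
# Minimal sketch: encoding the IVFSS (F, A) of Table 1 as plain Python data.
from typing import Dict, Tuple

Interval = Tuple[float, float]          # (lower, upper) degree of membership
IVFSS = Dict[str, Dict[str, Interval]]  # object -> parameter -> interval

movies: IVFSS = {
    "h1": {"e1": (0.2, 0.4), "e2": (0.3, 0.7), "e3": (0.4, 0.6), "e4": (0.5, 0.9)},
    "h2": {"e1": (0.3, 0.6), "e2": (0.4, 1.0), "e3": (0.5, 0.9), "e4": (0.1, 0.3)},
    "h3": {"e1": (0.7, 0.8), "e2": (0.5, 0.7), "e3": (0.3, 0.5), "e4": (0.6, 0.9)},
    "h4": {"e1": (0.5, 0.8), "e2": (0.0, 0.4), "e3": (0.3, 0.8), "e4": (0.5, 0.6)},
}

# Reading the table: movie h1 has tight and witty plots (e2) on a level of
# at least 0.3 and at most 0.7.
low, up = movies["h1"]["e2"]
print(f"h1 on e2: [{low}, {up}]")
```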

3. Related Work

Although there are decision making methods based on related models such as soft sets [37] and fuzzy soft sets [38], these two models and the interval-valued fuzzy soft set differ in their characteristics. The entries of a soft set are 0 and 1, the data of a fuzzy soft set lie between 0 and 1, and an interval-valued fuzzy soft set has lower and upper degrees of membership that both lie between 0 and 1. Different models require different decision making approaches, so here we focus only on decision making methods based on interval-valued fuzzy soft sets. In this section, we briefly introduce the two existing algorithms for fuzzy decision making based on IVFSS, namely the score-based decision making approach (SBDM) [29] (Algorithm 1) and the decision making method considering the added objects (CAODM) [36] (Algorithm 2), as follows.
Algorithm 1: SBDM [29]
Input: an IVFSS  ( Z ˜ , P ) .
Output: the optimal object.
Step 1: Figure out the choice value $c_i$ for each object $h_i$ by the equation $c_i = [c_i^-, c_i^+] = \left[\sum_{p \in P} \mu_{\tilde{Z}(p)}^{-}(h_i), \sum_{p \in P} \mu_{\tilde{Z}(p)}^{+}(h_i)\right]$.
Step 2: Find the score value $r_i$ of $h_i$ by the equation $r_i = \sum_{h_j \in U} \left((c_i^- - c_j^-) + (c_i^+ - c_j^+)\right)$.
Step 3: find the maximum of the score value and the corresponding object is referred to as the best outcome.
Example 2 below applies the dataset of Example 1 to illustrate the performance of SBDM.
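Before that, SBDM can be sketched in a few lines of Python (illustrative code only, not taken from [29]); running it on the Table 1 data gives the choice values shown there and the corresponding scores, up to floating-point rounding.

```python
# Minimal sketch of SBDM (Algorithm 1); illustrative code, not taken from [29].

def sbdm(ivfss):
    """Return (choice_values, scores) for an IVFSS given as
    {object: {parameter: (lower, upper)}}."""
    # Step 1: choice value c_i = [sum of lower degrees, sum of upper degrees].
    choice = {
        h: (sum(lo for lo, _ in params.values()),
            sum(up for _, up in params.values()))
        for h, params in ivfss.items()
    }
    # Step 2: score r_i = sum_j ((c_i^- - c_j^-) + (c_i^+ - c_j^+)).
    score = {
        h: sum((ci[0] - cj[0]) + (ci[1] - cj[1]) for cj in choice.values())
        for h, ci in choice.items()
    }
    return choice, score

movies = {
    "h1": {"e1": (0.2, 0.4), "e2": (0.3, 0.7), "e3": (0.4, 0.6), "e4": (0.5, 0.9)},
    "h2": {"e1": (0.3, 0.6), "e2": (0.4, 1.0), "e3": (0.5, 0.9), "e4": (0.1, 0.3)},
    "h3": {"e1": (0.7, 0.8), "e2": (0.5, 0.7), "e3": (0.3, 0.5), "e4": (0.6, 0.9)},
    "h4": {"e1": (0.5, 0.8), "e2": (0.0, 0.4), "e3": (0.3, 0.8), "e4": (0.5, 0.6)},
}

choice, score = sbdm(movies)
# Step 3: the object with the maximum score is the best outcome.
best = max(score, key=score.get)
print(choice)          # choice values (cf. Table 1, up to rounding)
print(score)           # scores; h3 obtains the largest one
print("best:", best)
```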
Algorithm 2: CAODM [36]
Input: an IVFSS  ( S ~ , E ) .
Output: the optimal object.
Step 1: Compute the choice value $c_i$ for each object $h_i$ by the equation
$c_i = [c_i^-, c_i^+] = \left[\sum_{e_j \in E} \mu_{\tilde{S}(e_j)}^{-}(h_i), \sum_{e_j \in E} \mu_{\tilde{S}(e_j)}^{+}(h_i)\right]$
for every $h_i \in U$, where $c_i^-$ and $c_i^+$ are termed the lower choice value and the upper choice value, respectively. Here $\mu_{\tilde{S}(e_j)}^{-}(h_i)$ and $\mu_{\tilde{S}(e_j)}^{+}(h_i)$ are the lower and upper degrees of membership for object $h_i$ and parameter $e_j$, respectively.
Step 2: Figure out the overall choice value $C_i^{\mathrm{overall}}$ of $h_i$ by the following equation:
$C_i^{\mathrm{overall}} = c_i^- + c_i^+$
for every $h_i \in U$.
Step 3: Find $C_k^{\mathrm{overall}} = \max_i \{C_i^{\mathrm{overall}}\}$. That is, find the optimal object, which has the maximum overall choice value.
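CAODM's ranking step can be sketched similarly (illustrative code only, not taken from [36]). Note that $r_i = n \cdot C_i^{\mathrm{overall}} - \sum_{j} C_j^{\mathrm{overall}}$, i.e., the SBDM score is a strictly increasing function of the overall choice value, which is why the two algorithms always rank the objects identically.

```python
# Minimal sketch of CAODM's ranking step (illustrative, not code from [36]).

def caodm_overall(ivfss):
    """Overall choice value C_i = c_i^- + c_i^+ for each object."""
    return {
        h: sum(lo for lo, _ in params.values()) + sum(up for _, up in params.values())
        for h, params in ivfss.items()
    }

movies = {
    "h1": {"e1": (0.2, 0.4), "e2": (0.3, 0.7), "e3": (0.4, 0.6), "e4": (0.5, 0.9)},
    "h2": {"e1": (0.3, 0.6), "e2": (0.4, 1.0), "e3": (0.5, 0.9), "e4": (0.1, 0.3)},
    "h3": {"e1": (0.7, 0.8), "e2": (0.5, 0.7), "e3": (0.3, 0.5), "e4": (0.6, 0.9)},
    "h4": {"e1": (0.5, 0.8), "e2": (0.0, 0.4), "e3": (0.3, 0.8), "e4": (0.5, 0.6)},
}

overall = caodm_overall(movies)
best = max(overall, key=overall.get)   # the optimal object
print(overall, "best:", best)
```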
Example 2.
According to the dataset of Example 1, the movie critic wants to assess the movies with the tabular form given by Table 1 and find the best one. According to SBDM, we compute the choice value $c_i$ and the score $r_i$ for all the objects. The corresponding results are shown in Table 1, from which we see that h3 is the most desirable movie, since it has the maximum score r3 = maxhi∈U{ri} = 3.0. If we sort the movies according to the scores in descending order, we obtain h3 > h2 > h1 > h4. On this dataset, the algorithm of SBDM works well. Similarly, CAODM gives the same outcome, h3 > h2 > h1 > h4.
However, in some extreme situations the two algorithms become unreasonable and are likely to ignore some good candidates. Let us look at the following Example 3.
Example 3.
A family is planning to buy an apartment. There are five alternative apartments from five different property developers, and the family hesitates about which to buy. The alternatives are evaluated from four aspects: "reasonable price", "excellent geographical location", "perfect facilities" and "cozy environment". We choose the model of IVFSS to describe the customers' feelings about the five candidates from these four aspects. Hence, suppose that the universe U represents the set of the five alternative apartments, U = {h1, h2, h3, h4, h5}, and A represents the set of four parameters, A = {e1, e2, e3, e4} = {reasonable price, excellent geographical location, perfect facilities, cozy environment}. The IVFSS (F, A) on U is then described by Table 2.
The family wants to assess these apartments with the tabular form given by Table 2 and find the best one. According to SBDM, we compute the choice values and scores for all the objects. The corresponding results are shown in Table 2, from which we see that h3 appears to be the best choice, since it has the maximum score 2.6. If we sort the apartments according to the scores in descending order, we obtain h3 > h4 > h1 > h2 > h5. CAODM considers newly added objects and reduces the computational complexity; however, with regard to the outcome of decision making, the two methods are equivalent. Hence, CAODM also yields h3 > h4 > h1 > h2 > h5. However, when we look through this dataset, it seems that h3 is not the best choice. h3 has the highest level of excellent geographical location, but compared with h4 it performs worse on the other three parameters, "reasonable price", "perfect facilities" and "cozy environment". That is, h4 has a reasonable price, perfect facilities and a cozy environment; only its geographical location is poor. Compared with h3, object h4 outperforms it in three aspects. h3 has the maximum score because of the extreme value or outlier [1.0, 1.0] with regard to e2, while h4 has a lower score due to the extreme value or outlier [0.0, 0.1] with regard to e2. It is clear that h4, rather than h3, is likely to be the best choice for this family. In this situation, the two algorithms SBDM and CAODM are likely to ignore excellent candidates. To solve this problem, we propose a new decision making method based on the model of IVFSS.

4. A New Approach to Decision Making Based on Interval-Valued Fuzzy Soft Set

In this section, we first give some related definitions, such as the average degree of membership and the contrast table, and then describe a novel approach to decision making based on IVFSS.
Definition 3.
For an interval-valued fuzzy soft set $(\tilde{S}, E)$ with $U = \{h_1, h_2, \dots, h_n\}$ and $E = \{e_1, e_2, \dots, e_m\}$, $\mu_{\tilde{S}(e_j)}(h_i) = [\mu_{\tilde{S}(e_j)}^{-}(h_i), \mu_{\tilde{S}(e_j)}^{+}(h_i)]$ is the degree of membership of an element $h_i$ to $\tilde{S}(e_j)$. We define $\bar{\mu}_{\tilde{S}(e_j)}(h_i)$ as the average degree of membership for every entry, computed by the formula
$\bar{\mu}_{\tilde{S}(e_j)}(h_i) = \dfrac{\mu_{\tilde{S}(e_j)}^{-}(h_i) + \mu_{\tilde{S}(e_j)}^{+}(h_i)}{2}$
We create a table in which both the rows and the columns are labeled by the objects of the interval-valued fuzzy soft set $(\tilde{S}, E)$, and the entry $M_{ij}$ is the number of parameters for which the average degree of membership of object $h_i$ is greater than or equal to that of object $h_j$. We call this table the contrast table of the IVFSS $(\tilde{S}, E)$.
It is clear that $0 \le M_{ij} \le m$, where m is the number of parameters in the IVFSS $(\tilde{S}, E)$. Each diagonal entry of the contrast table equals m, that is, $M_{ii} = m$. The entry $M_{ij}$ indicates that object $h_i$ dominates object $h_j$ in $M_{ij}$ parameters based on the average degree of membership.
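The average degrees and the contrast table can be computed mechanically; the following minimal Python sketch (illustrative only, with object and parameter order fixed by the caller) shows one way to do so.

```python
# Illustrative sketch of Definition 3: average degrees and the contrast table.

def average_degrees(ivfss, objects, params):
    """avg[i][j] = (lower + upper) / 2 for object objects[i] and parameter params[j]."""
    return [[sum(ivfss[h][e]) / 2 for e in params] for h in objects]

def contrast_table(avg):
    """M[i][j] = number of parameters where object i's average degree is
    greater than or equal to object j's."""
    n = len(avg)
    return [[sum(a >= b for a, b in zip(avg[i], avg[j])) for j in range(n)]
            for i in range(n)]
```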
Definition 4.
For an IVFSS $(\tilde{S}, E)$ with $U = \{h_1, h_2, \dots, h_n\}$ and $E = \{e_1, e_2, \dots, e_m\}$, we have a contrast table whose entry $M_{ij}$ is the number of parameters for which the average degree of membership of object $h_i$ is greater than or equal to that of object $h_j$. We define $R_i$ as the row dominant sum of an object $h_i$ and $T_j$ as the column dominant sum of an object $h_j$, calculated by Formulas (2) and (3) as
$R_i = \sum_{j=1}^{n} M_{ij}$  (2)
$T_j = \sum_{i=1}^{n} M_{ij}$  (3)
From Definition 4 above, we see that the row dominant sum of an object gives the total number of parameters in which this object dominates all the other objects of U. Likewise, the integer Tj indicates the total number of parameters in which hj is dominated by all the other objects of U.
Definition 5.
For an IVFSS $(\tilde{S}, E)$ with $U = \{h_1, h_2, \dots, h_n\}$ and $E = \{e_1, e_2, \dots, e_m\}$, and for the corresponding contrast table, let $R_i$ and $T_i$ be the row dominant sum and the column dominant sum of an object $h_i$, respectively.
We define $S_i$ as the overall dominant score, which is obtained by the formula
$S_i = R_i - T_i$
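Given a contrast table, the row dominant sums, column dominant sums and overall dominant scores follow directly; a short illustrative sketch (our own code, not part of the formal definitions) is given below.

```python
# Illustrative sketch of Definitions 4 and 5: dominant sums and overall scores
# computed from a contrast table M (a list of lists of integers).

def dominant_scores(M):
    n = len(M)
    R = [sum(M[i][j] for j in range(n)) for i in range(n)]  # row dominant sum R_i
    T = [sum(M[i][j] for i in range(n)) for j in range(n)]  # column dominant sum T_j
    S = [R[i] - T[i] for i in range(n)]                     # overall dominant score S_i
    return R, T, S
```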
Based on the definitions given above, we describe our proposed algorithm as Algorithm 3:
Algorithm 3: Our proposed algorithm
Step 1: Input an interval-valued fuzzy soft set $(\tilde{S}, E)$, $U = \{h_1, h_2, h_3, \dots, h_n\}$, $E = \{e_1, e_2, \dots, e_m\}$.
Step 2: Calculate the average degree of membership for every entry by the formula
$\bar{\mu}_{\tilde{S}(e_j)}(h_i) = \dfrac{\mu_{\tilde{S}(e_j)}^{-}(h_i) + \mu_{\tilde{S}(e_j)}^{+}(h_i)}{2}$
Step 3: Create the contrast table for this IVFSS.
Step 4: Compute the row dominant sum and column dominant sum for every object, respectively.
Step 5: Calculate the overall dominant score for every object.
Step 6: Get the maximum of the overall dominant score for all of objects. Then the corresponding object is the optimal choice.
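Putting the steps together, Algorithm 3 can be sketched end-to-end as follows (illustrative code only; for brevity it is run on the Example 1 data of Table 1, whereas the discussion below applies the algorithm to Example 3).

```python
# End-to-end sketch of Algorithm 3 (our own illustration of the steps above).

def rank_by_dominance(ivfss):
    objects = list(ivfss)
    params = list(next(iter(ivfss.values())))
    # Step 2: average degree of membership for every entry.
    avg = [[sum(ivfss[h][e]) / 2 for e in params] for h in objects]
    # Step 3: contrast table M.
    n = len(objects)
    M = [[sum(a >= b for a, b in zip(avg[i], avg[j])) for j in range(n)]
         for i in range(n)]
    # Steps 4-5: row/column dominant sums and overall dominant scores.
    R = [sum(row) for row in M]
    T = [sum(M[i][j] for i in range(n)) for j in range(n)]
    S = {objects[i]: R[i] - T[i] for i in range(n)}
    # Step 6: rank by overall dominant score, best first.
    return sorted(S.items(), key=lambda kv: kv[1], reverse=True)

movies = {
    "h1": {"e1": (0.2, 0.4), "e2": (0.3, 0.7), "e3": (0.4, 0.6), "e4": (0.5, 0.9)},
    "h2": {"e1": (0.3, 0.6), "e2": (0.4, 1.0), "e3": (0.5, 0.9), "e4": (0.1, 0.3)},
    "h3": {"e1": (0.7, 0.8), "e2": (0.5, 0.7), "e3": (0.3, 0.5), "e4": (0.6, 0.9)},
    "h4": {"e1": (0.5, 0.8), "e2": (0.0, 0.4), "e3": (0.3, 0.8), "e4": (0.5, 0.6)},
}

print(rank_by_dominance(movies))  # the first entry is the optimal object
```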
To illustrate our proposed algorithm, let us come back to Example 3. According to the proposed algorithm, we first obtain the average degree of membership for every entry, which is shown in Table 3. Secondly, we construct the contrast table for this interval-valued fuzzy soft set, which is given in Table 4. Here, it is clear that the contrast table has symmetry between the objects. We then calculate the row dominant sum, the column dominant sum and the overall dominant score for every object, which are illustrated in Table 5. Finally, we find that object h4 has the maximum overall dominant score among all objects, namely 10, so it is the best choice for this family. The sequence of the candidates is h4 > h3 > h5 > h1 > h2, which differs from the results obtained by SBDM and CAODM. This is because h3 has the highest level of excellent geographical location but performs worse than h4 on the other three parameters, "reasonable price", "perfect facilities" and "cozy environment". That is, h4 has a reasonable price, perfect facilities and a cozy environment; only its geographical location is poor. Hence, there are extreme values, such as the high value of "excellent geographical location" and the low value of "perfect facilities" for object h3, which lead to the highest score value under SBDM but a lower overall dominant score under our proposed algorithm. In this situation, our proposed algorithm is more reasonable and feasible.

5. Comparison Results on Real-Life Cases

In this section, we compare the proposed algorithm with the two existing algorithms, SBDM [29] and CAODM [36], on two real-life applications.
Case 1: 5-star Sydney Hotel Rating Systems
A traveler planning a splendid trip to Sydney is looking for 5-star accommodation. We browsed the website www.agoda.com (accessed on 1 September 2021) to obtain evaluation data. All guests who checked into a hotel give it scores from the aspects "Cleanliness", "Location", "Service", "Facilities", "Room comfort and quality" and "Value for money". The guests comprise business travelers, couple travelers, solo travelers, families with young children, families with older children and group travelers, and every guest category gives average scores to the hotel. We take the minimum and maximum score values over the six guest categories as the lower and upper degrees of membership, which are normalized and described by the model of IVFSS (F, A). Here we have 21 candidate hotels, U = {h1, h2,…, h21} = {Amora Jamison Hotel, Establishment Hotel, Fraser Suites Sydney, Hilton Sydney, Swissotel Sydney, Zara Tower-Luxury Suites and Apartments, Sofitel Sydney Wentworth Hotel, Radisson Blu Hotel Sydney, Ovolo Woolloomooloo Hotel, QT Sydney, Sheraton Grand Sydney Hyde Park, The Old Clare Hotel, Meriton Suites World Tower, Sydney Central YHA, The Langham Sydney Hotel, Primus Hotel Sydney, Hyatt Regency Sydney, ParkRoyal Darling Harbour Hotel, Shangri-la Hotel, The Darling at The Star, Four Seasons Hotel Sydney}, and the set of six parameters A = {e1, e2, e3, e4, e5, e6} = {"Cleanliness", "Location", "Service", "Facilities", "Room comfort and quality", "Value for money"}. The collected data are presented in Table 6.
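To make the construction of Table 6 concrete, the following sketch shows how one interval entry could be formed; the guest-category scores and the 10-point scale below are hypothetical placeholders, not the actual agoda.com data.

```python
# Illustrative sketch of how one interval entry of Table 6 could be built:
# take the minimum and maximum of the average scores given by the six guest
# categories and normalize them to [0, 1]. The numbers and the 10-point scale
# are hypothetical placeholders.

category_scores = {            # hypothetical "Cleanliness" scores for one hotel
    "business": 9.0, "couple": 9.2, "solo": 8.9,
    "family_young": 9.1, "family_older": 9.0, "group": 8.8,
}

MAX_SCORE = 10.0               # assumed rating scale
lower = min(category_scores.values()) / MAX_SCORE
upper = max(category_scores.values()) / MAX_SCORE
print(f"interval entry: [{lower:.2f}, {upper:.2f}]")   # -> [0.88, 0.92]
```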

5.1. Decision Making by SBDM

According to the algorithm of SBDM, we can follow the related steps to solve this problem.
Step 1: Input an IVFSS (F, A) as shown in Table 6.
Step 2: figure out the choice value for each object as given in Table 6.
Step 3: find the score value of every object as shown in Table 6.
Step 4: find the maximum of the score value and the corresponding object is referred to as the best outcome.
Finally, we discover that object h15 has the maximum score value, 12.94, among all objects. Hence, h15, that is, The Langham Sydney Hotel, is the optimal choice. The sequence of objects is h15 > h20 > h11 > h9 > h8 > h10 = h12 > h13 > h16 > h21 > h5 > h1 > h19 > h7 > h3 > h4 > h6 > h18 > h2 > h17 > h14.

5.2. Decision Making by CAODM

Compared with SBDM, CAODM considers newly added objects and reduces the computational complexity; however, with regard to the outcome of decision making, the two methods are equivalent. According to the algorithm of CAODM, we compute the choice value for each object and then figure out the overall choice value of every object. As a result, we discover that object h15 has the maximum overall choice value among all objects. Hence, h15, that is, The Langham Sydney Hotel, is the optimal choice. The sequence of objects is h15 > h20 > h11 > h9 > h8 > h10 = h12 > h13 > h16 > h21 > h5 > h1 > h19 > h7 > h3 > h4 > h6 > h18 > h2 > h17 > h14.

5.3. Decision Making by Our Proposed Algorithm

According to our proposed algorithm, we can follow the related steps.
Step 1: Input an IVFSS (F, A) as shown in Table 6.
Step 2: Calculate average degree of membership for every entry as shown in Table 7.
Step 3: Create the contrast table for this IVFSS as shown in Table 8.
Step 4: Compute the row dominant sum and column dominant sum for every object, respectively. These data are presented in Table 9.
Step 5: Calculate the overall dominant score for every object as shown in Table 9.
Step 6: Get the maximum of the overall dominant score for all of 21 objects. Then the corresponding object is the optimal choice.
As a result, we find that h15 is the best choice. That is, The Langham Sydney Hotel is the best choice for this traveler. The sequence of the candidate hotels is h15 > h20 > h11 > h10 > h8 = h9 > h12 > h16 > h13 > h5 > h21 > h1 > h19 > h2 > h7 > h3 > h6 > h4 > h18 > h14 > h17.
We find that the sequence obtained by our proposed algorithm differs slightly from those of SBDM and CAODM, although the three methods produce the same top three. Consider the sequence of h10, h8 and h9. By our proposed algorithm, h10 is better than h8 and h9, whereas h8 and h9 are better than h10 by SBDM and CAODM. This is because h10 has comparatively extreme values, such as its performance on "Value for money", which leads to a lower score value than h8 and h9. Our proposed method can thus also detect extreme or unbalanced values during decision making. Overall, the three methods produce similar decision results, which indicates that there are not many extreme values or outliers in this case.
As a result, we summarize the comparison results among the three methods on this case in Table 10.
Case 2: Scenic Spots Weather Condition Evaluation Systems
A person has 7 days off for his annual holiday and wants to spend it at a scenic spot. He visits the website www.weather.com.cn (accessed on 1 September 2021), which displays the weather forecast for sixteen destination scenic spots. The weather forecast data are described from four aspects: "temperature", "relative humidity", "air quality index" and "wind speed". We take the maximum and minimum values of every parameter as the upper and lower limits of the interval. Here we apply the model of IVFSS to describe this Scenic Spots Weather Condition Evaluation Systems. There are sixteen scenic spots in China as candidates. Suppose that the universe U = {Forbidden City, The Bund, Bangchui Island, West Lake, Five Avenue, Ciqikou, Confucius Temple, Yellow Crane Tower, Mount Tai, Jiuzhai Valley, Zhangjiajie, Gulangyu Islet, The Ancient City of Ping Yao, Terra Cotta Warriors, Mogao Grottoes, Erhai Lake} and the parameter set A = {temperature, relative humidity, air quality index, wind speed}.
It is necessary to normalize the original data into an IVFSS. We transform the maximum and minimum values into sub-intervals of [0, 1], normalized as the upper and lower degrees of membership. After normalization, we obtain the IVFSS for the Scenic Spots Weather Condition Evaluation Systems, which is illustrated in Table 11.
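Since the exact normalization formula is not spelled out here, the following sketch shows one plausible min-max scaling under that assumption; the forecast values used in the demo are hypothetical placeholders, not the data of Table 11.

```python
# Hypothetical normalization sketch for Case 2 (one plausible min-max scaling,
# not necessarily the procedure used to build Table 11). Each spot contributes
# a weekly [min, max] forecast per parameter; both ends are rescaled by the
# global range across all spots.

def normalize(raw):
    """raw: {spot: {parameter: (week_min, week_max)}} -> entries scaled into [0, 1]."""
    params = {e for values in raw.values() for e in values}
    bounds = {
        e: (min(v[e][0] for v in raw.values()), max(v[e][1] for v in raw.values()))
        for e in params
    }
    return {
        spot: {
            e: ((lo - bounds[e][0]) / (bounds[e][1] - bounds[e][0]),
                (hi - bounds[e][0]) / (bounds[e][1] - bounds[e][0]))
            for e, (lo, hi) in values.items()
        }
        for spot, values in raw.items()
    }

# Hypothetical weekly temperature ranges (deg C) for three spots:
raw = {"spotA": {"temperature": (18, 26)},
       "spotB": {"temperature": (10, 22)},
       "spotC": {"temperature": (14, 30)}}
print(normalize(raw))   # every entry now lies inside [0, 1]
```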

5.4. Decision Making by SBDM

According to the algorithm of SBDM, we can follow the related steps to solve this problem.
Step 1: Input an IVFSS (F, A) as shown in Table 11.
Step 2: Compute the choice value for each object as shown in Table 11.
Step 3: Obtain the related score value of every object as presented in Table 11.
Step 4: Sort the score values and find the maximum of the score value and the corresponding object is referred to as the best outcome.
Finally, we discover that object h16 has the maximum of the score value among all of objects. Hence, h16, that is, Erhai Lake is the optimal choice. The sequence of Scenic Spots is h16 > h1 > h6 > h11 > h12 > h2 > h5 > h7 > h3 > h8 > h15 > h10 > h4 > h13> h9 > h14 according to the respective weather condition.

5.5. Decision Making by CAODM

According to the algorithm of CAODM, similarly, object h16 has the maximum of the overall choice value among all of objects. Hence, h16, that is, Erhai Lake is the optimal choice. The sequence of Scenic Spots is h16 > h1 > h6 > h11 > h12 > h2 > h5 > h7 > h3 > h8 > h15 > h10 > h4 > h13 > h9 > h14 according to the respective weather condition.

5.6. Decision Making by Our Proposed Algorithm

According to our proposed algorithm, we can follow the related steps.
Step 1: Input an IVFSS (F, A) as shown in Table 11.
Step 2: Calculate average degree of membership for every entry as shown in Table 12.
Step 3: Construct the contrast table for this interval-valued fuzzy soft set as shown in Table 13.
Step 4: Compute the row dominant sum and column dominant sum for every object, which are depicted in Table 14.
Step 5: Calculate the overall dominant score for every object as shown in Table 14.
Step 6: Rank the overall dominant score and get the maximum of them for all of 16 objects. Then the corresponding object is the optimal choice.
As a result, we find that h16 is the best choice. That is, Erhai Lake is the best choice for this traveler. The sequence of the candidate scenic spots is h16 > h6 > h1 > h12 > h11 > h2 > h8 > h7 > h3 = h5 > h15 = h10 > h13 > h4 > h14 > h9.
It is clear that h16, that is, Erhai Lake, has high-level performance on all four weather condition aspects. As a result, Erhai Lake is the best choice by all three methods. However, for object h6 and object h1, which one is better? By the method of SBDM, object h1 is better than object h6, whereas object h6 performs better than object h1 by our proposed algorithm. The reason for this contradiction is the extreme values. Let us come back to Table 11. It is clear that object h6 has much better performance than object h1 on "temperature", "air quality index" and "wind speed", and worse performance only on "relative humidity". It is clear that object h6 is the better choice. However, object h6 has a very low relative humidity, which results in h6 having a lower score value than h1 under SBDM and CAODM. We think that choosing h6 is more reasonable. Our proposed method can thus also detect extreme or unbalanced values during decision making. The three methods produce different decision results, which indicates that there are some extreme values or outliers in this case. Therefore, we come back to the dataset and find that h6 indeed has extreme data, namely [0.06, 0.20] with regard to relative humidity.
Consequently, we present the comparison results among three methods on case 2 shown in Table 15.
From the above two real-life applications, we can draw the conclusion that our proposed algorithm is more reasonable and feasible than the two existing algorithms, SBDM and CAODM, when there are extreme values or outliers in the datasets used for making decisions. Our proposed algorithm makes decisions based on the number of superior parameter values, which is a new perspective for decision making. It can also detect extreme or unbalanced values if it is regarded as a supplement to the existing algorithms SBDM and CAODM. Finally, we summarize the characteristics of the three methods in Table 16.

6. Conclusions

This paper analyzes the score-based decision making approach (SBDM) [29] and CAODM [36] and points out their weakness and irrationality when there are extreme values or outliers in the datasets described by IVFSS for making decisions. To overcome this shortcoming, a novel approach to decision making based on IVFSS by means of the contrast table is proposed. Comparison results among the three methods on two real-life application cases, the 5-star Sydney Hotel Rating Systems and the Scenic Spots Weather Condition Evaluation Systems, verify the feasibility and efficiency of the proposed approach when the uncertain datasets contain extreme values or outliers. Our proposed algorithm makes decisions based on the number of superior parameter values, which is a new perspective for decision making. It can also examine whether there are extreme or unbalanced values in a dataset if it is regarded as a supplement to the existing algorithms SBDM and CAODM. Future work might apply the decision making methods to more practical applications, such as evaluation systems, recommender systems and conflict handling, and give complete solutions.

Author Contributions

Conceptualization, H.Q.; Formal analysis, X.M.; Funding acquisition, X.M.; Investigation, Y.W.; Methodology, H.Q.; Software, J.W.; Writing—original draft, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science Foundation of China, grant number 62162055, 61662067 and the National Science Foundation of Gansu province, grant number 21JR7RA115.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

This work was supported by the National Science Foundation of China No. 62162055, 61662067 and the Science Foundation of Gansu province No.21JR7RA115.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Riaz, M.; Çağman, N.; Wali, N.; Mushtaq, A. Certain properties of soft multi-set topology with applications in multi-criteria decision making. Decis. Mak. Appl. Manag. Eng. 2020, 3, 70–96. [Google Scholar] [CrossRef]
  2. Ali, Z.; Mahmood, T.; Ullah, K.; Khan, Q. Einstein Geometric Aggregation Operators using a Novel Complex Interval-valued Pythagorean Fuzzy Setting with Application in Green Supplier Chain Management. Rep. Mech. Eng. 2021, 2, 105–134. [Google Scholar] [CrossRef]
  3. Sahu, R.; Dash, S.R.; Das, S. Career selection of students using hybridized distance measure based on picture fuzzy set and rough set theory. Decis. Mak. Appl. Manag. Eng. 2021, 4, 104–126. [Google Scholar] [CrossRef]
  4. Molodtsov, D. Soft set theory-First results. Comput. Math. Appl. 1999, 37, 19–31. [Google Scholar] [CrossRef] [Green Version]
  5. Wen, T.; Chang, K.; Lai, H. Integrating the 2-tuple linguistic representation and soft set to solve supplier selection problems with incomplete information. Eng. Appl. Artif. Intell. 2020, 87, 103248. [Google Scholar] [CrossRef]
  6. Han, B.; Li, Y.; Geng, S. 0–1 Linear programming methods for optimal normal and pseudo parameter reductions of soft sets. Appl. Soft Comput. 2017, 54, 467–484. [Google Scholar] [CrossRef]
  7. Kong, Z.; Jia, W.; Zhang, G.; Wang, L. Normal parameter reduction in soft set based on particle swarm optimization algorithm. Appl. Math. Model. 2015, 39, 4808–4820. [Google Scholar] [CrossRef]
  8. Ma, X.; Qin, H. Soft Set Based Parameter Value Reduction for Decision Making Application. IEEE Access 2019, 7, 35499–35511. [Google Scholar] [CrossRef]
  9. Feng, F.; Cho, J.; Pedryczc, W.; Fujita, H.; Herawan, T. Soft set based association rule mining. Knowl.-Based Syst. 2016, 111, 268–282. [Google Scholar] [CrossRef]
  10. Feng, F.; Wang, Q.; Yager, R.R.; Alcantud, J.C.R.; Zhang, L. Maximal association analysis using logical formulas over soft sets. Expert Syst. Appl. 2020, 159, 113557. [Google Scholar] [CrossRef]
  11. Li, M.; Fan, Z.; You, T. Screening alternatives considering different evaluation index sets: A method based on soft set theory. Appl. Soft Comput. 2018, 64, 614–624. [Google Scholar] [CrossRef]
  12. Ezugwu, A.E.; Adewumi, A.O. Soft sets based symbiotic organisms search algorithm for resource discovery in cloud computing environment. Futur. Gener. Comput. Syst. 2017, 76, 33–55. [Google Scholar] [CrossRef]
  13. Kong, Z.; Zhao, J.; Wang, L.; Zhang, J. A new data filling approach based on probability analysis in incomplete soft sets. Expert Syst. Appl. 2021, 184, 115358. [Google Scholar] [CrossRef]
  14. Qin, H.; Ma, X.; Herawan, T.; Zain, J.M. A Novel Data Filling Approach For An Incomplete Soft Set. Int. J. Appl. Math. Comput. Sci. 2012, 22, 817–828. [Google Scholar] [CrossRef] [Green Version]
  15. Maji, P.K.; Biswas, R.; Roy, A.R. Fuzzy soft sets. J. Fuzzy Math. 2001, 9, 589–602. [Google Scholar]
  16. Ma, X.; Qin, H. A Distance-Based Parameter Reduction Algorithm of Fuzzy Soft Sets. IEEE Access 2018, 6, 10530–10539. [Google Scholar] [CrossRef]
  17. Biswajit, B.; Swarup, K.G.; Siddhartha, B.; Jan, P.; Vaclav, S.; Amlan, C. Chest X-ray enhancement to interpret pneumonia malformation based on fuzzy soft set and Dempster–Shafer theory of evidence. Appl. Soft Comput. 2020, 86, 105889. [Google Scholar]
  18. Hu, J.; Pan, L.; Yang, Y.; Chen, H. A group medical diagnosis model based on intuitionistic fuzzy soft sets. Appl. Soft Comput. 2019, 77, 453–466. [Google Scholar] [CrossRef]
  19. Feng, F.; Fujita, H.; Ali, M.I.; Yager, R.R.; Liu, X. Another View on Generalized Intuitionistic Fuzzy Soft Sets and Related Multiattribute Decision Making Methods. IEEE Trans. Fuzzy Syst. 2019, 27, 474–488. [Google Scholar] [CrossRef]
  20. Vijayabalaji, S.; Ramesh, A. Belief interval-valued soft set. Expert Syst. Appl. 2019, 119, 262–271. [Google Scholar] [CrossRef]
  21. Jiang, Y.; Tang, Y.; Chen, Q.; Liu, H.; Tang, J. Interval-valued intuitionistic fuzzy soft sets and their properties. Comput. Math. Appl. 2010, 60, 906–918. [Google Scholar] [CrossRef] [Green Version]
  22. Akram, M.; Adeel, A.; Alcantud, J.C.R. Group decision-making methods based on hesitant N-soft sets. Expert Syst. Appl. 2019, 115, 95–105. [Google Scholar]
  23. Aggarwal, M. Confidence soft sets and applications in supplier selection. Comput. Ind. Eng. 2019, 127, 614–624. [Google Scholar] [CrossRef]
  24. Gong, K.; Wang, P.; Peng, Y. Fault-tolerant enhanced bijective soft set with applications. Appl. Soft Comput. 2017, 54, 431–439. [Google Scholar] [CrossRef] [Green Version]
  25. Zhang, Z.; Zhang, S. A novel approach to multi attribute group decision making based on trapezoidal interval type-2 fuzzy soft sets. Appl. Math. Model. 2013, 37, 4948–4971. [Google Scholar] [CrossRef]
  26. Zhan, J.; Liu, Q.; Herawan, T. A novel soft rough set: Soft rough hemirings and its multicriteria group decision making. Appl. Soft Comput. 2017, 54, 393–402. [Google Scholar] [CrossRef]
  27. Zhan, J.; Ali, M.I.; Mehmood, N. On a novel uncertain soft set model: Z-soft fuzzy rough set model and corresponding decision making methods. Appl. Soft Comput. 2017, 56, 446–457. [Google Scholar] [CrossRef]
  28. Zhan, J.; Zhu, K. A novel soft rough fuzzy set: Z-soft rough fuzzy ideals of hemirings and corresponding decision making. Soft Comput. 2017, 21, 1923–1936. [Google Scholar] [CrossRef]
  29. Yang, X.; Lin, T.Y.; Yang, J.; Li, Y.; Yu, D. Combination of interval-valued fuzzy set and soft set. Comput. Math. Appl. 2009, 58, 521–527. [Google Scholar] [CrossRef] [Green Version]
  30. Qin, H.; Ma, X. Data Analysis Approaches of Interval-Valued Fuzzy Soft Sets Under Incomplete Information. IEEE Access 2019, 7, 3561–3571. [Google Scholar] [CrossRef]
  31. Ma, X.; Qin, H.; Sulaiman, N.; Herawan, T.; Abawajy, J. The parameter reduction of the interval-valued fuzzy soft sets and its related algorithms. IEEE Trans. Fuzzy Syst. 2014, 22, 51–57. [Google Scholar] [CrossRef]
  32. Qin, H.; Ma, X. A Complete Model for Evaluation System Based on Interval-Valued Fuzzy Soft Set. IEEE Access 2018, 6, 35012–35028. [Google Scholar] [CrossRef]
  33. Feng, F.; Li, Y.; Leoreanu-Fotea, V. Application of level soft sets in decision making based on interval-valued fuzzy soft sets. Comput. Math. Appl. 2010, 60, 1756–1767. [Google Scholar] [CrossRef] [Green Version]
  34. Peng, X.; Garg, H. Algorithms for interval-valued fuzzy soft sets in emergency decision making based on WDBA and CODAS with new information measure. Comput. Ind. Eng. 2018, 119, 439–452. [Google Scholar] [CrossRef]
  35. Peng, X.; Yang, Y. Algorithms for interval-valued fuzzy soft sets in stochastic multi-criteria decision making based on regret theory and prospect theory with combined weight. Appl. Soft Comput. 2017, 54, 415–430. [Google Scholar] [CrossRef]
  36. Ma, X.; Fei, Q.; Qin, H.; Li, H.; Chen, W. A new efficient decision making algorithm based on interval-valued fuzzy soft set. Appl. Intell. 2021, 51, 3226–3240. [Google Scholar] [CrossRef]
  37. Maji, P.K.; Roy, A.R.; Biswas, R. An application of soft sets in a decision making problem. Comput. Math. Appl. 2002, 44, 1077–1083. [Google Scholar] [CrossRef] [Green Version]
  38. Alcantud, J.C.R. A novel algorithm for fuzzy soft set based decision making from multiobserver input parameter data set. Inf. Fusion 2016, 29, 142–148. [Google Scholar]
Table 1. IVFSS (F, A) for Examples 1 and 2.
U    e1          e2          e3          e4          c_i         r_i
h1   [0.2, 0.4]  [0.3, 0.7]  [0.4, 0.6]  [0.5, 0.9]  [1.4, 2.6]  −1
h2   [0.3, 0.6]  [0.4, 1.0]  [0.5, 0.9]  [0.1, 0.3]  [1.3, 2.8]  −0.6
h3   [0.7, 0.8]  [0.5, 0.7]  [0.3, 0.5]  [0.6, 0.9]  [2.1, 2.9]  3.0
h4   [0.5, 0.8]  [0.0, 0.4]  [0.3, 0.8]  [0.5, 0.6]  [1.3, 2.6]  −1.4
Table 2. IVFSS (F, A) for Example 3.
U    e1          e2          e3          e4          c_i         r_i
h1   [0.3, 0.5]  [0.6, 0.7]  [0.2, 0.4]  [0.4, 0.5]  [1.5, 2.1]  −0.4
h2   [0.3, 0.4]  [0.4, 0.5]  [0.6, 0.7]  [0.1, 0.3]  [1.4, 1.9]  −1.9
h3   [0.5, 0.6]  [1.0, 1.0]  [0.2, 0.3]  [0.2, 0.4]  [1.9, 2.3]  2.6
h4   [0.5, 0.7]  [0.0, 0.1]  [0.7, 0.8]  [0.6, 0.7]  [1.8, 2.3]  2.1
h5   [0.3, 0.6]  [0.3, 0.4]  [0.4, 0.7]  [0.2, 0.3]  [1.2, 2.0]  −2.4
Table 3. Average degree of membership for Example 3.
U    e1    e2    e3    e4
h1   0.40  0.65  0.30  0.45
h2   0.35  0.45  0.65  0.20
h3   0.55  1.00  0.25  0.30
h4   0.60  0.05  0.75  0.65
h5   0.45  0.35  0.55  0.25
Table 4. The contrast table for Example 3.
U    h1 h2 h3 h4 h5
h1   5  3  2  1  2
h2   2  5  1  1  2
h3   3  4  5  1  3
h4   4  4  4  5  3
h5   3  3  2  2  5
Table 5. The row dominant sum, column dominant sum and overall dominant score for Example 3.
U    Row Dominant Sum (Ri)   Column Dominant Sum (Ti)   Overall Dominant Score (Si)
h1   13                      17                         −4
h2   11                      19                         −8
h3   16                      14                         2
h4   20                      10                         10
h5   15                      15                         0
Table 6. Interval-valued fuzzy soft set (F, A) for Case 1.
U e 1 e 2 e 3 e 4 e 5 e 6 c i r i
h 1 [0.89, 0.92][0.84, 0.87][0.90, 0.93][0.86, 0.90][0.87, 0.90][0.82, 0.86][5.18, 5.38]−0.92
h 2 [0.80, 0.94][0.87, 1.00][0.91, 0.95][0.87, 0.94][0.50, 0.95][0.67, 0.85][4.62, 5.63]−7.43
h 3 [0.88, 0.90][0.83, 0.89][0.92, 0.94][0.86, 0.89][0.85, 0.89][0.79, 0.84][5.13, 5.35]−2.6
h 4 [0.88, 0.92][0.83, 0.86][0.92, 0.96][0.83, 0.87][0.84, 0.92][0.77, 0.82][5.07, 5.35]−3.86
h 5 [0.90, 0.91][0.85, 0.88][0.94, 0.96][0.86, 0.89][0.83, 0.90][0.82, 0.84][5.20, 5.38]−0.5
h 6 [0.90, 0.92][0.84, 0.87][0.86, 0.87][0.84, 0.90][0.81, 0.94][0.80, 0.86][5.05, 5.36]−4.07
h 7 [0.86, 0.98][0.81, 0.88][0.89, 0.90][0.84, 0.96][0.80, 1.00][0.76, 0.82][4.96, 5.54]−2.18
h 8 [0.92, 0.96][0.87, 0.92][0.92, 0.96][0.90, 0.93][0.85, 0.95][0.83, 0.86][5.29, 5.58]5.59
h 9 [0.88, 0.98][0.87, 0.97][0.88, 0.90][0.86, 0.97][0.85, 1.00][0.85, 0.92][5.19, 5.74]6.85
h 10 [0.91, 0.96][0.86, 0.90][0.94, 0.98][0.89, 0.93][0.88, 0.96][0.78, 0.87][5.26, 5.60]5.38
h 11 [0.91, 0.93][0.87, 0.91][0.93, 0.99][0.89, 0.90][0.90, 0.93][0.83, 0.90][5.43, 5.56]8.11
h 12 [0.93, 1.00][0.80, 0.93][0.90, 1.00][0.88, 1.00][0.80, 1.00][0.78, 0.84][5.09, 5.77]5.38
h 13 [0.87, 0.93][0.84, 0.95][0.93, 0.96][0.87, 0.95][0.85, 0.91][0.84, 0.88][5.20, 5.58]3.7
h 14 [0.78, 0.79][0.75, 0.80][0.92, 0.93][0.78, 0.81][0.76, 0.84][0.79, 0.82][4.78, 4.99]−17.51
h 15 [0.95, 1.00][0.91, 0.96][0.85, 0.92][0.95, 0.99][0.95, 1.00][0.86, 0.88][5.47, 5.75]12.94
h 16 [0.92, 0.94][0.86, 0.90][0.92, 0.97][0.85, 0.94][0.81, 0.90][0.86, 0.90][5.22, 5.55]3.49
h 17 [0.84, 0.88][0.77, 0.81][0.88, 0.92][0.81, 0.85][0.81, 0.83][0.74, 0.80][4.85, 5.09]−13.94
h 18 [0.84, 0.88][0.81, 0.88][0.90, 0.91][0.84, 0.87][0.84, 0.86][0.80, 0.85][5.03, 5.25]-6.8
h 19 [0.89, 0.93][0.85, 0.91][0.89, 0.90][0.85, 0.89][0.89, 0.92][0.81, 0.82][5.18, 5.37]−1.13
h 20 [0.91, 0.96][0.88, 0.96][0.89, 0.95][0.90, 0.96][0.91, 1.00][0.83, 0.89][5.32, 5.72]9.16
h 21 [0.89, 0.91][0.82, 0.87][0.94, 0.96][0.89, 0.90][0.88, 0.92][0.80, 0.84][5.22, 5.40]0.34
Table 7. Average degree of membership of IVFSS (F, A) for Case 1.
U     e1    e2    e3    e4    e5    e6
h1    0.91  0.86  0.92  0.88  0.89  0.84
h2    0.87  0.94  0.93  0.91  0.73  0.76
h3    0.89  0.86  0.93  0.88  0.87  0.82
h4    0.90  0.85  0.94  0.85  0.88  0.80
h5    0.91  0.87  0.95  0.88  0.87  0.83
h6    0.91  0.86  0.87  0.87  0.88  0.83
h7    0.92  0.85  0.90  0.90  0.90  0.79
h8    0.94  0.90  0.94  0.92  0.90  0.85
h9    0.93  0.92  0.89  0.92  0.93  0.89
h10   0.94  0.88  0.96  0.91  0.92  0.83
h11   0.92  0.89  0.96  0.90  0.92  0.87
h12   0.97  0.87  0.95  0.94  0.90  0.81
h13   0.90  0.90  0.95  0.91  0.88  0.86
h14   0.79  0.78  0.93  0.80  0.80  0.81
h15   0.98  0.94  0.89  0.97  0.98  0.87
h16   0.93  0.88  0.95  0.90  0.86  0.88
h17   0.86  0.79  0.90  0.83  0.82  0.77
h18   0.86  0.85  0.91  0.86  0.85  0.83
h19   0.91  0.88  0.90  0.87  0.91  0.82
h20   0.94  0.92  0.92  0.93  0.96  0.86
h21   0.90  0.85  0.95  0.90  0.90  0.82
Table 8. The contrast table for Case 1.
Uh1h2h3h4h5h6h7h8h9h10h11h12h13h14h15h16h17h18h19h20h21
h1635546301101251166413
h2363223312221242244322
h3346323301001061165312
h4143612311000251165112
h5446566311103261266414
h6234536200101150155303
h7333434611021241264304
h8656656663433461466525
h9545556546453552555525
h10556666632644461566626
h11646666663464462466626
h12555655643326461465335
h13456655431323661466425
h14131101201001061021110
h15555556555555556455555
h16545555523233362666525
h17020001101000041061100
h18021312301101041066202
h19334545411103251264604
h20645556655543551466665
h21345633521013361365316
Table 9. The row dominant sum, column dominant sum and overall dominant score for Case 1.
U     Row Dominant Sum   Column Dominant Sum   Overall Dominant Score
h1    64                 76                    −12
h2    55                 82                    −27
h3    51                 90                    −39
h4    46                 95                    −49
h5    72                 76                    −4
h6    50                 93                    −43
h7    57                 87                    −30
h8    98                 45                    53
h9    96                 43                    53
h10   101                43                    58
h11   105                39                    66
h12   92                 50                    42
h13   87                 56                    31
h14   23                 111                   −88
h15   106                29                    77
h16   88                 55                    33
h17   18                 117                   −99
h18   36                 103                   −67
h19   64                 79                    −15
h20   103                34                    69
h21   70                 79                    −9
Table 10. Comparison results about three methods on Case 1.
Algorithm | Considering the Extreme Data | Whether Examining Extreme Data | Decision Making Results
Our method | Yes | Yes (h10 has extreme value) | h15 > h20 > h11 > h10 > h8 = h9 > h12 > h16 > h13 > h5 > h21 > h1 > h19 > h2 > h7 > h3 > h6 > h4 > h18 > h14 > h17
SBDM | No | No | h15 > h20 > h11 > h9 > h8 > h10 = h12 > h13 > h16 > h21 > h5 > h1 > h19 > h7 > h3 > h4 > h6 > h18 > h2 > h17 > h14
CAODM | No | No | h15 > h20 > h11 > h9 > h8 > h10 = h12 > h13 > h16 > h21 > h5 > h1 > h19 > h7 > h3 > h4 > h6 > h18 > h2 > h17 > h14
Table 11. IVFSS (F, A) for Case 2.
U     e1            e2            e3            e4            c_i           r_i
h1    [0.13, 0.52]  [0.60, 0.98]  [0.27, 0.95]  [0.50, 1.00]  [1.50, 3.45]  13.01
h2    [0.35, 0.71]  [0.06, 0.49]  [0.62, 0.89]  [0.25, 1.00]  [1.28, 3.09]  3.73
h3    [0.23, 0.52]  [0.36, 0.64]  [0.25, 0.81]  [0.50, 0.75]  [1.34, 2.72]  −1.23
h4    [0.39, 0.74]  [0.06, 0.37]  [0.46, 0.74]  [0.25, 0.75]  [1.16, 2.60]  −6.03
h5    [0.19, 0.55]  [0.50, 1.00]  [0.04, 0.76]  [0.50, 0.75]  [1.23, 3.06]  2.45
h6    [0.45, 0.81]  [0.06, 0.20]  [0.73, 0.85]  [0.50, 1.00]  [1.74, 2.86]  7.4
h7    [0.32, 0.71]  [0.00, 0.51]  [0.57, 0.73]  [0.50, 0.75]  [1.39, 2.70]  −0.75
h8    [0.39, 0.77]  [0.01, 0.10]  [0.57, 0.96]  [0.50, 0.75]  [1.47, 2.58]  −1.39
h9    [0.03, 0.42]  [0.33, 0.89]  [0.06, 0.76]  [0.00, 0.75]  [0.42, 2.82]  −14.35
h10   [0.06, 0.42]  [0.20, 0.48]  [0.37, 0.67]  [0.75, 1.00]  [1.38, 2.57]  −2.99
h11   [0.42, 0.81]  [0.01, 0.16]  [0.91, 1.00]  [0.50, 0.75]  [1.84, 2.72]  6.77
h12   [0.65, 1.00]  [0.32, 0.44]  [0.66, 0.91]  [0.00, 0.50]  [1.63, 2.85]  5.49
h13   [0.13, 0.58]  [0.19, 0.90]  [0.04, 0.55]  [0.25, 1.00]  [0.61, 3.03]  −7.95
h14   [0.19, 0.68]  [0.15, 0.34]  [0.00, 0.36]  [0.50, 0.75]  [0.84, 2.13]  −18.67
h15   [0.00, 0.48]  [0.32, 0.91]  [0.29, 0.78]  [0.50, 0.75]  [1.11, 2.92]  −1.71
h16   [0.42, 0.87]  [0.20, 0.78]  [0.46, 0.92]  [0.50, 1.00]  [1.58, 3.57]  16.21
Table 12. Average degree of membership of IVFSS (F, A) for Case 2.
U     e1    e2    e3    e4
h1    0.33  0.79  0.61  0.75
h2    0.53  0.28  0.76  0.63
h3    0.38  0.50  0.53  0.63
h4    0.57  0.22  0.60  0.50
h5    0.37  0.75  0.40  0.63
h6    0.63  0.13  0.79  0.75
h7    0.52  0.26  0.65  0.63
h8    0.58  0.06  0.77  0.63
h9    0.23  0.61  0.41  0.38
h10   0.24  0.34  0.52  0.88
h11   0.62  0.09  0.96  0.63
h12   0.83  0.38  0.79  0.25
h13   0.36  0.55  0.30  0.63
h14   0.44  0.25  0.18  0.63
h15   0.24  0.62  0.54  0.63
h16   0.65  0.49  0.69  0.75
Table 13. The contrast table for Case 2.
U     h1 h2 h3 h4 h5 h6 h7 h8 h9 h10 h11 h12 h13 h14 h15 h16
h1    4  2  3  3  3  2  2  2  4  3   2   2   3   3   4   2
h2    2  4  3  3  3  1  4  2  3  2   2   1   3   4   3   1
h3    1  2  4  2  3  1  2  2  3  3   2   2   3   3   2   1
h4    1  1  2  4  2  1  1  1  3  2   1   1   2   2   2   0
h5    1  2  2  2  4  1  2  2  3  2   2   2   4   3   3   1
h6    3  3  3  3  3  4  3  4  3  2   3   2   3   3   3   2
h7    2  1  3  3  3  1  4  2  3  2   2   1   3   4   3   0
h8    2  3  3  3  3  0  3  4  3  2   1   1   3   3   3   1
h9    0  1  1  1  1  1  1  1  4  1   1   2   2   2   0   1
h10   1  2  1  2  2  2  2  2  3  4   2   1   2   3   2   1
h11   2  3  3  3  3  1  2  4  3  2   4   2   3   3   3   1
h12   2  3  2  3  2  3  3  3  2  3   2   4   2   3   2   2
h13   1  2  2  2  1  1  2  2  2  2   2   2   4   3   2   1
h14   1  1  2  2  2  1  1  2  2  1   2   1   2   4   2   0
h15   0  2  3  2  2  1  2  2  4  3   2   2   3   3   4   1
h16   3  3  3  4  3  3  4  3  3  3   3   2   3   4   3   4
Table 14. The row dominant sum, column dominant sum and overall dominant score for Case 2.
U     Row Dominant Sum   Column Dominant Sum   Overall Dominant Score
h1    44                 26                    18
h2    41                 35                    6
h3    36                 40                    −4
h4    26                 42                    −16
h5    36                 40                    −4
h6    47                 24                    23
h7    37                 39                    −2
h8    38                 38                    0
h9    20                 48                    −28
h10   32                 37                    −5
h11   43                 33                    10
h12   41                 28                    13
h13   31                 45                    −14
h14   26                 50                    −24
h15   36                 41                    −5
h16   51                 19                    32
Table 15. Comparison results about three methods on Case 2.
Algorithm | Considering the Extreme Data | Whether Examining Extreme Data | Decision Making Results
Our method | Yes | Yes (h6 has extreme value) | h16 > h6 > h1 > h12 > h11 > h2 > h8 > h7 > h3 = h5 > h15 = h10 > h13 > h4 > h14 > h9
SBDM | No | No | h16 > h1 > h6 > h11 > h12 > h2 > h5 > h7 > h3 > h8 > h15 > h10 > h4 > h13 > h9 > h14
CAODM | No | No | h16 > h1 > h6 > h11 > h12 > h2 > h5 > h7 > h3 > h8 > h15 > h10 > h4 > h13 > h9 > h14
Table 16. Comparison results about three methods.
Algorithm | Considering the Extreme Data | Whether Examining Extreme Data | Considering the Added Objects
Our method | Yes | Yes | No
SBDM | No | No | No
CAODM | No | No | Yes
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

