Article

An Iterative Approach for the Solution of the Constrained OWA Aggregation Problem with Two Comonotone Constraints

1	Department of Mathematics and Informatics, University of Oradea, 410087 Oradea, Romania
2	Department of Informatics, Széchenyi István University, 9026 Győr, Hungary
*	Author to whom correspondence should be addressed.
Submission received: 31 August 2022 / Revised: 16 September 2022 / Accepted: 18 September 2022 / Published: 21 September 2022

Abstract

In this paper, first, we extend the analytical expression of the optimal solution of the constrained OWA aggregation problem with two comonotone constraints by also including the case when the OWA weights are arbitrary non-negative numbers. Then, we propose an iterative algorithm that determines precisely whether a constraint in an auxiliary problem is binding or strongly redundant. Actually, the binding constraint (or two binding constraints, as this case may also occur) is essential in expressing the solution of the initial constrained OWA aggregation problem.

1. Introduction

Since the introduction of ordered weighted averaging operators (OWA operators) by Yager in [1], this topic has attracted huge interest in both theoretical and practical directions. For detailed accounts of the state of the art, we recommend, for example, the works [2,3,4,5]. In this paper, our goal is to continue the investigation of the so-called constrained OWA aggregation problem. Here, the goal is to optimize the OWA operator under linear constraints. Yager started this research in [6] by proposing an algorithm for the maximization problem that can solve the problem in some special cases. Then, in [7], the authors solved the problem in the special case of a single constraint whose coefficients are all equal. In paper [8], the result is generalized; this time, the coefficients in the single constraint are arbitrary. Another approach for this case can also be found in the recent paper [9]. The minimization problem in the case of a single constraint is solved in [10]. Recently, in the paper [3], the authors found a way to solve the maximization and minimization problems in the case when we have two comonotone constraints. In this contribution, we continue the work started in [3]. We will discuss the maximization problem, since the results for the minimization problem can be easily deduced using the patterns from the papers [3,10]. First, we find a simple way to generalize the main results in [3] to the case when the OWA weights are arbitrary non-negative numbers. In [3], we assumed these weights to be strictly positive to avoid division by zero in some cases. However, these cases can be eliminated, as they give rise only to redundant constraints. In this way, the results apply to OWA operators having some weights equal to zero, such as, for example, the Olympic weights (see, e.g., [2]). We reiterate, as in our other papers, that one effective approach for solving such problems is to use the duals of some linear programs derived from the initial constrained OWA aggregation problem. Other optimization problems also make use of the duals of linear programs (see [11,12]). In the papers mentioned earlier, the idea is to optimize the OWA operator under some linear constraints. Another problem of great interest among researchers is to optimize the OWA weights under some additional constraints (see, e.g., [2,5,13,14]). Finally, let us also mention that the study of OWA-type operators and their generalizations is a dynamic process, and numerous interesting directions have opened in recent years (see, e.g., [15,16,17,18,19]).
In Section 2, we present the constrained OWA aggregation problem with comonotone constraints and, in the special case of two comonotone constraints, we extend the main results proved in [3] to the case when the OWA weights are arbitrary non-negative numbers. In Section 3, we present the iterative algorithm for an auxiliary problem associated with the initial problem; at every step, it finds a constraint that is either binding or strongly redundant. In Section 4, we test this algorithm on concrete examples, and we also discuss its efficiency. The paper ends with conclusions that sum up the main contributions as an important step toward the general setting of an arbitrary number of constraints.

2. Constrained OWA Aggregation with Comonotone Constraints

In this section, we briefly recall the basics of the constrained OWA aggregation problem. These details can be found in numerous papers, and we use arguments similar to those in [3].
Suppose we have non-negative weights $w_1,\dots,w_n$ such that $w_1+\dots+w_n=1$, and define the mapping $F:\mathbb{R}^n\to\mathbb{R}$,
$$F(x_1,\dots,x_n)=\sum_{i=1}^{n} w_i y_i,$$
where $y_i$ is the $i$-th largest element of the sample $x_1,\dots,x_n$. Then, consider a matrix $A$ of type $(m,n)$ with real entries and a vector $b\in\mathbb{R}^m$. A constrained maximum OWA aggregation problem corresponding to the above data is the problem (see [6])
$$\max F(x_1,\dots,x_n)\quad\text{subject to}\quad Ax\le b,\ x\ge 0.\tag{1}$$
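For readers who prefer code, here is a minimal sketch of how $F$ acts (the function name `owa` and the sample values are ours, for illustration only): it sorts the sample in decreasing order and forms the weighted sum of the order statistics.

```python
from typing import Sequence

def owa(weights: Sequence[float], x: Sequence[float]) -> float:
    """Evaluate F(x_1, ..., x_n) = sum_i w_i * y_i, where y_i is the
    i-th largest element of the sample x."""
    y = sorted(x, reverse=True)  # order statistics, largest first
    return sum(w * yi for w, yi in zip(weights, y))

# Example with the Olympic weights for n = 4 (discard both extremes):
print(owa([0, 0.5, 0.5, 0], [7, 1, 4, 3]))  # averages 4 and 3 -> 3.5
```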
Let us now recall two particular problems where the coefficients in the constraints can be rearranged to satisfy certain monotonicity properties. The maximization problem is
$$\max F(x_1,\dots,x_n),\quad \alpha_{i1}x_1+\dots+\alpha_{in}x_n\le 1\ \text{ for all } i=1,\dots,m,\quad x\ge 0,\ \alpha_{ij}>0,\tag{2}$$
and there exists a permutation $\sigma\in S_n$ such that
$$\alpha_{i\sigma_1}\le \alpha_{i\sigma_2}\le\dots\le \alpha_{i\sigma_n},\quad i=\overline{1,m}.\tag{3}$$
Here, $S_n$ denotes the set of all permutations of $\{1,\dots,n\}$, and for some $\sigma\in S_n$, we use the notation $\sigma_k$ for the value $\sigma(k)$, for any $k\in\{1,\dots,n\}$. From now on, we will say that the constraints in problem (2) are comonotone whenever condition (3) is satisfied. The minimization problem is
$$\min F(x_1,\dots,x_n),\quad \alpha_{i1}x_1+\dots+\alpha_{in}x_n\ge 1\ \text{ for all } i=1,\dots,m,\quad x\ge 0,\ \alpha_{ij}>0,\tag{4}$$
and, again, there exists $\sigma\in S_n$ such that
$$\alpha_{i\sigma_1}\le \alpha_{i\sigma_2}\le\dots\le \alpha_{i\sigma_n},\quad i=\overline{1,m}.\tag{5}$$
Obviously, in the minimization problem above, the constraints are comonotone as well.
Considering the general problem (1), Yager used a method based on mixed integer linear programming to approach the optimal solution. The method is quite complex, since it requires introducing auxiliary variables, which sometimes causes difficulties in the calculations. When the single constraint is particularized to $x_1+\dots+x_n=1$, the problem was solved completely in [7] by providing an analytical solution as a function of the weights. Furthermore, considering arbitrary coefficients in the single constraint, in paper [8], the analytical solution was obtained as a function depending on the weights and on the coefficients of the constraint. This problem can be formulated as
$$\max F(x_1,\dots,x_n)\quad\text{subject to}\quad \alpha_1x_1+\dots+\alpha_nx_n\le 1,\ x\ge 0.\tag{6}$$
The following theorem recalls the main result from [8].
Theorem 1
(see [8], Theorem 3). Consider problem (6). Then:
(i)
If there exists $i_0\in\{1,\dots,n\}$ such that $\alpha_{i_0}\le 0$, then F is unbounded on the feasible set, and its supremum over the feasible set is ∞;
(ii)
If $\alpha_i>0$, $i\in\{1,\dots,n\}$, then taking (any) $\sigma\in S_n$ with the property that $\alpha_{\sigma_1}\le\alpha_{\sigma_2}\le\dots\le\alpha_{\sigma_n}$, and $k^*\in\{1,\dots,n\}$ such that
$$\frac{w_1+\dots+w_{k^*}}{\alpha_{\sigma_1}+\dots+\alpha_{\sigma_{k^*}}}=\max\left\{\frac{w_1+\dots+w_k}{\alpha_{\sigma_1}+\dots+\alpha_{\sigma_k}}:k\in\{1,\dots,n\}\right\},$$
then $(x_1^*,\dots,x_n^*)$ is an optimal solution of problem (6), where
$$x_{\sigma_1}^*=\dots=x_{\sigma_{k^*}}^*=\frac{1}{\alpha_{\sigma_1}+\dots+\alpha_{\sigma_{k^*}}},\quad x_{\sigma_{k^*+1}}^*=\dots=x_{\sigma_n}^*=0.$$
In particular, if $0<\alpha_1\le\alpha_2\le\dots\le\alpha_n$, and $k^*\in\{1,\dots,n\}$ is such that
$$\frac{w_1+\dots+w_{k^*}}{\alpha_1+\dots+\alpha_{k^*}}=\max\left\{\frac{w_1+\dots+w_k}{\alpha_1+\dots+\alpha_k}:k\in\{1,\dots,n\}\right\},$$
then $(x_1^*,\dots,x_n^*)$ is a solution of (6), where
$$x_1^*=\dots=x_{k^*}^*=\frac{1}{\alpha_1+\dots+\alpha_{k^*}},\quad x_{k^*+1}^*=\dots=x_n^*=0.$$
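As a small illustration, the analytical solution of Theorem 1(ii) can be computed directly; the sketch below is our own code (names are ours), assuming all $\alpha_i>0$ and exact rational inputs. It scans the ratios over $k$ and builds the optimal point.

```python
from fractions import Fraction

def max_single_constraint(w, alpha):
    """Theorem 1(ii): solve max F(x) s.t. alpha . x <= 1, x >= 0,
    assuming alpha[i] > 0 for all i. Returns (x_star, optimal_value)."""
    n = len(w)
    sigma = sorted(range(n), key=lambda i: alpha[i])  # alpha non-decreasing
    w_sum = a_sum = Fraction(0)
    best_ratio = None
    for k in range(1, n + 1):
        w_sum += Fraction(w[sigma[k - 1]])
        a_sum += Fraction(alpha[sigma[k - 1]])
        if best_ratio is None or w_sum / a_sum > best_ratio:
            best_ratio, best_k, best_a_sum = w_sum / a_sum, k, a_sum
    x = [Fraction(0)] * n
    for j in range(best_k):  # first k* coordinates, in sigma order
        x[sigma[j]] = 1 / best_a_sum
    return x, best_ratio
```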
An analogous result for the minimization problem can be found in [10] (see Theorem 2 there).
Let us now discuss the case when we have two comonotone constraints. We will discuss the maximization problem here by briefly recalling the reasoning used in [3].
Consider the problem
$$\max F(x_1,\dots,x_n),\quad \alpha_1x_1+\dots+\alpha_nx_n\le 1,\quad \beta_1x_1+\dots+\beta_nx_n\le 1,\quad x\ge 0,\tag{7}$$
in the case when $0<\alpha_1\le\alpha_2\le\dots\le\alpha_n$ and $0<\beta_1\le\beta_2\le\dots\le\beta_n$. In what follows, we first consider, as in [3], the special case when the weights are strictly positive. To obtain an analytical representation of the optimal solution, several linear programs are associated with this problem. By Theorem 1 in [3], any solution of the problem
$$\max\, w_1x_1+\dots+w_nx_n,\quad \alpha_1x_1+\dots+\alpha_nx_n\le 1,\quad \beta_1x_1+\dots+\beta_nx_n\le 1,\quad x_1\ge\dots\ge x_n\ge 0,\tag{8}$$
is a solution of problem (7).
is a solution of problem (7). Note that both problems have nonempty solution sets.
To solve (8), we need its dual, which is
$$\begin{aligned}
&\min\, t_1+t_2,\\
&\alpha_1t_1+\beta_1t_2-t_3\ge w_1,\\
&\alpha_2t_1+\beta_2t_2+t_3-t_4\ge w_2,\\
&\alpha_3t_1+\beta_3t_2+t_4-t_5\ge w_3,\\
&\qquad\vdots\\
&\alpha_{n-1}t_1+\beta_{n-1}t_2+t_n-t_{n+1}\ge w_{n-1},\\
&\alpha_nt_1+\beta_nt_2+t_{n+1}\ge w_n,\\
&t_1\ge 0,\ t_2\ge 0,\ \dots,\ t_{n+1}\ge 0.
\end{aligned}\tag{9}$$
Furthermore, we can simplify this problem by introducing the problem
$$\begin{aligned}
&\min\, t_1+t_2,\\
&\alpha_1t_1+\beta_1t_2\ge w_1,\\
&(\alpha_1+\alpha_2)t_1+(\beta_1+\beta_2)t_2\ge w_1+w_2,\\
&\qquad\vdots\\
&\left(\sum_{i=1}^{k}\alpha_i\right)t_1+\left(\sum_{i=1}^{k}\beta_i\right)t_2\ge \sum_{i=1}^{k}w_i,\\
&\qquad\vdots\\
&\left(\sum_{i=1}^{n}\alpha_i\right)t_1+\left(\sum_{i=1}^{n}\beta_i\right)t_2\ge \sum_{i=1}^{n}w_i,\\
&t_1\ge 0,\ t_2\ge 0.
\end{aligned}\tag{10}$$
Here, we make the first improvement with respect to the reasoning used in [3]. Namely, if $w_k=0$ for some $k\in\{1,\dots,n\}$, then constraint number k is redundant in (10); indeed, its right-hand side coincides with that of constraint $k-1$, while its left-hand side is larger for $t_1,t_2\ge 0$ (and for $k=1$ the constraint is trivial). Therefore, problem (10) is equivalent to the problem
$$\min\, t_1+t_2,\quad \left(\sum_{i=1}^{k}\alpha_i\right)t_1+\left(\sum_{i=1}^{k}\beta_i\right)t_2\ge \sum_{i=1}^{k}w_i,\ k\in I_n,\quad t_1\ge 0,\ t_2\ge 0,\tag{11}$$
where $I_n=\{k\in\{1,\dots,n\}:w_k>0\}$. This improvement allows us to investigate problems where some of the weights are equal to 0, such as, for example, the Olympic weights (see, e.g., [2]). As we mentioned in [3], if $(t_1^*,t_2^*,\dots,t_{n+1}^*)$ is a solution of problem (9), then $(t_1^*,t_2^*)$ is a feasible solution of problem (10) and, consequently, of problem (11). Now, suppose that $(\bar t_1,\bar t_2)$ is a solution of problem (11). One can easily prove (see again [3]) that $(\bar t_1,\bar t_2)$ extends to a feasible solution $(\bar t_1,\bar t_2,t_3^*,\dots,t_{n+1}^*)$ of problem (9). Thus, considering only the first two components of the feasible solutions of problem (9), we obtain the same set as the feasible set of problem (11). Obviously, both problems have the same minimal value, which, in addition, is finite.
In order to find a solution of problem (11), we need to investigate a problem given as
$$\min\, t_1+t_2,\quad a_k t_1+b_k t_2\ge 1,\ k\in I_n,\quad t_1\ge 0,\ t_2\ge 0.\tag{12}$$
It will suffice to consider only the case when $a_k>0$ and $b_k>0$ for all $k\in I_n$. Taking
$$a_k=\frac{\sum_{i=1}^{k}\alpha_i}{\sum_{i=1}^{k}w_i}\quad\text{and}\quad b_k=\frac{\sum_{i=1}^{k}\beta_i}{\sum_{i=1}^{k}w_i},\quad k\in I_n,\tag{13}$$
problem (11) becomes exactly a problem of type (12). Therefore, solving problem (12) will result in solving problem (11) as well.
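In code, the substitutions (13) amount to cumulative sums; the helper below (our naming, a sketch using exact rationals) returns the data of problem (12) indexed over $I_n$.

```python
from fractions import Fraction
from itertools import accumulate

def problem12_data(w, alpha, beta):
    """Build {k: (a_k, b_k)} for problem (12) via the substitutions (13),
    keeping only the indices k in I_n (i.e., those with w_k > 0)."""
    W = list(accumulate(Fraction(v) for v in w))      # partial sums of w
    A = list(accumulate(Fraction(v) for v in alpha))  # partial sums of alpha
    B = list(accumulate(Fraction(v) for v in beta))   # partial sums of beta
    return {k + 1: (A[k] / W[k], B[k] / W[k])
            for k in range(len(w)) if w[k] > 0}
```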
We need the following auxiliary result proved in [3].
Lemma 1.
(see [3], Lemma 7). Consider problem (12), where $a_k$ and $b_k$ are given in (13), $k\in I_n$. Suppose that $k_1,k_2\in I_n$ are such that $a_{k_1}\le b_{k_1}$ and $a_{k_2}\ge b_{k_2}$. If $(t_1^*,t_2^*)$ is a solution of the system
$$a_{k_1}t_1+b_{k_1}t_2=1,\quad a_{k_2}t_1+b_{k_2}t_2=1,$$
and if $(t_1^*,t_2^*)$ is feasible for problem (12), then $(t_1^*,t_2^*)$ is an optimal solution of problem (12).
All the information above will be very useful in the next section, where we will propose an iterative algorithm to approach the solution of problem (7).
Actually, we can now characterize the optimal solution of problem (7) by slightly improving Theorem 8 in [3]. We omit the proof, since the necessary modifications can easily be deduced by comparing with the statement of Theorem 8 in [3], coupled with Lemma 1 above, which lies at the basis of the next theorem.
Theorem 2.
(see also Theorem 8 in [3], covering the case when the weights are strictly positive, that is, $I_n=\{1,\dots,n\}$). Consider problem (7) in the special case when $0<\alpha_1\le\alpha_2\le\dots\le\alpha_n$ and $0<\beta_1\le\beta_2\le\dots\le\beta_n$. Then, consider problem (12), where $a_k$ and $b_k$ are obtained using the substitutions given in (13), $k\in I_n$. Furthermore, take $l=\arg\min\{a_k:k\in I_n\}$ and $p=\arg\min\{b_k:k\in I_n\}$. We have the following cases (not necessarily distinct, but covering all possible scenarios) in obtaining the optimal solution and the optimal value of problem (7).
(i)
If $a_l\ge b_l$, then $(x_1^*,\dots,x_n^*)$ is a solution of problem (7), where
$$x_1^*=\dots=x_l^*=\frac{1}{\alpha_1+\dots+\alpha_l},\quad x_{l+1}^*=\dots=x_n^*=0.$$
In addition, $\dfrac{w_1+\dots+w_l}{\alpha_1+\dots+\alpha_l}$ is the optimal value of problem (7).
(ii)
If $a_p\le b_p$, then $(x_1^*,\dots,x_n^*)$ is a solution of problem (7), where
$$x_1^*=\dots=x_p^*=\frac{1}{\beta_1+\dots+\beta_p},\quad x_{p+1}^*=\dots=x_n^*=0.$$
In addition, $\dfrac{w_1+\dots+w_p}{\beta_1+\dots+\beta_p}$ is the optimal value of problem (7).
(iii)
If in problem (12) there exists a binding constraint $C_{k_0}$ such that $a_{k_0}=b_{k_0}$, then $(x_1^*,\dots,x_n^*)$ is a solution of problem (7), where
$$x_1^*=\dots=x_{k_0}^*=\frac{1}{\alpha_1+\dots+\alpha_{k_0}},\quad x_{k_0+1}^*=\dots=x_n^*=0.$$
In addition, $\dfrac{w_1+\dots+w_{k_0}}{\alpha_1+\dots+\alpha_{k_0}}$ is the optimal value of problem (7).
(iv)
If in problem (12) the optimal solution satisfies with equality the constraints $C_{k_1}$ and $C_{k_2}$, where $k_1<k_2$ and $(a_{k_1}-b_{k_1})\cdot(a_{k_2}-b_{k_2})<0$, then $(x_1^*,\dots,x_n^*)$ is a solution of problem (7), where
$$x_i^*=\frac{\sum_{j=k_1+1}^{k_2}(\beta_j-\alpha_j)}{\sum_{j=1}^{k_1}\alpha_j\cdot\sum_{j=k_1+1}^{k_2}\beta_j-\sum_{j=1}^{k_1}\beta_j\cdot\sum_{j=k_1+1}^{k_2}\alpha_j},\quad i=\overline{1,k_1},$$
$$x_i^*=\frac{\sum_{j=1}^{k_1}(\alpha_j-\beta_j)}{\sum_{j=1}^{k_1}\alpha_j\cdot\sum_{j=k_1+1}^{k_2}\beta_j-\sum_{j=1}^{k_1}\beta_j\cdot\sum_{j=k_1+1}^{k_2}\alpha_j},\quad i=\overline{k_1+1,k_2},\qquad x_{k_2+1}^*=\dots=x_n^*=0.$$
In addition, the optimal value of problem (7) is equal to
$$\frac{\sum_{j=1}^{k_1}w_j\cdot\sum_{j=k_1+1}^{k_2}(\beta_j-\alpha_j)+\sum_{j=k_1+1}^{k_2}w_j\cdot\sum_{j=1}^{k_1}(\alpha_j-\beta_j)}{\sum_{j=1}^{k_1}\alpha_j\cdot\sum_{j=k_1+1}^{k_2}\beta_j-\sum_{j=k_1+1}^{k_2}\alpha_j\cdot\sum_{j=1}^{k_1}\beta_j}.$$
Remark 1.
Using the above theorem, we can also generalize the results given in Theorems 9 and 10, respectively, in paper [3]. In Theorem 9, we considered the case when there exists a permutation $\sigma\in S_n$ such that $0<\alpha_{\sigma_1}\le\alpha_{\sigma_2}\le\dots\le\alpha_{\sigma_n}$ and $0<\beta_{\sigma_1}\le\beta_{\sigma_2}\le\dots\le\beta_{\sigma_n}$. Then, in Theorem 10, we considered the case when $0<\alpha_1=\alpha_2=\dots=\alpha_n=\alpha$. Obviously, in view of Theorem 2 above, all these results can be extended to the more general case when the weights in problem (7) are only assumed to be non-negative. Then, of course, we can state similar refinements for the minimization problem. They are easily deduced from the corresponding results obtained in paper [3] (see Theorems 11–13, respectively, in [3]).

3. An Iterative Algorithm to Achieve the Optimal Solution

In this section, we propose an iterative algorithm to obtain the optimal solution of Problem (7). Although Theorem 2 can be implemented on a computer in a very convenient way based on the simplex algorithm (see the examples in [3]), the following algorithm has an interesting particularity: it can identify a constraint that is either binding or redundant. In this way, we can eliminate the constraints one by one until we obtain the optimal solution. We also hope this algorithm can be generalized to the case when we have more than two comonotone constraints; however, this remains an interesting open question in our opinion.
To construct our algorithm, we need to investigate Problem (12), where $a_k$ and $b_k$ are given in (13), $k\in I_n$. We also need some concepts and notations that are well known in linear programming. Let us denote by $C_k$ constraint number k of problem (12), $k\in I_n$. Then, we denote by U the feasible region of problem (12). Next, for some $k\in I_n$, let $P_k=I_n\setminus\{k\}$ and
$$U_k=\{(t_1,t_2)\in[0,\infty)\times[0,\infty):a_i t_1+b_i t_2\ge 1,\ i\in P_k\}.$$
In other words, $U_k$ is the feasible region of any optimization problem that keeps all the constraints of Problem (12) except for constraint $C_k$. The constraint $C_k$ is called redundant if $U=U_k$. In other words, the solution set of the optimization problem with feasible region U coincides with the solution set of the optimization problem that has the same objective function and all the constraints except for $C_k$. This means that constraint $C_k$ can be removed when solving the given problem. The constraint $C_k$ is called strongly redundant if it is redundant and
$$a_k t_1+b_k t_2>1,\quad\text{for all } (t_1,t_2)\in U.$$
Therefore, $C_k$ is strongly redundant if and only if the segment corresponding to the solutions of the equation $a_k t_1+b_k t_2=1$, $t_1\ge 0$, $t_2\ge 0$, does not intersect U. A redundant constraint that is not strongly redundant is called weakly redundant. The constraint $C_k$ is called binding if there exists at least one optimal point that satisfies this constraint with equality. This means that the segment corresponding to the equation $a_k t_1+b_k t_2=1$, $t_1\ge 0$, $t_2\ge 0$, contains an optimal point of the problem. Note that it is possible for a weakly redundant constraint to be binding as well. All these concepts were discussed with respect to our Problem (12), but of course, they can be defined accordingly for any kind of optimization problem.
Now, with these new tools, we can investigate Problem (12). As we said in the introduction, searching for binding constraints may be just as difficult as solving the program. However, we can easily spot some redundant constraints in Problem (12). We also believe it is worthwhile to do that, as otherwise all constraints are needed when performing the algorithm, and therefore, the calculation will be more complex than that used to eliminate the redundant constraints. In general, if $k_1,k_2\in I_n$ are such that $a_{k_1}\le a_{k_2}$ and $b_{k_1}\le b_{k_2}$, then constraint $C_{k_2}$ is redundant, and it can be eliminated. Using this fact, we propose a simple method to eliminate some redundant constraints. First, we set $M_1=I_n$, and we let $N_1$ be the subset of $M_1$ such that $a_l=\min\{a_k:k\in M_1\}$ for all $l\in N_1$. Then, let $p_1$ be the index with the minimum value (just to make a choice if more indices satisfy the following property) in $N_1$ such that $b_{p_1}=\min\{b_k:k\in N_1\}$. We keep constraint $C_{p_1}$ and eliminate all the other constraints indexed in $N_1$, since they are all redundant. Next, for any $k\in M_1\setminus N_1$, we compute $(a_{p_1}-a_k)\cdot(b_{p_1}-b_k)$. If this value is strictly negative, then we keep constraint $C_k$; if not, then we eliminate $C_k$ because it is redundant. Let $M_2$ be the set of indices corresponding only to the constraints that were not eliminated and which does not contain $p_1$. We continue with the same reasoning, with the only difference that now we take $M_2$ instead of $M_1$; we define $N_2$ with respect to $M_2$ the same way we defined $N_1$ with respect to $M_1$, then define $p_2$ the same way we defined $p_1$, and so on with $M_3$, $N_3$, $p_3$, and so forth, until $M_k$ is the empty set. Note that k is at most equal to $n-1$. At every step k, we collect in a set denoted by J the index of the constraint that was not eliminated from $N_k$, that is, $p_1$, $p_2$, and so on. Thus, the constraints indexed by J give the same feasible region as the initial set of constraints. In addition, if $k_1,k_2\in J$, $k_1\ne k_2$, then $(a_{k_1}-a_{k_2})\cdot(b_{k_1}-b_{k_2})<0$. Consequently, there exists at most one index $k\in J$ such that $a_k=b_k$. There are other methods to obtain the set J, but in our opinion, this one is among the fastest when using the computer. In all that follows in this paper, the set J will be the one obtained with the above technique. Please note that it still may be possible to have redundant constraints among those indexed in J. Just before our first key result, we explain why it is not useful to search for such constraints outside the proposed algorithm.
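A computational sketch of this elimination procedure may be helpful; the function below is our own illustrative code (it assumes the constraint data is the dictionary produced by the `problem12_data` helper above, with exact rational entries) and returns the index set J.

```python
def build_J(data):
    """Eliminate the obviously redundant constraints of problem (12).
    `data` maps k -> (a_k, b_k). For the returned set J, any two distinct
    indices k1, k2 satisfy (a_k1 - a_k2) * (b_k1 - b_k2) < 0."""
    M, J = set(data), []
    while M:
        a_min = min(data[k][0] for k in M)
        N = {k for k in M if data[k][0] == a_min}
        b_min = min(data[k][1] for k in N)
        p = min(k for k in N if data[k][1] == b_min)  # smallest such index
        J.append(p)  # keep C_p; the other constraints in N are redundant
        # Keep C_k only if it is not dominated by C_p:
        M = {k for k in M - N
             if (data[p][0] - data[k][0]) * (data[p][1] - data[k][1]) < 0}
    return sorted(J)
```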
We are now in a position to present a key result that will then give us a fast algorithm to solve Problem (12). What is really interesting in the following theorem is that we present a precise and simple method to search for a constraint that will prove to be either a binding constraint or a strongly redundant constraint. This happens rarely in linear programming, and it also explains why it is not necessary to search for redundant constraints separately. This is indeed an important advantage, mainly because the techniques for finding redundant inequalities involve a lot of computation.
Theorem 3.
Consider Problem (12), where $a_k$ and $b_k$ are given in (13), $k\in I_n$. Let $J\subseteq I_n$ be the set of indices obtained by the technique described just before this theorem, and let $J_1=\{k\in J:a_k\ge b_k\}$ and $J_2=\{k\in J:a_k\le b_k\}$. In addition, suppose that both $J_1$ and $J_2$ are nonempty. Then, let $k_1\in J_1$ be such that $a_{k_1}=\min\{a_k:k\in J_1\}$ and $k_2\in J_2$ be such that $b_{k_2}=\min\{b_k:k\in J_2\}$ (by the construction of J, it follows that $a_{k_1}$ and $b_{k_2}$ are unique minimizers). Then, constraint $C_{k_1}$ is either binding, or it is strongly redundant. Similarly, constraint $C_{k_2}$ is either binding, or it is strongly redundant.
Proof. 
Due to the absolutely similar reasoning for the two assertions, we prove only the first one. Suppose that $C_{k_1}$ is not strongly redundant. Again, let $f(t_1,t_2)=t_1+t_2$ be the objective function. We have two cases: (i) the point $(1/a_{k_1},0)$ belongs to the feasible region, and (ii) $(1/a_{k_1},0)$ does not belong to the feasible region.
For case (i), since $a_{k_1}\ge b_{k_1}$, reasoning as in the proof of Lemma 1, it follows that $f(1/a_{k_1},0)\le f(t_1,t_2)\le f(0,1/b_{k_1})$ for any feasible point $(t_1,t_2)$ that satisfies constraint $C_{k_1}$ with equality. Now, if $(t_1,t_2)$ is an arbitrary feasible point, it is clear that the intersection of the segments $[(0,0),(t_1,t_2)]$ and $[(1/a_{k_1},0),(0,1/b_{k_1})]$ is nonempty. Let $(u_1,u_2)$ be in this intersection. Reasoning as in the proof of Lemma 1, we obtain that $f(t_1,t_2)\ge f(u_1,u_2)\ge f(1/a_{k_1},0)$. This means that $(1/a_{k_1},0)$ is an optimal solution of Problem (12).
For case (ii), from all the feasible points that satisfy constraint $C_{k_1}$ with equality, we choose the one for which the first component has the maximum value. In other words, considering the intersection of $[(1/a_{k_1},0),(0,1/b_{k_1})]$ with the feasible region, we take the point nearest to $(1/a_{k_1},0)$ with respect to the usual Euclidean metric in $\mathbb{R}^2$. Let us denote this point by $(t_1^*,t_2^*)$. Note that since the feasible region is a closed and convex subset of $\mathbb{R}^2$, this intersection is a closed segment; therefore, the construction of $(t_1^*,t_2^*)$ is correct. Suppose that $a_0=\min\{a_k:k\in J\}$. It is immediate that $(1/a_0,0)$ is a feasible point of Problem (12). As $(1/a_{k_1},0)$ is not, it necessarily follows that $a_{k_1}>a_0$. From this property, it results that there exists a constraint $C_l$ such that $(t_1^*,t_2^*)$ satisfies this constraint with equality and such that $a_l<a_{k_1}$. This property can be deduced using some elementary geometrical reasoning; for the sake of correctness, let us give a rigorous proof. Let us choose an arbitrary $k\in J$ such that $a_k<a_{k_1}$ (such an element exists since $a_{k_1}>a_0$). In this case, the system $a_{k_1}t_1+b_{k_1}t_2=1$, $a_k t_1+b_k t_2=1$, must have its (unique) solution on the segment $[(t_1^*,t_2^*),(1/a_{k_1},0)]$. Otherwise, $(0,0)$ and $(t_1^*,t_2^*)$ would lie in the same half-plane with respect to the separating line $a_k x+b_k y=1$; hence, $a_k t_1^*+b_k t_2^*<1$. This implies that $(t_1^*,t_2^*)$ is not feasible, which is a contradiction. Now, let us choose an arbitrary $k\in J$ such that $a_k>a_{k_1}$. In this case, the unique solution of the system $a_{k_1}t_1+b_{k_1}t_2=1$, $a_k t_1+b_k t_2=1$, $t_1\ge 0$, $t_2\ge 0$, lies on the segment $[(t_1^*,t_2^*),(0,1/b_{k_1})]$. Otherwise, we obtain the same contradiction as above. Now, by way of contradiction, suppose that for any $k\in J$ such that $a_k<a_{k_1}$, we have $a_k t_1^*+b_k t_2^*\ne 1$. Using the properties mentioned just above, we obtain that $a_k t_1^*+b_k t_2^*>1$ for all such k. Then, if $k\in J$ is such that $a_k>a_{k_1}$, we obtain that $a_k t_1+b_k t_2\ge 1$ for all $(t_1,t_2)\in[(t_1^*,t_2^*),(1/a_{k_1},0)]$. All these imply that there exists $(\bar t_1,\bar t_2)\in[(t_1^*,t_2^*),(1/a_{k_1},0)]$, sufficiently close to $(t_1^*,t_2^*)$, such that $\bar t_1>t_1^*$ and such that $a_k\bar t_1+b_k\bar t_2\ge 1$ for all $k\in J$. This means that $(\bar t_1,\bar t_2)$ is a feasible point for Problem (12). On the other hand, $(\bar t_1,\bar t_2)$ satisfies constraint $C_{k_1}$ with equality, and $\bar t_1>t_1^*$. This contradicts the construction of $t_1^*$. Therefore, there exists a constraint $C_l$ such that $(t_1^*,t_2^*)$ satisfies this constraint with equality and such that $a_l<a_{k_1}$. By the construction of $a_{k_1}$, it follows that $a_l<b_l$. Therefore, $(t_1^*,t_2^*)$ is a feasible point which is a solution of the system
$$a_{k_1}t_1+b_{k_1}t_2=1,\quad a_l t_1+b_l t_2=1,$$
where $a_{k_1}\ge b_{k_1}$ and $a_l<b_l$. By Lemma 1, it follows that $(t_1^*,t_2^*)$ is an optimal solution of Problem (12). The proof is now complete. In the case of constraint $C_{k_2}$, the reasoning is identical. In this case, if $C_{k_2}$ is binding, then the optimal solution is the feasible point that satisfies $C_{k_2}$ with equality and is nearest to $(0,1/b_{k_2})$; equivalently, of all feasible points satisfying $C_{k_2}$ with equality, it is the one with the minimum value of the first component.    □
From the above theorem, we can actually describe the optimal point precisely. We did that in the proof, but it is worthwhile to highlight this fact in the following corollary, given without proof, since it is nothing else but an analytical characterization of the optimal solution obtained in the previous theorem.
Corollary 1.
Consider all hypotheses and notations from Theorem 3. If $C_{k_1}$ is binding, then let $[\alpha,\beta]$ be the solution set for the variable $t_1$ obtained after we solve the system of the constraints with the substitution $t_2=(1-a_{k_1}t_1)/b_{k_1}$ (as $C_{k_1}$ is binding, this solution set is always nonempty). Then, an optimal solution of problem (12) is $(\beta,(1-a_{k_1}\beta)/b_{k_1})$, and the optimal value is $(1-a_{k_1}/b_{k_1})\beta+1/b_{k_1}$. Then, if $C_{k_2}$ is binding, denote again by $[\alpha,\beta]$ the solution set for the variable $t_1$ using the substitution $t_2=(1-a_{k_2}t_1)/b_{k_2}$. Then, an optimal solution of problem (12) is $(\alpha,(1-a_{k_2}\alpha)/b_{k_2})$, and the optimal value is $(1-a_{k_2}/b_{k_2})\alpha+1/b_{k_2}$.
Now, we are in a position to present an algorithm that gives us a solution of Problem (12). Obviously, there are two ways to approach the solution: either we search for the binding constraint in the set $J_1$, or we search in the set $J_2$. If we were to run simulations on a very large number of such problems, we believe that we would obtain the same running time on average. In general, we will apply the algorithm using $J_1$ if its cardinality is less than or equal to the cardinality of $J_2$; otherwise, we will apply the algorithm using $J_2$. We could also propose an algorithm that takes both sets $J_1$ and $J_2$ into consideration. First, we search for the binding constraint in $J_1$, taking the constraint $C_{k_1}$ as described in the statement of Theorem 3. If $C_{k_1}$ is binding, then we find the optimal solution as described in the previous corollary. If $C_{k_1}$ is redundant, then we check the constraint $C_{k_2}$ in $J_2$, selected exactly as in the statement of Theorem 3. If $C_{k_2}$ is binding, then we find the optimal solution as described in the previous corollary. If not, then we update $J_1=J_1\setminus\{k_1\}$ and search again for the constraint in $J_1$ according to the construction in Theorem 3, and so on. We omit this second variant because we think it is slower in general.
In Algorithm 1, we search for the binding constraint from beginning to end, either in $J_1$ or in $J_2$. The first two steps are essential for this choice. For that, we need to compute $a_{k^*}=\min\{a_k:k\in J\}$ and $b_{k^{**}}=\min\{b_k:k\in J\}$. Note that besides computing $a_{k^*}$ and $b_{k^{**}}$, we also need to identify the indices $k^*$ and $k^{**}$.
Algorithm 1: solution
Step 1
If $a_{k^*}\ge b_{k^*}$, then $(1/a_{k^*},0)$ is an optimal solution of Problem (12), and $1/a_{k^*}$ is the
 optimal value of Problem (12). If $a_{k^*}<b_{k^*}$, then go to Step 2.
Step 2
If $a_{k^{**}}\le b_{k^{**}}$, then $(0,1/b_{k^{**}})$ is the optimal solution of Problem (12), and $1/b_{k^{**}}$ is
 the optimal value of Problem (12). If $a_{k^{**}}>b_{k^{**}}$, then go to Step 3.
Step 3
If we reached this step of the algorithm, it means that both $J_1$ and $J_2$ are nonempty.
 What is more, both of them contain at least one index corresponding to a binding
 constraint. Let us explain for $J_1$, since for $J_2$, the explanation is identical.
 As $a_{k^{**}}>b_{k^{**}}$, it follows that $k^{**}$ is in $J_1$. If $C_k$ were strongly redundant for
 every $k\in J_1$ such that $a_k<a_{k^{**}}$, then by Theorem 3, it would easily follow that constraint
 $C_{k^{**}}$ is binding. Here, we need to decide whether we search for the binding constraint
 considering the set $J_1$ or the set $J_2$. We can impose a selection criterion.
 For example, we choose to go with $J_1$ if its cardinality is less than or equal to the
 cardinality of $J_2$, and with $J_2$ otherwise. In what follows, we explain the algorithm
 when the option is $J_1$, and at the end of it, we explain in a remark the very small
 differences that occur in the case when the option is $J_2$.
Take $a_{l_1}=\min\{a_k:k\in J_1\}$. We solve the system of the constraints indexed in J with
 the substitution $t_2=(1-a_{l_1}t_1)/b_{l_1}$. If we obtain for the variable $t_1$ the solution $[\alpha,\beta]$,
 then an optimal solution of problem (12) is $(\beta,(1-a_{l_1}\beta)/b_{l_1})$, and the optimal
 value is $(1-a_{l_1}/b_{l_1})\beta+1/b_{l_1}$. If this system has no solution, then go to Step 4.
Step 4
We set $J:=J\setminus\{l_1\}$ and $J_1:=J_1\setminus\{l_1\}$, and we repeat Steps 1–3 for the newly
 obtained J and $J_1$.
We observe that Step 3 is repeated until we get the first binding constraint, and we know that, in the worst case, this binding constraint is $C_{k^{**}}$. We also notice that we have a maximum of $n-1$ iterations in terms of repeatedly applying Step 3. What is more, if we chose to go with $J_1$ because its cardinality does not exceed the cardinality of $J_2$, then we have at most $\lfloor|J|/2\rfloor+1$ iterations (here, $\lfloor\cdot\rfloor$ stands for the integer part of a real number). The most important utility of Algorithm 1 is that it helps us to indicate the binding constraint or the binding constraints, respectively, in Theorem 2, corresponding to case (iii) or (iv), respectively.
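Putting Theorem 3, Corollary 1 and the steps above together, a sketch of Algorithm 1 in Python could look as follows. This is our own illustrative code; it assumes the helpers `problem12_data` and `build_J` from the earlier sketches and exact rational coefficients. Solving the system of constraints on the line $t_2=(1-a_{l_1}t_1)/b_{l_1}$ reduces to intersecting intervals for $t_1$.

```python
from fractions import Fraction

def algorithm1(data):
    """Algorithm 1 (sketch): min t1 + t2 s.t. a_k t1 + b_k t2 >= 1 (k in J),
    t1, t2 >= 0. Returns ((t1, t2), optimal_value)."""
    J = set(build_J(data))
    ks = min(J, key=lambda k: data[k][0])   # k*:  index of minimal a_k
    kss = min(J, key=lambda k: data[k][1])  # k**: index of minimal b_k
    a, b = data[ks]
    if a >= b:                              # Step 1
        return (1 / a, Fraction(0)), 1 / a
    a, b = data[kss]
    if a <= b:                              # Step 2
        return (Fraction(0), 1 / b), 1 / b
    J1 = {k for k in J if data[k][0] >= data[k][1]}
    while True:                             # Step 3 (J1 variant)
        l1 = min(J1, key=lambda k: data[k][0])
        al, bl = data[l1]
        lo, hi = Fraction(0), 1 / al        # from t1 >= 0 and t2 >= 0
        for k in J:                         # substitute t2 = (1 - al*t1)/bl
            c, rhs = data[k][0] - al * data[k][1] / bl, 1 - data[k][1] / bl
            if c > 0:
                lo = max(lo, rhs / c)
            elif c < 0:
                hi = min(hi, rhs / c)
            elif rhs > 0:                   # 0 * t1 >= rhs > 0: infeasible
                lo, hi = Fraction(1), Fraction(0)
        if lo <= hi:                        # solution interval [alpha, beta]
            t1 = hi                         # J1 branch: maximal t1 (Corollary 1)
            return (t1, (1 - al * t1) / bl), t1 + (1 - al * t1) / bl
        J.discard(l1); J1.discard(l1)       # Step 4: C_{l1} strongly redundant
```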
Remark 2.
If we go with $J_2$ instead of $J_1$, then we just need to adapt the calculations in Step 3. First, we take $b_{p_1}=\min\{b_k:k\in J_2\}$. Then, we solve the system of the constraints with the substitution $t_2=(1-a_{p_1}t_1)/b_{p_1}$. If we obtain for the variable $t_1$ the solution $[\alpha,\beta]$, then the optimal solution of problem (12) is $(\alpha,(1-a_{p_1}\alpha)/b_{p_1})$, and the optimal value is $(1-a_{p_1}/b_{p_1})\alpha+1/b_{p_1}$. If this system has no solution, then we go to Step 4, where this time, we set $J:=J\setminus\{p_1\}$ and $J_2:=J_2\setminus\{p_1\}$, and we repeat Step 3 for the newly obtained J and $J_2$.
Remark 3.
Another useful remark is that we can slightly accelerate the algorithm at Step 3. Suppose again that we choose to go with $J_1$. Instead of solving the system of constraints with the substitution $t_2=(1-a_{l_1}t_1)/b_{l_1}$, we can use a different approach. For any $k\in J_2$, we solve the system $a_{l_1}t_1+b_{l_1}t_2=1$, $a_k t_1+b_k t_2=1$. From all these solutions, we select the one for which the first component has the minimum value; suppose this solution is $(t_1^*,t_2^*)$. If $(t_1^*,t_2^*)$ satisfies all the constraints indexed in $J_2$, then $(t_1^*,t_2^*)$ is the optimal solution of problem (12). Otherwise, if there exists a constraint indexed in $J_2$ that is not satisfied by $(t_1^*,t_2^*)$, then constraint $C_{l_1}$ is strongly redundant, and we move to Step 4. It seems that, in general, this method is slightly faster than the one described in the algorithm when n is sufficiently large.

4. Some Concrete Examples

In what follows, first, we present some examples on which we test Algorithm 1. Finally, we apply the algorithm to solve a concrete constrained OWA aggregation problem.
Example 1.
Let us consider the problem
$$t_1+t_2\to\min,$$
subject to
$$2t_1+4t_2\ge 1,\quad 4t_1+t_2\ge 1,\quad 3t_1+2t_2\ge 1,\quad 7t_1+5t_2\ge 1,\quad t_1,t_2\ge 0.$$
We observe that $a_{k^*}=\min\{a_k:k\in J\}=2$ and $b_{k^{**}}=\min\{b_k:k\in J\}=1$. Since $b_{k^*}=4>a_{k^*}$ and $a_{k^{**}}=4>b_{k^{**}}$, it follows that we can move to Step 3 of Algorithm 1. It is easily seen that $J_1=\{2,3,4\}$ and $J_2=\{1\}$. Clearly, the simplest way to reach the solution is to go with $J_2$, since it has just one element. Practically, this means that constraint $C_1$ is binding. Following Algorithm 1 (adapted for $J_2$ as in Remark 2), we solve the system of the constraints in the special case when $t_2=\frac{1-2t_1}{4}$. Thus, we obtain the system of inequalities
$$4t_1+\frac{1-2t_1}{4}\ge 1,\quad 3t_1+2\cdot\frac{1-2t_1}{4}\ge 1,\quad 7t_1+5\cdot\frac{1-2t_1}{4}\ge 1,\quad 1-2t_1\ge 0,\quad t_1\ge 0.$$
By simple calculations, the above system has the solution $t_1\in\left[\frac{1}{4},\frac{1}{2}\right]$. By Remark 2, we get $t_1^*=\frac{1}{4}$, and by the substitution used here, we get $t_2^*=\frac{1-2t_1^*}{4}=\frac{1}{8}$. Therefore, $\left(\frac{1}{4},\frac{1}{8}\right)$ is the optimal solution of our problem, and $t_1^*+t_2^*=\frac{3}{8}$ is the optimal value. Even though $J_1$ has three elements, we would not need more iterations there either. Indeed, $a_{l_1}=\min\{a_k:k\in J_1\}=3$; thus, we need to solve the system of the constraints under the substitution $t_2=\frac{1-3t_1}{2}$. By simple calculations, we obtain $t_1\in\left[\frac{1}{5},\frac{1}{4}\right]$; hence, we obtain the optimal solution at the first attempt. Note that with the second variant mentioned above, no additional iterations are needed.
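Under the assumption that the `algorithm1` sketch above is available, the data of this example can be checked mechanically (note that the preliminary elimination drops $C_4$, which is dominated by $C_1$, without changing the result):

```python
from fractions import Fraction as F

data = {1: (F(2), F(4)), 2: (F(4), F(1)), 3: (F(3), F(2)), 4: (F(7), F(5))}
print(algorithm1(data))  # ((1/4, 1/8), 3/8), as computed above
```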
The following example shows that sometimes we may need Step 4 of Algorithm 1.
Example 2.
Let us consider the problem
$$t_1+t_2\to\min,$$
subject to
$$\frac{1}{2}t_1+\frac{1}{6}t_2\ge 1,\quad \frac{1}{3}t_1+\frac{1}{12}t_2\ge 1,\quad \frac{1}{4}t_1+\frac{1}{4}t_2\ge 1,\quad \frac{1}{5}t_1+\frac{1}{3}t_2\ge 1,\quad \frac{1}{6}t_1+\frac{1}{2}t_2\ge 1,\quad t_1,t_2\ge 0.$$
Again, we cannot obtain the solution in the first two steps of Algorithm 1; therefore, we move to Step 3. We have $J_1=\{1,2,3\}$ and $J_2=\{3,4,5\}$. First, let us apply the algorithm for $J_1$. We observe that $\min\{a_k:k\in J_1\}=a_3=\frac{1}{4}$. Therefore, we have to solve the system of the constraints in the special case when $t_2=4-t_1$. After simple calculations, we get that this system has no solution, which means that constraint $C_3$ is strongly redundant, and we have to go to Step 4. We set $J:=J\setminus\{3\}$ and $J_1:=J_1\setminus\{3\}$. Now, $\min\{a_k:k\in J_1\}=a_2=\frac{1}{3}$, and we have to solve the system of constraints in the special case when $t_2=12-4t_1$. We obtain the system
$$\frac{1}{2}t_1+\frac{1}{6}\cdot(12-4t_1)\ge 1,\quad \frac{1}{4}t_1+\frac{1}{4}\cdot(12-4t_1)\ge 1,\quad \frac{1}{5}t_1+\frac{1}{3}\cdot(12-4t_1)\ge 1,\quad \frac{1}{6}t_1+\frac{1}{2}\cdot(12-4t_1)\ge 1,\quad 12-4t_1\ge 0,\quad t_1\ge 0.$$
By simple calculations, this system has the solution $t_1\in\left[0,\frac{45}{17}\right]$. Since we work with $J_1$, we take $t_1^*=\frac{45}{17}$ and $t_2^*=12-4t_1^*=\frac{24}{17}$; hence, $\left(\frac{45}{17},\frac{24}{17}\right)$ is the optimal solution, and $t_1^*+t_2^*=\frac{69}{17}$ is the optimal value.
Example 3.
Let us now find the optimal solution of a constrained OWA aggregation problem with two comonotone constraints. We noticed that, most often, we obtain a lot of redundant constraints in the auxiliary Problem (12). However, sometimes we can obtain situations where Step 3 is needed in Algorithm 1. In our example, we consider the case $n=3$, which we believe suffices to explain how the algorithm works. Since Algorithm 1 can easily be implemented on a computer, one may consider higher values of n.
Consider the problem
$$\max\, \frac{1}{26}y_1+\frac{5}{26}y_2+\frac{10}{13}y_3,\quad \frac{1}{6}x_1+\frac{17}{6}x_2+\frac{7}{2}x_3\le 1,\quad \frac{1}{2}x_1+\frac{1}{2}x_2+\frac{11}{2}x_3\le 1,\quad x_1,x_2,x_3\ge 0.\tag{14}$$
Clearly, we have two comonotone constraints here; therefore, we need the auxiliary problem (12), and by simple calculations, this problem is
$$\min\, t_1+t_2,\quad \frac{13}{3}t_1+13t_2\ge 1,\quad 13t_1+\frac{13}{3}t_2\ge 1,\quad \frac{13}{2}t_1+\frac{13}{2}t_2\ge 1,\quad t_1,t_2\ge 0.\tag{15}$$
We have
$$a_{k^*}=\min\{a_k:k\in\{1,2,3\}\}=a_1=\frac{13}{3}<b_1=13$$
and
$$b_{k^{**}}=\min\{b_k:k\in\{1,2,3\}\}=b_2=\frac{13}{3}<a_2=13.$$
This means that we need Step 3 of Algorithm 1. We have $J_1=\{2,3\}$ and $\min\{a_k:k\in J_1\}=a_3=\frac{13}{2}$. Therefore, we solve the system of constraints in (15) in the special case when
$$t_2=\frac{1-\frac{13}{2}t_1}{\frac{13}{2}}=\frac{2}{13}-t_1,$$
that is, when constraint 3 is satisfied with equality. After simple calculations, we obtain the solution of this system as the interval $\left[\frac{1}{26},\frac{3}{26}\right]$. This means that case (iii) in Theorem 2 is applicable, and by applying the formula for this case, we obtain that an optimal solution of problem (14) is
$$x_1=x_2=x_3=\frac{1}{\alpha_1+\alpha_2+\alpha_3}=\frac{2}{13},$$
and the optimal value of this problem is $\frac{2}{13}$.
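As a final check, the whole pipeline of this example can be reproduced with the sketches introduced earlier (`problem12_data`, `build_J`, `algorithm1`; again, these helpers are our illustrative code, not part of the paper):

```python
from fractions import Fraction as F

w     = [F(1, 26), F(5, 26), F(10, 13)]
alpha = [F(1, 6), F(17, 6), F(7, 2)]
beta  = [F(1, 2), F(1, 2), F(11, 2)]

data = problem12_data(w, alpha, beta)
print(data)              # {1: (13/3, 13), 2: (13, 13/3), 3: (13/2, 13/2)}
print(algorithm1(data))  # ((3/26, 1/26), 2/13): C_3 is binding
```

The optimal value $\frac{2}{13}$ of the auxiliary problem equals the optimal value of problem (14), and the binding constraint $C_3$ with $a_3=b_3$ is exactly the one that triggers case (iii) of Theorem 2.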

5. Conclusions

In this paper, we extended the solution of the constrained OWA aggregation problem with two comonotone constraints to the case when the OWA weights are arbitrary non-negative numbers. Moreover, we proposed an iterative algorithm to approach the optimal solution. This algorithm indicates a constraint that is either binding or strongly redundant. We hope this is a first step towards solving constrained OWA aggregation problems with an arbitrary number of constraints.

Author Contributions

Conceptualization, formal analysis, writing—review and editing, L.C.; conceptualization, formal analysis, writing—review and editing, R.F. All authors have read and agreed to the published version of the manuscript.

Funding

Robert Fullér was partially supported by the ELKH-SZE Research Group for Cognitive Mapping of Decision Support Systems. Lucian Coroianu was supported by a grant awarded by the University of Oradea and titled “Approximation and optimization methods with applications”.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The study did not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
OWA	Ordered Weighted Averaging

References

  1. Yager, R.R. On ordered weighted averaging aggregation operators in multicriteria decision making. IEEE Trans. Syst. Man Cybern. 1988, 18, 183–190. [Google Scholar] [CrossRef]
  2. Coroianu, L.; Fullér, R.; Harmati, I.A. Best approximation of OWA Olympic weights under predefined level of orness. Fuzzy Sets Syst. 2022, 448, 127–144. [Google Scholar] [CrossRef]
  3. Coroianu, L.; Fullér, R.; Gagolewski, M.; James, S. Constrained ordered weighted averaging aggregation with multiple comonotone constraints. Fuzzy Sets Syst. 2020, 395, 21–39. [Google Scholar] [CrossRef]
  4. Emrouznejad, A.; Marra, M. Ordered weighted averaging operators 1988–2014: A citation-based literature survey. Int. J. Intell. Syst. 2014, 29, 994–1014. [Google Scholar] [CrossRef]
  5. Nguyen, T.H. Simplifying the minimax disparity model for determining OWA weights in large-scale problems. In New Trends in Emerging Complex Real Life Problems; Daniele, P., Scrimali, L., Eds.; AIRO Springer Series; Springer: Cham, Switzerland, 2018; Volume 1. [Google Scholar] [CrossRef]
  6. Yager, R.R. Constrained OWA aggregation. Fuzzy Sets Syst. 1996, 81, 89–101. [Google Scholar] [CrossRef]
  7. Carlsson, C.; Fullér, R.; Majlender, P. A note on constrained OWA aggregation. Fuzzy Sets Syst. 2003, 139, 543–546. [Google Scholar] [CrossRef]
  8. Coroianu, L.; Fullér, R. On the constrained OWA aggregation problem with single constraint. Fuzzy Sets Syst. 2018, 332, 37–43. [Google Scholar] [CrossRef]
  9. Kim, E.Y.; Ahn, B.S. An Efficient Approach to Solve the Constrained OWA Aggregation Problem. Symmetry 2022, 14, 724. [Google Scholar] [CrossRef]
  10. Coroianu, L.; Fullér, R. Minimum of constrained OWA aggregation problem with a single constraint. In Fuzzy Logic and Applications. WILF 2018. Lecture Notes in Computer Science; Fullér, R., Giove, S., Masulli, F., Eds.; Springer: Cham, Switzerland, 2019; Volume 11291, pp. 183–192. [Google Scholar]
  11. Ogryczak, W.; Śliwiński, T. On efficient WOWA optimization for decision support under risk. Int. J. Approx. Reason. 2009, 50, 915–928. [Google Scholar] [CrossRef]
  12. Ogryczak, W.; Śliwiński, T. On solving linear programs with the ordered weighted averaging objective. Eur. J. Oper. Res. 2003, 148, 80–91. [Google Scholar] [CrossRef]
  13. Fullér, R.; Majlender, P. An analytic approach for obtaining maximal entropy OWA operator weights. Fuzzy Sets Syst. 2001, 124, 53–57. [Google Scholar] [CrossRef]
  14. Sang, X.; Liu, X. An analytic approach to obtain the least squares deviation OWA operator weights. Fuzzy Sets Syst. 2014, 240, 103–116. [Google Scholar] [CrossRef]
  15. Beliakov, G.; James, S. Choquet integral optimisation with constraints and the buoyancy property for fuzzy measures. Inf. Sci. 2021, 578, 22–36. [Google Scholar] [CrossRef]
  16. Kang, B.; Deng, Y.; Hewage, K.; Sadiq, R. Generating Z-number based on OWA weights using maximum entropy. Int. J. Intell. Syst. 2018, 33, 1745–1755. [Google Scholar] [CrossRef]
  17. Ochoa, G.; Lizasoain, I.; Paternain, D.; Bustince, H.; Pal, N.R. From quantitative to qualitative orness for lattice OWA operators. Int. J. Gen. Syst. 2017, 46, 640–669. [Google Scholar] [CrossRef]
  18. Paternain, G.; Ochoa, G.; Lizasoain, I.; Bustince, H.; Mesiar, R. Quantitative orness for lattice OWA operators. Inf. Fusion 2016, 30, 27–35. [Google Scholar] [CrossRef]
  19. Torra, V. Andness directedness for operators of the OWA and WOWA families. Fuzzy Sets Syst. 2021, 414, 28–37. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
