Article

From Goal Programming for Continuous Multi-Criteria Optimization to the Target Decision Rule for Mixed Uncertain Problems

by
Helena Gaspars-Wieloch
Department of Operations Research and Mathematical Economics, Poznan University of Economics and Business, Al. Niepodleglosci 10, 61-875 Poznań, Poland
Submission received: 30 September 2021 / Revised: 28 November 2021 / Accepted: 24 December 2021 / Published: 28 December 2021
(This article belongs to the Special Issue Decision Making, Classical and Quantum Optimization Methods)

Abstract

Goal programming (GP) is applied to the discrete and continuous versions of multi-criteria optimization. Recently, some essential analogies between multi-criteria decision making under certainty (M-DMC) and scenario-based one-criterion decision making under uncertainty (1-DMU) have been revealed in the literature. These similarities allow the adjustment of GP to an entirely new domain. The aim of the paper is to create a new decision rule for mixed uncertain problems on the basis of the GP methodology. The procedure can be used by pessimists, optimists and moderate decision makers. It is designed for one-shot decisions. One of the significant advantages of the novel approach is the possibility of analyzing neutral criteria, which are not directly taken into account in existing classical procedures developed for 1-DMU.

1. Introduction

Goal programming (GP) is one of the procedures applied to multi-criteria decision making under certainty (M-DMC). This issue is related to the situation where the decision maker (DM) assesses particular courses of action (options, alternatives, decision variants) with the use of more than one criterion and all the parameters of the decision problem are known.
GP was first applied by Charnes et al. [1]. Since then, the procedure has been extended to fuzzy multi-criteria problems [2,3] and combined with other methods for various applications [4,5].
Multi-criteria optimization involves two areas: Multiple Objective Decision Problems (MODP) and Multiple Attribute Decision Problems (MADP). Within MODP, the decision maker formulates and solves a mathematical optimization model with a set of objective functions and a set of constraints on the basis of which the set of possible solutions can be created. However, the number of options is not exactly known [6,7]. In MADP, the number of potential variants is precisely determined at the beginning of the decision making process. Additionally, the levels of analyzed attributes are assigned to each alternative [8]. GP may be used in the continuous and discrete versions of M-DMC, i.e., in MODP and MADP, respectively [9].
It is worth stressing that GP is especially designed for problems where neutral criteria are taken into consideration. “Neutral criteria are neither maximized nor minimized because they consist in reaching a specific value” [10]. The use of neutral criteria is quite frequent in solving real decision problems. They may concern, for instance, “the period of paying off the credit (the term of the loan), the rental time of office space, the duration of the project, the temperature level, the distance between two places, the number of rooms in a house, the surface of the plot or the level of precipitation” [10].
Recently, some vital analogies between multi-criteria decision making under certainty and scenario-based one-criterion decision making under uncertainty (1-DMU) have been revealed and discussed in [9,11]. These similarities allow for the adjustment of goal programming, initially designed for multi-criteria optimization, to an entirely new domain (1-DMU). In [10], the discrete case has already been investigated: a new approach for 1-DMU pure strategy searching has been proposed by referring to the methodology applied within GP for MADP. Nevertheless, the aforementioned analogy has not been explored yet in the continuous case. That is why the aim of this paper is to create a novel method for mixed one-criterion uncertain problems on the basis of the GP ideas developed for MODP. We analyze diverse types of decision makers (pessimistic, optimistic, moderate). The significant advantage of the new decision rule is the possibility of analyzing neutral criteria, which are not directly taken into account in the existing classical procedures developed for 1-DMU and mixed strategies [10], i.e., the Bayes, Hurwicz, Wald and max-max optimization models.
The rest of the paper is organized as follows. Section 2 (Materials and Methods) compares pure and mixed strategies; describes the idea of goal programming; recalls its subsequent steps for the continuous version of multi-criteria optimization; and presents M-DMC, 1-DMU and the analogies between the two issues. The last part of this section is devoted to the description of a novel approach for 1-DMU and mixed strategies. The suggested procedure is based on GP (initially designed for multi-criteria decision making). Section 3 (Calculation and Results) contains an illustrative example showing how the new decision rule may be applied to uncertain problems and mixed strategy searching. The characteristics of the suggested method are discussed in Section 4 (Discussion and Conclusions).

2. Materials and Methods

2.1. Pure and Mixed Strategies

In the previous section, pure and mixed strategies were mentioned. What do these notions mean? In the case of a pure strategy, the decision maker chooses and performs only one decision variant from a set of potential options [9]. For instance, one house is bought from a set of five possible houses, or one project is selected from a set of ten projects.
Nevertheless, in many cases mixed strategies may be more effective [12]. A mixed strategy occurs when the DM chooses and executes a combination of alternatives. Such an approach may be especially useful and advantageous in portfolio construction [13,14] and cultivation of different plants [9]. In the next subsections, the emphasis will be put on mixed strategies.

2.2. Goal Programming for MODP

The continuous version of goal programming may be used in different forms, e.g., weighted goal programming, lexicographic goal programming and Chebyshev goal programming [15], but in this paper we focus on the first variety. The steps are as follows:
  • Define the decision variables: x1, …, xj, …, xn.
  • Define the objectives C1, …, Ck, …, Cp and corresponding objective functions: f1(x), …, fk(x), …, fp(x), where p is the number of criteria (the problem may be presented by means of Table 1).
  • Declare the importance of each objective in the form of criteria weights: w1, …, wk, …, wp.
  • Define the desired level of each criterion: u1, …, uk, …, up.
  • Formulate the synthetic objective function. If criteria are expressed in the same units and scales, use Equation (1). Otherwise, apply Equation (2).
    $GP(x) = \sum_{k=1}^{p} w_k \left| f_k(x) - u_k \right| \rightarrow \min$  (1)
    $GP(x) = \sum_{k=1}^{p} w_k \left| g_k(x) - r_k \right| \rightarrow \min$  (2)
    where gk(x) denotes the normalized objective function for criterion k, and rk is the normalized desired level of this criterion. We can assume that the normalization is based on the following formulas (Equations (3) and (5) are related to maximized criteria, Equations (4) and (6) concern minimized criteria):
    $g_k(x) = \frac{f_k(x) - f_k^{\min}(x)}{f_k^{\max}(x) - f_k^{\min}(x)}$  (3)
    $g_k(x) = \frac{f_k^{\max}(x) - f_k(x)}{f_k^{\max}(x) - f_k^{\min}(x)}$  (4)
    $r_k = \frac{u_k - f_k^{\min}(x)}{f_k^{\max}(x) - f_k^{\min}(x)}$  (5)
    $r_k = \frac{f_k^{\max}(x) - u_k}{f_k^{\max}(x) - f_k^{\min}(x)}$  (6)
  • Add constraints describing the decision situation:
    $x \in SFS$  (7)
    where SFS denotes the set of feasible solutions.
  • Solve the problem.
The aforementioned optimization model is not easy to solve, since the synthetic objective function contains absolute values. That is why it is recommended to transform the initial problem into a linear optimization model. Given that $y_k$ and $z_k$ may represent the following expressions:
$y_k = \max\{f_k(x) - u_k;\ 0\}$  or  $y_k = \max\{g_k(x) - r_k;\ 0\}$  (8)
$z_k = \max\{-f_k(x) + u_k;\ 0\}$  or  $z_k = \max\{-g_k(x) + r_k;\ 0\}$  (9)
and that:
$\max\{a;\ 0\} + \max\{-a;\ 0\} = |a|$  (10)
$\max\{a;\ 0\} - \max\{-a;\ 0\} = a$  (11)
the model (1), (7) or (2), (7) may be simplified to the linear model (12), (7), (13), (15) or (12), (7), (14), (15).
$GP(x, y, z) = \sum_{k=1}^{p} w_k (y_k + z_k) \rightarrow \min$  (12)
$f_k(x) - y_k + z_k = u_k, \quad k = 1, \ldots, p$  (13)
$g_k(x) - y_k + z_k = r_k, \quad k = 1, \ldots, p$  (14)
$y_k, z_k \geq 0, \quad k = 1, \ldots, p$  (15)
The above transformation results from the fact that, since the absolute value of a parameter $a$ may be represented by means of Equation (10), the absolute value of the expression $f_k(x) - u_k$ may similarly be replaced by $y_k + z_k$ if we assume that $y_k$ and $z_k$ denote the expressions given by Equations (8) and (9).
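To make the linearization concrete, here is a minimal sketch in Python (an addition, not part of the original paper) that assembles and solves the model (12), (7), (13), (15) with scipy.optimize.linprog. It assumes linear criterion functions f_k(x) = F_k · x and commensurable criteria (so no normalization); the name solve_weighted_gp and its interface are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

def solve_weighted_gp(F, w, u, A_extra=None, b_extra=None, x_bounds=None):
    """Linearized weighted GP: minimize (12) subject to (13), (15) and,
    optionally, extra equality constraints on x playing the role of (7).

    F : (p, n) array, row k holds the coefficients of the linear criterion
        f_k(x) = F[k] @ x (a simplifying assumption of this sketch).
    w : (p,) criteria weights.   u : (p,) desired levels u_k.
    The LP variable vector is [x_1..x_n, y_1..y_p, z_1..z_p].
    """
    F = np.asarray(F, dtype=float)
    p, n = F.shape
    # Objective (12): zero cost on x, weight w_k on each deviation y_k, z_k.
    c = np.concatenate([np.zeros(n), np.asarray(w, float), np.asarray(w, float)])
    # Goal constraints (13): f_k(x) - y_k + z_k = u_k.
    A_eq = np.hstack([F, -np.eye(p), np.eye(p)])
    b_eq = np.asarray(u, dtype=float)
    if A_extra is not None:  # constraints (7) describing the feasible set
        A_extra = np.atleast_2d(np.asarray(A_extra, dtype=float))
        pad = np.zeros((A_extra.shape[0], 2 * p))
        A_eq = np.vstack([A_eq, np.hstack([A_extra, pad])])
        b_eq = np.concatenate([b_eq, np.atleast_1d(np.asarray(b_extra, dtype=float))])
    # Bounds: user-given bounds on x, plus non-negativity (15) for y and z.
    bounds = (list(x_bounds) if x_bounds is not None else [(0, None)] * n)
    bounds += [(0, None)] * (2 * p)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[:n], res.fun  # mixed strategy and weighted deviation
```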
As we can see, the essence of weighted goal programming is to minimize the distances between the realized values of particular criteria and their desired levels. Additionally, the DM has the possibility to declare the importance of a given criterion, so the deviations concerning the most significant objectives are penalized most heavily.
The optimal solution obtained after solving the aforementioned models represents a mixed strategy, but note that sometimes the solution generated by the continuous version of the weighted GP may be a simple pure strategy. Such a situation occurs when only one decision variable is positive (and equal to one) and the remaining ones are equal to zero.
When analyzing the construction of the algorithm, we may formulate the following question—why does step 5 take into account maximized and minimized criteria if the essence of GP is to concentrate on neutral criteria, which have a defined desired level? Indeed, in GP neutral criteria are mainly applied, but in order to normalize different values it is conventionally assumed that particular objectives tend to be maximized or minimized.
As a matter of fact, there is another optimization method that may lead to similar (but not identical!) solutions: the SAW (simple additive weighting) method, where the sum of weighted normalized values is maximized; in that case, however, the normalization for neutral criteria requires the use of a different formula.
In this paper, we focus on the case where targets are given as points, but it is worth stressing that the GP may be applied to situations where targets are defined as intervals [10].

2.3. Analogies between M-DMC and 1-DMU

Analogies between multi-criteria decision making under certainty and scenario-based one-criterion decision making under uncertainty have been thoroughly discussed in [9,10]. Therefore, in this article we only mention some relationships between the two issues.
On the one hand, M-DMC is related to cases where the DM assesses particular courses of action in terms of many criteria (at least two) and the parameters of the problem are supposed to be known. On the other hand, 1-DMU “is connected with situations in which the DM evaluates a given decision variant in terms of one objective function, but, due to numerous unknown future factors, the parameters of the problem are not deterministic” [10]. This time, a set of potential scenarios is available. These scenarios may be defined by experts, decision makers or by a person who is simultaneously an expert and a DM. “Scenario means a possible way in which the future might unfold” [9]. Scenario planning is a convenient and relatively simple tool enabling uncertainty modeling [16,17]. There are diverse uncertainty levels [9,18]:
  • I. Uncertainty with known probabilities: the DM knows the options, scenarios, scenario probabilities and particular payoffs.
  • II. Uncertainty with partially known probabilities: the DM knows the options, scenarios, partial scenario probabilities and particular payoffs; probabilities may be given as interval values, and sometimes scenarios are ordered according to their approximate chance of occurrence.
  • III. Uncertainty with unknown probabilities: the DM knows the options, scenarios and particular payoffs; scenario probabilities are not known.
  • IV. Uncertainty with unknown scenarios: the DM knows the options only.
In this paper we investigate the third level. The words “payoff”, “result” and “outcome” signify the effect gained by the decision maker if they select a given alternative and a given scenario occurs.
If we compare the structure of the table representing M-DMC (Table 1) with the structure of the table representing 1-DMU (Table 2), we will see a clear similarity.
In both cases, “there is a set of potential options and the set of significant objectives in M-DMC can correspond to the set of possible scenarios in 1-DMU” [9]. Another analogy is related to the final step of the decision making process. The decision maker, in both decision problems, can select and execute only one option (pure strategy) or a combination of several options (mixed strategy).
Of course, the analyzed issues also have essential differences:
  • Within 1-DMU, “if Aj is chosen, the final outcome (ai,j) is single and depends on the real scenario which will occur, meanwhile within M-DMC, if Aj is selected, there are p final outcomes, i.e., b1,j, …, bk,j, …, bp,j, as particular options are evaluated in terms of p objectives” [10].
  • In the case of M-DMC “initial values usually have to be normalized since they represent the performance of different criteria which are expressed by means of different scales and units. For 1-DMU the problem concerns one criterion. Thus, the normalization is useless” [9].
Despite the observed differences, relationships between both areas give the opportunity to adjust the initial GP model to a totally new issue, i.e., the scenario-based one-criterion decision making under uncertainty.

2.4. New Approach for 1-DMU and Mixed Strategies

By analogy to the weighted goal programming developed for the continuous version of M-DMC, the steps for the new approach could be as follows:
  • Define the decision variables: x1, …, xj, …, xn.
  • Define the scenarios: S1, …, Si, …, Sm and corresponding scenario functions: f1(x), …, fi(x), …, fm(x) (the problem may be presented by means of Table 2).
  • Declare the subjective chance of occurrence of particular scenarios: p1, …, pi, …, pm.
  • Define the desired levels of the analyzed criterion within particular scenarios: u1, …, ui, …, um.
  • Formulate the synthetic objective function.
    $TDR(x) = \sum_{i=1}^{m} p_i \left| f_i(x) - u_i \right| \rightarrow \min$  (16)
  • Add constraints describing the decision situation:
    $x \in SFS$  (17)
  • Solve the problem.
In connection with the fact that the synthetic objective function contains absolute values, the initial problem can, in this case too, be transformed into a linear optimization model. It suffices to assume that:
$y_i = \max\{f_i(x) - u_i;\ 0\}$  (18)
$z_i = \max\{-f_i(x) + u_i;\ 0\}$  (19)
and to solve the model (20), (17), (21), (22):
$TDR(x, y, z) = \sum_{i=1}^{m} p_i (y_i + z_i) \rightarrow \min$  (20)
$f_i(x) - y_i + z_i = u_i, \quad i = 1, \ldots, m$  (21)
$y_i, z_i \geq 0, \quad i = 1, \ldots, m$  (22)
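Since the linear TDR model (20)-(22) has exactly the same structure as the GP model (12), (13), (15) (scenarios replace criteria, and subjective probabilities replace criteria weights), the earlier sketch can be reused. The thin wrapper below is again an illustrative assumption; it relies on the hypothetical solve_weighted_gp helper sketched in Section 2.2.

```python
def solve_tdr(payoffs, probs, targets, A_extra=None, b_extra=None, x_bounds=None):
    """Target Decision Rule for mixed strategies: linear model (20)-(22).

    payoffs : (m, n) array, row i holds the payoffs a_{i,j} of scenario S_i,
              so the scenario function is f_i(x) = payoffs[i] @ x.
    probs   : (m,) subjective chances of occurrence p_1..p_m.
    targets : (m,) desired levels u_1..u_m of the single criterion.
    Note that, unlike in GP for M-DMC, no normalization step is needed.
    """
    return solve_weighted_gp(payoffs, probs, targets,
                             A_extra=A_extra, b_extra=b_extra, x_bounds=x_bounds)
```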
Step 3 is related to the subjective chance of occurrence. As mentioned in Section 2.3, the paper focuses on the third level of uncertainty, within which the objective probabilities are not known. Nevertheless, when these parameters are unknown, the decision maker may declare subjective probabilities that represent their attitude towards risk, predictions, state of mind and soul. Within scenario planning, the set of scenarios does not need to be exhaustive, so the sum of subjective probabilities does not need to be equal to one. Note that step 3 of the algorithm for 1-DMU is much more demanding than step 3 of the original goal programming designed for M-DMC. Decision makers are usually able to declare the importance of particular criteria, but they are not so confident when they are supposed to estimate the chance of occurrence of a given scenario: such information concerns the future, which is less known than the present. Decision makers may therefore be tempted to declare arbitrary subjective probabilities without any additional analysis. On the one hand, a quick probability estimation may speed up the use of the algorithm; on the other hand, a hasty declaration of these parameters is not recommended, since their levels may significantly affect the final solution (see Section 3). Of course, after solving the problem, a sensitivity analysis connected with the subjective chance of occurrence may be performed if the DM intends to check how the solution changes under the influence of probability changes.
Step 4 may also be surprising, since it allows for declaring a different desired level for each scenario, though in each case this level is defined for the same and only criterion considered in the decision problem. The use of varied parameters can be justified in the following way: sometimes the desired payoff may depend on the scenario. Here are two examples. The first is related to portfolio construction, where the objective represents revenue. Normally this criterion should be maximized, but in many countries tax law discourages people and institutions from making too much profit, because exceeding a threshold means a higher tax rate. If different tax thresholds and tax rates are assumed within particular scenarios, the decision maker may be interested in declaring a different desired revenue level for each scenario. The second example concerns a competition in which the player may receive a certain number of points for each task. This number depends on the scenario that will occur. Usually, people tend to maximize the number of points. However, if the final prize depends on the number of points obtained and the player is interested in winning a specific award, which is not necessarily connected with the greatest number of points, they may decide to treat this criterion as neutral. If, additionally, a given award is granted not for a specific number of points but for a specific place in the ranking, the use of different desired levels may be recommended. As we can observe, real decision situations are often more complex than simple theoretical problems. The decision maker has to take into consideration numerous circumstances and factors, which may bring them to the conclusion that the analyzed criterion, usually regarded as maximized or minimized, is neutral from their point of view.
Equation (16) minimizes the weighted deviations from the desired levels. The deviations related to scenarios with the highest subjective chance of occurrence are penalized most heavily.
Note that the GP-based algorithm for 1-DMU is much less complicated than the original procedure developed for M-DMC, since this time normalization is not required. This is a significant advantage of the suggested approach.
We have mentioned in the introduction that the novel decision rule would be suitable both for extreme decision makers (optimists and pessimists) and moderate people. It is a vital feature, since many classical decision rules are designed for a limited group of decision makers (see the Wald rule, Savage rule, max-max rule, Hayashi rule, Bayes rule [19]). The use of subjective probabilities enables the adjustment of the model to the decision maker’s attitude towards risk, but sometimes the estimation of these parameters may be quite complex. If for each option the payoffs of a given scenario are smaller than payoffs connected with another scenario, the decision maker has no difficulty in determining the worse scenario and the better one. However, if it is difficult to assign a status to each scenario due to similar payoff ranges, the intuitive estimation of subjective probabilities can be impossible. Therefore, in such cases we recommend the use of an algorithm which facilitates the scenario assessment, e.g., the first stage of the SF + AS procedure described in [20].

3. Calculation and Results

Let us call the new decision rule the Target Decision Rule (TDR) for mixed uncertain problems (the abbreviation TDR has already been used in Equation (16)) and analyze a concrete example solved by means of this approach. An investor intends to buy stocks of various companies: A1, A2, A3, A4 and A5. The decision variable xj denotes the share (in percent) of the capital invested in a given company (step 1). The investor considers six possible scenarios, which differ from each other in terms of political policy and the stock market situation. Table 3 presents the predicted annual revenues (in EUR) (step 2). We assume that the investor is a moderate pessimist. In their opinion, the subjective chances of occurrence of particular scenarios are as follows: 0.15, 0.10, 0.05, 0.10, 0.40 and 0.20 (step 3). The desired levels are equal to 180,000, 180,000, 160,000, 160,000, 90,000 and 90,000, respectively (step 4).
Within step 5, the synthetic objective function is formulated:
$TDR(x) = 0.15\,|400x_1 + \ldots + 2000x_5 - 180{,}000| + \ldots + 0.2\,|600x_1 + \ldots + 350x_5 - 90{,}000| \rightarrow \min$  (23)
In step 6, the investor may declare some constraints, for instance:
$0 \leq x_1, \ldots, x_5 \leq 35$  (24)
$x_1 + \ldots + x_5 = 100$  (25)
Now we can solve the problem, but, in order to simplify the objective function, we are going to transform the model (23)–(25) into (24)–(33):
$TDR(x, y, z) = 0.15(y_1 + z_1) + \ldots + 0.2(y_6 + z_6) \rightarrow \min$  (26)
$y_1, z_1, \ldots, y_6, z_6 \geq 0$  (27)
$400x_1 + \ldots + 2000x_5 - y_1 + z_1 = 180{,}000$  (28)
$1100x_1 + \ldots + 3800x_5 - y_2 + z_2 = 180{,}000$  (29)
$1200x_1 + \ldots + 1650x_5 - y_3 + z_3 = 160{,}000$  (30)
$900x_1 + \ldots + 1350x_5 - y_4 + z_4 = 160{,}000$  (31)
$500x_1 + \ldots + 300x_5 - y_5 + z_5 = 90{,}000$  (32)
$600x_1 + \ldots + 350x_5 - y_6 + z_6 = 90{,}000$  (33)
where:
$y_1 = \max\{400x_1 + \ldots + 2000x_5 - 180{,}000;\ 0\}$
$z_1 = \max\{-400x_1 - \ldots - 2000x_5 + 180{,}000;\ 0\}$
$y_2 = \max\{1100x_1 + \ldots + 3800x_5 - 180{,}000;\ 0\}$
$z_2 = \max\{-1100x_1 - \ldots - 3800x_5 + 180{,}000;\ 0\}$
$y_3 = \max\{1200x_1 + \ldots + 1650x_5 - 160{,}000;\ 0\}$
$z_3 = \max\{-1200x_1 - \ldots - 1650x_5 + 160{,}000;\ 0\}$
$y_4 = \max\{900x_1 + \ldots + 1350x_5 - 160{,}000;\ 0\}$
$z_4 = \max\{-900x_1 - \ldots - 1350x_5 + 160{,}000;\ 0\}$
$y_5 = \max\{500x_1 + \ldots + 300x_5 - 90{,}000;\ 0\}$
$z_5 = \max\{-500x_1 - \ldots - 300x_5 + 90{,}000;\ 0\}$
$y_6 = \max\{600x_1 + \ldots + 350x_5 - 90{,}000;\ 0\}$
$z_6 = \max\{-600x_1 - \ldots - 350x_5 + 90{,}000;\ 0\}$
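As a cross-check, the model (26)-(33) can be assembled directly from Table 3 and solved with the hypothetical solve_tdr helper sketched in Section 2.4. This is our illustrative code, not the SAS/OR program used for the paper; an LP solver should reproduce the reported objective value, although alternative optima with the same weighted deviation are possible.

```python
import numpy as np

# Coefficients of the scenario functions f_1(x)..f_6(x): Table 3 transposed,
# so row i corresponds to scenario S_i and column j to stock A_j.
payoffs = np.array([
    [ 400,  500,  650, 1000, 2000],   # S1
    [1100, 1200, 1250, 1900, 3800],   # S2
    [1200,  900, 3500, 2600, 1650],   # S3
    [ 900,  700, 2700, 2100, 1350],   # S4
    [ 500, 1000,  770,  600,  300],   # S5
    [ 600, 1150,  850,  740,  350],   # S6
])
probs   = [0.15, 0.10, 0.05, 0.10, 0.40, 0.20]                   # step 3
targets = [180_000, 180_000, 160_000, 160_000, 90_000, 90_000]   # step 4

# Constraint (25): the shares sum to 100%; constraint (24): each share <= 35%.
# The resulting LP has 5 + 6 + 6 = 17 non-negative decision variables.
shares, weighted_dev = solve_tdr(payoffs, probs, targets,
                                 A_extra=np.ones((1, 5)), b_extra=[100.0],
                                 x_bounds=[(0, 35)] * 5)
print(np.round(shares, 2), round(weighted_dev, 2))
```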
The linear model contains 17 non-negative decision variables. The optimal solution is as follows: x1 = 0.53%, x2 = 35%, x3 = 16.10%, x4 = 35% and x5 = 13.36%. It means that the investor should allocate 0.53% of their capital to shares A1, 35% to shares A2, etc. The weighted sum of deviations is equal to 23,554.28 and scenario functions are equal to f1(x) = 89,909.23, f2(x) = 180,000, f3(x) = 201,551.6, f4(x) = 160,000, f5(x) = 72,674.96 and f6(x) = 84,834.66. Thus, for scenarios S2 and S4, the investor’s revenue would be exactly equal to the desired levels (y2 = z2 = y4 = z4 = 0). For scenarios S1, S5 and S6 (z1, z5, z6 > 0), this revenue would be lower than the desired levels, and for scenario S3 the revenue would be higher (y3 > 0).
Of course, constraint (24) has a strong impact on the final structure of the portfolio. If we remove this condition, the shares are equal to x1 = 0%, x2 = 54.39%, x3 = 22.97%, x4 = 0% and x5 = 22.64%.
Indeed, in the analyzed illustrative example the investor is a moderate pessimist, since the highest subjective chance of occurrence is assigned to the scenario with the smallest average of payoffs. If the investor were a moderate DM declaring the following values: 0.35, 0.10, 0.05, 0.35, 0.05 and 0.10, the optimal structure of the mixed strategy recommended by TDR would be x1 = 0%, x2 = 20.88%, x3 = 9.13%, x4 = 35% and x5 = 35%. The weighted sum of deviations would be equal to 31,629, and the scenario functions would equal f1(x) = 121,368.8, f2(x) = 235,956.3, f3(x) = 199,475, f4(x) = 160,000, f5(x) = 59,401.3 and f6(x) = 69,912.5. Thus, for scenario S4 the investor's revenue would be exactly equal to the desired level (y4 = z4 = 0). For scenarios S1, S5 and S6 (z1, z5, z6 > 0), this revenue would be lower than the desired levels, and for scenarios S2 and S3 the revenue would be higher (y2, y3 > 0).
Of course, again, constraint (24) strongly affects the final structure of the portfolio. If we remove this condition, the shares are equal to x1 = 0%, x2 = 0%, x3 = 18.52%, x4 = 0% and x5 = 81.48%.
Let us also examine the case of a moderate optimist (the subjective chances of occurrence are equal to 0.10, 0.30, 0.35, 0.15, 0.05 and 0.05, respectively). Then, the optimal solution is: x1 = 18.47%, x2 = 35%, x3 = 0%, x4 = 31.12% and x5 = 15.41%, and the weighted sum of deviations equals 15,861.74. The scenario functions are equal to f1(x) = 86,826, f2(x) = 180,000, f3(x) = 160,000, f4(x) = 127,276.3, f5(x) = 67,530 and f6(x) = 79,754.2. Thus, for scenarios S2 and S3, the investor's revenue would be exactly equal to the desired levels (y2 = z2 = y3 = z3 = 0), and for scenarios S1, S4, S5 and S6 (z1, z4, z5, z6 > 0) this revenue would be lower than the desired levels.
In the last situation, constraint (24) also has a decisive influence on the final solution. If we remove this condition, the shares are equal to x1 = 0%, x2 = 51.22%, x3 = 0%, x4 = 35.17% and x5 = 13.61%.
Hence, as it can be observed, TDR can be applied by diverse decision makers.
Each problem has been solved by means of SAS/OR (Statistical Analysis System for Operations Research), but the construction of the problem (thanks to the elimination of the absolute values) is so simple that software such as Excel's Solver can also be used.
The fictitious data given in Table 3 may look unrealistic, but the values were in fact chosen rather arbitrarily. When preparing the example, the only aim was to create scenarios with a high average of payoffs (1970 for S3, 1850 for S2), a moderate average (1550 for S4) and a low average (634 for S5, 738 for S6), since such a situation may justify the use of varied subjective chances of occurrence and different desired levels.
In Section 2, we stressed that the levels of the subjective probabilities may affect the final solution. Let us assume that the moderate pessimist, instead of the values 0.15, 0.10, 0.05, 0.10, 0.40 and 0.20, declares the following probabilities: 0.20, 0.08, 0.02, 0.10, 0.35 and 0.25. They are very similar, because they still assign the highest chance of occurrence (0.35) to the scenario with the worst average of payoffs (S5), the next chance (0.25) to the scenario with a higher average of outcomes (S6), etc. If we keep constraint (24), the solution will not change, i.e., the shares will still be equal to x1 = 0.53%, x2 = 35%, x3 = 16.10%, x4 = 35% and x5 = 13.36% (of course, the objective function value will change from 23,554.28 to 26,204.28, since it depends on the probabilities, but this modification does not affect the investor's strategy). However, if we analyze the problem with the new probability values and without the constraint concerning the maximum possible share (24), the solution will change significantly from x1 = 0%, x2 = 54.39%, x3 = 22.97%, x4 = 0% and x5 = 22.64% to x1 = 0%, x2 = 39.91%, x3 = 37.74%, x4 = 0% and x5 = 22.35%. This short analysis shows that the DM should declare the subjective chances of occurrence very carefully, especially when the decision variables are not bounded, because even if the probabilities change only in level (not in order), the final solution may change.
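The sensitivity analysis described above can be scripted in the same illustrative setting (again assuming the hypothetical solve_tdr helper and the payoffs/targets arrays defined earlier). Per the results reported above, the expectation is that the capped portfolio stays unchanged while the uncapped one shifts:

```python
# Perturbed subjective probabilities: same ranking of the scenarios
# (S5 most likely, then S6, then S1, ...), only the levels change.
probs2 = [0.20, 0.08, 0.02, 0.10, 0.35, 0.25]

# With the share cap (24) kept, the portfolio structure should stay the same
# (only the objective value changes).
shares_capped, _ = solve_tdr(payoffs, probs2, targets,
                             A_extra=np.ones((1, 5)), b_extra=[100.0],
                             x_bounds=[(0, 35)] * 5)

# With the cap removed (only (25) remains), the structure shifts noticeably.
shares_free, _ = solve_tdr(payoffs, probs2, targets,
                           A_extra=np.ones((1, 5)), b_extra=[100.0],
                           x_bounds=[(0, None)] * 5)
```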

4. Discussion and Conclusions

The aim of this paper was to extend the applications of goal programming, originally designed for multi-criteria optimization under certainty. It turns out that, thanks to some analogies between M-DMC and scenario-based one-criterion decision making under uncertainty, the ideas of GP may also be applied to the second area, but, of course, the interpretation of the final results is different. In the first case, deviations represent the distances between the desired levels of particular criteria and the real performance of these objectives, while in the second case the deviations show the distances between the desired levels concerning only one criterion and the expected performance of this objective [10]. It is worth emphasizing that TDR (the Target Decision Rule for mixed strategies) is designed for one-shot decisions, i.e., for decisions chosen and executed only once, since after the implementation of the selected strategy, the decision maker has new experiences on the basis of which they can update their attitude towards risk.
The novel approach has three essential advantages:
  • It does not require the normalization of initial values, which means that TDR is less time-consuming than the original goal programming designed for M-DMC.
  • It can be used by DMs representing different attitudes towards risk, since one of the steps of the algorithm allows defining the subjective chance of occurrence of subsequent scenarios.
  • It can be applied not only to maximized and minimized criteria, but also to neutral criteria that occur in numerous domains.
TDR for pure strategies, described in [10], has the same benefits. The relationship between both procedures is very strong, since if an additional constraint with binary decision variables were introduced to the optimization model used in TDR for mixed strategies, the final solution would always represent a pure strategy.
Now, let us answer the question: why is the use of the novel approach (TDR for mixed strategies) more advantageous than existing decision rules? Popular classical mixed decision rules developed within game theory are the Wald optimization model, the Bayes optimization model, the max-max optimization model and the Hurwicz optimization model. The Wald rule is only applicable to an extreme pessimist. The Bayes rule refers to repetitive executions, since the average of payoffs is taken into consideration; thus, it does not fit one-shot decisions. Furthermore, the attitude towards risk cannot be taken into account within this procedure. The max-max approach is merely useful for extreme optimists. The only classical method that could replace TDR for mixed strategies is the Hurwicz optimization model, since this technique enables applying diverse optimism coefficients. Nevertheless, due to the use of the weighted average of payoffs in the objective function, there is no possibility to compare the scenario function values with the desired levels. Additionally, the Hurwicz indices are computed on the basis of the extreme payoffs only. They do not take intermediate values into account, which may lead to illogical recommendations, especially in the case of asymmetric payoffs [21]. Originally, classical mixed decision rules were not designed for neutral criteria, but a way to extend their applications could be the use of utility functions, which allow for transforming initial payoffs into results representing the subjective value from the point of view of a given decision maker. In connection with all the observations described above, we can conclude that the strengths of TDR for mixed strategies (compared with existing procedures) are: (1) the possibility to control all the payoffs (not only the selected ones) connected with particular alternatives; (2) the opportunity to generate different solutions (not one solution) for a given payoff matrix, depending on the DM's predictions; and (3) the possibility to include any type of criterion. The suggested approach fills the research gap identified in the paper.
It is worth noting that, instead of referring to goal programming ideas in the case of a neutral criterion considered in 1-DMU, the optimization model could contain two constraints with an upper and a lower bound for each scenario, but such a way of including neutral objectives would often lead to the formulation of models with contradictory conditions and empty sets of feasible solutions. That is why the use of the goal programming ideas in 1-DMU is so beneficial—in this case, even if the DM declares desired levels difficult to obtain, the model has a solution because the aim of the model is to minimize the distance (between the desired levels and the real ones), not to reach a strictly defined result for each scenario.
Note that the essence of TDR (as with other classical decision rules) is to take the decision maker’s preferences (needs, expectations) into account the best way possible. Parameters used in the optimization model are supposed to reflect their attitude towards risk. Thus, within uncertain decision rules, the emphasis is not put on the final real effect, but on the way the model considers the DM’s state of mind and soul. In connection with this fact, the comparative analysis between the model solution and the actual result is not conducted in this article. Additionally, it is worth underlining that the payoff matrix has a significant impact on the solution generated by the decision rule. Hence, if the expert estimations used in the payoff matrix are entirely wrong, there is little chance of finding an effective strategy, even if the DM tries to apply the approach best suited to their preferences.
When analyzing the results obtained in the previous section, we can notice that TDR for mixed strategies also has a limitation. Though the weights in the synthetic objective function indicate the subjective chance of occurrence, the correlation between this factor and the absolute value of the deviation for each scenario is not always negative and strong. This limitation also occurs in the original version of goal programming for M-DMC: the correlation between criteria weights and the absolute values of the deviations can even be positive when solving a given problem. This means that in the case of GP, when two criteria have the same weight, their deviations from the desired levels may be different in the optimal solution. Analogously, in the case of TDR for mixed strategies, if the DM assigns an identical subjective chance of occurrence to two scenarios, their deviations from the desired levels may also be different. This defect has not been revealed in the literature yet, but the observed phenomenon can be treated as a reason to improve the original version of goal programming for M-DMC as well as its new extension, i.e., TDR for mixed strategies. Currently, the lack of a strong negative correlation occurs even when constraints concerning the strategy structure are removed. As a matter of fact, the only situation in which the deviation is indeed always equal to zero for the scenario with the highest subjective probability (provided that there is only one such scenario) occurs when the DM is an extreme optimist or an extreme pessimist (i.e., when only one scenario has a positive subjective chance of occurrence). The same relationship is visible within M-DMC: when there is only one significant criterion (i.e., with a positive weight) and the remaining criteria obtain zero weights, the deviation concerning the aforementioned objective is always equal to zero. Of course, the aforementioned zero deviations in both optimization issues are possible only if the desired levels are properly defined by the DM (they are neither too high nor too low).
Another issue that could be explored in the future is related to the nature of the target. In this article, it was assumed that the target was given as a point, but in real situations this parameter is sometimes given as an interval (e.g., the temperature). Therefore, the model presented in the paper could be extended in the future.
Note that other possible future research directions are not limited to the issues already described above. The next step could be connected with the creation of a scenario-based hybrid referring to TDR and designed for uncertain multi-criteria problems (M-DMU). Such problems [22,23,24] occur more frequently in real economic decision situations than deterministic multi-criteria problems (M-DMC) or indeterministic one-criterion problems (1-DMU). M-DMU procedures already exist, but they focus mainly on pure strategy searching. The mixed strategy searching process on the basis of multiple neutral criteria certainly needs further investigation.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All the data used in the paper are fictitious and were prepared by the author. Data are given in Table 3. Computations have been made in SAS/OR: https://v4e049.vfe.sas.com/SASStudioV/ (accessed on 6 September 2021).

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Charnes, A.; Cooper, W.W.; Ferguson, R.O. Optimal estimation of executive compensation by linear programming. Manag. Sci. 1955, 1, 138–151.
  2. Ghaffar, A.; Razzaq, A.; Hasan, M.; Ashraf, Z.; Khan, M.F. Fuzzy goal programming with an imprecise intuitionistic fuzzy preference relations. Symmetry 2020, 12, 1548.
  3. Khan, M.F.; Hasan, M.; Quddoos, A.; Fügenschuh, A.; Hasan, S.S. Goal programming models with linear and exponential fuzzy preference relations. Symmetry 2020, 12, 934.
  4. Giokas, D. The use of goal programming and data envelopment analysis for estimating efficient marginal costs of outputs. J. Oper. Res. Soc. 1997, 48, 319–323.
  5. Lin, H.; Nagalingam, S.; Lin, G. An interactive meta-goal programming-based decision analysis methodology to support collaborative manufacturing. Robot. Comput. Integr. Manuf. 2009, 25, 135–154.
  6. Ding, T.; Liang, L.; Yang, M.; Wu, H. Multiple Attribute Decision Making based on cross-evaluation with uncertain decision parameters. Math. Probl. Eng. 2016, 2016, 4313247.
  7. Tzeng, G.-H.; Huang, J.J. Multiple Attribute Decision Making, Methods and Applications. In Lecture Notes in Economics and Mathematical Systems 186; Springer: New York, NY, USA, 1981.
  8. Singh, A.; Gupta, A.; Mehra, A. Matrix games with 2-tuple linguistic information. Ann. Oper. Res. 2020, 287, 895–910.
  9. Gaspars-Wieloch, H. On some analogies between one-criterion decision making under uncertainty and multi-criteria decision making under certainty. Econ. Bus. Rev. 2021, 21, 17–36.
  10. Gaspars-Wieloch, H. A new application for the Goal Programming–the Target Decision Rule for Uncertain Problems. J. Risk Financ. Manag. 2020, 13, 280.
  11. Gaspars-Wieloch, H. From the interactive programming to a new decision rule for uncertain one-criterion problems. In Proceedings of the 16th International Symposium on Operational Research, Bled, Slovenia, 22–24 September 2021; pp. 669–674.
  12. Liuzzi, G.; Locatelli, M.; Piccialli, V.; Rass, S. Computing mixed strategies equilibria in presence of switching costs by the solution of nonconvex QP problems. Comput. Optim. Appl. 2021, 79, 561–599.
  13. Latoszek, M.; Ślepaczuk, R. Does the inclusion of exposure of volatility into diversified portfolio improve the investment results? Portfolio construction from the perspective of a Polish investor. Econ. Bus. Rev. 2020, 20, 46–81.
  14. Zhi, B.; Wang, X.; Xu, F. Portfolio optimization for inventory financing: Copula-based approaches. Comput. Oper. Res. 2021, 136, 105481.
  15. Gür, S.; Tamer, E. Scheduling and planning in service systems with goal programming: Literature review. Mathematics 2018, 6, 265.
  16. Durbach, I.N. Scenario planning in the analytic hierarchy process. Futures Foresight Sci. 2019, 2, e16.
  17. Durbach, I.N.; Stewart, T.J. Modeling uncertainty in multi-criteria decision analysis. Eur. J. Oper. Res. 2012, 223, 1–14.
  18. Waters, D. Supply Chain Risk Management. In Vulnerability and Resilience in Logistics, 2nd ed.; Kogan Page: London, UK, 2011.
  19. Gaspars-Wieloch, H. Critical analysis of classical scenario-based decision rules for pure strategy searching. Organ. Manag. Ser. 2020, 149, 155–165.
  20. Gaspars-Wieloch, H. On a decision rule supported by a forecasting stage based on the decision maker’s coefficient of optimism. Cent. Eur. J. Oper. Res. 2015, 23, 579–594.
  21. Gaspars-Wieloch, H. Modifications of the Hurwicz’s decision rule. Cent. Eur. J. Oper. Res. 2014, 22, 779–794.
  22. Helber, S.; de Kok, T.; Kuhn, H.; Manitz, M.; Matta, A.; Stolletz, R. Quantitative approaches in production management. OR Spectr. 2019, 41, 867–870.
  23. Kloos, K.; Pibernik, R.; Schulte, B. Allocation planning in sales hierarchies with stochastic demand service-level targets. OR Spectr. 2019, 41, 981–1024.
  24. Zhang, S.; Tang, F.; Li, X.; Liu, J.; Zhang, B. A hybrid multi-objective approach for real-time flexible production scheduling and rescheduling under dynamic environment in Industry 4.0 context. Comput. Oper. Res. 2021, 132, 105267.
Table 1. Payoff matrix for M-DMC.

                Alternatives 1
Criteria    A1      …    Aj      …    An
C1          b1,1    …    b1,j    …    b1,n
…           …            …            …
Ck          bk,1    …    bk,j    …    bk,n
…           …            …            …
Cp          bp,1    …    bp,j    …    bp,n

1 n—number of alternatives, p—number of criteria, bk,j—performance of criterion Ck if option Aj is selected. Source: [9].
Table 2. Payoff matrix for 1-DMU with unknown probabilities.

                Alternatives 2
Scenarios   A1      …    Aj      …    An
S1          a1,1    …    a1,j    …    a1,n
…           …            …            …
Si          ai,1    …    ai,j    …    ai,n
…           …            …            …
Sm          am,1    …    am,j    …    am,n

2 n—number of alternatives, m—number of scenarios, ai,j—payoff obtained if option Aj is selected and scenario Si occurs. Source: [9].
Table 3. Payoff matrix (example).

                 Stocks
Scenarios   A1      A2      A3      A4      A5
S1           400     500     650    1000    2000
S2          1100    1200    1250    1900    3800
S3          1200     900    3500    2600    1650
S4           900     700    2700    2100    1350
S5           500    1000     770     600     300
S6           600    1150     850     740     350

Source: prepared by the author.