Article

A Comparison of Archiving Strategies for Characterization of Nearly Optimal Solutions under Multi-Objective Optimization

Instituto Universitario de Automática e Informática Industrial, Universitat Politècnica de València, 46022 Valencia, Spain
*
Author to whom correspondence should be addressed.
Submission received: 8 March 2021 / Revised: 23 April 2021 / Accepted: 26 April 2021 / Published: 28 April 2021
(This article belongs to the Special Issue Evolutionary Algorithms in Engineering Design Optimization)

Abstract

In a multi-objective optimization problem, in addition to the optimal solutions, multimodal and/or nearly optimal alternatives can also provide useful information for the decision maker. However, obtaining all nearly optimal solutions entails an excessive number of alternatives. Therefore, to consider the nearly optimal solutions, it is convenient to obtain a reduced set that focuses on the potentially useful alternatives. These solutions are the alternatives that are close to the optimal solutions in the objective space, but which differ significantly in the decision space. To characterize this set, it is essential to analyze the decision and objective spaces simultaneously. One of the crucial points in an evolutionary multi-objective optimization algorithm is the archiving strategy, which is in charge of keeping the solution set, called the archive, updated during the optimization process. The motivation of this work is to analyze the three archiving strategies proposed in the literature (ArchiveUpdateP_{Q,ε}D_{xy}, Archive_nevMOGA, and targetSelect) that aim to characterize the potentially useful solutions. The archivers are evaluated on two benchmarks and on a real engineering example. The comparison clearly shows the main differences between the three archivers. This analysis is useful for the design of evolutionary algorithms that consider nearly optimal solutions.

1. Introduction

Many real-world applications pose several objectives (usually in conflict) to optimize [1,2,3]. This leads to the formulation of a multi-objective optimization problem (MOOP) [4,5,6,7]. In an a posteriori multi-objective approach [8], after the MOOP definition and the optimization stage, a set of Pareto optimal solutions [9] is generated. The decision maker (DM) can then analyze, at the decision-making stage, the trade-off of the optimal alternatives for each design objective. This enables a better understanding of the problem and a better-informed final decision.
For the DM, it is useful to have a diverse set of solutions. Traditionally, diversity is sought in the objective space. However, obtaining a diverse set in the decision space also offers advantages [10]: (1) it enables the DM to obtain different (even significantly different) alternatives before the final decision; (2) it helps speed up the search, improving exploration and preventing premature convergence towards a non-global minimum. In addition, the best solutions are sometimes too sensitive to disturbances, or are not feasible in practice [11,12,13,14]. In this scenario, the multimodal solutions or the nearly optimal solution set (also called approximate or ε-efficient solutions) plays a key role in enhancing the diversity of solutions. Multimodal solutions are distinct solutions that, being optimal, achieve the same performance. Nearly optimal solutions are those that have a performance similar to that of the optimal solutions. Generalizing, multimodal solutions can be considered to be included in the nearly optimal solutions. Nearly optimal solutions have been studied by many authors in the literature [15,16,17,18,19]; they perform similarly to the optimal solutions and can sometimes be more adequate according to the DM preferences (for instance, more robust [14]). An additional challenge then arises: to obtain a set of solutions that, in addition to good performance in the design objectives, offers the greatest possible diversity.
However, considering all the nearly optimal solutions requires obtaining and analyzing a great number of alternatives and this causes two problems:
1. It slows down the optimization process. In evolutionary algorithms, an archive (a set that stores solutions during the execution) is required. The computational cost of the optimization process largely depends on the archive size. This is because, to check the inclusion of a new candidate solution in the archive, it is necessary to check the dominance (or ε-dominance) against each solution in the current archive. Many new candidate solutions are analyzed during an optimization process, so a large archive results in a significantly higher computational cost.
2. The decision stage is made more difficult. The designer must choose the final solution from a much larger number of alternatives.
Therefore, it is necessary to reduce the set of solutions provided to the designer. In the literature, there are different algorithms aimed at finding nearly optimal solutions in multi-objective optimization problems. The multimodal multi-objective evolutionary algorithms (MMEAs [20]) are intended for multimodal optimization problems. Some MMEAs take nearly optimal solutions into account during the optimization process, but most of them do not provide these solutions to the DM. Furthermore, evolutionary algorithms with an unbounded external archive [21] can also be interesting for analyzing these solutions, since the unbounded external archive can be examined to obtain the relevant nearly optimal solutions.
One of the crucial points in an evolutionary multi-objective optimization algorithm is the archiving strategy (or archiver), which selects and updates a solution set, called the archive, during the evolutionary process. Some archivers have been studied previously [19,22,23,24,25,26]. In this paper, we address the problem of discretizing the set of potentially useful alternatives. For this purpose, we compare different archiving strategies that aim to obtain the set of potentially useful nearly optimal solutions. An archiving strategy must take the decision space into account to ensure that the potentially useful nearly optimal solutions are not discarded.
For the comparison of the results on this real problem, we have chosen to embed each archiver in a basic evolutionary algorithm. First, this allows us to observe the impact of each archiver when incorporated into an evolutionary mechanism. Second, the computational cost associated with the objective functions of the real problem does not allow simulations over large numbers of points, so it is not feasible to test each archiver with a random or exhaustive search as is done with the benchmarks.
These archivers have not been compared in the literature, so this work is useful for future designs of evolutionary algorithms that consider nearly optimal solutions, or for modifying the archivers of existing evolutionary algorithms to consider such solutions. Therefore, the purpose of the paper is: (1) to understand the properties of these archivers; (2) to provide an analysis for choosing one of them; and (3) to give ideas for designing new archivers. The design of such algorithms is currently an open issue in this research area [18,20,27].
This work is structured as follows. In Section 2, a brief state of the art on potentially useful nearly optimal solutions is introduced. In Section 3, some basic multi-objective background is presented. In Section 4, different archiving strategies to characterize the optimal and nearly optimal set are described. In Section 5, the MOOPs and the archiver comparison procedure are presented. The results obtained with the archivers are shown in Section 6. Finally, the conclusions are given in Section 7.

2. State of the Art

As discussed in the previous section, obtaining all nearly optimal solutions leads to problems. Considering only the most relevant solutions largely avoids them. Not all nearly optimal solutions are equally useful to the DM; therefore, if we manage to discard those that are less useful, we reduce both problems mentioned above. Let us see a graphic example to illustrate what we consider potentially useful solutions. Suppose we have a MOOP with two design objectives and two decision variables (see Figure 1). Three solutions x_1, x_2 and x_3 are selected. x_1 is an optimal solution (member of the Pareto set), and it slightly dominates the nearly optimal solutions x_2 and x_3. x_1 and x_2 are very similar alternatives in their parameters (both belong to neighborhood 1, the same area in the parameter space), while x_3 is significantly different (it belongs to neighborhood 2). In this scenario, x_2 does not provide new relevant information to the DM. This solution is similar to x_1 but with worse performance in the design objectives. Predictably, both will have similar characteristics; therefore, the DM will choose x_1, since it obtains better performance in the design objectives. However, x_3 does provide useful new information to the DM, because it has a performance similar to the optimal ones and lies in a different neighborhood. The solutions in neighborhood 1 could be, for example, not very robust or not feasible in practice. In this context, x_3 (and the solutions in neighborhood 2) could be potentially useful solutions due to their significantly different characteristics. It is possible, and often common, for the DM to analyze at the decision stage additional indicators/objectives not included in the optimization phase. Thus, the DM can assume a small loss of performance in the design objectives in exchange for an improvement in a new feature not contemplated in the optimization process. This analysis can decide the final choice one way or another. In short, including solutions of neighborhood 2 increases the diversity with useful solutions and enables the DM to make a better-informed final decision.
Therefore, the potentially useful nearly optimal solutions are those nearly optimal alternatives that differ significantly in the parameter space [28,29,30]. Thus, the new set must: (1) not neglect the diversity existing in the set of nearly optimal alternatives; and (2) contain as few solutions as possible. To achieve both aims, it is necessary to employ an evolutionary algorithm that characterizes the set of solutions by means of a discretization which takes into account both the decision space and the objective space simultaneously. A discretization that takes into account only the objective space can lead to the loss of nearly optimal alternatives that are significantly different in the decision space. This loss is a drawback because, as previously discussed, these alternatives are potentially useful. On the other hand, a discretization that takes into account only the decision space can lead to archives with a huge number of solutions [19] and cause the two problems previously mentioned.

3. Background

A multi-objective optimization problem can be defined as follows:
$$\min_{x \in Q} f(x)$$
where $x = [x_1, \ldots, x_k]$ is the decision vector in the domain $Q \subseteq \mathbb{R}^k$ and $f: Q \rightarrow \mathbb{R}^m$ is the vector of objective functions $f(x) = [f_1(x), \ldots, f_m(x)]$. A maximization problem can be converted into a minimization problem: each objective to be maximized is transformed as $\max f_i(x) = -\min(-f_i(x))$. The domain Q is defined by the set of constraints on x. For instance (but not limited to; any other constraints could be introduced in a general MOOP):
$$\underline{x}_i \leq x_i \leq \bar{x}_i, \quad i = 1, \ldots, k$$
where $\underline{x}_i$ and $\bar{x}_i$ are the lower and upper bounds of the components of x.
Consequently, solving the MOOP yields a Pareto set P_Q (see Definition 2). This set contains the solutions that are not dominated by any other solution (see Definition 1) in Q.
Definition 1 (Dominance [31]).
A decision vector $x^1$ is dominated by another decision vector $x^2$ if $f_i(x^2) \leq f_i(x^1)$ for all $i \in [1, \ldots, m]$ and $f_j(x^2) < f_j(x^1)$ for at least one j, $j \in [1, \ldots, m]$. This is denoted as $x^2 \prec x^1$.
Definition 2 (Pareto set P_Q).
The set of solutions in Q that are not dominated by any other solution in Q:
$$P_Q := \{ x \in Q \mid \nexists x' \in Q : x' \prec x \}$$
Definition 3 (Pareto front f(P_Q)).
Given the set of Pareto optimal solutions P_Q, the Pareto front is defined as:
$$f(P_Q) := \{ f(x) \mid x \in P_Q \}$$
In any MOOP, there is a set of solutions with objective values close to the Pareto front. These solutions receive several names in the literature: nearly optimal, approximate, or ε-efficient solutions. To formalize the treatment of the nearly optimal solutions, the following definitions are used:
Definition 4 (ε-dominance [32]).
Define $\epsilon = [\epsilon_1, \ldots, \epsilon_m]$ as the maximum acceptable degradation. A decision vector $x^1$ is ε-dominated by another decision vector $x^2$ if $f_i(x^2) + \epsilon_i \leq f_i(x^1)$ for all $i \in [1, \ldots, m]$ and $f_j(x^2) + \epsilon_j < f_j(x^1)$ for at least one j, $j \in [1, \ldots, m]$. This is denoted by $x^2 \prec_\epsilon x^1$.
Definition 5 (Set of nearly optimal solutions P_{Q,ε} [30]).
The set of solutions in Q which are not ε-dominated by any other solution in Q:
$$P_{Q,\epsilon} := \{ x \in Q \mid \nexists x' \in Q : x' \prec_\epsilon x \}$$
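To make Definitions 1 and 4 concrete, the following minimal Python sketch (not part of the original formulation; NumPy arrays and minimization are assumed) checks dominance and ε-dominance between two objective vectors:

```python
import numpy as np

def dominates(f_a, f_b):
    """True if objective vector f_a dominates f_b (Definition 1, minimization)."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def eps_dominates(f_a, f_b, eps):
    """True if f_a epsilon-dominates f_b (Definition 4): f_a + eps <= f_b
    componentwise, with strict improvement in at least one objective."""
    f_a, f_b, eps = np.asarray(f_a), np.asarray(f_b), np.asarray(eps)
    return bool(np.all(f_a + eps <= f_b) and np.any(f_a + eps < f_b))

# Example: with eps = [0.1, 0.1], a solution must be clearly better to eps-dominate
print(dominates([1.0, 2.0], [1.5, 2.0]))                     # True
print(eps_dominates([1.0, 2.0], [1.05, 2.05], [0.1, 0.1]))   # False (within tolerance)
```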
The sets P_Q and P_{Q,ε} defined above usually contain a great, or even infinite, number of solutions. Optimization algorithms try to characterize these sets by means of discrete approximations P*_Q and P*_{Q,ε}. In general, if such an approximation has a limited number of solutions, the computational cost falls. However, the number of solutions must be sufficient to obtain a good characterization of these sets.
To compare the archiving strategies, it is useful to use a metric. Different metrics are used in the literature to measure the convergence of the outcome set. An example is the Hausdorff distance d_H [33,34,35] (see Equation (3)).
$$d_H(A, B) := \max(\mathrm{dist}(A, B), \mathrm{dist}(B, A)), \qquad \mathrm{dist}(A, B) := \sup_{u \in A} \mathrm{dist}(u, B), \qquad \mathrm{dist}(u, B) := \inf_{v \in B} \lVert u - v \rVert$$
This metric measures the distance between two sets. Therefore, d_H can be used to measure the convergence of the outcome set (or final archive) f(A) to the target set f(H) of a given MOOP (or archiving strategy). However, d_H only penalizes the largest outlier of the candidate set. Thus, a high value of d_H(A, H) can indicate both that A is a bad approximation of H and that A is a good approximation that contains at least one outlier. The d_H distance is used by the archiver ArchiveUpdateP_{Q,ε}D_{xy}.
To avoid this problem, a new indicator appeared in the literature: the averaged Hausdorff distance Δ_p (the standard Hausdorff distance is recovered from Δ_p by taking the limit $\lim_{p \to \infty} \Delta_p = d_H$). This metric (with $1 \leq p < \infty$) assigns a lower value to sets uniformly distributed throughout their domain. Δ_p is based on the known generational distance metric (GD [36], which represents how "far" f(A) is from f(H)) and the inverted generational distance (IGD [37], which represents how "far" f(H) is from f(A)). However, these metrics are slightly modified in Δ_p (GD_p and IGD_p, see Equation (5)). This modification means that larger archive sizes and finer discretizations of the target set do not automatically lead to better approximations under Δ_p [34]. Δ_p measures the diversity and convergence in the decision and objective spaces. In this work, we use Δ_p with p = 2 (as in [18,38]) so that the influence of outliers is low.
$$\Delta_p(X, Y) := \max(GD_p(X, Y), IGD_p(X, Y))$$
where
$$GD_p(X, Y) = \left( \frac{1}{n_x} \sum_{i=1}^{n_x} \mathrm{dist}(x_i, Y)^p \right)^{1/p}, \qquad IGD_p(X, Y) = \left( \frac{1}{n_y} \sum_{i=1}^{n_y} \mathrm{dist}(y_i, X)^p \right)^{1/p}$$
and
$$\mathrm{dist}(u, A) := \inf_{v \in A} \lVert u - v \rVert$$
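For reference, the following sketch computes GD_p, IGD_p and Δ_p as defined above, assuming both sets are given as NumPy arrays whose rows are points and using Euclidean distances; it is an illustrative implementation, not the one used in the cited works:

```python
import numpy as np

def dist_point_to_set(u, A):
    """dist(u, A) = inf over v in A of ||u - v|| (Euclidean norm)."""
    return np.min(np.linalg.norm(np.asarray(A) - np.asarray(u), axis=1))

def gd_p(X, Y, p=2):
    """Generational distance GD_p: averaged p-norm of distances from X to Y."""
    d = np.array([dist_point_to_set(x, Y) for x in X])
    return float(np.mean(d ** p) ** (1.0 / p))

def delta_p(X, Y, p=2):
    """Averaged Hausdorff distance: max(GD_p(X, Y), IGD_p(X, Y))."""
    return max(gd_p(X, Y, p), gd_p(Y, X, p))   # IGD_p(X, Y) = GD_p(Y, X)

# Example: distance between an archive A and a target set H (rows = points)
A = np.array([[0.0, 0.0], [1.0, 1.0]])
H = np.array([[0.0, 0.1], [1.0, 0.9], [0.5, 0.5]])
print(delta_p(A, H, p=2))
```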
To use Δ_p it is necessary to define the target set H with which the final archive A obtained by the archivers is compared. The target set is defined in the decision space (H) and has its representation in the objective space (f(H)). The definition of H is possible for the benchmarks used in this work (where the global and local optima are known), but this definition is not trivial. The Archive_nevMOGA and ArchiveUpdateP_{Q,ε}D_{xy} archivers discard solutions that are similar in both spaces at the same time. However, Archive_nevMOGA, unlike ArchiveUpdateP_{Q,ε}D_{xy}, discards solutions dominated within their neighborhood. targetSelect looks for diversity in both spaces simultaneously. On the one hand, defining H as the optimal and nearly optimal solutions that are not similar in both spaces at the same time would give the archiver ArchiveUpdateP_{Q,ε}D_{xy} an advantage. If H is defined as the set of optimal and nearly optimal solutions non-dominated in their neighborhood, Archive_nevMOGA would be favored. However, the archivers have a common goal: to obtain the solutions close to the optimal ones in the objective space but significantly different in the decision space (the potentially useful solutions). Consequently, to be as "fair" as possible, we must define H as the set that represents this common goal. Thus, the potentially useful solutions can be represented by the local Pareto set (see Definition 6).
Definition 6 (Local Pareto set [5,39]).
The set of solutions in Q that are not dominated by any neighboring solution in Q (where n is a small positive number):
$$H := \{ x \in Q \mid \nexists y \in Q : \lVert x - y \rVert \leq n \ \mathrm{and}\ y \prec x \}$$
Figure 2 shows an example of a MOOP with two design objectives and two decision variables. Sets SET1 and SET2 form the global Pareto set. Both sets, together with SET3 and SET4, form the local Pareto set (since the global Pareto set is also a local Pareto set). No solution of a local Pareto set is dominated by a neighboring solution. Furthermore, all the solutions neighboring the local Pareto set are dominated by a neighboring solution, and therefore they are not part of this set. This can be verified by the colored areas around the sets SET1, SET2, SET3 and SET4. For example, solutions in the gray area, which are neighboring solutions of SET3, obtain worse objective values than SET3. For this reason, the solutions of the gray area are dominated by neighboring solutions and are not part of the local Pareto set. Sets SET3 and SET4 provide the DM with potentially useful alternatives (significantly different from SET1 and SET2), enabling a more informed final decision. For this work, the ε-dominated solutions (solutions that are not in the set P_{Q,ε}) are not considered local Pareto solutions (members of H), because their degradation in performance is significant for the DM.

4. Description of the Compared Archivers

As already discussed above, nearly optimal solutions can be very useful. However, it is necessary to discretize this set in order to find a reduced set of solutions and avoid the problems associated with an excessive number of alternatives. Furthermore, the potentially useful nearly optimal solutions, i.e., the nearly optimal alternatives that are significantly different (in the decision space) from the optimal solutions, must not be neglected. To achieve both purposes, it is essential to discretize the set of solutions taking into account the decision and objective spaces at the same time. In this section, three archivers that discretize the set P_{Q,ε} in both spaces are described.
There are MMEAs and other algorithms that consider nearly optimal solutions and provide them to the DM: P_{Q,ε}-NSGA-II [28], P_{Q,ε}-MOEA [30], nevMOGA [29], NεSGA [18], DIOP [10], 4D-Miner [40,41], and MNCA [42].
P_{Q,ε}-NSGA-II [28] was one of the first algorithms aimed at finding approximate (nearly optimal) solutions. P_{Q,ε}-NSGA-II uses the same classification strategy as the algorithm on which it is based, NSGA-II [43]; therefore, the greatest pressure of the population is directed toward the Pareto set. Thus, neighborhoods containing only nearly optimal solutions may not be adequately explored [30]. To avoid this problem, the algorithm P_{Q,ε}-MOEA [30] was created. This algorithm was designed to avoid the bias toward the Pareto set. Nevertheless, P_{Q,ε}-MOEA does not take into account the location of solutions in the decision space, and thus does not guarantee that the potentially useful alternatives will not be discarded. To overcome this problem, the nevMOGA [29] algorithm was designed; it seeks to ensure that the potentially useful alternatives are not discarded. DIOP [10] is a set-based algorithm that can maintain dominated solutions. This algorithm simultaneously evolves two populations A and T. Population A approaches the Pareto front and is not provided to the DM, while T is the target population that seeks to maximize diversity in the decision and objective spaces. MNCA [42] is an evolutionary algorithm that simultaneously evolves multiple subpopulations; each subpopulation converges to a different set of non-dominated solutions. Finally, 4D-Miner [40,41] is an algorithm especially designed for functional brain imaging problems.
One of the crucial points in an evolutionary multi-objective optimization algorithm is the archiving strategy. The P_{Q,ε}-NSGA-II and P_{Q,ε}-MOEA algorithms share the ArchiveUpdateP_{Q,ε} archiver. This archiver seeks to characterize all nearly optimal solutions without taking into account the decision space. In [19], different archiving strategies are compared: ArchiveUpdateP_{Q,ε}, ArchiveUpdateP_{Q,ε}D_x, ArchiveUpdateP_{Q,ε}D_y and ArchiveUpdateP_{Q,ε}D_{xy}. On the one hand, ArchiveUpdateP_{Q,ε} obtains an excessive number of solutions. On the other hand, ArchiveUpdateP_{Q,ε}D_x and ArchiveUpdateP_{Q,ε}D_y do not discretize the decision and objective spaces simultaneously. Therefore, these archivers do not achieve the two purposes discussed above. The mentioned work concludes that the archiver ArchiveUpdateP_{Q,ε}D_{xy} is the most practical for use within stochastic search algorithms. Furthermore, it is the only one of those archivers that discretizes the decision and objective spaces simultaneously [27], a factor that we consider necessary to obtain potentially useful solutions. The archiver ArchiveUpdateP_{Q,ε}D_{xy} has been employed in the recent NεSGA algorithm to maintain a well-distributed representation in the decision and objective spaces. For this reason, the present work compares the archiver ArchiveUpdateP_{Q,ε}D_{xy} and not ArchiveUpdateP_{Q,ε}, ArchiveUpdateP_{Q,ε}D_x or ArchiveUpdateP_{Q,ε}D_y.
The second archiver included in this comparison is the archiver of the nevMOGA algorithm (Archive_nevMOGA). This archiver characterizes the set of potentially useful solutions by discretizing both spaces simultaneously. Finally, the archiver of the DIOP algorithm (targetSelect) is also compared in this work. targetSelect seeks the population that maximizes an indicator measuring diversity in the decision and objective spaces simultaneously. Therefore, a metric-based archiver (targetSelect) is compared with the distance-based archivers ArchiveUpdateP_{Q,ε}D_{xy} and Archive_nevMOGA. The three archivers compared in this work seek to characterize the potentially useful solutions. The archiver of the MNCA algorithm has not been included in the comparison because it looks only for non-dominated solutions. The archiver of the 4D-Miner algorithm has also not been included because 4D-Miner is a very specific algorithm for functional brain imaging problems.

4.1. ArchiveUpdateP_{Q,ε}D_{xy}

ArchiveUpdateP_{Q,ε}D_{xy} is the archiving strategy proposed in [19] (see Algorithm 1). As already mentioned, potentially useful solutions are those that obtain similar performance (in the design objectives) but differ significantly in their parameters (in the decision space). This archiver aims to maintain these solutions. In addition to the parameter ε (maximum degradation acceptable to the DM, see Definition 4), this archiver uses the parameters Δ_x and Δ_y. Two solutions are considered similar if their distance in the decision space is less than Δ_x; therefore, Δ_x is the maximum distance, in the decision space, between two similar solutions. Two solutions obtain a similar performance if their distance in the objective space is less than Δ_y; therefore, Δ_y is the maximum distance between two solutions considered similar in the objective space. Both distances are measured using the Hausdorff distance [34] (d_H, see Equation (3)). The archive A stores the set of obtained alternatives. A new solution p from P (the new candidate solutions) is only incorporated into A if: (1) p is a nearly optimal solution; and (2) A does not contain any solution similar to p in the decision and objective spaces at the same time. If the new solution p is stored in the archive A, the new set of optimal and nearly optimal solutions (Â) belonging to A is computed. Then, a solution a ∈ A is removed if: (1) it is no longer a nearly optimal solution (p ≺_{(ε+Δ_y)} a); and (2) its distance to the set Â fulfills dist(a, Â) > 2Δ_x.
Algorithm 1 A := ArchiveUpdateP_{Q,ε}D_{xy}(P, A_0, ε, Δ_x, Δ_y)
Require: population P, archive A_0, ε ∈ R_+^m, Δ_x ∈ R_+, Δ_y ∈ R_+
Ensure: updated archive A
1:  A := A_0
2:  for all p ∈ P do
3:    if (∄ a_1 ∈ A : a_1 ≺_ε p) and (∄ a_2 ∈ A : d_H(f(a_2), f(p)) ≤ Δ_y and d_H(a_2, p) ≤ Δ_x) then
4:      A := A ∪ {p}
5:      Â := {a_1 ∈ A | ∄ a_2 ∈ A : a_2 ≺_{(ε+Δ_y)} a_1}
6:      for all a ∈ A \ Â do
7:        if p ≺_{(ε+Δ_y)} a and dist(a, Â) > 2Δ_x then
8:          A := A \ {a}
9:        end if
10:     end for
11:   end if
12: end for
The archiver ArchiveUpdateP_{Q,ε}D_{xy} goes through the whole set of candidate solutions p ∈ P and, in the worst case, compares each of them with all the solutions a ∈ A. Thus, the complexity of the archiver is O(|P||A|) [44]. Moreover, ArchiveUpdateP_{Q,ε}D_{xy} has a maximum number of solutions |A(D_{xy})| [19], which is given by:
$$|A(D_{xy})| \leq |A(D_{xy_x})| \cdot |A(D_{xy_y})|$$
where |A(D_{xy_x})| is the maximum number of neighborhoods that the decision space can contain (based on Δ_x) and |A(D_{xy_y})| is the maximum number of solutions that can exist in each neighborhood (based on Δ_y and ε), defined as:
$$|A(D_{xy_x})| \leq \prod_{i=1}^{k} \left( \frac{1}{\Delta_x} + 1 \right) \prod_{j=1}^{k} (\bar{x}_j - \underline{x}_j)$$
$$|A(D_{xy_y})| \leq \left( \frac{1}{\Delta_y} \right)^{m-1} \sum_{i=1}^{m} \left[ \left( \frac{\epsilon_i}{\Delta_y} + 3 \right) \prod_{\substack{j=1 \\ j \neq i}}^{m} (M_j - m_j + \Delta_y) \right]$$
where $\underline{x}_j$ and $\bar{x}_j$ are the bounds in the decision space ($P_{Q,\epsilon+2\Delta_y}$ is included in $[\underline{x}_1, \bar{x}_1] \times \dots \times [\underline{x}_k, \bar{x}_k]$), and $M_j$ and $m_j$ are the bounds in the objective space of the set to discretize, $f(P_{Q,\epsilon+2\Delta_y})$. It is also assumed that every $\epsilon_i$ is greater than $\Delta_y$.
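As an illustration of the update rules of Algorithm 1, a minimal Python sketch is given below. It reuses the eps_dominates helper from the sketch in Section 3, replaces d_H between single points by the Euclidean distance (for two single points both coincide), assumes scalar Δ_x and Δ_y, and follows the removal condition as written in line 7; it is a sketch of the strategy, not the authors' implementation.

```python
import numpy as np

def archive_update_pq_eps_dxy(P, A0, f, eps, dx, dy):
    """Sketch of Algorithm 1. P: candidate decision vectors; A0: decision vectors
    already in the archive; f: objective function; eps: maximum acceptable
    degradation (vector); dx, dy: scalar similarity radii."""
    eps = np.asarray(eps, dtype=float)
    A = [(np.asarray(a, float), np.asarray(f(a), float)) for a in A0]
    for p in P:
        p = np.asarray(p, float)
        fp = np.asarray(f(p), float)
        near_opt = not any(eps_dominates(fa, fp, eps) for _, fa in A)
        similar = any(np.linalg.norm(fa - fp) <= dy and np.linalg.norm(a - p) <= dx
                      for a, fa in A)
        if near_opt and not similar:
            A.append((p, fp))
            # A_hat: indices of members not (eps + dy)-dominated by any other member
            hat = [i for i, (_, fa) in enumerate(A)
                   if not any(eps_dominates(fb, fa, eps + dy) for _, fb in A)]
            hat_pts = [A[i][0] for i in hat]
            keep = []
            for i, (a, fa) in enumerate(A):
                far = min(np.linalg.norm(a - h) for h in hat_pts) > 2 * dx
                if i not in hat and eps_dominates(fp, fa, eps + dy) and far:
                    continue  # drop a: strongly dominated and far from A_hat (line 7)
                keep.append((a, fa))
            A = keep
    return [a for a, _ in A]
```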

4.2. Archive_nevMOGA

Archive_nevMOGA is the archiving strategy used by the nevMOGA evolutionary algorithm [29]. This archiver, just like ArchiveUpdateP_{Q,ε}D_{xy}, aims to keep solutions that obtain a performance similar to the optimal ones but are significantly different in the decision space. The archiver Archive_nevMOGA uses the same three parameters as ArchiveUpdateP_{Q,ε}D_{xy} (ε, Δ_x, Δ_y). However, some parameters are defined differently: (1) Δ_x is a vector that contains the maximum distance (in the decision space) between similar solutions for each dimension; thus, two individuals a and b are similar if $|a_i - b_i| \leq \Delta_{x_i}\ \forall i \in [1, \ldots, k]$. (2) Δ_y is also a vector that contains the maximum distance (in the objective space) between solutions with similar performance for each dimension; thus, two individuals a and b have a similar performance if $|f_i(a) - f_i(b)| \leq \Delta_{y_i}\ \forall i \in [1, \ldots, m]$.
The archiver Archive_nevMOGA adds a new candidate solution p to the archive A if the following conditions are met simultaneously: (1) p is a nearly optimal solution; (2) there is no solution in A similar to p (in the decision space) that dominates it; and (3) there is no solution in A similar to p in both spaces at the same time (if such a solution exists and p dominates it, it is replaced by p). If a solution p is incorporated into the archive A, the archiver removes from A: (1) the individuals similar to p (in the parameter space) that are dominated by p; and (2) the individuals ε-dominated by p.
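A condensed Python sketch of these inclusion and removal rules is shown below (see also Algorithm 3 in Section 6.1). It assumes per-dimension box comparisons, reuses the dominance helpers from Section 3, and is illustrative only:

```python
import numpy as np

def similar_x(a, b, dx):    # per-dimension closeness in the decision space
    return bool(np.all(np.abs(np.asarray(a) - np.asarray(b)) <= np.asarray(dx)))

def similar_y(fa, fb, dy):  # per-dimension closeness in the objective space
    return bool(np.all(np.abs(np.asarray(fa) - np.asarray(fb)) <= np.asarray(dy)))

def archive_nevmoga_insert(p, fp, A, eps, dx, dy):
    """A is a list of (x, f(x)) tuples; returns the updated list."""
    # (1) p must be nearly optimal, (2) no similar neighbour may dominate it,
    # (3) no member may be similar to p in both spaces at the same time
    if (not any(eps_dominates(fa, fp, eps) for _, fa in A)
            and not any(similar_x(a, p, dx) and dominates(fa, fp) for a, fa in A)
            and not any(similar_x(a, p, dx) and similar_y(fa, fp, dy) for a, fa in A)):
        # remove members eps-dominated by p, or dominated by p in its neighbourhood
        A = [(a, fa) for a, fa in A
             if not (eps_dominates(fp, fa, eps)
                     or (similar_x(a, p, dx) and dominates(fp, fa)))]
        A.append((p, fp))
    else:
        # replace a member similar to p in both spaces if p dominates it
        for i, (a, fa) in enumerate(A):
            if similar_x(a, p, dx) and similar_y(fa, fp, dy) and dominates(fp, fa):
                A[i] = (p, fp)
                break
    return A
```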
The complexity of the archiver Archive_nevMOGA is equivalent to the complexity of ArchiveUpdateP_{Q,ε}D_{xy} defined previously (O(|P||A|)). Moreover, Archive_nevMOGA has a maximum number of solutions |A(nMOGA)|, which is given by:
$$|A(nMOGA)| \leq |A(nMOGA_x)| \cdot |A(nMOGA_y)|$$
where |A(nMOGA_x)| is the maximum number of neighborhoods that the decision space can contain (based on Δ_x) and |A(nMOGA_y)| is the maximum number of solutions that can exist in each neighborhood (based on Δ_y and ε), defined as:
$$|A(nMOGA_x)| \leq \prod_{i=1}^{k} \left( \frac{1}{\Delta_{x_i}} + 1 \right) \prod_{j=1}^{k} (\bar{x}_j - \underline{x}_j)$$
$$|A(nMOGA_y)| \leq \frac{\prod_{i=1}^{m} n\_box_i}{n\_box_{max}}$$
where $n\_box_i = (M_i - m_i)/\Delta_{y_i}$, $n\_box_{max} = \max_i n\_box_i$, and $M_i$ and $m_i$ are the bounds in the objective space of the set to discretize, $f(P_{Q,\epsilon})$.
The bound on the archive size with respect to the decision space is equivalent for the compared archivers (A(D_{xy_x}) and A(nMOGA_x)). However, there is a difference between A(D_{xy_y}) and A(nMOGA_y) (objective space). The archiver Archive_nevMOGA, unlike ArchiveUpdateP_{Q,ε}D_{xy}, discards nearly optimal solutions dominated by a similar solution in the decision space. Thus, in the worst case, Archive_nevMOGA obtains the best (non-dominated) solutions in each neighborhood, whose maximum number depends on Δ_y. ArchiveUpdateP_{Q,ε}D_{xy}, in the worst case, obtains in each neighborhood, in addition to the best solutions, additional solutions that are dominated by neighboring solutions but are considered solutions with different performance (based on Δ_y). As a result, the archive of Archive_nevMOGA will have fewer solutions than the archive of ArchiveUpdateP_{Q,ε}D_{xy}. Therefore, we can deduce that the archiver Archive_nevMOGA has a lower computational cost than ArchiveUpdateP_{Q,ε}D_{xy}, because its archive contains fewer solutions (the candidate solutions are compared with a smaller number of archived solutions).

4.3. targetSelect

targetSelect is the archiving strategy used by the DIOP evolutionary algorithm [10]. This archiver seeks to obtain a diverse set of solutions (keeping solutions close to the Pareto set) in the decision and objective spaces. A = targetSelect(F, T, μ_t, ε) has as inputs: an approximation of the Pareto front F, the set of solutions to be analyzed T, the size of the target set μ_t, and ε. This archiver selects μ_t solutions from the set T. The goal is to find the population A, of size μ_t, that maximizes G(T) (see Equation (13)). G(T) is defined as a weighted sum of two metrics:
$$G(T) := \omega_o \cdot D_o(T) + \omega_d \cdot D_d(T), \quad |T| = \mu_t, \ \text{ with } q_F(t) \leq \epsilon \ \forall t \in T, \quad \omega_o + \omega_d = 1$$
where D_o and D_d are metrics that measure diversity in the objective and decision spaces, respectively, and q_F is a distance metric defined as:
$$q_F(x) := \min \{ \epsilon \mid \exists y \in F : x \preceq_\epsilon y \}$$
D_o is an indicator that measures diversity and convergence to the Pareto front, and D_d is an indicator that measures diversity in the decision space. In this work, as in [10], D_o and D_d were specified by the hypervolume indicator [45] and the Solow–Polasky diversity measure [46], respectively. An advantage of the archiver targetSelect is that the archive size can be specified directly and arbitrarily.
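The following simplified Python sketch illustrates this metric-based selection. It assumes a two-objective hypervolume for D_o, the Solow–Polasky measure for D_d, a greedy backward elimination in place of the exact maximization used in DIOP, and that the ε-feasibility constraint on T has already been enforced; the reference point ref and the parameter theta are assumptions:

```python
import numpy as np

def hypervolume_2d(F, ref):
    """Hypervolume of a 2-objective set F (minimization) w.r.t. a reference point."""
    F = np.asarray(sorted(map(tuple, F)))            # sort by f1 ascending
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in F:
        if f2 < prev_f2:                             # non-dominated slice
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def solow_polasky(X, theta=10.0):
    """Solow-Polasky diversity: sum of entries of C^{-1}, C_ij = exp(-theta*d_ij)."""
    X = np.asarray(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return float(np.sum(np.linalg.inv(np.exp(-theta * D))))

def target_select(T, FT, mu_t, w_o, ref):
    """Greedy backward elimination maximizing G = w_o*D_o + (1 - w_o)*D_d."""
    idx = list(range(len(T)))
    while len(idx) > mu_t:
        best_g, best_drop = -np.inf, None
        for j in idx:                                # try dropping each member
            rest = [i for i in idx if i != j]
            g = (w_o * hypervolume_2d([FT[i] for i in rest], ref)
                 + (1 - w_o) * solow_polasky([T[i] for i in rest]))
            if g > best_g:
                best_g, best_drop = g, j
        idx.remove(best_drop)
    return [T[i] for i in idx]
```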

5. Materials and Methods

In this section, the MOOPs on which the three archivers will be compared are defined. In addition, the methodology used to carry out the comparison is introduced.

5.1. Definition of MOOPs

The archivers will be compared on two benchmarks and on a real engineering example. The benchmarks have a very low computational cost for the objective function. For this reason, it is inexpensive to obtain a target set H (with a very fine discretization in the decision space). This discrete set is necessary for the use of the metric Δ_p (see Section 3). Furthermore, the definition of this set must be as "fair" as possible. Therefore, for the benchmarks, the target set H is defined as the local and global Pareto set (see the justification in Section 3). This set is obtained by discretizing the decision space with 10,000,000 solutions within the range of the parameters. The target set H also has its representation in the objective space. In the real engineering example, obtaining a set H would be computationally very expensive (or even unaffordable), so this set is not defined for that MOOP. By means of these problems, it is possible to analyze the behavior of the archivers for different characteristics of a MOOP: multimodal solutions, local Pareto sets, or discontinuous Pareto fronts. However, there are other features of MOOPs that are not analyzed in this article (such as problems with many objectives and/or decision variables).

5.1.1. Benchmark 1

Benchmark 1 (see Equation (15)) is a test problem called SYM-PART, defined in [47] and widely used in the literature [18,20,48,49,50] for the evaluation of algorithms that characterize nearly optimal or multimodal solutions. Benchmark 1 has its Pareto set located in a single neighborhood, and it also has eight local Pareto sets that overlap in the objective space (see Equation (20) and Figure 3). Thus, this benchmark is very useful to observe whether the compared archivers can adequately characterize the nine existing neighborhoods and provide all the existing diversity to the DM.
$$\min_x f(x) = [f_1(x)\ f_2(x)]$$
$$f_1(x) = (x_1 - t_1(c + 2a) + a)^2 + (x_2 - t_2 b)^2 + \delta_t$$
$$f_2(x) = (x_1 - t_1(c + 2a) - a)^2 + (x_2 - t_2 b)^2 + \delta_t$$
where
$$t_1 = \mathrm{sgn}(x_1) \cdot \min\left( \left\lceil \frac{|x_1| - a - c/2}{2a + c} \right\rceil, 1 \right)$$
$$t_2 = \mathrm{sgn}(x_2) \cdot \min\left( \left\lceil \frac{|x_2| - b/2}{b} \right\rceil, 1 \right)$$
and
$$\delta_t = \begin{cases} 0 & \text{for } t_1 = 0 \text{ and } t_2 = 0 \\ 0.1 & \text{otherwise} \end{cases}$$
subject to:
$$-20 \leq x_1 \leq 20, \qquad -20 \leq x_2 \leq 20$$
using a = 0.5, b = 5 and c = 5.
This MOOP contains one global Pareto set:
$$P_{0,0} = [-0.5, 0.5] \times \{0\} = P_Q$$
as well as the following eight local Pareto sets:
$$\begin{aligned}
P_{-1,1} &= [-6.5, -5.5] \times \{5\} & P_{0,1} &= [-0.5, 0.5] \times \{5\} & P_{1,1} &= [5.5, 6.5] \times \{5\} \\
P_{-1,0} &= [-6.5, -5.5] \times \{0\} & & & P_{1,0} &= [5.5, 6.5] \times \{0\} \\
P_{-1,-1} &= [-6.5, -5.5] \times \{-5\} & P_{0,-1} &= [-0.5, 0.5] \times \{-5\} & P_{1,-1} &= [5.5, 6.5] \times \{-5\}
\end{aligned}$$
Therefore, the target set H is defined as:
$$H := P_{0,0} \cup P_{-1,1} \cup P_{0,1} \cup P_{1,1} \cup P_{-1,0} \cup P_{1,0} \cup P_{-1,-1} \cup P_{0,-1} \cup P_{1,-1}$$
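For completeness, a Python transcription of the benchmark 1 objectives as written above is given below (the ceiling operators and the tile mapping follow the standard SYM-PART definition; treat the sketch as illustrative):

```python
import numpy as np

def sym_part(x, a=0.5, b=5.0, c=5.0):
    """Benchmark 1 (SYM-PART variant): returns [f1, f2] for a decision vector x."""
    x1, x2 = x
    t1 = np.sign(x1) * min(np.ceil((abs(x1) - a - c / 2) / (2 * a + c)), 1)
    t2 = np.sign(x2) * min(np.ceil((abs(x2) - b / 2) / b), 1)
    delta = 0.0 if (t1 == 0 and t2 == 0) else 0.1    # penalty for non-central tiles
    f1 = (x1 - t1 * (c + 2 * a) + a) ** 2 + (x2 - t2 * b) ** 2 + delta
    f2 = (x1 - t1 * (c + 2 * a) - a) ** 2 + (x2 - t2 * b) ** 2 + delta
    return np.array([f1, f2])

# A point in the global Pareto set P_{0,0} and one in the local set P_{1,0}
print(sym_part(np.array([0.0, 0.0])))   # tile (0, 0): delta = 0
print(sym_part(np.array([6.0, 0.0])))   # tile (1, 0): delta = 0.1
```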
To evaluate the archivers on benchmark 1, the parameters are defined in Table 1. The parameters ε, Δ_x and Δ_y are defined based on prior knowledge of the problem. The targetSelect archiver is based on an indicator and therefore has parameters different from those of the other compared archivers. For the choice of the parameter μ_t, based on the size of the archives obtained by the other archivers, the following values have been analyzed: μ_t = {100, 75, 50}. For the choice of the ω_o parameter, the following values suggested in [10] have been analyzed: ω_o = {0, 0.7692, 0.9091, 0.9677, 1}. Among all these values, μ_t = 100 and ω_o = 0.9677 are the parameters that obtain the best performance, with respect to Δ_p, for the uniform dispersion. For the random dispersion, ω_o = 0.7692 obtains the best performance.

5.1.2. Benchmark 2

Benchmark 2 (see Equation (21)) is an adaptation of the modified Rastrigin benchmark [51,52,53,54]. Figure 4 shows the global and local Pareto set H of benchmark 2. This benchmark has a discontinuous Pareto front made up of solutions in different neighborhoods. In addition, it also provides nearly optimal solutions significantly different from the optimal solutions (in different neighborhoods).
$$\min_x f(x) = [f_1(x)\ f_2(x)]$$
$$f_1(x) = \left( \sum_{i=1}^{2} \left( 10 + 9\cos(2\pi \cdot k_i \cdot x_i) \right) \right) \left( 1 - \sqrt{(x_1 - 0.65)^2 + (x_2 - 0.5)^2} \right)$$
$$f_2(x) = \min\big( |x_1 - 0.65| + |x_2 - 0.5|,\ |x_1 - 0.65| + |x_2 - 0.25|,\ |x_1 - 0.65| + |x_2 - 0.75|,\ |x_1 - 0.35| + |x_2 - 0.5|,\ |x_1 - 0.35| + |x_2 - 0.75|,\ |x_1 - 0.35| + |x_2 - 0.25| \big)$$
where k_1 = 2 and k_2 = 3, and subject to:
$$0 \leq x_1 \leq 4, \qquad 0 \leq x_2 \leq 4$$
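A direct Python transcription of the benchmark 2 objectives as stated above (illustrative; the implementation simply mirrors the equations):

```python
import numpy as np

def benchmark2(x, k=(2, 3)):
    """Benchmark 2 (modified Rastrigin adaptation): returns [f1, f2]."""
    x = np.asarray(x, dtype=float)
    rastrigin = np.sum(10 + 9 * np.cos(2 * np.pi * np.asarray(k) * x))
    f1 = rastrigin * (1 - np.sqrt((x[0] - 0.65) ** 2 + (x[1] - 0.5) ** 2))
    centers = [(0.65, 0.5), (0.65, 0.25), (0.65, 0.75),
               (0.35, 0.5), (0.35, 0.75), (0.35, 0.25)]
    f2 = min(abs(x[0] - c1) + abs(x[1] - c2) for c1, c2 in centers)
    return np.array([f1, f2])

print(benchmark2([0.65, 0.5]))   # f2 = 0 at the first centre
```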
To analyze the archivers on benchmark 2, we define the parameters of Table 2. The parameters ε, Δ_x and Δ_y are defined based on prior knowledge of the problem. Following the same procedure as for the previous benchmark, μ_t = 150 (in this case μ_t = {200, 150, 100} has been analyzed) and ω_o = 0.9677 are the parameters that obtain the best performance, with respect to Δ_p, for the uniform dispersion. For the random dispersion, ω_o = 0.9091 obtains the best performance.

5.1.3. Identification of a Thermal System

Finally, a MOOP is defined to solve a real engineering problem: the identification of a thermal system. In this problem, the energy contribution inside the process is due to the power dissipated by the resistance inside it (see Figure 5). Air circulation inside the process is produced by a fan, which constantly introduces air from outside. The actuator is made up of a voltage source which is controlled by voltage. The actuator input range is [0, 100]% ([0, 7.5] V). Two thermocouples are used to measure the resistance temperature and the air temperature in the range [50, 250] °C. Figure 6 shows the signals that will be used in the identification process. The ambient temperature T_a is considered constant and equal to 17 °C for the entire identification test.
Taking into account the physical laws of thermodynamics [55], the initial structure of the model can be defined using the following differential equations, where heat losses due to convection and conduction are modeled, as well as losses due to radiation:
$$\dot{T}(t) = \frac{1}{1000} \left[ x_1 v(t)^2 - x_2 \left( T(t) - T_a(t) \right) - x_3 \left( \frac{273.0 + T(t)}{100} \right)^4 \right]$$
where T(t) is the process output temperature and state variable in °C, v(t) is the input voltage to the process in volts, T_a(t) is the ambient temperature in °C, and x = [x_1 x_2 x_3] are the parameters of the model to be estimated.
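A simple way to simulate this model is forward-Euler integration, as sketched below; the sampling time, the initial temperature and the example parameter values are assumptions for illustration only:

```python
import numpy as np

def simulate_thermal(x, v, Ta=17.0, T0=17.0, dt=1.0):
    """Forward-Euler integration of the thermal model.
    x = [x1, x2, x3]: model parameters; v: input voltage per sample (V);
    Ta: ambient temperature (degC); T0: initial temperature; dt: assumed step."""
    x1, x2, x3 = x
    T = np.empty(len(v))
    T[0] = T0
    for t in range(len(v) - 1):
        dT = (1.0 / 1000.0) * (x1 * v[t] ** 2
                               - x2 * (T[t] - Ta)
                               - x3 * ((273.0 + T[t]) / 100.0) ** 4)
        T[t + 1] = T[t] + dt * dT
    return T

# Example: 2500-sample step test at 5 V with arbitrary illustrative parameter values
v = np.full(2500, 5.0)
T_model = simulate_thermal([7.5, 2.0, 0.5], v)
```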
The MOOP is defined as follows:
$$\min_x f(x) = [f_1(x)\ f_2(x)]$$
subject to:
$$\underline{x} \leq x \leq \bar{x}$$
where:
$$f_1 = \frac{1}{\tau} \int_0^{\tau} |\hat{T} - T| \, dt$$
$$f_2 = \max_{i \in 1 \ldots \tau} |\hat{T}_i - T_i|$$
τ = 2500 is the duration of the identification test, variables with a circumflex accent are the process outputs (experimental data), variables without a circumflex accent are the model outputs, and x is the parameter vector:
$$x = [x_1\ x_2\ x_3]$$
where $\underline{x}$ and $\bar{x}$ (see Table 3) are the lower and upper limits of the parameter vector x, which define the decision space Q.
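With the simulated output available (see the sketch after the model equation), the two identification objectives can be computed as follows, assuming T_hat holds the measured process output sampled at the same instants as the model output:

```python
import numpy as np

def identification_objectives(x, v, T_hat, Ta=17.0):
    """f1: mean absolute error, f2: maximum absolute error between the measured
    output T_hat and the simulated model output (see simulate_thermal above)."""
    T = simulate_thermal(x, v, Ta=Ta, T0=T_hat[0])
    err = np.abs(np.asarray(T_hat) - T)
    return np.array([np.mean(err), np.max(err)])
```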
In this MOOP, the design objectives measure the mean and maximum error between the outlet temperatures of the process and of the model. The parameters to be estimated and the design objectives have a physical meaning, which makes it easier to choose ε, Δ_x and Δ_y (see Table 4). The ε parameter (maximum acceptable degradation) is the same for all archivers. Δ_y is similar for ArchiveUpdateP_{Q,ε}D_{xy} and Archive_nevMOGA, taking into account the difference in vector size. However, Δ_x is different for ArchiveUpdateP_{Q,ε}D_{xy} and Archive_nevMOGA. For ArchiveUpdateP_{Q,ε}D_{xy}, Δ_x = 0.1; a lower value produces a significantly higher number of solutions, while a higher value gives a poor approximation of the set of optimal and nearly optimal solutions. For Archive_nevMOGA, Δ_x = [0.0015 0.2 0.1]. In this case, Δ_x is independent for each dimension of the decision space, being different for each parameter due to its different limits (see Table 3). For targetSelect, μ_t = 78 solutions, to obtain the same number of solutions as Archive_nevMOGA; in this way, both archivers are compared under equal conditions. Additionally, ω_o = 0.7692 is defined to give greater weight to diversity in the decision space.

5.2. Archivers Comparison Procedure

The procedure used to carry out the archiver comparison is different for the benchmarks and for the real example. The benchmarks have a very low computational cost for the objective function. Therefore, it is possible to evaluate large populations that discretize the entire search space. These populations are given to the archivers as input populations. To analyze the behavior of the archivers on different types of populations, uniform and random distributions are used to obtain these initial populations.
Because the computational cost of the objective functions of the engineering problem is significantly higher, it is not feasible to test each archiver with random or exhaustive searches as was done with the benchmarks. Thus, each archiver has been embedded in a basic evolutionary algorithm. In addition to reducing the computational cost, this enables observing the impact of each archiver when incorporated into an evolutionary mechanism.

5.2.1. Benchmarks

For the comparison of the archivers on the two benchmarks presented, the archivers will be fed in two ways: (1) by a uniform distribution and (2) by a random distribution of solutions in the search space. The comparison of the results will be made using the averaged Hausdorff distance Δ_p [34] (see Equation (4)). This metric measures the averaged Hausdorff distance between the outcome set (or final archive) f(A) and the target set f(H) (the local Pareto set in this paper) of a given MOOP. Since Δ_p considers the averaged distances between the entries of f(A) and f(H), this indicator is quite insensitive to outliers. It measures the diversity and convergence towards the target set. Furthermore, it can be applied both in the decision and in the objective space.
To carry out the comparison of the archiving strategies with uniformly dispersed data, each archiver is fed with a file of 100,000 solutions uniformly distributed throughout the domain Q (generating a hypergrid). These solutions are introduced in a random order. This process is repeated with 25 different files, where each file slightly displaces the generated hypergrid vertically and/or horizontally over the search space.
To carry out the analysis with randomly dispersed data, each archiver is fed with a file of 100,000 solutions randomly distributed throughout the domain Q. This process is repeated with 25 different files, mitigating the effect of the random component as far as possible.

5.2.2. Identification of Thermal System

In this example, we analyze the different archivers within a generic multi-objective optimization algorithm [23,34] (see Algorithm 2). This algorithm generates the initial population randomly with N_indP0 individuals, obtaining the initial archive A_0 (through the archiver). Subsequently, in each iteration, new solutions are generated and the archive A_t is updated (using the selected archiver). The Generate() function generates new individuals in each iteration. To do this, two solutions are randomly selected from the current archive A_{t-1} and a random number u ∈ [0, 1] is generated. If u > P_cm (crossover/mutation probability), a crossover is performed, which generates two new solutions using the simulated binary crossover (SBX [56]) technique. If u ≤ P_cm, a mutation is performed, which generates two solutions through polynomial mutation [43]. In this way, the three archivers are compared using the same evolutionary strategy. For this example, an initial population of 500 individuals (N_indP0 = 500), a crossover/mutation probability of 0.2 (P_cm = 0.2), and 5000 generations are used. Therefore, 10,500 solutions are evaluated (500 + 2 × 5000) for each archiver.
Algorithm 2 Generic multi-objective optimization algorithm
1: P_0 ⊂ Q                ▹ Random selection
2: A_0 := Archiver(P_0, ∅)
3: for t := 1 : Number of iterations do
4:   P_t := Generate(A_{t-1})
5:   A_t := Archiver(P_t, A_{t-1})
6: end for
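A compact Python sketch of Algorithm 2 with the Generate() operator described above is given below. The distribution indices eta_c and eta_m are assumptions, and archiver stands for any of the three update functions, e.g., a wrapper such as lambda P, A, f: archive_update_pq_eps_dxy(P, A, f, eps, dx, dy):

```python
import numpy as np

def sbx(p1, p2, lb, ub, eta_c=15):
    """Simulated binary crossover: returns two children clipped to the bounds."""
    u = np.random.rand(len(p1))
    beta = np.where(u <= 0.5, (2 * u) ** (1 / (eta_c + 1)),
                    (1 / (2 * (1 - u))) ** (1 / (eta_c + 1)))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return np.clip(c1, lb, ub), np.clip(c2, lb, ub)

def poly_mutation(p, lb, ub, eta_m=20):
    """Polynomial mutation applied to every component of p."""
    u = np.random.rand(len(p))
    delta = np.where(u < 0.5, (2 * u) ** (1 / (eta_m + 1)) - 1,
                     1 - (2 * (1 - u)) ** (1 / (eta_m + 1)))
    return np.clip(p + delta * (ub - lb), lb, ub)

def generic_moea(archiver, f, lb, ub, n_ind_p0=500, p_cm=0.2, iters=5000):
    """Algorithm 2: random initial population, then archive-driven generation."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    P0 = [lb + np.random.rand(len(lb)) * (ub - lb) for _ in range(n_ind_p0)]
    A = archiver(P0, [], f)                       # A_0 := Archiver(P_0, {})
    for _ in range(iters):
        a, b = (np.asarray(A[i], float)
                for i in np.random.choice(len(A), 2, replace=False))
        if np.random.rand() > p_cm:               # crossover branch (SBX)
            Pt = list(sbx(a, b, lb, ub))
        else:                                     # mutation branch (polynomial)
            Pt = [poly_mutation(a, lb, ub), poly_mutation(b, lb, ub)]
        A = archiver(Pt, A, f)                    # A_t := Archiver(P_t, A_{t-1})
    return A
```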

6. Results and Discussion

This section shows the results obtained on the two benchmarks and the real example previously introduced.

6.1. Benchmark 1 with Uniform Dispersion

The archivers are tested on 25 different input populations obtained by uniform dispersion. Figure 7 shows the median result of the archive A on benchmark 1, for both the decision and objective spaces, with respect to Δ_p(A, H) in the decision space. As can be seen, the archivers make a good approximation of the target set H, characterizing the nine neighborhoods that compose it. However, there are differences between the sets found by the archivers. First, the Archive_nevMOGA archiver obtains fewer solutions. The number of solutions μ_t = 100 for targetSelect is user-defined, but a smaller size makes Δ_p worse in both spaces. The archive A obtained by ArchiveUpdateP_{Q,ε}D_{xy} contains a larger number of solutions.
The targetSelect archiver obtains a better approximation of the Pareto front than the other archivers. This is because the weight ω_o has a high value, giving greater weight to the D_o indicator, which measures convergence and diversity with respect to the Pareto front. The ArchiveUpdateP_{Q,ε}D_{xy} and Archive_nevMOGA archivers do not select a candidate solution p (even if p belongs to the Pareto set) if an alternative that is similar in both spaces already exists in the current archive (in Archive_nevMOGA, p is selected if it dominates the similar solution). These archivers could obtain a better approximation of the Pareto front by reducing the parameter Δ_y (the parameter that decides the degree of similarity in the objective space), but this would probably also imply obtaining a greater number of solutions.
Regarding the local Pareto set, the archiver Archive_nevMOGA obtains the best approximation in the comparison. ArchiveUpdateP_{Q,ε}D_{xy} and targetSelect obtain solutions in all neighborhoods where nearly optimal solutions exist. However, these solutions are rarely located on the lines that define the local Pareto set.
Notice that the ArchiveUpdateP_{Q,ε}D_{xy} and Archive_nevMOGA archivers obtain ε-dominated solutions. In ArchiveUpdateP_{Q,ε}D_{xy}, it is possible that a solution in A is no longer nearly optimal due to the appearance of a new candidate solution p, yet it is not removed because it does not satisfy the condition in line 7 of Algorithm 1. In Archive_nevMOGA, a new candidate solution p can be added to the archive A through the condition in line 8 of Algorithm 3; in some cases, solutions that are no longer nearly optimal due to the appearance of p (through line 8 of Algorithm 3) are not eliminated. Therefore, the archive A obtained by both archivers may contain solutions that do not belong to the nearly optimal set.
Figure 8 shows the boxplots of the indicators Δ_p(f(A), f(H)), Δ_p(A, H) and the archive size for the 25 tests performed. Archive_nevMOGA achieves a better approximation in the decision and objective spaces and obtains fewer solutions. targetSelect also obtains a better approximation in both spaces with fewer solutions than ArchiveUpdateP_{Q,ε}D_{xy}. ArchiveUpdateP_{Q,ε}D_{xy} shows greater variability among the 25 archives obtained. Therefore, Archive_nevMOGA best achieves the two main objectives: not neglecting the diversity of solutions (it locates all nine neighborhoods) and obtaining a reduced number of solutions (simplifying the optimization and decision stages).
Algorithm 3 A := Archive_nevMOGA(P, A_0, ε, Δ_x, Δ_y)
Require: population P, archive A_0, ε ∈ R_+^m, Δ_x ∈ R_+^k, Δ_y ∈ R_+^m
Ensure: updated archive A
1:  A := A_0
2:  for all p ∈ P do
3:    if (∄ a_1 ∈ A : a_1 ≺_ε p) and (∄ a_2 ∈ A : |a_2 − p| ≤ Δ_x and a_2 ≺ p) and (∄ a_3 ∈ A : |a_3 − p| ≤ Δ_x and |f(a_3) − f(p)| ≤ Δ_y) then
4:      A := A ∪ {p}
5:      if ∃ a_4 ∈ A : p ≺_ε a_4 or (|a_4 − p| ≤ Δ_x and p ≺ a_4) then
6:        A := A \ {a_4}
7:      end if
8:    else if ∃ a_5 ∈ A : |a_5 − p| ≤ Δ_x and |f(a_5) − f(p)| ≤ Δ_y and p ≺ a_5 then
9:      a_5 := p
10:   end if
11: end for

6.2. Benchmark 1 with Random Dispersion

The archivers are tested on 25 different input populations obtained by random dispersion. Figure 9 shows the archive A with the median result for Δ_p(A, H). The archivers characterize the nine neighborhoods that form the target set H. The archive A obtained by Archive_nevMOGA has a smaller number of solutions. Decreasing the number of solutions below μ_t = 100 for targetSelect makes Δ_p worse in both spaces. Comparing Figure 7 and Figure 9, targetSelect with random dispersion produces a worse approximation of the Pareto front than with the uniform search. This is for two reasons: (1) the lower value of the weight ω_o = 0.7692 (lower weight of the metric D_o, which measures convergence to the Pareto front); and (2) the initial population has been obtained in a random way (meaning certain areas have not been adequately explored). Figure 10 shows the boxplots of the 25 archives obtained in the tests. Archive_nevMOGA obtains better results in both spaces, while also obtaining a smaller number of solutions. On benchmark 1, the approximations obtained by the archivers with random dispersion are slightly worse than with uniform dispersion.

6.3. Benchmark 2 with Uniform Dispersion

The archive A with the median result (in the decision space) for each archiver is shown in Figure 11. The archivers locate the neighborhoods where nearly optimal solutions are found. The archive of Archive_nevMOGA again contains fewer solutions. Keep in mind that decreasing μ_t below 150 for targetSelect causes a considerable increase in the variability of the results Δ_p(A, H) obtained over the 25 tests. targetSelect obtains a better approximation of the Pareto front due to the high value of the weight ω_o. However, Archive_nevMOGA obtains solutions closer to the local Pareto set. This is because targetSelect seeks to achieve the greatest diversity in the decision space (through D_d) without taking into account whether these solutions are worse than a close solution (if they are not optimal).
Figure 12 shows the boxplots, over the 25 archives obtained by the archivers, for the indicator Δ_p in the decision and objective spaces and for the archive size. Regarding the decision space, Archive_nevMOGA obtains a better approximation of the target set than its competitors. Regarding the objective space, Archive_nevMOGA and targetSelect obtain a similar minimum value; however, targetSelect shows worse variability. Therefore, as occurred with benchmark 1, the archiver Archive_nevMOGA achieves a better approximation in both spaces and obtains a smaller number of solutions.

6.4. Benchmark 2 with Random Dispersion

Figure 13 shows the archive A with the median result for Δ_p(A, H) for each archiver. Again, the archiver Archive_nevMOGA obtains significantly fewer solutions while also obtaining a better characterization of the target set H. Figure 14 shows the boxplots for the archivers. The archiver Archive_nevMOGA obtains a better value of Δ_p in both spaces. Regarding the objective space, targetSelect obtains results similar to Archive_nevMOGA but with worse variability. The archiver Archive_nevMOGA obtains fewer solutions, which simplifies the optimization and decision stages. With the random search, ArchiveUpdateP_{Q,ε}D_{xy} and Archive_nevMOGA perform slightly worse than with the uniform search, while targetSelect obtains slightly better results than with the uniform search.

6.5. Identification of a Thermal System

Figure 15 shows the final archive A obtained when using each of the three archivers inside the basic optimization algorithm described above. In this example, the obtained archives are compared in pairs to better observe the differences between them.
The three archivers obtain diversity in the decision space and convergence to the Pareto front. The first thing that stands out is the large number of solutions (610) obtained by the archiver ArchiveUpdateP_{Q,ε}D_{xy}. This high number of solutions complicates the optimization and decision phases. In each iteration, the newly generated solutions must be compared with the solutions in the current archive A_t; therefore, many solutions in A_t imply a higher computational cost. In addition, a high number of solutions makes the final decision of the DM more difficult. This large number of solutions can be reduced by increasing the parameters Δ_x and Δ_y. However, this increase also implies a worse discretization in both spaces. ArchiveUpdateP_{Q,ε}D_{xy} also obtains a worse approximation of the Pareto front.
The other two archivers show more similarities. Archive_nevMOGA obtains 78 solutions. To compare under similar conditions, we set the size of the archive obtained by targetSelect to 78 solutions (μ_t = 78); in this way, both archivers obtain the same number of solutions. The sets of solutions found by the two archivers are nevertheless different. With respect to the Pareto front, both archivers achieve a good approximation in the range f_1(x) = [0.3, 0.8]. However, Archive_nevMOGA does not obtain solutions in the range f_1(x) = [0.8, 0.9] of the Pareto front. Therefore, in this example, targetSelect obtains a little more diversity on the Pareto front.
targetSelect focuses on obtaining, in addition to good convergence to the Pareto front, the greatest diversity in the decision space (using D_d, see Section 4.3). However, the solutions that provide greater diversity may be worse than neighboring solutions. For example, Figure 15 shows the solution x_1 obtained by Archive_nevMOGA. This solution has parameters similar to the neighborhood 1 solutions (see the decision space). x_1 performs better (f(x_1)) than all the neighborhood 1 solutions obtained by targetSelect. Therefore, Archive_nevMOGA would eliminate all these solutions (dominated in its neighborhood by x_1), while targetSelect maintains them because they increase the diversity in the decision space. This happens repeatedly in this MOOP. For this reason, targetSelect obtains nearly optimal solutions farther from the Pareto front than those obtained by Archive_nevMOGA. These solutions are on the contour/ends of the plane formed by the optimal and nearly optimal solutions in the decision space, and therefore they obtain better diversity under the D_d indicator. Archive_nevMOGA could find solutions closer to the contour/ends of the plane formed in the decision space (as targetSelect does) by reducing the parameter Δ_x, although this would imply obtaining a larger number of solutions. Therefore, depending on the needs or preferences of the DM, the use of one archiver or another may be more appropriate. These archivers can be embedded in most of the multi-objective algorithms available; the ArchiveUpdateP_{Q,ε}D_{xy}, Archive_nevMOGA and targetSelect archivers are currently built into the algorithms NεSGA, nevMOGA and DIOP, respectively.

7. Conclusions

In this paper, the characterization of the potentially useful nearly optimal solutions of a MOOP has been addressed. In this type of problem, the DM may wish to obtain nearly optimal solutions in practice, since they can play a relevant role in the decision-making stage. However, an adequate approximation to this set is necessary to avoid an excessive number of alternatives, which would hinder the optimization and decision-making stages. Not all nearly optimal solutions provide the same useful information to the DM. To reduce the number of solutions to be analyzed, we consider as potentially useful those solutions (in addition to the optimal ones) that are close to the optimal solutions in the objective space but differ significantly from them in the decision space. To adequately characterize this set, it is necessary to discretize the nearly optimal solutions by analyzing the decision and objective spaces simultaneously.
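For reference, the following sketch illustrates the classic ϵ-efficiency idea underlying this set (based on [15,16]; sign and inequality conventions vary across works, so this is a simplified assumption rather than the exact definition used by each archiver): a solution is nearly optimal if no known Pareto-optimal objective vector ϵ-dominates it, and the archivers then additionally require the retained solutions to differ significantly in the decision space.

```python
import numpy as np

def eps_dominates(f_y, f_x, eps):
    """f_y epsilon-dominates f_x (minimization) if f_y + eps is at least
    as good as f_x in every objective and strictly better in one."""
    f_y, f_x, eps = map(np.asarray, (f_y, f_x, eps))
    return bool(np.all(f_y + eps <= f_x) and np.any(f_y + eps < f_x))

def is_nearly_optimal(f_x, pareto_front_F, eps):
    """Sketch of the epsilon-efficiency check [15,16]: x is nearly
    optimal if no known Pareto-optimal objective vector
    epsilon-dominates f(x)."""
    return not any(eps_dominates(f_p, f_x, eps) for f_p in pareto_front_F)
```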
This article compares three archiving strategies that perform this task: ArchiveUpdatePQ,ϵDxy, Archive_nevMOGA and targetSelect. The main objective of these archivers is to obtain the potentially useful solutions. The analysis is of great help to designers of evolutionary algorithms who wish to obtain such solutions, since it gives them more information to choose an archiver according to their preferences. ArchiveUpdatePQ,ϵDxy and Archive_nevMOGA are distance-based archivers: both discard solutions that are simultaneously similar in the decision and objective spaces. However, Archive_nevMOGA, in contrast to ArchiveUpdatePQ,ϵDxy, also discards solutions dominated by a neighboring solution in the decision space. targetSelect is an archiver based on an indicator that measures diversity in both spaces simultaneously; unlike the other archivers, it can directly and arbitrarily specify the archive size, which can be an advantage. The archivers were first evaluated on two benchmarks. All of them obtain a good approximation to the set of potentially useful solutions, characterizing the diversity existing in the set of nearly optimal solutions. As discussed in [19], ArchiveUpdatePQ,ϵDxy is more practical than other archivers in the literature. However, as shown in this paper, it obtains significantly more solutions than its competitors, which can make the optimization and decision phases more difficult. Archive_nevMOGA obtains a better approximation to the target set H under the averaged Hausdorff distance Δp, and it does so with a smaller number of solutions, which speeds up the optimization process and facilitates the decision-making stage. However, fewer solutions can also mean less diversity (which can degrade the global search capabilities).
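The averaged Hausdorff distance Δp used in this comparison is a standard metric [34]; a minimal reference implementation (not the authors' code) is sketched below, where the inputs are arrays of points of shape (n_points, n_dims), so that Δp(A, H) can be computed in the decision space and Δp(f(A), f(H)) in the objective space, as in the boxplots of Figures 8, 10, 12 and 14.

```python
import numpy as np

def gd_p(A, B, p=2):
    """Generational-distance component GD_p(A, B): p-mean of the
    distances from each point of A to its nearest point in B."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    d = np.min(np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2), axis=1)
    return float(np.mean(d ** p) ** (1.0 / p))

def averaged_hausdorff(A, B, p=2):
    """Averaged Hausdorff distance Delta_p(A, B) = max(GD_p, IGD_p),
    following the definition in [34]."""
    return max(gd_p(A, B, p), gd_p(B, A, p))
```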
Finally, the compared archivers are analyzed on a real engineering example: the identification of a thermal system. To carry out this analysis, a generic multi-objective optimization algorithm is used in which different archivers can be selected. This enables a more realistic comparison of the impact of each archiver on the entire optimization process.
The three archivers capture the diversity existing in the set of optimal and nearly optimal solutions. In this last example, the archive obtained by ArchiveUpdatePQ,ϵDxy contains a very high number of solutions, complicating the optimization and decision stages. Archive_nevMOGA and targetSelect obtain the same number of solutions, and both achieve an adequate approximation of the Pareto front, although targetSelect achieves slightly more diversity along it. The main difference between these two archivers lies in the set of nearly optimal solutions. On the one hand, Archive_nevMOGA obtains solutions that are closer to the Pareto front but significantly different in the decision space. On the other hand, targetSelect obtains the solutions that provide the greatest diversity in the decision space, even though these solutions are farther from the Pareto front and therefore offer significantly worse performance.
This analysis suggests two possible future lines of research: (1) the design of new evolutionary algorithms that characterize the nearly optimal solutions using some of the archivers compared in this work, and (2) the design of new archivers that improve on the current ones. For example, clustering techniques, which were not analyzed in this work, could improve the compared archivers by locating new neighborhoods to be explored and evaluated, thereby improving the optimization process.

Author Contributions

Conceptualization, A.P.; Methodology, A.P., X.B. and J.M.H.; Software developed, A.P.; Validation, A.P.; Writing—original draft preparation, A.P.; Writing—review and editing, X.B., J.M.H. and M.A.M.; Supervision, M.A.M.; Funding acquisition, X.B. and J.M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Ministerio de Ciencia, Innovación y Universidades (Spain) (grant number RTI2018-096904-B-I00), by the Generalitat Valenciana regional government through project AICO/2019/055 and by the Universitat Politècnica de València (grant number SP20200109).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Rhinehart, R.R. Engineering Optimization: Applications, Methods and Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2018.
2. Rao, S.S. Engineering Optimization: Theory and Practice; John Wiley & Sons: Hoboken, NJ, USA, 2019.
3. Dai, C.; Lei, X. A multiobjective brain storm optimization algorithm based on decomposition. Complexity 2019, 2019.
4. Miettinen, K. Nonlinear Multiobjective Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012; Volume 12.
5. Deb, K. Multi-Objective Optimization Using Evolutionary Algorithms; John Wiley & Sons: Hoboken, NJ, USA, 2001; Volume 16.
6. Reynoso-Meza, G.; Blasco, X.; Sanchis, J.; Herrero, J.M. Controller Tuning with Evolutionary Multiobjective Optimization: A Holistic Multiobjective Optimization Design Procedure; Springer: Berlin/Heidelberg, Germany, 2017.
7. Gunantara, N. A review of multi-objective optimization: Methods and its applications. Cogent Eng. 2018, 5, 1502242.
8. Sanchis, J.; Martínez, M.A.; Blasco, X. Integrated multiobjective optimization and a priori preferences using genetic algorithms. Inf. Sci. 2008, 178, 931–951.
9. Zitzler, E.; Deb, K.; Thiele, L. Comparison of multiobjective evolutionary algorithms: Empirical results. Evol. Comput. 2000, 8, 173–195.
10. Ulrich, T.; Bader, J.; Thiele, L. Defining and optimizing indicator-based diversity measures in multiobjective search. In International Conference on Parallel Problem Solving from Nature; Springer: Berlin/Heidelberg, Germany, 2010; pp. 707–717.
11. Beyer, H.G.; Sendhoff, B. Robust optimization—A comprehensive survey. Comput. Methods Appl. Mech. Eng. 2007, 196, 3190–3218.
12. Jin, Y.; Branke, J. Evolutionary optimization in uncertain environments—A survey. IEEE Trans. Evol. Comput. 2005, 9, 303–317.
13. Deb, K.; Gupta, H. Introducing robustness in multi-objective optimization. Evol. Comput. 2006, 14, 463–494.
14. Pajares, A.; Blasco, X.; Herrero, J.M.; Reynoso-Meza, G. A new point of view in multivariable controller tuning under multiobjective optimization by considering nearly optimal solutions. IEEE Access 2019, 7, 66435–66452.
15. Loridan, P. ε-solutions in vector minimization problems. J. Optim. Theory Appl. 1984, 43, 265–276.
16. White, D.J. Epsilon efficiency. J. Optim. Theory Appl. 1986, 49, 319–337.
17. Engau, A.; Wiecek, M.M. Generating ε-efficient solutions in multiobjective programming. Eur. J. Oper. Res. 2007, 177, 1566–1579.
18. Hernández Castellanos, C.I.; Schütze, O.; Sun, J.Q.; Ober-Blöbaum, S. Non-Epsilon Dominated Evolutionary Algorithm for the Set of Approximate Solutions. Math. Comput. Appl. 2020, 25, 3.
19. Schütze, O.; Hernandez, C.; Talbi, E.G.; Sun, J.Q.; Naranjani, Y.; Xiong, F.R. Archivers for the representation of the set of approximate solutions for MOPs. J. Heuristics 2019, 25, 71–105.
20. Tanabe, R.; Ishibuchi, H. A review of evolutionary multimodal multiobjective optimization. IEEE Trans. Evol. Comput. 2019, 24, 193–200.
21. Li, M.; Yao, X. An empirical investigation of the optimality and monotonicity properties of multiobjective archiving methods. In International Conference on Evolutionary Multi-Criterion Optimization; Springer: Berlin/Heidelberg, Germany, 2019; pp. 15–26.
22. Knowles, J.D.; Corne, D.W. Approximating the nondominated front using the Pareto archived evolution strategy. Evol. Comput. 2000, 8, 149–172.
23. Laumanns, M.; Thiele, L.; Deb, K.; Zitzler, E. Combining convergence and diversity in evolutionary multiobjective optimization. Evol. Comput. 2002, 10, 263–282.
24. Schütze, O.; Laumanns, M.; Coello, C.A.C.; Dellnitz, M.; Talbi, E.G. Convergence of stochastic search algorithms to finite size Pareto set approximations. J. Glob. Optim. 2008, 41, 559–577.
25. Schütze, O.; Lara, A.; Coello, C.A.C.; Vasile, M. Computing approximate solutions of scalar optimization problems and applications in space mission design. In Proceedings of the IEEE Congress on Evolutionary Computation, Barcelona, Spain, 18–23 July 2010; pp. 1–8.
26. Zhou, X.; Long, J.; Xu, C.; Jia, G. An external archive-based constrained state transition algorithm for optimal power dispatch. Complexity 2019, 2019.
27. Schütze, O.; Hernández, C. Archiving Strategies for Evolutionary Multi-Objective Optimization Algorithms; Springer: Berlin/Heidelberg, Germany, 2021.
28. Schütze, O.; Vasile, M.; Coello, C.A.C. Approximate solutions in space mission design. In International Conference on Parallel Problem Solving from Nature; Springer: Berlin/Heidelberg, Germany, 2008; pp. 805–814.
29. Pajares, A.; Blasco, X.; Herrero, J.M.; Reynoso-Meza, G. A multiobjective genetic algorithm for the localization of optimal and nearly optimal solutions which are potentially useful: nevMOGA. Complexity 2018, 2018.
30. Schütze, O.; Vasile, M.; Coello, C.A.C. Computing the set of epsilon-efficient solutions in multiobjective space mission design. J. Aerosp. Comput. Inf. Commun. 2011, 8, 53–70.
31. Pareto, V. Manual of Political Economy; Oxford University Press: Oxford, UK, 1971.
32. Schütze, O.; Coello, C.A.C.; Talbi, E.G. Approximating the ε-efficient set of an MOP with stochastic search algorithms. In Mexican International Conference on Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2007; pp. 128–138.
33. Teytaud, O. On the hardness of offline multi-objective optimization. Evol. Comput. 2007, 15, 475–491.
34. Schütze, O.; Esquivel, X.; Lara, A.; Coello, C.A.C. Using the averaged Hausdorff distance as a performance measure in evolutionary multiobjective optimization. IEEE Trans. Evol. Comput. 2012, 16, 504–522.
35. Karimi, D.; Salcudean, S.E. Reducing the Hausdorff distance in medical image segmentation with convolutional neural networks. IEEE Trans. Med. Imaging 2019, 39, 499–513.
36. Van Veldhuizen, D.A.; Lamont, G.B. Multiobjective Evolutionary Algorithm Research: A History and Analysis; Technical Report; Air Force Institute of Technology: Dayton, OH, USA, 1998.
37. Zhou, A.; Jin, Y.; Zhang, Q.; Sendhoff, B.; Tsang, E. Combining model-based and genetics-based offspring generation for multi-objective optimization using a convergence criterion. In Proceedings of the 2006 IEEE International Conference on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006; pp. 892–899.
38. Mahbub, M.S.; Wagner, T.; Crema, L. Improving robustness of stopping multi-objective evolutionary algorithms by simultaneously monitoring objective and decision space. In Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, Lille, France, 10–14 July 2015; pp. 711–718.
39. Khare, V.; Yao, X.; Deb, K. Performance scaling of multi-objective evolutionary algorithms. In International Conference on Evolutionary Multi-Criterion Optimization; Springer: Berlin/Heidelberg, Germany, 2003; pp. 376–390.
40. Sebag, M.; Tarrisson, N.; Teytaud, O.; Lefevre, J.; Baillet, S. A Multi-Objective Multi-Modal Optimization Approach for Mining Stable Spatio-Temporal Patterns. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, Edinburgh, UK, 30 July–5 August 2005; pp. 859–864.
41. Krmicek, V.; Sebag, M. Functional brain imaging with multi-objective multi-modal evolutionary optimization. In Parallel Problem Solving from Nature-PPSN IX; Springer: Berlin/Heidelberg, Germany, 2006; pp. 382–391.
42. Zechman, E.M.; Giacomoni, M.H.; Shafiee, M.E. An evolutionary algorithm approach to generate distinct sets of non-dominated solutions for wicked problems. Eng. Appl. Artif. Intell. 2013, 26, 1442–1457.
43. Deb, K.; Agrawal, S.; Pratap, A.; Meyarivan, T. A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. In International Conference on Parallel Problem Solving from Nature; Springer: Berlin/Heidelberg, Germany, 2000; pp. 849–858.
44. Hernández Castellanos, C.I. Set Oriented Methods for Multi-Objective Optimization. Ph.D. Thesis, Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Mexico City, Mexico, 2017.
45. Ulrich, T.; Bader, J.; Zitzler, E. Integrating decision space diversity into hypervolume-based multiobjective search. In Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation, Portland, OR, USA, 7–11 July 2010; pp. 455–462.
46. Solow, A.R.; Polasky, S. Measuring biological diversity. Environ. Ecol. Stat. 1994, 1, 95–103.
47. Rudolph, G.; Naujoks, B.; Preuss, M. Capabilities of EMOA to detect and preserve equivalent Pareto subsets. In International Conference on Evolutionary Multi-Criterion Optimization; Springer: Berlin/Heidelberg, Germany, 2007; pp. 36–50.
48. Zhang, K.; Chen, M.; Xu, X.; Yen, G.G. Multi-objective evolution strategy for multimodal multi-objective optimization. Appl. Soft Comput. 2021, 101, 107004.
49. Yue, C.; Qu, B.; Liang, J. A multiobjective particle swarm optimizer using ring topology for solving multimodal multiobjective problems. IEEE Trans. Evol. Comput. 2017, 22, 805–817.
50. Zhang, X.; Liu, H.; Tu, L. A modified particle swarm optimization for multimodal multi-objective optimization. Eng. Appl. Artif. Intell. 2020, 95, 103905.
51. Li, X.; Engelbrecht, A.; Epitropakis, M.G. Benchmark Functions for CEC'2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization; RMIT University, Evolutionary Computation and Machine Learning Group: Melbourne, Australia, 2013.
52. Wang, H.; Jin, Y.; Doherty, J. Committee-based active learning for surrogate-assisted particle swarm optimization of expensive problems. IEEE Trans. Cybern. 2017, 47, 2664–2677.
53. Ariyaratne, M.; Fernando, T.; Weerakoon, S. A Hybrid Algorithm to Solve Multi-model Optimization Problems Based on the Particle Swarm Optimization with a Modified Firefly Algorithm. In Proceedings of the Future Technologies Conference; Springer: Berlin/Heidelberg, Germany, 2020; pp. 308–325.
54. Marek, M.; Kadlec, P.; Čapek, M. FOPS: A new framework for the optimization with variable number of dimensions. Int. J. RF Microw. Comput. Aided Eng. 2020, 30, e22335.
55. Lienhard, I.; John, H. A Heat Transfer Textbook; Phlogiston Press: Cambridge, MA, USA, 2005.
56. Deb, K.; Agrawal, R.B. Simulated binary crossover for continuous search space. Complex Syst. 1995, 9, 115–148.
Figure 1. A MOOP example. On the left, the objective space is shown, and on the right, the decision space is shown. SET1 is the Pareto optimal set and SET2 is a potentially useful nearly optimal set.
Figure 2. Visualization of a MOOP in the objective space (on the left) and decision space (on the right). The sets SET1 and SET2 are the optimal solutions. These two sets, together with SET3 and SET4, form the local Pareto set.
Figure 3. Target set H for benchmark 1, formed by the global (blue) and local (red) Pareto sets.
Figure 4. Target set H for benchmark 2, formed by the global (blue) and local (red) Pareto sets.
Figure 5. Block diagram of the thermal system.
Figure 6. Identification test of the thermal system.
Figure 7. Median result of archive A obtained by ArchiveUpdatePQ,ϵDxy, Archive_nevMOGA and targetSelect with a uniform dispersion on benchmark 1.
Figure 8. Boxplot of Δp(A, H) (decision space), Δp(f(A), f(H)) (objective space) and archive size with a uniform dispersion on benchmark 1.
Figure 9. Median result of archive A obtained by ArchiveUpdatePQ,ϵDxy, Archive_nevMOGA and targetSelect with a random dispersion on benchmark 1.
Figure 10. Boxplot of Δp(A, H) (decision space), Δp(f(A), f(H)) (objective space) and archive size with a random dispersion on benchmark 1.
Figure 11. Median result of archive A obtained by ArchiveUpdatePQ,ϵDxy, Archive_nevMOGA and targetSelect with a uniform dispersion on benchmark 2.
Figure 12. Boxplot of Δp(A, H) (decision space), Δp(f(A), f(H)) (objective space) and archive size with a uniform dispersion on benchmark 2.
Figure 13. Median result of archive A obtained by ArchiveUpdatePQ,ϵDxy, Archive_nevMOGA and targetSelect with a random dispersion on benchmark 2.
Figure 14. Boxplot of Δp(A, H) (decision space), Δp(f(A), f(H)) (objective space) and archive size with a random dispersion on benchmark 2.
Figure 15. Archive A obtained with a generic algorithm and the archivers ArchiveUpdatePQ,ϵDxy, Archive_nevMOGA and targetSelect for the identification of the thermal system.
Table 1. Parameters for benchmark 1.
  ϵ = [0.15, 0.15]
  ArchiveUpdatePQ,ϵDxy: Δx = 1; Δy = 0.2
  Archive_nevMOGA: Δx = [1, 1]; Δy = [0.2, 0.2]
  targetSelect: μt = 100; ωo = 0.9677 (uniform dispersion), 0.7692 (random dispersion)
Table 2. Parameters for benchmark 2.
  ϵ = [2.5, 0.15]
  ArchiveUpdatePQ,ϵDxy: Δx = 0.15; Δy = 0.25
  Archive_nevMOGA: Δx = [0.15, 0.15]; Δy = [0.25, 0.25]
  targetSelect: μt = 150; ωo = 0.9677 (uniform dispersion), 0.9091 (random dispersion)
Table 3. Lower (x̲) and upper (x̄) limits of the parameters x.
  Limits   θ1      θ2    θ3
  x̲        0.01    2     0
  x̄        0.15    10    0.8
Table 4. Parameters for identification of the thermal system.
  ϵ = [0.25, 0.25]
  ArchiveUpdatePQ,ϵDxy: Δx = 0.1; Δy = 0.025
  Archive_nevMOGA: Δx = [0.0015, 0.2, 0.1]; Δy = [0.025, 0.025]
  targetSelect: μt = 78; ωo = 0.7692