Article

Solution Merging in Matheuristics for Resource Constrained Job Scheduling

by Dhananjay Thiruvady 1,*, Christian Blum 2 and Andreas T. Ernst 3

1 School of Information Technology, Deakin University, Geelong 3126, Australia
2 Artificial Intelligence Research Institute (IIIA-CSIC), Campus of the UAB, 08193 Bellaterra, Spain
3 School of Mathematics, Monash University, Melbourne 3800, Australia
* Author to whom correspondence should be addressed.
Submission received: 7 September 2020 / Revised: 29 September 2020 / Accepted: 1 October 2020 / Published: 9 October 2020
(This article belongs to the Special Issue Algorithms for Graphs and Networks)

Abstract:
Matheuristics have been gaining in popularity for solving combinatorial optimisation problems in recent years. This class of hybrid method combines elements of mathematical programming for intensification and metaheuristic search for diversification. A recent approach in this direction builds a neighbourhood for integer programs by merging information from several heuristic solutions: construct, merge, solve and adapt (CMSA). In this study, we investigate this method alongside a closely related novel approach—merge search (MS). Both methods rely on a population of solutions, and for the purposes of this study, we examine two options for generating it: (a) a constructive heuristic and (b) ant colony optimisation (ACO), that is, a method based on learning. These methods are also implemented in a parallel framework using multi-core shared memory, which improves the overall efficiency. Using a resource constrained job scheduling problem as a test case, different aspects of the algorithms are investigated. We find that both methods, using ACO, are competitive with current state-of-the-art methods, outperforming them for a range of problems. Regarding MS and CMSA, the former seems more effective on medium-sized problems, whereas the latter performs better on large problems.

1. Introduction

Large optimisation problems often cannot be solved by off-the-shelf solvers. Solvers based on exact methods (e.g., integer programming and constraint programming) have become increasingly efficient; however, they are still limited in their performance by large problem sizes and complexity. Since solving such problems requires algorithms that can still identify good solutions in a time-efficient manner, alternative incomplete techniques, such as integer programming decompositions, as well as metaheuristics and their hybridisations, have received a lot of attention.
Metaheuristics aim to alleviate the problems associated with exact methods, and have been shown to be very effective across a range of practical problems [1]. A number of the most effective methods are inspired by nature—for example, evolutionary algorithms [2,3] and swarm intelligence [4,5]. Despite their success, metaheuristics are limited in their applicability, as they are especially inefficient when dealing with non-trivial hard constraints. Constraint programming [6] is capable of dealing with complex constraints but generally does not scale well on problems with large search spaces. Mixed integer (linear) programming (MIP) [7] can deal with large search spaces as well as complex constraints. However, the efficiency of general purpose MIP solvers drops sharply beyond a certain, problem-dependent, instance size.
Techniques for solving large problems with MIPs have been an active area of research in the recent past. Decomposition methods such as Lagrangian relaxation [8], column generation [7] and Benders’ decomposition [9] have proved to be effective on a range of problems. Hierarchical models [10] and large neighbourhood search methods have also proven to be successful [11].
Recently, matheuristics, or hybrids of integer programming and metaheuristics, have been gaining in popularity [12,13,14,15,16]. References [12,17] provide overviews of these methods, and [13] provides a survey of matheuristics applied to routing problems. The study in [14] shows that nurse rostering problems can be solved efficiently by a matheuristic based on a large neighbourhood search. Reference [15] applies an integer programming based heuristic to a liner shipping design problem with promising results, and [16] shows that an integer programming heuristic based on Benders’ decomposition is very effective for scheduling medical residents’ training at university hospitals.
This paper focuses on two rather recent MIP-based matheuristic approaches, which rely on the concept of solution merging to learn from a population of solutions. Both of these methods use the same basic idea: using a MIP to generate a “merged” solution in the subspace that is spanned by a pool of heuristic solutions. The first is a very recent approach, merge search (MS) [18], and the second is construct, merge, solve and adapt (CMSA) [19,20,21,22]. The study by [18] shows that MS is well suited for solving the constrained pit problem. Reference [19] applies CMSA to the minimum common string partition and minimum covering arborescence problems, ref. [20] investigates the repetition-free longest common subsequence problem, and [21] examines the unbalanced common string partition problem. The study by [22] investigates a hybrid of CMSA and parallel ant colony optimisation (ACO) for resource constrained project scheduling. Both MS and CMSA aim to search for high quality solutions to a problem in a similar way: (a) initialise a population of solutions, (b) solve a restricted MIP to obtain a “merged” solution, (c) update the solution population by incorporating new information, and (d) repeat until some termination criteria are fulfilled. However, they differ in the details of how they implement these steps.
In principle, a merge step could be added to any heuristic optimisation algorithm that randomly samples the solution space. The question is whether this helps to improve the performance of a heuristic search. Additionally, what type of merge step should be used, given that CMSA and MS use two slightly different merge methods? This study attempts to provide some answers to these questions in the context of a specific optimisation problem.
This study investigates MS and CMSA with the primary aim of comparing the effects of the differences between these related algorithms to better understand what elements are important in obtaining the best performance. The case study used for the empirical evaluation of the algorithms is the Resource Constrained Job Scheduling (RCJS) problem [23]. The RCJS problem was originally motivated by an application from the mining industry and aims to capture the key aspects of moving iron ore from mines to ports. The objective is to minimise the tardiness of batches of ore arriving at ports, which has been a popular objective with other scheduling problems [24,25]. Several algorithms have been attempted on this problem, particularly hybrids incorporating Lagrangian relaxation, column generation, metaheuristics (simulated annealing, ant colony optimisation and particle swarm optimisation), genetic programming, constraint programming and parallel implementations of these methods [23,26,27,28,29,30,31,32]. The problem is a relatively simple-to-state scheduling problem with a single shared resource. Nevertheless, the problem is sufficiently well studied to provide a baseline for performance while still having room for improvement with many larger instances not solved optimally. The primary aim, though, is not to improve the state-of-the-art for this particular type of problem—even though our approaches outperform the state-of-the-art across a number of problem instances—but to get a better understanding of the behaviour of the two considered algorithms.
The paper is organised as follows. First, we briefly describe the scheduling problem used as a case study and provide some alternative ways of formulating it as a mixed integer linear program (Section 2). Then, in Section 3, we provide the details of the two matheuristic methods, MS and CMSA, outline how they are applied to the RCJS problem, and briefly discuss the intuition behind these methods. We also discuss ways to generate the population, including a constructive heuristic, ACO and their associated parallel implementations. We detail the motivations for this study in Section 4, followed by the experimental set-up and empirical evaluation in Section 5. A short discussion of the results is given in Section 6. Finally, the paper concludes in Section 7, where possibilities for future work are discussed.

2. Resource Constrained Job Scheduling

The resource constrained job scheduling (RCJS) problem consists of a number of nearly independent single machine weighted tardiness problems that are linked only by a single shared resource constraint. It is formally defined as follows. A number of jobs $J = \{1, \ldots, n\}$ must execute on machines $M = \{m_1, \ldots, m_l\}$. Each job $i \in J$ has the following data associated with it: a release time $r_i$, a processing time $p_i$, a due time $d_i$, the amount $g_i$ required from the resource, a weight $w_i$, and the machine $m_i$ to which it belongs. The maximum amount of the resource available at any time is $G$. Precedence constraints $C$ may apply to two jobs on the same machine: $i \rightarrow j$ requires that job $i$ completes executing before job $j$ starts. Given a sequence of jobs $\pi$, the objective is to minimise the total weighted tardiness:

$$T(\pi) = \sum_{i=1}^{n} w_{\pi_i} \times T(\pi_i), \quad \text{where } T(\pi_i) = \max\{0, c_{\pi_i} - d_{\pi_i}\}, \tag{1}$$

where $c_{\pi_i}$ denotes the completion time of the job at position $i$ of $\pi$, which—given $\pi$—can be derived in a well-defined way.
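To make the objective concrete, the following C++ sketch (our illustration under an assumed data layout, not the authors' code) evaluates the total weighted tardiness for a vector of completion times that has already been derived from a permutation by a feasible scheduling procedure.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical data layout for a job; field names are our assumption.
struct Job {
    double release, processing, due, resource, weight;
    int machine;
};

// completion[i] is the completion time c_i of job i, assumed to come from a
// resource- and precedence-feasible schedule derived from a permutation.
double totalWeightedTardiness(const std::vector<Job>& jobs,
                              const std::vector<double>& completion) {
    double total = 0.0;
    for (std::size_t i = 0; i < jobs.size(); ++i)
        total += jobs[i].weight * std::max(0.0, completion[i] - jobs[i].due);
    return total;
}
```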

2.1. Network Design Formulation

This problem can be thought of as a network design problem: we must create a directed, acyclic graph representing a partial ordering of the job start times (see Figure 1). Here, node 0 represents the start of the schedule. For any pair of jobs $i \rightarrow j$ that have a precedence, there is an arc with duration $p_i$ separating the start times of the two jobs. Additionally, for any job $i$ that has a release time which is not already implied by the precedence relationships, the graph contains an arc $0 \rightarrow i$ of duration $r_i$. To capture the tardiness, we also introduce an additional dummy node $i'$ for each $i \in J$.
The problem can now be formulated as follows, using variables $s_i$ to denote the start time of node (job) $i$, and $u_i$ the completion time or due date of job $i$ (the latter is associated with the dummy node $i'$). Finally, the binary variables $y_{ij}$ take value one if arc $i \rightarrow j$ is to be added to the graph. Using these variables, we can write the network design problem:
$$\min \; \sum_{i \in J} w_i (u_i - d_i) - \Big(\sum_{i \in J} w_i\Big) s_0 \tag{2}$$
$$\text{s.t.} \quad s_i - s_0 \ge r_i \qquad \forall\, i \in J \tag{3}$$
$$u_i - s_i \ge p_i \qquad \forall\, i \in J \tag{4}$$
$$u_i - s_0 \ge d_i \qquad \forall\, i \in J \tag{5}$$
$$s_j - s_i \ge p_i y_{ij} - M (1 - y_{ij}) \qquad \forall\, i, j \in J : i \ne j \tag{6}$$
$$y_{ij} = 1 \qquad \forall\, i \rightarrow j \in C \tag{7}$$
$$y_{ij} + y_{ji} = 1 \qquad \forall\, i, j \in J : m_i = m_j \tag{8}$$
$$y_{ij} + y_{ji} \le 1 \qquad \forall\, i, j \in J : m_i \ne m_j \tag{9}$$
$$\sum_{i \in K} \sum_{j \in K \setminus \{i\}} y_{ij} \ge 1 \quad \forall\, K \in \mathcal{K}; \qquad y_{ij} \in \{0, 1\} \quad \forall\, i, j \in J. \tag{10}$$
Here $\mathcal{K}$ is the set of all minimal cliques $K$ in the complement of the precedence graph (that is, collections of jobs that do not have a given precedence relationship) such that each of the jobs belongs to a different machine and $\sum_{i \in K} g_i > G$. It should be noted that constraints (3)–(5), which capture the release times, processing times and due date requirements, all have the same form: a difference between two variables greater than a constant. The same also applies to (6) for fixed values of the $y$ variables. Hence, for given $y$ variables, this is simply the dual of a network flow problem. This means that, for integer data, the optimal times $u_i$ and $s_i$ will all be integers. Note that the objective (2) includes a constant term ($-\sum_i w_i d_i$) and weights the variable $s_0$ by a constant, such that adding a constant to all of the $s$ and $u$ variables does not change the objective. (Without this, the problem would be unbounded.) Alternatively, we could arbitrarily fix $s_0 = 0$. Constraints (7) fix the given precedence arcs of the network. The remaining constraints on the $y$ variables relate to the network design part of the problem. Constraints (8) and (9) enforce a total ordering of jobs on the same machine and a partial ordering amongst the remaining jobs, respectively. Finally, (10) prevents more jobs from running simultaneously (without an ordering) than can be accommodated within the resource limit.
This formulation, while illustrating the network design nature of the problem, suffers from somewhat poor computational performance. While (10) includes a large number of constraints, these can be added as lazy constraints. However, the main problem is the “big-M” constraints (6), which are very weak. To make the problem more computationally tractable, we next look at time-discretisation-based formulations.

2.2. Time Discretised Mixed Integer Programs

There are many ways of formulating this problem as a mixed integer linear program (MIP). Different formulations can be expected to cause MIP solvers to exhibit different performances. More importantly, the MIP formulation acts as a representation or encoding of our solutions and, hence, impacts in more subtle ways on how the heuristics explore the solution space. Therefore, we present two further alternative formulations of the problem. In this section, we restrict ourselves to some basic observations regarding the characteristics of the formulations and defer the discussion of how these interact with the meta-heuristic search until we have presented our search methods. Both of the following formulations rely on data comprising integers so that only discrete times need to be considered.

2.3. Model 1

A common technique in the context of exact methods for scheduling is to discretise time [33]. Let $T = \{1, \ldots, t_{max}\}$ be a set of time intervals (with $t_{max}$ being sufficiently large) and let $x_{jt}$ be a binary variable for all $j \in J$ and $t \in T$, which takes value 1 if the processing of job $j$ completes at time $t$. By defining the weighted tardiness for a job $j$ at time $t$ as $w_{jt} := \max\{0, w_j (t - d_j)\}$, we can formulate an MIP for the RCJS problem as follows.
$$\min \; \sum_{j \in J} \sum_{t \in T} w_{jt} \cdot x_{jt} \tag{11}$$
$$\text{s.t.} \quad \sum_{t \in T} x_{jt} = 1 \qquad \forall\, j \in J \tag{12}$$
$$x_{jt} = 0 \qquad \forall\, t \in \{1, \ldots, r_j + p_j - 1\},\ \forall\, j \in J \tag{13}$$
$$\sum_{t \in T} t \cdot x_{bt} - \sum_{t \in T} t \cdot x_{at} \ge p_b \qquad \forall\, (a, b) \in C \tag{14}$$
$$\sum_{\hat t = t}^{t + p_j - 1} x_{j \hat t} + \sum_{\hat t = t}^{t + p_k - 1} x_{k \hat t} \le 1 \qquad \forall\, j, k \in J : m_j = m_k,\ j \ne k,\ \forall\, t \in T \tag{15}$$
$$\sum_{j \in J} \sum_{\hat t = t}^{t + p_j - 1} g_j \cdot x_{j \hat t} \le G \qquad \forall\, t \in T. \tag{16}$$
Constraints (12) ensure that all jobs complete. Constraints (13) ensure that the release times are satisfied and are typically implemented by excluding these variables from the model. Constraints (14) take care that the precedences between jobs $a$ and $b$ are satisfied, Constraints (15) ensure that no more than one job is processed at the same time on each machine, and Constraints (16) ensure that the resource constraint is satisfied.
This model is certainly the most natural way to formulate the RCJS problem, though not necessarily the computationally most effective. The linear programming (LP) bounds could be strengthened by replacing each precedence constraint of the form (14) with a set of constraints specifying that $\sum_{t < \tau + p_b} x_{bt} \le \sum_{t \le \tau} x_{at}$ for all $\tau \in T$. Additionally, the branching behaviour of this formulation tends to be very unbalanced: forcing a fractional variable to take value 1 in the branch-and-bound tree can be expected to have a large effect, as the completion time of the job is now fixed. On the other hand, setting some $x_{jt} = 0$ is likely to result in a very similar LP solution in the branch-and-bound child node, with perhaps $x_{j,t-1}$ or $x_{j,t+1}$ taking some positive (fractional) value and relatively little change to the completion time objective. Finally, the repeated sums over $T$ in the constraints mean that the coefficient density is relatively high, potentially impacting negatively on the solving time of each linear program within the branch-and-bound method. All of these considerations typically lead to the following alternative formulation being preferred in the context of branch and bound.

2.4. Model 2

Let $z_{jt}$ be a binary variable for all $j \in J$ and $t \in T$, which takes value 1 if job $j$ is completed at time $t$ or earlier; that is, we effectively define $z_{jt} := \sum_{s \le t} x_{js}$, or equivalently $x_{jt} := z_{jt} - z_{j,t-1}$. Substituting into the above model, we obtain our second formulation:
$$\min \; \sum_{j \in J} \sum_{t \in T} w_{jt} \cdot (z_{jt} - z_{j,t-1}) \tag{17}$$
$$\text{s.t.} \quad z_{j,t_{max}} = 1 \qquad \forall\, j \in J \tag{18}$$
$$z_{jt} - z_{j,t-1} \ge 0 \qquad \forall\, j \in J,\ t \in \{1, \ldots, t_{max}\} \tag{19}$$
$$z_{jt} = 0 \qquad \forall\, t \in \{1, \ldots, r_j + p_j - 1\},\ \forall\, j \in J \tag{20}$$
$$z_{bt} - z_{a, t - p_b} \le 0 \qquad \forall\, (a, b) \in C,\ t \in T \tag{21}$$
$$\sum_{j \in J_i} (z_{j, t + p_j} - z_{jt}) \le 1 \qquad \forall\, i \in M,\ t \in T \tag{22}$$
$$\sum_{j \in J} g_j \cdot (z_{j, t + p_j} - z_{jt}) \le G \qquad \forall\, t \in T. \tag{23}$$
Constraints (18) ensure that all jobs complete. Constraints (19) make sure that a job stays completed once it completes. Constraints (20) enforce the release times. Constraints (21) specify the precedences between jobs $a$ and $b$, and Constraints (22) require that no more than one job at a time is executed on each machine (where $J_i$ denotes the set of jobs belonging to machine $i$). Constraints (23) ensure that the resource constraint is satisfied. Previous work indicates that this type of formulation tends to perform better in branch and bound algorithms for scheduling problems [23,34,35].
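To illustrate the encoding that Model 2 uses, the following C++ sketch (our illustration, not the authors' code) converts a vector of integer job completion times into the step-function $z$-variables: for each job, the row of $z$ values is 0 up to the completion time and 1 from then on.

```cpp
#include <vector>

// completion[j] is the completion time of job j; tMax is the horizon.
// z[j][t] = 1 iff job j is completed at time t or earlier (Model 2).
std::vector<std::vector<char>> toModel2(const std::vector<int>& completion,
                                        int tMax) {
    std::vector<std::vector<char>> z(completion.size(),
                                     std::vector<char>(tMax + 1, 0));
    for (std::size_t j = 0; j < completion.size(); ++j)
        for (int t = completion[j]; t <= tMax; ++t)
            z[j][t] = 1;                      // step function: 0...0 1...1
    return z;
}
```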

3. Methods

In this section, we provide details of merge search (MS), construct, merge, solve and adapt (CMSA), ant colony optimisation (ACO)—we refer to the original ACO implementation for the RCJS problem from [27]—and the heuristic used to generate an initial population of solutions.
Note that, in contrast to previous studies on CMSA, we study the use of ACO for generating the population of solutions at each iteration. This is because, in preliminary experiments, we realised that simple constructive heuristics are—in the context of the RCJS—not sufficient for guiding the CMSA algorithm towards areas of the search space containing high-quality solutions. ACO has the advantage of incorporating a learning component, which we will show to be very beneficial for the application of CMSA to the RCJS. Moreover, ACO has been applied to the RCJS before.

3.1. Merge Search

Algorithm 1 presents an implementation of MS for the resource constrained job scheduling (RCJS) problem. The algorithm has five input parameters: (1) an RCJS problem instance, (2) the number of solutions ($n_s$), (3) the total computational (wall-clock) time ($t_{total}$), (4) the wall-clock time limit of the mixed integer programming (MIP) solver at each iteration ($t_{iter}$), and (5) the number of random subsets generated from a set of variables ($K$). Note that our implementation of MS is based on the MIP model from Section 2.4—that is, on Model 2. The reasons for this are outlined below. In the following, $V$ denotes the set of variables of the complete MIP model—that is, $V := \{z_{jt} \mid j = 1, \ldots, n;\ t \in T\}$. Moreover, in the context of MS, a valid solution $S$ to the RCJS problem consists of a value for each variable from $V$ (such that all constraints are fulfilled). In particular, the value of a variable $z_{jt}$ in a solution $S$ is henceforth denoted by $S_{jt}$. The objective function value of solution $S$ is denoted by $f(S)$.
Algorithm 1 MS for RCJS.
1: input: RCJS instance, $n_s$, $t_{total}$, $t_{iter}$, $K$
2: Initialisation: $S_{bs} :=$ NULL
3: while time limit $t_{total}$ not expired do
4:   if $S_{bs} \ne$ NULL then $\mathcal{S} := \{S_{bs}\}$ else $\mathcal{S} := \emptyset$ end if
5:   for $i = 1, 2, \ldots, n_s$ do          # note that this is done in parallel
6:     $S :=$ GenerateSolution($S_{bs}$)
7:     $\mathcal{S} := \mathcal{S} \cup \{S\}$
8:   end for
9:   $P :=$ Partition($\mathcal{S}$)
10:  $P' :=$ RandomSplit($P$, $K$)
11:  $S_{ib} :=$ Apply_MIP_Solver($P'$, $S_{bs}$, $t_{iter}$)
12:  if $S_{bs} =$ NULL or $f(S_{ib}) < f(S_{bs})$ then $S_{bs} := S_{ib}$ end if
13: end while
14: output: $S_{bs}$
First, the algorithm initialises the best-so-far solution to NULL—that is, $S_{bs} :=$ NULL. The main loop of the algorithm executes between Lines 3 and 12 until the termination criterion is met. For the experiments in this study, we impose a time limit of one hour of wall-clock time. A number of feasible solutions ($n_s$) are constructed between Lines 5 and 8, and all solutions found are added to the solution set $\mathcal{S}$, which only contains the best-so-far solution $S_{bs}$ at the start of each iteration (if $S_{bs} \ne$ NULL). In this study, we consider two methods for generating solutions: (1) a constructive heuristic and (2) ACO. The details of both methods are provided in the subsequent sections. In both methods, the $n_s$ solutions are constructed in parallel, leading to a very quick solution generation procedure. In the case of ACO, $n_s$ threads execute $n_s$ independent ACO colonies, leading to $n_s$ independent solutions (see Section 3.3 for full details).
The variables from $V$ are then partitioned on the basis of the solutions in $\mathcal{S}$. In particular, a partition $P = \{P_1, \ldots, P_p\}$ is generated such that:
  • $P_i \cap P_j = \emptyset$ for all $P_i, P_j \in P$ with $i \ne j$.
  • $\bigcup_{i=1}^{p} P_i = V$.
  • $S_{jt} = S_{rt}$ $\forall S \in \mathcal{S}$, $\forall z_{jt}, z_{rt} \in P_i$, $\forall P_i \in P$. That is, within each solution $S \in \mathcal{S}$, all the variables in the same set of $P$ take the same value.
Partition $P$ is generated in function Partition($\mathcal{S}$) (see Line 9 of Algorithm 1).
Depending on the solutions in $\mathcal{S}$, the number of sets in $P$ can vary greatly. For example, a large number of similar solutions will lead to very few sets. Hence, an additional step is used in function RandomSplit($P$, $K$) to (potentially) augment the number of sets, depending on parameter $K$. More specifically, this function randomly splits each set from $P$ into $K$ disjoint subsets of equal size (if possible), generating in this way an augmented partition $P'$. The concepts of partitioning and random splitting are further explained with the help of an example in the next section.
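The following C++ sketch illustrates one way Partition and RandomSplit could be implemented; the data layout (solutions as flat 0/1 vectors indexed by variable) is our assumption, not the authors' implementation. Variables whose value vectors across all pooled solutions are identical end up in the same block.

```cpp
#include <algorithm>
#include <map>
#include <random>
#include <vector>

using Solution = std::vector<char>;   // value of every variable (0/1)
using Block    = std::vector<int>;    // indices of variables in one block

// Partition: group variables by their "signature", i.e. their value in
// every solution of the pool.
std::vector<Block> partitionVars(const std::vector<Solution>& pool, int numVars) {
    std::map<std::vector<char>, Block> bySignature;
    for (int v = 0; v < numVars; ++v) {
        std::vector<char> signature;
        for (const auto& s : pool) signature.push_back(s[v]);
        bySignature[signature].push_back(v);
    }
    std::vector<Block> blocks;
    for (auto& kv : bySignature) blocks.push_back(std::move(kv.second));
    return blocks;
}

// RandomSplit: split every block into (up to) K randomly chosen,
// near-equally sized sub-blocks.
std::vector<Block> randomSplit(std::vector<Block> blocks, int K, std::mt19937& rng) {
    std::vector<Block> out;
    for (auto& b : blocks) {
        std::shuffle(b.begin(), b.end(), rng);
        int size = static_cast<int>(b.size());
        int parts = std::min(K, size);
        for (int p = 0; p < parts; ++p) {
            Block sub;
            for (int i = p; i < size; i += parts) sub.push_back(b[i]);
            out.push_back(std::move(sub));
        }
    }
    return out;
}
```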
Next, an MIP solver is applied in function Apply_MIP_Solver($P'$, $S_{bs}$, $t_{iter}$) to a restricted MIP, which is obtained from the original MIP by adding the following constraints: $z = z'$, $\forall z, z' \in P_i$, $\forall P_i \in P'$—that is, the variables from the same set must take the same value in any feasible solution. This ensures that each of the solutions in $\mathcal{S}$ is feasible for the restricted MIP. Moreover, solution $S_{bs}$ is used for warm-starting the MIP solver (if $S_{bs} \ne$ NULL). Note that $S_{bs}$ is always a feasible solution to the restricted MIP because it forms part of the set of solutions used for generating $P'$. The restricted MIP is solved with a time limit of $t_{iter}$ seconds. Since $S_{bs}$ is provided as an initial solution to the MIP solver, this always produces a solution that is at least as good as $S_{bs}$, and often an even better solution in the neighbourhood of the current solution set $\mathcal{S}$. Improved solutions lead to updating the best-so-far solution (see Line 12) and, in the final step, the algorithm returns the best-so-far solution (Line 14).

3.1.1. MS Intuition

The following example illustrates how partitioning and random splitting are achieved in MS. Figure 2 deals with a simple example instance of the RCJS problem with three jobs that must be executed on the same machine. Moreover, the graphic shows the values of the $z$-variables of three different solutions, where, for each job, each of the three rows of binary values represents the variable values of one solution. Note that these three solutions lead to the set of variables $V$ being partitioned into six sets, as indicated by the background of the cells being shaded in different levels of grey. More specifically, the partition $P$ corresponding to the example in Figure 2 consists of the following six sets (ordered from the lightest shade of grey to the darkest):
  • $P_1 := \{z_{1,1}, \ldots, z_{1,3}, z_{2,1}, \ldots, z_{2,4}, z_{3,1}, \ldots, z_{3,3}\}$
  • $P_2 := \{z_{1,4}, z_{2,5}\}$
  • $P_3 := \{z_{1,5}, \ldots, z_{1,7}, z_{2,6}, \ldots, z_{2,8}\}$
  • $P_4 := \{z_{3,4}, \ldots, z_{3,6}\}$
  • $P_5 := \{z_{3,7}\}$
  • $P_6 := \{z_{1,8}, \ldots, z_{1,11}, z_{2,9}, \ldots, z_{2,11}, z_{3,8}, \ldots, z_{3,11}\}$
The sets from Figure 2 can now be used to generate the restricted MIP, potentially leading to a better solution than the original three solutions used to generate it. However, the sets can be limiting (since there may be very few of them), and hence random splitting may be used to generate more sets in order to expand the neighbourhood around the current set of solutions that will be searched by the restricted MIP. Figure 3 shows such an example with $K = 2$, where the original set $P_3$ was split further (darkest shade) into the subsets $\{z_{1,5}, z_{1,6}, z_{2,6}\}$ and $\{z_{1,7}, z_{2,7}, z_{2,8}\}$, and the original set $P_4$ was split into the subsets $\{z_{3,4}, z_{3,5}\}$ and $\{z_{3,6}\}$ (black). Solving the resulting restricted MIP allows for a larger number of solutions and potential improvements. Note, however, that splitting too many times—that is, with a large value of $K$—can lead to a very complex MIP. For sufficiently large $K$, each set $P_i \in P'$ is a singleton and the “restricted” MIP is simply the original problem.

3.1.2. Reasons for Choosing Model 2 in MS

The reasons for choosing Model 2 over Model 1 in the context of MS are as follows. First, general-purpose MIP solvers are more efficient at solving Model 2. Second, the variables of Model 2 adapt better to the way in which variables are grouped into sets in MS. Attempting the same aggregation using Model 1 (the model defined in Section 2.3) would lead to several inefficiencies. For example, it would only be possible to identify very few sets, most of which would be disjoint, and random splitting would not be effective. By contrast, in Model 2, if multiple solutions include $z_{jt} = 1$ for some variable $z_{jt}$, this simply means that, in all of these solutions, job $j$ is completed at time $t$ or earlier, even if the solutions differ in exactly when job $j$ is completed. To make this more concrete, consider the example in Figure 2 with the only requirement being that each job is scheduled exactly once. The merge neighbourhood of Model 2 permits any combination of starting times of jobs 1 and 2 at (3,4), (4,5) or (7,8), respectively, combined with job 3 starting at times 3, 6 or 7, for a total of nine possible solutions. By contrast, if we used Model 1, then the only binary patterns generated by the $x_{jt}$ solutions would be (0,0,0), (1,0,0), (0,1,0) and (0,0,1), and the merge search neighbourhood would only include the three original solutions.

3.2. Construct, Merge, Solve and Adapt

Algorithm 2 presents the pseudo-code of the CMSA heuristic for the RCJS problem. The inputs to the algorithm are (1) an RCJS problem instance, (2) the number of solutions constructed per iteration ($n_s$), (3) the total computational (wall-clock) time ($t_{total}$), (4) the wall-clock time limit of the MIP solver at each iteration ($t_{iter}$), and (5) the maximum age limit ($a_{max}$). The algorithm maintains a set of variables, $V'$, which is a subset of the total set of variables in the MIP model, denoted by $V$. In contrast to MS, CMSA uses Model 1 for the restricted MIP solved at each iteration. In the context of CMSA, a valid solution $S$ to the RCJS problem is a subset of $V$—that is, $S \subseteq V$. The corresponding solution is obtained by assigning the value 1 to all variables in $S$ and 0 to all variables in $V \setminus S$. Again, $f(S)$ represents the objective function value of solution $S$.
Algorithm 2 CMSA for the RCJS Problem.
1: input: An RCJS instance, $n_s$, $t_{total}$, $t_{iter}$, $a_{max}$
2: Initialisation: $V' := \emptyset$, $S_{bs} := \emptyset$, $a_{jt} := 0\ \forall x_{jt} \in V$
3: while time limit $t_{total}$ not expired do
4:   for $i = 1, 2, \ldots, n_s$ do          # note that this is done in parallel
5:     $S_i :=$ GenerateSolution()
6:     $V' := V' \cup S_i$
7:   end for
8:   $S_{ib} :=$ Apply_MIP_Solver($V'$, $S_{bs}$, $t_{iter}$)
9:   if $f(S_{ib}) < f(S_{bs})$ then $S_{bs} := S_{ib}$ end if
10:  Adapt($V'$, $S_{bs}$, $a_{max}$)
11: end while
12: output: $S_{bs}$
The algorithm starts by initialising the relevant variables and parameters: (1) $V' := \emptyset$, where $V'$ is the subset of variables to be considered by the restricted MIP, (2) $S_{bs} := \emptyset$—that is, no best-so-far solution exists yet—and (3) $a_{jt} := 0\ \forall x_{jt} \in V$, where $a_{jt}$ is the so-called age value of variable $x_{jt}$. With this last action, all age values are initialised to zero.
The main loop executes between Lines 3 and 11 and runs up to a time limit which, as mentioned earlier, is one hour of wall-clock time. As in MS, $n_s$ solutions are constructed at each iteration. Remember that the variables contained in a solution $S$ indicate the completion times of every job. As specified earlier, for the purpose of this study, we investigate a constructive heuristic and ACO. Each solution that is found is incorporated into $V'$ (Line 6) by setting a flag for the associated variables to be free when solving the restricted MIP in function Apply_MIP_Solver($V'$, $S_{bs}$, $t_{iter}$) (Line 8). In other words, the restricted MIP is obtained from the original/complete one by only allowing the variables from $V'$ to take on the value 1. As in the case of MS, solving the restricted MIP is warm-started with the best-so-far solution $S_{bs}$ (if any). The restricted MIP is solved with a time limit of $t_{iter}$ seconds and returns a possibly improved solution $S_{ib}$. Note that this solution is at least as good as the original seed solution. In Line 9, a solution improving on $S_{bs}$ is accepted as the new best-so-far solution. In Adapt($V'$, $S_{bs}$, $a_{max}$), the age value of each variable in $V'$ is incremented, except for those that appear in $S_{bs}$, whose age values are reset to zero. Any variable whose age exceeds $a_{max}$ is removed from $V'$. After termination, the best-so-far solution found is output by the algorithm.
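The following C++ sketch illustrates this Adapt step under an assumed representation of $V'$ and the age values; it is not the authors' implementation.

```cpp
#include <map>
#include <set>

// restrictedVars is V' (indices of variables allowed to take value 1);
// bestSolution holds the variables with value 1 in the best-so-far solution.
void adapt(std::set<int>& restrictedVars,
           const std::set<int>& bestSolution,
           std::map<int, int>& age, int aMax) {
    for (auto it = restrictedVars.begin(); it != restrictedVars.end(); ) {
        if (bestSolution.count(*it)) {
            age[*it] = 0;                      // protected: appears in S_bs
            ++it;
        } else if (++age[*it] > aMax) {
            age.erase(*it);
            it = restrictedVars.erase(it);     // too old: drop from V'
        } else {
            ++it;
        }
    }
}
```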

3.2.1. Reasons for Choosing Model 1 in CMSA

As mentioned above, CMSA makes use of Model 1, as defined in Section 2.3, for the restricted MIP. This is indeed the most natural formulation for CMSA, given that we can specify exactly which variables should take value zero and which should be left free. However, note that it would also be possible to use Model 2 instead. In this case, a range of variables would have to be left free, covering the range between the earliest and latest times at which a job can be completed. However, we found in preliminary experiments that this results in very long run-times, as there are many more open variables, which leads to an inefficient overall procedure.

3.2.2. CMSA Intuition

Figure 4 shows the same solutions as in Figure 2. However, the variable values displayed in this graphic are now according to Model 1—that is, only the variables corresponding to the finishing times of the three jobs in the three solutions take value one. All other variables take value zero. When the restricted MIP is solved, these are the only times that will be allowed for the jobs to complete.
The size of the search space spanned by the restricted MIP in CMSA is controlled by the number of solutions generated at each iteration ($n_s$) and by the degree of determinism used for generating these solutions. For example, the higher $n_s$ and the lower the degree of determinism, the larger the search space of the restricted MIP. Ideas similar to random splitting could also be used within CMSA, where more variables could be freed than only those that appear in the solution pool. This may be useful for problems that require solving an MIP with a large number of variables to find better solutions. However, the original implementation [19] did not use such a mechanism and hence we do not explore it here.

3.3. Parallel Ant Colony Optimisation

An ACO model for the RCJS was originally proposed by [27]. This approach was extended to a parallel method in a multi-core shared memory architecture by [29]. For the sake of completeness, the details of the ACO implementation are provided here.
As in the case of the constructive heuristic, a solution in the ACO model is represented by a permutation of all tasks ($\pi$). This is because there are potentially too many parameters if the ACO model is defined to explicitly learn the finishing times of the tasks. Given a permutation, a serial scheduling heuristic (see [35]) can be used to generate a resource and precedence feasible schedule consisting of finishing times for all tasks in a well-defined way. This is described in Section 3.3.1, below. Moreover, based on the finishing times, the MS/CMSA solutions can be derived. The objective function value of an ACO solution $\pi$ is denoted by $f(\pi)$.
The pheromone model of our ACO approach is similar to that used by [36]—that is, the set of pheromone values ($\Phi$) consists of values $\tau_{ij}$ that represent the desirability of selecting job $j$ for position $i$ in the permutations to be built. Ant colony system (ACS) [37] is the specific ACO variant that was implemented.
The ACO algorithm is shown in Algorithm 3. An instance of the problem and the set of pheromone values $\Phi$ are provided as input. Additionally, a solution ($\pi_{bs}$) can be provided as an input, which serves the purpose of initially guiding the search towards this solution. If no solution is provided, $\pi_{bs}$ is initialised to be an empty solution.
Algorithm 3 ACO for the RCJS Problem.
1: input: An RCJS instance, $\Phi$, $\pi_{bs}$ (optional)
2: Initialise $\pi_{bs}$ (if given as input; otherwise empty)
3: while termination conditions not satisfied do
4:   for $j = 1$ to $n_{ants}$ do $\pi_j :=$ ConstructSolution($\Phi$) end for
5:   $\pi_{ib} := \operatorname{argmin}_{j = 1, \ldots, n_{ants}} f(\pi_j)$
6:   $\pi_{ib} :=$ Improve($\pi_{ib}$)
7:   $\pi_{bs} :=$ Update($\pi_{ib}$)
8:   PheromoneUpdate($\Phi$, $\pi_{bs}$)
9: end while
10: output: $\pi_{bs}$ (converted into an MS/CMSA solution)
The main loop of the algorithm (Lines 3–9) runs until a time or iteration limit is exceeded. Within the main loop, a number of solutions ($n_{ants}$) are constructed (ConstructSolution($\Phi$)). Hereby, a permutation $\pi$ is built incrementally from left to right by selecting, at each step, a task for the current position $i = 1, \ldots, n$, making use of the pheromone values. Henceforth, $\hat{J}$ denotes the tasks that can be chosen for position $i$—that is, $\hat{J}$ consists of all tasks not already assigned to an earlier position of $\pi$. In ACS, a task is selected in one of two ways. A random number $q \in (0, 1]$ is generated, and a task is selected deterministically if $q < q_0$. That is, task $k$ is chosen for position $i$ of $\pi$ using

$$k = \operatorname{argmax}_{j \in \hat{J}} \; \tau_{ij}.$$

Otherwise, a probabilistic selection is used, where job $k$ is selected according to

$$P(\pi_i = k) = \frac{\tau_{ik}}{\sum_{j \in \hat{J}} \tau_{ij}}.$$

Every time a job $k$ is selected at position $i$, a local pheromone update is applied:

$$\tau_{ik} \leftarrow \max\{(1.0 - \rho) \times \tau_{ik}, \; \tau_{min}\},$$

where $\tau_{min} = 0.001$ is a small value that ensures that job $k$ may always be selected for position $i$.
After the construction of $n_{ants}$ solutions, the iteration-best solution $\pi_{ib}$ is determined (Line 5). This solution is improved by way of local search (Improve($\pi_{ib}$)), as discussed in [29]. The global best solution $\pi_{bs}$ is potentially updated in function Update($\pi_{ib}$): if $f(\pi_{ib}) < f(\pi_{bs})$, then $\pi_{bs} := \pi_{ib}$. Then, all pheromone values from $\Phi$ are updated using the solution components of $\pi_{bs}$ in function PheromoneUpdate($\pi_{bs}$):

$$\tau_{i\pi(i)} = \tau_{i\pi(i)} \cdot (1.0 - \rho) + \delta,$$

where $\delta := Q / f(\pi_{bs})$ and $Q$ is a factor introduced to ensure that $0.01 \le \delta \le 0.1$. The value of the evaporation rate $\rho$ is set to 0.1—the same value used in the original study [35].
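As an illustration, the following C++ sketch implements the ACS decision rule and the two pheromone updates described above, under assumed data structures (a dense pheromone matrix `tau` and a non-empty list of eligible jobs); it is a sketch, not the authors' implementation.

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Choose a job for `position` from the eligible set J^ (assumed non-empty).
int selectJob(const std::vector<std::vector<double>>& tau, int position,
              const std::vector<int>& eligible, double q0, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    if (uni(rng) < q0) {                          // deterministic: argmax rule
        int best = eligible[0];
        for (int j : eligible)
            if (tau[position][j] > tau[position][best]) best = j;
        return best;
    }
    double sum = 0.0;                             // probabilistic: roulette wheel
    for (int j : eligible) sum += tau[position][j];
    double r = uni(rng) * sum, acc = 0.0;
    for (int j : eligible) { acc += tau[position][j]; if (acc >= r) return j; }
    return eligible.back();
}

// Local update: evaporate the pheromone of the chosen (position, job) pair,
// bounded below by tauMin so the job can always be selected again.
void localUpdate(std::vector<std::vector<double>>& tau, int i, int k,
                 double rho, double tauMin) {
    tau[i][k] = std::max((1.0 - rho) * tau[i][k], tauMin);
}

// Global update: reinforce the best-so-far permutation piBest with deposit delta.
void globalUpdate(std::vector<std::vector<double>>& tau,
                  const std::vector<int>& piBest, double rho, double delta) {
    for (std::size_t i = 0; i < piBest.size(); ++i)
        tau[i][piBest[i]] = tau[i][piBest[i]] * (1.0 - rho) + delta;
}
```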

3.3.1. Scheduling Jobs

Given permutation π of all jobs, a resource and precedence feasible solution specifying the start times for every job can be obtained efficiently [23]. This procedure is also called the serial scheduling heuristic.
Jobs are considered in the order in which they appear in the permutation $\pi$. A job is selected and examined to see if its preceding jobs have been completed. If so, the job is scheduled as early as possible, respecting the resource constraints. If not, the job is placed on a waiting list. Whenever a job is scheduled, the waiting list is examined to see if any waiting job can now be scheduled. If yes, the waiting job is immediately scheduled (after its preceding job(s)) and the waiting list is re-examined. This repeats until the waiting list is empty or no other job on the waiting list can be scheduled. At this point, the algorithm returns to consider the next job from $\pi$.
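The following simplified C++ sketch conveys the core idea: each job is placed at the earliest time respecting its release date, machine availability, completed predecessors and the resource cap. It assumes the permutation is consistent with the precedence relations and the horizon is long enough, and it omits the waiting-list bookkeeping of the full heuristic.

```cpp
#include <algorithm>
#include <vector>

struct Job {
    int release, processing, machine;
    double resource;
    std::vector<int> predecessors;   // assumed to appear earlier in pi
};

// Returns the completion time of every job; horizon assumed large enough.
std::vector<int> serialSchedule(const std::vector<Job>& jobs,
                                const std::vector<int>& pi,
                                int numMachines, double G, int horizon) {
    std::vector<double> used(horizon, 0.0);        // resource usage per period
    std::vector<int> machineFree(numMachines, 0);  // earliest free time per machine
    std::vector<int> finish(jobs.size(), 0);
    for (int j : pi) {
        int start = std::max(jobs[j].release, machineFree[jobs[j].machine]);
        for (int p : jobs[j].predecessors)
            start = std::max(start, finish[p]);
        while (true) {                             // slide right until resource-feasible
            bool fits = true;
            for (int t = start; t < start + jobs[j].processing; ++t)
                if (used[t] + jobs[j].resource > G) { fits = false; start = t + 1; break; }
            if (fits) break;
        }
        for (int t = start; t < start + jobs[j].processing; ++t)
            used[t] += jobs[j].resource;           // occupy the resource
        finish[j] = start + jobs[j].processing;
        machineFree[jobs[j].machine] = finish[j];
    }
    return finish;
}
```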

3.3.2. Using Parallel ACO within MS and CMSA

As mentioned earlier, parallelization is achieved by running each colony on its own thread, without any information sharing. Since several colonies run concurrently, a larger (total) run-time allowance can be provided to the solution construction components of MS and CMSA. Note that the ACO algorithm may be seeded with a solution (see Algorithm 3). This effectively biases the search process of ACO towards the seeding solution. In case no such solution is provided, the ACO algorithm runs without any initial bias. Since the restricted MIPs of MS and CMSA benefit greatly from diversity, one of the $n_s$ colonies is seeded with the current best-so-far solution of MS or CMSA, respectively, while the other colonies do not receive any seeding solution. (Note that we performed tests in which two or more of the colonies were seeded with the best solution. We found no significant difference for up to five seeded colonies, after which the solutions were worse. Hence, we chose to seed one colony with the best solution.)
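A minimal C++/OpenMP sketch of this parallel scheme is shown below; `runColony` is a placeholder stub for the colony loop of Algorithm 3, and the function names are our assumptions, not the authors' API.

```cpp
#include <omp.h>
#include <vector>

// Placeholder for Algorithm 3: in the real algorithm this runs ACS and
// returns the best permutation found by the colony.
std::vector<int> runColony(const std::vector<int>* seed) {
    return seed ? *seed : std::vector<int>{};      // stub
}

// Run ns independent colonies concurrently; only colony 0 is biased
// towards the incumbent best-so-far solution, the rest start unbiased.
std::vector<std::vector<int>> generateSolutions(int ns,
        const std::vector<int>& bestSoFar, bool haveBest) {
    std::vector<std::vector<int>> solutions(ns);
    #pragma omp parallel for num_threads(ns)
    for (int c = 0; c < ns; ++c) {
        const std::vector<int>* seed = (c == 0 && haveBest) ? &bestSoFar : nullptr;
        solutions[c] = runColony(seed);
    }
    return solutions;
}
```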

4. Motivation and Hypotheses

In this section we briefly outline the motivation behind the solution-merging-based approaches and some hypotheses regarding the behaviour of the algorithms that were to be tested informally in the empirical results. There are several interrelated aspects of the algorithms to be investigated and we broadly categorise these by their similarities and differences.
Learning patterns of variable values: Given a population of solutions, both algorithms learn information about patterns of variable values that are likely to occur in any (good) solution. This aspect is similar to other population-based metaheuristics, such as genetic algorithms [38].
The main difference between construct, merge, solve and adapt (CMSA) and merge search (MS) is that the former focuses on identifying one large set of variables that have a fixed value in good solutions; the remaining variables are subject to optimisation. MS, on the other hand, looks for aggregations of variables—that is, groups of variables that have a consistent (identical) value within good solutions—while their specific value is subject to optimisation. In the case of MS, very large populations can still lead to a restricted mixed integer program (MIP) with reasonable run-times, since the method uses aggregated variables.
Static heuristic information vs learning: Constructive heuristics, such as greedy heuristics, are typical methods for generating solutions to most scheduling problems and we investigate one such method in this study. However, we are very interested to see if using a more costly learning mechanism can lead to inputs for MS and CMSA, such that their overall performance improves. This aspect is implemented with ant colony optimisation (ACO) in this paper. ACO is more likely to find good regions in the search space. However, running an ACO algorithm is computationally much more expensive than generating a single solution. We aim to identify if this trade-off is beneficial.
Strong bias towards improvements over time: Both methods generate, at each iteration, restricted MIPs whose search space includes all the solutions that contributed to the definition of the MIPs, in addition to combinations of those. Hence, the solution generated as a result of the merge step is at least as good as the best one of the input solutions. The question here is whether, in the absence of any hill-climbing mechanism, relying only on random solution generation is sufficient to prevent these methods from becoming stuck in local optima.
Population size: As with any neighbourhood search or population based approach we expect there to be a trade-off between diversification and intensification, which—in MS and CMSA—is essentially controlled both by the size and by the diversity of the populations used in each merge step. Given the difference in the merge operations, we can expect the best-working population size to be somewhat different for the two algorithms. In fact, we expect it to be smaller for CMSA as compared to MS.
Random splitting in MS: This mechanism is nearly equivalent to increasing the search space by throwing in more solutions, except that (a) it is faster than generating more solutions and (b) it provides some extra flexibility that might be hard to achieve in problems that are very tightly constrained and, hence, have relatively few and quite varied solutions.
Neighbourhood size: For a given set of input solutions, we expect the restricted MIPs in CMSA to have a larger search space in the neighbourhood of the input solutions than the MIPs in MS (with random splitting based on K = 2 ) leading to better solutions from the merge step, but leading to longer computation times for each iteration. This aspect can change substantially with an increasing value of K.

5. Experiments and Results

C++ was used to implement the algorithms, and the implementations were compiled with GCC-5.2.0. The mixed integer programming (MIP) component was implemented using Gurobi 8.0.0 [39] and the parallel ant colony optimisation (ACO) component using OpenMP [40]. The experiments were conducted on Monash University’s Campus Cluster with nodes of 24 cores and 256 GB RAM. Each physical core consisted of two hyper-threaded cores with Intel Xeon E5-2680 v3 2.5GHz, 30M Cache, 9.60GT/s QPI, Turbo, HT, 12C/24T (120W).
The experiments were conducted on a dataset from [23]. This dataset consists of problem instances with 3 to 20 machines, with three instances per machine size. There is an average of 10.5 jobs per machine. This means that an instance with 3 machines has approximately 32 jobs. Further details of the problem instances, and how the job characteristics (processing times, release times, weights, etc.) were determined, can be obtained from the original study.
To compare against existing methods for resource constrained job scheduling (RCJS), we ran the column generation and ACO hybrid (CGACO) of [28], the column generation and differential evolution hybrid (CGDELS) of [41], the MIP (Model 2, which is the most efficient), column generation (CG) on its own, and parallel ACO. The results for the MIP, CG and ACO are presented in Appendix D and are not discussed in the following sections, as they prove not to be competitive.
Thirty runs per instance were conducted and each run was allowed one hour of wall-clock time. Based on the results obtained in Section 5.4, 15 cores were used for each run (that is, Gurobi uses 15 cores when solving the involved MIPs and the ACO component is run with $n_s = 15$). To allow a fair comparison with CGACO and CGDELS, these algorithms were run on the same infrastructure using 15 cores per run. The parameter settings for the individual merge search (MS) and construct, merge, solve and adapt (CMSA) runs were obtained by systematic testing (see Appendix B and Appendix C). The detailed results are provided in the following sections. The parameter settings for each individual ACO colony were the same as those used in [27,29]: $\rho = 0.1$, $q_0 = 0.9$ and $n_{ants} = 10$.
The result tables presented in the next sections have the following format. The first column shows the name of the problem instance (e.g., 3–5 is an instance with 3 machines and id 5). For each algorithm, we report the value of the best solution found in 25 runs (Best), the average solution quality across 25 runs (Mean), and the corresponding standard deviation (SD). The number of performed iterations, as an average across the 25 runs, is also provided (Iter.). The best results in each table are marked in boldface. Moreover, all statistically significant results, obtained by conducting a pairwise t-test with a confidence level of 95%, are marked in italics.

5.1. Study of Merge Search

MS relies on a “good” diverse pool of solutions to perform well. There are two approaches one could take to this: (1) simply constructing a diverse set of random solutions as quickly as possible, or (2) searching for a population of good solutions in the neighbourhood of the best found. In the literature, CMSA takes the first approach, while MS takes the second. We conducted experiments with a constructive heuristic (see Appendix A) and ACO (Table 1) for the first and second approaches, respectively. The parameters for MS are the MIP time limit ($t_{iter} = 120$ s), the number of ACO iterations (5000) and the value of the random splitting parameter ($K = 2$). Parameter values were chosen according to parameter tuning experiments (see Appendix B).
We see that using ACO within MS (called MS-ACO) is far superior to using the constructive heuristic (called MS-Heur). ACO provides a distinct advantage across all problem instances, which must be due to the fact that the solutions used for generating the restricted MIPs are very good, thanks to the computation time invested in learning (made efficient via parallelisation). Not surprisingly, the number of iterations performed by MS-Heur within the given time limit is much larger than that of MS-ACO. This is mainly because the solution construction in MS-ACO lasts more than 5000 times longer than that of MS-Heur. This demonstrates conclusively that, for MS, having a set of good solutions clustered around the best known solution is better than having a diverse set of randomly generated solutions. Next, we test whether the same conclusion holds for CMSA.

5.2. Study of CMSA

As with MS, we can also use the constructive heuristic (labelled CMSA-Heur) or ACO (labelled CMSA-ACO) within CMSA to generate solutions. The parameters of CMSA, including the time limit for solving the restricted MIPs ($t_{iter} = 120$ s), the number of ACO iterations (5000) and the maximum age limit ($a_{max} = 5$ for instances with 8 or fewer machines, and $a_{max} = 3$ for instances with 9 or more machines), were determined as a result of the parameter tuning experiments (see Appendix C). As in the case of MS, we can observe that ACO (Table 2) provides a distinct advantage across all problem instances.
Overall, constructing the solutions with ACO seems to help the iterative process of CMSA to focus on very good regions of the search space. For RCJS, the results demonstrate that, irrespective of the details of the merge method, solution merging works much better as an additional step for improving the results of a population-based metaheuristic than as the main solution intensification method on its own.
In contrast to the case of MS-Heur and MS-ACO, it is interesting to observe that the number of iterations performed by CMSA-Heur is of the same order of magnitude as that of CMSA-ACO. Even though CMSA-Heur usually performs more iterations than CMSA-ACO, in a small number of cases—concerning the small problem instances with up to four machines, in addition to instance 5–21—CMSA-Heur conducts fewer iterations. Investigating this more closely by examining the restricted MIP models generated within the two CMSA versions, we found that the constructive heuristic provides slightly more diversity, as several ACO colonies converge to the same solution. In the context of CMSA, this leads to more variables in the restricted MIP and hence to a significant increase in MIP solving time. This increase in the time for the merge step consumes most of the time saved by the constructive heuristic being faster than ACO.

5.3. Comparing CGACO, CGDELS, BRKGA, MS and CMSA

We now investigate how the best versions of MS and CMSA (both using ACO for generating solutions) perform against the current state-of-the-art methods for the RCJS problem. Reference [28] showed that the CGACO hybrid is very effective, while CGDELS [41] and BRKGA [42] are the current state-of-the-art approaches. For a direct comparison, we ran these methods with the same computational resources and the same run-time limits. The results are shown in Table 3.
The comparison here is with respect to upper bounds, as we are only interested in feasible solutions in this study. We see that, within one hour of wall-clock time, CGACO is always outperformed by MS and CMSA. With increasing problem size, the differences are accentuated. The comparison with CGDELS shows that MS and CMSA perform better on 20/36 problem instances. For the smallest problem instances (3–5 machines), MS and/or CMSA are best. The results are split for small to medium-sized instances (6–9 machines), followed by a clear advantage of CGDELS for medium to large instances (9–12 machines). For the largest instances, CMSA regains the advantage. The best-performing method is clearly BRKGA, but for the small to medium instances, MS and CMSA are able to find better solutions (on 11/36 problem instances).
Comparing MS and CMSA, we can observe that both algorithms are very effective in finding good solutions within the given time limit. MS finds the best solutions for nearly half of the instances (best in 17 out of 36), mainly for problem instances of small and medium size (up to 10 machines). CMSA, on the other hand, is very effective for small instances (up to 5 machines) and then more effective again on the larger instances (≥10 machines), finding the best solution in 26 out of 36 cases. For instances with 10 machines and beyond, CMSA is clearly the best-performing method. This aspect is also summarised in Figure 5, where, for each method, the average performance across instances with the same number of machines is plotted in terms of the percentage difference to the best performance. CGACO is always outperformed by all other methods. CGDELS performs best for problem instances with 9, 10, 11 and 12 machines; for the remaining machine sizes (except 15 machines), MS and CMSA are best. The case with 15 machines is interesting because CMSA and MS are generally more effective, but CGDELS is overwhelmingly more effective on one instance (15–2), thereby skewing the average. Overall, we see that MS is effective for instances with a low number of machines, while CMSA is more effective for the larger instances.
Comparing MS and CMSA in terms of iterations (Table 1 and Table 2) shows that MS performs many more iterations for problem instances of small and medium size (up to 8 machines). This is due to very small restricted MIPs being generated at each iteration in MS, which—in turn—is due to the large amount of overlap among the generated solutions. MS-ACO and CMSA-ACO are much closer to each other in terms of the number of iterations performed, but we see that CMSA-ACO generally performs fewer iterations. This is again due to the larger solving times of the MIPs, and it validates our hypothesis that the search space induced by CMSA is larger than that of MS, particularly when MS has very few variable sets.
The above experiments demonstrate the efficacy of MS-ACO and CMSA-ACO compared to the state-of-the-art methods for RCJS. It has been previously shown [28] that ACO on its own is not very effective for this problem. However, so far it remains unclear how much of the improvement in solution quality can be attributed to solution merging and how much to the ACO component of the search. To further understand this aspect, we measured the relative contribution of the merge step of MS and CMSA in the MS/CMSA hybrid—that is, the contribution obtained by solving the restricted MIPs. Table 4 shows these results as the percentage contribution of the merge step (MS/CMSA-MIP) relative to the total improvement (MS/CMSA+ACO). For example, suppose we have the following steps in one run of MS-ACO:
  • Solve MIP: starting objective 5000, final objective 4500: $g_1 = \frac{5000 - 4500}{5000} \times 100 = 10.0\%$.
  • Solve ACO: objective 4200.
  • Solve MIP: starting objective 4200, final objective 4000: $g_2 = \frac{4200 - 4000}{5000} \times 100 = 4.0\%$.
  • Solve ACO: objective 4000.
  • Solve MIP: starting objective 4000, final objective 3500: $g_3 = \frac{4000 - 3500}{5000} \times 100 = 10.0\%$.
The contribution of MS-MIP is $g_1 + g_2 + g_3 = 24\%$, while the total improvement of MS+ACO is $\frac{5000 - 3500}{5000} \times 100 = 30\%$. This calculation shows that, in this example, the MIP component plays a more substantial role than the ACO component in improving the solutions.
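The bookkeeping behind this metric can be expressed as a small helper; the following C++ sketch (our illustration, not the authors' code) computes the cumulative MIP contribution from a list of per-merge-step objective pairs.

```cpp
#include <utility>
#include <vector>

// Each entry of mipSteps holds (objective before, objective after) for one
// merge step; gains are measured relative to the initial objective.
double mipContribution(double initialObjective,
                       const std::vector<std::pair<double, double>>& mipSteps) {
    double total = 0.0;
    for (const auto& step : mipSteps)
        total += (step.first - step.second) / initialObjective * 100.0;
    return total;   // e.g. 10.0 + 4.0 + 10.0 = 24.0 (%) in the example above
}
```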
Table 4 provides the complete set of results for this solution improvement analysis. We see that, for a number of instances, the contributions of the merge step and ACO are similar. However, the cases in which the contribution of the merge step is more substantial concern the smaller and the medium-sized instances (e.g., 5–7 and 6–28), whereas ACO contributes more substantially in the context of the larger instances (e.g., with 12, 15, and 20 machines). This is not surprising, as ACO actually consumes the majority of the runtime of the algorithm in those cases, in which only a small number of iterations can be completed within the time limit.

5.4. Study Concerning the Number of Cores Used

Remember that the number of allowed cores influences two algorithmic components: (1) the ACO component, where the number of cores corresponds to the number of colonies and (2) the solution of the restricted MIPs (which is generally more efficient when more cores are available). However, with a growing number of allowed cores, the restricted MIPs become more and more complex, due to being based on more and more solutions. In any case, the number of allowed cores should make a significant difference to the performance of both MS-ACO and CMSA-ACO.
The results are presented in Figure 6 and Figure 7. The figures show, for each number of machines, the gap to the best solution found by any of the methods, averaged over 25 runs and over all instances with the same number of machines. The results for MS-ACO show that using 15 or 20 cores is preferable to using only 10 cores. The difference between 15 and 20 cores is small, with a slight advantage for 20 cores.
As mentioned above, the diversity of the solutions generated when using 20 cores leads to large restricted MIPs (with many more variables), which can be very time consuming to solve and even inefficient. Hence, on occasion, 15 cores are preferable to 20 cores. Compared to using 10 cores, using 15 or 20 cores provides sufficient diversity, leading the restricted MIPs to good areas of the search space within the 120 s time limit.
The results for CMSA-ACO show that, overall, the use of 15 or 20 cores is preferable to using only 10 cores. In CMSA-ACO, several instances are solved more efficiently with 20 cores. However, for the large instances (10 or more machines), the use of 15 cores is most effective. For CMSA, the effect of an increased number of solutions on the size and complexity of the restricted MIP used in the merge step is even more pronounced than for MS. Hence, overall, the use of 15 cores proves to be most effective.
The conclusion of this comparison is that, while solution merging benefits from having a number of different solutions available, a too large pool of solutions can also have a detrimental effect. For RCJS, using 15 solutions is generally the best option irrespective of the type of merge step used (MS vs. CMSA). To better understand the effect of the alternative solution merging approaches, we next test these using exactly the same pools of solutions.

5.5. Comparing MS and CMSA Using the Same Solution Pool

To remove the randomness associated with the generation of a population of solutions, we investigated the alternative merge steps of MS and CMSA using the same solution pool. For this direct comparison we carried out just one algorithm iteration for each instance. The solutions were obtained using ACO, using the same seed—that is, leading to exactly the same solutions available for generating the MIPs of both methods. The time limit provided for solving the restricted MIPs was again 120 s. Random splitting in MS-ACO was set as before to K = 2 , while the age limit in CMSA-ACO had no effect in this case.
With increasing neighbourhood size, the quality of the solution that can be found in this neighbourhood can be expected to improve. Hence, we use the solution quality as a proxy for the size of the space searched by the MIP subproblems in the two algorithms. Figure 8 shows the gap $\left(\frac{UB_{MS} - UB_{CMSA}}{UB_{CMSA}}\right)$, or difference in performance (in percent), after one algorithm iteration.
A positive value indicates that CMSA-ACO performed better than MS-ACO, whereas a negative value means the opposite. We see that CMSA-ACO is generally more effective for smaller problems, but this difference reduces in the context of the larger problems, where sometimes MS-ACO can be more effective.
Figure 9 shows a similar comparison to that in Figure 8, but instead considers the time required to solve the restricted MIPs. We see that, for small problems, MS-ACO is much faster (except for instance 4–61). This further validates the hypothesis that the search space of CMSA-ACO is larger than that of MS-ACO. From seven machines onwards, solving the MIPs exhausts the time limit in both MS-ACO and CMSA-ACO; hence, we see no difference.
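The gap measure of Figure 8 is simple enough to state as code (a hypothetical helper, not part of the original experiments):

```python
def gap_percent(ub_ms, ub_cmsa):
    """Signed gap (in percent) between the MS and CMSA upper bounds for
    the same solution pool; positive values mean that CMSA found the
    better (smaller) TWT, negative values favour MS."""
    return 100.0 * (ub_ms - ub_cmsa) / ub_cmsa

# MS reaches a TWT of 510 while CMSA reaches 505 from the same pool:
print(round(gap_percent(510.0, 505.0), 2))  # 0.99 -> CMSA better
```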

6. Discussion

From the experiments conducted, we make the following observations regarding the use of solution merging in the context of heuristic search for the RCJS problem. Both MS and CMSA are more effective when the populations contain solutions from good regions of the search space. While good solutions can be found with simple heuristics, the results show that the learning mechanism of ACO is critical to the performance of both algorithms. Although ACO requires substantially more computational effort, the quality of the solutions used to build the restricted MIPs is vital to the performance of the overall algorithm. This contrasts with the original CMSA proposal, in which a simple randomised heuristic generates the pool of solutions.
Both algorithms are very effective and achieve almost the same performance on the smallest problem instances. MS is more effective for medium-sized problem instances, whereas CMSA is more effective for the large ones. A key reason is that the solutions generated by ACO are often already very good for problem instances of small and medium size. Thus, the smaller search space of the restricted MIPs in MS, achieved through aggregating variables, allows the MIPs to be solved much more quickly than in CMSA. However, for the large problem instances, the restricted MIPs in CMSA are more diverse, enabling the algorithm to find solutions of better quality. Compared to CGACO, MS and CMSA perform significantly better across all problem instances given the run-time limits.
Given the stopping criterion of one hour of wall-clock time, a large amount of random splitting in MS does not seem beneficial. In fact, the time limit for solving the restricted MIPs (120 s) is too low for the increasingly large restricted MIPs obtained with more random splitting (even for problem instances with eight machines, for example). However, increasing this time limit does not prove useful either, because more and more of the total computational allowance is then spent on solving fewer and fewer restricted MIPs (leading to fewer algorithm iterations) without necessarily finding improving solutions.
As we hypothesized (Section 4), the way the restricted MIPs are generated in CMSA leads to a larger search space compared to that of MS. This is validated by the run-times: the restricted MIPs of CMSA usually take longer to solve than those of MS (when the 120 s time limit is not exhausted).
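The structural reason can be illustrated with a small Python sketch (a simplified illustration, not the authors' implementation) of how the two merge steps restrict the binary variables z_{j,t} of the discrete-time model: CMSA retains every variable that is set to 1 in at least one solution of the pool, whereas MS aggregates variables that agree across all solutions into single decision variables (which random splitting would then partition further, cf. Figure 3).

```python
def cmsa_restriction(pool):
    """CMSA: keep every (job, time) variable that equals 1 in at least
    one solution of the pool; all remaining variables are fixed to 0."""
    keep = set()
    for sol in pool:  # sol maps job -> 0/1 list over the time horizon
        for job, row in sol.items():
            keep.update((job, t) for t, v in enumerate(row) if v == 1)
    return keep

def ms_aggregation(pool):
    """MS: group (job, time) positions whose values agree across *all*
    solutions; each group is replaced by one aggregated variable."""
    groups = {}
    horizon = len(next(iter(pool[0].values())))
    for job in pool[0]:
        for t in range(horizon):
            signature = tuple(sol[job][t] for sol in pool)
            groups.setdefault(signature, []).append((job, t))
    return list(groups.values())

# Two solutions for two jobs over four time points (cf. Figure 2):
pool = [{1: [0, 0, 1, 1], 2: [0, 1, 1, 1]},
        {1: [0, 1, 1, 1], 2: [0, 0, 1, 1]}]
print(len(cmsa_restriction(pool)))  # 6 free binary variables for CMSA
print(len(ms_aggregation(pool)))    # 4 aggregated variables for MS
```

Even in this toy pool, CMSA leaves more degrees of freedom than MS, consistent with the longer CMSA run-times observed above.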
Analysing the parallel computing aspect, we observed that, with a total wall-clock time limit of one hour, using 15 cores leads to the best results for both MS and CMSA. Parallel computing is particularly useful when using a learning mechanism such as ACO, where more computational effort is needed to achieve good, diverse solutions. However, with too many cores (20 in our experiments), the performance of both methods generally drops, because the mechanism for utilising the additional cores results in a larger, more diverse set of solutions. CMSA in particular is severely affected for the large problem instances, where the restricted MIPs often cannot be improved beyond the seeding solutions within the time limit.
Finally, we would like to remark that, in the context of a discrete-time MIP formulation of a scheduling problem with a minimum Makespan objective, the restricted MIP could never produce a solution better than the best of the input solutions, because improvements in the objective function are limited to the values available in the inputs. Hence, an alternative MIP modelling approach should be considered in such cases, for example, one based on the order of the jobs (sequencing) without fixed completion times. This might even be beneficial for the problem considered in this work. Investigating this effect of the solution representation on solution merging for RCJS is beyond the scope of the current work, but represents a promising avenue for future research.

7. Conclusions

This study investigates the efficacy of two population-based matheuristics, merge search (MS) and construct, merge, solve and adapt (CMSA), for solving the resource constrained job scheduling (RCJS) problem. Both methods are shown to be more effective when hybridised with a learning mechanism, namely ant colony optimisation (ACO). Furthermore, the whole framework is parallelised in a multi-core shared-memory architecture, leading to large gains in run-time. We find that both hybrids are overall more effective than the individual methods on their own, and better than previous attempts to combine ACO with integer programming, which used column generation and Lagrangian relaxation in combination with ACO. Furthermore, MS and CMSA are competitive with the state-of-the-art hybrid of column generation and differential evolution, outperforming this method especially on small-to-medium and large problem instances. Comparing MS and CMSA, we see that both methods easily solve small problems (up to five machines), while MS is more effective for medium-sized problem instances (up to eight machines) and CMSA for large problem instances (from 11 machines onwards).
We investigate in detail several aspects of the algorithms, including their parallel components, the search spaces considered at each iteration, and algorithm-specific components (e.g., random splitting in MS). We find that parallel ACO is very important for identifying good areas of the search space, within which MS and CMSA can very efficiently find improving solutions. The search spaces considered by CMSA at each iteration are typically larger than those of MS, which is advantageous for large problem instances but generally disadvantageous for problem instances of medium size.

Future Work

The generic nature of MS and CMSA means that they can be applied to a wide range of problems. There are two main requirements: (1) a method for generating good (and diverse) feasible solutions and (2) an efficient model for an exact approach. Given these, both algorithms can be applied to other problems with little overhead and the promise of good results. Individually, they have already proven themselves on several problems [18,19,20,21], with more studies having been conducted with CMSA. Hence, there are several possibilities for applying both approaches to different problems. We are currently investigating the efficacy of MS and CMSA on resource constrained project scheduling with the objective of maximising the net present value [33,43,44,45].
The parallelisation is effective for both methods. Extending this aspect to a message-passing interface framework [46] holds great potential. In particular, running multiple MS or CMSA instances concurrently and passing (good) solutions between the nodes could make it possible to explore much larger search spaces. We are currently investigating a parallel MS approach for open-pit mining [47,48], with promising preliminary results.
We have briefly discussed the possibility of investigating additional mixed integer programming (MIP) models for use within MS and CMSA (Section 6). As we pointed out, a sequence-based formulation could be very effective for the RCJS problem. Moreover, given that several problems can be modelled by similar sequence-based formulations, this benefit can be expected to transfer to those problems in a straightforward manner. Furthermore, solvers other than MIP solvers could be used; for example, for very tightly constrained problems, constraint programming could prove very useful.
The similarities between MS and CMSA suggest that both algorithms could be combined into one high-level, generic procedure for solving combinatorial optimisation problems. Such an approach would combine the relative strengths of both methods and could prove beneficial on a wide range of applications. For example, a straightforward extension of CMSA is to incorporate random splitting of the search space; conversely, the age parameter could be incorporated into MS. In fact, such a method could be included within state-of-the-art commercial solvers to provide good heuristic solutions during the exploration of the branch-and-bound search tree.
Finally, note that computational intelligence techniques other than the ones utilised and studied in this work might be successfully applied to the considered problem. For a recent overview of alternative techniques, with a special focus on bio-inspired algorithms, we refer the interested reader to [49].

Author Contributions

Methodology, D.T., C.B. and A.T.E.; Writing—original draft, D.T.; Writing—review and editing, D.T., C.B. and A.T.E. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by project CI-SUSTAIN funded by the Spanish Ministry of Science and Innovation (PID2019-6GB-I00).

Acknowledgments

We acknowledge administrative and technical support by Deakin University (Australia), the Spanish National Research Council (CSIC), and Monash University (Australia).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

MIP: Mixed Integer Program
CMSA: Construct, Merge, Solve and Adapt
MS: Merge Search
ACO: Ant Colony Optimisation
RCJS: Resource Constrained Job Scheduling
TWT: Total Weighted Tardiness
J, M, C: The sets of jobs, machines and precedences, respectively
G: Maximum available resource
T: The time horizon
r, p, d, w, g: The release time, processing time, due time, weight and resource consumption of a job, respectively
c: Completion time of a job
s, u: Variables that represent start and completion times in the Network Flow model
y: Variable to include arcs in the Network Flow model
x: Variable that represents the completion time of a job in MIP model 1
z: Variable that represents the completion time of a job in MIP model 2
CMSA-Heur: CMSA with the constructive heuristic
MS-Heur: Merge search with the constructive heuristic
CMSA-ACO: CMSA with ACO
MS-ACO: Merge search with ACO
BRKGA: Biased Random Key Genetic Algorithm
CGACO: Column Generation and ACO
CGDELS: Column Generation, Differential Evolution and Local Search
π: A solution represented by a permutation of jobs
π^{ib}: The iteration-best solution
π^{bs}: The global best solution
ϕ: The pheromone trails
n_{ants}: Number of solutions to be constructed per iteration
f(π): TWT of solution π
τ: Pheromone value
ρ: Learning rate

Appendix A. Construction Heuristic

Using a constructive heuristic (in a probabilistic way) is one of the options for generating solutions at each iteration. The constructive heuristic that we developed builds a sequence of all jobs from left to right. For that purpose, it starts with an initially empty sequence $\pi$. At each construction step, it chooses exactly one of the so-far unscheduled jobs and appends this job to $\pi$. Henceforth, let $J_\pi \subseteq J$ be the set of jobs that are already scheduled with respect to a partial sequence $\pi$. For the technical description of the heuristic, let $\text{max}_t := \max_{j=1,\ldots,n} r_j + \sum_{j=1}^{n} p_j$.
Note that $\text{max}_t$ is a crude upper bound for the Makespan of any feasible solution. Moreover, let $C_j$ be the set of jobs that, according to the precedence constraints in $\mathcal{C}$, must be executed before $j$, and let $M_{m_h} \subseteq J$ be the subset of jobs that must be processed on machine $m_h$, $h = 1, \ldots, l$. Furthermore, given a partial solution $\pi$, let $g_{\pi,t} \geq 0$ be the sum of the resource already consumed at time $t$.
Given a partial sequence $\pi$, the set of feasible jobs, that is, the set of jobs from which the next job to be scheduled can be chosen, is defined as $\hat{J} := \{ j \in J \setminus J_\pi \mid C_j \cap J_\pi = C_j \}$. In words, the set of feasible jobs consists of those jobs that (1) are not scheduled yet and (2) whose predecessors with respect to $\mathcal{C}$ are already scheduled. A time step $t \geq 0$ is a feasible starting time for a job $j \in \hat{J}$ if and only if
  • $t \geq s_k + p_k$, for all $k \in J_\pi \cap C_j$;
  • $t \geq s_k + p_k$, for all $k \in M_{m_j} \cap J_\pi$ (remember that $m_j$ refers to the machine on which job $j$ must be processed);
  • $g_{\pi,t'} + g_j \leq G$, for all $t' = t, \ldots, t + p_j$.
Here, $T_j$ denotes the set of feasible starting times for a job $j \in \hat{J}$, and the earliest starting time $s_j$ is defined as $s_j := \min \{ t \mid t \in T_j \}$ (see the sketch below for the corresponding feasibility check). Finally, for choosing a feasible job at each construction step, the jobs from $\hat{J}$ are ordered in the following way. First, a job $j$ has priority over a job $k$ if $s_j < s_k$. In the case of a tie, job $j$ has priority over job $k$ if $w_j > w_k$. Finally, in the case of a further tie, job $j$ has priority over job $k$ if $d_j < d_k$. If there is still a tie, the order between $j$ and $k$ is chosen randomly. The jobs from $\hat{J}$ are ordered in this way, and the resulting job order is stored in a sequence $\hat{\pi}$. The first job in $\hat{\pi}$ is then chosen to be scheduled next. A pseudo-code of the heuristic is provided in Algorithm A1.
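Before turning to the pseudo-code, the three conditions above can be sketched as a feasibility check in Python; all data structures here are illustrative and not the authors' implementation.

```python
def is_feasible_start(t, j, sched, usage, inst):
    """Check whether t is a feasible start time for job j. sched maps
    already-scheduled jobs to their start times, and usage[t] is the
    resource already consumed at time t (the g_{pi,t} of the text)."""
    p, g, G = inst["p"], inst["g"], inst["G"]
    machine, pred = inst["machine"], inst["pred"]
    # (1) all precedence predecessors of j have completed by t
    if any(t < sched[k] + p[k] for k in pred[j]):
        return False
    # (2) all scheduled jobs on j's machine have completed by t
    if any(t < sched[k] + p[k] for k in sched if machine[k] == machine[j]):
        return False
    # (3) the resource limit G is respected while j executes
    return all(usage[tt] + g[j] <= G for tt in range(t, t + p[j] + 1))

inst = {"p": {1: 2, 2: 3}, "g": {1: 1, 2: 2}, "G": 2,
        "machine": {1: 0, 2: 0}, "pred": {1: [], 2: [1]}}
usage = [1, 1, 1, 0, 0, 0, 0, 0]  # job 1 consumes one resource unit over [0, 2]
print(is_feasible_start(2, 2, {1: 0}, usage, inst))  # False: resource clash at t = 2
print(is_feasible_start(3, 2, {1: 0}, usage, inst))  # True
```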
Finally, note that a solution constructed by the heuristic can easily be transformed into a merge search (MS) (respectively, construct, merge, solve and adapt (CMSA)) solution, because the heuristic derives starting times for all jobs. The finishing time of each job is calculated by adding its processing time to its starting time. The corresponding variable values of both mixed integer programming (MIP) models can then be derived from the finishing times.
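For instance, the binary variables z of MIP model 2 can be derived from the finishing times as follows (a small sketch consistent with the encoding described in Figure 2; names are illustrative):

```python
def z_from_completion(completion, horizon):
    """z[j][t] = 1 iff job j has completed by time point t, so the
    first 1 in each row marks the job's completion time (cf. Figure 2)."""
    return {j: [1 if t >= c else 0 for t in range(horizon + 1)]
            for j, c in completion.items()}

# Job 1 completes at time 3 and job 2 at time 4, over a horizon of 6:
z = z_from_completion({1: 3, 2: 4}, 6)
print(z[1])  # [0, 0, 0, 1, 1, 1, 1] -> z_{1,5} = 1, as in Figure 2
```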
This heuristic is used in a probabilistic way, as follows. Instead of always choosing, at each construction step, the first job from $\hat{\pi}$ and appending it to $\pi$, the following procedure is applied. First, a random number $r \in [0, 1]$ is produced. If $r \leq d_{rate}$, the first job from $\hat{\pi}$ is chosen and appended to $\pi$. Hereby, $d_{rate}$ is a parameter which we set to value Y for all experiments. Otherwise, the first $l_{size}$ jobs from $\hat{\pi}$ are placed into a candidate list, and one of these candidates is chosen uniformly at random and appended to $\pi$.
Algorithm A1 Constructive Heuristic.
input: An RCJS instance
Initialise an empty permutation $\pi$
$g_{\pi,t} := 0$, for all $t = 0, \ldots, \text{max}_t$
while $J_\pi \neq J$ do
    Let $j^*$ be the first job from $\hat{\pi}$ with earliest start time $s_{j^*}$
    $g_{\pi,t} := g_{\pi,t} + g_{j^*}$, for all $t \in \{ s_{j^*}, \ldots, s_{j^*} + p_{j^*} \}$
    Append $j^*$ to $\pi$
end while
output: $\pi$ together with the earliest starting times of each job
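To complement the pseudo-code, the following Python sketch mirrors the priority ordering and the probabilistic candidate-list selection described above. It omits the machine and resource feasibility checks (sketched earlier), so the earliest start time is simplified to the release time; the values d_rate = 0.9 and l_size = 3 are hypothetical, chosen only for illustration.

```python
import random

D_RATE, L_SIZE = 0.9, 3  # hypothetical parameter values

def priority_order(feasible, start):
    """Order jobs by earliest start time, then larger weight, then
    earlier due time; any remaining ties are broken randomly."""
    return sorted(feasible,
                  key=lambda j: (start(j), -j["w"], j["d"], random.random()))

def construct(jobs, start, rng=random):
    """Build a full permutation pi, choosing greedily with probability
    D_RATE and otherwise uniformly from the first L_SIZE candidates."""
    pi, remaining = [], list(jobs)
    while remaining:
        ordered = priority_order(remaining, start)
        if rng.random() <= D_RATE:
            chosen = ordered[0]                    # greedy: first job
        else:
            chosen = rng.choice(ordered[:L_SIZE])  # candidate-list choice
        pi.append(chosen)
        remaining.remove(chosen)
    return pi

jobs = [{"id": i, "r": i % 3, "p": 2, "d": 5 + i, "w": 1 + i % 3}
        for i in range(5)]
print([j["id"] for j in construct(jobs, start=lambda j: j["r"])])
```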

Appendix B. Merge Search Parameter Value Selection

The Merge search (MS) parameters of interest are the mixed integer program (MIP) time limit (120 and 300 s), the number of iterations in ant colony optimisation (ACO) (500, 1000, 2000 and 5000 iterations) and the number of random sets to split into (2, 4 and 6 sets); note that a larger number of split sets implies more diversity, especially since the sets are obtained randomly. Ten runs per instance were conducted, and all runs were given 30 min with 10 cores (the number of cores is also a parameter of interest, but the corresponding experiments are reported in Section 5.4). The results are presented in Table A1. For each parameter of interest, the results are averaged across the results obtained with all values of all the other parameters.
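The aggregation used in Table A1 (and likewise in Table A2) can be expressed as follows; this is an illustrative helper with made-up run records, not the original evaluation script:

```python
from collections import defaultdict
from statistics import mean

def average_by(runs, param):
    """Average the objective per value of `param`, taken across all
    settings of the remaining parameters, as in Tables A1 and A2."""
    buckets = defaultdict(list)
    for run in runs:
        buckets[run[param]].append(run["obj"])
    return {value: mean(objs) for value, objs in sorted(buckets.items())}

runs = [{"aco_iter": i, "mip_time": t, "splits": s,
         "obj": 700 - i / 100 + t / 60}
        for i in (500, 1000, 2000, 5000)
        for t in (120, 300)
        for s in (2, 4, 6)]
print(average_by(runs, "aco_iter"))
```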
Regarding the MIP time limit, we see that, overall, 120 s is preferable, particularly for eight machines or more. For four and six machines, the differences are small, with 300 s being slightly preferable for six machines.
The results concerning the number of iterations in ACO are straightforward, with a setting of 5000 iterations providing the best average results across all machine counts. This suggests that an even higher number of ACO iterations could be considered. However, for the 20-machine problem instances, only four MS iterations are completed when using 5000 ACO iterations; with even more ACO iterations, the MS component would no longer contribute significantly to the final solutions.
The effect of the amount of random splitting is almost independent of the number of machines. Marginally, the smallest number of split sets is the most beneficial setting (best for three out of the six machine counts).
Table A1. The results of experiments conducted to determine which set of parameter values leads to the best solutions for MS. For each parameter, the results are averaged across the results obtained by the settings of all other parameters.

Setting | 4 | 6 | 8 | 10 | 12 | 20 (machines)
ACO Iter. = 500 | 66.76 | 240.34 | 790.28 | 703.33 | 2729.48 | 9700.10
ACO Iter. = 1000 | 66.73 | 239.38 | 727.79 | 671.92 | 2636.11 | 9631.67
ACO Iter. = 2000 | 66.73 | 238.90 | 692.31 | 644.35 | 2546.22 | 9411.80
ACO Iter. = 5000 | 66.73 | 238.68 | 670.24 | 633.83 | 2454.07 | 9026.49
MIP Time = 120 | 66.73 | 238.75 | 665.19 | 633.44 | 2444.98 | 9013.80
MIP Time = 300 | 66.73 | 238.60 | 675.29 | 634.23 | 2463.16 | 9039.19
Split Sets = 2 | 66.73 | 238.55 | 664.28 | 641.77 | 2441.72 | 9000.46
Split Sets = 4 | 66.73 | 238.73 | 666.12 | 631.06 | 2443.50 | 9015.77
Split Sets = 6 | 66.73 | 238.51 | 665.18 | 627.48 | 2449.73 | 9025.16

Appendix C. Construct, Merge, Solve and Adapt Parameter Value Selection

The construct, merge, solve and adapt (CMSA) parameters of interest are the mixed integer program (MIP) time limit (120 and 300 s), the number of iterations in ant colony optimisation (ACO) (500, 1000, 2000 and 5000 iterations) and the maximum age limit (3, 5 and 10). Ten runs per instance were conducted, and all runs were given 30 min with 10 cores. The results are presented in Table A2. For each parameter of interest, the results are averaged across the results obtained with all possible settings of all the other parameters.
Table A2. The results of experiments conducted to determine which set of parameter values leads to the best performance of CMSA. For each parameter, the results are averaged across the results of all the settings for all other parameters.

Setting | 4 | 6 | 8 | 10 | 12 | 20 (machines)
ACO Iter. = 500 | 66.58 | 239.87 | 736.98 | 672.92 | 2656.52 | 9709.08
ACO Iter. = 1000 | 66.50 | 239.59 | 702.92 | 650.48 | 2568.98 | 9613.49
ACO Iter. = 2000 | 66.49 | 239.46 | 675.86 | 633.32 | 2489.59 | 9321.38
ACO Iter. = 5000 | 66.54 | 238.68 | 663.95 | 627.48 | 2431.25 | 8997.16
MIP Time = 120 | 66.62 | 239.22 | 662.23 | 623.99 | 2426.16 | 8976.99
MIP Time = 300 | 66.46 | 238.14 | 665.66 | 630.96 | 2436.35 | 9017.34
Age = 3 | 66.50 | 238.33 | 667.08 | 620.13 | 2423.19 | 8953.46
Age = 5 | 66.27 | 238.40 | 656.76 | 621.92 | 2429.50 | 8986.49
Age = 10 | 66.60 | 237.69 | 662.86 | 629.93 | 2425.78 | 8991.02
Regarding the MIP time limit, we see that, overall, 120 s is preferable, particularly for eight machines or more. For four and six machines, 300 s is preferable; however, the results are close here.
The results concerning the number of ACO iterations show that, generally, 5000 iterations provide the best average results across all machine counts. Only for the instances with four machines do 2000 iterations work better; however, for these instances, the results are very close across all settings. As with MS, more ACO iterations could be considered, but this would lead to very few CMSA iterations being completed within the total time limit.
The results concerning the maximum age parameter are not as clear. A low maximum age (3) seems the best option overall.

Appendix D. Results of the Mixed Integer Program, Column Generation and Ant Colony Optimisation

Table A3 shows the results for the mixed integer program (MIP), for column generation (CG) on its own, and for parallel ant colony optimisation (ACO) [29]. Since the MIP is deterministic, a single run per instance was conducted. For CG and ACO, which are stochastic, thirty runs per instance were conducted.
Table A3. MIP, CG and ACO for the RCJS problem. UB is the upper bound obtained by the MIP; Best is the best solution found across all runs; Mean is the average solution quality across all runs; SD is the standard deviation. The best results are highlighted in boldface.

Inst. | MIP UB | CG Best | CG Mean | CG SD | ACO Best | ACO Mean | ACO SD
3–5 | 505.00 | 558.81 | 558.81 | 0.00 | 505.00 | 505.36 | 1.08
3–23 | 149.07 | 151.91 | 156.90 | 3.27 | 149.07 | 149.07 | 0.00
3–53 | 67.42 | 76.87 | 76.87 | 0.00 | 69.36 | 69.36 | 0.00
4–28 | 23.81 | 27.78 | 27.78 | 0.00 | 23.94 | 24.55 | 1.33
4–42 | 66.73 | 99.46 | 99.46 | 0.00 | 67.64 | 68.19 | 1.47
4–61 | 42.42 | 55.97 | 55.97 | 0.00 | 45.96 | 45.96 | 0.00
5–7 | 315.46 | 291.02 | 291.02 | 0.00 | 255.03 | 272.09 | 12.93
5–21 | 174.83 | 235.72 | 235.72 | 0.00 | 168.63 | 170.63 | 4.36
5–62 | 372.97 | 289.55 | 289.55 | 0.00 | 264.97 | 271.10 | 5.39
6–10 | - | 980.76 | 988.62 | 4.74 | 828.92 | 860.86 | 23.17
6–28 | 213.58 | 276.29 | 278.20 | 1.15 | 218.37 | 226.91 | 4.13
6–58 | 254.07 | 275.50 | 282.17 | 11.06 | 236.05 | 248.55 | 10.43
7–5 | 563.85 | 483.98 | 491.58 | 11.61 | 433.53 | 453.70 | 11.90
7–23 | - | 639.37 | 643.91 | 2.50 | 553.45 | 597.02 | 26.85
7–47 | - | 574.14 | 574.14 | 0.00 | 446.10 | 469.30 | 18.87
8–3 | - | 761.38 | 769.90 | 5.46 | 671.12 | 703.81 | 27.67
8–53 | - | 529.67 | 545.01 | 10.04 | 464.28 | 488.41 | 11.66
8–77 | - | 1408.13 | 1424.62 | 24.39 | 1232.72 | 1291.09 | 30.83
9–20 | - | 1017.12 | 1040.28 | 18.91 | 925.65 | 961.86 | 27.72
9–47 | - | 1416.23 | 1416.23 | 0.00 | 1243.27 | 1275.26 | 27.42
9–62 | - | 1587.65 | 1587.65 | 0.00 | 1455.66 | 1512.09 | 33.61
10–7 | - | 2730.18 | 2779.31 | 37.38 | 2549.46 | 2704.64 | 107.84
10–13 | - | 2446.87 | 2454.69 | 10.28 | 2245.94 | 2302.92 | 39.89
10–31 | - | 679.91 | 679.91 | 0.00 | 611.71 | 643.65 | 25.05
11–21 | - | 1119.51 | 1119.51 | 0.00 | 1008.00 | 1057.38 | 26.02
11–56 | - | 1926.50 | 1956.86 | 17.84 | 1845.93 | 1875.65 | 25.62
11–63 | - | 2209.79 | 2214.59 | 7.96 | 2032.36 | 2092.44 | 47.81
12–14 | - | 1935.00 | 1967.10 | 28.07 | 1830.31 | 1882.37 | 26.23
12–36 | - | 3248.84 | 3275.53 | 15.08 | 3033.78 | 3138.60 | 66.32
12–80 | - | 2683.73 | 2684.36 | 0.38 | 2433.86 | 2495.55 | 61.31
15–2 | - | 4274.77 | 4403.93 | 70.53 | 3961.82 | 4121.60 | 106.77
15–3 | - | 4655.97 | 4775.44 | 146.92 | 4368.66 | 4514.54 | 121.22
15–5 | - | 3799.43 | 3823.09 | 73.36 | 3512.33 | 3618.10 | 67.65
20–2 | - | 9173.91 | 9249.91 | 49.75 | 8788.97 | 8935.63 | 119.47
20–5 | - | 16,033.40 | 16,192.51 | 170.12 | 14,779.80 | 15,050.51 | 137.54
20–6 | - | 8273.11 | 8273.11 | 0.00 | 7865.08 | 8048.25 | 156.97

References

1. Blum, C.; Roli, A. Metaheuristics in Combinatorial Optimization: Overview and Conceptual Comparison. ACM Comput. Surv. 2003, 35, 268–308.
2. Bäck, T. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms; Oxford University Press: Oxford, UK, 1996.
3. Dulebenets, M.A.; Kavoosi, M.; Abioye, O.; Pasha, J. A Self-adaptive Evolutionary Algorithm for the Berth Scheduling Problem: Towards Efficient Parameter Control. Algorithms 2018, 11, 100.
4. Slowik, A.; Kwasnicka, H. Nature inspired methods and their industry applications—Swarm intelligence algorithms. IEEE Trans. Ind. Inf. 2017, 14, 1004–1015.
5. Anandakumar, H.; Umamaheswari, K. A Bio-inspired Swarm Intelligence Technique for Social Aware Cognitive Radio Handovers. Comput. Electr. Eng. 2018, 71, 925–937.
6. Marriott, K.; Stuckey, P. Programming with Constraints; MIT Press: Cambridge, MA, USA, 1998.
7. Wolsey, L.A. Integer Programming; Wiley-Interscience: New York, NY, USA, 1998.
8. Fisher, M. The Lagrangian Relaxation Method for Solving Integer Programming Problems. Manag. Sci. 2004, 50, 1861–1871.
9. Geoffrion, A.M. Generalized Benders decomposition. J. Optim. Theory Appl. 1972, 10, 237–260.
10. Speranza, M.G.; Vercellis, C. Hierarchical Models for Multi-project Planning and Scheduling. Eur. J. Oper. Res. 1993, 64, 312–325.
11. Pisinger, D.; Ropke, S. Large Neighborhood Search. In Handbook of Metaheuristics; Gendreau, M., Potvin, J.Y., Eds.; Springer US: Boston, MA, USA, 2010; pp. 399–419.
12. Boschetti, M.A.; Maniezzo, V.; Roffilli, M.; Bolufé Röhler, A. Matheuristics: Optimization, Simulation and Control. In Hybrid Metaheuristics; Blesa, M.J., Blum, C., Di Gaspero, L., Roli, A., Sampels, M., Schaerf, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 171–177.
13. Archetti, C.; Speranza, M.G. A Survey on Matheuristics for Routing Problems. EURO J. Comput. Optim. 2014, 2, 223–246.
14. Della Croce, F.; Salassa, F. A Variable Neighborhood Search based Matheuristic for Nurse Rostering Problems. Ann. Oper. Res. 2014, 218, 185–199.
15. Brouer, B.D.; Desaulniers, G.; Pisinger, D. A Matheuristic for the Liner Shipping Network Design Problem. Transp. Res. Part E Logist. Transp. Rev. 2014, 72, 42–59.
16. Brech, C.H.; Ernst, A.T.; Kolisch, R. Scheduling Medical Residents' Training at University Hospitals. Eur. J. Oper. Res. 2018.
17. Blum, C.; Raidl, G.R. Hybrid Metaheuristics: Powerful Tools for Optimization; Springer: Berlin/Heidelberg, Germany, 2016.
18. Kenny, A.; Li, X.; Ernst, A.T. A Merge Search Algorithm and Its Application to the Constrained Pit Problem in Mining. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '18), Kyoto, Japan, 15–19 July 2018; pp. 316–323.
19. Blum, C.; Pinacho, P.; López-Ibáñez, M.; Lozano, J.A. Construct, Merge, Solve & Adapt: A New General Algorithm for Combinatorial Optimization. Comput. Oper. Res. 2016, 68, 75–88.
20. Blum, C.; Blesa, M.J. Construct, Merge, Solve & Adapt: Application to the Repetition-free Longest Common Subsequence Problem. In Proceedings of EvoCOP 2016—16th European Conference on Evolutionary Computation in Combinatorial Optimization, Porto, Portugal, 30 March–1 April 2016; Volume 9595, pp. 46–57.
21. Blum, C. Construct, Merge, Solve and Adapt: Application to Unbalanced Minimum Common String Partition. In Proceedings of HM 2016—10th International Workshop on Hybrid Metaheuristics, Plymouth, UK, 8–10 June 2016; Volume 9668, pp. 17–31.
22. Thiruvady, D.; Blum, C.; Ernst, A.T. Maximising the Net Present Value of Project Schedules Using CMSA and Parallel ACO. In Hybrid Metaheuristics; Blesa Aguilera, M.J., Blum, C., Gambini Santos, H., Pinacho-Davidson, P., Godoy del Campo, J., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 16–30.
23. Singh, G.; Ernst, A.T. Resource Constraint Scheduling with a Fractional Shared Resource. Oper. Res. Lett. 2011, 39, 363–368.
24. Abdul-Razaq, T.; Potts, C.N.; Van Wassenhove, L.N. A Survey of Algorithms for the Single Machine Total Weighted Tardiness Scheduling Problem. Discret. Appl. Math. 1990, 26, 235–253.
25. Congram, R.K.; Potts, C.N.; van de Velde, S.L. An Iterated Dynasearch Algorithm for the Single-machine Total Weighted Tardiness Scheduling Problem. INFORMS J. Comput. 2002, 14, 52–67.
26. Ernst, A.T.; Singh, G. Lagrangian Particle Swarm Optimization for a Resource Constrained Machine Scheduling Problem. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, QLD, Australia, 10–15 June 2012; pp. 1–8.
27. Thiruvady, D.; Singh, G.; Ernst, A.T.; Meyer, B. Constraint-based ACO for a Shared Resource Constrained Scheduling Problem. Int. J. Prod. Econ. 2012, 141, 230–242.
28. Thiruvady, D.; Singh, G.; Ernst, A.T. Hybrids of Integer Programming and ACO for Resource Constrained Job Scheduling. In Hybrid Metaheuristics; Blesa, M.J., Blum, C., Voß, S., Eds.; Springer International Publishing: Cham, Switzerland, 2014; Volume 8457, pp. 130–144.
29. Thiruvady, D.; Ernst, A.T.; Singh, G. Parallel Ant Colony Optimization for Resource Constrained Job Scheduling. Ann. Oper. Res. 2016, 242, 355–372.
30. Cohen, D.; Gómez-Iglesias, A.; Thiruvady, D.; Ernst, A.T. Resource Constrained Job Scheduling with Parallel Constraint-Based ACO. In Artificial Life and Computational Intelligence; Wagner, M., Li, X., Hendtlass, T., Eds.; Springer International Publishing: Cham, Switzerland, 2017; Volume 10142, pp. 266–278.
31. Nguyen, S.; Thiruvady, D.; Ernst, A.; Alahakoon, D. Genetic Programming Approach to Learning Multi-pass Heuristics for Resource Constrained Job Scheduling. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '18), Kyoto, Japan, 15–19 July 2018; pp. 1167–1174.
32. Nguyen, S.; Thiruvady, D. Evolving Large Reusable Multi-pass Heuristics for Resource Constrained Job Scheduling. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
33. Kimms, A. Maximizing the Net Present Value of a Project Under Resource Constraints Using a Lagrangian Relaxation Based Heuristic with Tight Upper Bounds. Ann. Oper. Res. 2001, 102, 221–236.
34. Kenny, A.; Li, X.; Ernst, A.T.; Thiruvady, D. Towards Solving Large-scale Precedence Constrained Production Scheduling Problems in Mining. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '17), Berlin, Germany, 15–19 July 2017; pp. 1137–1144.
35. Thiruvady, D.; Wallace, M.; Gu, H.; Schutt, A. A Lagrangian Relaxation and ACO Hybrid for Resource Constrained Project Scheduling with Discounted Cash Flows. J. Heuristics 2014, 20, 643–676.
36. den Besten, M.; Stützle, T.; Dorigo, M. Ant Colony Optimization for the Total Weighted Tardiness Problem. Lect. Notes Comput. Sci. 2000, 1917, 611–620.
37. Dorigo, M.; Stützle, T. Ant Colony Optimization; MIT Press: Cambridge, MA, USA, 2004.
38. Mitchell, M. An Introduction to Genetic Algorithms; MIT Press: Cambridge, MA, USA, 1996.
39. Gurobi Optimization. Gurobi Optimizer Version 5.0, 2010. Available online: http://www.gurobi.com/ (accessed on 9 October 2020).
40. Dagum, L.; Menon, R. OpenMP: An Industry-Standard API for Shared-Memory Programming. IEEE Comput. Sci. Eng. 1998, 5, 46–55.
41. Nguyen, S.; Thiruvady, D.; Ernst, A.T.; Alahakoon, D. A Hybrid Differential Evolution Algorithm with Column Generation for Resource Constrained Job Scheduling. Comput. Oper. Res. 2019, 109, 273–287.
42. Blum, C.; Thiruvady, D.; Ernst, A.T.; Horn, M.; Raidl, G.R. A Biased Random Key Genetic Algorithm with Rollout Evaluations for the Resource Constraint Job Scheduling Problem. In AI 2019: Advances in Artificial Intelligence; Liu, J., Bailey, J., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 549–560.
43. Kolisch, R.; Sprecher, A. PSPLIB—A project scheduling problem library. Eur. J. Oper. Res. 1997, 96, 205–216.
44. Vanhoucke, M.; Demeulemeester, E.; Herroelen, W. On Maximizing the Net Present Value of a Project under Renewable Resource Constraints. Manag. Sci. 2001, 47, 1113–1121.
45. Vanhoucke, M. A Scatter Search Heuristic for Maximising the Net Present Value of a Resource-constrained Project with Fixed Activity Cash Flows. Int. J. Prod. Res. 2010, 48, 1983–2001.
46. Gropp, W.; Lusk, E.; Skjellum, A. Using MPI: Portable Parallel Programming with the Message-Passing Interface; MIT Press: Cambridge, MA, USA, 1994.
47. Hochbaum, D.S.; Chen, A. Performance Analysis and Best Implementations of Old and New Algorithms for the Open-Pit Mining Problem. Oper. Res. 2000, 48, 894–914.
48. Meagher, C.; Dimitrakopoulos, R.; Avis, D. Optimized Open Pit Mine Design, Pushbacks and the Gap Problem—A Review. J. Min. Sci. 2014, 50, 508–526.
49. Del Ser, J.; Osaba, E.; Molina, D.; Yang, X.S.; Salcedo-Sanz, S.; Camacho, D.; Das, S.; Suganthan, P.N.; Coello Coello, C.A.; Herrera, F. Bio-inspired computation: Where we stand and what's next. Swarm Evol. Comput. 2019, 48, 220–250.
Figure 1. The precedence graph of an instance of the RCJS problem.
Figure 2. A simple example where three jobs have to execute on one machine. There are three solutions, and the completion times of the jobs are different for the three solutions (first occurrence of a 1). The first row of binary values for each job shows the values of the variable in the first solution. In the first solution, for example, job 1 completes at time point 3, job 2 at time point 4 and job 3 at time point 7. Moreover, the variable for job 1 at time point 5, for example, has value 1, that is, $z_{1,5} = 1$. The same holds for the second solution. However, variable $z_{1,5}$ has value 0 for the third solution.
Figure 3. A larger number of sets generated compared to those in Figure 2. Two of the original sets (indicated by bold borders) are split further into the dark grey and black sets.
Figure 4. This example considers the same toy instance as in Figure 2, in which three jobs have to execute on three machines. The variable values (with respect to Model 1) are indicated for the same three solutions as those displayed in Figure 2.
Figure 5. Comparison of the algorithms concerning the percentage difference to the best result, averaged over the instances with the same number of machines. The scale of the vertical axis is logarithmic.
Figure 6. The performance of MS-ACO with 10, 15 and 20 cores. The results are averaged over instances with the same number of machines and show the gap to the best solution found by MS-ACO or CMSA-ACO.
Figure 7. The performance of CMSA-ACO with 10, 15 and 20 cores. The results are averaged over instances with the same number of machines and show the gap to the best solution found by MS-ACO or CMSA-ACO.
Figure 8. Comparing the solution quality obtained by MS and CMSA when they are provided with the same set of solutions for one iteration. The boxes show the improvement of MS over CMSA (in percent), with a positive value indicating that CMSA performed better.
Figure 9. Comparing the computing time needed by MS and CMSA for solving the restricted MIPs when they are provided with the same set of solutions for one iteration. The boxes show the percentage difference between MS and CMSA, with a positive value indicating that MS took more computation time.
Table 1. MS with ACO. For the 25 runs conducted, the best (Best) and average (Mean) solution qualities with associated standard deviations (SD) are provided. The table also shows the average number of iterations (Iter.) conducted for each problem instance. Statistically significant results, using a pairwise t-test at the 95% confidence interval, are highlighted in boldface.

Inst. | MS-Heur Best | MS-Heur Mean | MS-Heur SD | MS-Heur Iter. | MS-ACO Best | MS-ACO Mean | MS-ACO SD | MS-ACO Iter.
3–5 | 509.27 | 557.27 | 22.51 | 17,404.2 | 505.00 | 505.00 | 0.00 | 489.2
3–23 | 151.83 | 161.71 | 5.03 | 20,000.6 | 149.07 | 149.07 | 0.00 | 619.5
3–53 | 69.36 | 70.08 | 0.52 | 30,170.9 | 69.36 | 69.36 | 0.00 | 774.5
4–28 | 25.62 | 28.57 | 1.41 | 17,529.3 | 23.81 | 23.81 | 0.00 | 398.7
4–42 | 67.64 | 70.56 | 2.11 | 20,066.8 | 66.07 | 66.68 | 0.18 | 257.1
4–61 | 45.96 | 49.18 | 2.24 | 16,211.5 | 45.96 | 45.96 | 0.00 | 362.2
5–7 | 288.38 | 309.19 | 9.67 | 11,643.4 | 252.90 | 252.90 | 0.00 | 66.1
5–21 | 168.63 | 177.50 | 3.57 | 10,962.0 | 168.63 | 168.63 | 0.00 | 293.6
5–62 | 292.16 | 300.22 | 4.15 | 8095.0 | 249.50 | 255.42 | 3.21 | 20.2
6–10 | 981.36 | 1031.48 | 21.78 | 2546.7 | 819.74 | 834.22 | 7.63 | 12.4
6–28 | 261.38 | 290.74 | 9.51 | 10,087.2 | 218.37 | 218.37 | 0.00 | 39.4
6–58 | 276.75 | 297.79 | 9.17 | 9739.5 | 236.05 | 237.87 | 1.30 | 17.7
7–5 | 511.57 | 538.15 | 12.78 | 1549.9 | 419.52 | 430.83 | 6.42 | 28.5
7–23 | 726.94 | 765.99 | 21.95 | 1006.2 | 540.40 | 561.70 | 8.46 | 27.0
7–47 | 590.49 | 607.83 | 9.95 | 1224.2 | 412.60 | 438.96 | 10.86 | 27.0
8–3 | 948.96 | 970.84 | 12.14 | 1376.2 | 615.93 | 648.25 | 15.46 | 23.8
8–53 | 558.03 | 579.63 | 11.40 | 413.0 | 447.37 | 465.85 | 8.36 | 26.0
8–77 | 1469.22 | 1548.06 | 29.05 | 1667.9 | 1186.69 | 1216.34 | 14.35 | 25.0
9–20 | 1095.74 | 1135.18 | 18.01 | 25.2 | 905.02 | 926.73 | 11.41 | 24.0
9–47 | 1579.50 | 1626.32 | 29.89 | 133.3 | 1200.87 | 1226.92 | 14.59 | 20.9
9–62 | 1775.97 | 1819.71 | 24.57 | 371.2 | 1422.05 | 1449.17 | 12.95 | 22.0
10–7 | 3187.99 | 3297.69 | 54.14 | 51.2 | 2522.62 | 2581.62 | 31.52 | 20.0
10–13 | 2736.23 | 2839.93 | 44.65 | 29.9 | 2156.04 | 2217.89 | 29.63 | 20.0
10–31 | 764.95 | 816.82 | 16.02 | 303.9 | 591.21 | 618.68 | 11.10 | 22.0
11–21 | 1194.71 | 1246.61 | 22.19 | 57.1 | 997.39 | 1023.32 | 18.41 | 21.0
11–56 | 2230.69 | 2321.07 | 34.10 | 329.5 | 1800.44 | 1851.90 | 26.64 | 18.0
11–63 | 2386.59 | 2445.65 | 23.56 | 31.5 | 2003.32 | 2034.60 | 12.50 | 19.0
12–14 | 2241.26 | 2335.04 | 34.55 | 46.4 | 1750.58 | 1803.95 | 18.14 | 18.0
12–36 | 4021.57 | 4147.06 | 43.38 | 27.6 | 2991.41 | 3047.97 | 38.77 | 16.0
12–80 | 3093.55 | 3197.13 | 43.97 | 21.1 | 2399.97 | 2430.03 | 16.96 | 16.4
15–2 | 5372.02 | 5494.10 | 60.88 | 55.7 | 4003.67 | 4110.89 | 47.28 | 10.0
15–3 | 6215.61 | 6360.31 | 68.50 | 51.7 | 4483.49 | 4558.71 | 32.51 | 11.3
15–5 | 5311.44 | 5493.00 | 85.55 | 23.1 | 3541.96 | 3576.02 | 20.02 | 13.0
20–2 | 9994.59 | 10,370.08 | 124.46 | 24.3 | 8831.20 | 8961.45 | 47.51 | 6.0
20–5 | 17,213.70 | 18,168.91 | 578.03 | 38.8 | 14,708.02 | 14,951.71 | 102.72 | 5.0
20–6 | 9616.81 | 9748.11 | 67.32 | 30.0 | 7890.31 | 8081.78 | 68.45 | 7.1
Table 2. CMSA with ACO. For the 25 runs conducted, the best (Best) and average (Mean) solution qualities with associated standard deviations (SD) are provided. The table also shows the average number of iterations (Iter.) conducted for each problem instance. Statistically significant results, using a pairwise t-test at the 95% confidence interval, are highlighted in boldface.

Inst. | CMSA-Heur Best | CMSA-Heur Mean | CMSA-Heur SD | CMSA-Heur Iter. | CMSA-ACO Best | CMSA-ACO Mean | CMSA-ACO SD | CMSA-ACO Iter.
3–5 | 594.31 | 610.85 | 13.03 | 110.0 | 505.00 | 505.00 | 0.00 | 324.2
3–23 | 174.23 | 179.03 | 2.52 | 198.8 | 149.07 | 149.07 | 0.00 | 476.0
3–53 | 69.36 | 69.36 | 0.00 | 624.1 | 69.36 | 69.36 | 0.00 | 755.5
4–28 | 25.37 | 26.62 | 0.85 | 215.6 | 23.81 | 23.81 | 0.00 | 346.3
4–42 | 88.12 | 96.82 | 3.49 | 67.9 | 66.07 | 66.26 | 0.25 | 27.3
4–61 | 45.96 | 46.07 | 0.04 | 266.6 | 45.96 | 45.96 | 0.00 | 298.0
5–7 | 396.81 | 423.44 | 14.83 | 33.1 | 252.90 | 252.90 | 0.00 | 19.1
5–21 | 168.63 | 238.71 | 26.81 | 34.0 | 168.63 | 168.63 | 0.00 | 129.6
5–62 | 273.95 | 290.95 | 15.57 | 33.2 | 249.50 | 254.42 | 3.69 | 13.0
6–10 | 834.65 | 1086.32 | 93.43 | 33.0 | 825.64 | 837.68 | 5.71 | 12.4
6–28 | 298.54 | 346.66 | 30.39 | 33.8 | 218.37 | 218.40 | 0.05 | 13.2
6–58 | 350.63 | 391.62 | 25.49 | 33.7 | 236.05 | 238.50 | 1.11 | 13.0
7–5 | 430.28 | 493.74 | 34.04 | 30.7 | 420.20 | 433.92 | 5.41 | 28.4
7–23 | 704.54 | 760.36 | 26.65 | 33.1 | 553.02 | 562.97 | 6.39 | 27.0
7–47 | 493.25 | 585.79 | 51.19 | 32.6 | 419.60 | 441.67 | 11.99 | 26.9
8–3 | 1176.16 | 1350.22 | 148.12 | 33.0 | 621.74 | 658.87 | 15.50 | 23.9
8–53 | 459.59 | 537.54 | 42.67 | 33.3 | 442.83 | 457.30 | 8.06 | 26.0
8–77 | 2017.81 | 2144.72 | 81.62 | 33.1 | 1183.94 | 1211.27 | 13.94 | 25.0
9–20 | 1068.60 | 1149.51 | 36.80 | 33.0 | 903.30 | 928.40 | 10.04 | 24.0
9–47 | 1974.63 | 2200.42 | 146.16 | 33.0 | 1205.99 | 1226.62 | 11.03 | 20.4
9–62 | 2055.12 | 2214.52 | 115.67 | 32.9 | 1410.96 | 1449.41 | 15.43 | 22.0
10–7 | 3268.61 | 3551.05 | 143.94 | 33.1 | 2491.08 | 2557.32 | 33.20 | 20.0
10–13 | 3380.87 | 3834.01 | 265.64 | 31.5 | 2149.49 | 2205.24 | 30.19 | 20.0
10–31 | 730.47 | 838.64 | 46.29 | 33.0 | 592.08 | 610.79 | 8.33 | 22.3
11–21 | 1298.52 | 1332.68 | 33.66 | 30.0 | 997.08 | 1011.50 | 8.38 | 21.0
11–56 | 2832.30 | 2989.67 | 138.04 | 33.0 | 1793.48 | 1834.68 | 18.19 | 18.0
11–63 | 2521.33 | 2723.58 | 137.40 | 33.0 | 1988.45 | 2025.39 | 18.71 | 19.0
12–14 | 2154.49 | 2437.09 | 170.98 | 32.9 | 1737.87 | 1788.23 | 16.39 | 18.0
12–36 | 4600.80 | 4832.21 | 111.99 | 33.0 | 2917.00 | 2989.84 | 38.90 | 16.1
12–80 | 3336.94 | 3490.43 | 79.00 | 31.2 | 2363.39 | 2411.98 | 17.79 | 17.0
15–2 | 5611.15 | 5851.75 | 116.12 | 32.2 | 3967.47 | 4041.84 | 38.95 | 10.0
15–3 | 7484.49 | 7666.97 | 113.31 | 32.4 | 4352.50 | 4476.71 | 60.64 | 11.0
15–5 | 4832.37 | 5311.91 | 202.94 | 33.0 | 3397.36 | 3520.99 | 40.72 | 13.0
20–2 | 11,438.07 | 11,550.36 | 71.87 | 29.0 | 8733.86 | 8913.17 | 96.27 | 6.0
20–5 | 20,478.46 | 20,891.89 | 237.81 | 30.4 | 14,588.81 | 14,894.91 | 159.12 | 5.0
20–6 | 9791.88 | 9941.69 | 84.49 | 32.0 | 7800.87 | 7968.64 | 79.34 | 7.0
Table 3. A comparison of CGACO, CGDELS and BRKGA against MS and CMSA. The results are presented as the relative difference (as a fraction) of each method to the best known solution (column 2). The time limits for solving the restricted MIPs within MS-ACO and CMSA-ACO were set to 120 s. The best results are in bold. Statistically significant results, using a pairwise t-test at the 95% confidence interval, are italicized.

Inst. | Best | CGACO | CGDELS | BRKGA | MS-ACO | CMSA-ACO
3–5 | 505.0 | 0.1083 | 0.0000 | 0.0000 | 0.0000 | 0.0000
3–23 | 149.1 | 0.0923 | 0.0012 | 0.0000 | 0.0000 | 0.0000
3–53 | 69.4 | 0.0095 | 0.0027 | 0.0000 | 0.0000 | 0.0000
4–28 | 23.8 | 0.2066 | 0.0038 | 0.0050 | 0.0000 | 0.0000
4–42 | 66.1 | 0.0652 | 0.0148 | 0.0238 | 0.0092 | 0.0029
4–61 | 46.0 | 0.0570 | 0.0000 | 0.0111 | 0.0000 | 0.0000
5–7 | 252.9 | 0.2192 | 0.0022 | 0.0031 | 0.0000 | 0.0000
5–21 | 168.6 | 0.0503 | 0.0000 | 0.0000 | 0.0000 | 0.0000
5–62 | 249.5 | 0.2052 | 0.0197 | 0.0247 | 0.0237 | 0.0197
6–10 | 811.6 | 0.2709 | 0.0162 | 0.0203 | 0.0278 | 0.0321
6–28 | 218.4 | 0.3493 | 0.0068 | 0.0444 | 0.0000 | 0.0001
6–58 | 236.1 | 0.2747 | 0.0255 | 0.0224 | 0.0077 | 0.0104
7–5 | 418.1 | 0.2849 | 0.0220 | 0.0289 | 0.0305 | 0.0379
7–23 | 533.8 | 0.4319 | 0.0382 | 0.0446 | 0.0524 | 0.0547
7–47 | 406.4 | 0.5121 | 0.0345 | 0.0298 | 0.0803 | 0.0869
8–3 | 615.9 | 0.5709 | 0.0305 | 0.0225 | 0.0525 | 0.0697
8–53 | 442.2 | 0.3025 | 0.0318 | 0.0241 | 0.0535 | 0.0342
8–77 | 1163.8 | 0.3272 | 0.0419 | 0.0262 | 0.0452 | 0.0408
9–20 | 873.3 | 0.2936 | 0.0157 | 0.0102 | 0.0612 | 0.0631
9–47 | 1158.3 | 0.4145 | 0.0570 | 0.0236 | 0.0593 | 0.0590
9–62 | 1382.6 | 0.3072 | 0.0512 | 0.0123 | 0.0481 | 0.0483
10–7 | 2384.0 | 0.3862 | 0.0341 | 0.0068 | 0.0829 | 0.0727
10–13 | 2082.7 | 0.3612 | 0.0298 | 0.0116 | 0.0649 | 0.0588
10–31 | 572.0 | 0.4260 | 0.0421 | 0.0258 | 0.0816 | 0.0678
11–21 | 964.0 | 0.2939 | 0.0315 | 0.0098 | 0.0615 | 0.0492
11–56 | 1674.5 | 0.3961 | 0.0698 | 0.0121 | 0.1059 | 0.0957
11–63 | 1887.2 | 0.2990 | 0.0544 | 0.0136 | 0.0781 | 0.0732
12–14 | 1636.4 | 0.4313 | 0.0517 | 0.0132 | 0.1024 | 0.0928
12–36 | 2764.2 | 0.4965 | 0.0432 | 0.0119 | 0.1027 | 0.0816
12–80 | 2226.7 | 0.4439 | 0.0532 | 0.0141 | 0.0913 | 0.0832
15–2 | 3596.5 | 0.5376 | 0.0663 | 0.0086 | 0.1430 | 0.1238
15–3 | 3948.2 | 0.6075 | 0.0715 | 0.0117 | 0.1546 | 0.1339
15–5 | 3234.7 | 0.6981 | 0.0613 | 0.0124 | 0.1055 | 0.0885
20–2 | 7755.3 | 0.3437 | 0.0712 | 0.0174 | 0.1555 | 0.1493
20–5 | 12,899.2 | 0.4139 | 0.0857 | 0.0174 | 0.1591 | 0.1547
20–6 | 6907.8 | 0.4101 | 0.0634 | 0.0131 | 0.1699 | 0.1536
Table 4. The contribution of the merge step of MS-ACO and CMSA-ACO, relative to the total solution improvement. The MIP results sum up the percentage improvement in the objective for every MIP solve for an instance. The MS/CMSA + ACO result is the percentage difference of the first best solution found to the best solution at the end of the run. Small instances where the best solution is the first one found have been omitted.

Inst. | MS-MIP | MS + ACO | CMSA-MIP | CMSA + ACO
4–28 | 1.72 | 3.13 | 1.69 | 2.37
4–42 | 0.86 | 1.87 | 2.25 | 3.31
5–7 | 7.30 | 8.23 | 7.94 | 8.63
5–62 | 3.14 | 6.00 | 3.46 | 4.66
6–10 | 1.36 | 2.38 | 0.25 | 1.86
6–28 | 2.81 | 2.96 | 2.46 | 2.55
6–58 | 2.58 | 3.09 | 2.77 | 3.14
7–5 | 3.47 | 5.56 | 2.56 | 4.23
7–23 | 2.02 | 5.41 | 1.60 | 6.48
7–47 | 3.63 | 7.70 | 1.40 | 7.52
8–3 | 4.98 | 8.29 | 2.65 | 5.57
8–53 | 1.30 | 1.95 | 1.41 | 4.82
8–77 | 0.93 | 3.52 | 1.03 | 4.73
9–20 | 2.20 | 4.43 | 1.24 | 3.61
9–47 | 1.47 | 3.26 | 0.93 | 3.33
9–62 | 1.06 | 2.79 | 0.49 | 2.25
10–7 | 1.09 | 2.29 | 0.96 | 3.37
10–13 | 0.87 | 2.69 | 0.36 | 2.48
10–31 | 1.70 | 3.96 | 1.19 | 4.65
11–21 | 1.17 | 2.89 | 0.91 | 3.96
11–56 | 0.69 | 2.38 | 0.58 | 3.96
11–63 | 0.47 | 2.69 | 0.42 | 2.09
12–14 | 0.88 | 3.16 | 1.35 | 3.82
12–36 | 0.49 | 2.14 | 1.19 | 3.85
12–80 | 0.36 | 2.80 | 0.95 | 3.27
15–2 | 0.49 | 2.28 | 0.36 | 2.68
15–3 | 0.46 | 1.47 | 0.29 | 3.39
15–5 | 0.63 | 1.49 | 0.98 | 3.81
20–2 | 0.09 | 1.07 | 0.06 | 1.77
20–5 | 0.13 | 0.67 | 0.04 | 1.60
20–6 | 0.12 | 0.45 | 0.06 | 1.96
