Article

Minimizing Makespan in A Two-Machine Flowshop Problem with Processing Time Linearly Dependent on Job Waiting Time

Dar-Li Yang and Wen-Hung Kuo
1 Department of Information Management, National Formosa University, Yun-Lin 632, Taiwan
2 Smart Machine and Intelligent Manufacturing Research Center, National Formosa University, Yun-Lin 632, Taiwan
* Author to whom correspondence should be addressed.
Sustainability 2019, 11(24), 6885; https://0-doi-org.brum.beds.ac.uk/10.3390/su11246885
Submission received: 12 September 2019 / Revised: 20 November 2019 / Accepted: 24 November 2019 / Published: 4 December 2019
(This article belongs to the Special Issue Scheduling Problems in Sustainability)

Abstract

This paper studies a two-machine flowshop scheduling problem in which the processing times are linearly dependent on the waiting times of the jobs prior to processing on the second machine. That is, when a job is completed on the first machine, a certain delay time is required before its processing on the second machine. If we would like to reduce the actual waiting time, the processing time of the job on the second machine increases. The objective is to minimize the makespan. When the total processing time is reduced, the consumption of energy is reduced, which is beneficial to environmental sustainability. We show that the proposed problem is NP-hard in the strong sense. A 0-1 mixed integer programming formulation and a heuristic algorithm are proposed, together with a computational experiment. Some cases solvable in polynomial time are also provided.

1. Introduction

In this paper, we consider a two-machine flowshop scheduling problem in which the processing times are linearly dependent on the waiting times of the jobs prior to processing on the second machine. The objective is to minimize the makespan. In this problem, when a job is completed on the first machine, a certain delay time is required before its processing on the second machine. If this delay can be shortened at the cost of extra processing time added to the job's processing time on the second machine, the total processing time of all jobs may be reduced. When the total processing time is reduced, the consumption of energy is reduced, which contributes to environmental sustainability. In addition, the managerial implication of this approach is that a production scheduler has more leeway in arranging a job sequence to reach the goal. One realistic example of this kind of scheduling arises in painting processes. Generally, a painting job involves at least two coating stages. After the first stage, the product has to wait for some time until it dries naturally before it proceeds to the second stage. If we want to reduce this waiting time, we can use a dryer to accelerate the drying, which adds extra processing time to the job on the second machine. Another practical scheduling problem arises at the cooking–chilling stage in a food plant [1]. The chilling process must start within 30 min of the completion of cooking; otherwise, the food has to be discarded as unfit for human consumption. If there is an advantage in terms of makespan minimization, we can reduce the waiting time after the completion of cooking at the cost of extra processing time in the chilling process. Such a reduction is good for sustainability because it involves less food waste; moreover, minimizing the makespan reduces the total processing time of all jobs and hence the consumption of energy, which benefits environmental sustainability. For other scheduling problems related to sustainability, the reader is referred to the works of [2,3,4,5,6,7,8].
The two-machine, minimum-makespan flowshop scheduling problem was first solved by Johnson [9]. Mitten [10] extended this problem with time lags. Sule and Huang [11] allowed setup and removal times that are separated from the processing times. Maggu et al. [12] considered the problem with time lags and transportation times between machines. All of the problems above can be solved by polynomial-time algorithms similar to Johnson's rule [9]. There are also many studies on scheduling problems with sequence-dependent setup times. For example, Ruiz and Stützle [13] proposed an iterated greedy heuristic to solve the flowshop scheduling problem with sequence-dependent setup times. Nishi and Hiranaka [14] applied Lagrangian relaxation and a cut generation method to minimize the total weighted tardiness in flowshop scheduling problems with sequence-dependent setup times. Wang et al. [15] presented a hybrid local search algorithm to solve the sequence-dependent setup times flowshop scheduling problem with a makespan criterion.
For the two-machine flowshop scheduling problem with waiting time constraints (or intermediate delays), Reddi and Ramamoorthy [16] proposed a polynomial-time algorithm for the problem with no wait in process. Dell'Amico [17] showed that the two-machine flowshop scheduling problem with intermediate delays is NP-hard in the strong sense if the solution space is not restricted to permutation schedules. In addition, Yu et al. [18] showed that the two-machine flowshop problem with intermediate delays is NP-hard in the strong sense even if all processing times are 1. Fiszman and Mosheiov [19] studied the problem of minimizing the total load on a proportionate flowshop with position-dependent job processing times in the most general form; they showed that this problem can be solved in O(n⁴) time, where n is the number of jobs. Yang and Chern [20] considered a two-machine flowshop sequencing problem with limited waiting time constraints; they showed that the permutation scheduling problem is NP-hard and proposed a branch-and-bound algorithm. Su [21] extended the problem studied by Yang and Chern [20] and considered a two-stage flowshop with a batch processor in stage 1 and a single processor in stage 2, where the batch processor can process a batch (a limited number) of jobs simultaneously; a heuristic algorithm and a mixed integer program were proposed. Sriskandarajah and Goyal [22] considered a problem in which the processing times are linearly dependent on the waiting times; they showed that the problem is NP-hard and proposed a heuristic algorithm. Yang and Chern [23] further extended the problem studied by Sriskandarajah and Goyal [22] and considered a problem in which the processing time of a job on the second machine is linearly dependent on the waiting time if the waiting time is beyond a certain range. They proposed an integer program and a heuristic algorithm to solve the problem.
Chung et al. [24] considered a two-stage hybrid flowshop scheduling problem with a waiting time constraint. They provided two algorithms to solve the makespan minimization problem. Wang et al. [25] investigated a permutation flowshop scheduling problem with a time lag between two consecutive machines. They presented a two-stage constructive heuristic to minimize the makespan.
The proposed problem is different from those studied by Sriskandarajah and Goyal [22] and Yang and Chern [23]. In the problem of Sriskandarajah and Goyal [22], when a job is processed completely on the first machine, the job can be processed immediately on the second machine; no delay time is required. If there is a delay before a job is processed on the second machine, the processing time of the job on the second machine increases. In the problem of Yang and Chern [23], when a job is processed completely on the first machine, there is a waiting time before its processing on the second machine. If the waiting time is beyond a certain range, the processing time of the job on the second machine increases. This problem is similar to the following one: when a job is processed completely on the first machine, a certain delay time (i.e., a certain waiting time) is required before its processing on the second machine; however, the actual waiting time can be either decreased or increased, and the cost of changing the actual waiting time is an increase in the processing time of the job on the second machine. In the proposed problem, we only consider the case in which the actual waiting time is allowed to be decreased at the cost of extra processing time added to the processing time of the job on the second machine.
The organization of the remainder of this paper is as follows. In Section 2, there is a description of the problem and its complexity. A 0-1 mixed integer programming formulation is given in Section 3 and the heuristic algorithm is presented in Section 4. Thereafter, computational experiments are reported in Section 5. Finally, in Section 6, we give the conclusions.

2. Problem Description and Complexity

The proposed two-machine makespan flowshop scheduling problem, denoted by T, with processing times linearly dependent on job waiting times is described as follows.
First, some notations are introduced in the following and additional notations will be given when needed throughout the paper.
  • Ji: the ith job in the original sequence, i = 1, 2, 3,…, n
  • J[i]: the ith job in the actual sequence, i = 1, 2, 3,…, n
  • M1: the first machine on the flowshop
  • M2: the second machine on the flowshop
  • ai: the regular processing time of Ji on the first machine M1
  • bi: the regular processing time of Ji on the second machine M2
  • di: the delay time of Ji before its processing on machine M2
  • wi: the actual waiting time of Ji before its processing on machine M2
  • αi: the cost index, αi > 0
  • Ci,1: the completion time of Ji on machine M1
  • Ci,2: the completion time of Ji on machine M2
For a given set of jobs J = {J1, J2, …, Jn}, let ai and bi be the regular processing times of job Ji on the first machine M1 and the second machine M2, respectively. We assume that for each job Ji, when Ji is processed completely on machine M1, a delay time di is required before its processing on machine M2. However, in some situations, the actual waiting time wi is allowed to be smaller than the delay time di at the cost of extra processing time added to the processing time bi of job Ji on M2. That is, if the actual waiting time wi of Ji is smaller than di, then the processing time of Ji on machine M2 is given by bi + αi(di − wi), where αi > 0. However, if wi ≥ di, then the processing time on machine M2 is bi. The objective is to find the optimal schedule minimizing the makespan.
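To make the timing model concrete, the following sketch evaluates the makespan of a given processing order under this rule. It is an illustrative aid, not code from the paper; the function name, the list-based job data, and the idea of supplying per-job target waiting times are assumptions made for this example.

```python
# Illustrative sketch (not from the paper): makespan of a schedule in which job i
# may wait less than its delay d[i] before machine M2, at the cost of an extra
# alpha[i] * (d[i] - w[i]) units of processing time on M2.
def makespan(seq, a, b, d, alpha, target_wait=None):
    """seq is the processing order (assumed identical on both machines);
    target_wait[i] is the waiting time we aim for (defaults to the full delay)."""
    if target_wait is None:
        target_wait = d
    free1 = free2 = 0.0                    # times at which M1 and M2 become idle
    for i in seq:
        c1 = free1 + a[i]                  # completion of job i on M1
        ready = c1 + max(0.0, min(target_wait[i], d[i]))   # earliest allowed M2 start
        start2 = max(free2, ready)
        w = start2 - c1                    # actual waiting time realised by the schedule
        p2 = b[i] + alpha[i] * max(0.0, d[i] - w)          # M2 processing time
        free1, free2 = c1, start2 + p2
    return free2

# Data of Example 1 in Section 4 (jobs indexed from 0)
a = [1, 3, 2, 3, 2]; b = [5, 4, 1, 2, 3]
d = [2, 3, 1, 2, 5]; alpha = [0.2, 0.3, 0.1, 0.5, 1.2]
print(makespan([0, 1, 4, 3, 2], a, b, d, alpha))   # 18.0 when no waiting time is compressed
```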
First, in the proposed problem, if αi > 1, there is no benefit in reducing the actual waiting time wi, and the proposed problem then becomes the problem studied in Yu et al. [18]. Hence, if αi > 1, the proposed problem is also NP-hard in the strong sense. Next, we will show that the partition problem [26] reduces to the proposed problem if 0 < αi = α ≤ 1, i = 1, …, n. Consider the following well-known NP-complete problem:
Partition: Given positive integers s1, s2, …, sk, does there exist a subset
E ⊆ N = {1, …, k} such that ∑_{i∈E} si = ∑_{i∈N−E} si = (∑_{i∈N} si)/2?
For a given instance of partition, s1, s2, …, sk, an instance of the proposed problem is constructed as follows:
n = k + 2;
ai = 2S·si, bi = si, di = 0 for i ∈ N = {1, …, k};
ak+1 = 1, bk+1 = 2 + 2S², dk+1 = 0;
ak+2 = 2, bk+2 = 2S² − S, dk+2 = 2S² + S;
αi = α for i ∈ {1, 2, …, k + 2},
where ∑_{i∈N} si = 2S.
We will show that Partition has a solution if and only if the above instance has an optimal schedule with the minimum makespan Cmax = 4S² + S + 3.
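For reference, a small sketch that mechanically builds this instance from a partition input is given below; the function name, the default value of α, and the sample data are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): construct the scheduling instance of the
# reduction from an instance s_1, ..., s_k of Partition.
def build_reduction_instance(s, alpha=0.5):
    k = len(s)
    S = sum(s) / 2                        # by definition, sum_{i in N} s_i = 2S
    a = [2 * S * si for si in s] + [1, 2]
    b = list(s)                   + [2 + 2 * S**2, 2 * S**2 - S]
    d = [0] * k                   + [0, 2 * S**2 + S]
    alphas = [alpha] * (k + 2)            # common cost index with 0 < alpha <= 1
    target = 4 * S**2 + S + 3             # threshold makespan used in Theorem 1
    return a, b, d, alphas, target

a, b, d, alphas, target = build_reduction_instance([3, 1, 1, 2, 3, 2])
print(target)                             # S = 6, so the threshold is 153
```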
Lemma 1.
For the above instance, it is sufficient to consider the schedules that have the same job processing sequence on both machines for the job Ji, i ∈ {1, …, k + 1}.
Proof. 
If a schedule Π does not have the same job processing sequence on both machines for the job Ji, i∈{1, …, k + 1}, then there are two cases:
(1)
There is a job Ji, which directly precedes Jj on machine M1 and follows Jj on machine M2, i, j ∈ {1, …, k + 1}, perhaps with intervening jobs (Figure 1a). We may interchange the order of Ji and Jj on machine M1 without increasing the makespan, because di = dj = 0.
(2)
There is a sequence (A1, Ji, Jk+2, Jj, A2) on machine M1, where A1 and A2 are subsequences, and Ji follows Jj on machine M2, perhaps with intervening jobs (Figure 1b). We may interchange the processing order of Ji and Jj in (A1, Ji, Jk+2, Jj, A2) as the following sequence (A1, Jk+2, Jj, Ji, A2) on machine M1 without increasing the makespan (Figure 1c), because di = dj = 0.
This process of interchanging jobs may be repeated until a schedule Π’ is obtained with the same order on machine M1 as that on machine M2 for the job Ji, i ∈ {1, …, k + 1}. Π’ is clearly not worse than Π. Therefore, Lemma 1 holds. □
Lemma 2.
If Jk+1 is not processed first, then the makespan of any schedule is greater than 4S² + S + 3.
Proof. 
For any given schedule, if Ji (i ∈ N) is processed first, then
Cmax ≥ ai + ∑_{j∈N} bj + bk+1 + bk+2 = 2S·si + 2S + 2 + 2S² + 2S² − S > 4S² + S + 3.
Similarly, if Jk+2 is processed first, then
Cmax ≥ ak+2 + ∑_{j∈N} bj + bk+1 + bk+2 = 2 + 2S + 2 + 2S² + 2S² − S > 4S² + S + 3.
Thus, we only consider the schedules in which Jk+1 is processed first on both machines. □
Lemma 3.
If Jk+2 is not processed second on machine M1, then the makespan of any schedule is greater than 4S² + S + 3.
Proof. 
Let U = {i | Ji is processed between Jk+1 and Jk+2 on machine M1}. If there are jobs Ji, i ∈ U, then the following two cases are considered:
(1)
If any job Ji, i ∈ N, does not create an idle-time slot on machine M2, that is, the processing of these jobs on machine M2 is continuous, then there are two subcases:
(i)
If 2S² + 2S − 2S·∑_{j∈U} sj ≥ 0 (Figure 2),
then
Cmax ≥ ak+1 + bk+1 + ∑_{j∈N} bj + l + bk+2 + α·max{0, dk+2 − wk+2}
= 1 + 2 + 2S² + 2S + l + 2S² − S + α·max{0, 2S² + S − 2S² − 2S + 2S·∑_{j∈U} sj − l}
= 4S² + S + 3 + l + α·max{0, (2∑_{j∈U} sj − 1)S − l}
> 4S² + S + 3,
where wk+2 = bk+1 + ∑_{j∈N} bj + l − ∑_{j∈U} aj − ak+2 = 2S² + 2S − 2S·∑_{j∈U} sj + l and l ≥ 0 is the idle time on machine M2 before the start of Jk+2.
(ii)
If 2S² + 2S − 2S·∑_{j∈U} sj < 0, then Jk+2 creates an idle-time slot lk+2 = 2S·∑_{j∈U} sj − 2S² − 2S > 0 on machine M2. From Lemma 2, if Jk+1 is scheduled first and there is no idle time on machine M2 except while the first operation of Jk+1 is processed on machine M1, then 4S² + S + 3 is a lower bound on Cmax. In this case, Jk+2 creates an idle-time slot lk+2 > 0; therefore, Cmax > 4S² + S + 3.
(2)
Similarly, if a job Jr, r ∈ N, creates an idle-time slot lr > 0, then Cmax > 4S² + S + 3.
Thus, we only consider the schedules in which Jk+2 is processed second on machine M1. □
Theorem 1.
For any given positive number α > 0, αi = α, i = 1, …, n, the two-machine makespan flowshop scheduling problem T is NP-hard.
Proof. 
If partition has a solution, then there is an optimal schedule with the makespan Cmax = 4S² + S + 3 (Figure 3a). We note that, in this case, the waiting time satisfies wi ≥ di for each i, so no extra processing time is incurred.
If partition has no solution, we will show that the makespan of any schedule for the above instance is greater than 4S² + S + 3. For a given schedule in which Jk+1 is processed first on both machines and Jk+2 is processed second on machine M1, we let E = {i | Ji is processed between Jk+1 and Jk+2 on machine M2}. By the assumption that partition has no solution, ∑_{i∈E} si − S = c ≠ 0, and we consider the following two cases:
(1)
If c < 0, where l ≥ 0 denotes the idle time on machine M2 before the start of Jk+2 (Figure 3b), we have
Cmax ≥ ak+1 + bk+1 + ∑_{i∈E} bi + l + bk+2 + α·max{0, dk+2 − wk+2} + ∑_{i∈N−E} bi
= 1 + 2 + 2S² + 2S + l + 2S² − S + α·max{0, dk+2 − wk+2}
= 4S² + S + 3 + l + α·max{0, 2S² + S − (2S² + ∑_{i∈E} si + l)}
= 4S² + S + 3 + l + α·max{0, −c − l}
> 4S² + S + 3,
where wk+2 = bk+1 + ∑_{i∈E} bi + l − ak+2 = 2S² + ∑_{i∈E} si + l.
(2)
If c > 0 (c ≥ 1/2, because s1, s2, …, sk are positive integers), without loss of generality, we let Jj, j ∈ E, be the last processed job in E (Figure 3c); then there are two cases:
(i)
If any job Ji, i ∈ E − {j}, does not create an idle-time slot, then we will show that job Jj creates an idle-time slot lj > 0 on machine M2 (Figure 3c). The idle-time slot is
lj = max{0, ak+2 + ∑_{i∈E} ai − bk+1 − ∑_{i∈E−{j}} bi}
= max{0, 2 + 2S·∑_{i∈E} si − 2 − 2S² − ∑_{i∈E} si + sj}
= max{0, 2S(S + c) − 2S² − S − c + sj}
= max{0, 2Sc − S − c + sj}
= max{0, 2(S − 1/2)(c − 1/2) − 1/2 + sj}
= 2(S − 1/2)(c − 1/2) − 1/2 + sj > 0, since S ≥ 1, c ≥ 1/2, and sj ≥ 1.
Hence, Cmax > 4S² + S + 3.
(ii)
If a job Jr, r ∈ E − {j}, creates an idle-time slot lr > 0, then Cmax > 4S² + S + 3.
Thus, the makespan is greater than 4S² + S + 3 if partition has no solution. It follows that partition has a solution if and only if the optimal schedule of the above instance has the minimum makespan Cmax = 4S² + S + 3. □

3. A 0-1 Mixed Integer Programming Formulation

Following an analysis similar to that in Yang and Chern [23], a 0-1 mixed integer programming formulation of the problem is developed as follows. First, we define
  • Ai = the starting time of job i on machine M1;
  • Bi = the starting time of job i on machine M2;
  • wi = the actual waiting time of job i before its processing on machine M2 = Bi − Ai − ai;
  • Wi = max{di − wi, 0} = max{di − Bi + Ai + ai, 0};
  • yijk = 1, if job i precedes job j on machine Mk;
  • yijk = 0, otherwise;
  • Cmax = the makespan.
For each job i, it is clear to have
Bi ≥ Ai + ai, 1 ≤ i ≤ n.  (1)
In addition, it is necessary to assure that no two operations can be processed simultaneously by the same machine. Suppose, for example, that job j precedes job i on machine 1, then it is necessary to have
Ai ≥ Aj + aj.
On the other hand, if job i precedes job j on machine 1, then it is necessary to have
Aj ≥ Ai + ai.
These inequalities are called disjunctive constraints, because one and only one of these inequalities must hold. In order to accommodate these constraints in the formulation, these disjunctive constraints can be rewritten as follows:
Ai + M·yij1 ≥ Aj + aj, 1 ≤ i < j ≤ n,  (2)
Aj + M·(1 − yij1) ≥ Ai + ai, 1 ≤ i < j ≤ n,  (3)
where M represents a sufficiently large positive number.
By the same way, these disjunctive constraints of job i and job j processed on machine 2 can be expressed as follows:
Bi + M·yij2 ≥ Bj + bj + αjWj, 1 ≤ i < j ≤ n,  (4)
Bj + M·(1 − yij2) ≥ Bi + bi + αiWi, 1 ≤ i < j ≤ n.  (5)
We note that
Wi = max{di − Bi + Ai + ai, 0},
then it is necessary to have Wi ≥ 0 and
Wi ≥ di − Bi + Ai + ai, 1 ≤ i ≤ n.  (6)
For the makespan problem, it is necessary to have
Cmax ≥ Bi + bi + αiWi, 1 ≤ i ≤ n.  (7)
Then, a disjunctive integer programming formulation of the proposed problem can be given by
minimize Cmax
subject to Cmax ≥ Bi + bi + αiWi, 1 ≤ i ≤ n,
Bi ≥ Ai + ai, 1 ≤ i ≤ n,
Ai + M·yij1 ≥ Aj + aj, 1 ≤ i < j ≤ n,
Aj + M·(1 − yij1) ≥ Ai + ai, 1 ≤ i < j ≤ n,
Bi + M·yij2 ≥ Bj + bj + αjWj, 1 ≤ i < j ≤ n,
Bj + M·(1 − yij2) ≥ Bi + bi + αiWi, 1 ≤ i < j ≤ n,
Wi ≥ di − Bi + Ai + ai, 1 ≤ i ≤ n,
Cmax, Ai, Bi, Wi ≥ 0, 1 ≤ i ≤ n; yijk = 0 or 1, 1 ≤ i < j ≤ n, 1 ≤ k ≤ 2.
The total number of type (1), (6), and (7) constraints is equal to 3n. The total number of type (2) and (3) constraints is equal to n(n − 1), and the total number of type (4) and (5) constraints is also n(n − 1). Hence, the total number of constraints is n(2n + 1). There are 3n + 1 nonnegative variables Cmax, Ai, Bi, and Wi. We note that if yijk is in the formulation, then yjik does not need to be defined. Hence, there are n(n − 1)/2 0-1 integer variables yij1 and the same number of variables yij2. The total number of variables is thus n(n + 2) + 1.
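A direct transcription of this formulation into a modeling language is straightforward. The sketch below uses the open-source PuLP package; the choice of solver library, the function name, and the particular big-M value are assumptions made for illustration, not part of the paper.

```python
# Illustrative sketch (not from the paper): the 0-1 mixed integer program of Section 3
# written with PuLP. M is a sufficiently large positive constant.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary

def solve_mip(a, b, d, alpha):
    n = len(a)
    M = sum(a) + sum(b) + sum(d)                   # one safe choice of big-M
    prob = LpProblem("two_machine_flowshop", LpMinimize)
    Cmax = LpVariable("Cmax", lowBound=0)
    A = [LpVariable(f"A_{i}", lowBound=0) for i in range(n)]    # start times on M1
    B = [LpVariable(f"B_{i}", lowBound=0) for i in range(n)]    # start times on M2
    W = [LpVariable(f"W_{i}", lowBound=0) for i in range(n)]    # reduction max{d_i - w_i, 0}
    y1 = {(i, j): LpVariable(f"y1_{i}_{j}", cat=LpBinary)
          for i in range(n) for j in range(i + 1, n)}
    y2 = {(i, j): LpVariable(f"y2_{i}_{j}", cat=LpBinary)
          for i in range(n) for j in range(i + 1, n)}
    prob += Cmax                                   # objective: minimize the makespan
    for i in range(n):
        prob += Cmax >= B[i] + b[i] + alpha[i] * W[i]           # constraint (7)
        prob += B[i] >= A[i] + a[i]                             # constraint (1)
        prob += W[i] >= d[i] - B[i] + A[i] + a[i]               # constraint (6)
    for i in range(n):
        for j in range(i + 1, n):
            prob += A[i] + M * y1[i, j] >= A[j] + a[j]                          # (2)
            prob += A[j] + M * (1 - y1[i, j]) >= A[i] + a[i]                    # (3)
            prob += B[i] + M * y2[i, j] >= B[j] + b[j] + alpha[j] * W[j]        # (4)
            prob += B[j] + M * (1 - y2[i, j]) >= B[i] + b[i] + alpha[i] * W[i]  # (5)
    prob.solve()
    return Cmax.value()

a = [1, 3, 2, 3, 2]; b = [5, 4, 1, 2, 3]
d = [2, 3, 1, 2, 5]; alpha = [0.2, 0.3, 0.1, 0.5, 1.2]
print(solve_mip(a, b, d, alpha))   # optimal makespan of the Example 1 data of Section 4
```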

4. Heuristic Algorithm and Its Worst-Case Performance

As a reducible waiting time is considered in the proposed problem, the problem is somewhat similar to the two-machine flowshop scheduling problem with start and stop lags. Therefore, we first use Maggu and Das's algorithm [27] to determine a sequence π (Algorithm 1).
Algorithm 1. Maggu and Das’s algorithm
Step 1. Let J = {J1, J2, …, Jn}.
Step 2. Determine the job processing order in the following way:
  2.1 Decompose set J into the following two sets:
U = {Ji | ai ≤ bi} and V = {Ji | ai > bi}.

  2.2. Arrange the members of set U in nondecreasing order of ai + di, and arrange the members of set V in nonincreasing order of bi + di.
  2.3. The sequence π is the ordered set U followed by the ordered set V.
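A compact rendering of Algorithm 1 is sketched below; the function name and the list-based input format are illustrative assumptions.

```python
# Illustrative sketch (not from the paper's code): Algorithm 1 as a sorting rule.
def maggu_das_sequence(a, b, d):
    """Return the job indices ordered by Maggu and Das's rule."""
    n = len(a)
    U = [i for i in range(n) if a[i] <= b[i]]          # jobs with a_i <= b_i
    V = [i for i in range(n) if a[i] > b[i]]           # jobs with a_i >  b_i
    U.sort(key=lambda i: a[i] + d[i])                  # nondecreasing a_i + d_i
    V.sort(key=lambda i: b[i] + d[i], reverse=True)    # nonincreasing b_i + d_i
    return U + V

# Example 1 data of this section: the rule yields J1, J2, J5, J4, J3.
a = [1, 3, 2, 3, 2]; b = [5, 4, 1, 2, 3]; d = [2, 3, 1, 2, 5]
print(maggu_das_sequence(a, b, d))                     # [0, 1, 4, 3, 2]
```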
Some solvable cases are described in the following.
First, if min_{1≤i≤n}{αi} ≥ 1, the proposed problem is the same as that in Dell'Amico [17]. Therefore, if max_{1≤i≤n}{di} ≤ min_{1≤i≤n}{bi + di} or max_{1≤i≤n}{di} ≤ min_{1≤i≤n}{ai + di}, an optimal schedule is given by Maggu and Das's algorithm.
Theorem 2
([23]). In the problem, the case considered here is with max_{1≤i≤n}{αi} ≤ 1, max_{1≤i≤n}{ai + di} ≤ min_{1≤i≤n}{bi + di}, and ai* + αi*di* = min_{1≤i≤n}{ai + αidi}. If there is a job Jj such that aj + dj ≤ bi* + αi*di*, then (Ji*, Jj, B) is an optimal schedule, where B is an arbitrary subsequence of the jobs without Ji* and Jj.
Proof. 
Please see the proof in Appendix A. □
Theorem 3
([23]). If max_{1≤i≤n}{αi} ≤ 1, min_{1≤i≤n}{ai} ≥ max_{1≤i≤n}{bi + αidi}, and bi* + αi*di* = min_{1≤i≤n}{bi + αidi}, then (B, Ji*) is an optimal schedule, where B is an arbitrary subsequence of the jobs without Ji*, and we schedule the jobs in a no-wait manner as shown in Figure 4.
Proof. 
It is clear that a lower bound on the makespan is ∑_{i=1}^{n} ai + min_{1≤i≤n}{bi + αidi}. If max_{1≤i≤n}{αi} ≤ 1 and min_{1≤i≤n}{ai} ≥ max_{1≤i≤n}{bi + αidi}, then the makespan of (B, Ji*) scheduled in a no-wait manner, as shown in Figure 4, is equal to the lower bound ∑_{i=1}^{n} ai + bi* + αi*di*. Hence, (B, Ji*) is an optimal schedule. □
In the following, we propose a heuristic algorithm for the problem. The heuristic algorithm is presented for the problem with ai ≥ 0, bi ≥ 0, αi > 0, and di > 0 for all of the jobs. In this problem, if αi ≥ 1, then it is useless to reduce the waiting time. However, if 0 < αi < 1, then it is useful to reduce the waiting time as much as possible. A stepwise description of the algorithm is given as follows (Algorithm 2):
Algorithm 2. Heuristic algorithm
Step 0.
(Initialization) Check the conditions stated in Theorem 2 and Theorem 3. If any one of these conditions holds, the optimal schedule is obtained. Otherwise, generate a heuristic solution in the following steps. Determine a sequence π by using Maggu and Das's algorithm. Set k = 1 and C[0],1 = C[0],2 = 0.
Step 1.
(Determining the reduced period of waiting time and the additional time on machine M2 for each job) Set C[k],1 = C[k−1],1 + a[k].
If C[k−1],2 − C[k−1],1 < a[k] (Figure 5a), then go to Step 1.1.
If a[k] ≤ C[k−1],2 − C[k−1],1 ≤ a[k] + d[k] (Figure 5b), then go to Step 1.2.
If a[k] + d[k] < C[k−1],2 − C[k−1],1 (Figure 5c), then go to Step 1.3.
1.1
If 0 < α[k] < 1, then C[k],2 = C[k],1 + b[k] + α[k]d[k].
If 1 ≤ α[k], then C[k],2 = C[k],1 + d[k] + b[k].
Go to Step 2.
1.2
If 0 < α[k] < 1, then C[k],2 = C[k−1],2 + b[k] + α[k]·(C[k],1 + d[k] − C[k−1],2).
If 1 ≤ α[k], then C[k],2 = C[k],1 + d[k] + b[k].
Go to Step 2.
1.3
C[k],2 = C[k−1],2 + b[k]. Go to Step 2.
Step 2.
Set k = k + 1.
If k ≤ n, go to Step 1. Otherwise, stop.
In the heuristic algorithm, Step 0 first determines a sequence π by using Maggu and Das's algorithm. Step 1 then determines the reduced period of waiting time and the additional time on machine M2 for each job. In Step 1.1 and Step 1.2, if α[k] ≥ 1, then it is useless to reduce the waiting time; however, if 0 < α[k] < 1, then it is useful to reduce the waiting time as much as possible, and the completion times on both machines are calculated accordingly. Step 1.3 covers the case a[k] + d[k] < C[k−1],2 − C[k−1],1, in which it is useless to reduce the waiting time whether α[k] ≥ 1 or 0 < α[k] < 1.
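The completion-time recursion of Algorithm 2 can be coded directly. The sketch below follows the branch conditions of Step 1 literally; the function name is an illustrative assumption, and the job sequence (e.g., the one produced by Algorithm 1) is assumed to be supplied by the caller.

```python
# Illustrative sketch (not from the paper's code): the completion-time recursion of
# Algorithm 2 for a given job sequence.
def heuristic_makespan(seq, a, b, d, alpha):
    c1 = c2 = 0.0                              # C[k-1],1 and C[k-1],2
    for i in seq:
        prev1, prev2 = c1, c2
        c1 = prev1 + a[i]                      # C[k],1
        gap = prev2 - prev1
        if gap < a[i]:                                         # Step 1.1
            if alpha[i] < 1:
                c2 = c1 + b[i] + alpha[i] * d[i]               # compress the wait fully
            else:
                c2 = c1 + d[i] + b[i]                          # keep the full delay
        elif gap <= a[i] + d[i]:                               # Step 1.2
            if alpha[i] < 1:
                c2 = prev2 + b[i] + alpha[i] * (c1 + d[i] - prev2)
            else:
                c2 = c1 + d[i] + b[i]
        else:                                                  # Step 1.3
            c2 = prev2 + b[i]
    return c2

# Example 1 data with the sequence J1, J2, J5, J4, J3 (see Appendix B for a trace).
a = [1, 3, 2, 3, 2]; b = [5, 4, 1, 2, 3]
d = [2, 3, 1, 2, 5]; alpha = [0.2, 0.3, 0.1, 0.5, 1.2]
print(heuristic_makespan([0, 1, 4, 3, 2], a, b, d, alpha))
```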
In the following, an example of five jobs is given to illustrate the heuristic algorithm.
Example 1.
There are five jobs; that is, J1, J2, J3, J4, and J5, to be processed on machine M1 and machine M2. The processing times of these jobs on machine M1 are a1 = 1, a2 = 3, a3 = 2, a4 = 3, and a5 = 2, respectively. The processing times on machine M2 are b1 = 5, b2 = 4, b3 = 1, b4 = 2, and b5 = 3, respectively. The delay times required before its processing on machine M2 are d1 = 2, d2 = 3, d3 = 1, d4 = 2, and d5 = 5, respectively. The cost indices are α1 = 0.2, α2 = 0.3, α3 = 0.1, α4 = 0.5, and α5 = 1.2, respectively.
According to the above heuristic algorithm, we obtain that the makespan of these five jobs is 17. Please see the details in Appendix B.
Lower bound
First, we assume that processing on each machine may be continuous, and let D1 = {i | αi ≥ 1, i ∈ N} and D2 = N − D1. We can see that if αi ≥ 1, then it is useless to reduce the waiting time, whereas if 0 < αi < 1, then it is useful to reduce the waiting time as much as possible. Because all the jobs have to be processed on machines M1 and M2, if the delay times are short relative to the corresponding processing times, we have an immediate lower bound LB1 = max{∑_{i=1}^{n} ai + min{min_{i∈D1}{di + bi}, min_{i∈D2}{αidi + bi}}, min{min_{i∈D1}{ai + di}, min_{i∈D2}{ai + αidi}} + ∑_{i=1}^{n} bi}. On the other hand, if one of the delay times is long enough relative to the corresponding processing times, we have the second lower bound LB2 = max{max_{i∈D1}{ai + di + bi}, max_{i∈D2}{ai + αidi + bi}}. Hence, a lower bound is calculated as follows:
Clow = max{∑_{i=1}^{n} ai + min{min_{i∈D1}{di + bi}, min_{i∈D2}{αidi + bi}}, min{min_{i∈D1}{ai + di}, min_{i∈D2}{ai + αidi}} + ∑_{i=1}^{n} bi, max_{i∈D1}{ai + di + bi}, max_{i∈D2}{ai + αidi + bi}}.
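This bound is cheap to evaluate; a sketch is given below (the function name is an illustrative assumption, and D1 and D2 are as defined above).

```python
# Illustrative sketch (not from the paper's code): the lower bound C_low.
def lower_bound(a, b, d, alpha):
    n = len(a)
    D1 = [i for i in range(n) if alpha[i] >= 1]    # no benefit in compressing d_i
    D2 = [i for i in range(n) if alpha[i] < 1]     # compressing d_i fully pays off
    tail = min([d[i] + b[i] for i in D1] + [alpha[i] * d[i] + b[i] for i in D2])
    head = min([a[i] + d[i] for i in D1] + [a[i] + alpha[i] * d[i] for i in D2])
    lb1 = max(sum(a) + tail, head + sum(b))
    lb2 = max([a[i] + d[i] + b[i] for i in D1] +
              [a[i] + alpha[i] * d[i] + b[i] for i in D2])
    return max(lb1, lb2)

a = [1, 3, 2, 3, 2]; b = [5, 4, 1, 2, 3]
d = [2, 3, 1, 2, 5]; alpha = [0.2, 0.3, 0.1, 0.5, 1.2]
print(lower_bound(a, b, d, alpha))                 # about 16.4 for the Example 1 data
```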
In the following, we will analyze the worst case of the heuristic algorithm. First, some notations are defined. Given a solution of the proposed problem, we define the critical job J[c] as the job with the maximum starting time on machine M2 such that C[c],1 + w[c] = B[c], where 0 ≤ w[c] ≤ d[c], and C[c],1, d[c], w[c], and B[c] are the completion time of J[c] on machine M1, the delay time of J[c], the actual waiting time of J[c], and the starting time of J[c] on machine M2, respectively. We also define JP as the set of jobs preceding J[c] on machine M1, and JF as the set of jobs following J[c] on machine M2. Then, considering the schedule in which the jobs are arranged by Maggu and Das's algorithm, there are three cases for the makespan, as follows.
Case 1.
0 < α[c] < 1 and C[c],1 ≥ C[c−1],2 (see Figure 6a):
Cmax(π) = ∑_{i∈JP} ai + a[c] + α[c]d[c] + b[c] + ∑_{i∈JF} bi.
Case 2.
0 < α[c] < 1 and C[c],1 < C[c−1],2 (see Figure 6b):
Cmax(π) = ∑_{i∈JP} ai + a[c] + w[c] + α[c]l[c] + b[c] + ∑_{i∈JF} bi = ∑_{i∈JP} ai + a[c] + d[c] − (1 − α[c])l[c] + b[c] + ∑_{i∈JF} bi, where l[c] = d[c] − w[c].
Case 3.
α[c] ≥ 1 (see Figure 6c):
Cmax(π) = ∑_{i∈JP} ai + a[c] + d[c] + b[c] + ∑_{i∈JF} bi.
Theorem 4.
Let π* be an optimal schedule and π be the Maggu and Das's schedule for the heuristic algorithm. If 0 < α[c] < 1 and C[c],1 ≥ C[c−1],2, then ρ = Cmax(π)/Cmax(π*) ≤ 2 and the bound is tight.
Proof. 
Considering the schedule in which the jobs are arranged by Maggu and Das's algorithm, if 0 < α[c] < 1 and C[c],1 ≥ C[c−1],2, then the makespan is as follows:
Cmax(π) = ∑_{i∈JP} ai + a[c] + α[c]d[c] + b[c] + ∑_{i∈JF} bi.
Therefore, if a[c] ≤ b[c], the jobs Ji ∈ JP have the property that ai ≤ bi. Then, we have the following results:
Cmax(π) ≤ (α[c]d[c] + b[c]) + ∑_{i=1}^{n} bi ≤ max_{i∈D2}{ai + αidi + bi} + min{min_{i∈D1}{ai + di}, min_{i∈D2}{ai + αidi}} + ∑_{i=1}^{n} bi ≤ LB2 + LB1 ≤ 2Clow.
Similarly, if a[c] > b[c], the jobs Ji ∈ JF have the property that ai > bi. Thus,
Cmax(π) ≤ (a[c] + α[c]d[c]) + ∑_{i=1}^{n} ai ≤ max_{i∈D2}{ai + αidi + bi} + ∑_{i=1}^{n} ai + min{min_{i∈D1}{di + bi}, min_{i∈D2}{αidi + bi}} ≤ LB2 + LB1 ≤ 2Clow.
Hence, ρ = Cmax(π)/Cmax(π*) ≤ Cmax(π)/Clow ≤ 2.
To prove the tightness of the bound, consider J1 with a1 = 1, b1 = 1, d1 = n, and 0 < α1 < 1, and Ji (i = 2, 3, …, n) with ai = 1, bi = 1, di = 0, and αi = 0.5. By applying Maggu and Das's algorithm, the job sequence is J2, J3, …, Jn, J1 on both machines M1 and M2. Then, Cmax(π) = n + α1·n + 1 = (1 + α1)n + 1.
On the other hand, an optimal schedule can be obtained by arranging the job sequence J1, J2, J3, …, Jn on machine M1 and the job sequence J2, J3, …, Jn, J1 on machine M2. Then, Cmax(π*) = n + 2. Hence, ρ = Cmax(π)/Cmax(π*) = ((1 + α1)n + 1)/(n + 2), which tends to 2 as n → ∞ and α1 → 1. □
Theorem 5.
Let π* be an optimal schedule and π be the Maggu and Das's schedule for the heuristic algorithm. If α[c] ≥ 1, then ρ = Cmax(π)/Cmax(π*) ≤ 2 and the bound is tight.
Proof. 
The proof is similar to that of Theorem 4; thus, we omit it here. □

5. Computational Experiments

Although we have analyzed the worst case of the heuristic algorithm under certain situations (Cases 1 and 3), the worst-case bound for Case 2 is still unknown. Therefore, in order to evaluate the overall efficiency of the heuristic algorithm, we generate several groups of random problems as follows:
(1)
n is equal to 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 150, 200.
(2)
ai is uniformly distributed over [1,100].
(3)
bi is uniformly distributed over [1,100].
(4)
αi is uniformly distributed over [1,2].
(5)
di is uniformly distributed over [0,50], [50,100], or [100,200] depending on the group.
In the computational experiment, a total of 720 test problems are generated. The computation times of the algorithms for all the test problems are within one second. For each of these random problems, the percentage error e = (Ch − Clow) × 100/Clow is computed, where Ch is the makespan of the heuristic solution and Clow is the lower bound on the makespan.
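The random test instances are easy to regenerate. A sketch under the distributions listed above is given below; the function name, the random seed, and the assumption that ai and bi are drawn as integers are illustrative choices, not specified by the paper.

```python
# Illustrative sketch (not from the paper's code): generate one random test instance
# with the parameter distributions used in Section 5.
import random

def random_instance(n, d_range, seed=None):
    rng = random.Random(seed)
    a = [rng.randint(1, 100) for _ in range(n)]        # a_i uniform over [1, 100]
    b = [rng.randint(1, 100) for _ in range(n)]        # b_i uniform over [1, 100]
    alpha = [rng.uniform(1, 2) for _ in range(n)]      # alpha_i uniform over [1, 2]
    d = [rng.uniform(*d_range) for _ in range(n)]      # d_i uniform over the chosen range
    return a, b, d, alpha

a, b, d, alpha = random_instance(50, (100, 200), seed=1)
# For a heuristic makespan C_h and the lower bound C_low, the reported error is
# e = (C_h - C_low) * 100 / C_low.
```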
The results are given in Table 1. There are 20 test problems for each problem type. To evaluate the overall performance of the heuristic algorithm, we compute the mean of all the average percentage errors reported in Table 1. The mean value is 1.97%, which suggests that the heuristic algorithm, on average, finds schedules within 1.97% of the lower bound, and hence of optimality. From Theorems 4 and 5, the worst-case bound of the heuristic algorithm is 2Clow; therefore, the observed performance of the heuristic algorithm is quite satisfactory. From Figure 7, the larger the value of di, the greater the percentage error. Because the proposed heuristic algorithm is restricted to searching for a near-optimal permutation schedule, this may imply that the optimal schedule is likely to be a non-permutation schedule when the delay times of the jobs are larger. Therefore, the performance of the heuristic algorithm is better when di is smaller. We can also see that the average percentage error decreases as the number of jobs n increases for each range of di. In particular, when the number of jobs n is more than 100, the mean of the average percentage errors is less than 1%. In view of the NP-hardness of the problem, this result is quite encouraging, as it provides an efficient procedure for solving large-sized problems.

6. Conclusions

In this paper, we investigate a two-machine flowshop scheduling problem in which the processing times of the second operations are linearly dependent on the waiting times of the jobs. This problem is shown to be NP-hard. A 0-1 mixed integer programming formulation and an efficient heuristic algorithm are proposed, together with a computational experiment. In addition, the worst-case performance of the heuristic algorithm under some situations is analyzed, and some cases solvable in polynomial time are provided. The computational experiments show that the overall performance of the proposed algorithm is quite satisfactory, especially when the number of jobs is large.
There are some limitations in this study. For example, the proposed heuristic algorithm only searches for a near-optimal permutation schedule. In future research, developing a heuristic algorithm that considers both permutation and non-permutation schedules may improve the scheduling performance (makespan). In addition, other performance measures, such as total tardiness or maximum lateness, may be considered for the proposed problem. Moreover, in real situations, the cost of reducing the waiting time may be non-linear, or the compression of the waiting time may be limited. Future research may focus on these issues as well.

Author Contributions

Conceptualization, D.-L.Y.; Methodology, D.-L.Y. and W.-H.K.; Software, W.-H.K.; Validation, D.-L.Y. and W.-H.K.; Formal Analysis, D.-L.Y. and W.-H.K.; Investigation, W.-H.K.; Resources, D.-L.Y.; Data Curation, W.-H.K.; Writing-Original Draft Preparation, W.-H.K.; Writing-Review & Editing, W.-H.K.; Visualization, D.-L.Y.; Supervision, D.-L.Y.; Project Administration, D.-L.Y.; Funding Acquisition, D.-L.Y.

Funding

This research was partially supported by the Ministry of Science and Technology of the Republic of China under grant MOST 103-2221-E-150-028.

Acknowledgments

The authors wish to thank the anonymous reviewers for their helpful comments on an earlier version of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Theorem 2.
In the problem, the case considered here is with max_{1≤i≤n}{αi} ≤ 1, max_{1≤i≤n}{ai + di} ≤ min_{1≤i≤n}{bi + di}, and ai* + αi*di* = min_{1≤i≤n}{ai + αidi}. If there is a job Jj such that aj + dj ≤ bi* + αi*di*, then (Ji*, Jj, B) is an optimal schedule, where B is an arbitrary subsequence of the jobs without Ji* and Jj.
Proof. 
Without loss of generality, let Ji* = J1 and Jj = J2. It is clear that a lower bound on the makespan is min_{1≤i≤n}{ai + αidi} + ∑_{i=1}^{n} bi. In the following, we will show that if max_{1≤i≤n}{αi} ≤ 1, max_{1≤i≤n}{ai + di} ≤ min_{1≤i≤n}{bi + di}, and aj + dj ≤ bi* + αi*di*, then the makespan of (Ji*, Jj, B), as shown in Figure A1, is equal to the lower bound ai* + αi*di* + ∑_{i=1}^{n} bi. Hence, (Ji*, Jj, B) is an optimal schedule.
Figure A1. Optimal schedule for the case that max_{1≤i≤n}{αi} ≤ 1, max_{1≤i≤n}{ai + di} ≤ min_{1≤i≤n}{bi + di}, aj + dj ≤ bi* + αi*di*, and B = {J1, …, Jn} − {Ji*, Jj}.
The constraint max_{1≤i≤n}{αi} ≤ 1 implies that it is worth reducing the delay time of a job if the finishing time of the job on machine M2 can thereby be made earlier. However, in such a case, only the delay time of the first job in the sequence is worth reducing. Therefore, the constraint aj + dj ≤ bi* + αi*di* guarantees that the finishing time of Jj on machine M1 plus the delay time of the job is no later than the finishing time of Ji* (the first job) on machine M2. The constraint max_{1≤i≤n}{ai + di} ≤ min_{1≤i≤n}{bi + di} guarantees that the finishing times of the jobs (except for Ji* and Jj) on machine M1, plus the delay times of these jobs, are no later than the finishing times of their preceding jobs on machine M2. It also implies that ai ≤ bi for i = 1, 2, …, n.
We proceed by induction. First, the finishing time of Ji* on machine M2 is ai* + bi* + αi*di*. The finishing time of Jj (the second job) on machine M1 plus the delay time of the job is ai* + aj + dj. If aj + dj ≤ bi* + αi*di*, then ai* + aj + dj ≤ ai* + bi* + αi*di*. This means that the finishing time of Jj on machine M1 plus the delay time of the job is no later than the finishing time of Ji* on machine M2, so there is no room for the reduction of the actual waiting time of Jj. Therefore, job Jj can be processed on machine M2 immediately after job Ji* is finished on machine M2. Then, the completion time of Jj on machine M2 is ai* + αi*di* + bi* + bj. Therefore, the case m = 2 (Jj = J2) is true. If the mth case is assumed to be true, that is, ai* + aj + ∑_{k=3}^{m} ak + dm ≤ ai* + bi* + αi*di* + bj + ∑_{k=3}^{m−1} bk and the completion time of the mth job on machine M2 is ai* + bi* + αi*di* + bj + ∑_{k=3}^{m−1} bk + bm, then we show that the (m + 1)st case is also true.
The finishing time of the (m + 1)st job (say Jm+1) on machine M1 plus the delay time of the job is A = ai* + aj + ∑_{k=3}^{m} ak + am+1 + dm+1. Because max_{1≤i≤n}{ai + di} ≤ min_{1≤i≤n}{bi + di} and ai* + aj + ∑_{k=3}^{m} ak + dm ≤ ai* + bi* + αi*di* + bj + ∑_{k=3}^{m−1} bk, we have A = (ai* + aj + ∑_{k=3}^{m} ak + dm) + (am+1 + dm+1 − dm) ≤ (ai* + bi* + αi*di* + bj + ∑_{k=3}^{m−1} bk) + (bm + dm − dm) = ai* + bi* + αi*di* + bj + ∑_{k=3}^{m} bk, and the completion time of the (m + 1)st job on machine M2 is ai* + bi* + αi*di* + bj + ∑_{k=3}^{m} bk + bm+1. This completes the proof. □

Appendix B

Example.
There are five jobs; that is, J1, J2, J3, J4, and J5, to be processed on machine M1 and machine M2. The processing times of these jobs on machine M1 are a1 = 1, a2 = 3, a3 = 2, a4 = 3, and a5 = 2, respectively. The processing times on machine M2 are b1 = 5, b2 = 4, b3 = 1, b4 = 2, and b5 = 3, respectively. The delay times required before its processing on machine M2 are d1 = 2, d2 = 3, d3 = 1, d4 = 2, and d5 = 5, respectively. The cost indices are α1 = 0.2, α2 = 0.3, α3 = 0.1, α4 = 0.5, and α5 = 1.2, respectively.
Step 0.
Because max_{1≤i≤n}{αi} = α5 = 1.2 > 1, Theorem 2 and Theorem 3 cannot be applied to this example. A sequence π using Maggu and Das's algorithm is determined as follows: U = {J1, J2, J5} and V = {J4, J3}. Therefore, the sequence is π = (J[1], J[2], J[3], J[4], J[5]) = (J1, J2, J5, J4, J3). Set k = 1 and C[0],1 = C[0],2 = 0.
Step 1.
(k = 1)
Set C[1],1 = C[0],1 + a[1] = 0 + 1 = 1.
C[0],2 − C[0],1 = 0 < a[1] = 1. Go to Step 1.1.
Step 1.1.
0 < α[k] = α[1] = 0.2 < 1.
C[1],2 = C[1],1 + b[1] + α[1]d[1] = 1 + 5 + (0.2)(2) = 6.4.
Go to Step 2.
Step 2.
Set k = k +1=2 ≤ 5. Go to Step 1.
Step 1.
(k = 2)
Set C[2],1 = C[1],1+a[2] = 1 + 3 = 4.
a[2] = 3 ≤ C[1],2 − C[1],1 = 6.4 − 1 = 5.4 ≤ a[2] + d[2] = 3 + 3 = 6. Go to Step 1.2.
Step 1.2.
0 < α[2] = 0.3 < 1.
C[2],2 = C[1],2 + b[2] + α[2]·(C[2],1 + d[2] − C[1],2) = 6.4 + 4 + (0.3)(4 + 3 − 6.4) = 10.58.
Go to Step 2.
Step 2.
Set k = k + 1 = 3 ≤ 5. Go to Step 1.
Step 1.
(k = 3)
Set C[3],1 = C[2],1 + a[3] = 4 + 2 = 6.
a[3] = 2 ≤ C[2],2 − C[2],1 = 10.58 − 4 = 6.58 ≤ a[3] + d[3] = 2 + 5 = 7. Go to Step 1.2.
Step 1.2.
α[3] = 1.2 > 1.
C[3],2 = C[3],1 + d[3] + b[3] = 6 + 5 +3 = 14.
Go to Step 2.
Step 2.
Set k = k + 1 =4 ≤ 5. Go to Step 1.
Step 1.
(k = 4)
Set C[4],1 = C[3],1 + a[4] = 6 + 3 = 9.
C[3],2 − C[3],1 = 14 − 6 = 8 > a[4] + d[4] = 3 + 2 = 5. Go to Step 1.3.
Step 1.3.
C[4],2 = C[3],2 + b[4] = 14 + 2 = 16.
Go to Step 2.
Step 2.
Set k = k + 1 = 5 ≤ 5. Go to Step 1.
Step 1.
(k = 5)
Set C[5],1 = C[4],1+a[5] = 9 + 2 = 11.
C[4],2 − C[4],1 = 16 − 9 = 7 > a[5] + d[5] = 2 + 1 = 3. Go to Step 1.3.
Step 1.3.
C[5],2 = C[4],2 + b[5] = 16 + 1 = 17.
Go to Step 2.
Step 2.
Set k = k + 1 = 6 > 5. Stop
Therefore, the makespan of these five jobs is C[5],2 = 17.

References

  1. Hodson, A.; Muhlemann, A.; Price, D. A microcomputer based solution to a practical scheduling problem. J. Oper. Res. Soc. 1985, 36, 903–914.
  2. Chen, S.-H.; Liou, Y.-C.; Chen, Y.-H.; Wang, K.-C. Order Acceptance and Scheduling Problem with Carbon Emission Reduction and Electricity Tariffs on a Single Machine. Sustainability 2019, 11, 5432.
  3. Giret, A.; Trentesaux, D.; Prabhu, V. Sustainability in manufacturing operations scheduling: A state of the art review. J. Manuf. Syst. 2015, 37, 126–140.
  4. Gong, L.; Li, Y.; Xu, D. Combinational Scheduling Model Considering Multiple Vehicle Sizes. Sustainability 2019, 11, 5144.
  5. Liu, Y.; Liao, X.; Zhang, R. An Enhanced MOPSO Algorithm for Energy-Efficient Single-Machine Production Scheduling. Sustainability 2019, 11, 5381.
  6. Shiue, F.-J.; Zheng, M.-C.; Lee, H.-Y.; Khitam, A.F.K.; Li, P.-Y. Renovation Construction Process Scheduling for Long-Term Performance of Buildings: An Application Case of University Campus. Sustainability 2019, 11, 5542.
  7. Theophilus, O.; Dulebenets, M.A.; Pasha, J.; Abioye, O.F.; Kavoosi, M. Truck Scheduling at Cross-Docking Terminals: A Follow-Up State-Of-The-Art Review. Sustainability 2019, 11, 5245.
  8. Torkjazi, M.; Huynh, N. Effectiveness of Dynamic Insertion Scheduling Strategy for Demand-Responsive Paratransit Vehicles Using Agent-Based Simulation. Sustainability 2019, 11, 5391.
  9. Johnson, S.M. Optimal two- and three-stage production schedules with setup times included. Nav. Res. Logist. Q. 1954, 1, 61–68.
  10. Mitten, L. Sequencing n jobs on two machines with arbitrary time lags. Manag. Sci. 1959, 5, 293–298.
  11. Sule, D.R.; Huang, K.Y. Sequency on two and three machines with setup, processing and removal times separated. Int. J. Prod. Res. 1983, 21, 723–732.
  12. Maggu, P.L.; Singhal, M.L.; Mohammad, N.; Yadav, S.K. On n-job, 2-machine flow-shop scheduling problem with arbitrary time lags and transportation times of jobs. J. Oper. Res. Soc. Jpn. 1982, 25, 219–227.
  13. Ruiz, R.; Stützle, T. An iterated greedy heuristic for the sequence dependent setup times flowshop problem with makespan and weighted tardiness objectives. Eur. J. Oper. Res. 2008, 187, 1143–1159.
  14. Nishi, T.; Hiranaka, Y. Lagrangian relaxation and cut generation for sequence-dependent setup time flowshop scheduling problems to minimise the total weighted tardiness. Int. J. Prod. Res. 2013, 51, 4778–4796.
  15. Wang, Y.; Li, X.; Ma, Z. A Hybrid Local Search Algorithm for the Sequence Dependent Setup Times Flowshop Scheduling Problem with Makespan Criterion. Sustainability 2017, 9, 2318.
  16. Reddi, S.; Ramamoorthy, C. On the flow-shop sequencing problem with no wait in process. J. Oper. Res. Soc. 1972, 23, 323–331.
  17. Dell'Amico, M. Shop problems with two machines and time lags. Oper. Res. 1996, 44, 777–787.
  18. Yu, W.; Hoogeveen, H.; Lenstra, J.K. Minimizing makespan in a two-machine flow shop with delays and unit-time operations is NP-hard. J. Sched. 2004, 7, 333–348.
  19. Fiszman, S.; Mosheiov, G. Minimizing total load on a proportionate flowshop with position-dependent processing times and job-rejection. Inf. Process. Lett. 2018, 132, 39–43.
  20. Yang, D.-L.; Chern, M.-S. A two-machine flowshop sequencing problem with limited waiting time constraints. Comput. Ind. Eng. 1995, 28, 63–70.
  21. Su, L.-H. A hybrid two-stage flowshop with limited waiting time constraints. Comput. Ind. Eng. 2003, 44, 409–424.
  22. Sriskandarajah, C.; Goyal, S. Scheduling of a two-machine flowshop with processing time linearly dependent on job waiting-time. J. Oper. Res. Soc. 1989, 40, 907–921.
  23. Yang, D.-L.; Chern, M.-S. A generalized two-machine flowshop scheduling problem with processing time linearly dependent on job waiting-time. Comput. Ind. Eng. 1999, 36, 365–378.
  24. Chung, T.-P.; Sun, H.; Liao, C.-J. Two new approaches for a two-stage hybrid flowshop problem with a single batch processing machine under waiting time constraint. Comput. Ind. Eng. 2017, 113, 859–870.
  25. Wang, B.; Huang, K.; Li, T. Permutation flowshop scheduling with time lag constraints and makespan criterion. Comput. Ind. Eng. 2018, 120, 1–14.
  26. Garey, M.R.; Johnson, D.S. Computers and Intractability: A Guide to the Theory of NP-Completeness; W.H. Freeman and Company: New York, NY, USA, 1979.
  27. Maggu, P.L.; Das, G. On 2×n sequencing problem with transportation times of jobs. Pure Appl. Math. Sci. 1980, 12, 1–6.
Figure 1. An illustration for the interchange of operations in Lemma 1.
Figure 2. Gantt diagram for the case that job Ji is processed between Jk+1 and Jk+2.
Figure 3. Optimal schedule with the makespan Cmax = 4S² + S + 3.
Figure 4. Optimal schedule for the case that max_{1≤i≤n}{αi} ≤ 1, min_{1≤i≤n}{ai} ≥ max_{1≤i≤n}{bi + αidi}, and J[i] is the job scheduled at the ith position in the processing sequence.
Figure 5. Gantt diagrams for three cases in the heuristic algorithm.
Figure 6. Gantt diagrams for three cases of the critical job.
Figure 7. The average percentages of errors in the computational experiments.
Table 1. Computational results for the heuristic algorithm.
n | di | Minimum Percentage Error (%) | Average Percentage Error (%) | Maximum Percentage Error (%)
10 | [0,50] | 0 | 3.8172 | 10.6325
10 | [50,100] | 0 | 5.9864 | 11.8965
10 | [100,200] | 0 | 8.1236 | 19.3263
20 | [0,50] | 0 | 2.7115 | 5.3269
20 | [50,100] | 0 | 2.9658 | 10.1268
20 | [100,200] | 1.2547 | 5.1579 | 9.8852
30 | [0,50] | 0 | 1.2363 | 3.9986
30 | [50,100] | 0 | 2.6636 | 6.2355
30 | [100,200] | 3.6278 | 5.6986 | 12.3023
40 | [0,50] | 0.1321 | 2.1506 | 5.3330
40 | [50,100] | 0.1036 | 1.5367 | 3.7795
40 | [100,200] | 1.2233 | 3.9968 | 7.4862
50 | [0,50] | 0 | 0.9865 | 1.2296
50 | [50,100] | 0.1007 | 1.2233 | 3.0125
50 | [100,200] | 1.0023 | 3.0559 | 6.7756
60 | [0,50] | 0.0553 | 1.0021 | 2.9140
60 | [50,100] | 0.3054 | 0.9953 | 1.2235
60 | [100,200] | 0.9562 | 2.0013 | 3.9240
70 | [0,50] | 0.0345 | 1.0782 | 2.6835
70 | [50,100] | 0.1602 | 0.9866 | 2.2012
70 | [100,200] | 0.6058 | 2.6158 | 3.9671
80 | [0,50] | 0 | 0.2212 | 1.2369
80 | [50,100] | 0.2365 | 0.9968 | 2.0123
80 | [100,200] | 0.5523 | 1.2633 | 2.1825
90 | [0,50] | 0.0327 | 0.3223 | 1.0023
90 | [50,100] | 0.2305 | 0.6696 | 1.2354
90 | [100,200] | 0.6813 | 2.0362 | 3.7182
100 | [0,50] | 0.0173 | 0.1211 | 1.2310
100 | [50,100] | 0.0235 | 0.9071 | 2.3261
100 | [100,200] | 0.2103 | 1.5981 | 3.6682
150 | [0,50] | 0 | 0.4355 | 0.8908
150 | [50,100] | 0.0636 | 0.3693 | 1.0032
150 | [100,200] | 0.1015 | 1.1036 | 1.9936
200 | [0,50] | 0 | 0.1036 | 0.3679
200 | [50,100] | 0.3012 | 0.4250 | 0.5587
200 | [100,200] | 0.2963 | 0.3972 | 0.8572
(Errors are the minimum, average, and maximum values observed in 20 test problems for each problem type.)
