Article

An Improved Migrating Birds Optimization Algorithm for a Hybrid Flow Shop Scheduling within Steel Plants

1
Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan 430000, China
2
Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan 430000, China
*
Author to whom correspondence should be addressed.
Submission received: 9 September 2020 / Revised: 22 September 2020 / Accepted: 23 September 2020 / Published: 26 September 2020
(This article belongs to the Special Issue Fuzzy Sets and Soft Computing)

Abstract

The steelmaking and continuous-casting (SCC) scheduling problem is a realistic hybrid flow shop scheduling problem with continuous-casting production at the last stage. This study considers the SCC scheduling problem with diverse products, which is a vital and difficult problem in steel plants. To tackle this problem, this study first presents a mixed-integer linear programming (MILP) model that minimizes the makespan. Then, an improved migrating birds optimization algorithm (IMBO) is proposed to tackle this NP-hard problem. In the proposed IMBO, several improvements are employed to achieve a proper balance between exploration and exploitation. Specifically, a two-level decoding procedure is designed to achieve feasible solutions; a simulated annealing-based acceptance criterion is employed to ensure the diversity of the population and help the algorithm escape from local optima; and a competitive mechanism is developed to emphasize exploitation capacity by searching around the most promising solution space. The computational experiments demonstrate that the proposed IMBO achieves competitive performance and outperforms seven other implemented algorithms in the comparative study.

1. Introduction

The common flow shop scheduling problem (FSP) is a complex combinatorial optimization problem with wide applications in industry [1,2,3]. A wide variety of approaches have been developed to solve the FSP and its variants. This study considers a realistic hybrid flow shop scheduling problem (HFSP) in steel plants, namely the steelmaking and continuous-casting (SCC) scheduling problem. In the literature, the SCC scheduling problem is usually regarded as an HFSP with continuous-casting production at the last stage.
SCC is the key process in the steel production system for manufacturing billets or slabs with user-defined chemical compositions [4]. In this process, hot metal is first smelted into molten steel with a strictly predefined chemical composition, and then it is cast into solidified parts. The general HFSP consists of k stages, and each stage has one or more parallel machines. A set of N jobs needs to be processed by the machines available at each stage in sequence. Similarly, the SCC scheduling problem consists of three stages (steelmaking, refining, and continuous casting) in series, where each stage can have several machines in parallel, and a set of charges is processed by the machines available at each stage in sequence. Nevertheless, SCC production has three main characteristics: (1) Continuous and intermittent production modes coexist. At the first two stages, a charge is regarded as the minimum production unit, corresponding to the definition of a job in flow shop scheduling. However, the charges with similar chemical compositions must be processed together in sequence on the same caster at the last stage. These charges constitute one batch (also called a cast in some studies), and hence the batch with several charges is the basic production unit on the casters. (2) The casting process has strict limitations on temperature. Molten steel must be kept above a certain temperature to satisfy the technological requirements for casting. (3) A setup time is required for production preparation between any two adjacent batches. All these features make the SCC scheduling problem more difficult than the general HFSP [5,6].
On the other hand, since diversified steel products are in demand, the ingredients and specifications of steel products have been further subdivided. This results in increased batch diversity and highly frequent utilization of the refining process. These changes further increase the time complexity of solving the SCC scheduling problem and make it more difficult to develop a reasonable and effective production schedule. With regard to the objectives of this problem, the minimization of the makespan (also called the maximum completion time) is commonly utilized as a performance evaluator. This objective reflects productivity accurately, and has the advantages of reducing the production cycle, reducing machine-related waiting times, and reducing tardiness penalties associated with punctual delivery [6,7].
Extending the standard notation in [8], the considered SCC scheduling problem is described as $FHm,\ ((PM)^{(i)})_{k=1}^{M_i}\,||\,C_{max}$. In the above expression, $FHm$ means the problem is a hybrid flow shop scheduling problem with a predefined number of stages; $((PM)^{(i)})_{k=1}^{M_i}$ means that there are one or more identical parallel machines at each stage, $k = 1, 2, \ldots, M_i$; $C_{max}$ represents the objective of minimizing the makespan. The simplest scenario of $FHm$, which involves two stages with a single unit at one stage and two identical units at the other, has been proven NP-hard [9]. Hence, the considered SCC scheduling problem also belongs to the NP-hard category, and it is difficult to obtain a high-quality schedule within an acceptable computation time.
Therefore, this study presents an improved migrating birds optimization algorithm (IMBO) to effectively tackle the SCC scheduling problem with diverse products. The proposed IMBO utilizes two neighborhood structures to intensify exploration and adopts a simulated annealing-based acceptance criterion to ensure the diversity of the population and to escape from local optima [10,11]. Meanwhile, the proposed method also employs a competitive mechanism to emphasize exploitation capacity by searching around the most promising solution space. All these improvements help the proposed algorithm achieve a proper balance between exploration and exploitation. Notice that this is the first attempt to apply the IMBO to the SCC scheduling problem. The computational study demonstrates that the proposed IMBO achieves competitive performance and outperforms the original migrating birds optimization algorithm and seven other well-known algorithms.
The remainder of this paper is organized as follows: Section 2 provides the literature review, and Section 3 presents the problem description and mathematical formulation. Subsequently, Section 4 introduces the proposed improved migrating birds optimization algorithm. Section 5 reports the experimental results, and finally Section 6 provides conclusions and future research avenues.

2. Literature Review

For solving the general HFSP and its variants, many approaches have been developed to obtain optimal or near-optimal solutions [12,13,14]. These approaches include exact algorithms [15,16], heuristic methods [17,18], and meta-heuristic methods [3,11]. This study mainly focuses on recent achievements on the SCC scheduling problem.
Exact methods attempt to solve the SCC scheduling problem optimally, and they can also provide duality gaps or lower bounds for evaluating solutions. Bellabdaoui and Teghem [19] presented an SCC scheduling model based on mixed-integer mathematical programming; Harjunkoski and Grossmann [20] presented a decomposition-based mathematical programming strategy for solving large-size scheduling problems. These mathematical models are solved by branch-and-bound or branch-and-cut methods. Tang, et al. [21] presented a mathematical programming formulation and then developed a solution methodology combining Lagrangian relaxation and heuristics to solve the problem. However, it is worth mentioning that, as the problem size increases, the computational time spent by exact methods grows rapidly. Hence, exact methods can only obtain optimal solutions for small-size instances, whereas they cannot solve large-size instances effectively within an acceptable computation time.
Heuristic methods are developed to achieve feasible solutions in a short computational time. Peng, et al. [22] developed a heuristic scheduling procedure with an optional figure-based directional search process. Hauber [23] put forward a heuristic algorithm consisting of three levels: cast scheduling on casters, scheduling of the other stages and exact timing of all operations. Buyukozkan, et al. [24] studied the SCC scheduling problem with task start time delay, and they designed an efficient heuristic scheduling method to respond to any interruption in the production quickly. However, heuristic algorithms might not obtain high-quality solutions for some instances.
Compared to heuristic algorithms, meta-heuristics can achieve better solutions at the cost of additional computation time [25,26]. Regarding the meta-heuristics, Atighehchian, et al. [27] combined ant colony optimization with an iterative algorithm. Considering controllable processing times at the last stage, Jiang, et al. [5] presented an improved differential evolution algorithm. To solve the SCC scheduling problem with dynamic operation and skipping features, Li and Pan [28] proposed an improved discrete artificial bee colony (DABC) algorithm. The above studies suggest that meta-heuristics, especially population-based optimization algorithms, are well suited to solving the SCC scheduling problem.
However, the above studies do not consider the SCC scheduling problem with diverse products. Hence, this study formulates this problem and develops a meta-heuristic, namely the migrating birds optimization (MBO) algorithm, to solve the considered SCC scheduling problem. The MBO algorithm has been successfully applied to combinatorial optimization problems including flow shop scheduling problems [11,29], flexible manufacturing systems [30], assembly line balancing problems [31,32], the task allocation problem [10], etc. The MBO algorithm has a unique evolution mechanism with a neighbor-sharing mechanism that differentiates it from other algorithms. The MBO is selected in this study for the following reasons. Firstly, MBO is a population-based algorithm built on local search, and it is capable of achieving local optimal solutions in a short time. Secondly, the sharing mechanism replaces poor-quality solutions with the neighbor solutions of other individuals in the swarm, which accelerates the evolution process. Last but not least, we also present several improvements to enhance the MBO algorithm to achieve a proper balance between exploration and exploitation. Notice that this is the first attempt to apply the MBO to the SCC scheduling problem under consideration.

3. Problem Description and Mathematical Formulation

This section first provides the problem description and later presents the mathematical formulation.

3.1. Problem Description

SCC production mainly contains three phases: steelmaking, refining and continuous casting. Figure 1 presents a schematic diagram of the SCC production process. At the steelmaking phase, hot metal and steel scrap are smelted into molten steel in several identical and parallel smelting converter furnaces, i.e., Linz-Donawitz (LD) furnaces. Subsequently, the molten steel is held in a ladle pot as a charge. Then, each charge is transferred to the refining stages to adjust its chemical composition on a ladle furnace (LF) or even to undergo vacuum degassing for hydrogen removal on a Ruhrstahl-Heraeus (RH) unit. Once the temperature and chemical composition are in accordance with the technological requirements, charges are finally sent to their designated continuous caster (CC) to produce slabs. At the steelmaking and refining stages, a charge can be assigned to any parallel machine at each stage. Nevertheless, the charges with similar chemical compositions must be processed together on the same caster, and these charges constitute one batch (cast). Each batch contains one or several charges with similar chemical compositions, and all the charges in a batch should be processed in continuous production mode. When a batch is completed on a continuous caster, a setup time is needed for production preparation of the immediately following batch on this caster.
Assume that, for this realistic hybrid flow shop scheduling problem, there are J charges (j = 1, 2, …, J) and L batches (l = 1, 2, …, L). At each processing stage, there are Mi identical and parallel machines. In batch l, the LJ(l,j) charges belonging to this batch have exactly one processing route. The processing time of charge j at stage i is denoted by PTi,j. The setup time for production preparation is Su, and the total number of unit-specific event points is denoted by T. The main purpose of the SCC scheduling problem is to determine:
(1)
Optimal batch schedule including the allocation, sequencing and timing of each batch at the last stage;
(2)
Optimal schedule for all charges including their allocation, sequencing and timings at other stages.
The main assumptions of this study are presented as follows. Notice that the SCC scheduling problem addressed in this study is different from those reported in the literature [33] as diverse products are considered. Namely, in this study the charges in different batches may have different processing routes, and clearly the considered problem is more complex than that in [16].
(1)
The sequence of the charges belonging to the same batch is predefined;
(2)
Machine malfunction is not considered;
(3)
Transportation time between any two successive stages is negligible;
(4)
Setup time between any two successive batches is independent of batch sequence and properties;
(5)
Unlimited storage capacity is provided between stages;
(6)
All charges and processing units are available for processing at time zero.

3.2. Mathematical Formulation

Although there are different modelling approaches for scheduling problems, including discrete- and continuous-time representations, the unit-specific event-based modelling approach, which is a kind of continuous-time representation, has shown clear advantages in the literature [34,35]. The modelling method based on the unit-specific event-based representation is very flexible and capable of expressing many hybrid scheduling characteristics. It has been successfully applied to various scheduling problems in both academic research and industrial applications. Hence, this study utilizes the unit-specific event-based modelling approach to represent the scheduling horizon, where the scheduling horizon is divided into T (t = 1, 2, …, T) event points. Each machine unit has T event points, and the locations of the same event points for different machine units may be different, as shown in Figure 2.
To formulate the problem, it is supposed that all the charges undergo every stage, but the processing time is set to zero when a charge does not need a stage in real production. Before introducing the mathematical formulation, the notation used is first introduced in Table 1.
On the basis of the utilized notations, the mathematical model is formulated with expressions (1)–(19) to minimize the makespan.
$\min\ C_{max}$ (1)
$C_{max} \geq TS_{m,t} + \sum_{j}\sum_{t} X_{j,m,t}\, PT_{i,j} + \sum_{t<|T|}\left(TS_{m,t+1} - TS_{m,t} - \sum_{j} X_{j,m,t}\, PT_{i,j}\right), \quad i=s,\ m \in M_i,\ t=1$ (2)
$C_{max} \geq TF_{m,t}, \quad i=s,\ m \in M_i,\ t=T$ (3)
$\sum_{j} X_{j,m,t} \leq 1, \quad \forall m, t$ (4)
$\sum_{m=1}^{M_i}\sum_{t} X_{j,m,t} = 1, \quad \forall j, i$ (5)
$\sum_{m=1}^{M_i} Y_{l,m} = 1, \quad \forall l,\ i=s$ (6)
$\sum_{j \in LJ(l,j)}\sum_{t} X_{j,m,t} \geq Y_{l,m}, \quad i=s,\ m \in M_i$ (7)
$X_{j+1,m,t+1} \geq X_{j,m,t}, \quad (j, j+1) \in LJ(l,j),\ i=s,\ m \in M_i,\ t<T$ (8)
$\sum_{j} X_{j,m,t+1} \leq \sum_{j} X_{j,m,t}, \quad \forall m,\ t<T$ (9)
$Tsi_{i+1,j} \geq Tsi_{i,j} + PT_{i,j}, \quad \forall j,\ i<s,\ m \in M_i$ (10)
$Tsi_{i,j+1} = Tsi_{i,j} + PT_{i,j}, \quad (j, j+1) \in LJ(l,j),\ i=s,\ m \in M_i$ (11)
$TS_{m,t+1} \geq TF_{m,t}, \quad \forall m,\ t<T$ (12)
$TF_{m,t} \geq TS_{m,t} + PT_{i,j}, \quad \forall i,\ m \in M_i,\ t<T$ (13)
$TS_{m,t} \leq Tsi_{i,j} + U(1 - X_{j,m,t}), \quad \forall j, i,\ m \in M_i,\ t$ (14)
$TS_{m,t} \geq Tsi_{i,j} - U(1 - X_{j,m,t}), \quad \forall j, i,\ m \in M_i,\ t$ (15)
$Tsi_{i,j} \geq TF_{m,t} + Su - U(1 - X_{j,m,t+1}), \quad j \in LJF(l,j),\ i=s,\ m \in M_i,\ t<T$ (16)
$X_{j,m,t}, Y_{l,m} \in \{0, 1\}, \quad \forall l, j, i,\ m \in M_i,\ t$ (17)
$Tsi_{i,j} \geq 0, \quad \forall j, i$ (18)
$TS_{m,t}, TF_{m,t} \geq 0, \quad \forall m, t$ (19)
The formulated model in expression (1) minimizes the makespan. Constraints (2) and (3) calculate lower bounds of $C_{max}$ to reduce the search scope and computational time. Constraint (2) indicates that, at the last stage s, $C_{max}$ must be larger than or equal to the sum of the start time of the first charge, the total processing times of all charges at stage s, and the total idle times on machine m. Constraint (3) indicates that, at the last stage s, $C_{max}$ must be larger than or equal to the end time of the last charge on each machine of the last stage.
For the remaining constraints, constraints (4) and (5) mean that a machine at any stage can operate at most one charge at each event point and that a charge must be processed exactly once at any stage. Constraint (6) implies that each batch must be allocated to exactly one caster at the last stage, and all charges belonging to a batch must be processed sequentially on the specified machine based on the predetermined order without interruption. Constraint (7) ensures that each event point cannot be assigned unless the previous one has been assigned. This additional constraint guarantees that all event points of each machine are allocated one by one. Constraints (8)–(13) calculate the start and end times of each operation of all charges to ensure their feasibility. Constraints (14)–(16) consider the situation where charge j at stage i is allocated to event point t on machine m. Specifically, the start time of charge j at stage i equals that of event point t on machine m when charge j at stage i is allocated to event point t on machine m; otherwise, the start time of charge j at stage i and that of event point t on machine m are both relaxed. Constraints (17)–(19) define the domains of the decision variables.
This formulated model is a mixed-integer linear programming model, and small-size problems described by this model can be solved optimally with the Cplex solver. This mathematical model differs from those published previously in that it considers diverse products, and the model will be tested in Section 5.3.

4. Improved Migrating Birds Optimization Algorithm

This section introduces the basic MBO, and later describes the main components of the proposed IMBO algorithm.

4.1. Introduction of the Basic MBO

Migrating birds optimization (MBO) is a recent high-performing meta-heuristic algorithm inspired by the flight of migrating birds in a V-shaped formation. Within this formation, the bird at the front, called the leader, leads the whole flock, and the other birds are placed equally on the left and right sides of the leader. During the flight, when the leader gets tired, it moves back to the tail of the left or right line, and the following bird with the best fitness becomes the new leader. This algorithm has shown competitive performance on many combinatorial optimization problems [36,37].
In the MBO algorithm, solutions are regarded as birds in the flock. This algorithm consists of initialization, leader improvement, follower improvement, new leader selection and this main cycle is repeated until a termination criterion is met. The main procedure of the basic MBO is presented in Algorithm 1. This algorithm has four parameters: the number of individuals in the swarm (α), the number of neighbors around the leader and following bird (β), the number of neighbors to be shared with the following birds (χ), and the number of iterations before replacing the leader, which is also called the number of tours (γ).
Algorithm 1 Procedure of the basic migrating birds optimization
Input: Job permutations and algorithm parameters
%Initialization
Step 1: Generate the initial solutions randomly, select the best one as the leader, and divide the others equally to form the left and right lines of the V-shape.
%Leader improvement
Step 2: Generate β solutions around the leader. If these new neighbors improve the leader, substitute the leader with the best neighbor; otherwise, keep the leader unchanged. After that, the unused neighbors around the leader are sorted by objective value, and the 2χ best neighbors are divided equally to form the sharing neighbor sets for the left and right lines, respectively.
%Follower improvement
Step 3: Conduct the follower improvement along the lines toward the tails. For example, for a follower in the left (right) line, randomly create β−χ neighbors around it. Then, evaluate these β−χ solutions together with the χ solutions from the sharing neighbor set described in Step 2. If the best neighbor has better fitness than the follower, replace the follower with it. Subsequently, rebuild the left (right) sharing neighbor set with the best χ unused neighbors. Repeat this procedure until all followers have been considered.
%New leader selection
Step 4: After conducting Steps 2 and 3 for γ times, move the leader to the tail of the left or right line, and then select the first follower in this line as the new leader.
Step 5: If the termination criterion is satisfied, output the best solution achieved. Otherwise, return to Step 2.
Output: The achieved best solution.
To tackle the considered problem, this section presents an improved migrating birds optimization (IMBO) with several improvements. The main segments and the procedure of the proposed IMBO are presented in the following sections in detail.

4.2. Encoding and Decoding

The proposed IMBO utilizes the charge permutation for encoding, which corresponds to a charge sequence. Specifically, each element in the charge permutation is a charge. Suppose that there is a charge permutation [2, 5, 3, 6, 4, 7, 1, 8]; the sequence of processing the charges at the first stage is charge 2, 5, 3, 6, 4, 7, 1 and 8.
Although the encoding for the problem under consideration is similar to that utilized in the general HFSP, it is necessary to develop a new decoding procedure to obtain feasible solutions because more features are considered, as described in Section 3.2. Hence, on the basis of the charge permutation, this study develops the following two-level decoding procedure to obtain feasible solutions. This two-level decoding procedure consists of the FIFO-based forward heuristic and the backward heuristic, where FIFO means the first-in, first-out rule.
In the FIFO-based forward heuristic, machine assignment and the charge permutation on each machine are determined by the FIFO rule, which has been widely utilized in the scheduling literature. The FIFO rule gives a higher priority to the charge with the earlier release time, and with the FIFO rule all charges are sequenced in ascending order of release times with time complexity O(n log n). Namely, each charge is assigned to the first available machine at every stage except the last one. At the last stage, all the charges in a batch must be assigned to the same caster. The proposed FIFO-based forward heuristic for the steelmaking and refining stages is presented in Algorithm 2.
Nevertheless, the forward heuristic alone is not enough to ensure the continuity of each batch at the last stage. Additionally, the forward heuristic might lead to large waiting times and may result in a large temperature drop of charges. Hence, this study further puts forward a backward heuristic to adjust the charges from the last stage to the first to (1) ensure that all the charges in a batch are cast continuously and (2) reduce the waiting time between any two successive stages. In contrast to the forward heuristic, the backward heuristic adjusts the timing of each charge in reverse, from the continuous casting stage to the refining stages and finally to the steelmaking stage. The backward heuristic is provided in Algorithm 3. Notice that Algorithms 2 and 3 present the main idea of the FIFO-based forward heuristic and the backward heuristic; the detailed procedures are presented in Appendix A.
Algorithm 2 The FIFO-based forward heuristic
Input: Batch plan and charge permutation π(j)
Step 1: At the steelmaking stage, select each charge based on the given charge permutation and assign the first available machine (LD) to the selected charge.
Step 2: At the refining stage, select each charge in ascending order of completion times at the steelmaking stage and assign the first available machine (LF) to the selected charge.
Step 3: At the continuous casting stage, process the charges in each batch sequentially. % The charges in each batch must be processed together in sequence.
Output: Machine assignment, charge permutation on each machine.
Algorithm 3 The backward heuristic
Input: Machine assignment, charge permutation on each machine.
Step 1: At the last (casting) stage, each charge, except for the last one in the same batch, should be shifted right to eliminate any batch break.
Step 2: At the refining stage, right shift the starting time of each charge on each machine.
Step 3: At the steelmaking stage, right shift the starting time of each charge on each machine.
Output: Production schedule.
To clarify the proposed decoding process, a small-size instance is presented in Table 2 as an example. This instance has three batches and eight charges to be scheduled, and the detailed characteristics of these charges are represented in Table 2. Note that there are two parallel machines (LDs) at the steelmaking stage and two machines (CCs) at the casting stage, respectively. For the refining phase, there are two process routes, LF and LF-RH, where LF-RH means first being operated by LF and later by RH. Additionally, the LF stage has two parallel machines while the RH stage has only one machine.
Supposing that the production permutation of charges is [2, 5, 3, 6, 4, 7, 1, 8], on the basis of the FIFO-based forward heuristic, charge 2 is assigned to LD1 and charge 3 is assigned to LD2, as presented in Figure 3a. After that, the forward heuristic allocates charge 1 to LD1 and charge 5 to LD2, and this procedure continues until all the charges have been assigned. As shown in Figure 3a, charges 2, 1, 6, and 8 are allocated to LD1 and the others are allocated to LD2. Since charge 2 is the first in the permutation and it does not go through RH, both CC1 and CC2 are available after it is released from LF, and here CC1 is selected. Charge 3 also belongs to batch 2, and hence it is also allocated to CC1. When charge 1, the first charge in a new batch, is released from LF1, CC1 is not available. Charge 1 therefore needs to be allocated to CC2 after waiting for a setup time. The detailed schedule obtained by the forward heuristic is depicted in Figure 3a.
At the continuous casting stage, the backward heuristic shifts charges 5, 6 and 7 towards charge 8 to ensure that they are cast continuously without interruption, as all of them belong to batch 3. A similar adjustment is executed for the other batches. Then, the backward heuristic shifts the release times of the charges at the previous two stages in order to reduce their waiting times between two adjacent stages. For example, as presented in Figure 3b, the backward heuristic shifts the release time of charge 1 at the LF process until it equals the start time on CC2. As a result of the backward heuristic, a feasible solution is achieved, as illustrated in Figure 3b.
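To make the first-stage step of the forward heuristic concrete, the following C++ sketch assigns each charge of a permutation to the earliest-available converter under the FIFO rule. It is a minimal illustration only: the container names (perm, procTime), the processing-time values and the two-machine setup are assumptions for this example, and the refining stage, the casting stage and the backward pass of Algorithms 2 and 3 are omitted.
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    // Charge permutation from Section 4.2 and assumed first-stage processing times (index = charge id).
    std::vector<int> perm = {2, 5, 3, 6, 4, 7, 1, 8};
    std::vector<double> procTime = {0, 75, 72, 78, 70, 74, 76, 73, 71};
    const int numMachines = 2;                         // two parallel LD converters

    std::vector<double> machineFree(numMachines, 0.0); // earliest available time of each LD
    std::vector<double> start(procTime.size(), 0.0);   // start time of each charge at stage 1

    for (int charge : perm) {
        // FIFO rule: pick the machine that becomes available first.
        int m = std::min_element(machineFree.begin(), machineFree.end()) - machineFree.begin();
        start[charge] = machineFree[m];                // all charges are released at time zero
        machineFree[m] = start[charge] + procTime[charge];
        std::cout << "charge " << charge << " -> LD" << (m + 1)
                  << " starts at " << start[charge] << "\n";
    }
    return 0;
}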

4.3. Population Initialization

To achieve high-quality charge permutation, this study utilizes the heuristic initialization as presented in Algorithm 4.
Algorithm 4 Procedure of the heuristic initialization
Input: A random batch sequence.
Step 1: Generate a batch sequence randomly and arrange the charges in each batch in ascending order.
Step 2: Add the first charge of each batch to set π1[j] according to the batch sequence.
Step 3: Select a charge from π1[j] randomly, add this charge to the charge permutation π[j] and then delete the selected charge from π1[j].
Step 4: If there are assignable charges in the batch to which the deleted charge belongs, select the first of the remaining charges in this batch and add it to set π1[j].
Step 5: If set π1[j] is empty, output the achieved charge permutation π[j]; otherwise, return to Step 3.
Output: The charge permutation π[j].
Taking the small-size instance in Section 4.2 as an example, Table 3 exhibits the detailed procedure of generating a feasible permutation by the heuristic initialization. Supposing that a random batch sequence is [2, 3, 1], the resulting charge permutation is [2, 5, 3, 6, 4, 7, 1, 8].
To obtain a diverse swarm, part of the initial individuals are generated using the above heuristic initialization and the others are generated randomly. After obtaining a swarm of individuals, the best individual is selected as the leader and the other birds are placed in a hypothetical ‘V’ formation randomly.
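A compact C++ sketch of the heuristic initialization of Algorithm 4 is given below. The batch membership follows the small example above (batch 1 = {1, 8}, batch 2 = {2, 3, 4}, batch 3 = {5, 6, 7}); this membership, the fixed random seed and the variable names are illustrative assumptions rather than part of the original algorithm description.
#include <iostream>
#include <map>
#include <random>
#include <utility>
#include <vector>

int main() {
    // Batch id -> charges of that batch in their predefined order (assumed from Table 2).
    std::map<int, std::vector<int>> batches = {{1, {1, 8}}, {2, {2, 3, 4}}, {3, {5, 6, 7}}};
    std::vector<int> batchSeq = {2, 3, 1};               // random batch sequence (Step 1)

    // Candidate set pi1: (batch id, index of the next unscheduled charge of that batch).
    std::vector<std::pair<int, int>> candidates;
    for (int b : batchSeq) candidates.push_back({b, 0}); // first charge of each batch (Step 2)

    std::mt19937 rng(42);
    std::vector<int> perm;                               // resulting charge permutation pi
    while (!candidates.empty()) {                        // Steps 3-5
        std::uniform_int_distribution<int> pick(0, (int)candidates.size() - 1);
        int idx = pick(rng);
        int b = candidates[idx].first, pos = candidates[idx].second;
        perm.push_back(batches[b][pos]);                 // append the selected charge
        if (pos + 1 < (int)batches[b].size())
            candidates[idx] = {b, pos + 1};              // Step 4: next charge of the same batch
        else
            candidates.erase(candidates.begin() + idx);  // batch exhausted
    }
    for (int c : perm) std::cout << c << ' ';
    std::cout << '\n';
    return 0;
}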

4.4. Neighborhood Structures

The proposed IMBO algorithm generates β neighbor solutions around the leader and β−χ neighbor solutions around each follower. This study applies two neighborhood operators: insert and swap. The insert operator removes a charge and reinserts it at another, randomly chosen position. The swap operator selects two charges randomly and exchanges their positions.
The utilization of two neighborhood structures aims at enhancing the exploration capacity. Since two neighbor operators are involved, this study selects one neighborhood operator randomly to modify the individual. In other words, this study first generates a random number within [0, 1]. If this number is less than 0.5, the insert operator is utilized; otherwise, the swap operator is applied.
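The two operators can be sketched in C++ as follows; the function names and the use of std::mt19937 are assumptions made for illustration, and the permutation is assumed to contain at least two charges.
#include <algorithm>
#include <random>
#include <vector>

// Insert operator: remove one charge and reinsert it at a different, randomly chosen position.
void insertMove(std::vector<int>& perm, std::mt19937& rng) {
    std::uniform_int_distribution<int> d(0, (int)perm.size() - 1);
    int from = d(rng), to = d(rng);
    while (to == from) to = d(rng);
    int charge = perm[from];
    perm.erase(perm.begin() + from);
    perm.insert(perm.begin() + to, charge);
}

// Swap operator: exchange the positions of two randomly selected charges.
void swapMove(std::vector<int>& perm, std::mt19937& rng) {
    std::uniform_int_distribution<int> d(0, (int)perm.size() - 1);
    int a = d(rng), b = d(rng);
    while (b == a) b = d(rng);
    std::swap(perm[a], perm[b]);
}

// Section 4.4: draw a number in [0, 1] and apply the insert operator if it is below 0.5, the swap operator otherwise.
void randomNeighbor(std::vector<int>& perm, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    if (u(rng) < 0.5) insertMove(perm, rng);
    else swapMove(perm, rng);
}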

4.5. New Acceptance Criterion

In the original MBO, a bird is replaced with its best neighbor solution under a greedy acceptance criterion. Nevertheless, this acceptance criterion might lead to a situation where the incumbent bird remains unchanged for many iterations and the algorithm becomes trapped in local optima. Hence, this study develops a new probabilistic acceptance criterion based on the simulated annealing algorithm. If the best neighbor outperforms the incumbent bird, the incumbent bird is directly replaced; otherwise, the incumbent bird is replaced with the best neighbor with probability $e^{-\Delta/temp}$, where temp is calculated using expression (20). Clearly, this new acceptance criterion accepts a worse solution with a certain probability to ensure the diversity of the population and help the algorithm to escape from local optima.
$temp = \dfrac{T \times \sum_{m}\sum_{j} PT_{j,m}}{L \times n \times P}$ (20)
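A minimal C++ sketch of this acceptance rule is shown below. The value of temp would come from expression (20); here it is simply passed in as a parameter, and the function name and the assumption of a minimization objective (makespan) are illustrative.
#include <cmath>
#include <random>

// Simulated annealing-based acceptance criterion of Section 4.5: an improving
// neighbor always replaces the incumbent; a worse one is accepted with
// probability exp(-delta / temp), where delta is the fitness deterioration.
bool acceptNeighbor(double incumbentCost, double neighborCost, double temp, std::mt19937& rng) {
    if (neighborCost <= incumbentCost) return true;   // improvement (or tie): accept directly
    double delta = neighborCost - incumbentCost;
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng) < std::exp(-delta / temp);          // worse neighbor: probabilistic acceptance
}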

4.6. Competitive Mechanism

In the basic MBO, the birds are placed on the hypothetical V-shaped formation arbitrarily, so some promising birds may be placed at the tail of the flock and have few opportunities to share their neighbors. Hence, this study applies the competitive mechanism proposed by Zhang, et al. [32] to remedy this possible drawback. In the competitive mechanism, the positions of the birds in each line are adjusted as follows. The bird with the best fitness is placed first in the corresponding line, the bird with the second-best fitness is placed in the second position, and so on, with the bird with the worst fitness placed in the last position. Clearly, this competitive mechanism guarantees that promising birds are placed at the front of the lines and have more opportunities to share their neighbors. In this study, this mechanism is executed after the improvement of the leader bird to adjust the positions of the following birds in the left and right lines.
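The competitive mechanism amounts to sorting the followers of each line by fitness, as in the C++ sketch below; the Bird structure and the function name are assumptions made for illustration only.
#include <algorithm>
#include <vector>

struct Bird {
    std::vector<int> perm; // charge permutation
    double fitness;        // makespan of the decoded schedule (smaller is better)
};

// Section 4.6: within each line of the V formation, reorder the birds so that
// those with better fitness sit closer to the leader and therefore get more
// opportunities to share their neighbors.
void applyCompetitiveMechanism(std::vector<Bird>& leftLine, std::vector<Bird>& rightLine) {
    auto byFitness = [](const Bird& a, const Bird& b) { return a.fitness < b.fitness; };
    std::sort(leftLine.begin(), leftLine.end(), byFitness);
    std::sort(rightLine.begin(), rightLine.end(), byFitness);
}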

4.7. Main Procedure of the IMBO

On the basis of the components of the IMBO described above, the main procedure of the proposed IMBO is presented as follows in Algorithm 5. Here, the proposed IMBO has three main improvements: (1) the new neighborhood structures are utilized to improve the leader bird and following birds to intensify exploration; (2) a new acceptance criterion is developed to enhance the diversity of the population and help the algorithm to escape from being trapped into local optima; (3) the competitive mechanism is developed to emphasize exploitation capacity by searching around the most promising solution space. The improvements will be tested in Section 5.3.
Algorithm 5 Procedure of the improved migrating birds optimization
Input: Charge permutations and algorithm parameters
%Initialization
Step 1: Initialize α birds and place them in a hypothetical V formation.
%Leader improvement
Step 2: Generate β neighborhood solutions around the leader and try to improve the leader bird with these neighborhood solutions. If the new neighbors bring an improvement, replace the leader with the best neighbor; otherwise, preserve the current leader. After that, the unused neighbors are sorted by objective value, and the best 2χ neighbors are selected and divided to form the left and right sharing neighbor sets, respectively.
%Follower improvement
Step 3: For each following bird x, do
3.1 Generate β−χ neighbors according to the neighborhood structures described in Section 4.4;
3.2 Form the neighborhood Λ(x) with the β−χ neighbors around the following bird and the χ solutions from the corresponding sharing neighbor set;
3.3 If the best neighbor solution x′ in Λ(x) is better than x, then
x = x′;
Else if random < $e^{-\Delta/temp}$, then
x = x′;
3.4 Seek out the best χ unused neighbors to rebuild the left (right) sharing neighbor set.
%New leader selection
Step 4: After conducting Steps 2 and 3 for γ times, update the best bird and let it become the new leader.
%Competitive mechanism
Step 5: Adjust the positions of the birds in the left and right lines, where the promising birds are placed at the front of the line to have more opportunities to share their neighbors.
Step 6: If the termination criterion is satisfied, output the best solution achieved. Otherwise, return to Step 2.
Output: The achieved best solution.
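For readers who prefer code to pseudocode, the C++ skeleton below shows how the pieces of Algorithm 5 fit together. It is a simplified sketch, not the authors' implementation: the objective function is a placeholder standing in for the two-level decoding of Section 4.2, the sharing of neighbors between consecutive birds is kept implicit, and the parameter values and helper names are assumptions.
#include <algorithm>
#include <cmath>
#include <numeric>
#include <random>
#include <vector>

struct Bird { std::vector<int> perm; double fit; };

std::mt19937 rng(1);

// Placeholder objective: stands in for the makespan of the decoded schedule.
double evaluate(const std::vector<int>& p) {
    double f = 0.0;
    for (size_t i = 0; i < p.size(); ++i) f += (i + 1.0) * p[i];
    return f;
}

// One insert or swap move (Section 4.4).
std::vector<int> neighbor(std::vector<int> p) {
    std::uniform_int_distribution<int> d(0, (int)p.size() - 1);
    int a = d(rng), b = d(rng);
    if (d(rng) % 2) { std::swap(p[a], p[b]); }
    else { int c = p[a]; p.erase(p.begin() + a); p.insert(p.begin() + b, c); }
    return p;
}

// Simulated annealing-based acceptance criterion (Section 4.5).
bool accept(double cur, double cand, double temp) {
    if (cand <= cur) return true;
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng) < std::exp(-(cand - cur) / temp);
}

int main() {
    const int alpha = 11, beta = 12, chi = 6, gamma = 100, iterations = 20;
    const double temp = 50.0;  // would normally come from expression (20); a stand-in value here
    const int n = 8;           // number of charges in a toy instance

    // Random initialization (the heuristic initialization of Section 4.3 is omitted here).
    std::vector<Bird> flock(alpha);
    for (auto& b : flock) {
        b.perm.resize(n);
        std::iota(b.perm.begin(), b.perm.end(), 1);
        std::shuffle(b.perm.begin(), b.perm.end(), rng);
        b.fit = evaluate(b.perm);
    }
    auto better = [](const Bird& a, const Bird& b) { return a.fit < b.fit; };
    Bird best = *std::min_element(flock.begin(), flock.end(), better);

    for (int it = 0; it < iterations; ++it) {
        for (int tour = 0; tour < gamma; ++tour) {
            // Leader improvement (bird 0) followed by follower improvement.
            for (size_t i = 0; i < flock.size(); ++i) {
                int numNeighbors = (i == 0) ? beta : beta - chi;
                Bird cand = flock[i];
                for (int j = 0; j < numNeighbors; ++j) {
                    std::vector<int> q = neighbor(flock[i].perm);
                    double fq = evaluate(q);
                    if (fq < cand.fit) { cand.perm = q; cand.fit = fq; }
                }
                if (accept(flock[i].fit, cand.fit, temp)) flock[i] = cand;
                if (flock[i].fit < best.fit) best = flock[i];
            }
        }
        // New leader selection and competitive mechanism: the best birds move to the front.
        std::sort(flock.begin(), flock.end(), better);
    }
    return 0;
}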

5. Computational Study

This section first presents the experimental design in Section 5.1, later describes the calibration of the parameters in Section 5.2. Subsequently, the performance of the formulated model and proposed IMBO is tested in Section 5.3 and Section 5.4.

5.1. Experimental Design

This section describes the computational experiments employed to evaluate the performance of the proposed algorithm. Firstly, the proposed IMBO is compared with the Cplex solver and the original migrating birds optimization algorithm (MBO). To further evaluate the performance of the proposed IMBO, several other well-known algorithms are selected for comparison, including the genetic algorithm (GA) [38], the hybrid genetic algorithm of Tseng and Lin (GAR) [39], the memetic algorithm (MA) [40], particle swarm optimization (PSO) [41], simulated annealing (SA) [42], the discrete teaching-learning-based optimization algorithm (DTLBO) [43], and the discrete artificial bee colony algorithm (DABC) [26]. The implemented algorithms are presented in Table 4. These algorithms are selected because they have solved problems that are, to some extent, similar to the problem under consideration. Among them, GA, PSO and SA are well-known algorithms, while GAR, DTLBO and DABC are variants of GA, TLBO and ABC that have produced competitive performance. All these algorithms are programmed in this study based on the cited papers. All the tested algorithms utilize the same encoding and decoding presented in Section 4.2 and the same neighborhood structures in Section 4.4, as highlighted in Table 4. In addition, the parameters of all the tested algorithms are calibrated using the multi-factor analysis of variance (ANOVA) in Section 5.2.
On the basis of the actual production process in steelmaking plant, this study generates a set of 40 instances as follows.
(1)
As there are two forms in refining phase (LF or first LF and later RH) as presented in Figure 1 in real applications, the number of stages in the principal processes of primary SCC varies among two levels: 3, 4. Additionally, the numbers of LD, LF, RH and CC are set as 2, 2, 1 and 2, respectively.
(2)
The number of charges in each batch varies among five levels: 1, 2, 3, 4 and 5. Additionally, the number of batches and charges is generated uniformly within [1, 34] and [1, 114] respectively.
(3)
The processing times PTi,j of the steelmaking, refining and casting phases are generated uniformly within [70, 80], [40, 50] and [55, 70], respectively. Note that the transportation time is included in the processing time in this study [7,26].
(4)
As a relatively long setup time is needed for production preparation, the setup time is set to 60 in this study.
The generated instances are described in Table 5. The detailed instances are not exhibited for space reasons, but they are available upon request.
To evaluate the performance of the IMBO method on different instances, the relative percentage deviation (RPD) is utilized to measure the achieved results using expression (21). Here, $CM_{Some}$ is the makespan achieved by one algorithm and $CM_{Best}$ is the best makespan obtained by all algorithms for the same instance.
$RPD = \dfrac{CM_{Some} - CM_{Best}}{CM_{Best}} \times 100$ (21)
Clearly, the algorithm with the smaller RPD value has the better performance. All the algorithms solve each instance 10 times, and two indicators are employed here: RPD-Avg and RPD-Min, where RPD-Avg is the average of the RPD values for each instance and RPD-Min is the minimum of the RPD values for each instance over the 10 repetitions. The tested problem is solved using the Cplex solver of the General Algebraic Modeling System 23.0. The proposed IMBO and all the other tested algorithms are implemented in the C++ programming language and run on a personal computer equipped with an Intel (R) Core (TM) i5 2.80 GHz CPU and 2.00 GB RAM.
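For reference, expression (21) is a one-line computation; the helper below is an assumed convenience function, and the three makespans in the comment are made-up numbers used only to illustrate the arithmetic.
// Relative percentage deviation (expression (21)): how far a makespan is from the best
// makespan obtained by any algorithm on the same instance, in percent.
double rpd(double someMakespan, double bestMakespan) {
    return (someMakespan - bestMakespan) / bestMakespan * 100.0;
}

// Example: makespans 1040, 1000 and 1015 on the same instance give RPD values of
// 4.0, 0.0 and 1.5, respectively.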

5.2. Calibration of Algorithmic Parameters

Since parameters have a significant effect on the performance of algorithms, this section presents the calibration of the parameters of the implemented algorithms. Here, this study utilizes the Taguchi method as the design of experiment (DOE) to decide the desirable values of the parameters. For the IMBO, there are five parameters (or controlled factors): the number of initial solutions (α), the number of neighbor solutions (β), the number of neighbor solutions to be shared (χ), the number of tours (γ) and the initial temperature (τ). This study utilizes an L16(4^5) orthogonal array to arrange the experiments, giving 16 experiment combinations to analyze the instance of case 40. Each experiment combination is run 10 times and the minimum relative percentage deviation (minimum RPD) is obtained according to Equation (21). Then, the average of the 10 minimum RPD values for each combination is regarded as the response variable. The levels of these parameters and the corresponding response variable values of the combinations are presented in Table 6.
After obtaining the response variable values, the multi-factor analysis of variance (ANOVA), a powerful parametric statistical technique, is carried out after checking the normality, homoscedasticity, and independence of the residuals. The detailed ANOVA results are presented in Table 7. Here, the parameter χ has the largest F-ratio of 7.75, indicating that this parameter, the number of neighbor solutions to be shared, has the greatest influence on the proposed algorithm. This also shows that the sharing mechanism gives the MBO better competitive performance by searching around the most promising solution space. Similarly, the initial temperature (τ), which is related to the simulated annealing-based acceptance criterion that ensures the diversity of the population, has the second greatest influence on the proposed algorithm. In addition, if the parameters are ranked in decreasing order of their F-ratio values, the sequence is χ, τ, α, γ and β, where the former parameters have the greater impacts.
The main effects plot of the signal-to-noise ratio (SNR) for the algorithm parameters is illustrated in Figure 4. As seen in this figure, the best combination of parameters is: α = 11, β = 12, χ = 6, γ = 150, and τ = 0.4. The selected parameter values of the proposed IMBO are also presented in Table 8.
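The response used in Figure 4 is a signal-to-noise ratio; for a minimization response such as RPD, the smaller-the-better SNR of the Taguchi method is typically computed as in the sketch below. Whether the authors used exactly this variant is not stated in the text, so this helper is an assumption for illustration.
#include <cmath>
#include <vector>

// Smaller-the-better signal-to-noise ratio: SNR = -10 * log10(mean of squared responses).
double snrSmallerIsBetter(const std::vector<double>& responses) {
    double meanSquare = 0.0;
    for (double y : responses) meanSquare += y * y;
    meanSquare /= responses.size();
    return -10.0 * std::log10(meanSquare);
}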
With the best parameter combination determined (α = 11, β = 12, χ = 6, γ = 100, and τ = 0.4), Figure 5 further illustrates the evolution of the IMBO for different instances; note that the average of the 10 objective values of Cmax at each iteration is regarded as the response variable. As can be seen, there is a clear decreasing trend, which shows the convergence of the proposed IMBO.
In addition, the parameters of other implemented algorithms are also calibrated utilizing the same method above. For space reasons, the detailed calibration results are omitted and Table 9 presents the selected best combinations of these algorithmic parameters.

5.3. Performance Evaluation of the Proposed IMBO

To test the performance of the proposed IMBO, the small-size instances are solved with the mathematical model in Section 3.2 using the Cplex solver, and the achieved results are compared with those obtained by the MBO and the proposed IMBO. For each instance, the Cplex solver terminates when the optimal solution is obtained or the computational time reaches 300 seconds (s), and the termination criterion of the MBO and IMBO is an elapsed computation time of L × J × ρ milliseconds (ρ = 10). As the MBO and IMBO are stochastic methods due to the randomness involved in their nature, each test problem is solved 10 times to evaluate the robustness of the algorithms. Table 10 reports the detailed computational results for the tested instances obtained by Cplex and the algorithms. In this table, column 2 presents the problem size L × J (batches × charges). Columns 3 to 10 report the results obtained by the Cplex solver, MBO and IMBO. Here, Best means the best of the RPD values for each instance, Worst means the worst of the RPD values for each instance, and Avg means the average of the RPD values for each instance over the 10 repetitions. Note that a smaller RPD value denotes better performance.
It is observed from Table 10 that all methods obtain the optimal solutions when the problem size (batches × charges) is 3 × 8 or 5 × 18. As the problem size becomes larger, the solutions achieved by the MBO and IMBO are significantly better than those obtained by Cplex. It is also clear that the RPD values achieved by the proposed IMBO are smaller than those of the MBO for the large-size instances. This finding suggests that the IMBO outperforms the MBO and demonstrates the efficiency of the proposed improvements. In summary, this comparative study demonstrates that the proposed IMBO outperforms Cplex and the MBO, and that the improvements enhance the IMBO by a significant margin.

5.4. Comparative Study Among Algorithms

To further evaluate the performance of the proposed IMBO, the results obtained by the IMBO algorithm are compared with those of seven other meta-heuristics. Again, the termination criterion of an elapsed computation time of L × J × ρ milliseconds is utilized, where L and J denote the number of batches and the number of charges, respectively. To observe the performance of the algorithms under different computation times, ρ is set to 10 and 20. All the algorithms solve all 40 tested instances 10 times, yielding a total of 6400 data points. Table 11 and Table 12 present the detailed results of these algorithms for ρ = 10 and ρ = 20, where Min and Avg are the minimum and the average of the RPD values over the 10 repetitions.
From Table 11, it is observed that the proposed IMBO outperforms GA, GAR, MA, PSO, SA, DTLBO and DABC for most of the instances. Specifically, all the algorithms obtain the current best solutions for the small-size instances. For the medium-size instances from case 11 to case 20, all the results of the IMBO are better than or equal to those of the other compared algorithms in terms of minimum RPD and average RPD. For the large-size instances, all the results of the IMBO are better than those of GA, GAR, MA, PSO, SA, and DTLBO. Specifically, the IMBO outperforms the compared methods in terms of minimum RPD for all 40 instances and outperforms them for 34 out of 40 instances in terms of average RPD. This indicates the superiority of the IMBO over these compared algorithms. From Table 12, similar findings can be obtained. Clearly, the proposed IMBO outperforms the other algorithms on almost all the tested instances in terms of minimum RPD and average RPD.
To evaluate the performance of the algorithms statistically, this study also conducts an ANOVA test to check whether the IMBO outperforms the other algorithms statistically. Here, the algorithm type is the controlled factor and the average RPD is the response variable. The detailed ANOVA results are not exhibited due to space limits; instead, this section depicts the means plots and 95% confidence intervals of the implemented algorithms in Figure 6 (ρ = 10) and Figure 7 (ρ = 20). From these figures, we can see clearly that the proposed IMBO outperforms the other algorithms in terms of the minimum RPD and average RPD under both termination criteria. The statistical analysis shows that the improved migrating birds optimization algorithm is more efficient for solving the SCC scheduling problem.
In sum, the results demonstrate the superiority of the proposed IMBO over the compared algorithms. The superiority of the IMBO is attributed to the optimization mechanism of the MBO and the developed improvements. Namely, the proposed IMBO utilizes two neighborhood structures to intensify exploration, the simulated annealing-based acceptance criterion to ensure the diversity of the population, and the competitive mechanism to emphasize the exploitation capacity by searching around the most promising solution space. In summary, the computational studies demonstrate that the improvements enhance the IMBO by a significant margin and that the proposed IMBO is a powerful algorithm for the considered SCC scheduling problem, outperforming all the compared algorithms.

6. Conclusions and Future Research

This study solves the realistic hybrid flow shop scheduling problem arising from the steelmaking and continuous-casting (SCC) process, namely the SCC scheduling problem with diverse products. To tackle this NP-hard problem, this study formulates a mixed-integer linear programming model to minimize the makespan and presents an improved migrating birds optimization algorithm (IMBO). The proposed IMBO utilizes two neighborhood structures to intensify exploration, develops a simulated annealing-based acceptance criterion to enhance the diversity of the population, and employs a competitive mechanism to emphasize exploitation capacity. All these improvements help the IMBO algorithm to achieve the proper balance between exploration and exploitation. To evaluate the performance of the proposed algorithm, this study generates a set of 40 instances based on realistic production process in SCC plants. Experimental results demonstrate that the proposed IMBO is capable of achieving the optimal solutions for small-size instances, and outperforms the original migrating birds optimization algorithm and seven other reimplemented algorithms.
Future research might extend the considered problem by taking into account processing time variation or machine breakdowns. In addition, since transportation is often limited by the carriers, the ladle pot and the travelling crane, resulting in complex uncertainties in the scheduling process, considering the integrated problem of production scheduling and transportation will effectively improve production continuity and reduce waiting-time-related costs. It is also suggested to consider the energy consumption or carbon emissions in steel plants. Meanwhile, since the developed IMBO produces competitive performance, this algorithm can be extended to solve other scheduling problems.

Author Contributions

Conceptualization, D.H. and Q.T.; methodology, D.H. and Z.L.; software, D.H. and Z.Z.; validation, Q.T. and Z.L.; formal analysis, D.H.; investigation, D.H. and Z.Z.; resources, Q.T. and Z.L.; data curation, D.H.; writing—original draft preparation, D.H.; writing—review and editing, D.H.; visualization, D.H.; supervision, Z.L.; project administration, Q.T. and Z.L.; funding acquisition, Q.T. and Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 51875421 and Grant 61803287.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A. Detailed Two-Level Decoding Procedure

Algorithm A1 Detailed procedure of the FIFO-based forward heuristic
Input: Batch plan, charge permutation π(j) and the charge at position k in the permutation, $\pi_k(j)$.
Step 1: Set the release time of each machine and each charge, $r_{i,m}$ and $r_j$, to zero.
%Forward heuristic for the first stage
Step 2: For the first stage
While ($k \leq J$) do
2.1 Select charge $\pi_k(j)$ sequentially according to the charge permutation π(j);
2.2 Assign charge $\pi_k(j)$ to the earliest available machine m, where $r_{1,m} = \min_{m \in M_1} r_{1,m}$;
2.3 Schedule charge $\pi_k(j)$ on the assigned machine, set $ST_{1,\pi_k(j)} = \max\{r_{1,m}, r_{\pi_k(j)}\}$ and update $r_{1,m} = ST_{1,\pi_k(j)} + PT_{1,\pi_k(j)}$.
Endwhile
Endfor
%Forward heuristic for the refining stages
Step 3: For the next stages except the last stage ($i = 2, 3, \ldots, s-1$)
If (the charge completes its task at the previous stage and is transferred to the current stage)
3.1 Select the first available machine (marked as refining machine m), or randomly select one if more than one machine is available simultaneously, where $r_{i,m} = \min_{m \in M_i} r_{i,m}$;
3.2 Set $ST_{i,\pi_k(j)} = \max\{r_{i,m}, ST_{i-1,\pi_k(j)} + PT_{i-1,\pi_k(j)}\}$ and update $r_{i,m} = ST_{i,\pi_k(j)} + PT_{i,\pi_k(j)}$.
Endif
Endfor
%Forward heuristic for the last stage
Step 4: For the last stage
While ($l \leq L$) do
4.1 Select the earliest available caster for batch l, $m = \arg\min_{m \in M_s} \{r_{s,m}\}$;
4.2 Compute the casting start time of batch l, $Tsi_{s,LJ(l,ljf(l,j))} = \max\{Tf_{s-1,LJ(l,ljf(l,j))}, r_{s,m} + Su\}$, where $r_{s,m}$ denotes the earliest available time of caster m on which batch l is processed;
4.3 Set $r = ljf(l,j) + 1$;
4.4 While ($r \leq lje(l,j)$) do
4.4.1 Compute the casting start time of the remaining charges, $Tsi_{s,LJ(l,r)} = \max\{Tsi_{s,LJ(l,r-1)} + PT_{s,LJ(l,r-1)}, Tf_{s-1,LJ(l,r)}\}$;
4.4.2 Set $r = r + 1$.
Endwhile
4.5 Set $l = l + 1$.
Endwhile
Endfor
Output: Machine assignment, charge permutation on each machine.
Algorithm A2 Detailed procedure of the backward heuristic
Input: Machine assignment, charge permutation on each machine
Step 1: Sort all the charges in increasing order of finish time at each stage and generate a charge list $\lambda_m$ on each machine, where $|\lambda_m|$ denotes the total number of charges on machine m;
%Backward heuristic for the last stage
Step 2: While ($l \leq L$) do
2.1 Calculate the start time of the last charge in batch l, $Tsi_{s,LJ(l,lje(l,j))} = \max\{Tsi_{s,LJ(l,lje(l,j)-1)} + PT_{s,LJ(l,lje(l,j)-1)}, Tf_{s-1,LJ(l,lje(l,j))}\}$, where $lje(l,j)$ denotes the last charge in batch l processed on caster m.
2.2 Set $r = lje(l,j) - 1$;
2.3 While ($r \geq ljf(l,j)$) do
2.3.1 Adjust the start time of each charge so that the charges in a batch are processed continuously, $Tsi_{s,LJ(l,r)} = Tsi_{s,LJ(l,r+1)} - PT_{LJ(l,r),m}$, and update $r_{s,m} = Tsi_{s,LJ(l,r)} + PT_{s,LJ(l,r)}$;
2.3.2 Set $r = r - 1$.
Endwhile
2.4 Set $l = l + 1$.
Endwhile
%Backward heuristic for the refining stages
Step 3: For the refining stages ($i = 2, 3, \ldots, s-1$)
Set $i = s$;
While ($i > 2$) do
3.1 Adjust the start time of the last charge on each machine m by the right-shift method, $Tsi_{i-1,LJ(l,|\lambda_m|)} = Tsi_{i,LJ(l,|\lambda_m|)} - PT_{i,LJ(l,|\lambda_m|)}$;
3.2 Update $r_{i-1,m} = Tsi_{i-1,LJ(l,|\lambda_m|)}$;
3.3 Set $j = |\lambda_m| - 1$;
3.4 While ($j \geq 1$) do
3.4.1 Adjust the start time of the remaining charges on each machine m, $Tsi_{i-1,j} = \min\{r_{i-1,m}, Tsi_{i,j}\} - PT_{i,j}$ and update $r_{i-1,m} = Tsi_{i-1,j}$;
3.4.2 Set $j = j - 1$;
Endwhile
3.5 Set $i = i - 1$;
Endwhile
Endfor
%Backward heuristic for the first stage
Step 4: For the first stage ($i = 1$)
4.1 Adjust the start time of the last charge processed on machine m by the right-shift method, set $Tsi_{1,LJ(l,|\lambda_m|)} = Tsi_{2,LJ(l,|\lambda_m|)} - PT_{1,LJ(l,|\lambda_m|)}$;
4.2 Update $r_{1,m} = Tsi_{1,LJ(l,|\lambda_m|)}$;
4.3 Set $j = |\lambda_m| - 1$;
4.4 While ($j \geq 1$) do
4.4.1 Adjust the start time of the remaining charges, $Tsi_{1,j} = \min\{r_{1,m}, Tsi_{2,j}\} - PT_{1,j}$ and update $r_{1,m} = Tsi_{1,j}$;
4.4.2 Set $j = j - 1$;
Endwhile
Endfor
Output: Production schedule

References

1. Rahman, H.F.; Sarker, R.; Essam, D. Multiple-order permutation flow shop scheduling under process interruptions. Int. J. Adv. Manuf. Technol. 2018, 97, 2781–2808.
2. Zhao, F.; Xue, F.; Zhang, Y.; Ma, W.; Zhang, C.; Song, H. A discrete gravitational search algorithm for the blocking flow shop problem with total flow time minimization. Appl. Intell. 2019, 49, 3362–3382.
3. Jiang, T.; Zhang, C.; Zhu, H.; Gu, J.; Deng, G. Energy-Efficient Scheduling for a Job Shop Using an Improved Whale Optimization Algorithm. Mathematics 2018, 6, 220.
4. Yang, J.P.; Wang, B.L.; Liu, Q.; Guan, M.; Li, T.K.; Gao, S.; Wei, D.G.; Liu, Q. Scheduling Model for the Practical Steelmaking-continuous Casting Production and Heuristic Algorithm Based on the Optimization of "Furnace-caster Matching" Mode. ISIJ Int. 2020, 60, 1213–1224.
5. Jiang, S.-L.; Zheng, Z.; Liu, M. A preference-inspired multi-objective soft scheduling algorithm for the practical steelmaking-continuous casting production. Comput. Ind. Eng. 2018, 115, 582–594.
6. Xu, Z.; Zheng, Z.; Gao, X. Energy-efficient steelmaking-continuous casting scheduling problem with temperature constraints and its solution using a multi-objective hybrid genetic algorithm with local search. Appl. Soft Comput. 2020, 95, 106554.
7. Pan, Q.K. An effective co-evolutionary artificial bee colony algorithm for steelmaking-continuous casting scheduling. Eur. J. Oper. Res. 2016, 250, 702–714.
8. Ruiz, R.; Vázquez-Rodríguez, J.A. The hybrid flow shop scheduling problem. Eur. J. Oper. Res. 2010, 205, 1–18.
9. Guirchoun, S.; Martineau, P.; Billaut, J.-C. Total completion time minimization in a computer system with a server and two parallel processors. Comput. Oper. Res. 2005, 32, 599–611.
10. Oz, D. An improvement on the Migrating Birds Optimization with a problem-specific neighboring function for the multi-objective task allocation problem. Expert Syst. Appl. 2017, 67, 304–311.
11. Meng, T.; Pan, Q.K.; Li, J.Q.; Sang, H.Y. An improved migrating birds optimization for an integrated lot-streaming flow shop scheduling problem. Swarm Evol. Comput. 2018, 38, 64–78.
12. Ferraro, A.; Rossit, D.; Toncovich, A.; Frutos, M. Lot Streaming Flow Shop with a Heterogeneous Machine. Eng. Manag. J. 2019, 31, 113–126.
13. Gmys, J.; Mezmaz, M.; Melab, N.; Tuyttens, D. A computationally efficient Branch-and-Bound algorithm for the permutation flow-shop scheduling problem. Eur. J. Oper. Res. 2020, 284, 814–833.
14. Hakeem-Ur-Rehman; Wan, G.; Zhan, Y. Multi-level, multi-stage lot-sizing and scheduling in the flexible flow shop with demand information updating. Int. Trans. Oper. Res. 2019, 1–27.
15. Sun, L.; Jin, H.; Yu, Y.; Li, Z.; Xi, J. Research on Steelmaking-Continuous Casting Production Scheduling Problem Based on Augmented Lagrangian Relaxation Algorithm under Multi-Coupling Constraints. IFAC-PapersOnLine 2019, 52, 820–825.
16. Cui, H.J.; Luo, X.C.; Wang, Y. Scheduling of steelmaking-continuous casting process using deflected surrogate Lagrangian relaxation approach and DC algorithm. Comput. Ind. Eng. 2020, 140, 106271.
17. Kim, S.-i.; Han, J.; Lee, Y.; Park, E. Decomposition based heuristic algorithm for lot-sizing and scheduling problem treating time horizon as a continuum. Comput. Oper. Res. 2010, 37, 302–314.
18. Slotnick, S.A. Optimal and heuristic lead-time quotation for an integrated steel mill with a minimum batch size. Eur. J. Oper. Res. 2011, 210, 527–536.
19. Bellabdaoui, A.; Teghem, J. A mixed-integer linear programming model for the continuous casting planning. Int. J. Prod. Econ. 2006, 104, 260–270.
20. Harjunkoski, I.; Grossmann, I.E. A decomposition approach for the scheduling of a steel plant production. Comput. Chem. Eng. 2001, 25, 1647–1660.
21. Tang, L.; Liu, J.; Rong, A.; Yang, Z. A mathematical programming model for scheduling steelmaking-continuous casting production. Eur. J. Oper. Res. 2000, 120, 423–435.
22. Peng, J.G.; Liu, M.Z.; Zhang, X.; Ling, L. Hybrid heuristic algorithm for multi-objective scheduling problem. J. Syst. Eng. Electron. 2019, 30, 327–342.
23. Hauber, W. A scheduling system for the steelmaking-continuous casting process. A case study from the steel-making industry. Int. J. Prod. Res. 2009, 47, 4147–4172.
24. Buyukozkan, K.; Kucukkoc, I.; Satoglu, S.I.; Zhang, D.Z. Lexicographic bottleneck mixed-model assembly line balancing problem: Artificial bee colony and tabu search approaches with optimised parameters. Expert Syst. Appl. 2016, 50, 151–166.
25. Meng, T.; Pan, Q.-K.; Sang, H.-Y. A hybrid artificial bee colony algorithm for a flexible job shop scheduling problem with overlapping in operations. Int. J. Prod. Res. 2018, 56, 5278–5292.
26. Peng, K.K.; Pan, Q.K.; Zhang, B. An improved artificial bee colony algorithm for steelmaking-refining-continuous casting scheduling problem. Chin. J. Chem. Eng. 2018, 26, 1727–1735.
27. Atighehchian, A.; Bijari, M.; Tarkesh, H. A novel hybrid algorithm for scheduling steel-making continuous casting production. Comput. Oper. Res. 2009, 36, 2450–2461.
28. Li, J.-Q.; Pan, Q.-K. Solving the large-scale hybrid flow shop scheduling problem with limited buffers by a hybrid artificial bee colony algorithm. Inf. Sci. 2015, 316, 487–502.
29. Han, Y.; Li, J.-Q.; Gong, D.; Sang, H. Multi-objective migrating birds optimization algorithm for stochastic lot-streaming flow shop scheduling with blocking. IEEE Access 2018, 7, 5946–5962.
30. Niroomand, S.; Hadi-Vencheh, A.; Şahin, R.; Vizvári, B. Modified migrating birds optimization algorithm for closed loop layout with exact distances in flexible manufacturing systems. Expert Syst. Appl. 2015, 42, 6586–6597.
31. Zhang, Z.; Tang, Q.; Li, Z.; Zhang, L. Modelling and optimisation of energy-efficient U-shaped robotic assembly line balancing problems. Int. J. Prod. Res. 2018, 57, 5520–5537.
32. Zhang, Z.; Tang, Q.; Han, D.; Li, Z. Enhanced migrating birds optimization algorithm for U-shaped assembly line balancing problems with workers assignment. Neural Comput. Appl. 2018, 31, 7501–7515.
33. Li, J.; Xiao, X.; Tang, Q.; Floudas, C.A. Production Scheduling of a Large-Scale Steelmaking Continuous Casting Process via Unit-Specific Event-Based Continuous-Time Models: Short-Term and Medium-Term Scheduling. Ind. Eng. Chem. Res. 2012, 51, 7300–7319.
34. Shaik, M.A.; Floudas, C.A. Unit-specific event-based continuous-time approach for short-term scheduling of batch plants using RTN framework. Comput. Chem. Eng. 2008, 32, 260–274.
35. Verderame, P.M.; Elia, J.A.; Li, J.; Floudas, C.A. Planning and Scheduling under Uncertainty: A Review Across Multiple Sectors. Ind. Eng. Chem. Res. 2010, 49, 3993–4017.
36. Gao, L.; Pan, Q.-K. A shuffled multi-swarm micro-migrating birds optimizer for a multi-resource-constrained flexible job shop scheduling problem. Inf. Sci. 2016, 372, 655–676.
37. Janardhanan, M.N.; Li, Z.X.; Bocewicz, G.; Banaszak, Z.; Nielsen, P. Metaheuristic algorithms for balancing robotic assembly lines with sequence-dependent robot setup times. Appl. Math. Model. 2019, 65, 256–270.
38. Lin, S.; Luan, F.J.; Han, Z.H.; Lu, X.S.; Zhou, X.F.; Liu, W. Genetic Algorithm Based on Duality Principle for Bilevel Programming Problem in Steel-making Production. Chin. J. Chem. Eng. 2014, 22, 742–747.
39. Tseng, L.-Y.; Lin, Y.-T. A hybrid genetic algorithm for no-wait flowshop scheduling problem. Int. J. Prod. Econ. 2010, 128, 144–152.
40. Deng, J.; Wang, L.; Wang, S.-Y.; Zheng, X.-L. A competitive memetic algorithm for the distributed two-stage assembly flow-shop scheduling problem. Int. J. Prod. Res. 2016, 54, 3561–3577.
41. Behnamian, J. Diversified particle swarm optimization for hybrid flowshop scheduling. J. Optim. Ind. Eng. 2019, 12, 107–119.
42. Aghajani, M.; Ghodsi, R.; Javadi, B. Balancing of robotic mixed-model two-sided assembly line with robot setup times. Int. J. Adv. Manuf. Technol. 2014, 74, 1005–1016.
43. Li, J.Q.; Pan, Q.K.; Mao, K. A discrete teaching-learning-based optimisation algorithm for realistic flowshop rescheduling problems. Eng. Appl. Artif. Intell. 2015, 37, 279–292.
Figure 1. Steelmaking and continuous-casting production.
Figure 2. Continuous time representation based on unit specific event.
Figure 3. Two-level decoding procedure for the SCC scheduling problem. (a) FIFO-based forward heuristic; (b) Backward heuristic.
Figure 4. SNR main effects plot.
Figure 5. Convergence analysis for different instances when ρ = 10. (a) The instance with 20 casts and 61 charges; (b) The instance with 40 casts and 114 charges.
Figure 6. Means plots and 95% confidence intervals regarding minimum RPD or average RPD when ρ = 10. (a) The value of minimum RPD by algorithms; (b) The value of average RPD by algorithms.
Figure 7. Means plots and 95% confidence intervals regarding the minimum RPD or average RPD when ρ = 20. (a) The value of minimum RPD by algorithms; (b) The value of average RPD by algorithms.
Table 1. Indices, parameters and variables of the proposed model.
Indices:
i: The stage.
j: The charge.
l: The batch.
m: The machine.
t: The event point.
Sets:
I: The set of stages, I = {1, 2, …, s}, where s denotes the last (casting) stage.
J: The set of charges, J = {1, 2, …, J}, where J is the total number of charges.
L: The number of batches.
T: The set of event points, T = {1, 2, …, T}, where T is the total number of unit-specific event points in the scheduling horizon.
LJ(l, j): The set of charge indices in the l-th batch, l ∈ {1, 2, …, L}, where L is the total number of batches; the batches are pairwise disjoint, LJ(l, j) ∩ LJ(l′, j) = ∅ for all l ≠ l′, and together contain all n charges.
M_i: The set of machines at stage i.
LJF(l, j): The set of first charges of batches.
LJE(l, j): The set of last charges of batches.
Parameters:
U: A big positive number.
PT_{i,j}: The processing time of charge j at stage i.
Su: The setup time between adjacent batches on the caster for production preparation for the next batch.
Variables:
X_{j,m,t}: Binary variable; X_{j,m,t} = 1 if charge j is processed at event point t on machine m, and 0 otherwise.
Y_{l,m}: Binary variable; Y_{l,m} = 1 if batch l is processed on caster m at the last stage, and 0 otherwise.
Ts_{m,t}: Continuous variable; start time of event point t on machine m.
Tf_{m,t}: Continuous variable; finish time of event point t on machine m.
Tsi_{i,j}: Continuous variable; start time of charge j at stage i.
Cmax: Makespan; the completion time of the last charge on the continuous casters.
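As a small worked illustration of how this notation ties together (an explanatory sketch, not a constraint copied from the paper's MILP model), the makespan defined above must dominate the completion time of every charge at the casting stage s:

Cmax ≥ Tsi_{s,j} + PT_{s,j} for all j ∈ J,

so minimizing Cmax amounts to compressing the finish time of the last charge cast on the continuous casters.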
Table 2. Characteristics of the given charges.
Batch | Charges | Process Route | Processing Times
1 | {1} | LD-LF-CC | 75-45-61
2 | {2, 3, 4} | LD-LF-CC | 70-45-56
3 | {5, 6, 7, 8} | LD-LF-RH-CC | 70-45-40-55
Table 3. Process of heuristic initialization under batch sequence [2, 3, 1].
No. | Unallocated Batches | Assignable Charges | Selected Charge | Charge Set
1 | {1, 2, 3} | {1, 2, 5} | 2 | 2
2 | {1, 2, 3} | {1, 3, 5} | 5 | 2, 5
3 | {1, 2, 3} | {1, 3, 6} | 3 | 2, 5, 3
4 | {1, 2, 3} | {1, 4, 6} | 6 | 2, 5, 3, 6
5 | {1, 2, 3} | {1, 4, 7} | 4 | 2, 5, 3, 6, 4
6 | {1, 3} | {1, 7} | 7 | 2, 5, 3, 6, 4, 7
7 | {1, 3} | {1, 8} | 1 | 2, 5, 3, 6, 4, 7, 1
8 | {3} | {8} | 8 | 2, 5, 3, 6, 4, 7, 1, 8
Table 4. Summary of the tested algorithms.
Algorithm | Abbreviation | Description
Genetic algorithm (Lin et al. [38]) | GA | The neighbor operations in Section 4.4 are utilized for the mutation operator.
Genetic algorithm (Tseng and Lin [39]) | GAR | The neighbor operations in Section 4.4 are utilized for the mutation operator.
Memetic algorithm (Deng et al. [40]) | MA | The neighbor operations in Section 4.4 are utilized for the mutation operator.
Particle swarm optimization algorithm (Behnamian [41]) | PSO | The neighbor operations in Section 4.4 are utilized.
Simulated annealing algorithm (Aghajani et al. [42]) | SA | The neighbor operations in Section 4.4 are utilized.
Discrete teaching learning based optimization algorithm (Li et al. [43]) | DTLBO | The neighbor operations in Section 4.4 are utilized.
Discrete artificial bee colony algorithm (Peng et al. [26]) | DABC | The neighbor operations in Section 4.4 are utilized.
Improved migrating birds optimization | IMBO | This is the algorithm described in Section 4.
Table 5. The data for 40 instances.
Instance | Casts | Charges | Instance | Casts | Charges
Case 1 | 3 | 8 | Case 21 | 20 | 61
Case 2 | 3 | 8 | Case 22 | 20 | 61
Case 3 | 3 | 8 | Case 23 | 20 | 61
Case 4 | 3 | 8 | Case 24 | 20 | 61
Case 5 | 3 | 8 | Case 25 | 20 | 61
Case 6 | 5 | 18 | Case 26 | 25 | 76
Case 7 | 5 | 18 | Case 27 | 25 | 76
Case 8 | 5 | 18 | Case 28 | 25 | 76
Case 9 | 5 | 18 | Case 29 | 25 | 76
Case 10 | 5 | 18 | Case 30 | 25 | 76
Case 11 | 10 | 30 | Case 31 | 30 | 96
Case 12 | 10 | 30 | Case 32 | 30 | 96
Case 13 | 10 | 30 | Case 33 | 30 | 96
Case 14 | 10 | 30 | Case 34 | 30 | 96
Case 15 | 10 | 30 | Case 35 | 30 | 96
Case 16 | 15 | 47 | Case 36 | 34 | 114
Case 17 | 15 | 47 | Case 37 | 34 | 114
Case 18 | 15 | 47 | Case 38 | 34 | 114
Case 19 | 15 | 47 | Case 39 | 34 | 114
Case 20 | 15 | 47 | Case 40 | 34 | 114
Table 6. Orthogonal array of IMBO (columns 1–10 list the minimum RPD values; the last column is their average).
No. | α | β | χ | γ | τ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Average
1 | 9 | 5 | 3 | 50 | 0.4 | 0.14 | 0.14 | 0.13 | 0.17 | 0.14 | 0.17 | 0.17 | 0.13 | 0.10 | 0.10 | 0.14
2 | 9 | 8 | 4 | 100 | 0.5 | 0.14 | 0.17 | 0.14 | 0.13 | 0.14 | 0.14 | 0.11 | 0.11 | 0.11 | 0.11 | 0.13
3 | 9 | 9 | 5 | 150 | 0.45 | 0.15 | 0.15 | 0.14 | 0.13 | 0.12 | 0.13 | 0.12 | 0.12 | 0.11 | 0.12 | 0.13
4 | 9 | 12 | 6 | 200 | 0.6 | 0.12 | 0.09 | 0.09 | 0.09 | 0.09 | 0.06 | 0.06 | 0.06 | 0.06 | 0.10 | 0.08
5 | 11 | 5 | 4 | 150 | 0.6 | 0.19 | 0.14 | 0.14 | 0.11 | 0.05 | 0.11 | 0.09 | 0.11 | 0.10 | 0.11 | 0.12
6 | 11 | 8 | 3 | 200 | 0.45 | 0.14 | 0.15 | 0.15 | 0.14 | 0.11 | 0.11 | 0.11 | 0.11 | 0.11 | 0.09 | 0.12
7 | 11 | 9 | 6 | 50 | 0.5 | 0.09 | 0.07 | 0.07 | 0.07 | 0.07 | 0.07 | 0.07 | 0.07 | 0.07 | 0.07 | 0.07
8 | 11 | 12 | 5 | 100 | 0.4 | 0.20 | 0.13 | 0.10 | 0.07 | 0.06 | 0.06 | 0.06 | 0.03 | 0.00 | 0.03 | 0.07
9 | 13 | 5 | 5 | 200 | 0.5 | 0.18 | 0.17 | 0.18 | 0.11 | 0.14 | 0.14 | 0.15 | 0.14 | 0.13 | 0.11 | 0.15
10 | 13 | 8 | 6 | 150 | 0.4 | 0.09 | 0.09 | 0.09 | 0.09 | 0.09 | 0.09 | 0.06 | 0.09 | 0.09 | 0.09 | 0.09
11 | 13 | 9 | 3 | 100 | 0.6 | 0.14 | 0.11 | 0.11 | 0.11 | 0.11 | 0.09 | 0.10 | 0.07 | 0.10 | 0.06 | 0.10
12 | 13 | 12 | 4 | 50 | 0.45 | 0.14 | 0.11 | 0.11 | 0.11 | 0.11 | 0.11 | 0.11 | 0.09 | 0.11 | 0.11 | 0.11
13 | 15 | 5 | 3 | 100 | 0.45 | 0.20 | 0.17 | 0.13 | 0.15 | 0.14 | 0.09 | 0.11 | 0.09 | 0.09 | 0.09 | 0.13
14 | 15 | 8 | 5 | 50 | 0.6 | 0.14 | 0.11 | 0.09 | 0.09 | 0.09 | 0.09 | 0.09 | 0.11 | 0.09 | 0.10 | 0.10
15 | 15 | 9 | 4 | 200 | 0.4 | 0.09 | 0.14 | 0.15 | 0.11 | 0.09 | 0.09 | 0.09 | 0.09 | 0.09 | 0.05 | 0.10
16 | 15 | 12 | 6 | 150 | 0.5 | 0.09 | 0.14 | 0.15 | 0.11 | 0.09 | 0.09 | 0.09 | 0.09 | 0.09 | 0.05 | 0.10
Table 7. The ANOVA results for the parameters.
Source | df | Seq SS | Adj SS | Adj MS | F-Ratio | p-Value
α | 3 | 0.19837 | 0.19837 | 0.06612 | 5.47 | 0.001
β | 3 | 0.60459 | 0.09244 | 0.03081 | 2.55 | 0.058
χ | 3 | 0.21238 | 0.28081 | 0.0936 | 7.75 | 0.000
γ | 3 | 0.08074 | 0.11768 | 0.03923 | 3.25 | 0.024
τ | 3 | 0.26197 | 0.26197 | 0.08732 | 7.23 | 0.000
Error | 144 | 1.73975 | | 0.01208 | |
Total | 159 | 3.0977 | | | |
Table 8. Selected parameter values of the proposed IMBO.
Parameter | Symbol | Levels | Selected Value
The number of initial solutions | α | 9, 11, 13, 15 | 11
The number of neighbor solutions | β | 5, 8, 9, 12 | 12
The number of neighbor solutions to be shared | χ | 3, 4, 5, 6 | 6
The number of tours | γ | 50, 100, 150, 200 | 150
The initial temperature | τ | 0.4, 0.45, 0.5, 0.6 | 0.4
Table 9. Parameter values of the comparison algorithms.
Parameter | Testing Range | Value
GA algorithm (GA)
Population size | 40, 60, 80 | 60
Crossover rate | 0.7, 0.8, 0.9 | 0.9
Mutation rate | 0.1, 0.2, 0.3 | 0.2
GA algorithm (GAR)
Population size | 40, 80, 120 | 80
Crossover rate | 0.7, 0.8, 0.9 | 0.8
Mutation rate | 0.1, 0.2, 0.3 | 0.3
Memetic Algorithm (MA)
Population size | 40, 60, 80 | 60
Crossover rate | 0.7, 0.8, 0.9 | 0.9
Mutation rate | 0.1, 0.15, 0.2 | 0.2
PSO algorithm (PSO)
The number of particles | 40, 60, 80 | 60
SA algorithm (SA)
Initial temperature | 0.5, 0.75, 1.0 | 0.5
Cooling rate | 0.8, 0.9, 0.95 | 0.9
Maximum number of iterations | 500, 1000, 1500 | 500
Discrete TLBO algorithm (DTLBO)
Population size | 30, 40, 80 | 40
Discrete artificial bee colony algorithm (DABC)
The number of scout bees | 3, 5, 6 | 5
The number of following bees | 10, 15, 18 | 15
Life time | 30, 40, 60 | 40
Table 10. Computational results by the Cplex, MBO and IMBO.
Instance | Problem Size | Cplex RPD | Cplex Time (s) | MBO Best | MBO Worst | MBO Avg | IMBO Best | IMBO Worst | IMBO Avg
1 | 3 × 8 | 0.00 | 24.32 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
2 | 5 × 18 | 0.00 | 232.34 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
3 | 6 × 19 | 1.90 | 300.00 | 0.00 | 0.24 | 0.00 | 0.00 | 0.00 | 0.00
4 | 7 × 21 | 3.63 | 300.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
5 | 8 × 24 | 6.13 | 300.00 | 0.20 | 1.48 | 0.59 | 0.00 | 0.49 | 0.10
6 | 10 × 30 | 4.33 | 300.00 | 0.55 | 1.81 | 0.79 | 0.00 | 0.55 | 0.08
Table 11. Comparison results for minimum RPD and average RPD when ρ = 10.
Instance | GA Min | GA Avg | GAR Min | GAR Avg | MA Min | MA Avg | PSO Min | PSO Avg | SA Min | SA Avg | DTLBO Min | DTLBO Avg | DABC Min | DABC Avg | IMBO Min | IMBO Avg
Case 1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 2 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 3 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 4 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 5 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 6 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 7 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 8 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 9 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.06 | 0.00 | 0.00 | 0.00 | 0.00
Case 11 | 0.00 | 0.23 | 0.00 | 0.10 | 0.00 | 0.00 | 0.00 | 0.11 | 0.00 | 0.21 | 0.00 | 0.23 | 0.00 | 0.00 | 0.00 | 0.00
Case 12 | 0.00 | 0.43 | 0.00 | 0.37 | 0.00 | 0.18 | 0.00 | 0.25 | 0.00 | 0.59 | 0.31 | 0.75 | 0.00 | 0.23 | 0.00 | 0.00
Case 13 | 0.00 | 0.37 | 0.00 | 0.05 | 0.09 | 0.11 | 0.00 | 0.08 | 0.00 | 0.31 | 0.00 | 0.22 | 0.00 | 0.06 | 0.00 | 0.00
Case 14 | 0.31 | 0.59 | 0.00 | 0.26 | 0.33 | 0.18 | 0.12 | 0.18 | 0.31 | 0.37 | 0.31 | 0.53 | 0.00 | 0.09 | 0.00 | 0.00
Case 15 | 0.00 | 0.65 | 0.00 | 0.28 | 0.00 | 0.30 | 0.27 | 0.31 | 0.00 | 0.66 | 0.00 | 0.50 | 0.00 | 0.24 | 0.00 | 0.00
Case 16 | 0.53 | 1.41 | 0.67 | 1.29 | 0.55 | 0.51 | 0.00 | 0.48 | 0.53 | 1.41 | 1.06 | 1.68 | 0.00 | 0.36 | 0.00 | 0.33
Case 17 | 0.52 | 1.16 | 1.02 | 1.19 | 1.16 | 1.05 | 0.67 | 0.91 | 1.10 | 1.29 | 0.94 | 1.25 | 0.47 | 0.82 | 0.00 | 0.43
Case 18 | 0.63 | 1.05 | 0.84 | 1.00 | 0.88 | 0.92 | 0.86 | 1.03 | 0.84 | 1.35 | 0.58 | 1.28 | 0.05 | 0.48 | 0.00 | 0.35
Case 19 | 0.31 | 0.53 | 0.26 | 0.39 | 0.33 | 1.00 | 0.90 | 1.06 | 0.31 | 0.51 | 0.31 | 0.57 | 0.26 | 0.30 | 0.00 | 0.28
Case 20 | 0.89 | 1.36 | 0.35 | 0.90 | 0.94 | 0.90 | 0.80 | 0.94 | 0.89 | 1.33 | 1.47 | 1.62 | 0.00 | 0.29 | 0.00 | 0.26
Case 21 | 0.78 | 1.10 | 1.22 | 1.47 | 1.00 | 0.55 | 0.49 | 0.58 | 0.95 | 1.33 | 1.03 | 1.49 | 0.12 | 0.57 | 0.00 | 0.54
Case 22 | 0.78 | 1.20 | 0.73 | 0.95 | 1.08 | 0.30 | 0.27 | 0.31 | 1.03 | 1.31 | 0.82 | 1.27 | 0.49 | 0.73 | 0.00 | 0.62
Case 23 | 0.21 | 1.02 | 1.18 | 1.31 | 1.04 | 1.40 | 1.25 | 1.47 | 0.99 | 1.44 | 1.20 | 1.47 | 0.08 | 0.68 | 0.00 | 0.44
Case 24 | 0.41 | 0.70 | 0.48 | 0.69 | 0.73 | 0.98 | 0.88 | 1.03 | 0.70 | 1.04 | 0.49 | 0.94 | 0.21 | 0.39 | 0.00 | 0.37
Case 25 | 0.20 | 0.57 | 0.52 | 0.78 | 0.60 | 0.78 | 0.70 | 0.82 | 0.57 | 1.02 | 0.61 | 1.02 | 0.16 | 0.27 | 0.00 | 0.29
Case 26 | 0.34 | 0.93 | 1.22 | 1.45 | 1.23 | 1.14 | 1.02 | 1.20 | 1.17 | 1.64 | 1.01 | 1.67 | 0.00 | 0.26 | 0.00 | 0.17
Case 27 | 0.90 | 1.25 | 0.71 | 1.26 | 1.34 | 0.47 | 0.42 | 0.49 | 1.27 | 1.70 | 1.58 | 1.97 | 0.00 | 0.33 | 0.00 | 0.37
Case 28 | 0.67 | 1.28 | 1.19 | 1.49 | 1.20 | 0.58 | 0.52 | 0.61 | 1.14 | 1.46 | 1.14 | 1.62 | 0.07 | 0.37 | 0.00 | 0.18
Case 29 | 0.60 | 1.20 | 1.04 | 1.32 | 1.75 | 0.96 | 0.86 | 1.01 | 1.67 | 1.91 | 1.14 | 1.70 | 0.00 | 0.28 | 0.00 | 0.10
Case 30 | 0.80 | 1.35 | 1.10 | 1.35 | 0.85 | 1.50 | 1.34 | 1.58 | 1.00 | 1.70 | 1.44 | 1.86 | 0.07 | 0.28 | 0.00 | 0.36
Case 31 | 0.59 | 1.11 | 0.88 | 1.10 | 0.93 | 1.09 | 0.97 | 1.14 | 1.13 | 1.51 | 1.10 | 1.58 | 0.13 | 0.37 | 0.00 | 0.38
Case 32 | 0.75 | 1.35 | 0.74 | 1.00 | 1.04 | 1.08 | 0.97 | 1.14 | 1.18 | 1.50 | 1.53 | 1.81 | 0.03 | 0.38 | 0.00 | 0.26
Case 33 | 0.70 | 1.03 | 0.70 | 1.05 | 1.07 | 1.37 | 1.22 | 1.44 | 1.02 | 1.45 | 1.29 | 1.56 | 0.21 | 0.46 | 0.00 | 0.35
Case 34 | 0.73 | 1.20 | 0.79 | 1.08 | 1.10 | 1.04 | 0.93 | 1.10 | 1.05 | 1.64 | 1.26 | 1.64 | 0.03 | 0.47 | 0.00 | 0.20
Case 35 | 0.62 | 0.94 | 0.77 | 1.00 | 1.24 | 1.45 | 1.30 | 1.53 | 1.18 | 1.34 | 0.83 | 1.28 | 0.05 | 0.28 | 0.00 | 0.44
Case 36 | 0.48 | 0.72 | 0.50 | 0.75 | 0.86 | 1.22 | 1.09 | 1.29 | 0.82 | 1.02 | 1.02 | 1.20 | 0.16 | 0.29 | 0.00 | 0.15
Case 37 | 0.52 | 0.77 | 0.29 | 0.64 | 0.69 | 1.20 | 1.07 | 1.26 | 0.66 | 0.99 | 0.86 | 1.13 | 0.00 | 0.27 | 0.00 | 0.24
Case 38 | 0.32 | 0.59 | 0.54 | 0.73 | 0.36 | 0.79 | 0.71 | 0.83 | 0.34 | 0.85 | 0.89 | 1.18 | 0.00 | 0.16 | 0.00 | 0.19
Case 39 | 0.30 | 0.64 | 0.65 | 0.78 | 0.74 | 0.97 | 0.87 | 1.02 | 0.70 | 0.87 | 0.59 | 0.96 | 0.02 | 0.13 | 0.00 | 0.14
Case 40 | 0.30 | 0.72 | 0.46 | 0.90 | 0.67 | 0.82 | 0.73 | 0.86 | 0.64 | 0.90 | 0.55 | 1.10 | 0.00 | 0.32 | 0.00 | 0.17
Avg | 0.35 | 0.69 | 0.47 | 0.67 | 0.60 | 0.62 | 0.53 | 0.65 | 0.58 | 0.87 | 0.63 | 0.93 | 0.07 | 0.25 | 0.00 | 0.19
Note: The smallest values of RPD are the best.
Table 12. Comparison results for minimum RPD and average RPD when ρ = 20.
Instance | GA Min | GA Avg | GAR Min | GAR Avg | MA Min | MA Avg | PSO Min | PSO Avg | SA Min | SA Avg | DTLBO Min | DTLBO Avg | DABC Min | DABC Avg | IMBO Min | IMBO Avg
Case 1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 2 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 3 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 4 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 5 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 6 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 7 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 8 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 9 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Case 10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.06 | 0.00 | 0.00 | 0.00 | 0.00
Case 11 | 0.00 | 0.13 | 0.00 | 0.10 | 0.00 | 0.00 | 0.00 | 0.11 | 0.00 | 0.12 | 0.00 | 0.23 | 0.00 | 0.00 | 0.00 | 0.00
Case 12 | 0.00 | 0.35 | 0.00 | 0.31 | 0.00 | 0.18 | 0.00 | 0.18 | 0.00 | 0.44 | 0.00 | 0.75 | 0.00 | 0.07 | 0.00 | 0.00
Case 13 | 0.00 | 0.19 | 0.00 | 0.05 | 0.00 | 0.05 | 0.00 | 0.08 | 0.00 | 0.06 | 0.00 | 0.22 | 0.00 | 0.00 | 0.00 | 0.00
Case 14 | 0.00 | 0.36 | 0.00 | 0.26 | 0.00 | 0.18 | 0.00 | 0.18 | 0.00 | 0.31 | 0.31 | 0.53 | 0.00 | 0.03 | 0.00 | 0.00
Case 15 | 0.00 | 0.17 | 0.00 | 0.15 | 0.00 | 0.30 | 0.16 | 0.21 | 0.00 | 0.34 | 0.00 | 0.46 | 0.00 | 0.03 | 0.00 | 0.00
Case 16 | 0.53 | 1.28 | 0.67 | 1.11 | 0.55 | 0.51 | 0.00 | 0.48 | 0.53 | 1.41 | 0.79 | 1.53 | 0.00 | 0.34 | 0.00 | 0.17
Case 17 | 0.52 | 1.16 | 1.00 | 1.11 | 1.16 | 1.05 | 0.67 | 0.91 | 1.10 | 1.29 | 0.79 | 1.20 | 0.37 | 0.60 | 0.00 | 0.43
Case 18 | 0.58 | 1.00 | 0.55 | 0.87 | 0.88 | 0.92 | 0.86 | 0.52 | 0.84 | 1.19 | 0.58 | 1.28 | 0.05 | 0.32 | 0.00 | 0.31
Case 19 | 0.31 | 0.44 | 0.26 | 0.38 | 0.33 | 0.42 | 0.40 | 0.29 | 0.31 | 0.46 | 0.26 | 0.48 | 0.00 | 0.28 | 0.00 | 0.24
Case 20 | 0.89 | 1.28 | 0.35 | 0.90 | 0.44 | 0.90 | 0.80 | 0.86 | 0.42 | 1.07 | 0.63 | 1.27 | 0.00 | 0.16 | 0.00 | 0.14
Case 21 | 0.46 | 1.10 | 0.43 | 1.17 | 1.00 | 0.55 | 0.49 | 0.41 | 0.95 | 1.33 | 1.03 | 1.49 | 0.12 | 0.57 | 0.00 | 0.25
Case 22 | 0.66 | 0.84 | 0.63 | 0.73 | 0.90 | 0.30 | 0.27 | 0.31 | 0.86 | 1.12 | 0.82 | 1.10 | 0.08 | 0.51 | 0.00 | 0.41
Case 23 | 0.21 | 1.02 | 0.35 | 1.02 | 1.04 | 1.40 | 1.07 | 0.93 | 0.99 | 1.44 | 1.20 | 1.47 | 0.08 | 0.68 | 0.00 | 0.34
Case 24 | 0.37 | 0.61 | 0.35 | 0.53 | 0.60 | 0.76 | 0.56 | 0.53 | 0.57 | 0.83 | 0.37 | 0.78 | 0.21 | 0.32 | 0.00 | 0.18
Case 25 | 0.20 | 0.57 | 0.23 | 0.51 | 0.60 | 0.78 | 0.54 | 0.42 | 0.57 | 0.93 | 0.61 | 0.89 | 0.12 | 0.25 | 0.00 | 0.23
Case 26 | 0.34 | 0.84 | 0.57 | 0.73 | 1.23 | 1.14 | 0.77 | 0.54 | 1.17 | 1.64 | 1.01 | 1.60 | 0.00 | 0.20 | 0.00 | 0.13
Case 27 | 0.64 | 1.10 | 0.61 | 0.96 | 0.88 | 0.47 | 0.42 | 0.49 | 0.84 | 1.49 | 1.07 | 1.53 | 0.00 | 0.33 | 0.00 | 0.24
Case 28 | 0.67 | 1.28 | 0.99 | 1.25 | 1.20 | 0.58 | 0.52 | 0.61 | 1.14 | 1.46 | 0.78 | 1.62 | 0.07 | 0.37 | 0.00 | 0.18
Case 29 | 0.60 | 1.04 | 0.64 | 0.90 | 1.30 | 0.96 | 0.86 | 0.81 | 1.24 | 1.57 | 1.14 | 1.64 | 0.00 | 0.28 | 0.00 | 0.10
Case 30 | 0.44 | 1.22 | 0.41 | 1.06 | 0.85 | 1.46 | 1.11 | 1.09 | 1.00 | 1.60 | 1.44 | 1.86 | 0.07 | 0.28 | 0.00 | 0.26
Case 31 | 0.59 | 0.98 | 0.64 | 0.86 | 0.93 | 1.09 | 0.90 | 0.91 | 1.05 | 1.31 | 1.10 | 1.42 | 0.00 | 0.36 | 0.00 | 0.34
Case 32 | 0.43 | 0.93 | 0.41 | 0.81 | 0.92 | 1.08 | 0.85 | 0.89 | 0.88 | 1.19 | 1.12 | 1.38 | 0.03 | 0.30 | 0.00 | 0.20
Case 33 | 0.59 | 0.86 | 0.56 | 0.75 | 0.87 | 1.13 | 0.79 | 0.83 | 0.83 | 1.24 | 0.94 | 1.44 | 0.11 | 0.32 | 0.00 | 0.35
Case 34 | 0.73 | 0.92 | 0.71 | 0.80 | 0.99 | 1.04 | 0.84 | 0.97 | 0.94 | 1.29 | 1.07 | 1.46 | 0.03 | 0.34 | 0.00 | 0.18
Case 35 | 0.62 | 0.94 | 0.71 | 0.87 | 0.96 | 1.07 | 0.91 | 1.07 | 0.91 | 1.18 | 0.83 | 1.28 | 0.05 | 0.26 | 0.00 | 0.13
Case 36 | 0.43 | 0.58 | 0.41 | 0.50 | 0.62 | 0.81 | 0.93 | 0.49 | 0.59 | 0.89 | 0.48 | 0.93 | 0.05 | 0.14 | 0.00 | 0.08
Case 37 | 0.32 | 0.60 | 0.29 | 0.52 | 0.36 | 0.69 | 0.85 | 0.69 | 0.34 | 0.76 | 0.80 | 0.98 | 0.00 | 0.25 | 0.00 | 0.22
Case 38 | 0.32 | 0.59 | 0.39 | 0.60 | 0.36 | 0.79 | 0.71 | 0.77 | 0.34 | 0.85 | 0.89 | 1.18 | 0.00 | 0.16 | 0.00 | 0.19
Case 39 | 0.30 | 0.62 | 0.32 | 0.54 | 0.74 | 0.85 | 0.57 | 0.65 | 0.70 | 0.87 | 0.59 | 0.96 | 0.02 | 0.13 | 0.00 | 0.14
Case 40 | 0.30 | 0.72 | 0.46 | 0.67 | 0.67 | 0.82 | 0.70 | 0.86 | 0.55 | 0.90 | 0.55 | 1.10 | 0.00 | 0.32 | 0.00 | 0.17
Avg | 0.30 | 0.59 | 0.32 | 0.53 | 0.51 | 0.56 | 0.44 | 0.45 | 0.49 | 0.76 | 0.53 | 0.85 | 0.04 | 0.21 | 0.00 | 0.14
Note: The smallest values of RPD are the best.
