Article

Algorithms for Multi-Customer Scheduling with Outsourcing

College of Science, Zhongyuan University of Technology, Zhengzhou 450007, China
*
Author to whom correspondence should be addressed.
Submission received: 27 March 2022 / Revised: 22 April 2022 / Accepted: 27 April 2022 / Published: 5 May 2022

Abstract

There are two customers and two uniform machines. Each customer has a set of jobs. Each job may either be processed on one of the uniform machines or be outsourced at an outsourcing cost. Each customer has an objective function for its own jobs. The manufacturer wants to find the best scheduling scheme for both customers. We present algorithms for these problems for the first time.
MSC:
90B36

1. Introduction

Scheduling is the process of arranging, controlling, and optimizing work and workloads in manufacturing production. With the progress of society and the rapid development of the economy, scheduling problems appear more and more often in daily life: a large-scale engineering project must decide how to assign personnel and supply equipment, and an express company must decide the delivery sequence for online shopping orders so that packages reach customers as quickly as possible. These are all scheduling problems. Scheduling has many other applications in the service industry, manufacturing, transportation, computer science, and other fields of modern life. Good (or bad) scheduling directly affects the cost and the profit of manufacturers.
In classical scheduling problems, the jobs often belong to one customer (agent). However, in practical problems, the jobs usually belong to multiple customers (agents). For example, a manufacturer needs to face different customers at the same time. Each customer has his own jobs (products) which need to be processed on the same assembly lines. Then, decision makers must decide how to schedule these jobs to be processed on machines so as to maximize profit and satisfy different customers’ requirements. Therefore, decision makers need to balance the interests of these customers. This leads to multi-customer (agent) scheduling.
Baker and Smith [1] first studied two-customer scheduling. Their problem is 1 || Z_A + θ Z_B, where Z_A is the objective function of the first customer and Z_B is the objective function of the second customer. When Z_A = C_max^A and Z_B ∈ {Σ C_j^B, Σ w_j^B C_j^B}, they provided polynomial-time algorithms. For problems 1 || (λ_A Σ w_j^A C_j^A + λ_B Σ w_j^B C_j^B), 1 || (λ_A Σ C_j^A + λ_B L_max^B) and 1 || (λ_A L_max^A + λ_B L_max^B), they also provided polynomial-time algorithms. Moreover, they proved that problem 1 || (λ_A Σ w_j^A C_j^A + λ_B L_max^B) is unary NP-hard (so it admits no pseudo-polynomial-time algorithm unless P = NP). Though the algorithms in [1] solved the problems completely, some of them are not the best possible. Later, Yuan et al. [2] improved some of the results in [1]. Agnetis et al. [3] also considered two-customer scheduling. They first proposed the constrained scheduling problem 1 || f^A : f^B ≤ L (L ≥ 0, L a constant) and the Pareto-optimal problem 1 || (f^A, f^B). To solve the Pareto-optimal problem 1 || (f^A, f^B), they first solved two constrained problems, 1 || f^A : f^B ≤ L and 1 || f^B : f^A ≤ L*, where L* is the optimal value of problem 1 || f^A : f^B ≤ L. Let L** be the optimal value of problem 1 || f^B : f^A ≤ L*. The authors then proved that (L*, L**) is a Pareto-optimal point of problem 1 || (f^A, f^B). By varying L and repeating the above, they found all the Pareto-optimal points of problem 1 || (f^A, f^B). Moreover, the authors also proved that problem 1 || (Σ C_j^A : Σ C_j^B ≤ L) is binary NP-hard. The algorithms provided in that paper are pioneering and skillful, and they solved the problems perfectly. Agnetis et al. presented very valuable methods and ideas for finding all Pareto-optimal solutions of multi-customer scheduling problems. Later, many researchers used these ideas to study other constrained and Pareto-optimal problems involving multiple customers.
Most of the algorithms of Agnetis et al. [3] apply to problems that admit polynomial-time algorithms. Since the problems in this paper are NP-hard and more complex, our algorithms are more involved: we present pseudo-polynomial-time algorithms. In general, polynomial-time algorithms are faster than pseudo-polynomial-time ones. More and more researchers have become interested in multi-customer scheduling. For other related results on multi-customer scheduling up to 2014, readers can refer to Agnetis et al. [4]. Feng et al. [5] studied two-customer scheduling with outsourcing on a single machine; the algorithms provided in [5] solved all their problems completely. The details of [5] are described below.
Later, problems studied by researchers became more sophisticated. Different problems may have different characteristics and need different algorithms for solving them. For most problems, existing algorithms cannot be used directly to solve new problems. Therefore, researchers try to find new algorithms for their problems. Sometimes these algorithms are capable of completely solving the problems considered. Sometimes these algorithms are capable of partially solving the problems.
As for recent research on multi-customer scheduling, Zhang et al. [6] considered the Pareto-optimal problem 1 || (Σ w_j^A C_j^A, Σ w_j^B Y_j^B). They provided an algorithm for their problem and tested it on experimental data; the algorithm was capable of completely solving the problem. Zhao and Yuan [7] considered the Pareto-optimal problem Q | p_j = p | (f^(1), f^(2)). For the objective functions studied in Oron et al. [8], they proved that the problems can be solved polynomially or pseudo-polynomially. However, for f^(1) = Σ T_j^(1) and f^(2) ∈ {Σ T_j^(2), Σ w_j^(2) U_j^(2), Σ w_j^(2) C_j^(2)}, the exact complexities of problems Q | p_j = p | (f^(1), f^(2)), P | p_j = p | (f^(1), f^(2)) and 1 | p_j = p | (f^(1), f^(2)) are not known; that is, their algorithms were capable of only partially solving their problems. Li and Yuan [9] considered single-machine scheduling with multiple customers; they presented algorithms and solved their problems completely. He and Yuan [10] studied the three problems 1 | pmtn | (Σ C_j^A, Σ Y_j^B), 1 | pmtn | (L_max^A, Σ Y_j^B) and 1 | pmtn | (Σ Y_j^A, Σ Y_j^B) and presented polynomial-time algorithms that also solved their problems completely. Feng et al. [11] studied a two-customer Pareto-optimal problem on a bounded parallel-batching machine and found a polynomial-time algorithm for the problem. Recently, He et al. [12] considered a more general case of the problem in [11] and provided a better algorithm with a lower time bound. Chen et al. [13] studied two-customer Pareto scheduling in which each customer's jobs have their own equal processing time. They solved all the problems in which the objective functions of the two customers are related to late work, and they tested their algorithms on experimental data, which revealed that the algorithms were capable of completely solving the problems considered.
If we think of outsourcing as rejection, then our problem with outsourcing can be considered a problem with rejection. In the early stage of scheduling theory, jobs are usually not allowed to be outsourced. However, the assumption is obviously inconsistent with the development of society. In order to maximize profits, manufacturers often outsource or reject some jobs, which leads to scheduling with outsourcing.
Bartal et al. [14] first studied scheduling with outsourcing; the jobs in their problems were processed on identical parallel machines. Hoogeveen et al. [15] studied scheduling with outsourcing when the jobs can be preempted. Ou and Zhong [16,17] studied further problems with outsourcing. Lu et al. [18] considered problems 1 | r_j, model i | C_max(O̅) + g_i(V), i = 1, 2, 3, 4, where O̅ is the set of jobs that cannot be outsourced. For a taxonomy and review, readers can refer to Slotnick [19] and Shabtay et al. [20]. Each algorithm in [14,17] has its own character and differs from existing algorithms. Zou and Yuan [21] studied single-machine scheduling with job outsourcing and a maintenance interval of fixed length on the machine. The authors of [18,21] both showed that their problems are NP-hard. Proving NP-hardness requires good skill in constructing instances, and the constructions in [18,21] are entirely different; each method was skillful and solved their problems completely.
Much research is focused on multi-customer scheduling and much on scheduling with outsourcing. However, there is little research on multi-customer scheduling with outsourcing, which motivates us to study the problems in this paper.
Feng et al. [5] first considered two-customer scheduling with outsourcing on a single machine. For each pair of objectives of the two customers, the authors proved that the problem is NP-hard. If no job of customer B is allowed to be outsourced, they gave an approximation algorithm. Finally, Feng et al. [5] presented exact complexities for several NP-hard problems. The NP-hardness proofs in [5] used reductions from the Partition problem, a well-known NP-hard problem in scheduling theory; however, the instance constructions, data, and approximation algorithm were first presented in [5]. Later, Li and Lu [22] generalized the problems of Feng et al. [5] to two identical parallel machines. Using dynamic programming algorithms similar to those of Feng et al. [5], they provided algorithms for the problems.
This paper studies two-customer scheduling problems with outsourcing on two uniform machines. As far as we know, this is the first study of multi-customer scheduling with outsourcing on uniform machines. Since all the problems were proved to be NP-hard on a single machine by Feng et al. [5], all the problems considered in this paper are also NP-hard. We provide exact complexities for these problems.
For the sake of presentation, we use the three-field notation γ1 | γ2 | γ3 introduced by Graham et al. [23] to denote a scheduling problem. γ1 specifies the machine environment: identical parallel machines are denoted by P (each machine has the same speed); uniform parallel machines are denoted by Q (each machine M_q has a speed s_q); and unrelated parallel machines are denoted by R (each machine M_q has a job-dependent speed s_{jq} for job J_j). Other machine environments include job shops, flow shops, open shops, and mixed shops. γ2 denotes job characteristics and constraint conditions, such as r_j (release date of job J_j), d_j (due date of job J_j), w_j (weight of job J_j), e_j (outsourcing cost of job J_j), reject (jobs may be outsourced), etc. γ3 denotes the optimality criterion, such as the maximum completion time C_max, the maximum lateness L_max, the total completion time Σ C_j, etc.
For most research on multi-customer (agent) scheduling, the main scheduling models focus on the following three cases (taking two customers as an example), where f^(1) and f^(2) are the objective functions of the two customers, respectively.
(1) Constrained scheduling γ1 | f^(2) ≤ K | f^(1). The goal is to find an optimal sequence that minimizes f^(1) subject to the condition f^(2) ≤ K, where K ≥ 0.
(2) Scheduling with a weighted objective function γ1 | γ2 | λ1 f^(1) + λ2 f^(2), where λ1 and λ2 are weight coefficients. The goal is to find an optimal schedule that minimizes λ1 f^(1) + λ2 f^(2).
(3) Pareto-optimal scheduling γ1 | γ2 | (f^(1), f^(2)). The goal is to find all Pareto-optimal schedules with respect to f^(1) and f^(2).
In this paper, we consider a combination of both (1) and (2).
The organization of this article is as follows. In Section 2, we provide the notations used in the paper. In Section 3, we give the problem statement. Section 4, Section 5 and Section 6 focus on algorithms for our problems. Finally, conclusions are provided in Section 7.

2. Notations

For two-customer scheduling with outsourcing on two uniform machines, let A and B be the two customers. We use the following notation throughout the paper, where x ∈ {A, B}; all parameters are integers:
  • d_j^x is the due date of job J_j^x.
  • e_j^x is the outsourcing cost of job J_j^x.
  • E_x = Σ_{i=1}^{n_x} e_i^x denotes the total outsourcing cost of all jobs of customer x.
  • F^x is the objective function for the jobs of customer x processed on the machines M_q (q = 1, 2); in this paper, F ∈ {C_max, L_max, max w_j C_j, Σ C_j, Σ w_j U_j}.
  • J^x denotes the set of jobs of customer x, that is, J^x = {J_1^x, J_2^x, …, J_{n_x}^x}.
  • i, j, k, l, r, s, t are job indexes.
  • K is a positive integer, the upper bound in the constraint condition.
  • M_q (q = 1, 2) denotes machine M_1 or M_2.
  • n_x is the number of jobs of customer x.
  • p_j^x is the processing time of job J_j^x.
  • P = Σ_{i=1}^{n_A} p_i^A + Σ_{i=1}^{n_B} p_i^B denotes the total processing time of all jobs of the two customers.
  • Q denotes that the machine environment consists of uniform parallel machines.
  • Q2 denotes that there are two uniform parallel machines.
  • s_q is the processing speed of machine M_q (q = 1, 2).
  • w_j^x is the weight of job J_j^x.
    Moreover, for a given schedule π, we also use the following notation, where x ∈ {A, B}:
  • A^x(π) is the set of jobs of customer x that are not outsourced in π.
  • A_q(X, Y)(π) is the set of jobs of the two customers processed on machine M_q among J_1^A, …, J_X^A, J_1^B, …, J_Y^B in π. If there is no ambiguity, we write A_q(X, Y) instead of A_q(X, Y)(π).
  • A_q^B(X, Y)(π) is the set of jobs of customer B processed on machine M_q among J_1^A, …, J_X^A, J_1^B, …, J_Y^B in π. If there is no ambiguity, we write A_q^B(X, Y) instead of A_q^B(X, Y)(π).
  • e(R^x(π)) is the total outsourcing cost of the jobs of customer x outsourced in π. If there is no ambiguity, we write e(R^x) instead of e(R^x(π)).
  • C_j^x(π) is the completion time of job J_j^x in π.
  • C_max^x(π) is the maximum completion time of the jobs of customer x in π.
  • Σ C_j^x(π) is the total completion time of the jobs of customer x in π.
  • L_j^x(π) = C_j^x(π) − d_j^x is the lateness of job J_j^x in π (a penalty if the job is completed after its due date).
  • L_max^x(π) = max{L_j^x(π) : 1 ≤ j ≤ n_x} is the maximum lateness of the jobs of customer x in π.
  • R^x(π) is the set of jobs of customer x that are outsourced in π.
  • R^A(X, Y)(π) is the set of jobs of customer A that are outsourced among J_1^A, …, J_X^A in π. If there is no ambiguity, we write R^A(X, Y) instead of R^A(X, Y)(π).
  • R^B(X, Y)(π) is the set of jobs of customer B that are outsourced among J_1^B, …, J_Y^B in π. If there is no ambiguity, we write R^B(X, Y) instead of R^B(X, Y)(π).
  • U_j^x(π) is the tardiness indicator of job J_j^x in π, that is, U_j^x(π) = 1 if C_j^x(π) > d_j^x, and U_j^x(π) = 0 if C_j^x(π) ≤ d_j^x.
  • max w_j^x C_j^x(π) is the maximum weighted completion time of the jobs of customer x in π.
  • Σ w_j^x U_j^x(π) is the weighted number of tardy jobs of customer x in π.
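To make the per-job quantities above concrete, the lateness, the tardiness indicator, and the two aggregate criteria can be computed directly from completion times and due dates. The following is a minimal illustrative sketch; the toy schedule is ours, not from the paper:

```python
def lateness(C, d):
    """L_j = C_j - d_j; negative when the job completes early."""
    return C - d

def tardy_indicator(C, d):
    """U_j = 1 if C_j > d_j, else U_j = 0."""
    return 1 if C > d else 0

# A toy schedule: (completion time C_j, due date d_j, weight w_j) per job.
jobs = [(3, 5, 2), (6, 4, 1), (8, 8, 3)]
L_max = max(lateness(C, d) for C, d, _ in jobs)                      # L_max = 2
weighted_tardy = sum(w * tardy_indicator(C, d) for C, d, w in jobs)  # sum w_j U_j = 1
```

Note that only the second job is tardy (it completes at 6 with due date 4), while a job finishing exactly at its due date is on time.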

3. Problem Statement

There are two uniform parallel machines M_q (q = 1, 2) whose processing speeds are s_1 = 1 and s_2 = 1/α, where α is a positive integer. There are two customers A and B, and each customer has a set of jobs J^x = {J_1^x, J_2^x, …, J_{n_x}^x}, x ∈ {A, B}; the two sets are disjoint. A job J_j^x of customer x may be processed on M_1 (or M_2) without preemption, or it may be outsourced. If J_j^x is outsourced, the producer pays an outsourcing cost e_j^x. If J_j^x is not outsourced, its actual processing time on machine M_q is p_j^x / s_q (q = 1, 2). All jobs are available at time zero. The jobs processed on the machines are evaluated by an objective function F^x. The goal of scheduling is to find a feasible sequence that minimizes F^A + e(R^A) subject to the constraint F^B + e(R^B) ≤ K. Following Graham et al. [23], our problem is written as Q2 | reject, F^B + e(R^B) ≤ K | F^A + e(R^A).
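Since s_1 = 1 and s_2 = 1/α, a job of size p takes p time units on M_1 and αp time units on M_2. A tiny helper (illustrative only; the function name is ours) makes this explicit:

```python
def actual_time(p, q, alpha):
    """Actual processing time of a job with processing requirement p on
    machine M_q, where s_1 = 1 and s_2 = 1/alpha (alpha a positive integer)."""
    return p if q == 1 else alpha * p

# A job with p = 6 and alpha = 2:
assert actual_time(6, 1, 2) == 6    # on M_1
assert actual_time(6, 2, 2) == 12   # on the slower machine M_2
```

This is why the recursions below charge a completion time of T_1 on M_1 but α T_2 on M_2 for the same load.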

4. Problem Q2 | reject, L_max^B + e(R^B) ≤ K | max w_j^A C_j^A + e(R^A)

We use three job orders in the following discussion and first define them, where x ∈ {A, B}.
Definition 1.
GW order: jobs are processed in non-increasing order of w_j^x.
Definition 2.
SPT order: jobs are processed in non-decreasing order of p_j^x.
Definition 3.
EDD order: jobs are processed in non-decreasing order of d_j^x.
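The three orders are plain sorts on a single key. A quick sketch in Python (the job records are made-up examples; only the sort keys come from the definitions above):

```python
# Each hypothetical job carries a weight w, a processing time p, and a due date d.
jobs = [{"w": 3, "p": 2, "d": 7}, {"w": 5, "p": 4, "d": 3}, {"w": 3, "p": 1, "d": 5}]

gw  = sorted(jobs, key=lambda j: -j["w"])   # GW: non-increasing w_j
spt = sorted(jobs, key=lambda j: j["p"])    # SPT: non-decreasing p_j
edd = sorted(jobs, key=lambda j: j["d"])    # EDD: non-decreasing d_j

assert [j["w"] for j in gw] == [5, 3, 3]
assert [j["p"] for j in spt] == [1, 2, 4]
assert [j["d"] for j in edd] == [3, 5, 7]
```

Python's sort is stable, so jobs with equal keys keep their relative order, which is all the lemmas below require of ties.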
First, for problem Q2 | reject, L_max^B + e(R^B) ≤ K | max w_j^A C_j^A + e(R^A), we have
Lemma 1.
For problem Q2 | reject, L_max^B + e(R^B) ≤ K | max w_j^A C_j^A + e(R^A), there is an optimal order π with the following two properties.
(1) The jobs in A^A(π) are processed in the GW order on every machine.
(2) The jobs in A^B(π) are processed in the EDD order on every machine.
Proof. 
Suppose that π is an optimal sequence. If property (1) does not hold for π, then there must be two jobs J_i^A, J_j^A ∈ A^A(π) on some machine M_k (k = 1, 2) such that J_i^A is scheduled before J_j^A but w_i^A < w_j^A. We obtain a schedule π′ by moving job J_i^A to directly after job J_j^A. Then the completion times of job J_j^A and of the jobs scheduled between J_i^A and J_j^A decrease in π′, the completion time of job J_i^A becomes C_j^A(π), and the completion times of the remaining jobs do not change. Since w_i^A < w_j^A, we have max w_h^A C_h^A(π′) ≤ max w_h^A C_h^A(π) over all jobs J_h^A. By continuing in this way, we obtain an optimal sequence with property (1).
Property (2) can be proved in a similar manner as property (1). □
By Lemma 1, we re-label all jobs of the two customers so that w_1^A ≥ w_2^A ≥ ⋯ ≥ w_{n_A}^A and d_1^B ≤ d_2^B ≤ ⋯ ≤ d_{n_B}^B.
Theorem 1.
Problem Q2 | reject, L_max^B + e(R^B) ≤ K | max w_j^A C_j^A + e(R^A) can be solved in O(n_A n_B P² K² E_A) time.
Proof. 
Suppose that J_1^A, …, J_X^A, J_1^B, …, J_Y^B (1 ≤ X ≤ n_A, 1 ≤ Y ≤ n_B) are the jobs of the two customers currently under consideration. These jobs are classified into the following sets, where x ∈ {A, B} and q = 1, 2.
  • A_q(X, Y): the set of jobs of the two customers processed on machine M_q in the current sequence.
  • A_q^B(X, Y): the set of jobs of customer B processed on machine M_q in the current sequence.
  • R^A(X, Y): the set of jobs of customer A outsourced among J_1^A, …, J_X^A in the current sequence.
  • R^B(X, Y): the set of jobs of customer B outsourced among J_1^B, …, J_Y^B in the current sequence.
Let π(X−1, Y) be the optimal sequence for J_1^A, …, J_{X−1}^A, J_1^B, …, J_Y^B, and let π(X, Y−1) be the optimal sequence for J_1^A, …, J_X^A, J_1^B, …, J_{Y−1}^B.
We denote by G(X, Y, T_1, T_2, L, e_A, e_B) the minimum scheduling objective value of customer A satisfying the following conditions:
(a) the jobs currently under consideration in the current sequence are J_1^A, …, J_X^A, J_1^B, …, J_Y^B;
(b) T_q = Σ_{J_l^x ∈ A_q(X, Y)} p_l^x, 1 ≤ l ≤ max{X, Y};
(c) L = max{L_k^B : J_k^B ∈ A_1^B(X, Y) ∪ A_2^B(X, Y)}, 1 ≤ k ≤ Y;
(d) e_A = Σ_{J_r^A ∈ R^A(X, Y)} e_r^A and e_B = Σ_{J_s^B ∈ R^B(X, Y)} e_s^B, 1 ≤ r ≤ X, 1 ≤ s ≤ Y.
Now, for an optimal sequence for J_1^A, …, J_X^A, J_1^B, …, J_Y^B, there are six cases.
Case 1. J_X^A is outsourced.
Since J_X^A is outsourced, in π(X−1, Y) we have Σ_{J_l^x ∈ A_q(X−1, Y)} p_l^x = T_q and L = max{L_k^B : J_k^B ∈ A_1^B(X−1, Y) ∪ A_2^B(X−1, Y)}. Moreover, Σ_{J_r^A ∈ R^A(X−1, Y)} e_r^A = e_A − e_X^A and Σ_{J_s^B ∈ R^B(X−1, Y)} e_s^B = e_B. Therefore, G(X, Y, T_1, T_2, L, e_A, e_B) = G(X−1, Y, T_1, T_2, L, e_A − e_X^A, e_B) + e_X^A.
Case 2. Machine M_1 processes J_X^A last.
Then, in π(X−1, Y), Σ_{J_l^x ∈ A_1(X−1, Y)} p_l^x = T_1 − p_X^A, Σ_{J_l^x ∈ A_2(X−1, Y)} p_l^x = T_2, L = max{L_k^B : J_k^B ∈ A_1^B(X−1, Y) ∪ A_2^B(X−1, Y)}, Σ_{J_r^A ∈ R^A(X−1, Y)} e_r^A = e_A, and Σ_{J_s^B ∈ R^B(X−1, Y)} e_s^B = e_B. Thus, G(X, Y, T_1, T_2, L, e_A, e_B) = max{G(X−1, Y, T_1 − p_X^A, T_2, L, e_A, e_B) − e_A, w_X^A T_1} + e_A.
Case 3. Machine M_2 processes J_X^A last.
Similarly to Case 2, for π(X−1, Y) we have Σ_{J_l^x ∈ A_1(X−1, Y)} p_l^x = T_1, Σ_{J_l^x ∈ A_2(X−1, Y)} p_l^x = T_2 − p_X^A, L = max{L_k^B : J_k^B ∈ A_1^B(X−1, Y) ∪ A_2^B(X−1, Y)}, Σ_{J_r^A ∈ R^A(X−1, Y)} e_r^A = e_A, and Σ_{J_s^B ∈ R^B(X−1, Y)} e_s^B = e_B. So G(X, Y, T_1, T_2, L, e_A, e_B) = max{G(X−1, Y, T_1, T_2 − p_X^A, L, e_A, e_B) − e_A, α w_X^A T_2} + e_A.
Case 4. J_Y^B is outsourced.
Then, for π(X, Y−1), Σ_{J_l^x ∈ A_q(X, Y−1)} p_l^x = T_q, L = max{L_k^B : J_k^B ∈ A_1^B(X, Y−1) ∪ A_2^B(X, Y−1)}, Σ_{J_r^A ∈ R^A(X, Y−1)} e_r^A = e_A, and Σ_{J_s^B ∈ R^B(X, Y−1)} e_s^B = e_B − e_Y^B. We have G(X, Y, T_1, T_2, L, e_A, e_B) = G(X, Y−1, T_1, T_2, L, e_A, e_B − e_Y^B), where L + e_B ≤ K.
Case 5. Machine M_1 processes J_Y^B last.
Consider π(X, Y−1); then Σ_{J_l^x ∈ A_1(X, Y−1)} p_l^x = T_1 − p_Y^B, Σ_{J_l^x ∈ A_2(X, Y−1)} p_l^x = T_2, and L_1 = max{L_k^B : J_k^B ∈ A_1^B(X, Y−1) ∪ A_2^B(X, Y−1)}, where L = max{L_1, T_1 − d_Y^B}. Moreover, Σ_{J_r^A ∈ R^A(X, Y−1)} e_r^A = e_A and Σ_{J_s^B ∈ R^B(X, Y−1)} e_s^B = e_B. Then G(X, Y, T_1, T_2, L, e_A, e_B) = min_{0 ≤ L_1 ≤ L} G(X, Y−1, T_1 − p_Y^B, T_2, L_1, e_A, e_B) if T_1 − d_Y^B = L, and G(X, Y, T_1, T_2, L, e_A, e_B) = G(X, Y−1, T_1 − p_Y^B, T_2, L, e_A, e_B) if T_1 − d_Y^B < L, where L + e_B ≤ K.
Case 6. Machine M_2 processes J_Y^B last.
Similarly to Case 5, G(X, Y, T_1, T_2, L, e_A, e_B) = min_{0 ≤ L_2 ≤ L} G(X, Y−1, T_1, T_2 − p_Y^B, L_2, e_A, e_B) if α T_2 − d_Y^B = L, and G(X, Y, T_1, T_2, L, e_A, e_B) = G(X, Y−1, T_1, T_2 − p_Y^B, L, e_A, e_B) if α T_2 − d_Y^B < L, where L + e_B ≤ K.
If we summarize the six cases, we obtain
Algorithm H1
The initial values:
G(0, 0, T_1, T_2, L, e_A, e_B) = 0 if (T_1, T_2, L, e_A, e_B) = (0, 0, 0, 0, 0), and +∞ otherwise.
The recursive equation:
G(X, Y, T_1, T_2, L, e_A, e_B) = min of the following:
  G(X−1, Y, T_1, T_2, L, e_A − e_X^A, e_B) + e_X^A;
  max{G(X−1, Y, T_1 − p_X^A, T_2, L, e_A, e_B) − e_A, w_X^A T_1} + e_A;
  max{G(X−1, Y, T_1, T_2 − p_X^A, L, e_A, e_B) − e_A, α w_X^A T_2} + e_A;
  G(X, Y−1, T_1, T_2, L, e_A, e_B − e_Y^B) if L + e_B ≤ K;
  min_{0 ≤ L_1 ≤ L} G(X, Y−1, T_1 − p_Y^B, T_2, L_1, e_A, e_B) if T_1 − d_Y^B = L and L + e_B ≤ K;
  G(X, Y−1, T_1 − p_Y^B, T_2, L, e_A, e_B) if T_1 − d_Y^B < L and L + e_B ≤ K;
  min_{0 ≤ L_2 ≤ L} G(X, Y−1, T_1, T_2 − p_Y^B, L_2, e_A, e_B) if α T_2 − d_Y^B = L and L + e_B ≤ K;
  G(X, Y−1, T_1, T_2 − p_Y^B, L, e_A, e_B) if α T_2 − d_Y^B < L and L + e_B ≤ K.
The optimal value:
min{G(n_A, n_B, T_1, T_2, L, e_A, e_B) : 0 ≤ T_1, T_2 ≤ P, 0 ≤ e_A ≤ E_A, 0 ≤ L + e_B ≤ K}.
Algorithm H1 has at most O(n_A n_B P² K² E_A) states. If L_Y^B = T_1 − d_Y^B = L (when J_Y^B is processed on machine M_1) or L_Y^B = α T_2 − d_Y^B = L (when J_Y^B is processed on machine M_2), the number of states is at most O(n_A n_B P² K E_A) and every iteration needs O(K) time; otherwise, every iteration needs constant time. Thus, the theorem holds. □
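The recursion above can also be stated forwards, which makes it easy to prototype. The sketch below is our own reformulation, not the paper's pseudocode: for each prefix, the last considered job is either outsourced or appended to M_1 or M_2, and the value stored per state is the maximum weighted completion time accumulated so far (the objective adds e_A at the end). The lateness of an empty B-schedule is taken to be 0, an assumption consistent with the range 0 ≤ L + e_B ≤ K used for the optimal value; all function and variable names are ours.

```python
def solve_H1(jobs_A, jobs_B, alpha, K):
    """Forward-DP sketch of Algorithm H1 (our reformulation; names are ours).

    jobs_A: list of (w, p, e) in GW order; jobs_B: list of (p, d, e) in EDD order.
    Machine speeds are s_1 = 1 and s_2 = 1/alpha.
    Minimizes max w_j^A C_j^A + e(R^A) subject to L_max^B + e(R^B) <= K,
    taking the lateness of an empty B-schedule to be 0 (an assumption).
    """
    INF = float("inf")
    nA, nB = len(jobs_A), len(jobs_B)
    # G[(X, Y)] maps a state (T1, T2, L, eA, eB) to the smallest
    # max-weighted completion time of the A-jobs scheduled so far.
    G = {(0, 0): {(0, 0, 0, 0, 0): 0}}
    for X in range(nA + 1):
        for Y in range(nB + 1):
            if X == 0 and Y == 0:
                continue
            best = {}

            def relax(state, val):
                if val < best.get(state, INF):
                    best[state] = val

            if X > 0:  # J_X^A is the last job of the prefix considered
                w, p, e = jobs_A[X - 1]
                for (T1, T2, L, eA, eB), m in G[(X - 1, Y)].items():
                    relax((T1, T2, L, eA + e, eB), m)                             # outsourced
                    relax((T1 + p, T2, L, eA, eB), max(m, w * (T1 + p)))          # on M_1
                    relax((T1, T2 + p, L, eA, eB), max(m, alpha * w * (T2 + p)))  # on M_2
            if Y > 0:  # J_Y^B is the last job of the prefix considered
                p, d, e = jobs_B[Y - 1]
                for (T1, T2, L, eA, eB), m in G[(X, Y - 1)].items():
                    if L + eB + e <= K:                                # outsourced
                        relax((T1, T2, L, eA, eB + e), m)
                    L1 = max(L, T1 + p - d)
                    if L1 + eB <= K:                                   # on M_1
                        relax((T1 + p, T2, L1, eA, eB), m)
                    L2 = max(L, alpha * (T2 + p) - d)
                    if L2 + eB <= K:                                   # on M_2
                        relax((T1, T2 + p, L2, eA, eB), m)
            G[(X, Y)] = best
    return min(m + eA for (_, _, _, eA, _), m in G[(nA, nB)].items())

# One job per customer: A has (w, p, e) = (1, 2, 4); B has (p, d, e) = (1, 1, 3).
assert solve_H1([(1, 2, 4)], [(1, 1, 3)], alpha=2, K=2) == 2
assert solve_H1([(1, 2, 4)], [(1, 1, 3)], alpha=2, K=0) == 3
```

The state components satisfy T_1, T_2 ≤ P, e_A ≤ E_A, and 0 ≤ L + e_B ≤ K, matching the O(n_A n_B P² K² E_A) state bound of Theorem 1 up to constant factors.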
Suppose that w_X^A = 1 for all jobs of customer A and d_Y^B = 0 for all jobs of customer B. Then the above problem becomes Q2 | reject, C_max^B + e(R^B) ≤ K | C_max^A + e(R^A), and the variable L can be omitted. Thus, we have
Corollary 1.
Problem Q2 | reject, C_max^B + e(R^B) ≤ K | C_max^A + e(R^A) can be solved in O(n_A n_B P² K E_A) time.
Using a similar method, one can solve problems Q2 | reject, max w_j^B C_j^B + e(R^B) ≤ K | max w_j^A C_j^A + e(R^A) and Q2 | reject, L_max^B + e(R^B) ≤ K | L_max^A + e(R^A). We only state the following two theorems.
Theorem 2.
Problem Q2 | reject, max w_j^B C_j^B + e(R^B) ≤ K | max w_j^A C_j^A + e(R^A) can be solved in O(n_A n_B P² K² E_A) time.
Theorem 3.
Problem Q2 | reject, L_max^B + e(R^B) ≤ K | L_max^A + e(R^A) can be solved in O(n_A n_B P² K² E_A) time.

5. Problem Q2 | reject, L_max^B + e(R^B) ≤ K | Σ_{J_j^A ∈ A^A(π)} C_j^A + e(R^A)

For problem Q2 | reject, L_max^B + e(R^B) ≤ K | Σ_{J_j^A ∈ A^A(π)} C_j^A + e(R^A), we first present a lemma. Since it is easy to prove, we omit the proof.
Lemma 2.
For problem Q2 | reject, L_max^B + e(R^B) ≤ K | Σ_{J_j^A ∈ A^A(π)} C_j^A + e(R^A), there exists an optimal sequence π in which the jobs in A^A(π) are processed in the SPT order and the jobs in A^B(π) are processed in the EDD order.
Then, by Lemma 2, we re-number all jobs so that p_1^A ≤ p_2^A ≤ ⋯ ≤ p_{n_A}^A and d_1^B ≤ d_2^B ≤ ⋯ ≤ d_{n_B}^B.
Theorem 4.
Problem Q2 | reject, L_max^B + e(R^B) ≤ K | Σ_{J_j^A ∈ A^A(π)} C_j^A + e(R^A) can be solved in O(n_A n_B P² K²) time.
Proof. 
Suppose that J_1^A, …, J_X^A, J_1^B, …, J_Y^B (1 ≤ X ≤ n_A, 1 ≤ Y ≤ n_B) are the jobs of the two customers currently under consideration in the current sequence. We classify these jobs into the sets A_q(X, Y), A_q^B(X, Y), and R^B(X, Y); the definitions of these sets are the same as those in Theorem 1. Let π(X−1, Y) be the optimal sequence for J_1^A, …, J_{X−1}^A, J_1^B, …, J_Y^B, and let π(X, Y−1) be the optimal sequence for J_1^A, …, J_X^A, J_1^B, …, J_{Y−1}^B.
Let G(X, Y, T_1, T_2, L, e_B) denote the minimum scheduling objective value of customer A satisfying the following conditions:
(a) the jobs currently under consideration in the current sequence are J_1^A, …, J_X^A, J_1^B, …, J_Y^B;
(b) T_q = Σ_{J_l^x ∈ A_q(X, Y)} p_l^x, 1 ≤ l ≤ max{X, Y};
(c) L = max{L_k^B : J_k^B ∈ A_1^B(X, Y) ∪ A_2^B(X, Y)}, 1 ≤ k ≤ Y;
(d) Σ_{J_s^B ∈ R^B(X, Y)} e_s^B = e_B, 1 ≤ s ≤ Y.
Furthermore, we distinguish six cases as follows.
Case 1. J_X^A is outsourced. Then, for π(X−1, Y), the values T_q, L, and e_B do not change, and J_X^A contributes e_X^A to G(X, Y, T_1, T_2, L, e_B). Thus, G(X, Y, T_1, T_2, L, e_B) = G(X−1, Y, T_1, T_2, L, e_B) + e_X^A.
Case 2. Machine M_1 processes J_X^A last. Then, in π(X−1, Y), Σ_{J_l^x ∈ A_1(X−1, Y)} p_l^x = T_1 − p_X^A, Σ_{J_l^x ∈ A_2(X−1, Y)} p_l^x = T_2, L = max{L_k^B : J_k^B ∈ A_1^B(X−1, Y) ∪ A_2^B(X−1, Y)}, and Σ_{J_s^B ∈ R^B(X−1, Y)} e_s^B = e_B. J_X^A contributes T_1 to G(X, Y, T_1, T_2, L, e_B). Therefore, G(X, Y, T_1, T_2, L, e_B) = G(X−1, Y, T_1 − p_X^A, T_2, L, e_B) + T_1.
Case 3. Machine M_2 processes J_X^A last. Then, in π(X−1, Y), Σ_{J_l^x ∈ A_2(X−1, Y)} p_l^x = T_2 − p_X^A, x ∈ {A, B}; the other values do not change. J_X^A contributes α T_2 to G(X, Y, T_1, T_2, L, e_B). Therefore, G(X, Y, T_1, T_2, L, e_B) = G(X−1, Y, T_1, T_2 − p_X^A, L, e_B) + α T_2.
Case 4. J_Y^B is outsourced. Then, in π(X, Y−1), Σ_{J_s^B ∈ R^B(X, Y−1)} e_s^B = e_B − e_Y^B; the other values do not change. Therefore, G(X, Y, T_1, T_2, L, e_B) = G(X, Y−1, T_1, T_2, L, e_B − e_Y^B).
Case 5. Machine M_1 processes J_Y^B last. Then, in π(X, Y−1), Σ_{J_l^x ∈ A_1(X, Y−1)} p_l^x = T_1 − p_Y^B, Σ_{J_l^x ∈ A_2(X, Y−1)} p_l^x = T_2, x ∈ {A, B}, Σ_{J_s^B ∈ R^B(X, Y−1)} e_s^B = e_B, and L_1 = max{L_k^B : J_k^B ∈ A_1^B(X, Y−1) ∪ A_2^B(X, Y−1)}, where L = max{L_1, T_1 − d_Y^B}. Thus, G(X, Y, T_1, T_2, L, e_B) = min_{0 ≤ L_1 ≤ L} G(X, Y−1, T_1 − p_Y^B, T_2, L_1, e_B) if T_1 − d_Y^B = L, and G(X, Y, T_1, T_2, L, e_B) = G(X, Y−1, T_1 − p_Y^B, T_2, L, e_B) if T_1 − d_Y^B < L, where T_1 − d_Y^B ≤ K.
Case 6. Machine M_2 processes J_Y^B last. Similarly to Case 5, G(X, Y, T_1, T_2, L, e_B) = min_{0 ≤ L_2 ≤ L} G(X, Y−1, T_1, T_2 − p_Y^B, L_2, e_B) if α T_2 − d_Y^B = L, and G(X, Y, T_1, T_2, L, e_B) = G(X, Y−1, T_1, T_2 − p_Y^B, L, e_B) if α T_2 − d_Y^B < L, where α T_2 − d_Y^B ≤ K.
If we summarize the six cases, we have
Algorithm H2
The initial values:
G(0, 0, T_1, T_2, L, e_B) = 0 if (T_1, T_2, L, e_B) = (0, 0, 0, 0), and +∞ otherwise.
The recursive equation:
G(X, Y, T_1, T_2, L, e_B) = min of the following:
  G(X−1, Y, T_1, T_2, L, e_B) + e_X^A;
  G(X−1, Y, T_1 − p_X^A, T_2, L, e_B) + T_1;
  G(X−1, Y, T_1, T_2 − p_X^A, L, e_B) + α T_2;
  G(X, Y−1, T_1, T_2, L, e_B − e_Y^B) if L + e_B ≤ K;
  min_{0 ≤ L_1 ≤ L} G(X, Y−1, T_1 − p_Y^B, T_2, L_1, e_B) if T_1 − d_Y^B = L and L + e_B ≤ K;
  G(X, Y−1, T_1 − p_Y^B, T_2, L, e_B) if T_1 − d_Y^B < L and L + e_B ≤ K;
  min_{0 ≤ L_2 ≤ L} G(X, Y−1, T_1, T_2 − p_Y^B, L_2, e_B) if α T_2 − d_Y^B = L and L + e_B ≤ K;
  G(X, Y−1, T_1, T_2 − p_Y^B, L, e_B) if α T_2 − d_Y^B < L and L + e_B ≤ K.
The optimal value can be obtained from
min{G(n_A, n_B, T_1, T_2, L, e_B) : 0 ≤ T_1, T_2 ≤ P, 0 ≤ L + e_B ≤ K}.
Algorithm H2 has at most O(n_A n_B P² K²) states. If L_Y^B = T_1 − d_Y^B = L (when J_Y^B is processed on machine M_1) or L_Y^B = α T_2 − d_Y^B = L (when J_Y^B is processed on machine M_2), then it has at most O(n_A n_B P² K) states and each iteration takes O(K) time; otherwise, each iteration takes constant time. Therefore, the theorem holds. □
If d_Y^B = 0 for all jobs of customer B, then the problem becomes Q2 | reject, C_max^B + e(R^B) ≤ K | Σ_{J_j^A ∈ A^A} C_j^A + e(R^A). Thus,
Corollary 2.
Problem Q2 | reject, C_max^B + e(R^B) ≤ K | Σ_{J_j^A ∈ A^A} C_j^A + e(R^A) can be solved in O(n_A n_B P² K) time.
Similarly, for problem Q2 | reject, Σ_{J_j^B ∈ A^B} C_j^B + e(R^B) ≤ K | Σ_{J_j^A ∈ A^A} C_j^A + e(R^A), we have
Theorem 5.
Problem Q2 | reject, Σ_{J_j^B ∈ A^B} C_j^B + e(R^B) ≤ K | Σ_{J_j^A ∈ A^A} C_j^A + e(R^A) can be solved in O(n_A n_B P² K²) time.

6. Problem Q2 | reject, Σ w_j^B U_j^B ≤ K | Σ_{J_j^A ∈ A^A} C_j^A + e(R^A)

First, problem Q2 | reject, Σ w_j^B U_j^B ≤ K | Σ_{J_j^A ∈ A^A} C_j^A + e(R^A) has properties similar to Lemmas 1 and 2; we thus omit the proof.
Lemma 3.
Problem Q2 | reject, Σ w_j^B U_j^B ≤ K | Σ_{J_j^A ∈ A^A} C_j^A + e(R^A) has an optimal sequence π in which the jobs in A^A(π) are processed in the SPT order and the early jobs in A^B(π) are sequenced in the EDD order (that is, the late jobs of customer B can be assumed to be outsourced).
According to this lemma, we re-label all jobs so that p_1^A ≤ p_2^A ≤ ⋯ ≤ p_{n_A}^A and d_1^B ≤ d_2^B ≤ ⋯ ≤ d_{n_B}^B.
Theorem 6.
Time bound for solving Q 2 | r e j e c t , w j B U j B K | J j A A A ( π ) C j A + e ( R A ) is O ( n A n B
P 2 K ) .
Proof. 
Suppose that J_1^A, …, J_X^A, J_1^B, …, J_Y^B are the jobs of the two customers currently under consideration. The definitions of the sets A_q(X, Y) (q = 1, 2) and R^B(X, Y) are the same as above, and the definitions of π(X−1, Y) and π(X, Y−1) are also the same as above.
Let G(X, Y, T_1, T_2, W) denote the minimum scheduling objective value of customer A satisfying the following conditions:
(a) the jobs currently under consideration in the current sequence are J_1^A, …, J_X^A, J_1^B, …, J_Y^B;
(b) Σ_{J_l^x ∈ A_q(X, Y)} p_l^x = T_q, 1 ≤ l ≤ max{X, Y};
(c) Σ_{J_s^B ∈ R^B(X, Y)} w_s^B U_s^B = W, 1 ≤ s ≤ Y.
Six cases still need to be considered.
Case 1. J X A is outsourced. Thus, in π ( X 1 , Y ) , J l x A q ( X 1 , Y ) p l x = T q . J s B R B ( X 1 , Y ) w s B U s B = W . J X A contributes e X A to G ( X , Y , T 1 , T 2 , W ) . Thus, G ( X , Y , T 1 , T 2 , W ) = G ( X 1 , Y , T 1 , T 2 , W ) + e X A .
Case 2. Machine M 1 finally processes J X A . Then, in π ( X 1 , Y ) , J l x A 1 ( X 1 , Y ) p l x = T 1 p X A , and J l x A 2 ( X 1 , Y ) p l x = T 2 . J s B R B ( X 1 , Y ) w s B U s B = W . J X A contributes T 1 to G ( X , Y , T 1 , T 2 , W ) . Therefore, G ( X , Y , T 1 , T 2 , W ) = G ( X 1 , Y , T 1 p X A , T 2 , W ) + T 1 .
Case 3. $M_2$ finally processes $J_X^A$. Then, in $\pi(X-1,Y)$, $\sum_{J_{l_x} \in A_2(X-1,Y)} p_{l_x} = T_2 - p_X^A$; the other values do not change. $J_X^A$ contributes $\alpha T_2$ to $G(X,Y,T_1,T_2,W)$. Thus, $G(X,Y,T_1,T_2,W) = G(X-1,Y,T_1,T_2 - p_X^A,W) + \alpha T_2$.
Case 4. $M_1$ finally processes $J_Y^B$. Then, in $\pi(X,Y-1)$, $\sum_{J_{l_x} \in A_1(X,Y-1)} p_{l_x} = T_1 - p_Y^B$; the other values do not change. Thus, $G(X,Y,T_1,T_2,W) = G(X,Y-1,T_1 - p_Y^B,T_2,W)$, where $T_1 \le d_Y^B$.
Case 5. $M_2$ finally processes $J_Y^B$. Similarly to Case 4, $G(X,Y,T_1,T_2,W) = G(X,Y-1,T_1,T_2 - p_Y^B,W)$, where $\alpha T_2 \le d_Y^B$.
Case 6. $J_Y^B$ is late. Then, in $\pi(X,Y-1)$, $\sum_{J_{l_x} \in A_q(X,Y-1)} p_{l_x} = T_q$ $(q = 1, 2)$ and $\sum_{J_s^B \in R^B(X,Y-1)} w_s^B U_s^B = W - w_Y^B$. Therefore, $G(X,Y,T_1,T_2,W) = G(X,Y-1,T_1,T_2,W - w_Y^B)$, where $W \le K$.
Summarizing the six cases, we obtain the following algorithm.
Algorithm H 3 :
The initial values:
$$G(0,0,T_1,T_2,W) = \begin{cases} 0, & \text{if } (T_1,T_2,W) = (0,0,0), \\ +\infty, & \text{otherwise.} \end{cases}$$
The recursive equation:
$$G(X,Y,T_1,T_2,W) = \min \begin{cases} G(X-1,Y,T_1,T_2,W) + e_X^A; \\ G(X-1,Y,T_1 - p_X^A,T_2,W) + T_1; \\ G(X-1,Y,T_1,T_2 - p_X^A,W) + \alpha T_2; \\ G(X,Y-1,T_1 - p_Y^B,T_2,W), & \text{if } T_1 \le d_Y^B; \\ G(X,Y-1,T_1,T_2 - p_Y^B,W), & \text{if } \alpha T_2 \le d_Y^B; \\ G(X,Y-1,T_1,T_2,W - w_Y^B), & \text{if } W \le K. \end{cases}$$
The optimal value can be obtained from
$$\min\{G(n_A,n_B,T_1,T_2,W) : 0 \le T_1, T_2 \le P,\ 0 \le W \le K\}.$$
The maximum number of states in the recursive equation is $O(n_A n_B P^2 K)$, and each iteration takes constant time. Hence, the theorem holds. □
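To make the recursion concrete, here is a minimal Python sketch of Algorithm H3, written in forward form rather than the paper's backward recursion. The parameter names, the dictionary of reachable states, and the reading of $\alpha$ as the slowdown factor of machine $M_2$ (so a load of $T_2$ on $M_2$ finishes at time $\alpha T_2$) are our own assumptions; jobs are assumed to be given already in SPT/EDD order, as Lemma 3 requires.

```python
def algorithm_h3(p_A, e_A, p_B, d_B, w_B, alpha, K):
    """Forward DP sketch for Q2 | reject, sum w_j^B U_j^B <= K |
    sum C_j^A + e(R^A), under our own conventions: A-job i has
    processing time p_A[i] and outsourcing cost e_A[i] (SPT order);
    B-job j has processing time p_B[j], due date d_B[j], weight
    w_B[j] (EDD order)."""
    nA, nB = len(p_A), len(p_B)
    INF = float("inf")

    # G[X][Y] maps a state (T1, T2, W) to the minimum objective value of
    # customer A over the first X jobs of A and the first Y jobs of B.
    G = [[{} for _ in range(nB + 1)] for _ in range(nA + 1)]
    G[0][0][(0, 0, 0)] = 0                       # initial condition

    def relax(table, key, val):                  # keep the minimum per state
        if val < table.get(key, INF):
            table[key] = val

    for X in range(nA + 1):
        for Y in range(nB + 1):
            for (T1, T2, W), g in G[X][Y].items():
                if X < nA:
                    p, e = p_A[X], e_A[X]
                    # Case 1: outsource J_{X+1}^A at cost e.
                    relax(G[X + 1][Y], (T1, T2, W), g + e)
                    # Case 2: last on M1; it completes at T1 + p.
                    relax(G[X + 1][Y], (T1 + p, T2, W), g + T1 + p)
                    # Case 3: last on M2; it completes at alpha * (T2 + p).
                    relax(G[X + 1][Y], (T1, T2 + p, W), g + alpha * (T2 + p))
                if Y < nB:
                    p, d, w = p_B[Y], d_B[Y], w_B[Y]
                    # Case 4: J_{Y+1}^B early on M1 (must meet its due date).
                    if T1 + p <= d:
                        relax(G[X][Y + 1], (T1 + p, T2, W), g)
                    # Case 5: J_{Y+1}^B early on M2.
                    if alpha * (T2 + p) <= d:
                        relax(G[X][Y + 1], (T1, T2 + p, W), g)
                    # Case 6: J_{Y+1}^B late, hence outsourced; budget <= K.
                    if W + w <= K:
                        relax(G[X][Y + 1], (T1, T2, W + w), g)

    # Optimal value: minimum over all complete states (inf if infeasible).
    return min(G[nA][nB].values(), default=INF)
```

Since only reachable states are stored, the tables hold at most the $O(n_A n_B P^2 K)$ states counted in the proof, and the optimal value is read off from the states with $X = n_A$ and $Y = n_B$.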

7. Conclusions

This paper studied two-customer scheduling with outsourcing on two uniform machines. The goal is to find a feasible sequence that minimizes the criterion $F^A + e(R^A)$ of customer A subject to the criterion of customer B satisfying $F^B + e(R^B) \le K$. Since the corresponding single-machine problem was proved to be NP-hard for different combinations of $F^A$ and $F^B$, all the problems considered here are NP-hard. The exact complexity status of these problems on two uniform machines had remained open; in this paper, we present exact algorithms for $F \in \{C_{\max}, L_{\max}, \max w_j C_j, \sum C_j, \sum w_j U_j\}$ for the first time. For further research, one could design efficient approximation algorithms for these problems or consider other machine environments.

Author Contributions

Conceptualization, Q.F. and S.L.; methodology, Q.F. and S.L.; formal analysis, Q.F.; investigation, Q.F. and S.L.; resources, Q.F. and S.L.; writing—original draft preparation, Q.F. and S.L.; writing—review and editing, Q.F.; supervision, Q.F.; project administration, Q.F.; funding acquisition, Q.F. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 11701595), IRTSTHN (21IRTSTHN013), and the Key Research Projects of Henan Higher Education Institutions (No. 22B520054). It was also supported by the Research Team Development Foundation of Zhongyuan University of Technology (No. K2021TD004).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We are grateful to the editors and the anonymous referees for their helpful comments on an earlier version of this paper.

Conflicts of Interest

The authors declare no conflict of interest.


Feng, Q.; Li, S. Algorithms for Multi-Customer Scheduling with Outsourcing. Mathematics 2022, 10, 1553. https://0-doi-org.brum.beds.ac.uk/10.3390/math10091553
