
JOURNAL OF INDUSTRIAL AND MANAGEMENT OPTIMIZATION
doi:10.3934/jimo.2018017

SCHEDULING FAMILY JOBS ON AN UNBOUNDED PARALLEL-BATCH
MACHINE TO MINIMIZE MAKESPAN AND MAXIMUM FLOW TIME

Zhichao Geng and Jinjiang Yuan∗


School of Mathematics and Statistics, Zhengzhou University
Zhengzhou, Henan 450001, China

(Communicated by Changzhi Wu)

Abstract. This paper investigates the scheduling of family jobs with release
dates on an unbounded parallel-batch machine. The involved objective func-
tions are makespan and maximum flow time. It was reported in the literature
that the single-criterion problem for minimizing makespan is strongly NP-hard
when the number of families is arbitrary, and is polynomially solvable when
the number of families is fixed. We first show in this paper that the single-
criterion problem for minimizing maximum flow time is also strongly NP-hard
when the number of families is arbitrary. We further show that the Pareto op-
timization problem (also called bicriteria problem) for minimizing makespan
and maximum flow time is polynomially solvable when the number of families
is fixed, by enumerating all Pareto optimal points in polynomial time. This
implies that the single-criterion problem for minimizing maximum flow time is
also polynomially solvable when the number of families is fixed.

1. Introduction.
1.1. Problem description and motivation. Suppose that n jobs J1 , J2 , · · · , Jn
from K (job) families Ff (1 ≤ f ≤ K) have to be processed without interruption
on a p-batch (parallel-batch) machine.
Each family Ff has nf jobs, denoted by Ff = {Jf,1, · · · , Jf,nf}, where n1 + · · · + nK = n. The jobs in Ff are called f-jobs.
Each job Jj (Jf,j ) is associated with a processing time pj (pf,j ) and a release date
rj (rf,j ). The p-batch machine can simultaneously process up to b jobs as a batch.
Here b stands for the capacity of a batch, and it has two versions: the bounded
capacity (b < n), for which a batch contains at most b jobs, and
the unbounded capacity (b ≥ n), for which a batch can contain any number of jobs.
For the p-batch setting, the processing time p(B) of a batch B is equal to the
maximum processing time of the jobs in the batch, i.e., p(B) = max{pj : Jj ∈ B},
and the completion time of a batch is defined to be the time point when all jobs
contained in the batch have finished processing. We assume in this paper that jobs
from different families are not permitted to be processed in a common batch (also
called ‘the families are incompatible’), and this corresponds to the actual production
situation in which products of different types cannot be processed together due

2010 Mathematics Subject Classification. Primary: 58F15, 58F17; Secondary: 53C35.


Key words and phrases. Parallel-batch, family jobs, maximum flow time, Pareto optimization.
The authors are supported by NSFC (11671368), NSFC (11571321), and NSFC (11771406).
∗ Corresponding author: Jinjiang Yuan.


to the incompatibility of the chemical properties of the respective raw materials. Hence, a batch is a set of jobs from a common family.
Table 1: The definitions of the abbreviations/notations

ERD: earliest release date first (rule)
PoP: Pareto optimal point
PoS: Pareto optimal schedule
ERD-family schedule: a schedule of family jobs defined in the first paragraph of Section 3.1
Ff: the f-th job family
f-job: a job which belongs to family Ff
f-batch: a p-batch which only includes f-jobs
Jf,j: the j-th job in family Ff
Jj: a general job
pj (pf,j) / rj (rf,j): the processing time / the release date of job Jj (Jf,j)
nf: the number of jobs in family Ff
b < n / b ≥ n: the bounded / unbounded capacity of a p-batch
p(B): the processing time of batch B
π, σ: a feasible schedule
Cj(π) (Cf,j(π)): the completion time of job Jj (Jf,j) in π
Sj(π) (Sf,j(π)): the starting time of job Jj (Jf,j) in π
CB(π): the completion time of batch B in π
SB(π): the starting time of batch B in π
Fj(π) (Ff,j(π)): the flow time of job Jj (Jf,j) in π, i.e., Fj(π) = Cj(π) − rj
Cmax(π): the maximum completion time of all jobs in π
Fmax(π): the maximum flow time of all jobs in π
Ff^(i): the set composed of the null job and the first i jobs of family Ff, i.e., {Jf,0, Jf,1, · · · , Jf,i}
(i1, · · · , iK): the instance composed of the job set F1^(i1) ∪ · · · ∪ FK^(iK)
P(i1, · · · , iK): the problem 1|β|#(Cmax, Fmax) restricted to the instance (i1, · · · , iK)
PY(i1, · · · , iK): the problem 1|β|Cmax : Fmax ≤ Y restricted to the instance (i1, · · · , iK)
PY^(f)(i1, · · · , iK): a restricted version of PY(i1, · · · , iK) with if ≥ 1 in which feasible schedules are required to end with an f-batch
CY^(f)(i1, · · · , iK): the optimal makespan of problem PY^(f)(i1, · · · , iK)
CY^(f,kf)(i1, · · · , iK): defined in equation (3)
DY^(f)(i1, · · · , iK): defined in equation (4)
XY(i1, · · · , iK): defined in equation (5), the set of family indices attaining the minimum in equation (1)
ΨY^(f)(i1, · · · , iK): defined in equation (6), the set of the f-job indices attaining the minimum in equation (2)
CY(i1, · · · , iK): the optimal makespan of problem PY(i1, · · · , iK)
Algorithm DP(Y): the proposed dynamic programming algorithm for problem 1|β|Cmax : Fmax ≤ Y
Algorithm Family-CF: the proposed algorithm for problem 1|β|#(Cmax, Fmax)

In a schedule π, we use Cj (π) (Cf,j (π)) and CB (π) to denote the completion time
of job Jj (Jf,j ) and the completion time of a batch B, respectively. If no ambiguity
can occur, Cj (π) (Cf,j (π)) and CB (π) are correspondingly abbreviated by Cj (Cf,j )
and CB . Note that all the jobs in a batch B share a common completion time CB .
For a feasible schedule π, the makespan is given by Cmax(π) = max1≤j≤n Cj(π), and
the maximum flow time is given by Fmax(π) = max1≤j≤n Fj(π), where Fj(π) = Cj(π) − rj
(Ff,j(π) = Cf,j(π) − rf,j) denotes the flow time of job Jj (Jf,j). Note that Cmax and
Fmax are not identical unless all jobs are released at time 0. In this paper, we

simultaneously consider the objective functions Cmax and Fmax. For convenience,
the abbreviations and notations commonly used in this paper are listed in Table 1.
The scheduling model studied in this paper is motivated by the following practical
production scenario, illustrated in Figure 1. A production line (a p-batch
machine) in a factory can manufacture several types (job families) of products and
accepts orders (jobs) from different customers. These orders may have different
requirements on their starting times (release dates) and different sizes
(processing times). Moreover, orders for different types of products cannot be
processed together due to the production conditions and other factors. Decision
makers therefore need to consider both the amount of resources consumed in processing
these orders and the satisfaction degree of the customers. The maximum completion time
(makespan) is a typical objective function in the literature which represents
the cost of a schedule due to resource consumption. For a customer, it is ideal
to receive the delivered order at time rf,j + pf,j (supposing that the order is Jf,j and it
can be delivered as soon as its processing finishes), yielding the greatest satisfaction.
When orders are delivered later, the satisfaction degrees of the customers
decrease. Notice that the flow time Ff,j = Cf,j − rf,j attains its minimum pf,j at
the delivery time rf,j + pf,j. Obviously, the later the actual delivery time of an
order, the larger its flow time. Thus, the maximum flow time can be used to
represent the satisfaction degree of the customers. In standard single-machine
scheduling, the schedule generated by the ERD rule is optimal for both makespan
and maximum flow time. Unfortunately, this uniformity does not hold in many
other machine environments. In that case we should consider the trade-off between
makespan and maximum flow time, which belongs to the field of Pareto optimization
research.

[Figure: orders (jobs) from customers of several product types (job families) arrive at a production line (a p-batch machine), determining resource consumption (cost) and customer satisfaction.]

Figure 1. The structure of the scheduling problem



Since Cmax and Fmax are both regular (i.e., nondecreasing in the completion
times of the jobs), each job should be processed as early as possible, and adding
artificial idle times between jobs will bring no benefits. Then a schedule π can be
given by a batch sequence (B1 , · · · , Bq ) which indicates the composition and the
processing order of the batches in π. Obviously, for a schedule π = (B1, · · · , Bq),
a batch Bi can start processing only after the batches B1, · · · , Bi−1 have finished
processing and all the jobs in Bi have been released. This implies that the starting
time SBi and completion time CBi of each batch Bi, i = 1, · · · , q, can be defined
iteratively by setting
SBi = max{CBi−1, max{rj : Jj ∈ Bi}},
CBi = SBi + p(Bi),

where B0 is a dummy batch with CB0 = 0. For convenience, for the subschedule
π ′ = (B1 , · · · , Bq−1 ) of a feasible schedule π = (B1 , · · · , Bq ), we write π ′ = π \ Bq
and π = π ′ ∪ Bq in the paper.
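To make the iterative definition concrete, the following sketch (an illustration of ours, not code from the paper) evaluates a batch sequence, returning Cmax(π) and Fmax(π):

```python
def evaluate(schedule):
    """Compute the batch start/completion times of a batch sequence,
    together with Cmax and Fmax.

    `schedule` is a list of batches; each batch is a list of jobs,
    each job a (processing_time, release_date) pair.
    """
    C_prev = 0                      # completion time of the dummy batch B0
    Cmax = Fmax = 0
    for batch in schedule:
        S = max(C_prev, max(r for _, r in batch))   # S_Bi
        C = S + max(p for p, _ in batch)            # C_Bi = S_Bi + p(Bi)
        for _, r in batch:
            Fmax = max(Fmax, C - r)                 # flow time C_Bi - r_j
        Cmax = max(Cmax, C)
        C_prev = C
    return Cmax, Fmax
```

Since both objectives are regular, evaluating a candidate batch sequence in this left-shifted way never hurts either criterion.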
The main goal of this paper is to enumerate all Pareto optimal points with regard
to criteria Cmax and Fmax (we call such a scheduling problem Pareto optimization
scheduling). From Hoogeveen [10] and T’kindt and Billaut [34], as a type of bicri-
teria (multicriteria) scheduling problem, Pareto optimization scheduling on a single
machine for minimizing two regular objective functions f and g, can be formulated
in the following way.
Pareto Optimization: A feasible schedule π is called Pareto optimal, if there
exists no feasible schedule σ such that f (σ) ≤ f (π), g(σ) ≤ g(π), and at least one
of the two inequalities strictly holds. In this case, the objective vector (f (π), g(π))
is called a PoP corresponding to π. Pareto optimization scheduling aims at enu-
merating all PoPs and, for each PoP, finding a corresponding PoS. Following the
notation of T’kindt and Billaut [34], Pareto optimization scheduling problem on
a single machine to minimize two objective functions f and g can be denoted by
1|β| # (f, g), where β denotes the constraints imposed on the feasible schedules.
Related to problem 1|β| # (f, g), there are two constrained optimization problems
1|β|f : g ≤ V and 1|β|g : f ≤ U , where g ≤ V (f ≤ U ) means the restriction that,
for each feasible schedule, the value of objective function g (f ) is no more than a
given upper bound V (U ).
To find a PoP (or PoS), a commonly used approach is the ε-constraint approach (see
Hoogeveen [11], Hoogeveen and van de Velde [11, 13], and He et al. [9]), which is
based on the following lemma (see Hoogeveen [10]).

Lemma 1.1. Suppose that problem 1|β|g has feasible schedules with objective values
no more than a given value V̂, that U is the optimal value of the constrained
problem 1|β|f : g ≤ V̂, and that V is the optimal value of the constrained problem
1|β|g : f ≤ U. Then (U, V) is a PoP of problem 1|β| # (f, g), and every optimal
schedule of problem 1|β|g : f ≤ U is a PoS corresponding to (U, V).
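Over a finite pool of candidate objective vectors, the two constrained solves of Lemma 1.1 can be illustrated by brute force (a toy sketch of ours; in the real scheduling problems the two `min` computations are replaced by constrained scheduling solvers):

```python
def epsilon_constraint_pop(points, V_hat):
    """Illustrative ε-constraint step (Lemma 1.1) over a finite set
    `points` of feasible objective vectors (f, g).

    Returns the PoP (U, V) obtained from the bound V_hat, or None if
    no point satisfies g <= V_hat.
    """
    feasible = [(f, g) for f, g in points if g <= V_hat]
    if not feasible:
        return None
    U = min(f for f, g in feasible)           # solve f : g <= V_hat
    V = min(g for f, g in points if f <= U)   # solve g : f <= U
    return (U, V)
```

Sweeping V_hat downward from one PoP to just below its V-value and repeating this step is the standard way to enumerate all PoPs.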

The Pareto optimization scheduling problem studied in this paper can be denoted
by

1|p-batch, family-job, b ≥ n, rj |# (Cmax , Fmax ),



where “p-batch” means the parallel-batch setting, “family-job” means that the fam-
ilies are incompatible, and “b ≥ n” means the unbounded batch capacity. Further-
more, the related constrained problem is denoted by
1|p-batch, family-job, b ≥ n, rj |Cmax : Fmax ≤ Y.
To simplify the presentation, we use “β” to indicate the scheduling conditions
“p-batch, family-job, b ≥ n, rj” in the remainder of the paper. Then the above two
problems can be simply written as 1|β|# (Cmax, Fmax) and 1|β|Cmax : Fmax ≤ Y,
and the corresponding single-criterion problems for minimizing Cmax and Fmax are
also written as 1|β|Cmax and 1|β|Fmax .
1.2. Literature review and our contribution. Batch production arises in many
production situations. There are two main types of batch scheduling, namely
p-batch and s-batch (serial-batch). Different from p-batch, s-batch requires that
all the jobs within the same batch are processed one after another in a serial
fashion [35] and share the same completion time, which is defined as the
completion time of the last job in the batch. Ikura and Gimple [14] first studied
p-batch scheduling in which the batches have a common processing time. The
general p-batch scheduling model was introduced in Lee et al. [16] with the bounded
capacity and was motivated by the burn-in operations during the final testing stage
of circuit board manufacturing, whereas a practical production scenario of
s-batch processing stems from the aluminum-making process in an aluminum plant
[29]. Later, Brucker et al. [3] extended the research of p-batch to the unbounded
version. Cheng et al. [5] and Liu et al. [20] studied p-batch scheduling with
release dates. More results on this aspect can be found in the surveys of Potts
and Kovalyov [32] and Allahverdi et al. [1].
In recent years, some research has focused on batch scheduling problems
with deterioration and/or learning effects. Li et al. [18] investigated scheduling
problems on a p-batch machine to minimize the makespan with simple linear
deterioration of processing times and release dates, and proposed several
algorithms for both the bounded and unbounded models. Qi et al. [33] considered
several single-machine p-batch scheduling problems involving three objectives,
minimizing the maximum cost, the number of tardy jobs, and the total weighted
completion time, and devised the corresponding algorithms. Miao et al. [24] studied
p-batch scheduling of deteriorating jobs with identical release dates, and
designed an optimal algorithm and an FPTAS for the single-machine and
multiple-machine cases, respectively. Pei et al. [29] were the first to investigate
s-batch scheduling with deteriorating jobs and an independent constant setup
time in an aluminum manufacturing factory. Recently, Pei et al.
[30] continued this research by proposing another type of scheduling problem with
deteriorating jobs, in which jobs are of multiple types and, similar to the family
jobs considered in this paper, jobs of different types are forbidden to be processed
in a common s-batch. Pei et al. [28] also studied serial-batch scheduling problems
with a position-based learning effect, considering both single-machine scheduling
to minimize the maximum earliness and parallel-machine scheduling to minimize the
total number of tardy jobs. More research on batch scheduling (or on group
scheduling; the difference between them is described in Table 1 of [27]) with
deterioration and/or learning effects can be found in [2, 26, 27, 31].
P-batch scheduling with incompatible families has been studied extensively in
the literature. Yuan et al. [36] considered the scheduling problem on an unbounded

p-batch machine with family jobs and release dates to minimize makespan. Nong
et al. [25] and Li et al. [19] further considered the bounded version and parallel-
machine setting for the problem studied in Yuan et al. [36], respectively. Jolai [15],
Chakhlevitch et al. [4], and Malve et al. [23] studied some special cases where all
jobs have the same or different release dates and in each family all jobs have the
same processing times. Additionally, Liu et al. [21] and Li et al. [17] investigated
the p-batch scheduling problems with family jobs to minimize the total number of
tardy jobs.
Pareto optimization scheduling has also attracted a great deal of research
interest. Detailed developments can be found in Hoogeveen [10]. Hoogeveen [11]
showed that the Pareto optimization problem for minimizing any two maximum
cost criteria is solvable in O(n^4) time. Hoogeveen and van de Velde [12] presented
an O(n^3 log ∑pj)-time algorithm for the Pareto optimization problem on a single
machine to minimize the total completion time and a maximum cost. However, the
complexity analysis of their algorithm was pointed out by Gao and Yuan [6] to be invalid,
and they presented a new O(n^3 log ∑pj)-time algorithm. He et al. [9] first studied
the Pareto optimization problem on an unbounded p-batch machine for minimizing
makespan and maximum lateness, and presented an O(n^3)-time algorithm. Geng
and Yuan further extended this research. Concretely, for the case where the objectives
are makespan and maximum cost, and for the general version with a fixed number K of
job families where the objectives are makespan and maximum lateness, they gave an
O(n^4)-time algorithm [7] and an O(n^{2K+1})-time algorithm [8], respectively.
The work in Yuan et al. [36] is most related to the research in this paper. From
Yuan et al. [36], problem 1|β|Cmax is strongly NP-hard when the number of families
is arbitrary and is polynomially solvable when the number of families is fixed. To
the best of our knowledge, no results have been presented for problem 1|β|Fmax in
the literature, even when all jobs come from a single family.
In this paper, we first show that problem 1|β|Fmax is strongly NP-hard when the
number of families is arbitrary. Then, for the case of a fixed number of families,
we show that the Pareto optimization problem 1|β|# (Cmax, Fmax) can be solved in
O(n^{3K+3}) time; as a byproduct, problem 1|β|Fmax is also polynomially solvable.
Concretely, we first present a dynamic programming algorithm, called DP(Y), to
find an optimal schedule of the constrained problem 1|β|Cmax : Fmax ≤ Y with a
running time of O(n^{K+1}). It is not difficult to see that, by iteratively calling
algorithm DP(Y) and adopting binary search, problem 1|β|Fmax can be solved in
O(n^{K+1} log P) time, where P is the total processing time of the jobs. However,
the time complexity of such a routine is polynomial but not strongly polynomial.
Instead, we first identify a tight upper bound Y by a detailed theoretical
analysis, and then obtain a PoS of problem 1|β|Cmax : Fmax ≤ Y by an improved
form of algorithm DP(Y). On this basis, we finally provide a polynomial-time
algorithm, called Family-CF, for problem 1|β|# (Cmax, Fmax) to generate all
PoPs and the corresponding PoSs. We also show that the number of PoPs of problem
1|β|# (Cmax, Fmax) is at most O(n^{K+1}), by considering the critical batches of
the schedules generated by Family-CF.
This paper is organized as follows. In Section 2, we show that problem 1|β|Fmax
is strongly NP-hard when the number of families is arbitrary. In Section 3, we
present the algorithm DP(Y ) for problem 1|β|Cmax : Fmax ≤ Y and the algorithm

Family-CF for problem 1|β|# (Cmax , Fmax ), together with the analysis. In the final
section, we make a summary of the conclusions and future research work.

2. The NP-hardness proof. We assume in this section that the number of fam-
ilies is arbitrary, and show the strong NP-hardness of problem 1|β|Fmax by using
problem 1|β|Cmax for the reduction. It is not difficult to see that the strong
NP-hardness of problem 1|β|Cmax essentially implies that of problem 1|β|Fmax.
However, for rigorousness, we give the details of the proof here. From Yuan et al. [36], we
have the following lemma.
Lemma 2.1. Problem 1|β|Cmax is strongly NP-hard.
Theorem 2.2. Problem 1|β|Fmax is strongly NP-hard.
Proof. Suppose that I is an instance of problem 1|β|Cmax , which consists of n jobs
J1 , · · · , Jn belonging to K families. These jobs have nonnegative-integer-valued
release dates r1 , · · · , rn and positive-integer-valued processing times p1 , · · · , pn . Let
R = rmax + 1, where rmax = max1≤i≤n ri . Then Cmax (π) ≥ R for every feasible
schedule π of I.
Now we construct an instance I′ of problem 1|β|Fmax by adding a new job Jn+1
to I, where {Jn+1} forms a new family, rn+1 = R, and pn+1 = R + ∑1≤i≤n pi.
Then I′ = I ∪ {Jn+1}.
Let σ′ be an optimal schedule of problem 1|β|Fmax on instance I′. By the two-exchange
argument, we can easily verify that {Jn+1} is the last batch in σ′. Let σ be
the schedule obtained from σ′ by deleting the last batch {Jn+1}. Then σ is a feasible
schedule of problem 1|β|Cmax on instance I. Since Cmax(σ) ≥ R = rn+1, we have
Cn+1(σ′) = Cmax(σ) + pn+1. This implies that Fn+1(σ′) = Cmax(σ) + pn+1 − R =
Cmax(σ) + ∑1≤i≤n pi. Note that, for each j with 1 ≤ j ≤ n, Fj(σ′) ≤ Cj(σ′) =
Cj(σ) ≤ Cmax(σ). Then we have Fmax(σ′) = Fn+1(σ′) = Cmax(σ) + ∑1≤i≤n pi. We

claim that σ is an optimal schedule of problem 1|β|Cmax on instance I.


To prove the claim, suppose to the contrary that there is a feasible schedule
π of problem 1|β|Cmax on instance I with Cmax(π) < Cmax(σ). Let π′ be the
schedule of I′ obtained from π by adding {Jn+1} as the last batch. Since
Cmax(π) ≥ R = rn+1, we have Cn+1(π′) = Cmax(π) + pn+1, and so Fn+1(π′) =
Cmax(π) + pn+1 − R = Cmax(π) + ∑1≤i≤n pi. For each j with 1 ≤ j ≤ n, we
have Fj(π′) ≤ Cj(π′) = Cj(π) ≤ Cmax(π). Then we have Fmax(π′) = Fn+1(π′) =
Cmax(π) + ∑1≤i≤n pi < Cmax(σ) + ∑1≤i≤n pi = Fmax(σ′). This contradicts the
optimality of σ′. The claim follows.
The above claim implies that problem 1|β|Cmax polynomially reduces to problem
1|β|Fmax. From Lemma 2.1, we conclude that problem 1|β|Fmax is strongly
NP-hard. The result follows.
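The construction of I′ in the proof is simple enough to state as code (our illustration, not part of the paper; jobs are (family, p, r) triples, and the family index of Jn+1 is chosen as a fresh integer):

```python
def reduce_cmax_to_fmax(jobs):
    """Build instance I' of 1|β|Fmax from instance I of 1|β|Cmax.

    `jobs` is a list of (family, p, r) triples.  The added job J_{n+1}
    gets its own new family, release date R = r_max + 1, and
    processing time p_{n+1} = R + sum of all p_i, as in the proof.
    """
    R = max(r for _, _, r in jobs) + 1            # R = r_max + 1
    new_family = max(f for f, _, _ in jobs) + 1   # a fresh family index
    p_new = R + sum(p for _, p, _ in jobs)
    return jobs + [(new_family, p_new, R)]
```

Because pn+1 dominates every other processing time and rn+1 exceeds every other release date, any optimal Fmax-schedule of I′ must end with the singleton batch {Jn+1}, which is what makes the reduction work.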

3. Algorithms and analysis. In this section, the number of families K (K ≥ 1)


is assumed to be a fixed constant. We first give a dynamic programming algorithm
for the constrained problem 1|β|Cmax : Fmax ≤ Y in subsection 3.1. By theoretical
analysis, in subsection 3.2 we identify a tight upper bound that can be used to obtain
a PoS of problem 1|β|Cmax : Fmax ≤ Y . Based on the former two subsections, in the
final subsection we devise an algorithm for problem 1|β|# (Cmax , Fmax ) to generate
all the PoPs and the corresponding PoSs. We also analyse the time complexity of
the proposed enumerating algorithm mainly by an estimation of the number of the
PoPs.

3.1. A dynamic programming for problem 1|β|Cmax : Fmax ≤ Y. A feasible
schedule σ = (B1, · · · , Bq) of family jobs is called an ERD-family schedule if, for any
two jobs Jf,i and Jf,j of a common family Ff (1 ≤ f ≤ K), rf,i < rf,j implies
Cf,i(σ) ≤ Cf,j(σ), that is, jobs released earlier finish processing no later than
jobs released later. Then we have the following lemma.
Lemma 3.1. For each PoP of problem 1|β|# (Cmax , Fmax ), there exists a corre-
sponding PoS which is an ERD-family schedule.
Proof. Let (C, F ) be a PoP and σ = (B1 , · · · , Bq ) a corresponding PoS. Then
Cmax(σ) = C and Fmax(σ) = F. If σ is not an ERD-family schedule, then there
exists a pair of jobs Jf,i and Jf,j of some family Ff with Jf,i ∈ Bu and Jf,j ∈ Bv
such that rf,i < rf,j and batch Bu is scheduled after batch Bv. Two cases are distinguished:
If p(Bv ) ≤ p(Bu ), we consider a new schedule σ ′ obtained from σ by moving
job Jf,j backward from Bv to Bu . Then for each job Jk except job Jf,j , we have
Ck (σ ′ ) ≤ Ck (σ), and so, Fk (σ ′ ) ≤ Fk (σ). For job Jf,j , we have Cf,j (σ ′ ) = Cf,i (σ ′ ) ≤
Cf,i (σ), and since rf,i < rf,j , we also have Ff,j (σ ′ ) = Cf,j (σ ′ ) − rf,j < Cf,i (σ ′ ) −
rf,i ≤ Cf,i (σ) − rf,i = Ff,i (σ). It follows that Cmax (σ ′ ) ≤ Cmax (σ) and Fmax (σ ′ ) ≤
Fmax (σ). By the Pareto optimality of σ, σ ′ is also a PoS corresponding to (C, F ).
If p(Bv ) > p(Bu ), we consider a new schedule σ ′′ obtained from σ by moving
job Jf,i forwards from Bu to Bv . Obviously, the completion time of each job does
not increase. Thus, Cmax (σ ′′ ) ≤ Cmax (σ) and Fmax (σ ′′ ) ≤ Fmax (σ). Again by the
Pareto optimality of σ, σ ′′ is also a PoS corresponding to (C, F ).
A finite number of repetitions of the above procedure yields a PoS of the required
form.
By Lemma 3.1, we may restrict attention to ERD-family schedules. For this purpose,
the jobs in each family Ff are re-indexed in ERD order so that
rf,1 ≤ rf,2 ≤ · · · ≤ rf,nf. The total running time used for the re-indexing is
O(∑1≤f≤K nf log nf) = O(n log n). In addition, any two jobs Jf,i and Jf,j of
a common family Ff with rf,i = rf,j can always be scheduled in the same batch
without increasing the objective function values, and so we merge such two jobs into
a new job Jf,i with pf,i := max{pf,i, pf,j} and the common release date rf,i = rf,j.
Since each merging operation only acts on consecutive jobs with equal release dates
in the ERD order, we can obtain in O(n) time an equivalent reduced instance in which
the jobs in each family Ff are ordered such that rf,1 < rf,2 < · · · < rf,n′f, where n′f
is the number of jobs in Ff in the reduced instance. In the sequel, we consider only
the reduced instance and still use nf to denote the number of jobs in Ff.
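This preprocessing (ERD sort plus merging of equal release dates) can be sketched as follows; a minimal illustration of ours, with a family given as a list of (p, r) pairs:

```python
def preprocess_family(family):
    """Sort a family's jobs in ERD order and merge jobs that share a
    release date, keeping the maximum processing time (such jobs can
    always share a batch without increasing either objective).

    `family` is a list of (processing_time, release_date) pairs;
    the result has strictly increasing release dates.
    """
    family = sorted(family, key=lambda job: job[1])   # ERD order
    reduced = []
    for p, r in family:
        if reduced and reduced[-1][1] == r:
            # same release date: merge, keeping the max processing time
            reduced[-1] = (max(reduced[-1][0], p), r)
        else:
            reduced.append((p, r))
    return reduced
```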
For ease of exposition, we add a null job Jf,0 to each family Ff such that pf,0 =
rf,0 = 0 and Jf,0 forms a single batch starting at time 0 in any feasible schedule. For
each i (0 ≤ i ≤ nf), we write Ff^(i) = {Jf,0, Jf,1, · · · , Jf,i}. Set X = {(i1, · · · , iK) :
0 ≤ if ≤ nf, 1 ≤ f ≤ K} and X+ = {(i1, · · · , iK) ∈ X : i1 + · · · + iK ≥ 1}. When
the f-th component if of a vector (i1, · · · , iK) ∈ X is replaced by a new index kf,
the resulting vector is denoted by (i1, · · · , kf, · · · , iK). A batch B composed only
of f-jobs, i.e., B ⊆ Ff, is called an f-batch. The following subproblems are helpful
for our discussion.
• Problem P(i1, · · · , iK): This is the problem 1|β|# (Cmax, Fmax) restricted to the
instance F1^(i1) ∪ · · · ∪ FK^(iK) with (i1, · · · , iK) ∈ X. Our algorithm will generate all
PoPs and the corresponding PoSs. Note that P(n1, · · · , nK) is just the primitive
problem 1|β|# (Cmax, Fmax).

• Problem PY(i1, · · · , iK): This is the problem 1|β|Cmax : Fmax ≤ Y restricted to
the instance F1^(i1) ∪ · · · ∪ FK^(iK) with (i1, · · · , iK) ∈ X. Its goal is to find an optimal
schedule. The optimal makespan is denoted by CY(i1, · · · , iK). If the problem is
infeasible, we define CY(i1, · · · , iK) = +∞. Note that PY(n1, · · · , nK) is just the
primitive problem 1|β|Cmax : Fmax ≤ Y.
• Problem PY^(f)(i1, · · · , iK): This is a restricted version of PY(i1, · · · , iK) with
if ≥ 1 in which feasible schedules are required to end with an f-batch. Its goal is
to find an optimal schedule. The optimal makespan is denoted by CY^(f)(i1, · · · , iK).
If the problem is infeasible, we define CY^(f)(i1, · · · , iK) = +∞. Note that problem
PY^(f)(i1, · · · , iK) is undefined when if = 0.
The dynamic programming that will be described soon solves problem 1|β|Cmax :
Fmax ≤ Y (i.e., PY(n1, · · · , nK)) by solving a series of subproblems PY(i1, · · · , iK)
and PY^(f)(i1, · · · , iK).
Recall that the families are assumed to be incompatible. Thus, in any optimal
schedule of problem PY(i1, · · · , iK), the last batch is composed of jobs from
some family Ff. Naturally, the optimal makespan of problem PY(i1, · · · , iK) can
be defined by

CY(i1, · · · , iK) = min{CY^(f)(i1, · · · , iK) : 1 ≤ f ≤ K and if ≥ 1}.  (1)

For problem PY^(f)(i1, · · · , iK), the following lemma holds.
Lemma 3.2. For problem PY^(f)(i1, · · · , iK), there is an optimal schedule, say π,
with the last batch being Bq = {Jf,kf+1, · · · , Jf,if}, such that the schedule π\Bq is
optimal for the subproblem PY(i1, · · · , kf, · · · , iK).

Proof. Note that the feasibility of π for problem PY^(f)(i1, · · · , iK) implies that
the schedule π\Bq is feasible for problem PY(i1, · · · , kf, · · · , iK). If π\Bq is not
optimal, we arbitrarily pick an optimal schedule of problem PY(i1, · · · , kf, · · · , iK),
say σ. Then we have Cmax(σ) < Cmax(π\Bq) and Fmax(σ) ≤ Y. Place batch Bq behind
schedule σ to obtain a new schedule σ ∪ Bq. We have

Cmax(σ ∪ Bq) = max{Cmax(σ), max{rj : Jj ∈ Bq}} + max{pj : Jj ∈ Bq}
            ≤ max{Cmax(π\Bq), max{rj : Jj ∈ Bq}} + max{pj : Jj ∈ Bq}
            = Cmax(π).

And so, for each job Jj ∈ Bq, we have Fj(σ ∪ Bq) = Cmax(σ ∪ Bq) − rj ≤ Cmax(π) −
rj ≤ Y. This shows that the schedule σ ∪ Bq is an optimal schedule of problem
PY^(f)(i1, · · · , iK) with the required property.

By Lemma 3.2, the optimal makespan CY^(f)(i1, · · · , iK) of problem
PY^(f)(i1, · · · , iK) can be defined by

CY^(f)(i1, · · · , iK) = min{CY^(f,kf)(i1, · · · , iK) : kf ∈ DY^(f)(i1, · · · , iK)},  (2)

where

CY^(f,kf)(i1, · · · , iK) = max{CY(i1, · · · , kf, · · · , iK), rf,if} + max{pf,lf : kf+1 ≤ lf ≤ if}  (3)

and

DY^(f)(i1, · · · , iK) = {kf : 0 ≤ kf ≤ if − 1 and CY^(f,kf)(i1, · · · , iK) − rf,kf+1 ≤ Y}.  (4)

Specifically, when {Jf,kf+1, · · · , Jf,if} is the last batch of problem
PY^(f)(i1, · · · , iK), CY^(f,kf)(i1, · · · , iK) stands for the resulting makespan, and
DY^(f)(i1, · · · , iK) stands for the set of indices kf for which the maximum flow
time is then upper bounded by Y. We introduce these two notations purely for
convenience of formulation.
Now we can describe the dynamic programming formally; it computes the values
CY(i1, · · · , iK) and CY^(f)(i1, · · · , iK) for all subproblems PY(i1, · · · , iK) and
PY^(f)(i1, · · · , iK), and in particular solves PY(n1, · · · , nK).

Algorithm DP(Y):
• Initial condition: CY(0, · · · , 0) = 0.
• Recursive relation: For each (i1, · · · , iK) ∈ X+,

CY(i1, · · · , iK) = min{CY^(f)(i1, · · · , iK) : 1 ≤ f ≤ K and if ≥ 1},  (1)

where, for each f ∈ {1, 2, · · · , K} with if ≥ 1,

CY^(f)(i1, · · · , iK) = min{CY^(f,kf)(i1, · · · , iK) : kf ∈ DY^(f)(i1, · · · , iK)}.  (2)

• Termination: The optimal value is CY(n1, · · · , nK) and an optimal schedule
can be obtained by backtracking.
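As a concrete illustration, the recursion (1)–(4) can be implemented as a table over the states (i1, · · · , iK). The following sketch is our own code, not the paper's pseudocode (the function name and the (p, r)-pair data layout are assumptions); it folds the range maxima of equation (3) into the inner loop rather than precomputing them:

```python
from itertools import product

def dp_Y(families, Y):
    """Sketch of Algorithm DP(Y) for 1|β|Cmax : Fmax <= Y.

    `families` is a list of K lists of (p, r) pairs, each sorted by
    strictly increasing release date (the reduced instance).  Returns
    the optimal makespan C_Y(n_1, ..., n_K), or float('inf') if no
    feasible schedule has maximum flow time at most Y.
    """
    K = len(families)
    n = [len(fam) for fam in families]
    C = {}
    # process states by increasing i_1 + ... + i_K so that every
    # predecessor (i_1, ..., k_f, ..., i_K) is already computed
    for state in sorted(product(*(range(m + 1) for m in n)), key=sum):
        if sum(state) == 0:
            C[state] = 0            # initial condition C_Y(0, ..., 0) = 0
            continue
        best = float('inf')
        for f in range(K):          # family of the last batch, eq. (1)
            i_f = state[f]
            if i_f == 0:
                continue
            p_f = [p for p, _ in families[f]]
            r_f = [r for _, r in families[f]]
            pmax = 0
            # last batch {J_{f,k_f+1}, ..., J_{f,i_f}} (1-indexed jobs)
            for k_f in range(i_f - 1, -1, -1):
                pmax = max(pmax, p_f[k_f])            # p(B) of the batch
                prev = state[:f] + (k_f,) + state[f + 1:]
                cand = max(C[prev], r_f[i_f - 1]) + pmax       # eq. (3)
                if cand - r_f[k_f] <= Y:              # eq. (4): flow bound
                    best = min(best, cand)            # eq. (2)
        C[state] = best
    return C[tuple(n)]
```

Because the release dates within a family are strictly increasing, checking the flow time of the earliest job of the last batch (index k_f + 1) suffices for the whole batch, which is exactly the membership test of equation (4).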
Note that we need to compute the value of max{pf,lf : kf+1 ≤ lf ≤ if} in equation (3).
To avoid repeated computations, we compute these values beforehand for all choices of
(f, kf, if) with 0 ≤ kf < if ≤ nf and f ∈ {1, · · · , K}; this procedure takes
O(∑1≤f≤K nf^2) = O(n^2) time. With these values in hand, for each
(i1, · · · , iK) ∈ X+ with if ≥ 1 and each kf with 0 ≤ kf ≤ if − 1, calculating
CY^(f,kf)(i1, · · · , iK) by equation (3) and checking the inequality in (4) can be
implemented in constant time. Therefore, we can determine the set
DY^(f)(i1, · · · , iK) in equation (4) in O(nf) time for each f ∈ {1, · · · , K}.
When algorithm DP(Y) is run to solve a problem PY(i1, · · · , iK) with
(i1, · · · , iK) ∈ X, the values CY(i′1, · · · , i′K) and CY^(f)(i′1, · · · , i′K) have to be
computed for every vector (i′1, · · · , i′K) ∈ X with (i′1, · · · , i′K) ≤ (i1, · · · , iK) and
(i′1, · · · , i′K) ≠ (i1, · · · , iK). In each iteration, the values CY^(f)(i′1, · · · , i′K) and
CY(i′1, · · · , i′K) can be calculated in O(nf) and O(K) time, respectively. In addition,
there are at most ∏1≤f≤K (nf + 1) possible choices of (i′1, · · · , i′K). Thus, for each
problem PY(i1, · · · , iK), the overall running time of algorithm DP(Y) is
O((K + ∑1≤f≤K nf) · ∏1≤f≤K (nf + 1)) = O(n^{K+1}), which is polynomial since
K ≥ 1 is fixed.
Combining the above discussion, we conclude the following critical lemma.
Lemma 3.3. Algorithm DP(Y) solves problem 1|β|Cmax : Fmax ≤ Y in O(n^{K+1})
time. Moreover, DP(Y) also correctly solves each subproblem PY(i1, · · · , iK) with
(i1, · · · , iK) ∈ X and CY(i1, · · · , iK) < +∞, and each subproblem
PY^(f)(i1, · · · , iK) with (i1, · · · , iK) ∈ X+, if ≥ 1 and CY^(f)(i1, · · · , iK) < +∞.
By Lemma 3.3, it is not difficult to see that the decision version of the single-criterion problem 1|β|F_max can also be solved in O(n^{K+1}) time by algorithm DP(Y). Further, by using binary search, the single-criterion problem 1|β|F_max can be solved in O(n^{K+1} log P) time, which is polynomial but not strongly polynomial, where P is the total processing time of the jobs.
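The binary search just mentioned can be sketched as follows. Here `feasible(Y)` stands for any monotone oracle deciding whether a schedule with F_max ≤ Y exists (for instance, by checking that DP(Y) returns a finite makespan); the function name and interface are our own, not the paper's.

```python
def min_max_flow_time(feasible, upper):
    """Smallest integer Y in [0, upper] with feasible(Y) true.

    Assumes feasibility is monotone in Y and that feasible(upper)
    holds; taking upper = max release date + total processing time
    gives O(log P) oracle calls, hence the O(n^{K+1} log P) bound."""
    lo, hi = 0, upper
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid      # feasible bound: the answer is at most mid
        else:
            lo = mid + 1  # infeasible: the answer is strictly larger
    return lo
```

The loop invariant is that the optimal value always lies in [lo, hi], so the search terminates with the smallest feasible bound.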
3.2. The identification of a tight upper bound. Note that the optimal schedule obtained by algorithm DP(Y) may not be Pareto optimal. To get a PoS, we try to identify a tight upper bound by tracking the running procedure of DP(Y). To this end, we first establish some properties of algorithm DP(Y).
Lemma 3.4. Suppose that (i_1, · · · , i_K) ∈ X^+ and C_Y(i_1, · · · , i_K) < +∞. Then the optimal makespan C_Y(i_1, · · · , i_K) of problem P_Y(i_1, · · · , i_K) is a nondecreasing function in each argument i_f with 1 ≤ f ≤ K, i.e., C_Y(i_1, · · · , i_f − 1, · · · , i_K) ≤ C_Y(i_1, · · · , i_f, · · · , i_K) for i_f = 1, · · · , n_f.
Proof. Suppose that f is a family-index with 1 ≤ f ≤ K and 1 ≤ if ≤ nf . Let
σ = (B1 , · · · , Bq ) be the schedule obtained by DP(Y ) on problem PY (i1 , · · · , iK ).
By Lemma 3.3, we have Cmax (σ) = CY (i1 , · · · , iK ) and Fmax (σ) ≤ Y . Suppose that
Jf,if ∈ Bl with 1 ≤ l ≤ q. Let σ ′ = (B1 , · · · , Bl \ {Jf,if }, · · · , Bq ) be a new schedule
obtained from σ by removing job Jf,if . Then σ ′ is a feasible schedule of problem
PY (i1 , · · · , if − 1, · · · , iK ), since the fact Fmax (σ) ≤ Y implies Fmax (σ ′ ) ≤ Y . Since
p(Bl \{Jf,if }) ≤ p(Bl ), we have CY (i1 , · · · , if −1, · · · , iK ) ≤ Cmax (σ ′ ) ≤ Cmax (σ) =
CY (i1 , · · · , if , · · · , iK ). Then the lemma follows.
The following lemma can be easily observed.
Lemma 3.5. Suppose that σ is the optimal schedule obtained by algorithm DP(Y) on problem P_Y(i_1, · · · , i_K) with (i_1, · · · , i_K) ∈ X^+ and C_Y(i_1, · · · , i_K) < +∞, and B = {J_{f,k_f+1}, · · · , J_{f,i_f}} is the last batch of σ for some family-index f (1 ≤ f ≤ K) and some f-job-index k_f (0 ≤ k_f ≤ i_f − 1). Then the following results hold:
(a) C_max(σ) = C_{F_max(σ)}(i_1, · · · , i_K) = C_Y(i_1, · · · , i_K); that is, σ is also optimal for problem P_{F_max(σ)}(i_1, · · · , i_K), which originates from problem P_Y(i_1, · · · , i_K) by contracting the upper bound Y to F_max(σ).
(b) σ\B (if σ\B ≠ ∅) is optimal for the sub-problem P_Y(i_1, · · · , k_f, · · · , i_K).
Let X_Y(i_1, · · · , i_K) be the set of family indices attaining the minimum in equation (1), i.e.,
X_Y(i_1, · · · , i_K) = {f : C_Y^{(f)}(i_1, · · · , i_K) = C_Y(i_1, · · · , i_K) and 1 ≤ f ≤ K}.    (5)
Let Ψ_Y^{(f)}(i_1, · · · , i_K) be the set of the f-job indices attaining the minimum in equation (2), i.e.,
Ψ_Y^{(f)}(i_1, · · · , i_K) = {k_f : C_Y^{(f)}(i_1, · · · , i_K) = C_Y^{(f,k_f)}(i_1, · · · , i_K) and k_f ∈ D_Y^{(f)}(i_1, · · · , i_K)}.    (6)
The sets X_Y(i_1, · · · , i_K) and Ψ_Y^{(f)}(i_1, · · · , i_K) will help us to find an optimal and Pareto optimal schedule of problem P_Y(n_1, · · · , n_K) by backtracking. For simplicity, we write (f, k_f) ∈ (X_Y, Ψ_Y^{(f)})(i_1, · · · , i_K) to indicate the two relations f ∈ X_Y(i_1, · · · , i_K) and k_f ∈ Ψ_Y^{(f)}(i_1, · · · , i_K).
Lemma 3.6. Suppose that Y′ and Y′′ are two upper bounds so that Y′ < Y′′ and C_{Y′}(i_1, · · · , i_K) = C_{Y′′}(i_1, · · · , i_K) < +∞. Then X_{Y′}(i_1, · · · , i_K) ⊆ X_{Y′′}(i_1, · · · , i_K), and Ψ_{Y′}^{(f)}(i_1, · · · , i_K) ⊆ Ψ_{Y′′}^{(f)}(i_1, · · · , i_K) for each f ∈ X_{Y′}(i_1, · · · , i_K).
Proof. Pick a two-tuple (f, k_f) from the set (X_{Y′}, Ψ_{Y′}^{(f)})(i_1, · · · , i_K). From (4), (5) and (6), we have
C_{Y′}^{(f,k_f)}(i_1, · · · , i_K) − r_{f,k_f+1} ≤ Y′,    (7)
and
C_{Y′}^{(f,k_f)}(i_1, · · · , i_K) = C_{Y′}(i_1, · · · , i_K).    (8)
Since Y′ < Y′′ means that Y′′ is a looser upper bound than Y′, for the optimal makespan of problem P_Y(i_1, · · · , k_f, · · · , i_K) we have C_{Y′′}(i_1, · · · , k_f, · · · , i_K) ≤ C_{Y′}(i_1, · · · , k_f, · · · , i_K). Then, recalling the definition of C_Y^{(f,k_f)}(i_1, · · · , i_K) in equation (3), it follows that
C_{Y′′}^{(f,k_f)}(i_1, · · · , i_K)
  = max{C_{Y′′}(i_1, · · · , k_f, · · · , i_K), r_{f,i_f}} + max_{k_f+1≤l_f≤i_f} p_{f,l_f}
  ≤ max{C_{Y′}(i_1, · · · , k_f, · · · , i_K), r_{f,i_f}} + max_{k_f+1≤l_f≤i_f} p_{f,l_f}    (9)
  = C_{Y′}^{(f,k_f)}(i_1, · · · , i_K).
From (7) and (9), we have C_{Y′′}^{(f,k_f)}(i_1, · · · , i_K) − r_{f,k_f+1} ≤ Y′ < Y′′. Further, by the definition of the set D_Y^{(f)}(i_1, · · · , i_K) in (4), it follows that k_f ∈ D_{Y′′}^{(f)}(i_1, · · · , i_K). By Lemma 3.3 and equations (1) and (2), we have
C_{Y′′}(i_1, · · · , i_K) ≤ C_{Y′′}^{(f,k_f)}(i_1, · · · , i_K) ≤ C_{Y′}^{(f,k_f)}(i_1, · · · , i_K) = C_{Y′}(i_1, · · · , i_K),
where the second inequality follows from (9) and the equality follows from (8). Since C_{Y′}(i_1, · · · , i_K) = C_{Y′′}(i_1, · · · , i_K) by the assumption of the lemma, we have C_{Y′′}(i_1, · · · , i_K) = C_{Y′′}^{(f,k_f)}(i_1, · · · , i_K). Finally, by the definitions (5) and (6) and the notation (X_Y, Ψ_Y^{(f)})(i_1, · · · , i_K), we have (f, k_f) ∈ (X_{Y′′}, Ψ_{Y′′}^{(f)})(i_1, · · · , i_K). The lemma follows.
Lemma 3.6 shows that the set of the (family and job) indices attaining the minimum in (1) and (2) cannot become larger when the upper bound of problem P_Y(i_1, · · · , i_K) is contracted without changing the optimal makespan.
In general, we cannot guarantee that the optimal schedule obtained by algorithm DP(Y) for problem P_Y(i_1, · · · , i_K) is Pareto optimal for problem P(i_1, · · · , i_K). Our strategy is therefore to generate the PoPs of problem P(i_1, · · · , i_K) dynamically.
For each vector (i1 , · · · , iK ) ∈ X, let Ω(i1 , · · · , iK ) be the set of all PoPs of prob-
lem P (i1 , · · · , iK ). Then Ω(0, · · · , 0) = {(0, 0)}. Moreover, the Pareto optimality
of points in Ω(i1 , · · · , iK ) implies that
CY (i1 , · · · , iK ) = C, for (C, Y ) ∈ Ω(i1 , · · · , iK ). (10)
Iteratively, suppose that (i_1, · · · , i_K) ∈ X^+ and, for each (i′_1, · · · , i′_K) ∈ X with (i′_1, · · · , i′_K) ≤ (i_1, · · · , i_K) and (i′_1, · · · , i′_K) ≠ (i_1, · · · , i_K), the set Ω(i′_1, · · · , i′_K) has been generated. We are ready to generate the set Ω(i_1, · · · , i_K).
Suppose that Y is an upper bound so that C_Y(i_1, · · · , i_K) < +∞. For each two-tuple (f, k_f) ∈ (X_Y, Ψ_Y^{(f)})(i_1, · · · , i_K), we define the PoP of P(i_1, · · · , k_f, · · · , i_K) to be (Ĉ_Y(i_1, · · · , k_f, · · · , i_K), F̂_Y(i_1, · · · , k_f, · · · , i_K)), which satisfies
F̂_Y(i_1, · · · , k_f, · · · , i_K) = min{Y′ : (C, Y′) ∈ Ω(i_1, · · · , k_f, · · · , i_K) and C_{Y′}^{(f,k_f)}(i_1, · · · , i_K) ≤ C_Y(i_1, · · · , i_K)}.    (11)
That is to say, we choose the PoP of sub-problem P(i_1, · · · , k_f, · · · , i_K) with the minimum maximum flow time, as long as it does not increase the optimal makespan of problem P_Y(i_1, · · · , i_K). Moreover, we use π̂(i_1, · · · , k_f, · · · , i_K) to denote a PoS of problem P(i_1, · · · , k_f, · · · , i_K) corresponding to the objective vector (Ĉ_Y(i_1, · · · , k_f, · · · , i_K), F̂_Y(i_1, · · · , k_f, · · · , i_K)).
The following lemma plays a critical role in identifying a tight upper bound.
Lemma 3.7. Suppose (f, k_f) ∈ (X_Y, Ψ_Y^{(f)})(i_1, · · · , i_K) and Y′ = F̂_Y(i_1, · · · , k_f, · · · , i_K). Then Y′ ≤ Y and C_{Y′}^{(f,k_f)}(i_1, · · · , i_K) = C_Y(i_1, · · · , i_K).
Proof. By Lemma 1.1, there is a PoP (C′′, Y′′) ∈ Ω(i_1, · · · , k_f, · · · , i_K) so that Y′′ ≤ Y and C′′ = C_Y(i_1, · · · , k_f, · · · , i_K). Equation (10) implies that C′′ = C_{Y′′}(i_1, · · · , k_f, · · · , i_K), and so, C_Y(i_1, · · · , k_f, · · · , i_K) = C_{Y′′}(i_1, · · · , k_f, · · · , i_K). By definition (3), we have
C_{Y′′}^{(f,k_f)}(i_1, · · · , i_K)
  = max{C_{Y′′}(i_1, · · · , k_f, · · · , i_K), r_{f,i_f}} + max_{k_f+1≤l_f≤i_f} p_{f,l_f}
  = max{C_Y(i_1, · · · , k_f, · · · , i_K), r_{f,i_f}} + max_{k_f+1≤l_f≤i_f} p_{f,l_f}    (12)
  = C_Y^{(f,k_f)}(i_1, · · · , i_K)
  = C_Y(i_1, · · · , i_K),
where the last equality follows from the assumption (f, k_f) ∈ (X_Y, Ψ_Y^{(f)})(i_1, · · · , i_K). From equation (11), we have Y′ ≤ Y′′, and so, Y′ ≤ Y.
From equation (11) again, we have C_{Y′}^{(f,k_f)}(i_1, · · · , i_K) ≤ C_Y(i_1, · · · , i_K). To verify the equality, we only need to show that C_{Y′}^{(f,k_f)}(i_1, · · · , i_K) ≥ C_Y(i_1, · · · , i_K). Corresponding to Y′ = F̂_Y(i_1, · · · , k_f, · · · , i_K), write C′ = Ĉ_Y(i_1, · · · , k_f, · · · , i_K) for short. Then (C′, Y′) ∈ Ω(i_1, · · · , k_f, · · · , i_K). By (10), C′ = C_{Y′}(i_1, · · · , k_f, · · · , i_K). Since both (C′, Y′) and (C′′, Y′′) are PoPs of problem P(i_1, · · · , k_f, · · · , i_K), the fact Y′ ≤ Y′′ implies that C′ ≥ C′′. By definition (3) again, we have
C_{Y′}^{(f,k_f)}(i_1, · · · , i_K)
  = max{C_{Y′}(i_1, · · · , k_f, · · · , i_K), r_{f,i_f}} + max_{k_f+1≤l_f≤i_f} p_{f,l_f}
  = max{C′, r_{f,i_f}} + max_{k_f+1≤l_f≤i_f} p_{f,l_f}
  ≥ max{C′′, r_{f,i_f}} + max_{k_f+1≤l_f≤i_f} p_{f,l_f}
  = C_{Y′′}^{(f,k_f)}(i_1, · · · , i_K)
  = C_Y(i_1, · · · , i_K),
where the last equality follows from (12). The lemma follows.
The following notations will be used in the discussion of Lemma 3.8. Let
F̃_Y(i_1, · · · , k_f, · · · , i_K) = max{F̂_Y(i_1, · · · , k_f, · · · , i_K), C_Y(i_1, · · · , i_K) − r_{f,k_f+1}},    (13)
Y^∗ = min{F̃_Y(i_1, · · · , k_f, · · · , i_K) : (f, k_f) ∈ (X_Y, Ψ_Y^{(f)})(i_1, · · · , i_K)},    (14)
and C^∗ = C_Y(i_1, · · · , i_K).
We claim that Y^∗ determined by (14) is exactly the required tight upper bound: using the upper bound Y^∗ and running DP(Y) with Y = Y^∗ yields a PoS. To verify this claim, let π be the schedule obtained by algorithm DP(Y) on problem P_{Y^∗}(i_1, · · · , i_K), which originates from contracting the upper bound Y to Y^∗. By Lemma 3.3, C_max(π) = C_{Y^∗}(i_1, · · · , i_K).
Lemma 3.8. π is a PoS of problem P (i1 , · · · , iK ) and (C ∗ , Y ∗ ) is the PoP corre-
sponding to π.
Proof. We first show that C_max(π) = C^∗ and F_max(π) = Y^∗. From the definitions (13) and (14), we may suppose that
Y^∗ = F̃_Y(i_1, · · · , k_f, · · · , i_K) = max{F̂_Y(i_1, · · · , k_f, · · · , i_K), C^∗ − r_{f,k_f+1}}
for some two-tuple (f, k_f) ∈ (X_Y, Ψ_Y^{(f)})(i_1, · · · , i_K). Further, by the definitions (5) and (6) and the notation (X_Y, Ψ_Y^{(f)})(i_1, · · · , i_K), we know that
C^∗ = C_Y^{(f,k_f)}(i_1, · · · , i_K) = C_Y(i_1, · · · , i_K)
and
C^∗ − r_{f,k_f+1} = C_Y^{(f,k_f)}(i_1, · · · , i_K) − r_{f,k_f+1} ≤ Y.
Write Y′ = F̂_Y(i_1, · · · , k_f, · · · , i_K) for short. Then Y′ ≤ Y^∗. From Lemma 3.7, we have Y′ ≤ Y. Hence, Y^∗ = max{Y′, C^∗ − r_{f,k_f+1}} ≤ Y. Now the relation Y′ ≤ Y^∗ ≤ Y implies that
C^∗ = C_Y(i_1, · · · , i_K) ≤ C_{Y^∗}(i_1, · · · , i_K) ≤ C_{Y′}(i_1, · · · , i_K).
Lemma 3.7 also asserts that C_{Y′}(i_1, · · · , i_K) = C_Y(i_1, · · · , i_K). It follows that
C_max(π) = C_{Y^∗}(i_1, · · · , i_K) = C_Y(i_1, · · · , i_K) = C^∗.    (15)
Equation (15) shows that π is an optimal schedule of problem PY (i1 , · · · , iK ).
Next, we show that F_max(π) = Y^∗. In fact, the definition of π implies F_max(π) ≤ Y^∗, so we only need to show that F_max(π) ≥ Y^∗. Let B^∗ = {J_{g,k_g+1}, · · · , J_{g,i_g}} be the last batch of π for some family-index g (1 ≤ g ≤ K) and some job-index k_g (0 ≤ k_g ≤ i_g − 1). From equations (5) and (6) and the optimality of π, we have (g, k_g) ∈ (X_{Y^∗}, Ψ_{Y^∗}^{(g)})(i_1, · · · , i_K). Further, since Y^∗ ≤ Y and C_{Y^∗}(i_1, · · · , i_K) = C_Y(i_1, · · · , i_K), by Lemma 3.6 we have X_{Y^∗}(i_1, · · · , i_K) ⊆ X_Y(i_1, · · · , i_K) and Ψ_{Y^∗}^{(g)}(i_1, · · · , i_K) ⊆ Ψ_Y^{(g)}(i_1, · · · , i_K), and so, (g, k_g) ∈ (X_Y, Ψ_Y^{(g)})(i_1, · · · , i_K).
By Lemma 3.5, π\B^∗ is an optimal schedule of problem P_{F_max(π\B^∗)}(i_1, · · · , k_g, · · · , i_K). Since C_{F_max(π\B^∗)}^{(g,k_g)}(i_1, · · · , i_K) = C_Y(i_1, · · · , i_K) = C^∗, it follows from equation (11) that F_max(π\B^∗) ≥ F̂_Y(i_1, · · · , k_g, · · · , i_K).
Note that Fmax (π) = max{Fmax (π\B ∗ ), Cmax (π) − rg,kg +1 }. Then we have
Fmax (π) ≥ max{F̂Y (i1 , · · · , kg , · · · , iK ), Cmax (π) − rg,kg +1 }
= F̃Y (i1 , · · · , kg , · · · , iK )
≥ Y ∗,
where the equality follows from (13) and the last inequality follows from (14). Con-
sequently, Fmax (π) = Y ∗ .
In the sequel, we show that π is also Pareto optimal for problem P(i_1, · · · , i_K). Suppose to the contrary that π is not a PoS of problem P(i_1, · · · , i_K). Since π is optimal for problem P_Y(i_1, · · · , i_K), by Lemma 1.1, there exists a PoS π′ of problem P(i_1, · · · , i_K) such that C_max(π′) = C_max(π) and F_max(π′) < F_max(π) = Y^∗ ≤ Y. By Lemma 3.5, we may assume that π′ is obtained by algorithm DP(Y) on problem P_{F_max(π′)}(i_1, · · · , i_K). By Lemma 3.6, we further have X_{F_max(π′)}(i_1, · · · , i_K) ⊆ X_Y(i_1, · · · , i_K), and Ψ_{F_max(π′)}^{(f)}(i_1, · · · , i_K) ⊆ Ψ_Y^{(f)}(i_1, · · · , i_K) for f ∈ X_{F_max(π′)}(i_1, · · · , i_K). From equations (13) and (14), it follows that
Y^∗ = min{F̃_Y(i_1, · · · , k_f, · · · , i_K) : (f, k_f) ∈ (X_Y, Ψ_Y^{(f)})(i_1, · · · , i_K)}
    ≤ min{F̃_Y(i_1, · · · , k_f, · · · , i_K) : (f, k_f) ∈ (X_{F_max(π′)}, Ψ_{F_max(π′)}^{(f)})(i_1, · · · , i_K)}.    (16)
Let f^∗ be the family-index of the last batch in π′, and let k_{f^∗} be the maximum job-index of the jobs of family F_{f^∗} processed before the last batch in π′. By equations (5) and (6), we have (f^∗, k_{f^∗}) ∈ (X_{F_max(π′)}, Ψ_{F_max(π′)}^{(f^∗)})(i_1, · · · , i_K), and so, (f^∗, k_{f^∗}) ∈ (X_Y, Ψ_Y^{(f^∗)})(i_1, · · · , i_K).
Let π′′ = π′\{J_{f^∗,k_{f^∗}+1}, · · · , J_{f^∗,i_{f^∗}}}. From Lemma 3.5, π′′ is an optimal schedule of problem P_{F_max(π′)}(i_1, · · · , k_{f^∗}, · · · , i_K). Note that F_max(π′′) ≤ F_max(π′) < Y and
max{C_max(π′′), r_{f^∗,i_{f^∗}}} + max_{k_{f^∗}+1≤l_{f^∗}≤i_{f^∗}} p_{f^∗,l_{f^∗}} = C_max(π′).
By equation (11), we have F̂_Y(i_1, · · · , k_{f^∗}, · · · , i_K) ≤ F_max(π′′). By equation (13), we further have
F̃_Y(i_1, · · · , k_{f^∗}, · · · , i_K)
  = max{F̂_Y(i_1, · · · , k_{f^∗}, · · · , i_K), C_max(π′) − r_{f^∗,k_{f^∗}+1}}
  ≤ max{F_max(π′′), C_max(π′) − r_{f^∗,k_{f^∗}+1}}    (17)
  = F_max(π′).
From (14), (16) and (17), we conclude that
F_max(π) = Y^∗
  = min{F̃_Y(i_1, · · · , k_f, · · · , i_K) : (f, k_f) ∈ (X_Y, Ψ_Y^{(f)})(i_1, · · · , i_K)}
  ≤ min{F̃_Y(i_1, · · · , k_f, · · · , i_K) : (f, k_f) ∈ (X_{F_max(π′)}, Ψ_{F_max(π′)}^{(f)})(i_1, · · · , i_K)}
  ≤ F̃_Y(i_1, · · · , k_{f^∗}, · · · , i_K)
  ≤ max{F_max(π′′), C_max(π′) − r_{f^∗,k_{f^∗}+1}}
  = F_max(π′).
This contradicts the hypothesis that F_max(π′) < F_max(π). The result follows.
To convey the intuition behind algorithm DP(Y) of Subsection 3.1 and the identification of the tight upper bound in this subsection, we now present an illustrative example.
Example 1. There are two job families F_1 and F_2. Family F_1 has two jobs J_{1,1} and J_{1,2}, with processing times p_{1,1} = 2, p_{1,2} = 1 and release dates r_{1,1} = 0, r_{1,2} = 1, respectively. Family F_2 has just one job J_{2,1}, with processing time p_{2,1} = 1 and release date r_{2,1} = 3. Setting the upper bound Y on the maximum flow time large enough (e.g., Y = 10) and running the dynamic programming algorithm DP(Y), we derive the optimal makespan C_max = 4. However, there are two corresponding optimal schedules, σ and π, as described in Figure 2. The maximum flow times of σ and π are 3 and 2, respectively. To find the corresponding PoP, it is intuitively sensible to set the upper bound Y = 2. Of course, when the number of jobs is large, identifying the tight upper bound becomes harder. In this subsection we have given an algorithm that derives a tight upper bound, and we have proved its correctness.
[Figure 2 shows the Gantt charts of the two optimal schedules σ and π over the time interval [0, 4].]
Figure 2. An intuitive interpretation of the proposed algorithm
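For an instance as small as Example 1, the Pareto set can also be verified by brute force: enumerate every split of each family into consecutive batches and every interleaving of the resulting batches, schedule them greedily, and keep the nondominated (C_max, F_max) pairs. The following sketch is our own verification code, not part of the paper's algorithm; it confirms that (4, 2) is the unique PoP of Example 1.

```python
from itertools import permutations, product

def splits(n):
    """All partitions of jobs 0..n-1 into consecutive batches,
    given as lists of (start, end) half-open index pairs."""
    if n == 0:
        yield []
        return
    for mask in range(1 << (n - 1)):
        parts, start = [], 0
        for i in range(1, n):
            if (mask >> (i - 1)) & 1:
                parts.append((start, i))
                start = i
        parts.append((start, n))
        yield parts

def pareto_front(jobs):
    """Brute-force Pareto set of (Cmax, Fmax); jobs[f] lists the
    (release, processing) pairs of family f sorted by release date.
    Only usable for tiny instances."""
    pts = set()
    for fam_splits in product(*(list(splits(len(f))) for f in jobs)):
        # one label per batch, permuted to get every interleaving
        labels = [f for f, parts in enumerate(fam_splits) for _ in parts]
        for order in set(permutations(labels)):
            pos = [0] * len(jobs)  # next unscheduled batch per family
            t, fmax = 0, 0
            for f in order:
                s, e = fam_splits[f][pos[f]]
                pos[f] += 1
                r = [jobs[f][j][0] for j in range(s, e)]
                p = [jobs[f][j][1] for j in range(s, e)]
                t = max(t, max(r)) + max(p)   # batch completion time
                fmax = max(fmax, t - min(r))  # worst flow time in batch
            pts.add((t, fmax))
    # keep only the nondominated points
    return sorted((c, y) for (c, y) in pts
                  if not any(c2 <= c and y2 <= y and (c2, y2) != (c, y)
                             for (c2, y2) in pts))
```

For Example 1, `pareto_front([[(0, 2), (1, 1)], [(3, 1)]])` yields `[(4, 2)]`, matching the discussion above: the schedule π with F_max = 2 attains the optimal makespan 4.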
3.3. The algorithm for problem 1|β|#(C_max, F_max) and the time complexity analysis. With Lemma 3.8 in hand, we can present the following algorithm to generate all PoPs of problem 1|β|#(C_max, F_max) iteratively. Let
M = 1 + max{r_j : 1 ≤ j ≤ n} + Σ_{1≤j≤n} p_j.
Then M is sufficiently large so that M ≥ F + 1 for each (C, F) ∈ Ω(n_1, · · · , n_K).
Algorithm Family-CF: For problem 1|β|# (Cmax , Fmax ).
Step 1. (Initialization): Set C^{(1)}(0, · · · , 0) = F^{(1)}(0, · · · , 0) = 0. Then (0, 0) is the unique PoP of problem P(0, · · · , 0).
Step 2. (Recursion): Suppose that (i_1, · · · , i_K) ∈ X^+ and, for each (i′_1, · · · , i′_K) ∈ X with (i′_1, · · · , i′_K) ≤ (i_1, · · · , i_K) and (i′_1, · · · , i′_K) ≠ (i_1, · · · , i_K), all PoPs and the corresponding PoSs of problem P(i′_1, · · · , i′_K) have been generated. Then we find all PoPs and the corresponding PoSs of problem P(i_1, · · · , i_K) by the following subroutine:
Step 2.1. Set i := 0 and set F^{(0)}(i_1, · · · , i_K) := M.
Step 2.2. Set Y := F^{(i)}(i_1, · · · , i_K) − 1 and run algorithm DP(Y) on problem P_Y(i_1, · · · , i_K) to obtain an optimal schedule σ′_{i+1}(i_1, · · · , i_K). If C_max(σ′_{i+1}(i_1, · · · , i_K)) = +∞, return to Step 2. Otherwise, set C^{(i+1)}(i_1, · · · , i_K) = C_max(σ′_{i+1}(i_1, · · · , i_K)) and calculate F^{(i+1)}(i_1, · · · , i_K) by equation (14) together with equations (11) and (13). Then (C^{(i+1)}(i_1, · · · , i_K), F^{(i+1)}(i_1, · · · , i_K)) is the (i + 1)-th PoP. Run algorithm DP(Y) on problem P_Y(i_1, · · · , i_K) with Y := F^{(i+1)}(i_1, · · · , i_K), and write the obtained schedule as σ_{i+1}(i_1, · · · , i_K). Set i := i + 1 and go back to Step 2.2.
Step 3. (Termination): Output all PoPs and the corresponding PoSs of problem
P (n1 , · · · , nK ) (namely, 1|β|# (Cmax , Fmax )).
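Stripped of the backtracking details, Step 2 is an ε-constraint loop: each new PoP is obtained by forcing F_max strictly below the previous tight bound and re-minimizing C_max. The following is our own abstraction of this outer loop; `solve(Y)` stands for one round of Step 2.2, returning the objective pair (C_max, F_max) of a Pareto optimal schedule among those with maximum flow time at most Y, or None if no such schedule exists.

```python
def enumerate_pops(solve, big_m):
    """Skeleton of the outer loop of Algorithm Family-CF (our sketch).

    big_m plays the role of M: an upper bound exceeding every
    achievable maximum flow time by at least one."""
    pops = []
    bound = big_m
    while True:
        point = solve(bound - 1)  # force Fmax strictly below the last bound
        if point is None:
            return pops           # Step 3: no further PoP exists
        pops.append(point)
        bound = point[1]          # contract the bound to the new tight Fmax
```

Each iteration produces a point with strictly smaller F_max and strictly larger C_max than the previous one, so the loop recovers the whole Pareto front in order of increasing makespan.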
Clearly, the running time of the above algorithm largely depends on the number
of PoPs of problem 1|β|# (Cmax , Fmax ). Hence, in the following we propose a method
to estimate the number of PoPs by utilizing the characteristics of the critical batch
in the obtained PoSs.
Lemma 3.9. The number of the PoPs of problem 1|β|#(C_max, F_max) is at most K · (n/2) · Π_{1≤f≤K}(n_f + 1) = O(n^{K+1}).
Proof. Suppose that the above algorithm (Family-CF) finally outputs m PoSs
σ1 (n1 , · · · , nK ), σ2 (n1 , · · · , nK ), · · · , σm (n1 , · · · , nK ).
For convenience, we write σ_i for σ_i(n_1, · · · , n_K) (1 ≤ i ≤ m). Then we have C_max(σ_{i−1}) < C_max(σ_i) and F_max(σ_{i−1}) > F_max(σ_i) for 2 ≤ i ≤ m. Set Ω = {σ_1, · · · , σ_m}.
For each σ ∈ Ω, a job Jj is called a critical job of σ if Fj (σ) = Fmax (σ), i.e.,
Cj (σ) = Fmax (σ) + rj . Each batch containing a critical job is called a critical batch.
Recall that rf,1 < rf,2 < · · · < rf,nf for each family-index f . Then each critical
batch contains exactly one critical job which is just the first job of the batch. In
order to estimate the number of PoSs (or PoPs), we deal with Ω in the following
way.
Firstly, for each family-index f , let Ωf be the set of schedules σ ∈ Ω that end
with an f -batch. Then Ωf , f = 1, · · · , K, form a partition of Ω.
Secondly, for a fixed family-index f , define Ωf (hf , jf ) to be the set of schedules
σ ∈ Ωf so that the last critical batch of σ is {Jf,hf , · · · , Jf,jf }. Then Ωf (hf , jf ), 1 ≤
hf ≤ jf ≤ nf , form a partition of Ωf .
Finally, for a given schedule σ ∈ Ω_f(h_f, j_f), we use n′_g(σ) (g = 1, · · · , K) (shortly, n′_g) to denote the number of jobs of family F_g scheduled before the critical batch {J_{f,h_f}, · · · , J_{f,j_f}} in σ. Note that n′_f = h_f − 1. For each Ω_f(h_f, j_f) and a fixed family-index g (g ≠ f), let
x⃗ = (n′_1, · · · , n′_{g−1}, n′_{g+1}, · · · , n′_{f−1}, j_f, n′_{f+1}, · · · , n′_K) if g < f, and
x⃗ = (n′_1, · · · , n′_{f−1}, j_f, n′_{f+1}, · · · , n′_{g−1}, n′_{g+1}, · · · , n′_K) if g > f,
and define Ω_f(h_f, j_f; x⃗) to be the set of schedules σ ∈ Ω_f(h_f, j_f) so that, for each family-index z (1 ≤ z ≤ K and z ≠ g, f), exactly n′_z jobs of family F_z are scheduled before the last critical batch in σ. Without loss of generality, we assume in the following that g < f.
We claim that, for any two schedules σ ′ , σ ′′ ∈ Ωf (hf , jf ; ~x), if Cmax (σ ′ ) <
Cmax (σ ′′ ), then n′g (σ ′ ) > n′g (σ ′′ ). By contradiction, suppose that n′g (σ ′ ) ≤ n′g (σ ′′ ).
Note that σ′ and σ′′ are the schedules obtained by algorithm DP(Y) (see Step 2.2 of Algorithm Family-CF) for some upper bounds Y′ and Y′′ with Y′ > Y′′, respectively.
Consider the following three sub-problems:
SP-1: P_{Y′}(n′_1, · · · , n′_{g−1}, n′_g(σ′), n′_{g+1}, · · · , n′_{f−1}, j_f, n′_{f+1}, · · · , n′_K),
SP-2: P_{Y′}(n′_1, · · · , n′_{g−1}, n′_g(σ′′), n′_{g+1}, · · · , n′_{f−1}, j_f, n′_{f+1}, · · · , n′_K),
SP-3: P_{Y′′}(n′_1, · · · , n′_{g−1}, n′_g(σ′′), n′_{g+1}, · · · , n′_{f−1}, j_f, n′_{f+1}, · · · , n′_K).
Here, SP-1 and SP-2 have the same upper bound Y′ and process the same jobs from every family except F_g, while SP-2 and SP-3 process the same jobs but have different upper bounds.
Let C^{(1)}, C^{(2)} and C^{(3)} be the corresponding optimal makespans obtained by algorithm DP(Y) on SP-1, SP-2 and SP-3, respectively. From Lemma 3.4, we have C^{(1)} ≤ C^{(2)} due to the assumption n′_g(σ′) ≤ n′_g(σ′′). Since Y′ > Y′′ means that Y′ is a looser upper bound than Y′′, we have C^{(2)} ≤ C^{(3)}. Consequently, it holds that
F_max(σ′) = C^{(1)} − r_{f,h_f} ≤ C^{(2)} − r_{f,h_f} ≤ C^{(3)} − r_{f,h_f} = F_max(σ′′),
where the two equalities hold since the last (critical) batch of both σ′ and σ′′ is {J_{f,h_f}, · · · , J_{f,j_f}}. Further, since σ′ and σ′′ are two PoSs, the assumption C_max(σ′) < C_max(σ′′) implies F_max(σ′) > F_max(σ′′). This leads to a contradiction. The claim follows.
By the above claim and the fact that 0 ≤ n′_g(σ) ≤ n_g for each σ ∈ Ω_f(h_f, j_f; x⃗), we have |Ω_f(h_f, j_f; x⃗)| ≤ n_g + 1 for a fixed triple (h_f, j_f; x⃗). Note that, for each Ω_f(h_f, j_f) and a fixed family-index g (g ≠ f), at most Π_{1≤z≤K, z≠g,f}(n_z + 1) vectors x⃗ exist. Thus, we have
|Ω_f(h_f, j_f)| ≤ K · Σ_{x⃗} |Ω_f(h_f, j_f; x⃗)| ≤ K · Σ_{x⃗} (n_g + 1)
  ≤ K · (n_g + 1) · Π_{1≤z≤K, z≠f,g}(n_z + 1) = K · Π_{1≤z≤K, z≠f}(n_z + 1).
Since there are at most n_f(n_f + 1)/2 two-tuples (h_f, j_f) for a given family-index f, it follows that
|Ω_f| ≤ (n_f(n_f + 1)/2) · K · Π_{1≤z≤K, z≠f}(n_z + 1) = K · (n_f/2) · Π_{1≤z≤K}(n_z + 1).
Therefore, the number of the PoSs in Ω can be bounded by
|Ω| = Σ_{1≤f≤K} |Ω_f| ≤ K · Π_{1≤z≤K}(n_z + 1) · Σ_{1≤f≤K} n_f/2 = K · Π_{1≤z≤K}(n_z + 1) · (n/2) = O(n^{K+1}).
This completes the proof.
Theorem 3.10. Algorithm Family-CF solves problem 1|β|#(C_max, F_max) in O(n^{3K+3}) time.

Proof. Algorithm Family-CF dynamically generates all the PoPs and the corresponding PoSs of problem P(n_1, · · · , n_K) by generating and using the PoPs and the corresponding PoSs of the related sub-problems P(i_1, · · · , i_K) with (i_1, · · · , i_K) ∈ X^+ and (i_1, · · · , i_K) ≠ (n_1, · · · , n_K).
For each problem P(i_1, · · · , i_K) with (i_1, · · · , i_K) ∈ X^+, the schedules obtained in Step 2.2 of algorithm Family-CF are Pareto optimal, which is ensured by Lemmas 3.3-3.8, and all the PoPs and their corresponding PoSs are enumerated in Step 2.2. From Lemma 3.9, for each problem P(i_1, · · · , i_K) with (i_1, · · · , i_K) ∈ X^+, the number of PoPs is at most K · Π_{1≤f≤K}(i_f + 1) · Σ_{1≤f≤K} i_f/2 = O(n^{K+1}). From the implementation of algorithm Family-CF, obtaining each PoP and the corresponding PoS of problem P(i_1, · · · , i_K) requires running algorithm DP(Y) twice, which takes O(n^{K+1}) time, and calculating a tight upper bound F^{(i+1)}(i_1, · · · , i_K) by equation (14) together with equations (11) and (13), which takes O(n^{K+2}) time. Therefore, algorithm Family-CF can enumerate all the PoPs and the corresponding PoSs of problem P(i_1, · · · , i_K) in O(n^{K+1} · n^{K+2}) = O(n^{2K+3}) time. Since we have in total Π_{1≤f≤K}(n_f + 1) = O(n^K) such problems P(i_1, · · · , i_K) with (i_1, · · · , i_K) ∈ X^+, the overall time complexity is O(n^{3K+3}), which is polynomial since the number K of families is fixed.

Theorem 3.10 implies the following corollary.
Corollary 1. Problem 1|β|F_max is strongly polynomially solvable when the number of families is fixed.
3.4. Computational results. In this subsection, computational experiments are carried out to evaluate the performance of the proposed algorithm. Algorithm Family-CF for problem 1|β|#(C_max, F_max) is coded in C++ and run on a Pentium(R)-4, 300MHz PC with 2GB of RAM. More than 30 instances were generated randomly, and for all of them the proposed algorithm Family-CF enumerates all the PoPs within several seconds. In the following we list one of these instances.
There are two job families F_1 and F_2, each of which has ten jobs, whose processing times and release dates are given in Table 2 and Table 3. The output (PoPs) of running algorithm Family-CF on this instance is listed in Table 4. In Table 4 the job set J(i, j) denotes the set composed of the first i jobs of family F_1 and the first j jobs of family F_2, i.e., J(i, j) = {J_{1,1}, · · · , J_{1,i}; J_{2,1}, · · · , J_{2,j}}, and in each PoP (C, F) we have C = C_max and F = F_max.
Table 2: The jobs in family F1
F1   J1,1 J1,2 J1,3 J1,4 J1,5 J1,6 J1,7 J1,8 J1,9 J1,10
r1,i 0    2    3    4    6    7    9    11   14   17
p1,i 2    2    4    5    7    12   3    6    1    3
Table 3: The jobs in family F2
F2   J2,1 J2,2 J2,3 J2,4 J2,5 J2,6 J2,7 J2,8 J2,9 J2,10
r2,i 1    2    4    6    8    10   11   14   16   19
p2,i 2    1    1    4    3    8    10   2    11   9

4. Conclusions. In the foregoing sections, we have shown that the single-criterion problem 1|β|F_max is strongly NP-hard when the number of families is arbitrary. For the case of a fixed number of families, we turned to the Pareto optimization problem 1|β|#(C_max, F_max) and presented an algorithm that enumerates all the PoPs in O(n^{3K+3}) time. As a byproduct, problem 1|β|F_max is strongly polynomially solvable in this case. In this paper we gave a complete solution of the considered problem; however, the time complexity of the proposed algorithm is on the high side. Therefore, more efficient algorithms for this model should be devised in future research. Alternatively, it is also interesting to investigate models with other criteria. For example, the corresponding bounded model (b < n), which is harder, should be considered. Moreover, one may study other types of multi-criteria scheduling problems, for example, hierarchical optimization problems or constrained optimization problems.

Acknowledgments. The authors would like to thank the associate editor and an
anonymous referee for their constructive comments and kind suggestions. Research
of this paper was supported by NSFC (11671368), NSFC (11571321), and NSF-
Henan (15IRTSTHN006).

Table 4: The PoPs of the instances
jobs PoPs jobs PoPs jobs PoPs jobs PoPs jobs PoPs
J (1, 1) (4, 3) J (1, 2) (4, 3) J (2, 1) (6, 4) J (2, 2) (6, 4) J (1, 3) (5, 3)
(5, 5)
J (2, 3) (5, 4) J (3, 1) (8, 6) J (3, 2) (8, 5) J (3, 3) (9, 5) J (1, 4) (2, 2)
(7, 7) (7, 7) (8, 7)
J (2, 4) (4, 2) J (3, 4) (7, 5) J (4, 1) (3, 2) J (4, 2) (4, 2) J (4, 3) (5, 2)
J (4, 4) (10, 4) J (1, 5) (2, 2) J (2, 5) (4, 2) J (3, 5) (7, 5) J (4, 5) (13, 5)
(9, 6) (9, 6)
J (5, 1) (3, 2) J (5, 2) (4, 2) J (5, 3) (5, 2) J (5, 4) (10, 4) J (5, 5) (13, 5)
(12, 6)
J (1, 6) (2, 2) J (2, 6) (4, 2) J (3, 6) (7, 5) J (4, 6) (9, 6) J (5, 6) (13, 10)
J (6, 1) (3, 2) J (6, 2) (4, 2) J (6, 3) (5, 2) J (6, 4) (10, 4) J (6, 5) (13, 5)
(12, 6)
J (6, 6) (18, 10) J (1, 7) (2, 2) J (2, 7) (4, 2) J (3, 7) (7, 5) J (4, 7) (9, 6)
J (5, 7) (13, 10) J (6, 7) (22, 12) J (7, 1) (3, 2) J (7, 2) (4, 2) J (7, 3) (5, 2)
(21, 13)
(19, 15)
J (7, 4) (10, 4) J (7, 5) (13, 5) J (7, 6) (18, 10) J (7, 7) (22, 12) J (1, 8) (2, 2)
(12, 6) (21, 13)
J (2, 8) (4, 2) J (3, 8) (7, 5) J (4, 8) (9, 6) J (5, 8) (13, 10) J (6, 8) (24, 12)
(23, 13)
(19, 15)
J (7, 8) (24, 12) J (8, 1) (3, 2) J (8, 2) (4, 2) J (8, 3) (5, 2) J (8, 4) (10, 4)
(23, 13)
(21, 15)
J (8, 5) (13, 5) J (8, 6) (18, 10) J (8, 7) (22, 12) J (8, 8) (24, 12) J (1, 9) (2, 2)
(12, 6) (21, 13) (23, 13)
J (2, 9) (4, 2) J (3, 9) (7, 5) J (4, 9) (9, 6) J (5, 9) (13, 10) J (6, 9) (19, 15)
J (7, 9) (21, 15) J (8, 9) (25, 16) J (9, 1) (3, 2) J (9, 2) (4, 2) J (9, 3) (5, 2)
(23, 17)
Table 4 (continued): The PoPs of the instances
J (9, 4) (10, 4) J (9, 5) (13, 5) J (9, 6) (18, 10) J (9, 7) (22, 12) J (9, 8) (24, 12)
(12, 6) (21, 13) (23, 13)
J (9, 9) (25, 16) J (1, 10) (2, 2) J (2, 10) (4, 2) J (3, 10) (7, 5) J (4, 10) (9, 6)
(24, 17)
J (5, 10) (13, 10) J (6, 10) (19, 15) J (7, 10) (21, 15) J (8, 10) (25, 16) J (9, 10) (25, 16)
(23, 17) (24, 17)
J (10, 1) (3, 2) J (10, 2) (4, 2) J (10, 3) (5, 2) J (10, 4) (10, 4) J (10, 5) (13, 5)
(12, 6)
J (10, 6) (18, 10) J (10, 7) (22, 12) J (10, 8) (24, 12) J (10, 9) (25, 16) J (10, 10) (25, 16)
(21, 13) (23, 13)
SCHEDULING FAMILY JOBS 21

[3] P. Brucker, A. Gladky, H. Hoogeveen, M. Y. Kovalyov, C. N. Potts and T. Tautenhahn,


Scheduling a batching machine, Journal of Scheduling, 1 (1998), 31–54.
[4] K. Chakhlevitch, C. A. Glass and H. Kellerer, Batch machine production with perishability
time windows and limited batch size, European Journal of Operational Research, 210 (2011),
39–47.
[5] T. C. E. Cheng, Z. H. Liu and W. C. Yu, Scheduling jobs with release dates and deadlines
on a batch processing machine, IIE Transactions, 33 (2001), 685–690.
[6] Y. Gao and J. J. Yuan, A note on Pareto minimizing total completion time and maximum
cost, Operational Research Letter , 43 (2015), 80–82.
[7] Z. C. Geng and J. J. Yuan, Pareto optimization scheduling of family jobs on a p-batch machine
to minimize makespan and maximum lateness, Theoretical Computer Science, 570 (2015),
22–29.
[8] Z. C. Geng and J. J. Yuan, A note on unbounded parallel-batch scheduling, Information
Processing Letters, 115 (2015), 969–974.
[9] C. He, Y. X. Lin and J. J. Yuan, Bicriteria scheduling on a batching machine to minimize
maximum lateness and makespan, Theoretical Computer Science, 381 (2007), 234–240.
[10] H. Hoogeveen, Multicriteria scheduling, European Journal of Operational Research, 167
(2005), 592–623.
[11] J. A. Hoogeveen, Single-machine scheduling to minimize a function of two or three maximum
cost criteria, Journal of Algorithms, 21 (1996), 415–433.
[12] J. A. Hoogeveen and S. L. van de Velde, Minimizing total completion time and maximum
cost simultaneously is solvable in polynomial time, Operations Research Letters, 17 (1995),
205–208.
[13] J. A. Hoogeveen and S. L. van de Velde, Scheduling with target start times, European Journal
of Operational Research, 129 (2001), 87–94.
[14] Y. Ikura and M. Gimple, Efficient scheduling algorithms for a single batch processing machine,
Operations Research Letters, 5 (1986), 61–65.
[15] F. Jolai, Minimizing number of tardy jobs on a batch processing machine with incompatible
job families, European Journal of Operational Research, 162 (2005), 184–190.
[16] C. Y. Lee, R. Uzsoy and L. A. Martin-Vega, Efficient algorithms for scheduling semi-conductor
burn-in operations, Operations Research, 40 (1992), 764–775.
[17] S. S. Li and R. X. Chen, Single-machine parallel-batching scheduling with family jobs to
minimize weighted number of tardy jobs, Computers and Industrial Engineering, 73 (2014),
5–10.
[18] S. S. Li, C. T. Ng, T. C. E. Cheng and J. J. Yuan, Parallel-batch scheduling of deteriorating
jobs with release dates to minimize the makespan, European Journal of Operational Research,
210 (2011), 482–488.
[19] S. S. Li and J. J. Yuan, Parallel-machine parallel-batching scheduling with family jobs and
release dates to minimize makespan, Journal of Combinatorial Optimization, 19 (2010), 84–
93.
[20] Z. H. Liu and W. C. Yu, Scheduling one batch processor subject to job release dates, Discrete
Applied Mathematics, 105 (2000), 129–136.
[21] L. L. Liu and F. Zhang, Minimizing the number of tardy jobs on a batch processing machine
with incompatible job families, ISECS International Colloquium on Computing, Communi-
cation, Control, and Management, 3 (2008), 277–280.
[22] Z. H. Liu, J. J. Yuan and T. C. E. Cheng, On scheduling an unbounded batch machine,
Operations Research Letters, 31 (2003), 42–48.
[23] S. Malve and R. Uzsoy, A genetic algorithm for minimizing maximum lateness on parallel
identical batch processing machines with dynamic job arrivals and incompatible job families,
Computers and Operations Research, 34 (2007), 3016–3028.
[24] C. X. Miao, Y. Z. Zhang and Z. G. Cao, Bounded parallel-batch scheduling on single and
multi machines for deteriorating jobs, Information Processing Letters, 111 (2011), 798–803.
[25] Q. Q. Nong, C. T. Ng and T. C. E. Cheng, The bounded single-machine parallel-batching
scheduling problem with family jobs and release dates to minimize makespan, Operations
Research Letters, 36 (2008), 61–66.
[26] E. S. Pan, G. N. Wang, L. F. Xi, L. Chen and X. L. Han, Single-machine group scheduling
problem considering learning, forgetting effects and preventive maintenance, International
Journal of Production Research, 52 (2014), 5690–5704.
[27] J. Pei, X. B. Liu, P. M. Pardalos, A. Migdalas and S. L. Yang, Serial-batching scheduling
with time-dependent setup time and effects of deterioration and learning on a single-machine,
Journal of Global Optimization, 67 (2017), 251–262.
[28] J. Pei, B. Y. Cheng, X. B. Liu, P. M. Pardalos and M. Kong, Single-machine and parallel-machine
serial-batching scheduling problems with position-based learning effect and linear setup
time, Annals of Operations Research, (2017), 1–25.
[29] J. Pei, X. B. Liu, P. M. Pardalos, W. J. Fan and S. L. Yang, Single machine serial-batching
scheduling with independent setup time and deteriorating job processing times, Optimization
Letters, 9 (2015), 91–104.
[30] J. Pei, X. B. Liu, P. M. Pardalos, W. J. Fan and S. L. Yang, Scheduling deteriorating jobs on
a single serial-batching machine with multiple job types and sequence-dependent setup times,
Annals of Operations Research, 249 (2017), 175–195.
[31] J. Pei, P. M. Pardalos, X. B. Liu, W. J. Fan and S. L. Yang, Serial batching scheduling of
deteriorating jobs in a two-stage supply chain to minimize the makespan, European Journal
of Operational Research, 244 (2015), 13–25.
[32] C. N. Potts and M. Y. Kovalyov, Scheduling with batching: a review, European Journal of
Operational Research, 120 (2000), 228–249.
[33] X. L. Qi, S. G. Zhou and J. J. Yuan, Single machine parallel-batch scheduling with deteriorating
jobs, Theoretical Computer Science, 410 (2009), 830–836.
[34] V. T’kindt and J. C. Billaut, Multicriteria Scheduling: Theory, Models and Algorithms, 2nd
edition, Springer, Berlin, 2006.
[35] H. Xuan and L. X. Tang, Scheduling a hybrid flowshop with batch production at the last
stage, Computers and Operations Research, 34 (2007), 2718–2733.
[36] J. J. Yuan, Z. H. Liu, C. T. Ng and T. C. E. Cheng, The unbounded single machine par-
allel batch scheduling problem with family jobs and release dates to minimize makespan,
Theoretical Computer Science, 320 (2004), 199–212.
Received February 2017; revised August 2017.
E-mail address: gengzhichao16@sina.com
E-mail address: yuanjj@zzu.edu.cn