MIT 2.852
Manufacturing Systems Analysis
Lectures 19-21
Stanley B. Gershwin
Spring, 2007
Definitions
Events may be controllable or not, and predictable or not.

                  controllable       uncontrollable
  predictable     loading a part     lunch
  unpredictable   ???                machine failure
Definitions
Scheduling is the selection of times for future controllable events. Ideally, scheduling systems should deal with all
controllable events, and not just production.
That is, they should select times for operations,
set-up changes, preventive maintenance, etc.
They should at least be aware of set-up changes, preventive maintenance, etc. when they select times for operations.
Definitions
Because of recurring random events, scheduling is an on-going process, and not a one-time calculation.
Scheduling, or shop floor control, is the bottom of the scheduling/planning hierarchy. It translates plans into events.
Issues in Factory Control
* Problems are dynamic; current decisions influence future behavior and requirements.
* There are large numbers of parameters, time-varying quantities, and possible decisions.
* Some time-varying quantities are stochastic.
* Some relevant information (MTTR, MTTF, amount of inventory available, etc.) is not known.
Problem
Discrete Time, Discrete State, Deterministic
[Figure: a network of nodes A through N and Z, with a cost labeling each link; the problem is to find the minimum-cost path from A to Z.]
Dynamic Programming
Example
Problem
Let g(i, j) be the cost of traversing the link from i to j. Let i(t) be the t-th node on a path from A to Z. Then the path cost is
$$\sum_{t=1}^{T} g(i(t-1), i(t))$$
where T is the number of nodes on the path, i(0) = A, and i(T) = Z. T is not specified; it is part of the solution.
Dynamic Programming
Example
Solution
A possible approach would be to enumerate all possible paths (possible solutions). However, there can be a lot of possible solutions. Dynamic programming reduces the number of possible solutions that must be considered.
Instead of solving the problem only for A as the initial point, we solve it for all possible initial points. For every node i, define J(i) to be the optimal cost to go from node i to node Z (the cost of the optimal path from i to Z). We can write
$$J(i) = \min_{\text{paths } i \to Z}\ \sum_{t=1}^{T} g(i(t-1), i(t))$$
where i(0) = i; i(T) = Z; and (i(t-1), i(t)) is a link for every t.
Then J(i) satisfies
$$J(Z) = 0$$
and, if the optimal path from i to Z traverses link (i, j),
$$J(i) = g(i, j) + J(j).$$
[Figure: node i, the link (i, j), and the remainder of the optimal path from j to Z.]
Suppose that for each node j for which a link exists from i to j, the optimal path and the optimal cost J(j) from j to Z are known.
Then the optimal path from i to Z is the one that minimizes the sum of the costs from i to j and from j to Z. That is,
$$J(i) = \min_j\,[\,g(i, j) + J(j)\,].$$
Bellman's Principle of Optimality: if i and j are nodes on an optimal path from A to Z, then the portion of that path between i and j is itself an optimal path from i to j.
[Figure: an optimal path from A to Z passing through intermediate nodes, including j.]
Example: assume that we have determined that J(O) = 6 and J(J) = 11. To calculate J(K):
$$J(K) = \min\{\,g(K, O) + J(O),\ g(K, J) + J(J)\,\} = \min\{\,3 + 6,\ 9 + 11\,\} = 9.$$
Algorithm:
1. Set J(Z) = 0.
2. Find some node i such that
   * J(i) has not yet been found, and
   * for each node j for which link (i, j) exists, J(j) has already been calculated.
   Assign J(i) according to
   $$J(i) = \min_j\,[\,g(i, j) + J(j)\,].$$
3. Repeat Step 2 until J(A) has been assigned.
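A minimal Python sketch of this backward labeling, using a small made-up network (the node names and link costs are illustrative, not those in the lecture's figure):

```python
# Backward dynamic programming on a directed network with terminal node Z.
# g[(i, j)] is the cost of link (i, j).  We repeatedly label any node whose
# successors all have a known cost-to-go, exactly as in the algorithm above.

g = {                              # illustrative link costs
    ("A", "B"): 1, ("A", "C"): 4,
    ("B", "D"): 2, ("B", "Z"): 7,
    ("C", "D"): 1, ("D", "Z"): 3,
}

succ = {}                          # successors of each node
for (i, j) in g:
    succ.setdefault(i, []).append(j)

J = {"Z": 0.0}                     # step 1: J(Z) = 0
policy = {}                        # minimizing successor of each labeled node

progress = True
while progress:
    progress = False
    for i in succ:
        if i not in J and all(j in J for j in succ[i]):
            # step 2: J(i) = min_j [ g(i, j) + J(j) ]
            best = min(succ[i], key=lambda j: g[(i, j)] + J[j])
            J[i] = g[(i, best)] + J[best]
            policy[i] = best
            progress = True

print(J["A"])                      # optimal cost from A to Z (6 for these data)
node, path = "A", ["A"]
while node != "Z":                 # read the optimal path off the policy
    node = policy[node]
    path.append(node)
print(path)                        # ['A', 'B', 'D', 'Z']
```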
[Figure: the example network annotated with the cost-to-go J(i) computed for every node.]
The important features of a dynamic programming problem are:
* the state (i);
* the decision (to go to j after i);
* the objective function $\sum_{t=1}^{T} g(i(t-1), i(t))$;
* the cost-to-go function J(i);
* that the solution is determined for every i, not just A and not just nodes on the optimal path;
* that J(i) depends on the nodes to be visited after i, not those between A and i. The only thing that matters is the present state and the future;
* that J(i) is obtained by working backwards.
This problem was discrete time, discrete state, deterministic. Other versions:
* discrete time, discrete state, stochastic
* continuous time, discrete state, deterministic
* continuous time, discrete state, stochastic
* continuous time, mixed state, deterministic
* continuous time, mixed state, stochastic
In stochastic systems, we optimize the expected cost.
Suppose that if you are at i and you choose j, you actually go to k with probability p(i, j, k). Then the cost of a sequence of choices is random. The objective function is
$$E \sum_{t=1}^{T} g(i(t-1), i(t)).$$
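For an acyclic network, one way to adapt the backward recursion to this stochastic case is J(i) = min_j Σ_k p(i, j, k) [g(i, k) + J(k)], charging the cost of the link actually traversed. A small illustrative sketch (links, costs, and probabilities are assumed, not from the lecture):

```python
# Stochastic version of the network example: choosing j from node i actually
# leads to node k with probability p(i, j, k).  On an acyclic network,
#   J(i) = min_j  sum_k p(i, j, k) * ( g(i, k) + J(k) ).

g = {("A", "B"): 2, ("A", "C"): 5, ("B", "Z"): 4, ("C", "Z"): 1}
p = {   # p[(i, j)] maps the chosen successor j to a distribution over actual successors k
    ("A", "B"): {"B": 0.8, "C": 0.2},
    ("A", "C"): {"C": 1.0},
    ("B", "Z"): {"Z": 1.0},
    ("C", "Z"): {"Z": 1.0},
}

J = {"Z": 0.0}
for i in ["C", "B", "A"]:                      # label nodes in backward order
    choices = [j for (node, j) in g if node == i]
    J[i] = min(
        sum(prob * (g[(i, k)] + J[k]) for k, prob in p[(i, j)].items())
        for j in choices
    )
print(J["A"])                                  # expected optimal cost from A to Z
```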
Context: the planning/scheduling hierarchy.
* Long term: factory design, capital expansion, etc.
* Medium term: demand planning, staffing, etc.
* Short term: response to short-term events; part release and dispatch.
In this problem, we deal with the response to short-term events. The factory and the demand are given to us; we must calculate short-term production rates; these rates are the targets that release and dispatch must achieve.
Example: a perfectly flexible machine, two part types.
[Figure: a single machine with failure and repair rates p and r, producing the two part types at rates u1(t) and u2(t).]
τ_i time units are required to make a Type i part, i = 1, 2. Constant demand rates d1, d2. Exponential failures and repairs with rates p and r.
Cumulative Production and Demand
[Figure: cumulative production P_i(t) and cumulative demand D_i(t) = d_i t versus time t; the vertical difference between them is the surplus x_i(t).]
Objective: minimize the difference between cumulative production and cumulative demand. The surplus satisfies
$$x_i(t) = P_i(t) - D_i(t).$$
Feasibility: for the problem to be feasible, it must be possible to make approximately d_i T Type i parts in a long time period of length T, i = 1, 2. (Why approximately?) The time required to make d_i T parts is τ_i d_i T. During this period, the total up time of the machine, i.e. the time available for production, is approximately [r/(r + p)] T. Therefore, we must have
$$\tau_1 d_1 T + \tau_2 d_2 T \le \frac{r}{r+p}\,T,$$
or
$$\sum_{i=1}^{2} \tau_i d_i \le \frac{r}{r+p}.$$
If this condition is not satisfied, the demand cannot be met. What will happen to the surplus? The feasibility condition is also written
$$\sum_{i=1}^{2} \frac{d_i}{\mu_i} \le \frac{r}{r+p},$$
where μ_i = 1/τ_i.
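As a quick numeric illustration, the feasibility condition can be checked directly; the rates below are made-up values, not the lecture's:

```python
# Feasibility check for the two-part-type problem:
#   sum_i tau_i * d_i  <=  r / (r + p),   with tau_i = 1 / mu_i.

def is_feasible(tau, d, r, p):
    """tau and d are sequences of processing times and demand rates."""
    workload = sum(t * di for t, di in zip(tau, d))   # machine time needed per unit time
    uptime_fraction = r / (r + p)                      # long-run fraction of time the machine is up
    return workload <= uptime_fraction

# Illustrative numbers (assumed):
print(is_feasible(tau=[0.2, 0.4], d=[1.0, 1.0], r=0.1, p=0.01))  # 0.6 <= 0.909... -> True
```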
The surplus satisfies
$$P_i(t) = \int_0^t u_i(s)\,ds; \qquad D_i(t) = d_i t;$$
$$\frac{dx_i(t)}{dt} = u_i(t) - d_i.$$
To define the objective more precisely, let there be a function g(x1, x2) such that
* g is convex,
* g(0, 0) = 0,
* g(x1, x2) → ∞ as |x1| → ∞ or |x2| → ∞.
Examples: [figures of convex surplus-cost functions g(x1, x2)].
Objective:
$$\min\, E \int g(x_1(t), x_2(t))\,dt.$$
[Figure: surface plot of a convex g(x1, x2).]
Constraints:
$$u_1(t) \ge 0; \qquad u_2(t) \ge 0.$$
Assume the machine is up for a short period [t, t + δt]. Let δt be small enough so that u_i is constant; that is, u_i(s) = u_i(t), s ∈ [t, t + δt]. The machine makes u_i(t) δt parts of type i. The time required to make that number of Type i parts is τ_i u_i(t) δt. Therefore
$$\sum_i \tau_i u_i(t)\,\delta t \le \delta t,$$
or
$$\sum_i \tau_i u_i(t) \le 1.$$
[Figure: the feasible set of (u1, u2): a triangle with intercepts 1/τ1 on the u1 axis and 1/τ2 on the u2 axis.]
Machine state dynamics: define α(t) to be the repair state of the machine at time t. α(t) = 1 means the machine is up; α(t) = 0 means the machine is down.
$$\mathrm{prob}(\alpha(t+\delta t) = 0 \mid \alpha(t) = 1) = p\,\delta t + o(\delta t)$$
$$\mathrm{prob}(\alpha(t+\delta t) = 1 \mid \alpha(t) = 0) = r\,\delta t + o(\delta t)$$
The constraints may be written
$$\sum_i \tau_i u_i(t) \le \alpha(t); \qquad u_i(t) \ge 0.$$
The problem is therefore to choose u(t) to minimize the objective, subject to:
$$\frac{dx_i(t)}{dt} = u_i(t) - d_i$$
$$\mathrm{prob}(\alpha(t+\delta t) = 0 \mid \alpha(t) = 1) = p\,\delta t + o(\delta t)$$
$$\mathrm{prob}(\alpha(t+\delta t) = 1 \mid \alpha(t) = 0) = r\,\delta t + o(\delta t)$$
$$\sum_i \tau_i u_i(t) \le \alpha(t); \qquad u_i(t) \ge 0.$$
Dynamic Programming
Elements of a DP Problem
* state x: all the information that is available to determine the future evolution of the system.
* control u: the actions taken by the decision-maker.
* objective function J: the quantity that must be minimized.
* dynamics: the evolution of the state as a function of the control variables and random events.
* constraints: the limitations on the set of allowable controls.
* initial conditions: the values of the state variables at the start of the time interval over which the problem is described. There are also sometimes terminal conditions, such as in the network example.
Dynamic Programming
Elements of a DP Solution
* control policy: u(x(t), t). A stationary or time-invariant policy is of the form u(x(t)).
* value function (also called the cost-to-go function): the value J(x, t) of the objective function when the optimal control policy is applied starting at time t, when the initial state is x(t) = x.
Bellman's Equation
Continuous x, t; deterministic

Problem:
$$\min_{u(t),\,0 \le t \le T} \int_0^T g(x(t), u(t))\,dt + F(x(T))$$
such that
$$\frac{dx(t)}{dt} = f(x(t), u(t), t),$$
x(0) specified; h(x(t), u(t)) ≤ 0;
x ∈ R^n, u ∈ R^m, f ∈ R^n, h ∈ R^k, and g and F are scalars.
Data: T, x(0), and the functions f, g, h, and F.
The cost-to-go function is
$$J(x, t_1) = \min_{u(t),\,t_1 \le t \le T}\left[\int_{t_1}^{T} g(x(t), u(t))\,dt + F(x(T))\right], \qquad x(t_1) = x.$$
Then
$$J(x(0), 0) = \min_{u(t),\,0 \le t \le T}\left[\int_0^T g(x(t), u(t))\,dt + F(x(T))\right]$$
$$= \min_{u(t),\,0 \le t \le t_1}\left[\int_0^{t_1} g(x(t), u(t))\,dt + \min_{u(t),\,t_1 \le t \le T}\left(\int_{t_1}^{T} g(x(t), u(t))\,dt + F(x(T))\right)\right]$$
$$= \min_{u(t),\,0 \le t \le t_1}\left[\int_0^{t_1} g(x(t), u(t))\,dt + J(x(t_1), t_1)\right],$$
where
$$J(x(t_1), t_1) = \min_{u(t),\,t_1 \le t \le T}\left[\int_{t_1}^{T} g(x(t), u(t))\,dt + F(x(T))\right]$$
such that
$$\frac{dx(t)}{dt} = f(x(t), u(t), t).$$
Break up [t1, T] into [t1, t1 + δt] ∪ [t1 + δt, T]:
$$J(x(t_1), t_1) = \min_{u(t_1)}\left\{\int_{t_1}^{t_1+\delta t} g(x(t), u(t))\,dt + J(x(t_1+\delta t), t_1+\delta t)\right\},$$
where δt is small enough so that we can approximate x(t) and u(t) with the constants x(t1) and u(t1) during the interval. Then, approximately,
$$J(x(t_1), t_1) = \min_{u(t_1)}\left[ g(x(t_1), u(t_1))\,\delta t + J(x(t_1+\delta t), t_1+\delta t)\right].$$
Or, expanding J in a Taylor series,
$$J(x(t_1), t_1) = \min_{u(t_1)}\left[ g(x(t_1), u(t_1))\,\delta t + J(x(t_1), t_1) + \frac{\partial J}{\partial x}(x(t_1), t_1)\,\delta x + \frac{\partial J}{\partial t}(x(t_1), t_1)\,\delta t \right].$$
Therefore
$$J(x, t_1) = J(x, t_1) + \min_u\left[ g(x, u)\,\delta t + \frac{\partial J}{\partial x}(x, t_1)\, f(x, u, t_1)\,\delta t + \frac{\partial J}{\partial t}(x, t_1)\,\delta t \right],$$
where x = x(t1) and u = u(t1) = u(x(t1), t1). Then (dropping the subscript on t1)
$$-\frac{\partial J}{\partial t}(x, t) = \min_u\left[ g(x, u) + \frac{\partial J}{\partial x}(x, t)\, f(x, u, t) \right].$$
This is the Bellman equation. It is the counterpart of the recursion equation for the network example.
If we had a guess of J(x, t) (for all x and t), we could confirm it by performing the minimization.
If we knew J(x, t) for all x and t, we could determine u by performing the minimization. u could then be written
$$u = U\!\left(x, \frac{\partial J}{\partial x}, t\right).$$
This would be a feedback law.
The Bellman equation is usually impossible to solve analytically or numerically. There are some important special cases that can be solved analytically.
Bellman's Equation
Example: Bang-Bang Control

$$\min \int_0^{\infty} |x|\,dt$$
subject to
$$\frac{dx}{dt} = u,$$
x(0) specified; −1 ≤ u ≤ 1.
The Bellman equation is
$$-\frac{\partial J}{\partial t}(x, t) = \min_{u,\,-1 \le u \le 1}\left[\, |x| + \frac{\partial J}{\partial x}(x, t)\,u \right].$$
J(x, t) = J(x) is a solution because the time horizon is infinite and t does not appear explicitly in the problem data (i.e., g(x) = |x| is not a function of t). Therefore
$$0 = \min_{u,\,-1 \le u \le 1}\left[\, |x| + \frac{dJ}{dx}(x)\,u \right].$$
J(0) = 0 because if x(0) = 0 we can choose u(t) = 0 for all t. Then x(t) = 0 for all t and the integral is 0. There is no possible choice of u(t) that will make the integral less than 0, so this is the minimum.
Consider the set of x where dJ/dx(x) < 0. For x in that set, u = 1, so
$$0 = |x| + \frac{dJ}{dx}(x),$$
or
$$\frac{dJ}{dx}(x) = -|x|.$$
Similarly, if x is such that dJ/dx(x) > 0, then u = −1 and
$$\frac{dJ}{dx}(x) = |x|.$$
To complete the solution, we must determine where dJ/dx > 0, < 0, and = 0. We already know that J(0) = 0. We must have J(x) > 0 for all x ≠ 0 because then |x| > 0, so the integral of |x(t)| must be positive. Since J(x) > J(0) for all x ≠ 0, we must have
$$\frac{dJ}{dx}(x) < 0 \ \text{ for } x < 0, \qquad \frac{dJ}{dx}(x) > 0 \ \text{ for } x > 0.$$
Therefore
$$\frac{dJ}{dx}(x) = x,$$
so
$$J(x) = \frac{x^2}{2}$$
and
$$u = \begin{cases} 1 & \text{if } x < 0 \\ 0 & \text{if } x = 0 \\ -1 & \text{if } x > 0 \end{cases}$$
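A quick numerical check of this solution: simulating dx/dt = u with the bang-bang control and integrating |x(t)| should reproduce J(x(0)) = x(0)²/2 (forward Euler sketch with illustrative values):

```python
# With u = -sign(x), the accumulated cost from x(0) should approach x(0)^2 / 2.

def cost_of_bang_bang(x0, dt=1e-4, horizon=20.0):
    x, t, cost = x0, 0.0, 0.0
    while t < horizon:
        u = 0.0 if x == 0 else (1.0 if x < 0 else -1.0)   # u = 1 if x < 0, -1 if x > 0
        cost += abs(x) * dt                               # accumulate |x(t)| dt
        x += u * dt                                       # dx/dt = u
        t += dt
    return cost

x0 = 3.0
print(cost_of_bang_bang(x0), x0**2 / 2)   # both close to 4.5
```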
Bellman's Equation
Continuous x, t; discrete α; stochastic

Problem:
$$\min_u\ E\left[\int_0^T g(x(t), u(t))\,dt + F(x(T))\right]$$
subject to
$$\frac{dx(t)}{dt} = f(x, \alpha, u, t),$$
$$\mathrm{prob}\,[\alpha(t+\delta t) = i \mid \alpha(t) = j] = \lambda_{ij}\,\delta t \ \text{ for all } i, j,\ i \ne j,$$
x(0), α(0) specified; h(x(t), α(t), u(t)) ≤ 0.
Getting the Bellman equation in this case is more complicated because α changes by large amounts when it changes. Let H(α) be some function of α. We need to calculate E H(α(t + δt)) = E{H(α(t + δt)) | α(t)}:
$$E\,H(\alpha(t+\delta t)) = \sum_{j \ne \alpha(t)} H(j)\,\lambda_{j\alpha(t)}\,\delta t + H(\alpha(t))\left[1 - \sum_{j \ne \alpha(t)} \lambda_{j\alpha(t)}\,\delta t\right] + o(\delta t).$$
The cost-to-go satisfies
$$J(x(t), \alpha(t), t) = \min_{u(s),\,t \le s \le t+\delta t} E\left[\int_t^{t+\delta t} g(x(s), u(s))\,ds + \min_{u(s),\,t+\delta t \le s \le T} E\left(\int_{t+\delta t}^{T} g(x(s), u(s))\,ds + F(x(T))\right)\right]$$
$$= \min_{u(s),\,t \le s \le t+\delta t} E\left[\int_t^{t+\delta t} g(x(s), u(s))\,ds + J(x(t+\delta t), \alpha(t+\delta t), t+\delta t)\right].$$
Next, we expand the second term in a Taylor series about x(t). We leave α(t + δt) alone, for now.
$$J(x(t), \alpha(t), t) = \min_{u(t)} E\left\{ g(x(t), u(t))\,\delta t + J(x(t), \alpha(t+\delta t), t) + \frac{\partial J}{\partial x}\,\delta x + \frac{\partial J}{\partial t}\,\delta t\right\} + o(\delta t),$$
where δx = x(t + δt) − x(t).
Using the expansion of E H(α(t + δt)),
$$J(x(t), \alpha(t), t) = \min_{u(t)}\Big[\, g(x(t), u(t))\,\delta t + J(x(t), \alpha(t), t) + \sum_j J(x(t), j, t)\,\lambda_{j\alpha(t)}\,\delta t + \frac{\partial J}{\partial x}(x(t), \alpha(t), t)\,\delta x(t) + \frac{\partial J}{\partial t}(x(t), \alpha(t), t)\,\delta t\Big] + o(\delta t)$$
(with the convention λ_{αα} = −Σ_{j≠α} λ_{jα}). Below we abbreviate x(t) with x, α(t) with α, and u(t) with u.
We can subtract J(x, α, t) from both sides and use the expression for δx to get
$$J(x, \alpha, t) = \min_u\Big[\, g(x, u)\,\delta t + J(x, \alpha, t) + \sum_j J(x, j, t)\,\lambda_{j\alpha}\,\delta t + \frac{\partial J}{\partial x}(x, \alpha, t)\,\delta x + \frac{\partial J}{\partial t}(x, \alpha, t)\,\delta t + o(\delta t)\Big],$$
or,
$$0 = \min_u\Big[\, g(x, u)\,\delta t + \sum_j J(x, j, t)\,\lambda_{j\alpha}\,\delta t + \frac{\partial J}{\partial x}(x, \alpha, t)\,f(x, \alpha, u, t)\,\delta t + \frac{\partial J}{\partial t}(x, \alpha, t)\,\delta t + o(\delta t)\Big].$$
Dividing by δt and letting δt → 0,
$$-\frac{\partial J}{\partial t}(x, \alpha, t) = \sum_j J(x, j, t)\,\lambda_{j\alpha} + \min_u\left[\, g(x, u) + \frac{\partial J}{\partial x}(x, \alpha, t)\,f(x, \alpha, u, t)\right].$$
Bad news: usually impossible to solve. Good news: insight.
An approximation: when T is large and f is not a function of t, typical trajectories look like this:
[Figure: a sample trajectory of x(t) that, after a transient, fluctuates around a steady-state level.]
That is, in the long run, x approaches a steady-state probability distribution. Let J* be the expected value of g(x, u), where u is the optimal control. Suppose we started the problem with x(0) a random variable whose probability distribution is the steady-state distribution. Then, for large T,
$$EJ = \min_u E\left[\int_0^T g(x(t), u(t))\,dt + F(x(T))\right] \approx J^* T.$$
For x(0) and α(0) specified,
$$J(x(0), \alpha(0), 0) \approx J^* T + W(x(0), \alpha(0)),$$
or, more generally, for x(t) = x and α(t) = α specified,
$$J(x, \alpha, t) \approx J^*(T - t) + W(x, \alpha).$$
Flexible Manufacturing System Control

A single machine, N part types, with surplus dynamics
$$\frac{dx_i(t)}{dt} = u_i(t) - d_i, \qquad i = 1, \ldots, N.$$
Define
$$\Omega(\alpha) = \left\{ u \,\Big|\, \sum_i \tau_i u_i \le \alpha,\ u_i \ge 0 \right\}.$$
Then the Bellman equation becomes
$$-\frac{\partial J}{\partial t}(x, \alpha, t) = \sum_j J(x, j, t)\,\lambda_{j\alpha} + \min_{u \in \Omega(\alpha)}\left[\, g(x) + \frac{\partial J}{\partial x}(x, \alpha, t)\,(u - d)\right].$$
Recall that, for large T, J(x, α, t) ≈ J*(T − t) + W(x, α), so that ∂J/∂t = −J* and ∂J/∂x(x, α, t) = ∂W/∂x(x, α),
so
$$J^* = \sum_j W(x, j)\,\lambda_{j\alpha} + \min_{u \in \Omega(\alpha)}\left[\, g(x) + \frac{\partial W}{\partial x}(x, \alpha)\,(u - d)\right].$$
This is actually two equations, one for α = 0 and one for α = 1:
$$J^* = g(x) + W(x, 1)\,r - W(x, 0)\,r - \frac{\partial W}{\partial x}(x, 0)\,d \qquad (\alpha = 0),$$
$$J^* = g(x) + W(x, 0)\,p - W(x, 1)\,p + \min_{u \in \Omega(1)}\left[\frac{\partial W}{\partial x}(x, 1)\,(u - d)\right] \qquad (\alpha = 1).$$
Single-part-type case

Now x and u are scalars, and
$$J^* = g(x) + W(x, 1)\,r - W(x, 0)\,r - \frac{dW}{dx}(x, 0)\,d,$$
$$J^* = g(x) + W(x, 0)\,p - W(x, 1)\,p + \min_{0 \le u \le \mu}\left[\frac{dW}{dx}(x, 1)\,(u - d)\right].$$
W(x, α) has been shown to be convex in x. If the minimum of W(x, 1) occurs at x = Z and W(x, 1) is differentiable for all x, then
$$\frac{dW}{dx}(x, 1) < 0 \text{ for } x < Z, \qquad \frac{dW}{dx}(x, 1) = 0 \text{ for } x = Z, \qquad \frac{dW}{dx}(x, 1) > 0 \text{ for } x > Z.$$
Therefore,
* if x < Z, u = μ;
* if x = Z, u is unspecified;
* if x > Z, u = 0.
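This is the hedging-point policy. A minimal discrete-time simulation sketch, with illustrative parameter values; at x = Z, where the control is unspecified, this sketch produces exactly at the demand rate so the surplus stays at the hedging point:

```python
import random

# Simulate the hedging-point policy (u = mu if x < Z, u = d at x = Z, u = 0 if x > Z)
# for a single unreliable machine, with a small time step dt.
# All parameter values below are illustrative, not from the lecture.

mu, d = 1.5, 1.0              # maximum production rate and demand rate
r, p = 0.1, 0.05              # repair and failure rates
Z = 5.0                       # hedging point
dt, T = 0.01, 1000.0
g_plus, g_minus = 1.0, 10.0   # inventory and backlog cost coefficients

x, alpha = 0.0, 1             # surplus and machine state (1 = up, 0 = down)
time, cost = 0.0, 0.0

while time < T:
    # machine failures and repairs (exponential, approximated per step)
    if alpha == 1 and random.random() < p * dt:
        alpha = 0
    elif alpha == 0 and random.random() < r * dt:
        alpha = 1

    # hedging-point control
    if alpha == 0:
        u = 0.0
    elif x < Z:
        u = mu
    elif x > Z:
        u = 0.0
    else:
        u = d                 # hold the surplus at the hedging point

    x += (u - d) * dt         # surplus dynamics dx/dt = u - d
    cost += (g_plus * max(x, 0.0) + g_minus * max(-x, 0.0)) * dt
    time += dt

print("average cost per unit time:", cost / T)
```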
$$\frac{dx(t)}{dt} = u(t) - d$$
[Figure: cumulative production and demand versus time; under the policy, cumulative production tracks the line dt + Z, a hedging point Z above the cumulative demand dt, and the gap between production and demand is the surplus x(t).]
How do we choose Z?
Determination of Z
Under the hedging-point policy, the steady-state distribution of (x, α) has an exponential density for x < Z and a probability mass at x = Z with the machine up. Define
$$b = \frac{r}{d} - \frac{p}{\mu - d}.$$
The density for x < Z is proportional to e^{b(x−Z)}; in particular,
$$f(x, 0) = \frac{b\,p\,(\mu - d)}{d\,b\,(\mu - d) + p\,\mu}\, e^{b(x - Z)},$$
and the probability mass at the hedging point is
$$P(Z, 1) = \frac{d\,b\,(\mu - d)}{d\,b\,(\mu - d) + p\,\mu}.$$
Since g(x) = g₊x⁺ + g₋x⁻, the average cost is
$$J^* = g(Z)\,P(Z, 1) + \int_{-\infty}^{Z} g(x)\,\bigl(f(x, 0) + f(x, 1)\bigr)\,dx;$$
if Z ≤ 0 the first term is −g₋ Z P(Z, 1), and if Z > 0 it is g₊ Z P(Z, 1).
To minimize J*: if g₊ − Kb(g₊ + g₋) < 0, then
$$Z = \frac{1}{b}\,\ln\left[Kb\left(1 + \frac{g_-}{g_+}\right)\right];$$
otherwise Z = 0. Here
$$Kb = \frac{p\,\mu}{d\,b\,(\mu - d) + p\,\mu}.$$
Z is a function of d, μ, r, p, g₊, g₋.
The probability of backlog is
$$\mathrm{prob}(x \le 0) = \int_{-\infty}^{0} \bigl(f(x, 0) + f(x, 1)\bigr)\,dx.$$
Or,
$$\mathrm{prob}(x \le 0) = Kb\,e^{-bZ}, \qquad \text{where } Kb = \frac{p\,\mu}{d\,b\,(\mu - d) + p\,\mu},$$
so the optimal hedging point satisfies
$$e^{bZ} = \max\left(1,\ Kb\,\frac{g_+ + g_-}{g_+}\right).$$
Equivalently: if
$$\frac{p\,\mu}{p\,\mu + b\,d\,(\mu - d)} \le \frac{g_+}{g_+ + g_-},$$
then Z = 0 and
$$\mathrm{prob}(x \le 0) = \frac{p\,\mu}{p\,\mu + b\,d\,(\mu - d)};$$
if
$$\frac{g_+}{g_+ + g_-} < \frac{p\,\mu}{p\,\mu + b\,d\,(\mu - d)},$$
then Z > 0 and
$$\mathrm{prob}(x \le 0) = \frac{g_+}{g_+ + g_-}.$$
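Putting the determination of Z together, a small sketch that evaluates the reconstructed formulas above (parameter values are illustrative):

```python
import math

# Hedging point Z from the single-part-type formulas above:
#   b  = r/d - p/(mu - d)
#   Kb = p*mu / (d*b*(mu - d) + p*mu)
#   Z  = (1/b) * ln( max(1, Kb*(g_plus + g_minus)/g_plus) )
# Assumes mu > d; b > 0 is equivalent to the feasibility condition mu*r/(r+p) > d.

def hedging_point(d, mu, r, p, g_plus, g_minus):
    b = r / d - p / (mu - d)
    if b <= 0:
        raise ValueError("demand is infeasible: requires mu*r/(r+p) > d")
    Kb = p * mu / (d * b * (mu - d) + p * mu)
    return math.log(max(1.0, Kb * (g_plus + g_minus) / g_plus)) / b

# Illustrative values (assumed, not from the lecture):
print(hedging_point(d=1.0, mu=2.0, r=0.1, p=0.05, g_plus=1.0, g_minus=10.0))
```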
[Figures: plots of the optimal hedging point Z as a function of d, of g₊, of g₋, and of p.]
Two-part-type case

[Figure: a single machine with failure/repair rates r and p making Type 1 and Type 2 parts at rates u1 and u2, with demand rates d1, d2 and surpluses x1, x2.]
[Figures: the feasible control set {u | τ1 u1 + τ2 u2 ≤ 1, u ≥ 0}, a triangle in the (u1, u2) plane with intercepts 1/τ1 and 1/τ2; each choice of u in this set determines the direction of dx/dt, and the corners correspond to u1 = u2 = 0, (u1 = 1/τ1, u2 = 0), and (u1 = 0, u2 = 1/τ2).]
[Figure: the (x1, x2) plane near the hedging point Z, partitioned by boundaries e12, e23, e34, e45, e56, e61 into regions; in each region a different vertex or edge of the feasible control set is the optimal control.]
Proposed approximate solution for a multiple-part-type, single-machine system: rank order the part types, and bring them to their hedging points in that order.
Single-part-type case: surplus and tokens

Operating Machine M according to the hedging point policy is equivalent to operating this assembly system according to a finite buffer policy.
[Figure: machine M feeding a finished-goods buffer FG, synchronized at S with demand D and a token buffer B.]
Material/token policies: an operation cannot take place unless there is a token available. Tokens authorize production.
[Figure: an operation requiring both a machine and an operator.]
Multi-stage systems: proposed policy

To control the line M1 → B1 → M2 → B2 → M3:
[Figure: the three-machine, two-buffer line to be controlled.]
[Figure: the proposed policy; each machine Mi feeds a finite surplus buffer SBi, which is synchronized (at Si) with the demand D and an infinite backlog buffer BBi.]
SBi are surplus buffers and are finite. BBi are backlog buffers and are infinite. The sizes of Bi and SBi are control parameters.
Three kinds of scheduling policies, which are sometimes exactly the same:
* Surplus-based: make decisions based on how much production exceeds demand.
* Time-based: make decisions based on how early or late a product is.
* Token-based: make decisions based on the presence or absence of tokens.
Objective of scheduling: surplus and time

[Figure: cumulative production P(t) and cumulative demand D(t) versus time t; the vertical gap between them is the surplus/backlog x(t), and the horizontal gap is the earliness.]
The objective is to keep cumulative production close to cumulative demand. Surplus-based policies look at the vertical differences between the graphs. Time-based policies look at the horizontal differences.
Other policies: CONWIP, kanban, and hybrid
* CONWIP: finite population, infinite buffers
* kanban: infinite population, finite buffers
* hybrid: finite population, finite buffers
CONWIP
[Figure: a line from Supply to Demand with a token loop; a token is released when a part leaves the line and flows back to authorize a new release at the start.]
Demand is less than capacity. How does the number of tokens affect performance (production rate, inventory)?
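A rough discrete-time simulation sketch of a three-machine CONWIP loop, to illustrate how throughput depends on the token population; the service probabilities and run length are assumed, and buffers inside the loop are unbounded:

```python
import random

# Three machines in series under CONWIP: a new part is admitted only when the
# total WIP is below the token population N; machine i completes a waiting part
# with probability mu[i] per period.  Parameters are illustrative.

def conwip_throughput(N, mu=(0.9, 0.85, 0.9), periods=100_000):
    buf = [0, 0, 0]                # parts waiting at machines 1, 2, 3
    done = 0
    for _ in range(periods):
        if sum(buf) < N:           # a free token admits a new part at machine 1
            buf[0] += 1
        for i in (2, 1, 0):        # process downstream first
            if buf[i] > 0 and random.random() < mu[i]:
                buf[i] -= 1
                if i == 2:
                    done += 1      # part leaves; its token is released
                else:
                    buf[i + 1] += 1
    return done / periods

for N in (1, 2, 5, 10, 20):
    print(N, round(conwip_throughput(N), 3))   # throughput rises with N, then saturates
```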
[Figures: simulated production rate and average buffer levels n1, n2, n3 as functions of the token population for the CONWIP system.]
Basestock
[Figure: token and material flow under a basestock policy, driven by Demand.]
FIFO: First-In, First-Out.
Simple conceptually, but you have to keep track of arrival times. Leaves out much important information: due date, value of the part, current surplus/backlog state, etc.
EDD: Earliest Due Date.
Easy to implement. Does not consider the work remaining on the item, the value of the item, etc.
SRPT: Shortest Remaining Processing Time.
Whenever there is a choice of parts, load the one with the least remaining work before it is finished. Variations: include waiting time with the work time; use the expected time if it is random.
Critical ratio
Widely used, but many variations. One version: define
$$CR = \frac{\text{Processing time remaining until completion}}{\text{Due date} - \text{Current time}}$$
Choose the job with the highest ratio (provided it is positive). If a job is late, the ratio will be negative, or the denominator will be zero, and that job should be given highest priority. If there is more than one late job, schedule the late jobs in SRPT order.
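A small sketch of this version of the rule (the job fields and data are illustrative):

```python
# Critical-ratio dispatch as described above: CR = work remaining / (due date - now).
# Late jobs (zero or negative denominator) get highest priority, ties broken by SRPT.

def next_job(jobs, now):
    """jobs: list of dicts with 'work_remaining' and 'due'; returns the job to load next."""
    late = [j for j in jobs if j["due"] <= now]
    if late:
        # one or more late jobs: schedule the late jobs in SRPT order
        return min(late, key=lambda j: j["work_remaining"])
    # otherwise choose the job with the highest (positive) critical ratio
    return max(jobs, key=lambda j: j["work_remaining"] / (j["due"] - now))

jobs = [                                 # illustrative data
    {"name": "A", "work_remaining": 4.0, "due": 20.0},
    {"name": "B", "work_remaining": 2.0, "due": 8.0},
    {"name": "C", "work_remaining": 5.0, "due": 9.0},
]
print(next_job(jobs, now=5.0)["name"])   # C: CR = 5/4 beats B's 2/3 and A's 4/15
```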
Least Slack
When there is a choice, select the part with the least slack (slack = due date − current time − remaining work).
Drum-Buffer-Rope
Due to Eli Goldratt. Based on the idea that every system has a bottleneck.
* Drum: the common production rate that the system operates at, which is the rate of flow of the bottleneck.
* Buffer: DBR establishes a CONWIP policy between the entrance of the system and the bottleneck. The buffer is the CONWIP population.
* Rope: the limit on the difference in production between different stages in the system.
But: what if the bottleneck is not well-defined?
Conclusions
Many policies and approaches. No simple statement telling which is better. Policies are not all well-defined in the literature or in practice. My opinion: this is because policies are not derived from first principles. Instead, they are tested and compared. Currently, we have little intuition to guide policy development and choice.
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.