
ISyE 3232

Z. She, YL. Chang

Stochastic Manufacturing and Service Models

Fall 2015

Solutions to Homework 8
1. (a) Yes, it is a Markov chain. The state space is {0, 1, 2, 3, ...}. The initial distribution is
α = (1, 0, 0, ...), and the transition probabilities are

    P_{i,i+1} = 98/100 and P_{i,0} = 2/100 for every state i.
(b) Yes, it is irreducible.
(c) State 0 is aperiodic (due to a loop). By the solidarity property, the Markov chain is
aperiodic.
(d) Let p = 98/100 and q = 1 − p = 0.02. We can write down the following balance equations:

    π_0 = 0.02π_0 + 0.02π_1 + 0.02π_2 + ... = 0.02 Σ_{i=0}^∞ π_i = 0.02
    π_1 = 0.98π_0 = 0.98(0.02)
    π_2 = 0.98π_1 = 0.98^2(0.02)
    π_3 = 0.98π_2 = 0.98^3(0.02)
    ...

Thus, π_i = 0.98^i(0.02) for i = 0, 1, 2, ....
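As a quick numerical sanity check (a minimal Python sketch; the truncation level N below is an arbitrary choice, not part of the problem), one can verify that π_i = 0.98^i(0.02) sums to 1 and satisfies the balance equations:

    # Check pi_i = 0.98**i * 0.02 numerically, truncating the state space at N.
    p, q = 0.98, 0.02
    N = 2000  # truncation level; the tail mass beyond N is negligible

    pi = [q * p**i for i in range(N)]

    print(sum(pi))                      # ~ 1.0
    # Balance at state 0: pi_0 = q * (pi_0 + pi_1 + ...).
    print(abs(pi[0] - q * sum(pi)))     # ~ 0
    # Balance at state i+1: pi_{i+1} = p * pi_i.
    print(max(abs(pi[i + 1] - p * pi[i]) for i in range(N - 1)))  # 0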
(e) Yes, because it is irreducible and the unique stationary distribution exists.
(f) We need E[τ_0 | X_0 = 0]. It is 1/π_0 = 1/0.02 = 50 days.
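The 50-day answer can also be checked by simulation; here is a minimal sketch (the number of replications is an arbitrary choice). Starting from state 0, the chain returns to 0 as soon as a jump back to 0 occurs, which happens with probability 0.02 on each step:

    import random

    # Estimate E[tau_0 | X_0 = 0]: on every step the chain moves up with
    # probability 0.98 and returns to state 0 with probability 0.02.
    def return_time_to_zero():
        t = 0
        while True:
            t += 1
            if random.random() < 0.02:   # jump back to state 0
                return t

    reps = 200_000
    print(sum(return_time_to_zero() for _ in range(reps)) / reps)  # ~ 50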
2. Assume the state space is {0, 1, 2, 3}. One can draw the transition diagram as the following:

(1) It is NOT irreducible since state 0 communicates only with state 1.
(2) π = {5/26, 8/26, 2/6, 1/6} is a stationary distribution. But it is not unique, since π =
{5/39, 8/39, 4/9, 2/9} is also one. Both solve π = πP and Σ_{i∈S} π_i = 1. In fact, we
cannot pin down π from π = πP alone, because uniqueness fails for reducible DTMCs. We obtain
the two π above from lim_{n→∞} P^n (approximated by P^100), which is given in part (5). In fact,
any convex combination of these two stationary distributions is a stationary distribution as well:
if πP = π and π′P = π′, then (λπ + (1 − λ)π′)P = λπ + (1 − λ)π′ for any 0 ≤ λ ≤ 1.
Therefore you can find other stationary distributions based on these two.
(3) Periodicity is only defined for irreducible Markov chains.

(4) State 4 can come back to itself in either 2 steps or 3 steps, so its period is gcd(2, 3) = 1. Any other
state can come back to itself in one step (it has a self-loop), so its period is 1.
(5) The Markov chain can be decomposed into two Markov chains with state spaces {0, 1} and
{2, 3}, respectively. Each of them is irreducible and aperiodic with a finite state space, so

    lim_{n→∞} P^n_{ij} = π_j   for i, j ∈ {0, 1} or i, j ∈ {2, 3},

where (π_0, π_1) is the unique stationary distribution of the Markov chain with state space
{0, 1} and (π_2, π_3) is the unique stationary distribution of the Markov chain with state space
{2, 3}. For all other (i, j) pairs, where one index is in {0, 1} and the other is in {2, 3},
lim_{n→∞} P^n_{ij} = 0. So

              5/13  8/13   0     0
    P^100 ≈   5/13  8/13   0     0
               0     0    2/3   1/3
               0     0    2/3   1/3
Note that there are two limiting distributions π:
(5/13, 8/13, 0, 0) if we start from state 0 or 1; and
(0, 0, 2/3, 1/3) if we start from state 2 or 3.
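For a numerical illustration, the sketch below uses a hypothetical transition matrix (the matrix from the diagram is not reproduced here; this one is chosen only so that its recurrent classes {0, 1} and {2, 3} have stationary distributions (5/13, 8/13) and (2/3, 1/3)). Raising such a matrix to the 100th power reproduces the block structure above:

    import numpy as np

    # Hypothetical 4-state matrix with closed classes {0,1} and {2,3} whose
    # class stationary distributions are (5/13, 8/13) and (2/3, 1/3).
    P = np.array([[0.2, 0.8, 0.0, 0.0],
                  [0.5, 0.5, 0.0, 0.0],
                  [0.0, 0.0, 0.7, 0.3],
                  [0.0, 0.0, 0.6, 0.4]])

    print(np.round(np.linalg.matrix_power(P, 100), 4))
    # Rows 0 and 1 -> (0.3846, 0.6154, 0, 0) = (5/13, 8/13, 0, 0)
    # Rows 2 and 3 -> (0, 0, 0.6667, 0.3333) = (0, 0, 2/3, 1/3)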
(6) States 0 and 1 are recurrent and positive recurrent. States 2 and 3 are recurrent and
positive recurrent.
3. (a) The state diagram is as follows:

[State transition diagram over states a, b, c, d, e, f with the given transition probabilities; figure not reproduced here.]

(b-d) {b, d, f} and {c} are recurrent, irreducible sets, and both are aperiodic; states a and e are
transient.
(e) There are two recurrent sets, {b, d, f} and {c}. We need to calculate the probability that
we eventually join one of the recurrent sets from each transient set.
Let f_{a,{b,d,f}} represent the probability that we eventually join {b, d, f} from state a.
Similarly, we can define f_{a,{c}}, f_{e,{b,d,f}} and f_{e,{c}}.
Then f_{a,{b,d,f}} = 1 and f_{a,{c}} = 0.

If we jump to state a from e, we will eventually join {b, d, f}.
On the other hand, if we jump to c from e, then we will stay in {c}.
There is equal probability (0.1) of jumping from e to a and from e to c.
Thus, f_{e,{b,d,f}} = 0.5 and f_{e,{c}} = 0.5.
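The same value can be obtained by first-step analysis. The sketch below solves f = (I − Q)^{-1} R for the transient states a and e; the entries of Q and R other than the two 0.1's described above are assumed values for illustration (a is taken to exit directly into {b, d, f}, and e to stay at e with probability 0.8), not the exact numbers from the diagram:

    import numpy as np

    # Transient states ordered (a, e); recurrent classes ordered ({b,d,f}, {c}).
    # Q: one-step probabilities among the transient states (assumed values).
    # R: one-step probabilities from transient states into each recurrent class.
    Q = np.array([[0.0, 0.0],
                  [0.1, 0.8]])
    R = np.array([[1.0, 0.0],
                  [0.0, 0.1]])

    # Absorption probabilities: f = (I - Q)^{-1} R.
    f = np.linalg.solve(np.eye(2) - Q, R)
    print(f)
    # [[1.  0. ]
    #  [0.5 0.5]]   i.e. f_{a,{b,d,f}} = 1, f_{e,{b,d,f}} = f_{e,{c}} = 0.5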
Finally, lim_{n→∞} P^n is (rows and columns in the order b, d, f, c, a, e):

          b       d       f      c    a   e
    b   0.5405  0.2703  0.1892   0    0   0
    d   0.5405  0.2703  0.1892   0    0   0
    f   0.5405  0.2703  0.1892   0    0   0
    c   0       0       0        1    0   0
    a   0.5405  0.2703  0.1892   0    0   0
    e   0.2703  0.1351  0.0946   0.5  0   0
For example, 0.2703 in the last row is obtained from f_{e,{b,d,f}} · π_b = 0.5 × 0.5405. Also,
0.5 in the last row is obtained from f_{e,{c}} · π_c = 0.5 × 1.
Remark: Depending on the starting state, this DTMC has three possible limiting distributions, each of which is stationary:
(π_b, π_d, π_f, π_c, π_a, π_e) = (0.5405, 0.2703, 0.1892, 0, 0, 0) if we start from state b, d, f,
or a;
(π_b, π_d, π_f, π_c, π_a, π_e) = (0, 0, 0, 1, 0, 0) if we start from state c; and
(π_b, π_d, π_f, π_c, π_a, π_e) = (0.2703, 0.1351, 0.0946, 0.5, 0, 0) if we start from state e.
4. (a) The whole chain is irreducible and finite, so every state is positive recurrent.
(b) lim_{n→∞} P^n has all five rows equal to (0.2617, 0.1386, 0.1373, 0.2746, 0.1878):

    0.2617  0.1386  0.1373  0.2746  0.1878
    0.2617  0.1386  0.1373  0.2746  0.1878
    0.2617  0.1386  0.1373  0.2746  0.1878
    0.2617  0.1386  0.1373  0.2746  0.1878
    0.2617  0.1386  0.1373  0.2746  0.1878
(c) Since the chain is irreducible, positive recurrent and aperiodic, we know that π =
(0.2617, 0.1386, 0.1373, 0.2746, 0.1878) and lim_{n→∞} P^{(n)}_{ii} = π_i.
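One way to compute such a stationary distribution numerically is to solve π = πP together with Σ_i π_i = 1 as a linear system. The helper below is a minimal sketch; the 3-state matrix in the example is a placeholder (the homework's 5-state matrix is not reproduced here), and applying the same function to that matrix yields the π reported above:

    import numpy as np

    def stationary_distribution(P):
        """Solve pi P = pi with sum(pi) = 1 for an irreducible chain."""
        n = P.shape[0]
        # Stack the equations (P^T - I) pi = 0 with the normalization row.
        A = np.vstack([P.T - np.eye(n), np.ones(n)])
        b = np.zeros(n + 1)
        b[-1] = 1.0
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pi

    # Placeholder 3-state example.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.6, 0.3],
                  [0.2, 0.2, 0.6]])
    print(stationary_distribution(P))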
5. (a) {1, 2} and {6, 7} are irreducible, positive recurrent subsets. If X_0 = 1 or 2, then we
only see states 1 and 2 and can use the irreducible subset to compute the stationary
distribution (π_1, π_2). We get π_1 = 0.4 and π_2 = 0.6. So

    E(τ_1 | X_0 = 1) = 1/π_1 = 2.5,   E(τ_2 | X_0 = 2) = 1/π_2 = 5/3.

If X_0 = 6 or 7, then we only see states 6 and 7 and thus can calculate π_6 = 0.5 and
π_7 = 0.5 from the irreducible subset. Thus,

    E(τ_6 | X_0 = 6) = 1/π_6 = 2,   E(τ_7 | X_0 = 7) = 1/π_7 = 2.

States 3, 4 and 5 are transient, so

    E(τ_3 | X_0 = 3) = E(τ_4 | X_0 = 4) = E(τ_5 | X_0 = 5) = ∞.

(b) States 6 and 7 have period 2, so lim_{n→∞} P^n_{i,j} does not exist for i ∈ {6, 7}, j ∈ {6, 7}. Since
states 6 and 7 do not communicate with the other states, lim_{n→∞} P^n_{i,j} = 0 for i ∈ {6, 7},
j ∈ {1, 2, 3, 4, 5} and for i ∈ {1, 2, 3, 4, 5}, j ∈ {6, 7}.
States 1 and 2 have stationary distribution (0.4, 0.6). States 3, 4 and 5 are transient states
and they all eventually join {1, 2}. Then

                    0.4  0.6  0  0  0  0  0
                    0.4  0.6  0  0  0  0  0
                    0.4  0.6  0  0  0  0  0
    lim P^n  =      0.4  0.6  0  0  0  0  0
    n→∞             0.4  0.6  0  0  0  0  0
                     0    0   0  0  0  *  *
                     0    0   0  0  0  *  *

where the entries marked * (the (i, j) pairs with i, j ∈ {6, 7}) do not exist as limits.
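To see concretely why the limit fails on the {6, 7} block: since states 6 and 7 have period 2 and form a closed class, each must jump to the other with probability 1, so the powers of that 2 × 2 block simply alternate. A minimal sketch:

    import numpy as np

    # Sub-chain on states {6, 7}: period 2 and closed, so each state jumps to
    # the other with probability 1.
    P67 = np.array([[0.0, 1.0],
                    [1.0, 0.0]])

    for n in (10, 11, 100, 101):
        print(n, np.linalg.matrix_power(P67, n))
    # Even powers give the identity, odd powers give the swap matrix, so P^n
    # has no limit; the long-run time averages still converge to (0.5, 0.5).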

6. (a) If the Markov chain is recurrent, then it will be aperiodic since Pii > 0 for each state i.
(b) The Markov chain is irreducible since all states communicate with each other.
(c) The flow balance equations are:

    π_0 = (q + r)π_0 + qπ_1   ⟹   (1 − q − r)π_0 = qπ_1   ⟹   π_0 p = π_1 q
    π_1 p = π_2 q
    ...
    π_i p = π_{i+1} q
    ...

together with Σ_{i=0}^∞ π_i = 1.
Solving the above equations, we have π_i = (p/q)^i (1 − p/q) for each i = 0, 1, 2, ...
Since the chain is irreducible and aperiodic, the stationary distribution exists and it is
unique provided that p/q < 1.
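As a numerical check (the values p = 0.3, q = 0.5, r = 0.2 below are illustrative assumptions satisfying p < q, not numbers from the problem), one can verify that π_i = (p/q)^i(1 − p/q) sums to 1 and satisfies π_i p = π_{i+1} q:

    # Check pi_i = (p/q)**i * (1 - p/q) with assumed p, q (truncated at N terms).
    p, q = 0.3, 0.5            # up / down probabilities; r = 1 - p - q = 0.2
    rho = p / q
    N = 500                    # truncation level

    pi = [(1 - rho) * rho**i for i in range(N)]

    print(sum(pi))                                                    # ~ 1.0
    print(max(abs(pi[i] * p - pi[i + 1] * q) for i in range(N - 1)))  # 0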
(d) The Markov chain is positive recurrent since p < q.
(e) The Markov chain is not positive recurrent since p > q; the chain will eventually drift to
infinity.
