
IOE516 - Stochastic Processes II Homework Set 2 - Suggested Solutions

Winter 2013
Question 1: Let $(X_n)_{n\ge 0}$ be a Markov chain on $\{1, 2, 3\}$ with transition matrix
\[
P = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 2/3 & 1/3 \\ p & 1-p & 0 \end{pmatrix}.
\]
Calculate $p_{11}^{(n)}$ in each of the following cases: (a) $p = 1/16$, (b) $p = 1/6$, (c) $p = 1/12$.

Answer: (a) In the case of $p = 1/16$, we have three distinct eigenvalues $\lambda_1 = 1$, $\lambda_2 = -1/12$ and $\lambda_3 = -1/4$. Therefore $p_{11}^{(n)} = a_1 + a_2(-1/12)^n + a_3(-1/4)^n$. Since $p_{11}^{(0)} = 1$, $p_{11}^{(1)} = 0$ and $p_{11}^{(2)} = 0$ (from state 1 the chain must pass through 2 and then 3 before it can return), we get $a_1 = 1/65$, $a_2 = 18/13$ and $a_3 = -2/5$.

(b) In the case of $p = 1/6$, we have three eigenvalues $\lambda_1 = 1$, $\lambda_2 = -1/6 + i/6$ and $\lambda_3 = -1/6 - i/6$. We can write the complex numbers in polar form, $x + iy = r(\cos\theta + i\sin\theta)$, where $r = \sqrt{x^2 + y^2}$. In this case $-1/6 \pm i/6 = \frac{1}{3\sqrt{2}}\left(\cos\frac{3\pi}{4} \pm i\sin\frac{3\pi}{4}\right)$. Therefore $p_{11}^{(n)} = a_1 + \left(\frac{1}{3\sqrt{2}}\right)^n\left(a_2\cos\frac{3n\pi}{4} + a_3\sin\frac{3n\pi}{4}\right)$. Since $p_{11}^{(0)} = 1$, $p_{11}^{(1)} = 0$ and $p_{11}^{(2)} = 0$, we get $a_1 = 1/25$, $a_2 = 24/25$ and $a_3 = 18/25$.

(c) In the case of $p = 1/12$, we have eigenvalues $\lambda_1 = 1$ and $\lambda_2 = \lambda_3 = -1/6$. Since we have a repeated eigenvalue, we write $p_{11}^{(n)} = a_1 + (a_2 n + a_3)(-1/6)^n$. Since $p_{11}^{(0)} = 1$, $p_{11}^{(1)} = 0$ and $p_{11}^{(2)} = 0$, we get $a_1 = 1/49$, $a_2 = -6/7$ and $a_3 = 48/49$.

Question 2: Show that any finite-state Markov chain must contain at least one recurrent state. Give a simple example of an irreducible (all states communicate) and transient Markov chain with countably infinitely many states, and provide a reason.

Proof. If all states were transient, each of them would almost surely be visited only finitely many times. In a finite-state Markov chain this would make the total number of visits across all states finite; but the chain makes a visit at every one of infinitely many time steps, which is impossible. A simple example is the asymmetric random walk: $p_{0,1} = 1$ and, for $i = 1, 2, \dots$, $p_{i,i+1} = 3/4$ and $p_{i,i-1} = 1/4$. Since $X_n \to \infty$ almost surely, the walk cannot revisit any state infinitely often, so every state is transient, while the chain is clearly irreducible.
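The three closed forms can be checked against direct matrix powers. A minimal sketch in Python (NumPy assumed; the matrix and coefficients are exactly those derived above):

```python
import numpy as np

def P(p):
    # Transition matrix of the chain on {1, 2, 3}
    return np.array([[0.0, 1.0,   0.0],
                     [0.0, 2/3,   1/3],
                     [p,   1 - p, 0.0]])

def case_a(n):  # p = 1/16: eigenvalues 1, -1/12, -1/4
    return 1/65 + (18/13) * (-1/12)**n - (2/5) * (-1/4)**n

def case_b(n):  # p = 1/6: complex pair of modulus 1/(3*sqrt(2)), argument 3*pi/4
    r, t = 1 / (3 * np.sqrt(2)), 3 * np.pi / 4
    return 1/25 + r**n * (24/25 * np.cos(n * t) + 18/25 * np.sin(n * t))

def case_c(n):  # p = 1/12: repeated eigenvalue -1/6
    return 1/49 + (-6/7 * n + 48/49) * (-1/6)**n

for n in range(12):
    assert abs(np.linalg.matrix_power(P(1/16), n)[0, 0] - case_a(n)) < 1e-12
    assert abs(np.linalg.matrix_power(P(1/6),  n)[0, 0] - case_b(n)) < 1e-12
    assert abs(np.linalg.matrix_power(P(1/12), n)[0, 0] - case_c(n)) < 1e-12
```

Running the loop silently confirms, for instance, that $p_{11}^{(3)}$ equals $1/48$, $1/18$ and $1/36$ in cases (a), (b) and (c) respectively, which is also clear from the single path $1 \to 2 \to 3 \to 1$ of probability $\frac13 p$.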

Question 3: A gambler has 2 dollars and needs to increase it to 10 dollars in a hurry. He can play a game with the following rules: a fair coin is tossed; if the player bets on the right side, he wins a sum equal to his stake and his stake is returned; otherwise he loses his stake. The gambler decides to use a bold strategy in which he stakes all his money if he has 5 dollars or less, and otherwise stakes just enough to increase his capital, if he wins, to 10 dollars. Let $X_0 = 2$ and let $X_n$ be his capital after $n$ throws. Prove that the gambler will achieve his aim with probability 1/5. What is the expected number of tosses until the gambler either achieves his aim or loses his capital? What is the expected number of tosses conditional on the gambler achieving his aim? Hint: let $A$ be the event that he achieves his aim and find $P(X_{n+1} = j \mid X_n = i, A)$.

Answer: Let $h_i = P_i(\text{hit } 10)$. Then we have $h_0 = 0$ and $h_{10} = 1$. According to the rules, we also have $h_2 = \frac12 h_0 + \frac12 h_4$, $h_4 = \frac12 h_0 + \frac12 h_8$, $h_6 = \frac12 h_{10} + \frac12 h_2$ and $h_8 = \frac12 h_{10} + \frac12 h_6$. Solving, $h_2 = 1/5$, $h_4 = 2/5$, $h_6 = 3/5$ and $h_8 = 4/5$. Note that $h_2$ is the probability that the gambler will achieve his aim.

Let $k_i = E_i(\text{time to hit } 0 \text{ or } 10)$; we want to find $k_2$. It is obvious that $k_0 = k_{10} = 0$. According to the rules, we have $k_2 = 1 + \frac12 k_0 + \frac12 k_4$, $k_4 = 1 + \frac12 k_0 + \frac12 k_8$, $k_6 = 1 + \frac12 k_{10} + \frac12 k_2$ and $k_8 = 1 + \frac12 k_{10} + \frac12 k_6$. Thus $k_2 = 2$; moreover, $k_4 = k_6 = k_8 = 2$.

First we have to obtain the conditional transition probabilities:
\[
\hat p_{ij} = P(X_{n+1} = j \mid X_n = i, A) = \frac{h_j\, p_{ij}}{h_i}.
\]

Thus, we have $\hat p_{2,0} = 0$, $\hat p_{2,4} = 1$, $\hat p_{4,0} = 0$, $\hat p_{4,8} = 1$, $\hat p_{8,10} = 5/8$, $\hat p_{8,6} = 3/8$, $\hat p_{6,10} = 5/6$ and $\hat p_{6,2} = 1/6$. Now let $\hat k_i = E_i(\text{time to hit } 10 \mid \text{hit } 10)$; we want to find $\hat k_2$. It is obvious that $\hat k_{10} = 0$ and $\hat k_0 = \infty$ (state 0 is never reached given $A$). According to the new transition probabilities, we have $\hat k_2 = 1 + \hat k_4$, $\hat k_4 = 1 + \hat k_8$, $\hat k_6 = 1 + \frac16 \hat k_2$ and $\hat k_8 = 1 + \frac38 \hat k_6$. So we have $\hat k_2 = 3.6$; moreover, $\hat k_4 = 2.6$ and $\hat k_6 = \hat k_8 = 1.6$.

Question 4: Let $(X_n)_{n\ge 0}$ be a Markov chain on $\{0, 1, \dots\}$ with transition probabilities given by
\[
p_{01} = 1, \qquad p_{i,i+1} + p_{i,i-1} = 1, \qquad p_{i,i+1} = \left(\frac{i+1}{i}\right)^2 p_{i,i-1}, \quad i \ge 1.
\]
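The three small linear systems above can be solved mechanically. A sketch in Python (NumPy assumed; the state encoding and the Doob $h$-transform construction are mine, the numbers are those of the solution), reproducing $h_2 = 1/5$, $k_2 = 2$ and the conditional expectation $3.6$:

```python
import numpy as np

# Bold strategy: from each transient capital, move down or up with probability 1/2.
step = {2: (0, 4), 4: (0, 8), 6: (2, 10), 8: (6, 10)}
idx = {2: 0, 4: 1, 6: 2, 8: 3}

M = np.eye(4)          # system matrix shared by the h- and k-equations
bh = np.zeros(4)       # right-hand side for hitting probabilities
for i, moves in step.items():
    for j in moves:
        if j in idx:
            M[idx[i], idx[j]] -= 0.5
        elif j == 10:
            bh[idx[i]] += 0.5   # h_10 = 1 contributes; h_0 = 0 contributes nothing
h = np.linalg.solve(M, bh)            # [0.2, 0.4, 0.6, 0.8]
k = np.linalg.solve(M, np.ones(4))    # expected absorption times: all equal to 2

# Conditioned on reaching 10 (Doob h-transform): phat_ij = h_j p_ij / h_i
hfull = {0: 0.0, 10: 1.0, **{i: h[idx[i]] for i in idx}}
Mc = np.eye(4)
for i, moves in step.items():
    for j in moves:
        if j in idx:
            Mc[idx[i], idx[j]] -= 0.5 * hfull[j] / hfull[i]
khat = np.linalg.solve(Mc, np.ones(4))  # conditional time from 2 is khat[0] = 3.6
```

The transformed system encodes exactly the equations of the solution: from 2 the conditioned chain moves to 4 with probability 1, while from 8 it splits $5/8$ and $3/8$.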

Show that if $X_0 = 0$ then the probability that $X_n \ge 1$ for all $n \ge 1$ is $6/\pi^2$.

Proof. Let $h_i = P_i(\text{hit } 0)$. We write down the usual system of equations: $h_0 = 1$ and
\[
h_i = p_{i,i+1} h_{i+1} + p_{i,i-1} h_{i-1}, \qquad i = 1, 2, \dots
\]

Consider $u_i = h_{i-1} - h_i$; then $p_{i,i+1} u_{i+1} = p_{i,i-1} u_i$, so
\[
u_{i+1} = \frac{p_{i,i-1}}{p_{i,i+1}}\, u_i = \frac{p_{i,i-1}\, p_{i-1,i-2} \cdots p_{10}}{p_{i,i+1}\, p_{i-1,i} \cdots p_{12}}\, u_1 = \gamma_i u_1,
\]

where the final equality defines $\gamma_i$, and we set $\gamma_0 = 1$. Since $u_1 + \dots + u_i = h_0 - h_i$, we have $h_i = 1 - A(\gamma_0 + \dots + \gamma_{i-1})$ with $A = u_1$. Since $\sum_{i=0}^\infty \gamma_i < \infty$, the minimal non-negative solution is obtained with $A = \left(\sum_{i=0}^\infty \gamma_i\right)^{-1}$. By the transition probabilities, it is easy to check that $\gamma_i = 1/(i+1)^2$. Thus $A = \left(\sum_{i=1}^\infty \frac{1}{i^2}\right)^{-1} = \frac{6}{\pi^2}$. The probability of interest is equal to $1 - h_1 = A = 6/\pi^2$.

Question 5: Let $Y_1, Y_2, \dots$ be independent identically distributed random variables with
\[
P(Y_1 = 1) = P(Y_1 = -1) = \tfrac12
\]
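The value of $A$ can be sanity-checked numerically from partial sums. A sketch (the loop bound and truncation level $N$ are arbitrary choices of mine):

```python
import math

# Check gamma_i = prod_{k=1}^{i} p_{k,k-1} / p_{k,k+1} = 1/(i+1)^2,
# using p_{k,k-1} / p_{k,k+1} = (k/(k+1))^2 from the transition rule.
g = 1.0
for k in range(1, 60):
    g *= (k / (k + 1)) ** 2
    assert abs(g - 1 / (k + 1) ** 2) < 1e-12

# A = (sum_{i>=0} gamma_i)^(-1) = (sum_{i>=1} 1/i^2)^(-1) -> 6/pi^2
N = 100_000                  # truncation level; the neglected tail is O(1/N)
A = 1 / sum(1 / i ** 2 for i in range(1, N + 1))
assert abs(A - 6 / math.pi ** 2) < 1e-4
```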

and set $X_0 = 1$, $X_n = X_0 + Y_1 + \dots + Y_n$ for $n \ge 1$. Define $H_0 = \inf\{n \ge 0 : X_n = 0\}$. Find the probability generating function $\phi(s) = E(s^{H_0})$. Suppose the distribution of $Y_1, Y_2, \dots$ is changed to $P(Y_1 = 2) = 1/2$ and $P(Y_1 = -1) = 1/2$. Show that $\phi$ now satisfies $s\phi^3 - 2\phi + s = 0$.

Proof. Set $H_j = \inf\{n \ge 0 : X_n = j\}$ and, for $0 \le s < 1$, $\phi(s) = E_1(s^{H_0})$. Now suppose we start at 2. Apply the strong Markov property at $H_1$ to see that under $P_2$, conditional on $H_1 < \infty$, we have $H_0 = H_1 + \tilde H_0$, where $\tilde H_0$, the time taken after $H_1$ to get to 0, is independent of $H_1$ and has the (unconditioned) distribution of $H_0$ under $P_1$. Thus
\[
E_2(s^{H_0}) = E_2(s^{H_1} \mid H_1 < \infty)\, E_2(s^{\tilde H_0} \mid H_1 < \infty)\, P_2(H_1 < \infty)
= E_2\!\left(s^{H_1} 1_{\{H_1 < \infty\}}\right) E_1(s^{H_0}) = E_1(s^{H_0})^2 = \phi(s)^2,
\]
since $H_1$ under $P_2$ also has the distribution of $H_0$ under $P_1$. Also, by the Markov property at time 1, conditional on $X_1 = 2$, we have $H_0 = 1 + \tilde H_0$, where $\tilde H_0$, the time taken after time 1 to get to 0, has the same distribution as $H_0$ does under $P_2$. Thus
\[
\phi(s) = E_1(s^{H_0}) = \tfrac12 E_1(s^{H_0} \mid X_1 = 2) + \tfrac12 E_1(s^{H_0} \mid X_1 = 0)
= \tfrac12 s\, E_2(s^{H_0}) + \tfrac12 s = \tfrac12 s\phi(s)^2 + \tfrac12 s.
\]
Thus $\phi = \phi(s)$ satisfies $\tfrac12 s\phi^2 - \phi + \tfrac12 s = 0$, so $\phi = (1 \pm \sqrt{1 - s^2})/s$. Since $0 \le \phi(s) \le 1$ and $\phi$ is continuous, while the positive root blows up as $s \downarrow 0$, we are forced to take the negative root: $\phi(s) = (1 - \sqrt{1 - s^2})/s$.

Now suppose the distribution of $Y_1, Y_2, \dots$ is changed to $P(Y_1 = 2) = 1/2$ and $P(Y_1 = -1) = 1/2$. We can go through the same argument. Now suppose we start at 3. Apply the strong Markov property to see that under $P_3$, conditional on $A = \{H_2 < \infty, H_1 < \infty\}$, we have $H_0 = H_2 + \tilde H_1 + \tilde H_0$,

where $H_2$, $\tilde H_1$ (the time taken after $H_2$ to get to 1) and $\tilde H_0$ (the time taken after $H_1$ to get to 0) have the same distribution and are independent of each other; note that since every downward step has size 1, the walk cannot jump over a level on the way down. Thus
\[
E_3(s^{H_0}) = E_3(s^{H_2 + \tilde H_1} \mid A)\, E_3(s^{\tilde H_0} \mid A)\, P_3(A)
= E_3\!\left(s^{H_2 + \tilde H_1} 1_A\right) E_3(s^{\tilde H_0} \mid A) = E_3(s^{H_2})^3 = E_1(s^{H_0})^3 = \phi(s)^3.
\]
Also, by the Markov property at time 1, conditional on $X_1 = 3$, we have $H_0 = 1 + \tilde H_0$, where $\tilde H_0$, the time taken after time 1 to get to 0, has the same distribution as $H_0$ does under $P_3$. Thus
\[
\phi(s) = E_1(s^{H_0}) = \tfrac12 E_1(s^{H_0} \mid X_1 = 3) + \tfrac12 E_1(s^{H_0} \mid X_1 = 0)
= \tfrac12 s\, E_3(s^{H_0}) + \tfrac12 s = \tfrac12 s\phi(s)^3 + \tfrac12 s.
\]
Thus $\phi = \phi(s)$ satisfies $s\phi^3 - 2\phi + s = 0$.
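Both generating functions can be checked by computing $E(s^{H_0})$ directly with a truncated dynamic program over the law of the walk. A sketch (the value $s = 0.7$ and the truncation level are arbitrary choices of mine; the neglected tail beyond `nmax` steps is negligible since $s^n \to 0$ geometrically):

```python
import math

def pgf_by_dp(s, steps, start=1, nmax=300):
    # Approximate E(s^{H_0}) by propagating the distribution of the walk,
    # absorbing the mass that reaches 0 at step n with weight s^n.
    dist = {start: 1.0}
    total = 0.0
    for n in range(1, nmax + 1):
        new = {}
        for x, p in dist.items():
            for y in steps:
                new[x + y] = new.get(x + y, 0.0) + p / len(steps)
        total += new.pop(0, 0.0) * s ** n   # absorb mass at 0
        dist = new
    return total

s = 0.7
# Simple walk: closed form (1 - sqrt(1 - s^2)) / s
phi = (1 - math.sqrt(1 - s * s)) / s
assert abs(pgf_by_dp(s, (1, -1)) - phi) < 1e-9

# Jumps +2 / -1: the DP value should satisfy s*phi^3 - 2*phi + s = 0
phi3 = pgf_by_dp(s, (2, -1))
assert abs(s * phi3 ** 3 - 2 * phi3 + s) < 1e-9
```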

Question 6*: Show that the simple symmetric random walk on $\mathbb{Z}^3$ is transient. [Bonus: extend this result to $\mathbb{Z}^4$.]

Proof. The transition probabilities of the simple symmetric random walk on $\mathbb{Z}^3$ are given by $p_{ij} = 1/6$ if $|i - j| = 1$ and 0 otherwise. Thus, the chain jumps to each of its six nearest neighbours with equal probability. Suppose we start at 0. We can only return to 0 after an even number $2n$ of steps. Of these $2n$ steps there must be $i$ up, $i$ down, $j$ north, $j$ south, $k$ east and $k$ west for some $i, j, k \ge 0$ with $i + j + k = n$. By counting the ways in which this can be done, we obtain
\[
p_{00}^{(2n)} = \sum_{\substack{i+j+k=n \\ i,j,k \ge 0}} \frac{(2n)!}{(i!\,j!\,k!)^2} \left(\frac16\right)^{2n}
= \binom{2n}{n} \left(\frac12\right)^{2n} \sum_{\substack{i+j+k=n \\ i,j,k \ge 0}} \left(\binom{n}{i\ j\ k} \left(\frac13\right)^{n}\right)^{2}.
\]
Now
\[
\sum_{\substack{i+j+k=n \\ i,j,k \ge 0}} \binom{n}{i\ j\ k} \left(\frac13\right)^{n} = 1,
\]
since the left-hand side is the total probability of all the ways of placing $n$ balls randomly into three boxes. For the case where $n = 3m$, we have
\[
\binom{n}{i\ j\ k} = \frac{n!}{i!\,j!\,k!} \le \binom{n}{m\ m\ m}
\]
for all $i$, $j$, $k$, so
\[
p_{00}^{(2n)} \le \binom{2n}{n} \left(\frac12\right)^{2n} \binom{n}{m\ m\ m} \left(\frac13\right)^{n} \sim \frac{1}{2A^3} \left(\frac{6}{n}\right)^{3/2}
\]
by Stirling's formula as $n \to \infty$. Recall that Stirling's formula states $n! \sim A\sqrt{n}\,(n/e)^n$ as $n \to \infty$, with $A = \sqrt{2\pi}$. Hence $\sum_{m=0}^\infty p_{00}^{(6m)} < \infty$ by comparison with $\sum_n n^{-3/2}$. But $p_{00}^{(6m)} \ge (1/6)^2 p_{00}^{(6m-2)}$ and $p_{00}^{(6m)} \ge (1/6)^4 p_{00}^{(6m-4)}$ for all $m$. Thus we must have $\sum_{n=0}^\infty p_{00}^{(n)} < \infty$, and the walk is transient.

Question 7: A particle moves on the eight vertices of a cube in the following way: at each step the particle is equally likely to move to each of the three adjacent vertices, independently of its past motion. Let $i$ be the initial vertex occupied by the particle, $o$ the vertex opposite $i$. Calculate each of the following quantities:
1. the expected number of steps until the particle returns to $i$;
2. the expected number of visits to $o$ until the particle returns to $i$;
3. the expected number of steps until the first visit to $o$.

Answer: 1. The expected number of steps until the particle returns to $i$ is given by $E_i(T_i) = 1/\pi_i$, where $\pi$ is the stationary probability distribution. This is unique since the chain is irreducible and finite. It is easy to show that $\pi_j = 1/8$ is stationary. Thus $E_i(T_i) = 8$.

2. The expected number of visits to $o$ until the particle returns to $i$ is given by the quantity $\gamma_o^i$, where the vector $\gamma^i = (\gamma_1^i, \dots, \gamma_8^i)$ satisfies $\gamma_i^i = 1$, $0 < \gamma_j^i < \infty$ for all $j$, and $\gamma^i P = \gamma^i$. This is the constant vector with $\gamma_j^i = 1$ for all $j$. Thus $\gamma_o^i = 1$.

3. If we collapse all the vertices with the same distance to $i$ together, we get a lumped Markov chain on $I = \{0, 1, 2, 3\}$ with transition matrix
\[
P = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1/3 & 0 & 2/3 & 0 \\ 0 & 2/3 & 0 & 1/3 \\ 0 & 0 & 1 & 0 \end{pmatrix}.
\]
The expected number of steps until the first visit to $o$ is $k_0 = E_0(\text{time to hit } 3)$. Let $k_j = E_j(\text{time to hit } 3)$. Then we have $k_3 = 0$, $k_0 = 1 + k_1$, $k_1 = 1 + \frac13 k_0 + \frac23 k_2$ and $k_2 = 1 + \frac23 k_1 + \frac13 k_3$. Upon solving the system, we get $k_0 = 10$.

Question 8: Let $P$ be an irreducible stochastic matrix on a finite set $I$. Show that a distribution $\pi$ is invariant for $P$ if and only if $\pi(I - P + A) = a$, where $A = (a_{ij} : i, j \in I)$ with $a_{ij} = 1$ for all $i$ and $j$, and $a = (a_i : i \in I)$ with $a_i = 1$ for all $i$. Deduce that if $P$ is irreducible then $I - P + A$ is invertible.

Proof. If the distribution $\pi$ is invariant, then $\pi P = \pi$, $0 \le \pi_i \le 1$ and $\sum_{i \in I} \pi_i = 1$. Thus
\[
\pi(I - P + A) = \pi - \pi P + \pi A = \pi A = \Big(\sum_{i \in I} \pi_i\Big)\, a = a.
\]
Now we show the other direction. If $\pi(I - P + A) = a$, then $\pi_i + \sum_{j \in I} \pi_j (1 - p_{ji}) = 1$ for all $i \in I$. Suppose $|I| = n$. We sum the last equality over all $i \in I$ and obtain
\[
\sum_{i \in I} \pi_i + \sum_{i \in I} \sum_{j \in I} \pi_j (1 - p_{ji}) = n.
\]
Since $P$ is stochastic, i.e. $\sum_{i \in I} p_{ji} = 1$ for all $j \in I$, the left-hand side is equal to
\[
\sum_{i \in I} \pi_i + \sum_{j \in I} \pi_j \sum_{i \in I} (1 - p_{ji}) = \sum_{i \in I} \pi_i + (n-1) \sum_{j \in I} \pi_j = n \sum_{i \in I} \pi_i.
\]
This implies $\sum_{i \in I} \pi_i = 1$. This in turn gives $\pi A = a$, hence $\pi(I - P) = 0$, i.e. $\pi P = \pi$. It remains to argue that $\pi_i \ge 0$ for all $i \in I$. Since $\operatorname{rank}(I - P) = n - 1$ for an irreducible stochastic matrix $P$, and the normalization $\sum_i \pi_i = 1$ supplies the remaining dimension, the solution of the system is unique even without the non-negativity constraints. We know that a unique positive invariant distribution exists for irreducible finite chains (any such chain is positive recurrent), so the solution must be that positive invariant distribution.

Finally, $I - P + A$ is invertible. Suppose $x(I - P + A) = 0$ for a row vector $x$. Multiplying on the right by the column vector $\mathbf{1}$ of ones, and using $(I - P)\mathbf{1} = 0$ and $A\mathbf{1} = n\mathbf{1}$, gives $n \sum_i x_i = 0$, so $xA = 0$ and hence $x(I - P) = 0$. The space of row vectors annihilating $I - P$ is one-dimensional (spanned by $\pi$), so $x = c\pi$ for some scalar $c$, and $\sum_i x_i = c = 0$ forces $x = 0$. Thus $I - P + A$ is invertible and $\pi = a(I - P + A)^{-1}$.
