
Introduction to Markov Chains

A Markov chain is a random process with the property that, given the values of the process from time zero up through the current time, the conditional probability of the value of the process at any future time depends only on its value at the current time. In other words, the future and the past are conditionally independent given the present.

Examples
Random walks, queuing processes, and birth-death processes.

Discrete-time Markov chains


A sequence of integer-valued random variables X0, X1, ... is called a Markov chain if, for all n ≥ 1,

$$P(X_{n+1} = x_{n+1} \mid X_n = x_n, X_{n-1} = x_{n-1}, \dots, X_0 = x_0) = P(X_{n+1} = x_{n+1} \mid X_n = x_n).$$

Consider a person who has had too much to drink and is staggering around. Suppose that with each step, the person randomly moves forward or backward by one step. This is the idea to be captured in the following example.
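As a quick illustration, here is a minimal Python sketch of this staggering walk (the function name drunkards_walk and the step count are choices of the example, not from the text):

    import random

    def drunkards_walk(n_steps):
        """Simulate n_steps of a walk that moves forward (+1) or
        backward (-1) with equal probability at each step."""
        position = 0
        path = [position]
        for _ in range(n_steps):
            position += random.choice([-1, 1])
            path.append(position)
        return path

    print(drunkards_walk(10))  # e.g. [0, 1, 0, -1, 0, 1, 2, 1, 0, -1, 0]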

State space and transition probabilities


The set of possible values that the random variables Xn can take is called the state space of the chain. We take the state space to be the set of integers or some specified subset of the integers. The conditional probabilities P(Xn+1 = j | Xn = i) are called transition probabilities.

We assume that the transition probabilities do not depend on the time n. Such a Markov chain is said to have stationary transition probabilities, or to be time homogeneous. For a time-homogeneous Markov chain, we use the notation

$$p_{ij} := P(X_{n+1} = j \mid X_n = i).$$

The pij are also called the one-step transition probabilities because they are the probabilities of going from state i to state j in one time step. One of the most common ways to specify the transition probabilities is with a state transition diagram. This particular diagram says that the state space is the finite set {0, 1}, and that p01 = a, p10 = b, p00 = 1 − a, and p11 = 1 − b. Note that the sum of all the probabilities leaving a state must be one. This is because for each state i,

$$\sum_j p_{ij} = 1.$$
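To make this concrete, here is a short sketch using NumPy (the values a = 0.3 and b = 0.5 are illustrative assumptions) that builds the two-state transition matrix, verifies that each row sums to one, and computes n-step transition probabilities by matrix powers:

    import numpy as np

    a, b = 0.3, 0.5  # illustrative transition probabilities
    P = np.array([[1 - a, a],     # row i = 0: p00 = 1 - a, p01 = a
                  [b, 1 - b]])    # row i = 1: p10 = b,     p11 = 1 - b

    # Each row must sum to one: from state i the chain must go somewhere.
    assert np.allclose(P.sum(axis=1), 1.0)

    # The (i, j) entry of the matrix power P^n is the n-step
    # transition probability P(X_n = j | X_0 = i).
    print(np.linalg.matrix_power(P, 2))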

Figure 12.3 shows the special case in which ai = a and bi = b for all i. In general, the state transition diagram of the random walk tells us that

$$p_{ij} = \begin{cases}
a_i, & j = i + 1,\\
b_i, & j = i - 1,\\
1 - (a_i + b_i), & j = i,\\
0, & \text{otherwise}.
\end{cases} \tag{12.9}$$
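In code, (12.9) amounts to the following one-step sampling rule. This is a sketch; the callables a and b, standing in for the sequences ai and bi, are assumptions of the example:

    import random

    def step(i, a, b):
        """One step of the random walk of (12.9): from state i, move to
        i + 1 with probability a(i), to i - 1 with probability b(i),
        and stay at i with probability 1 - (a(i) + b(i))."""
        u = random.random()
        if u < a(i):
            return i + 1
        elif u < a(i) + b(i):
            return i - 1
        return i

    # Homogeneous special case of Figure 12.3: a_i = a and b_i = b for all i.
    print(step(0, lambda i: 0.4, lambda i: 0.4))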

We now introduce a barrier at zero, leading to the state transition diagram in Figure 12.4. In this case, we speak of a random walk with a barrier. For i ≥ 1, the formula for pij is given by (12.9), while for i = 0,

$$p_{0j} = \begin{cases}
a_0, & j = 1,\\
1 - a_0, & j = 0,\\
0, & \text{otherwise}.
\end{cases} \tag{12.10}$$

If a0 = 1, the barrier is said to be reflecting. If a0 = 0, the barrier is said to be absorbing. Once a chain hits an absorbing state, the chain stays in that state from that time onward. A random walk with a barrier at the origin has several interpretations.
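Here is a minimal sketch of the walk with a barrier at zero, assuming constant ai = a and bi = b for i ≥ 1 (the function name and the parameter values in the calls are illustrative):

    import random

    def barrier_walk(start, a, b, a0, n_steps):
        """Random walk with a barrier at zero: (12.9) for i >= 1 and
        (12.10) for i = 0.  a0 = 1 gives a reflecting barrier,
        a0 = 0 an absorbing one."""
        i = start
        for _ in range(n_steps):
            u = random.random()
            if i == 0:
                i = 1 if u < a0 else 0
            elif u < a:
                i += 1
            elif u < a + b:
                i -= 1
        return i

    print(barrier_walk(5, 0.3, 0.5, 1.0, 100))  # reflecting barrier at 0
    print(barrier_walk(5, 0.3, 0.5, 0.0, 100))  # absorbing barrier: returns 0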

When thinking of a drunken person staggering around, we can view a wall or a fence as a reflecting barrier; if the person backs into the wall, then with the next step the person must move forward away from the wall. Similarly, we can view a curb or step as an absorbing barrier; if the person trips and falls down when stepping over a curb, then the walk is over.

A random walk with a barrier at the origin can be viewed as a model for a queue with an infinite buffer. Consider a queue of packets buffered at an Internet router. The state of the chain is the number of packets in the buffer. This number cannot go below zero. The number of packets can increase by one if a new packet arrives, decrease by one if a packet is forwarded to its next destination, or stay the same if both or neither of these events occurs.
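The same dynamics can be simulated directly in terms of arrivals and departures. In this sketch the per-slot probabilities p_arr and p_dep are illustrative assumptions, not values from the text:

    import random

    def queue_occupancy(p_arr, p_dep, n_slots):
        """Buffer occupancy over n_slots time slots.  In each slot a packet
        arrives with probability p_arr and, if the buffer is nonempty, one
        departs with probability p_dep; if both or neither event occurs,
        the occupancy is unchanged, and it never goes below zero."""
        n = 0
        history = [n]
        for _ in range(n_slots):
            arrival = random.random() < p_arr
            departure = n > 0 and random.random() < p_dep
            n += int(arrival) - int(departure)
            history.append(n)
        return history

    print(queue_occupancy(0.3, 0.5, 20))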

Consider a random walk with barriers at the origin and at N, as shown in Figure 12.5. The formula for pij is given by (12.9) above for 1 ≤ i ≤ N − 1, by (12.10) above for i = 0, and, for i = N, by

$$p_{Nj} = \begin{cases}
b_N, & j = N - 1,\\
1 - b_N, & j = N,\\
0, & \text{otherwise}.
\end{cases}$$

This chain can be viewed as a model for a queue with a finite buffer, especially if ai = a and bi = b for all i. When a0 = 0 and bN = 0, the barriers at 0 and N are absorbing, and the chain is a model for the gambler's ruin problem. In this problem, a gambler starts at time zero with 1 ≤ i ≤ N − 1 dollars and plays until he either runs out of money (absorption into state zero) or his winnings reach N dollars and he stops playing (absorption into state N).
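A Monte Carlo sketch of gambler's ruin for a fair game follows (the function name and trial count are choices of the example). For a fair game the exact ruin probability starting from i dollars is the well-known 1 − i/N, which the estimate should approach:

    import random

    def ruin_probability(i, N, trials=100_000):
        """Estimate the probability that a gambler starting with i dollars
        hits 0 before N, winning or losing $1 per fair bet."""
        ruined = 0
        for _ in range(trials):
            x = i
            while 0 < x < N:
                x += random.choice([-1, 1])
            ruined += (x == 0)
        return ruined / trials

    # With i = 3 and N = 10, the exact answer is 1 - 3/10 = 0.7.
    print(ruin_probability(3, 10))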

If N = 2 and b2 = 0, the chain can be interpreted as the story of life if we view state i = 0 as being the healthy state, i = 1 as being the sick state, and i = 2 as being the death state. In this model, if you are healthy (in state 0), you remain healthy with probability 1 − a0 and become sick (move to state 1) with probability a0.

If you are sick (in state 1), you become healthy (move to state 0) with probability b1, remain sick (stay in state 1) with probability 1 − (a1 + b1), or die (move to state 2) with probability a1. Since state 2 is absorbing (b2 = 0), once you enter this state, you never leave.
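As a sketch, this three-state chain can be written as a transition matrix, and the expected time until absorption into the death state follows from standard first-step analysis, solving (I − Q)t = 1 for the transient block Q (the numerical values of a0, a1, b1 below are illustrative assumptions):

    import numpy as np

    a0, a1, b1 = 0.1, 0.2, 0.6  # illustrative values
    # States: 0 = healthy, 1 = sick, 2 = dead (absorbing since b2 = 0).
    P = np.array([
        [1 - a0, a0,             0.0],
        [b1,     1 - (a1 + b1),  a1],
        [0.0,    0.0,            1.0],
    ])

    # Expected number of steps until absorption from each transient state:
    # solve (I - Q) t = 1, where Q is the transient-to-transient block.
    Q = P[:2, :2]
    t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
    print(t)  # expected lifetimes starting healthy and starting sick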

Stationary distributions
