

Markov Chains
________________________________________________________________________

A Markov chain is a special type of discrete-time stochastic process. A stochastic
process, in turn, describes how a random variable evolves over a period of time. When we
wish to study how the market shares of detergent or cola manufacturers change from
month to month or year to year, for example, or how the price of a share changes over time,
we are, in fact, dealing with Markov chains. In a discrete-time process, we observe some
characteristic, say market share, at discrete points in time, such as in successive months.

Inputs

To see the inputs and assumptions required for Markovian analysis, consider this brand-
switching example. Assume that a certain market for detergents is shared by three
brands, D1, D2 and D3, in the proportions 20%, 50% and 30% respectively. Further, a study of
market behaviour reveals that the following pattern has almost stabilised over time. Every month,
30 per cent of the customers of D1 shift to D2 and 10 per cent shift to D3, while the remaining
60 per cent stick to D1 itself. For D2, the shifts to brands D1 and D3 are 20 per cent and 40
per cent respectively, while 40 per cent retain the same brand. Similarly, for brand D3, it
is found that 30 per cent shift to D1, 20 per cent to D2 and 50 per cent continue to use D3. This
information is summarised (in proportion form) in the table below.

Transition Matrix
________________________________________________________________________
                                   To Brand (next month)
From Brand (this month)          D1          D2          D3
------------------------------------------------------------------------
         D1                     0.60        0.30        0.10
         D2                     0.20        0.40        0.40
         D3                     0.30        0.20        0.50
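
The table can be held as a simple nested list, as in this minimal Python sketch; since every customer ends up with some brand, each row must sum to 1:

```python
# Transition matrix from the table above.
# Rows: current brand (D1, D2, D3); columns: brand next month.
P = [
    [0.60, 0.30, 0.10],  # from D1
    [0.20, 0.40, 0.40],  # from D2
    [0.30, 0.20, 0.50],  # from D3
]

# Each row is a probability distribution over next month's brand,
# so it must sum to 1.
for row in P:
    assert abs(sum(row) - 1.0) < 1e-9
```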

Before discussing the use of this information, let us formally state the important
concepts and assumptions of the analysis, given here.

Finite States Each of the brands in use here represents a state. Accordingly, there are
three states in this case. The analysis dealt with here assumes that there is a limited number
of states in the system. Further, all the states here are non-absorbing, in the sense that a
customer can switch among all three brands. If there were a detergent brand from which a
customer would never switch, it would be called absorbing.
Stationarity of Transition Probabilities The fact that 60 per cent of the customers of D1
retain their brand, 30 per cent shift to D2 and 10 per cent shift to D3 implies that these are
the respective probabilities of transition from one state to another. The values given in the
table above therefore represent transition probabilities. These transition probabilities are
assumed to remain constant over time.

First Order Process The choice of a brand in a given month is assumed to be
dependent only on the choice made in the preceding month. Hence, this illustrates a first-
order process. A situation in which the choice is influenced by the choices of the past two
months would describe a second-order process.

Uniform Time-periods The changes from one state to another are assumed to take
place once every month. Thus, the time interval between successive transitions is taken to be
constant.

Initial Condition In the detergent example above, it is assumed that the current market
shares are 20%, 50% and 30% for the three brands respectively. This can be represented
as (0.20 0.50 0.30), and is called the initial condition. An initial condition (0 0 1) for the
market obviously means that the entire market is held by D3. For a customer, however, an
initial condition (0 0 1) implies that she has currently bought brand D3.

Analysis

Using the initial condition and the transition probabilities, two broad types of results can
be obtained: specific-state probabilities and steady-state probabilities.

Specific-state Probabilities Let qi(k) denote the probability of the system being in state i
in period k. Then, in the context of the detergent example, qD1(0) would mean the
probability of a customer choosing D1 this month (k = 0). The initial condition can be given as
Q(0) = [qD1(0) qD2(0) qD3(0)].

For the system, we have Q(k) = Q(k – 1)×P = Q(k – 2)×P2 = … = Q(0)×Pk, where P
is the transition probability matrix. The elements of the vector Q(k) give the market
shares of the three brands in period k. Similarly, conditional state probabilities can be
obtained using the transition probabilities. Thus, qij(k) denotes the probability of
being in state j after k transitions, given initial state i. Hence, if a customer has bought
brand D2 in the last period, the probability that she buys brand D3 after 3 periods,
q23(3), is given by the element (2, 3) of the matrix P3 (obtained as P × P × P).
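
These k-step computations can be sketched in pure Python (no libraries), using the detergent data from the text:

```python
P = [[0.60, 0.30, 0.10],   # rows: from D1, D2, D3
     [0.20, 0.40, 0.40],
     [0.30, 0.20, 0.50]]
Q0 = [0.20, 0.50, 0.30]    # initial market shares

def vec_mat(v, M):
    """Row vector times matrix: one transition of the chain."""
    return [sum(v[i] * M[i][j] for i in range(len(M)))
            for j in range(len(M[0]))]

def mat_mul(A, B):
    """Matrix product, used to build P^k."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def shares_after(q, P, k):
    """Q(k) = Q(0) x P^k, applied one transition at a time."""
    for _ in range(k):
        q = vec_mat(q, P)
    return q

Q1 = shares_after(Q0, P, 1)     # next month's shares: about (0.31, 0.32, 0.37)
P3 = mat_mul(mat_mul(P, P), P)  # three-step transition matrix
q23 = P3[1][2]                  # q23(3), 0-based indexing: about 0.342
```

Note that each row of P3 still sums to 1, since it is itself a transition matrix.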

Steady State Probabilities If the transitions continue indefinitely, the system
tends to stabilise, so that the market shares of the different firms tend to remain constant.
This phenomenon of steady state, or equilibrium, can be expressed symbolically as Q(k) =
Q(k – 1), so that the state probabilities in period k are the same as in period k – 1.
The steady-state, or equilibrium, probabilities q1, q2, q3, ..., qn are obtained by
solving the following set of equations:
q1 = p11q1 + p21q2 + ... + pn1qn
q2 = p12q1 + p22q2 + ... + pn2qn
q3 = p13q1 + p23q2 + ... + pn3qn
...
qn = p1nq1 + p2nq2 + ... + pnnqn
and 1 = q1 + q2 + q3 + ... + qn

For the detergent market case, q1, q2 and q3 would represent the market shares of the three
brands in the long run.
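
One way to see the steady state is simply to let the transitions run on: after enough months the shares stop changing, whatever the starting point. A short Python sketch (solving the linear equations above gives the exact values 22/57, 17/57 and 18/57):

```python
P = [[0.60, 0.30, 0.10],
     [0.20, 0.40, 0.40],
     [0.30, 0.20, 0.50]]
q = [0.20, 0.50, 0.30]   # initial condition; any starting shares work

# Apply q <- q * P repeatedly until the shares settle.
for _ in range(200):
    q = [sum(q[i] * P[i][j] for i in range(3)) for j in range(3)]

# q is now approximately (0.386, 0.298, 0.316): the long-run shares.
```

Note that D1 ends up with the largest long-run share (about 38.6%) even though it starts with the smallest (20%), because it retains its customers at the highest rate.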

Absorbing States Markovian analysis can be extended to cover absorbing states as well. To
illustrate, with the passage of time (months, say) an account receivable may become one
month overdue, two months overdue, and so on, until it is either paid off or written off as a
bad debt. The last two states are called absorbing states, while the others are called
transient states. In such cases, the question to be answered is: if the chain begins in a
given transient state, what is the probability of it ending up in each absorbing state? To solve
such a problem, the n × n transition probability matrix, with n states of which m are
absorbing, is partitioned into four sub-matrices as follows:

(a) O: an m by n – m zero matrix, since the chain cannot leave an absorbing state,
(b) Q: a square matrix of order n – m of transition probabilities among the transient states,
(c) R: an n – m by m matrix of transitions from the transient to the absorbing states, and
(d) I: an m by m identity matrix, indicating that one cannot move from one absorbing state to
another.

Now, the matrix (I – Q)^-1, known as the fundamental matrix of the Markov chain, is
obtained. The ijth element of this matrix gives the expected number of periods that the
chain will spend in transient state tj before reaching an absorbing state, given that it starts
in transient state ti. Finally, (I – Q)^-1R answers the question: if the chain begins in a given
transient state ti, what is the probability that it will eventually be absorbed in absorbing
state aj? The answer lies in the ijth element of that matrix.
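
The computation can be sketched in Python for a small made-up receivables example (the transition probabilities below are illustrative assumptions, not from the text): two transient states (one month overdue, two months overdue) and two absorbing states (paid, bad debt).

```python
# Hypothetical data: transient states t1 = 1 month overdue,
# t2 = 2 months overdue; absorbing states a1 = paid, a2 = bad debt.
Q = [[0.0, 0.6],   # t1 -> (t1, t2): 60% slip to 2 months overdue
     [0.0, 0.0]]   # t2 -> (t1, t2): cannot become less overdue
R = [[0.4, 0.0],   # t1 -> (paid, bad debt)
     [0.6, 0.4]]   # t2 -> (paid, bad debt)

def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    """Inverse of a 2 x 2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

I = [[1.0, 0.0], [0.0, 1.0]]
IQ = [[I[i][j] - Q[i][j] for j in range(2)] for i in range(2)]
N = inv2(IQ)        # fundamental matrix (I - Q)^-1
A = mat_mul(N, R)   # absorption probabilities, one row per transient state
```

With these assumed numbers, an account starting one month overdue is eventually paid with probability 0.76 and written off with probability 0.24; each row of A sums to 1, since every account is eventually absorbed.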
