
Decision Theory

Unit 3: Phase 6 - Solve problems by applying the algorithms of Unit 3

Presented to the tutor:


PAULA ANDREA CARVAJAL

Presented By:

XXXXXXXXXXX
XXXXXXXXXXX
XXXXXXXXXXXXX
XXXXXXXXXX

Group: 212066_30

NATIONAL OPEN AND DISTANCE UNIVERSITY - UNAD


SCHOOL OF BASIC SCIENCES, TECHNOLOGY AND ENGINEERING
May 2019
INTRODUCTION

In probability theory, a Markov chain (or Markov model) is a special type of discrete stochastic process in which the probability of an event occurring depends only on the immediately preceding event. This memoryless characteristic is known as the Markov property.

More formally, the definition assumes that in these stochastic processes the probability of something happening does not depend on the full history of the process being studied. For this reason, it is often said that these chains lack memory. The basis of the chains is the Markov property, which summarizes the above in the following rule: what the chain experiences at time t + 1 depends only on what happened at time t (the immediately preceding instant).
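
In symbols, for a chain with states X0, X1, X2, ..., the Markov property reads:

```latex
P(X_{t+1} = j \mid X_t = i, X_{t-1} = i_{t-1}, \ldots, X_0 = i_0)
  = P(X_{t+1} = j \mid X_t = i)
```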

Markov chains have found important real-world applications in business and finance, allowing analysts to study and estimate future behavior patterns of individuals based on previous experience and results. This is reflected in fields such as the prevention of delinquency, the study of consumer behavior in a sector, and the seasonal need for personnel and labor.

In the development of the present work, the algorithms applied in each of the five exercises are described: first the steady-state method, and subsequently the initial-state multiplication method.
Problem 1. Markov chains (steady state):

XYZ insurance company charges its customers according to their accident history. If you have not had accidents in the last two years, you will be charged $715,000 for the new policy (state 0); if you have had an accident in each of the last two years, you will be charged $835,000 (state 1); if you had an accident in the first of the last two years, you will be charged $789,000 (state 2); and if you had an accident in the second of the last two years, you will be charged $813,000 (state 3). The historical behavior of each state is given by the following accident cases, taken in four different events.


ACCIDENTS IN THE YEAR


STATE E0 E1 E2 E3 TOTAL
E0 920 1380 1840 460 4600
E1 1740 0 1160 2900 5800
E2 900 900 1800 900 4500
E3 1140 1520 0 1140 3800

a. What is the transition matrix resulting from proportionality according to the accident
history?

PROBABILITIES (TRANSITION MATRIX)
STATE E0 E1 E2 E3 TOTAL
E0 0,2 0,3 0,4 0,1 1,00
E1 0,3 0,0 0,2 0,5 1,00
E2 0,2 0,2 0,4 0,2 1,00
E3 0,3 0,4 0,0 0,3 1,00


        W    X    Y    Z
    W   0,2  0,3  0,4  0,1   = 1,0
P = X   0,3  0,0  0,2  0,5   = 1,0
    Y   0,2  0,2  0,4  0,2   = 1,0
    Z   0,3  0,4  0,0  0,3   = 1,0

Let q = (W, X, Y, Z) be the steady-state vector. The condition qP = q, together with normalization, gives the system:

EC1: 0,2W + 0,3X + 0,2Y + 0,3Z = W
EC2: 0,3W + 0,0X + 0,2Y + 0,4Z = X
EC3: 0,4W + 0,2X + 0,4Y + 0,0Z = Y
EC4: 0,1W + 0,5X + 0,2Y + 0,3Z = Z
EC5: W + X + Y + Z = 1
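
This system was solved with Excel Solver and WinQSB (see below); as a cross-check, a minimal Python sketch solves the same equations (numpy is an assumption here, since the original work was done in spreadsheets):

```python
import numpy as np

# Transition matrix P built from the accident history (each row sums to 1)
P = np.array([
    [0.2, 0.3, 0.4, 0.1],
    [0.3, 0.0, 0.2, 0.5],
    [0.2, 0.2, 0.4, 0.2],
    [0.3, 0.4, 0.0, 0.3],
])

# Steady state: q P = q, i.e. (P^T - I) q = 0, plus W + X + Y + Z = 1.
# One of the four balance equations is redundant, so replace the last
# one with the normalization equation before solving.
A = P.T - np.eye(4)
A[-1, :] = 1.0
b = np.array([0.0, 0.0, 0.0, 1.0])

q = np.linalg.solve(A, b)
print(q)  # approx. [0.2505 0.2329 0.2446 0.2720]
```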

State descriptions:
E0: If you have not had accidents in the last two years, you will be charged $715.000,00 for the new policy.
E1: If you have had an accident in each of the last two years, you will be charged $835.000,00.
E2: If you had an accident in the first of the last two years, you will be charged $789.000,00.
E3: If you had an accident in the second of the last two years, you will be charged $813.000,00.

SOLVER AND WinQSB

Decision cells (initial values):
W       X       Y       Z
0,2500  0,2500  0,2500  0,2500

Constraint coefficients, q(P - I) = 0 plus the normalization row:
W      X      Y      Z      INDEP   EQUAL TO
-0,8   0,3    0,4    0,1    0       0,000000
0,3    -1,0   0,2    0,5    0       0,000000
0,2    0,2    -0,6   0,2    0       0,000000
0,3    0,4    0,0    -0,7   0       0,000000
1      1      1      1      -1
State probabilities and weighted premiums:

STATE   PREMIUM        PROBABILITY   PREMIUM x PROBABILITY
E0      $715.000,00    0,2505        $179.100
E1      $835.000,00    0,2329        $194.442
E2      $789.000,00    0,2446        $193.006
E3      $813.000,00    0,2720        $221.155

b. What is the average premium paid by a customer, according to the historical accident rate? $787.704
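
The average premium is the expected value of the premiums weighted by the steady-state probabilities; continuing the sketch above:

```python
premiums = np.array([715_000, 835_000, 789_000, 813_000])  # E0..E3
print(premiums @ q)  # approx. 787,704 -- the average premium
```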
Steady state by successive iteration:

ITERATION   E0       E1       E2       E3
P0          0,2500   0,2500   0,2500   0,2500
P1          0,2500   0,2250   0,2500   0,2750
P2          0,2500   0,2350   0,2450   0,2700
P3          0,2505   0,2320   0,2450   0,2725
P4          0,2505   0,2332   0,2446   0,2718
P5          0,2505   0,2328   0,2447   0,2721
P6          0,2505   0,2329   0,2446   0,2720
P7          0,2505   0,2329   0,2446   0,2720

Problem 2. Markov chains (steady state):

In Colombia there are 5 main mobile operators: Tigo, Comcel, Movistar, ETB and Uff, which we will call states. The following chart summarizes the probabilities that each client stays with their current operator or switches to another company.
STATE TIGO COMCEL MOVISTAR ETB UFF
TIGO 0,1 0,2 0,4 0,1 0,2
COMCEL 0,3 0,2 0,1 0,2 0,2
MOVISTAR 0,1 0,3 0,2 0,2 0,2
ETB 0,1 0,3 0,2 0,1 0,3
UFF 0,1 0,2 0,3 0,3 0,1
The current market shares of each operator are: Tigo 0.3, Comcel 0.2, Movistar 0.3, ETB 0.1 and Uff 0.1 (initial state).
STATE      TIGO  COMCEL  MOVISTAR  ETB  UFF  SUM
TIGO       0,1   0,2     0,4       0,1  0,2  1
COMCEL     0,3   0,2     0,1       0,2  0,2  1
MOVISTAR   0,1   0,3     0,2       0,2  0,2  1
ETB        0,1   0,3     0,2       0,1  0,3  1
UFF        0,1   0,2     0,3       0,3  0,1  1


State vectors by repeated multiplication (each row sums to 1):

     TIGO      COMCEL    MOVISTAR  ETB       UFF
P0   0,3       0,2       0,3       0,1       0,1
P1   0,14      0,24      0,25      0,17      0,2
P2   0,148     0,242     0,224     0,189     0,197
P3   0,1484    0,2413    0,2251    0,186     0,1992
P4   0,14826   0,24111   0,22547   0,18648   0,19868
P5   0,148222  0,241195  0,225409  0,186394  0,19878
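
These vectors come from repeatedly multiplying the state vector by the transition matrix (each step is pi_{n+1} = pi_n * P). A minimal Python sketch (again assuming numpy; the original used Excel) reproduces P1 through P5:

```python
import numpy as np

# Transition matrix: rows/columns ordered Tigo, Comcel, Movistar, ETB, Uff
P = np.array([
    [0.1, 0.2, 0.4, 0.1, 0.2],
    [0.3, 0.2, 0.1, 0.2, 0.2],
    [0.1, 0.3, 0.2, 0.2, 0.2],
    [0.1, 0.3, 0.2, 0.1, 0.3],
    [0.1, 0.2, 0.3, 0.3, 0.1],
])

pi = np.array([0.3, 0.2, 0.3, 0.1, 0.1])  # initial market shares (P0)
for n in range(1, 6):
    pi = pi @ P                            # one transition step
    print(f"P{n}:", np.round(pi, 6))       # matches the table above
```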

Problem 3. Markov chains (Initial state multiplication):

In Colombia there are 6 main mobile operators: Avantel, Tigo, Comcel, Movistar, ETB and Uff, which we will call states. The following chart summarizes the probabilities that each client stays with their current operator or switches to another company.

TRANSITION MATRIX
STATE      TIGO  COMCEL  MOVISTAR  ETB  AVANTEL  UFF  SUM
TIGO       0,1   0,2     0,4       0,1  0,1      0,1  1,00
COMCEL     0,1   0,2     0,1       0,2  0,3      0,1  1,00
MOVISTAR   0,1   0,3     0,2       0,2  0,2      0,0  1,00
ETB        0,1   0,3     0,2       0,1  0,1      0,2  1,00
AVANTEL    0,3   0,0     0,2       0,2  0,2      0,1  1,00
UFF        0,1   0,2     0,2       0,3  0,0      0,2  1,00

The current market shares of each operator are: Tigo 0.2, Comcel 0.2, Movistar 0.3, ETB 0.1, Avantel 0.1 and Uff 0.1 (initial state).

State vectors by repeated multiplication (each row sums to 1,00):

     TIGO     COMCEL   MOVISTAR  ETB      AVANTEL  UFF      SUM
P0   0,20     0,20     0,30      0,10     0,10     0,10     1,00
P1   0,1200   0,2200   0,2200    0,1800   0,1700   0,0900   1,00
P2   0,1340   0,2060   0,2020    0,1790   0,1740   0,1050   1,00
P3   0,1348   0,2033   0,2062    0,1792   0,1683   0,1082   1,00
P4   0,1337   0,2049   0,2066    0,1794   0,1673   0,1081   1,00
P5   0,1335   0,2051   0,2062    0,1795   0,1676   0,1081   1,00

SOLVER

     TIGO     COMCEL   MOVISTAR  ETB      AVANTEL  UFF      SUM
P0   0,20     0,20     0,30      0,10     0,10     0,10     1
P1   0,1200   0,2200   0,2200    0,1800   0,1700   0,0900
Problem 4. Markov chains (Initial state multiplication):

Suppose that 4 types of soft drinks are available in the market: Colombiana, Pepsi Cola, Fanta and Coca Cola. When a person has bought Colombiana, there is a 40% probability that they will continue to consume it, 20% that they will buy Pepsi Cola, 10% that they will buy Fanta, and 30% that they will switch to Coca Cola. When the buyer currently consumes Pepsi Cola, there is a 30% probability that they will continue to buy it, 20% that they will buy Colombiana, 20% Fanta, and 30% Coca Cola. If Fanta is currently consumed, the likelihood of it continuing to be consumed is 20%, with 40% buying Colombiana, 20% Pepsi Cola, and 20% going to Coca Cola. If Coca Cola is currently consumed, the probability that it will continue to be consumed is 50%, with 20% buying Colombiana, 20% Pepsi Cola, and 10% switching to Fanta.

At present, the brands Colombiana, Pepsi Cola, Fanta and Coca Cola have the following market shares respectively (30%, 25%, 15% and 30%) during week 3.
ONE-STEP TRANSITION MATRIX

            COLOMBIANA  PEPSI  FANTA  COCACOLA
COLOMBIANA  0,40        0,20   0,10   0,30
PEPSI COLA  0,20        0,30   0,20   0,30
FANTA       0,40        0,20   0,20   0,20
COCACOLA    0,20        0,20   0,10   0,50

d. Find the transition matrix.

TWO-STEP TRANSITION MATRIX (P squared):

            COLOMBIANA  PEPSI  FANTA  COCACOLA
COLOMBIANA  0,30        0,22   0,13   0,35
PEPSI COLA  0,28        0,23   0,15   0,34
FANTA       0,32        0,22   0,14   0,32
COCACOLA    0,26        0,22   0,13   0,39

e. Find the probability that each user stays with the brand or changes to another for period 4 (problem 4) and period 5 (problem 5).

     COLOMBIANA  PEPSI    FANTA  COCACOLA
S0   0,30        0,25     0,15   0,30
S1   0,29        0,225    0,14   0,345
S2   0,286       0,2225   0,137  0,355
S3   0,2845      0,22225  0,136  0,357
S4   0,284       0,222    0,136  0,358
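
A minimal numpy sketch (an assumption, as in the earlier problems) reproduces both the two-step matrix of part (d) and the S1 to S4 vectors:

```python
import numpy as np

# One-step matrix: rows/columns ordered Colombiana, Pepsi, Fanta, Coca Cola
P = np.array([
    [0.4, 0.2, 0.1, 0.3],
    [0.2, 0.3, 0.2, 0.3],
    [0.4, 0.2, 0.2, 0.2],
    [0.2, 0.2, 0.1, 0.5],
])

# Two-step transition matrix of part (d): P squared
print(np.round(np.linalg.matrix_power(P, 2), 2))

s = np.array([0.30, 0.25, 0.15, 0.30])  # week-3 market shares (S0)
for n in range(1, 5):
    s = s @ P                            # S1 .. S4
    print(f"S{n}:", np.round(s, 4))
```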


WinQSB
Problem 5. Markov chains (Initial state multiplication):
Suppose there are 6 brands of jeans in the Colombian market: Brand 1, Brand 2, Brand 3, Brand 4, Brand 5 and Brand 6. The following table shows the probabilities that a customer continues to use the same brand or changes it.

STATE    BRAND 1  BRAND 2  BRAND 3  BRAND 4  BRAND 5  BRAND 6
BRAND 1  0,2      0,16     0,15     0,21     0,18     0,1
BRAND 2  0,14     0,18     0,2      0,19     0,15     0,14
BRAND 3  0,13     0,16     0,15     0,21     0,2      0,15
BRAND 4  0,21     0,2      0,15     0,2      0,18     0,06
BRAND 5  0,15     0,15     0,15     0,19     0,15     0,21
BRAND 6  0,17     0,16     0,17     0,18     0,19     0,13

At present, the brands have the following market shares respectively (20%, 15%, 17%, 15%, 13% and 20%) during week 4.
TWO-STEP TRANSITION MATRIX (P squared)

STATE    BRAND 1  BRAND 2  BRAND 3  BRAND 4  BRAND 5  BRAND 6  SUM
BRAND 1  0,17     0,1698   0,16     0,1981   0,1738   0,1283   1,000
BRAND 2  0,1654   0,1697   0,1618   0,1973   0,1755   0,1303   1,000
BRAND 3  0,1675   0,1696   0,161    0,1962   0,1737   0,132    1,000
BRAND 4  0,1687   0,1702   0,1612   0,1986   0,1722   0,1291   1,000
BRAND 5  0,1686   0,1691   0,1617   0,1958   0,1761   0,1287   1,000
BRAND 6  0,1669   0,1685   0,1606   0,1973   0,1742   0,1325   1,000
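
This table is the square of the one-step matrix given in the problem statement; a short numpy check (same assumption as the earlier sketches):

```python
import numpy as np

# One-step brand-switching matrix from the problem statement
P = np.array([
    [0.20, 0.16, 0.15, 0.21, 0.18, 0.10],
    [0.14, 0.18, 0.20, 0.19, 0.15, 0.14],
    [0.13, 0.16, 0.15, 0.21, 0.20, 0.15],
    [0.21, 0.20, 0.15, 0.20, 0.18, 0.06],
    [0.15, 0.15, 0.15, 0.19, 0.15, 0.21],
    [0.17, 0.16, 0.17, 0.18, 0.19, 0.13],
])

# Squaring the one-step matrix reproduces the two-step table above
print(np.round(np.linalg.matrix_power(P, 2), 4))
```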

STATE VECTORS (iterating with the two-step matrix)
     BRAND 1  BRAND 2  BRAND 3  BRAND 4  BRAND 5  BRAND 6      SUM
P0   0,20     0,15     0,17     0,15     0,13     0,20         1,00
P1   0,168    0,169    0,161    0,197    0,17     0,130241     1,00
P2   0,1679   0,1695   0,1611   0,1973   0,17     0,130008977  1,00
P3   0,16791  0,16954  0,16108  0,19725  0,17     0,130008585  1,00
e. Find the probability that each user stays with the brand or changes to another for period 4 (problem 4) and period 5 (problem 5).

     BRAND 1  BRAND 2  BRAND 3  BRAND 4  BRAND 5  BRAND 6
P4   0,20     0,15     0,17     0,15     0,13     0,2
P5   0,1681   0,1677   0,1615   0,1969   0,1770   0,1288

SOLVER

     BRAND 1    BRAND 2     BRAND 3     BRAND 4     BRAND 5     BRAND 6
P4   0,1445278  0,14592622  0,13864305  0,16977995  0,14994618  0,11190153
     0,1445278  0,14592622  0,13864305  0,1697799   0,14994616  0,1119016
CONCLUSIONS

With the development of this work we can conclude that Markov chains are a tool for analyzing the behavior of certain types of stochastic processes, that is, processes that evolve in a non-deterministic way over time through a set of states.

Their elaboration requires knowledge of several elements, such as the states and the transition matrix.

These elements were introduced by their creator, Markov, who constructed sequences of chained experiments out of the need to describe physical phenomena mathematically.

This method is very important, since in recent years it has begun to be used as an instrument of marketing research, to examine and forecast the behavior of customers from the point of view of their loyalty to a brand and their switching to other brands. The application of this technique is not limited to marketing; its field of action extends to many other areas.
BIBLIOGRAPHIC REFERENCES

Ibe, O. (2013). Markov Processes for Stochastic Modeling. Massachusetts, USA: University of Massachusetts Editorial. Retrieved from http://bibliotecavirtual.unad.edu.co:2051/login.aspx?direct=true&db=nlebk&AN=516132&lang=es&site=eds-live

Dynkin, E. (1982). Markov Processes and Related Problems of Analysis. Oxford, UK: Mathematical Institute Editorial. Retrieved from http://bibliotecavirtual.unad.edu.co:2048/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=e000xww&AN=552478&lang=es&site=ehost-live

Pineda, R. (2017). Virtual learning object Unit 3: Markov decision processes [Video file]. Retrieved from http://hdl.handle.net/10596/13271

Panofsky, A. (2012). Examples in Markov Decision Processes. Singapore: Imperial College Press Optimization Series. Retrieved from http://bibliotecavirtual.unad.edu.co:2051/login.aspx?direct=true&db=nlebk&AN=545467&lang=es&site=eds-live

Know and develop the themes of Unit 3: Third web conference, Markov decision processes [Video file]. Retrieved from http://bit.ly/2ULl9OQ
