
Markov Chains: The Game of Craps

Spencer Judge and Daniel Fikse, February 2012

Introduction

Our paper discusses the game of craps from a probabilistic perspective. We ask three questions: (1) what is the overall probability of winning a game of craps, (2) how does the probability of winning change as the game progresses, and (3) how many dice rolls should it take on average for the game to end? We model the game's different outcomes as an absorbing Markov chain, to be discussed later.

First, for readers who have never played craps, we explain the game. Craps hinges on the successive rolling of two six-sided dice. Say you are a player: at the beginning of the game, you have the option to make a bet for Pass or Don't Pass. The Don't Pass bet is generally avoided, so for simplicity's sake we will assume that if you are betting, you are betting Pass. The dice are rolled, and if the sum of the dice values is 7 or 11, you win; if it is a 2, a 3, or a 12, you lose. If the sum is any other value between 4 and 10 (inclusive), that number becomes the "Point", and the game moves into the next phase. From here the dice are rolled repeatedly until either a 7 is rolled, in which case you lose, or the Point value is rolled, in which case you win.

What we want to determine, then, are seven probabilities associated with seven initial states:

1. The overall probability of winning once a Pass Line bet is placed at the beginning of the game.

2. The probability of winning once the Point is set at 4.
3. The probability of winning once the Point is set at 5.
4. The probability of winning once the Point is set at 6.
5. The probability of winning once the Point is set at 8.
6. The probability of winning once the Point is set at 9.
7. The probability of winning once the Point is set at 10.

We also want the expected number of rolls to complete the game from each of the seven initial states.
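To make these rules concrete, the game is easy to simulate. The following Monte Carlo sketch in Mathematica is our own illustration rather than part of the analysis; the function name simulateCraps and the sample size of 100000 games are arbitrary choices. Its output should land near the exact win probability derived below.

simulateCraps[] := Module[{roll, point},
  roll = RandomInteger[{1, 6}] + RandomInteger[{1, 6}];
  Which[
   MemberQ[{7, 11}, roll], True,     (* natural: immediate win *)
   MemberQ[{2, 3, 12}, roll], False, (* craps: immediate loss *)
   True,                             (* otherwise the Point is set *)
   (point = roll;
    roll = RandomInteger[{1, 6}] + RandomInteger[{1, 6}];
    While[roll != point && roll != 7,
     roll = RandomInteger[{1, 6}] + RandomInteger[{1, 6}]];
    roll == point)]]

(* Fraction of wins in 100000 simulated games; roughly 0.493. *)
N[Count[Table[simulateCraps[], {100000}], True]/100000]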

Data Collection

Very little data collection was necessary beyond confirming the rules of craps as outlined above. Other than that, we assembled a table of probabilities for the different outcomes 2 through 12 of rolling two six-sided dice:

Probability of 2d6 Sums

Sum   Ways to roll                    Probability
 2    1+1                             1/36 ≈ 3%
 3    1+2, 2+1                        2/36 ≈ 6%
 4    1+3, 2+2, 3+1                   3/36 ≈ 8%
 5    1+4, 2+3, 3+2, 4+1              4/36 ≈ 11%
 6    1+5, 2+4, 3+3, 4+2, 5+1         5/36 ≈ 14%
 7    1+6, 2+5, 3+4, 4+3, 5+2, 6+1    6/36 ≈ 17%
 8    2+6, 3+5, 4+4, 5+3, 6+2         5/36 ≈ 14%
 9    3+6, 4+5, 5+4, 6+3              4/36 ≈ 11%
10    4+6, 5+5, 6+4                   3/36 ≈ 8%
11    5+6, 6+5                        2/36 ≈ 6%
12    6+6                             1/36 ≈ 3%
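As a cross-check, the same distribution can be generated by enumerating all 36 ordered rolls; a one-off sketch in Mathematica (the variable name diceSums is our own):

(* Tally the sums of all 36 ordered rolls of two dice. *)
diceSums = Sort[Tally[Flatten[Table[i + j, {i, 6}, {j, 6}]]]];
(* {sum, probability} pairs, matching the table above. *)
{#[[1]], #[[2]]/36} & /@ diceSums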

With this we had all the information we needed to formulate our model.

Implementation

From the discussion of the rules above, it is seen that with each dice roll, there are three basic outcomes, or states to enter following the roll: either you have won, you have lost, or the Point is set and you have not yet won or lost. The probability of transitioning from the starting state to the winning state, for example, is 6/36 + 2/36 = 2/9, since at the beginning of the game you can roll either a 7 (with probability 6/36) or an 11 (with probability 2/36) to win the game. More broadly,

say you are in state 4, meaning the Point has been set at 4. You can transition to:

- the winning state with probability 3/36 = 1/12, since this is the probability of rolling a 4;
- the losing state with probability 6/36 = 1/6, since this is the probability of rolling a 7;
- state 4 again with probability 1 - 1/12 - 1/6 = 3/4, since you will either win, lose, or have to roll again;
- any of the other states 5-10 with probability 0, since the Point has already been set.

A process like this, where transitions between states occur in steps and where the probability of transitioning to a certain state on the next step depends only on the current state and not on any past states, is called a Markov chain [2]. These can be modeled with what is known as a transition matrix, which for us looks like:

        Start   Win     Lose    s4      s5      s6      s8      s9      s10
Start   0       8/36    4/36    3/36    4/36    5/36    5/36    4/36    3/36
Win     0       1       0       0       0       0       0       0       0
Lose    0       0       1       0       0       0       0       0       0
s4      0       3/36    6/36    27/36   0       0       0       0       0
s5      0       4/36    6/36    0       26/36   0       0       0       0
s6      0       5/36    6/36    0       0       25/36   0       0       0
s8      0       5/36    6/36    0       0       0       25/36   0       0
s9      0       4/36    6/36    0       0       0       0       26/36   0
s10     0       3/36    6/36    0       0       0       0       0       27/36

Table 1: Markov State Transition Matrix
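Table 1 need not be typed by hand; it follows mechanically from the dice distribution in the previous section. The sketch below is our own construction (the helper pr, the variable points, and the name P1 are ours), with states ordered as in Table 1:

(* pr[s]: probability of rolling sum s with two six-sided dice. *)
pr[s_] := Count[Flatten[Table[i + j, {i, 6}, {j, 6}]], s]/36;

points = {4, 5, 6, 8, 9, 10};
(* Rows and columns ordered Start, Win, Lose, s4, s5, s6, s8, s9, s10. *)
startRow = Join[{0, pr[7] + pr[11], pr[2] + pr[3] + pr[12]}, pr /@ points];
pointRow[p_] := Join[{0, pr[p], pr[7]},
   Table[If[q == p, 1 - pr[p] - pr[7], 0], {q, points}]];
P1 = Join[{startRow,
    {0, 1, 0, 0, 0, 0, 0, 0, 0},
    {0, 0, 1, 0, 0, 0, 0, 0, 0}}, pointRow /@ points];
MatrixForm[P1]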

where each entry p_ij of P is the probability of the transition from state i to state j. Moreover, since there are states in our model which are absorbing, or impossible to leave (winning and losing, in which case the game is over), and it is possible to transition eventually from any state to one of the absorbing states, it is called an absorbing Markov chain [3]. Consequently, it has certain properties which will enable us to answer our questions. Reordering the transition matrix so that the non-absorbing (transient) states come first, we have:
        Start   s4      s5      s6      s8      s9      s10     Win     Lose
Start   0       1/12    1/9     5/36    5/36    1/9     1/12    2/9     1/9
s4      0       3/4     0       0       0       0       0       1/12    1/6
s5      0       0       13/18   0       0       0       0       1/9     1/6
s6      0       0       0       25/36   0       0       0       5/36    1/6
s8      0       0       0       0       25/36   0       0       5/36    1/6
s9      0       0       0       0       0       13/18   0       1/9     1/6
s10     0       0       0       0       0       0       3/4     1/12    1/6
Win     0       0       0       0       0       0       0       1       0
Lose    0       0       0       0       0       0       0       0       1

Table 2: Markov State Transition Matrix (Canonical Form)

We will call the portion of the transition matrix representing transitions among the transient states (the top left) Q, and the portion representing transitions from the transient states into the absorbing states (the top right) R. Taking

N = (I - Q)^(-1),

where I is the 7 × 7 identity matrix, we can find

t = Nc,

where c is a 7 × 1 column vector of 1s and each entry t_i of t is the expected number of dice rolls starting from state i before the game is over, as well as

B = NR,

where each entry b_ij of B is the probability of ultimately transitioning from a transient state i to an absorbing state j (winning or losing).
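These formulas can be sanity-checked by hand for a single state using first-step analysis (our own verification, not part of the original derivation). From s4, each roll ends the game with probability 9/36 (winning with probability 3/36, losing with probability 6/36) and otherwise leaves the state unchanged, so

\[ t_4 = 1 + \tfrac{27}{36}\, t_4 \;\Longrightarrow\; t_4 = \tfrac{36}{9} = 4, \qquad b_{4,\mathrm{win}} = \frac{3/36}{3/36 + 6/36} = \frac{1}{3}, \]

in agreement with the s4 entries of Tables 3 and 4 below.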

Results

We found

t:
        Expected rolls
Start   3.37576
p4      4.
p5      3.6
p6      3.27273
p8      3.27273
p9      3.6
p10     4.

Table 3: Expected number of rolls until absorption from each state

B:
        Win        Lose
Start   0.492929   0.507071
p4      0.333333   0.666667
p5      0.4        0.6
p6      0.454545   0.545455
p8      0.454545   0.545455
p9      0.4        0.6
p10     0.333333   0.666667

Table 4: Absorption probabilities from each state

From the vector t it is seen that from the start of the game, the game can be expected to last an average of about 3.4 rolls of the dice; if the Point is set at 4, it will last on average 4 more rolls; and so on. From B it is seen that the overall odds of winning are not too bad: 0.493 (exactly 244/495), certainly high enough to be interesting. It is also clear that the odds of winning are in general lower once the Point has been set, no matter what the Point is. Raising the transition matrix P to the nth power and plotting the probability of winning after exactly n rolls [2] against n, we obtain:
[Figure 1: Probability of winning after exactly x rolls. A decaying curve; vertical axis 0 to 0.007, horizontal axis 0 to 10, annotated with the integral value 0.0363702.]

which is not a very hopeful-looking graph. The odds of winning decrease drastically after the beginning of the game; therefore it would be very unwise to increase the wager on an initial Pass bet as the game continues.
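The decay can also be examined numerically: the probability of winning on exactly roll n (for n >= 2) is the (Start, Win) entry of Q^(n-1) R. A short sketch, assuming the Q and R defined in the Code section below (the name winOnRoll is ours):

(* Probability the game is won on exactly roll n, from the start state. *)
winOnRoll[n_] := (MatrixPower[Q, n - 1] . R)[[1, 1]];
N[Table[winOnRoll[n], {n, 2, 10}]]

The entries shrink roughly geometrically in n, consistent with the shape of Figure 1.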

Possible Improvements

The game of craps is potentially much more complicated than the version we have analyzed, in which a player has only the single betting option. Although this is the conventional way of playing, there are a variety of other bets that the player can make as the game continues. Some of these are based on one or two rolls and would thus be modeled as separate, smaller Markov chains, but some depend to some extent on what has happened on past rolls and thus could not be modeled as Markov chains, which require the probability of entering a state to depend only on the current state [1, 2].

References
[1] G. Tellis, A Craps Tutorial, http://crapsmath.com/craps_part_1.html
[2] M.A. Khamsi, Markov Chains, http://www.sosmath.com/matrix/markov/markov.html
[3] S. Elizalde, Math 20: Discrete Probability, Lecture 14, http://www.math.dartmouth.edu/archive/m20x06/public_html/

Code

(* Transition matrix in canonical form; states ordered
   Start, s4, s5, s6, s8, s9, s10, Win, Lose. *)
P = {{0, 1/12, 1/9, 5/36, 5/36, 1/9, 1/12, 2/9, 1/9},
     {0, 3/4, 0, 0, 0, 0, 0, 1/12, 1/6},
     {0, 0, 13/18, 0, 0, 0, 0, 1/9, 1/6},
     {0, 0, 0, 25/36, 0, 0, 0, 5/36, 1/6},
     {0, 0, 0, 0, 25/36, 0, 0, 5/36, 1/6},
     {0, 0, 0, 0, 0, 13/18, 0, 1/9, 1/6},
     {0, 0, 0, 0, 0, 0, 3/4, 1/12, 1/6},
     {0, 0, 0, 0, 0, 0, 0, 1, 0},
     {0, 0, 0, 0, 0, 0, 0, 0, 1}};

(* Transient-to-transient block Q and transient-to-absorbing block R. *)
Q = P[[1 ;; 7, 1 ;; 7]];
R = P[[1 ;; 7, 8 ;; 9]];

(* Fundamental matrix N = (I - Q)^(-1). *)
Fund = Inverse[IdentityMatrix[7] - Q];

(* Expected rolls until absorption: t = N c. *)
ones = {{1}, {1}, {1}, {1}, {1}, {1}, {1}};
Expected = Fund . ones;

(* h_ij: probability of ever visiting transient state j from state i. *)
H = (Fund - IdentityMatrix[7]) . Inverse[DiagonalMatrix[Diagonal[Fund]]];

(* Absorption probabilities: B = N R. *)
B = Fund . R;

MatrixForm[P]
MatrixForm[Q]
N[MatrixForm[Fund]]
N[MatrixForm[Expected]]
N[MatrixForm[H]]
N[MatrixForm[B]]

(* Probability of winning on the (k+1)-th roll, from Start (entry [[1,1]])
   and from s4 (entry [[2,1]]); plotted against k and integrated over k. *)
Plot[(MatrixPower[Q, k] . R)[[1, 1]], {k, 1, 10}]
Plot[(MatrixPower[Q, k] . R)[[2, 1]], {k, 1, 20}]
N[Integrate[(MatrixPower[Q, k] . R)[[1, 1]], {k, 1, Infinity}]]
N[Integrate[(MatrixPower[Q, k] . R)[[2, 1]], {k, 1, Infinity}]]
