
Note Set 8 Numerical Game Theory

8.1 Overview

In this note set, we will cover a number of techniques for computing numerical


solutions to games. We will consider both discrete strategy spaces and continuous
strategy spaces. Games with discrete strategy spaces will typically only admit mixed-strategy equilibria, so all the algorithms we will consider will focus on computing mixed-strategy equilibria. The easiest case consists of finite two-player zero-sum games. We
will also consider general two-player games and games with more than two players. The
algorithms we will consider will mostly borrow from the theory of constrained
optimization.
For games with continuous strategy spaces, it is more reasonable to expect that a
pure-strategy equilibrium exists. The algorithms we consider will most often borrow from
the theory of nonlinear equations.

8.2 Finite Two-Player Games

Consider the problem of computing the mixed strategy equilibrium of a finite two-player game. These games are sometimes called bi-matrix games because the game can be characterized by two matrices. Let player 1's utility be given by A_{i,j} and player two's utility be given by B_{i,j}.
A Nash equilibrium is a pair of probability distributions, (p*, q*), such that

p* = argmax_p p^T A q*  such that  p_i ≥ 0, Σ_i p_i = 1

q* = argmax_q p*^T B q  such that  q_j ≥ 0, Σ_j q_j = 1

We can obtain first-order conditions for each of these problems by forming the Lagrangians,

L_1 = p^T A q* - α(e^T p - 1) + λ^T p
L_2 = p*^T B q - β(e^T q - 1) + μ^T q

The first-order conditions will yield

Aq - αe + λ = 0 ,   B^T p - βe + μ = 0

e^T p = 1 ,   e^T q = 1

λ_i p_i = 0 ,   μ_j q_j = 0

p ≥ 0, λ ≥ 0, q ≥ 0, μ ≥ 0

Hence, finding all mixed strategy equilibria of a game (A, B) involves finding all solutions (p, q, α, β, λ, μ) to the system of nonlinear equations and inequalities above. Notice that the problem above is linear except for the quadratic terms λ_i p_i = 0 and μ_j q_j = 0.
A Linear Complementarity Problem (LCP) is the problem of finding vectors (w, z) that satisfy the following set of equalities and inequalities,

w - Mz = q ,   w ≥ 0 ,   z ≥ 0 ,   w_i z_i = 0 for all i

We can rewrite the conditions for a bi-matrix game as a Linear Complementarity Problem.
Bi-matrix games can have any number of equilibria. A number of algorithms exist for computing a single equilibrium. These include the Lemke-Howson algorithm (which behaves much like the simplex method) as well as newly developed interior point methods. It is rather hard to get hold of code for solving an LCP, and almost impossible to get hold of an interior point method. However, methods for finding a single equilibrium are not that useful anyway.
An alternative approach is to compute all solutions via enumeration. In particular, we can enumerate all possible solutions by choosing whether w_i = 0 or z_i = 0 for each i. For each such choice of coordinates to set to zero, we can solve for the remaining coordinates using w - Mz = q. Finally, we can check whether these coordinates satisfy w ≥ 0 and z ≥ 0. This approach will work (and is implemented by Gambit), but is extremely computationally intensive.
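To make the enumeration idea concrete, here is a minimal brute-force sketch in Python (with NumPy) that enumerates support pairs rather than (w, z) sign patterns, which is an equivalent formulation for bimatrix games. The 2x2 payoff matrices are purely illustrative, and this is a teaching sketch rather than production code; for serious work, use a package such as Gambit.

```python
import itertools
import numpy as np

def supports(k):
    """All nonempty subsets of {0, ..., k-1}."""
    for r in range(1, k + 1):
        yield from itertools.combinations(range(k), r)

def solve_indifference(U, rows, cols):
    """Find a distribution x on `cols` making every row in `rows` of the
    payoff matrix U earn the same payoff, or None if impossible."""
    k = len(cols)
    M = np.zeros((len(rows) + 1, k + 1))
    M[:len(rows), :k] = U[np.ix_(rows, cols)]
    M[:len(rows), k] = -1.0            # unknown common payoff
    M[len(rows), :k] = 1.0             # probabilities sum to one
    rhs = np.zeros(len(rows) + 1)
    rhs[-1] = 1.0
    sol = np.linalg.lstsq(M, rhs, rcond=None)[0]
    if np.linalg.norm(M @ sol - rhs) > 1e-8:
        return None                    # inconsistent support pair
    x = np.zeros(U.shape[1])
    x[list(cols)] = sol[:k]
    return x if (x > -1e-9).all() else None

def support_equilibria(A, B, tol=1e-8):
    """All mixed equilibria of the bimatrix game (A, B) by brute-force
    support enumeration (exponential in the number of strategies)."""
    m, n = A.shape
    found = []
    for S1 in supports(m):
        for S2 in supports(n):
            p = solve_indifference(B.T, S2, S1)  # player 2 indifferent on S2
            q = solve_indifference(A, S1, S2)    # player 1 indifferent on S1
            if p is None or q is None:
                continue
            # check that no pure deviation is profitable for either player
            if (A @ q).max() <= p @ A @ q + tol and (B.T @ p).max() <= p @ B @ q + tol:
                found.append((p, q))
    return found

A = np.array([[1.0, -1.0], [-1.0, 0.0]])
B = np.array([[-1.0, 2.0], [1.0, 0.0]])
eqs = support_equilibria(A, B)
print(eqs)   # the unique equilibrium p = (1/4, 3/4), q = (1/3, 2/3)
```

Every support pair is tested, so degenerate games may report the same equilibrium more than once; deduplication is left out for brevity.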
Fortunately, the special case of zero-sum games is much easier to solve. Zero-sum games satisfy B = -A. A finite two-player zero-sum game is sometimes called a matrix game because the game can be characterized by a single matrix. Let 1, ..., I denote the strategies of player 1 and let 1, ..., J denote the strategies of player 2. Suppose that A_{i,j} is the utility player one gets when player 1 plays strategy i and player 2 plays strategy j. Let -A_{i,j} denote player two's utility.
A mixed strategy for players 1 and 2 can be characterized by the vectors p and q. The expected utility of player 1 is given by p^T A q and the utility of player 2 can be characterized by -p^T A q. We can characterize the equilibrium by,
(1)   p* = argmax_p p^T A q*  such that  p_i ≥ 0, Σ_i p_i = 1

(2)   q* = argmin_q p*^T A q  such that  q_j ≥ 0, Σ_j q_j = 1
We can show that (p*, q*) is a mixed-strategy equilibrium of the above game if and only if the following conditions hold,

(3)   p* ∈ argmax_p min_j Σ_i A_{i,j} p_i  such that  p_i ≥ 0, Σ_i p_i = 1

(4)   q* ∈ argmin_q max_i Σ_j A_{i,j} q_j  such that  q_j ≥ 0, Σ_j q_j = 1
One can show that the solutions to (3) and (4) can be formulated as the following linear programming problems,

(5)   max_{z,p} z  such that  z - Σ_{i=1}^I A_{i,j} p_i ≤ 0 for all j ,  p_i ≥ 0 ,  Σ_i p_i = 1

(6)   min_{z,q} z  such that  z - Σ_{j=1}^J A_{i,j} q_j ≥ 0 for all i ,  q_j ≥ 0 ,  Σ_j q_j = 1
Using the duality theorem, we can show that the optimal values of these two problems are the same. This, in turn, implies that

min_j Σ_i A_{i,j} p_i* = max_i Σ_j A_{i,j} q_j*

This, finally, implies that any solution to (3) and (4) must also be a solution to (1) and (2).
Now, we know that a linear program is (i) infeasible, (ii) unbounded, or (iii) has an optimal solution. We can immediately see that this problem is neither infeasible nor unbounded. Hence, an optimal solution exists. Furthermore, we know that aside from degeneracy, linear programs admit a unique solution, and in the case of degeneracy, the solution set will be convex.

Example 1

Consider the following example of a 2 by 2 bimatrix game,

A = [  1  -1 ]        B = [ -1   2 ]
    [ -1   0 ] ,          [  1   0 ]
Notice that this is not a zero-sum game. Let us start with the textbook approach to solving this game. We start by showing that neither player will play a pure strategy. Suppose that player 1 plays row 1. In this case, player 2's best response is to play column 2, but then player 1's best response is to play row 2. Hence, there is no equilibrium where player 1 plays row 1 with probability 1. Next, suppose player 1 plays row 2. Then player 2's best response is to play column 1. But player 1's best response to column 1 is to play row 1. Hence, player 1 will not play a pure strategy in equilibrium. A similar argument shows that player 2 will not play a pure strategy. Hence, any equilibrium requires both players to mix over both their strategies.

Now, for a player to mix over both strategies, each strategy must provide him with the same utility. Player 1 gets q_1 - q_2 in expected utility from playing row 1 and -q_1 in expected utility from playing row 2. Combined with the fact that q_1 + q_2 = 1, we have that q = (1/3, 2/3). Player 2 gets -p_1 + p_2 from playing column 1 and 2p_1 from playing column 2, so his equilibrium strategy must be p = (1/4, 3/4).
Now, let us consider the algorithms for finding equilibria. We can determine that the equilibrium must satisfy the following equations,

q_1 - q_2 - α + λ_1 = 0 ,   -q_1 - α + λ_2 = 0

-p_1 + p_2 - β + μ_1 = 0 ,   2p_1 - β + μ_2 = 0

p_1 + p_2 = 1 ,   q_1 + q_2 = 1

λ_1 p_1 = 0 ,   λ_2 p_2 = 0 ,   μ_1 q_1 = 0 ,   μ_2 q_2 = 0

p_1 ≥ 0 ,   p_2 ≥ 0 ,   λ_1 ≥ 0 ,   λ_2 ≥ 0

q_1 ≥ 0 ,   q_2 ≥ 0 ,   μ_1 ≥ 0 ,   μ_2 ≥ 0

Solving the complementarity conditions amounts to selecting a subset of the coefficients of (p, q) to be zero. If we set k coefficients to zero, this requires 4 - k of the Lagrange multipliers to be zero. We then have 4 + 6 = 10 equations and 10 unknowns, which we can solve. We can do this for every subset of (p, q) (of which there are 2^4). In each case, we check whether the solutions satisfy the remaining inequalities.
Let us start by looking for a solution where none of the probabilities are zero, so that all four multipliers vanish. We have

q_1 - q_2 - α = 0 ,   -q_1 - α = 0

-p_1 + p_2 - β = 0 ,   2p_1 - β = 0

p_1 + p_2 = 1 ,   q_1 + q_2 = 1

λ_1 = 0 ,   λ_2 = 0 ,   μ_1 = 0 ,   μ_2 = 0

p_1 ≥ 0 ,   p_2 ≥ 0 ,   q_1 ≥ 0 ,   q_2 ≥ 0

We can solve the equalities to obtain p = (1/4, 3/4) and q = (1/3, 2/3), which satisfies the remaining inequalities, and hence is an equilibrium.
Next, let us try a solution of the form

λ_1 = 0 ,   λ_2 = 0 ,   μ_1 = 0 ,   q_2 = 0

With λ_1 = λ_2 = 0, the first pair of equalities forces q_1 - q_2 = -q_1, so q = (1/3, 2/3), which violates the assumption that q_2 = 0. Hence, no equilibrium has this form. If we repeated this procedure for all possibilities, we would find that no other equilibrium exists.

Example 2

Consider the following example of a 2 by 2 matrix (i.e., zero-sum) game,

A = [  1  -1 ]        B = [ -1   1 ]
    [ -1   0 ] ,          [  1   0 ]

Notice that B = -A. We can solve this game by solving the following linear programming problems,
(5)   max_{z,p} z  such that  z - p_1 + p_2 ≤ 0 ,  z + p_1 ≤ 0 ,  p_1 ≥ 0 ,  p_2 ≥ 0 ,  p_1 + p_2 = 1

(6)   min_{z,q} z  such that  z - q_1 + q_2 ≥ 0 ,  z + q_1 ≥ 0 ,  q_1 ≥ 0 ,  q_2 ≥ 0 ,  q_1 + q_2 = 1

Notice that, because A is symmetric, these two problems yield the same solution, so we need only solve one of them. We can reduce the first problem to

max_{z,p_1} z  such that  z - 2p_1 + 1 ≤ 0 ,  z + p_1 ≤ 0 ,  p_1 ≥ 0 ,  p_1 ≤ 1

We can represent this graphically as the region in the (p_1, z) plane satisfying z - 2p_1 + 1 ≤ 0, z + p_1 ≤ 0, and 0 ≤ p_1 ≤ 1, over which we maximize z.

We see that the solution will solve z - 2p_1 + 1 = 0 and z + p_1 = 0, hence we have an equilibrium of p = (1/3, 2/3) with value z = -1/3. We can also see that the equilibrium will be unique.
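This reduced problem can be handed to any LP solver. A minimal sketch using `scipy.optimize.linprog` (one convenient choice of solver, not the only one), with variables x = (z, p_1, p_2):

```python
import numpy as np
from scipy.optimize import linprog

# Solve the matrix game of Example 2: A = [[1, -1], [-1, 0]].
# Variables x = (z, p1, p2); we maximize z, i.e. minimize -z.
A = np.array([[1.0, -1.0], [-1.0, 0.0]])
c = np.array([-1.0, 0.0, 0.0])
# Column-wise constraints z - sum_i A[i,j] p_i <= 0, written as A_ub x <= 0.
A_ub = np.hstack([np.ones((2, 1)), -A.T])
b_ub = np.zeros(2)
A_eq = np.array([[0.0, 1.0, 1.0]])    # probabilities sum to one
b_eq = np.array([1.0])
bounds = [(None, None), (0, None), (0, None)]   # z is free, p >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
z, p = res.x[0], res.x[1:]
print(p, z)   # p is approximately (1/3, 2/3); the value of the game is -1/3
```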

8.3 Finite Multi-Player Games

Consider a finite N-person game. Player n has the pure-strategy set {1, 2, ..., J_n}. Let Δ_n denote the J_n-dimensional unit simplex. We let p_n ∈ Δ_n denote a mixed strategy of player n. We define A_n to be an N-dimensional tensor representing player n's utility. We define a player's expected utility by

u_n(p_1, p_2, ..., p_N) = A_n[p_1 ⊗ p_2 ⊗ ... ⊗ p_N]

We let s_{n,j} ∈ Δ_n denote the j-th pure strategy of player n, i.e., s_{n,j} = [0, ..., 0, 1, 0, ..., 0] with the 1 in the j-th position. We let z_{n,j}(p) = u_n(s_{n,j}, p_{-n}) - u_n(p) and g_{n,j}(p) = max{z_{n,j}(p), 0}.

We can define a Nash equilibrium as follows.

Definition (Mixed Strategy Nash Equilibrium of a Finite N-Player Game): p* is a mixed strategy Nash equilibrium if u_n(p_n*; p_{-n}*) ≥ u_n(p_n; p_{-n}*) for all p_n ∈ Δ_n.

As McLennan and McKelvey (1996) show, the Nash equilibrium can be characterized in the following ways.

Theorem 8.1 (Characterization of Mixed Strategy Nash Equilibria in a Finite N-Player Game): p* is a mixed strategy Nash equilibrium if and only if any of the following conditions hold.

(i) p* is contained in the semialgebraic set z(p) ≤ 0, p_n ∈ Δ_n.

(ii) p* is a solution to the nonlinear complementarity problem z(p) ≤ 0, p_{n,j} z_{n,j}(p) = 0, p_n ∈ Δ_n.

(iii) p* is a global minimizer of the Lyapunov function v(p) = Σ_{n=1}^N Σ_{j=1}^{J_n} g_{n,j}(p)².

Proof: We first show that NE ⟺ (i). We must show that u_n(p_n*; p_{-n}*) ≥ u_n(p_n; p_{-n}*) for all p_n ∈ Δ_n if and only if u_n(p_n*; p_{-n}*) ≥ u_n(p_n; p_{-n}*) for all p_n ∈ {s_{n,1}, s_{n,2}, ..., s_{n,J_n}}. The forward direction is obvious since {s_{n,1}, s_{n,2}, ..., s_{n,J_n}} ⊂ Δ_n. For the reverse direction, we must simply show that if there is a mixed strategy that improves player n's utility over p_n*, then there exists a pure strategy that has this effect. Since u_n(p_n; p_{-n}*) is linear in p_n, we can write u_n(p_n; p_{-n}*) = c′p_n. Suppose that

u_n(p_n; p_{-n}*) - u_n(p_n*; p_{-n}*) = c′(p_n - p_n*) > 0.

Since c′p_n is a weighted average of the coordinates c_j, there must exist a j such that c_j ≥ c′p_n > c′p_n*. For that j, c′(s_{n,j} - p_n*) > 0, proving that a profitable deviation exists in pure strategies.


We next show that (i) ⟹ (ii) by showing that z(p) ≤ 0 and p_n ∈ Δ_n imply that p_{n,j} z_{n,j}(p) = 0. Let J̃_n be the set of j such that z_{n,j}(p) = 0. Then we have

u_n(s_{n,j}, p_{-n}) - u_n(p) = 0 for j ∈ J̃_n
u_n(s_{n,j}, p_{-n}) - u_n(p) < 0 for j ∉ J̃_n

Using linearity, we can write

c′(s_{n,j} - p_n) = 0 for j ∈ J̃_n
c′(s_{n,j} - p_n) < 0 for j ∉ J̃_n

Since c′s_{n,j} = c′p_n for j ∈ J̃_n, we can write

c′(s_{n,j} - s_{n,k}) < 0 for j ∉ J̃_n, k ∈ J̃_n

This implies that c_j < c_k for j ∉ J̃_n, k ∈ J̃_n. Now, if p_{n,j} > 0 for some j ∉ J̃_n, then we can improve player n's utility by setting p_{n,j} = 0 and moving this probability to some strategy k ∈ J̃_n, by the fact that c_k > c_j. This contradicts z(p) ≤ 0, so p_{n,j} = 0 whenever z_{n,j}(p) < 0, proving the result.

To show that (i) ⟺ (iii), notice that v(p*) = 0 if and only if g_{n,j}(p*) = 0 for all n and 1 ≤ j ≤ J_n. Since g_{n,j}(p) = z_{n,j}(p) when z_{n,j}(p) ≥ 0 and g_{n,j}(p) = 0 otherwise, this is equivalent to z(p*) ≤ 0, proving the result.

Characterizations (ii) and (iii) each lead to important solution algorithms for this
problem. A nonlinear complementarity problem can be solved using direct methods. One
such method is the SLCP (Sequential Linear Complementarity Problem) algorithm. This
is an algorithm that will find a single solution to the NCP. The algorithm is quite similar
to the SQP algorithm for nonlinear programming.
Alternatively, we can solve (ii) by enumerating the possible solutions, solving the resulting systems of polynomial equations, and checking whether these solutions satisfy the remaining inequality constraints. Methods exist to find all solutions of a system of polynomial equations. Hence, we can use this method to find all the mixed-strategy equilibria of a multiplayer game.
Alternatively, we can find a single mixed strategy equilibrium by solving the
nonlinear programming problem described in (iii). We can find multiple mixed strategy
equilibria by using different starting points.
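A minimal sketch of approach (iii) for the bimatrix game of Example 1 follows. To keep the minimization unconstrained, each simplex is parametrized by a softmax; this parametrization is an implementation choice of ours, and it cannot reach the boundary of the simplex, which is harmless here because the equilibrium is interior.

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, -1.0], [-1.0, 0.0]])   # player 1 payoffs (Example 1)
B = np.array([[-1.0, 2.0], [1.0, 0.0]])    # player 2 payoffs

def softmax(w):
    e = np.exp(w - w.max())
    return e / e.sum()

def v(theta):
    # Lyapunov function v(p) = sum_n sum_j max(z_{n,j}(p), 0)^2, with each
    # mixed strategy parametrized by a softmax so the search is unconstrained.
    p, q = softmax(theta[:2]), softmax(theta[2:])
    z1 = A @ q - p @ A @ q       # gains to player 1's pure deviations
    z2 = B.T @ p - p @ B @ q     # gains to player 2's pure deviations
    g = np.maximum(np.concatenate([z1, z2]), 0.0)
    return float((g ** 2).sum())

res = minimize(v, x0=np.zeros(4), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-10, "fatol": 1e-14})
p, q = softmax(res.x[:2]), softmax(res.x[2:])
print(p, q)   # approximately (1/4, 3/4) and (1/3, 2/3), where v = 0
```

Different starting points x0 would be used to search for multiple equilibria, as described above.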


8.4 Continuous Games

In this section, we will consider continuous games. Let j = 1, 2, ..., J denote the players in the game. Let X_j ⊆ ℝ^{D_j} denote the strategy space of player j, where D_j denotes the dimension of player j's strategy space. Let x_j ∈ X_j denote a strategy played by player j. Let X = X_1 × ... × X_J denote the strategy space for all the players and let x = (x_1, ..., x_J) ∈ X denote a vector of strategies for all the players, where D = Σ_j D_j = dim(X). Let X_{-j} = X_1 × ... × X_{j-1} × X_{j+1} × ... × X_J denote the strategy spaces of all players except j. Let x_{-j} = (x_1, ..., x_{j-1}, x_{j+1}, ..., x_J) ∈ X_{-j} denote a vector of strategies for all other players.
We let u_j(x_1, ..., x_J) denote the utility received by player j if the players play strategies (x_1, ..., x_J) ∈ X. We also denote U_j(x_j; x_{-j}) = u_j(x_1, ..., x_J). We define the best response functions as BR_j(x_{-j}) = argmax_{x_j ∈ X_j} U_j(x_j; x_{-j}), where BR_j(x_{-j}) is understood to be a set-valued correspondence. We consider the following two equivalent definitions of a pure strategy Nash equilibrium.

Definition of Nash Equilibrium 1: x* = (x_1*, ..., x_J*) is a Nash equilibrium if for each j ∈ {1, ..., J}, we have u_j(x_1*, ..., x_{j-1}*, x_j*, x_{j+1}*, ..., x_J*) ≥ u_j(x_1*, ..., x_{j-1}*, x_j, x_{j+1}*, ..., x_J*) for all x_j ∈ X_j.

Definition of Nash Equilibrium 2: x* = (x_1*, ..., x_J*) is a Nash equilibrium if for each j ∈ {1, ..., J}, we have x_j* ∈ BR_j(x_{-j}*).

There are few general conditions that ensure the existence of a Nash equilibrium. The following theorem gives one such result.

Theorem 8.2 (Debreu-Fan-Glicksberg): Suppose that each player's strategy set is nonempty, compact, and convex, and that each player's utility function is continuous and quasi-concave in his own strategy. Then a pure strategy Nash equilibrium exists.

To guarantee the uniqueness of an equilibrium, we usually need a stronger condition: each player's utility function must be strictly concave in his own strategy.

Response Iteration

Consider a current point x^k = (x_1^k, ..., x_J^k). Consider a vector of points x^{k+1} = (x_1^{k+1}, ..., x_J^{k+1}) where x_j^{k+1} ∈ BR_j(x_{-j}^k). Then in iteration k + 1, each player j is playing a best response to the strategies x_{-j}^k. It should be clear that if x_j^k = x_j^{k+1} for all j, then x^k is an equilibrium. Hence, this suggests iterating this approach until convergence. This subsection will consider generalizations of this approach.
This approach can be generalized in a number of ways. First, it is possible to consider dampened iterations. For example, we could consider x_j^{k+1} = λ BR_j(x_{-j}^k) + (1 - λ) x_j^k for some λ ∈ (0, 1]. In addition, we could consider updating between iterations. Finally, we could consider steps that increase utility, but do not necessarily maximize utility at each step.
In the event that D_j = 1 for each j, we recommend the following strategy. Employ Gauss-Jacobi iterations. At each iteration, solve for the optimal value using Brent's method. In the multi-dimensional case, we cannot employ Brent's method. Instead, we suggest applying the Nelder-Mead simplex method. In the event that the game is smooth enough that Newton's method will be effective, we would recommend applying the approach in the next section.
We define a response function for player j as a function R_j(x_j; x_{-j}) that satisfies U_j(R_j(x_j; x_{-j}); x_{-j}) > U_j(x_j; x_{-j}) if such a point exists and R_j(x_j; x_{-j}) = x_j otherwise. Notice that a response function always exists. Notice furthermore that the best response function can be used to yield a response function,

R_j(x_j; x_{-j}) = x_j if x_j ∈ BR_j(x_{-j}), and BR_j(x_{-j}) otherwise,

where BR_j(x_{-j}) is understood to mean any element of BR_j(x_{-j}).

Algorithm 8.1:
1. Let x^k denote the current point at iteration k. Set x_curr = x^k.
2. For players j = 1 through J:
   a. Calculate a point for player j that yields better utility for player j than x_{curr,j}, given that the other players play x_{curr,-j}, and set x_{new,j} equal to this point. Set x_j^{k+1} = λ x_{new,j} + (1 - λ) x_j^k.
   b. If using the Gauss-Seidel option, then update x_{curr,j} = x_j^{k+1}.
3. If x^{k+1} is sufficiently close to x^k, then stop successfully.
4. Check for cycling. If x^{k+1} is close to x^{k-p} for some p = 1, 2, ..., then reduce λ and continue.
5. If λ is too small, then stop unsuccessfully.
6. Go back to step 1.
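The one-dimensional recipe above can be sketched as follows for an illustrative two-player Cournot game (the game itself is an assumption introduced here for illustration, not a model from the notes); Brent's method is applied through `scipy.optimize.minimize_scalar`.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def U(j, x):
    # Illustrative Cournot game: player j's profit with inverse demand
    # 1 - x1 - x2 and zero production cost.
    return x[j] * (1.0 - x[0] - x[1])

def best_response(j, x, lo=0.0, hi=1.0):
    # One-dimensional strategies, so a bounded Brent search applies.
    xx = np.array(x, dtype=float)
    def neg(u):
        xx[j] = u
        return -U(j, xx)
    return minimize_scalar(neg, bounds=(lo, hi), method="bounded",
                           options={"xatol": 1e-10}).x

def response_iteration(x0, lam=0.5, tol=1e-7, max_iter=1000, seidel=True):
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        src = x if seidel else x_old   # Gauss-Seidel vs. Gauss-Jacobi update
        for j in range(len(x)):
            x[j] = lam * best_response(j, src) + (1.0 - lam) * x_old[j]
        if np.max(np.abs(x - x_old)) < tol:
            return x
    raise RuntimeError("no convergence")

print(response_iteration([0.9, 0.1]))   # converges to roughly (1/3, 1/3)
```

The cycling check and the shrinking of λ from the algorithm are omitted here for brevity; this simple game converges without them.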

Joint First-Order Condition Solution

An alternative approach to solving for an equilibrium proceeds from the following observation. For any interior equilibrium x* ∈ int(X) such that u_j(x) is differentiable in a neighborhood of x*, the following conditions are necessary for equilibrium,

∇_{x_j} U_j(x_j*; x_{-j}*) = 0 for j = 1, ..., J

This forms a system of D nonlinear equations. We can solve these equations using any approach to solving nonlinear equations (e.g., Newton's method, or Nelder-Mead applied to the squared residuals).
Notice that this method does not guarantee that the point we have converged to is an equilibrium (each player's current strategy must be a global maximum of his response problem). In fact, the current point may not even be a local equilibrium. We have found that it is very useful to have a tool for checking whether any proposed equilibrium is in fact one. In order to verify an equilibrium, we suggest a grid search approach. Checking for a local equilibrium is easier: we simply need to check that each player's Hessian (of his own utility with respect to his own strategy) is negative definite.
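For the same kind of illustrative Cournot game (U_j = x_j(1 - x_1 - x_2), an assumption introduced only for illustration), the joint first-order conditions are linear and a generic root-finder solves them directly:

```python
import numpy as np
from scipy.optimize import root

def foc(x):
    # Stacked first-order conditions dU_j/dx_j = 1 - 2*x_j - x_{-j} = 0
    # for the illustrative Cournot game U_j = x_j * (1 - x1 - x2).
    return np.array([1.0 - 2.0 * x[0] - x[1],
                     1.0 - 2.0 * x[1] - x[0]])

sol = root(foc, x0=[0.5, 0.5])
print(sol.x)   # (1/3, 1/3)

# Checking for a local equilibrium: each player's own-strategy Hessian
# here is d^2 U_j / dx_j^2 = -2 < 0, i.e., negative definite.
```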


8.5 Two-Candidate Spatial Competition with Policy-Motivated Candidates

Here, we consider a model of two-candidate spatial competition in which one candidate has a valence advantage, candidates are uncertain about the position of the median voter, and candidates are policy-motivated. This model follows the work of Groseclose (2001). Let F_v(v) = G_v(v - μ) be the distribution of the voters' ideal points, where G_v is known to the candidates, but μ is only known to follow a distribution F_μ, where F_μ(0) = 1/2. We assume that both G_v and F_μ have full support.
The candidates take positions y_L ∈ ℝ and y_R ∈ ℝ. Voters have utility functions of the form

u_L(y; v) = θ - (y - v)² ,   u_R(y; v) = -(y - v)²

where θ represents candidate L's valence advantage. Without loss of generality, we assume that θ > 0. Now, notice that u_L(y_L; v) ≥ u_R(y_R; v) if and only if θ + y_R² - y_L² ≥ 2(y_R - y_L)v. If y_L < y_R, then we can compute the share of voters who will vote for each candidate,

s_L(y_L, y_R; μ) = F_v( (θ + y_R² - y_L²) / (2(y_R - y_L)) ) = G_v( (θ + y_R² - y_L²) / (2(y_R - y_L)) - μ )

s_R(y_L, y_R; μ) = 1 - G_v( (θ + y_R² - y_L²) / (2(y_R - y_L)) - μ )

Alternatively, when y_L > y_R, we have

s_L(y_L, y_R; μ) = 1 - G_v( (θ + y_R² - y_L²) / (2(y_R - y_L)) - μ )

s_R(y_L, y_R; μ) = G_v( (θ + y_R² - y_L²) / (2(y_R - y_L)) - μ )

When y_L = y_R, s_L(y_L, y_R; μ) = 1 and s_R(y_L, y_R; μ) = 0.
Candidate L will win the election when s_L(y_L, y_R; μ) ≥ 1/2. For y_L < y_R, the probability of this event is given by

Pr( s_L(y_L, y_R; μ) ≥ 1/2 ) = Pr( G_v( (θ + y_R² - y_L²)/(2(y_R - y_L)) - μ ) ≥ 1/2 )

= Pr( (θ + y_R² - y_L²)/(2(y_R - y_L)) - μ ≥ G_v^{-1}(1/2) )

= Pr( μ ≤ (θ + y_R² - y_L²)/(2(y_R - y_L)) ) = F_μ( (θ + y_R² - y_L²)/(2(y_R - y_L)) )

where we normalize G_v to have median zero, so that G_v^{-1}(1/2) = 0.

We assume that candidates care about both policy and holding office. Let x denote the policy outcome and let w_k = 1 if candidate k wins office. We have

U_L(x, w_L) = γ w_L - (x - q_L)² ,   U_R(x, w_R) = γ w_R - (x - q_R)²

where q_L ≤ q_R and γ ≥ 0 is the benefit of holding office. The candidates' expected utilities from the game are given by

V_L(y_L, y_R) = F_μ(Ψ) ( γ - (y_L - q_L)² ) + (1 - F_μ(Ψ)) ( -(y_R - q_L)² )

V_R(y_L, y_R) = F_μ(Ψ) ( -(y_L - q_R)² ) + (1 - F_μ(Ψ)) ( γ - (y_R - q_R)² )

where Ψ = (θ + y_R² - y_L²)/(2(y_R - y_L)) and y_L < y_R. A similar expression can be derived for the other case.
We can compute the equilibrium of this game using the various strategies we have described. Using response iteration, we simply need to supply the algorithm with the utility functions. We will then set

G_v(v) = Φ(v/σ)

where Φ is the standard normal CDF. We must choose the model parameters θ, σ, q_L, q_R, and the office benefit γ.
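A hedged sketch of dampened response iteration for this model follows. The parameter values and the choice of a normal F_μ are assumptions made only for illustration, and the routine returns its final iterate whether or not it has converged.

```python
import math
from scipy.optimize import minimize_scalar

# Illustrative parameter values (assumptions, not from the notes):
THETA, GAMMA = 0.2, 0.5        # valence advantage, office benefit
QL, QR = -0.5, 0.5             # candidate ideal points
SIG_MU = 1.0                   # scale of the median shock mu ~ N(0, SIG_MU^2)

def F_mu(x):
    # normal CDF, so F_mu(0) = 1/2 as the model requires
    return 0.5 * (1.0 + math.erf(x / (SIG_MU * math.sqrt(2.0))))

def win_prob_L(yL, yR):
    if yL == yR:
        return 1.0             # the valence-advantaged candidate wins ties
    psi = (THETA + yR**2 - yL**2) / (2.0 * (yR - yL))
    return F_mu(psi) if yL < yR else 1.0 - F_mu(psi)

def V(candidate, yL, yR):
    pL = win_prob_L(yL, yR)
    if candidate == "L":
        return pL * (GAMMA - (yL - QL)**2) + (1.0 - pL) * (-(yR - QL)**2)
    return pL * (-(yL - QR)**2) + (1.0 - pL) * (GAMMA - (yR - QR)**2)

def solve(y0=(-0.1, 0.1), lam=0.5, tol=1e-9, max_iter=500):
    # dampened best-response iteration; may stop at max_iter unconverged
    yL, yR = y0
    for _ in range(max_iter):
        bL = minimize_scalar(lambda y: -V("L", y, yR),
                             bounds=(-2.0, 2.0), method="bounded").x
        bR = minimize_scalar(lambda y: -V("R", yL, y),
                             bounds=(-2.0, 2.0), method="bounded").x
        yL_new = lam * bL + (1.0 - lam) * yL
        yR_new = lam * bR + (1.0 - lam) * yR
        if abs(yL_new - yL) + abs(yR_new - yR) < tol:
            return yL_new, yR_new
        yL, yR = yL_new, yR_new
    return yL, yR

yL, yR = solve()
print(yL, yR)   # final iterate (an approximate equilibrium if converged)
```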

8.6 Multi-Candidate Spatial Competition with Vote-Maximizing Candidates

Consider the following situation. There are J candidates running for office, who propose policies y_j ∈ ℝ^D. Voters are characterized by their ideal points v. Candidates are characterized by their mean valences λ_j. The utility that a voter with ideal point v receives from voting for candidate j is given by

u_j(y_j; v) = λ_j - α(y_j - v)² + ε_j

where ε_j has the logistic distribution. We can compute the probability that a voter with ideal point v votes for candidate j,

q_j(v, y, λ) = e^{λ_j - α(y_j - v)²} / Σ_{k=1}^J e^{λ_k - α(y_k - v)²}

The vote share of candidate j is given by

s_j(y, λ) = ∫_v [ e^{λ_j - α(y_j - v)²} / Σ_{k=1}^J e^{λ_k - α(y_k - v)²} ] dF_v(v)

Taking first-order conditions yields

∂s_j(y, λ)/∂y_j = 2α ∫_v (v - y_j) [ e^{λ_j - α(y_j - v)²} Σ_{k≠j} e^{λ_k - α(y_k - v)²} / ( Σ_{k=1}^J e^{λ_k - α(y_k - v)²} )² ] dF_v(v) = 0

We can write these as

y_j = ∫_v v q_j(v, y, λ)(1 - q_j(v, y, λ)) dF_v(v) / ∫_v q_j(v, y, λ)(1 - q_j(v, y, λ)) dF_v(v)

Notice that this is a fixed point problem, which we can potentially solve using fixed point iterations.
Now, consider a convergent equilibrium, with y_1 = ... = y_J = y*. In this case, we have q_j(v, y*, λ) = e^{λ_j} / Σ_{k=1}^J e^{λ_k}, which does not depend on v, so the first-order conditions imply that y* = ∫_v v dF_v(v). The first-order conditions hold when each candidate locates at the mean voter position. Consider the second-order condition as well.
Differentiating ∂s_j/∂y_j once more with respect to y_j and evaluating at y*, we have

∂²s_j(y*, λ)/∂y_j² = -8 α² q_j² (1 - q_j) Var(v)

Since this expression is negative, we know that there exists a local equilibrium at y*.


Now, notice that computing the candidates utility function requires computing a

D -dimensional integral. We will compute this integral using simulation methods, i.e.,

e j ( y j vn )2
s j ( y j , j ) = N1 J

k ( yk vn )2
n =1
e

k =1

The integrals in the fixed point equations can be computed in a similar way,

y_j = [ (1/N) Σ_{n=1}^N v_n q_j(v_n, y, λ)(1 - q_j(v_n, y, λ)) ] / [ (1/N) Σ_{n=1}^N q_j(v_n, y, λ)(1 - q_j(v_n, y, λ)) ]

We will assume throughout that v ~ N(0, I) and we will vary the mean valence parameters (λ_1, λ_2, λ_3, λ_4). Merrill and Adams (2001) argue that when the fixed point mapping above is a contraction mapping, there exists a unique solution to the first-order conditions, which will satisfy the conditions of being an equilibrium.
Let us consider an example of two-dimensional spatial competition with 4 parties. Let us assume that v ~ N(0, I). We start with the case where λ_1 = λ_2 = λ_3 = λ_4 = 0.
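The simulated fixed point iteration can be sketched as follows. The spatial weight α = 1 and the number of simulation draws are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, J, D = 2000, 4, 2
V = rng.standard_normal((N, D))      # simulated voter ideal points, v ~ N(0, I)
lam = np.zeros(J)                    # mean valences lambda_1, ..., lambda_4
alpha = 1.0                          # spatial weight (an assumption)

def choice_probs(Y):
    # q_j(v) for every simulated voter: softmax of lambda_j - alpha*||y_j - v||^2
    d2 = ((V[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)   # N x J distances
    u = lam[None, :] - alpha * d2
    u -= u.max(axis=1, keepdims=True)                          # numerical stability
    e = np.exp(u)
    return e / e.sum(axis=1, keepdims=True)

def fixed_point(Y0, tol=1e-10, max_iter=5000):
    Y = Y0.copy()
    for _ in range(max_iter):
        Q = choice_probs(Y)
        w = Q * (1.0 - Q)                           # weights q_j(1 - q_j)
        Y_new = (w.T @ V) / w.sum(axis=0)[:, None]  # weighted-mean update
        if np.max(np.abs(Y_new - Y)) < tol:
            return Y_new
        Y = Y_new
    return Y

Y = fixed_point(rng.standard_normal((J, D)))
print(Y)   # with equal valences, candidates typically converge toward the
           # (sample) mean voter position
```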

8.7 Mixed Strategies in Continuous Games

Consider the following multidimensional Downsian model. There exist N voters with ideal points v_n. There are two candidates who take positions in a D-dimensional policy space, y_1 ∈ ℝ^D and y_2 ∈ ℝ^D. Voter n has a utility function given by u_n(y_j; v_n) = -‖y_j - v_n‖². A voter votes for the candidate he prefers with probability 1 and votes for each candidate with probability one-half if he is indifferent. The probability that each candidate wins the election is given by,

P_1(y_1, y_2) = 1    if |{n : ‖y_1 - v_n‖ < ‖y_2 - v_n‖}| > |{n : ‖y_1 - v_n‖ > ‖y_2 - v_n‖}|
P_1(y_1, y_2) = 1/2  if |{n : ‖y_1 - v_n‖ < ‖y_2 - v_n‖}| = |{n : ‖y_1 - v_n‖ > ‖y_2 - v_n‖}|
P_1(y_1, y_2) = 0    if |{n : ‖y_1 - v_n‖ < ‖y_2 - v_n‖}| < |{n : ‖y_1 - v_n‖ > ‖y_2 - v_n‖}|

P_2(y_1, y_2) = 1 - P_1(y_1, y_2)
We assume that candidates would like to maximize the probability that they win
the election. Notice that this game is a zero-sum game. Hence, we can represent the game
by the function, A( y1 , y2 ) = P1 ( y1 , y2 ) . Plott (1967) considered such a game and found
that a pure strategy equilibrium existed only under unlikely conditions, when D > 1 .
However, mixed strategy equilibria are more likely to exist (Duggan, various references).

The Uncovered Set

Let X be a (finite or infinite) set of alternatives and let P represent the majority-rule social preference relation. We say that x ∈ X covers y ∈ X if the following conditions are met.

(i) xPy

(ii) P(x) ⊆ P(y)

(iii) R(x) ⊆ R(y)

where we define

P(x) = {y ∈ X : yPx}

R(x) = {y ∈ X : yRx}

We will denote the covering relation by C. The Uncovered Set is the set of maximal elements of C. That is,

UC = {x : C(x) = ∅}

where

C(x) = {y ∈ X : yCx}
The following proposition summarizes some properties of the Uncovered Set.

Proposition 1: (i) The Uncovered Set is non-empty. (ii) The Uncovered Set is unique. (iii) The support of any mixed strategy equilibrium of the multidimensional Downsian model is contained in the Uncovered Set.

For a proof of this result, see Banks, Duggan, and Le Breton (2000).
Developing an algorithm for computing the Uncovered Set when the set of alternatives is finite is relatively straightforward. We assume that we can somehow compute the majority-rule social preference relation over the alternatives. For example, when the social preference is determined by counting the votes of N voters, this step will have complexity O(NJ²). We can compute the covering relation by brute force, by looping over all triples x, y, z ∈ X, which has complexity O(J³), where J is the number of alternatives. Then, we can compute the maximal elements of the covering relation as follows.


Algorithm 8.2: Computing the Uncovered Set

1:  Set UC(i) = 1 for i = 1 to J
2:  Change = 1
3:  while (Change = 1)
4:    Change = 0
5:    for i = 1 to J, j = i+1 to J
6:      Compute x_i C x_j and x_j C x_i (if they haven't been computed already)
7:      if (x_i C x_j) then
8:        UC(j) = 0
9:        Change = 1
10:       break for
11:     else if (x_j C x_i) then
12:       UC(i) = 0
13:       Change = 1
14:       break for

In terms of worst-case complexity, the algorithm has complexity O(J³). The complexity of the algorithm will be dominated by the steps that are the most time consuming. The most time consuming steps are reading in the matrix of outcomes and computing the covering relation. The complexity of the first step will differ from application to application; for example, it may involve generating random numbers, computing integrals, or solving for the equilibrium of a model. In the case where the ideal points of a finite number of voters are drawn independently from some distribution, the theoretical complexity will be O(NJ²). The theoretical complexity of computing the covering relation will be O(J³).
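A sketch of the computation in Python: build the majority relation from voter ideal points, then apply the covering test directly. The one-dimensional demo with 5 alternatives is an assumption chosen so the answer is easy to verify (with single-peaked preferences, the alternative nearest the median voter is a Condorcet winner, and the Uncovered Set collapses to it).

```python
import numpy as np

def uncovered_set(P):
    """P[i, j] = True iff alternative i beats j under majority rule.
    x covers y iff xPy, P(x) subset of P(y), and R(x) subset of R(y)."""
    J = P.shape[0]
    R = ~P.T                          # xRy iff not yPx (weak preference)
    covered = np.zeros(J, dtype=bool)
    for x in range(J):
        for y in range(J):
            if x == y or not P[x, y]:
                continue
            # everything beating x beats y; everything weakly above x is
            # weakly above y
            if np.all(P[:, y] | ~P[:, x]) and np.all(R[:, y] | ~R[:, x]):
                covered[y] = True     # x covers y
    return np.where(~covered)[0]

# Demo: majority preferences of 101 one-dimensional voters over 5 alternatives.
rng = np.random.default_rng(1)
ideal = rng.uniform(size=101)
alts = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
dist = np.abs(ideal[:, None] - alts[None, :])            # voter-alternative distances
votes = (dist[:, :, None] < dist[:, None, :]).sum(axis=0)
P = votes > votes.T
print(uncovered_set(P))   # the single alternative closest to the median voter
```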

Computing the Mixed Strategy Equilibrium

The easiest way to solve such a game is to approximate that game using a finite
one. This problem then reduces to a finite two-player zero-sum game, which we can solve
using linear programming. To simplify the computational complexity of the problem, we
can first compute the Uncovered Set. This allows us to reduce the size of the linear
programming problem we must ultimately solve.
We will approximate the solution to the above game using the finite linear program

max_{z,p} z  such that  z - Σ_{j=1}^J p_j A_{jk} ≤ 0 for k = 1, 2, ..., J ,  Σ_{j=1}^J p_j = 1 ,  p_j ≥ 0 for j = 1, 2, ..., J

Let us consider the canonical linear programming problem,

max c′x  such that  A_eq x = b_eq ,  A_neq x ≤ b_neq ,  and  l ≤ x ≤ u

We set x = (z, p_1, ..., p_J), l = (-∞, 0, ..., 0), u = (∞, ..., ∞), A_eq = [0, 1, ..., 1], b_eq = [1], A_neq = [e, -A′], b_neq = 0, and c = (1, 0, ..., 0).

It is worth discussing the existence and uniqueness of solutions. The Fundamental Theorem of Linear Programming tells us that either (i) an optimal solution exists, (ii) no feasible solution exists, or (iii) the problem is unbounded. We can rule out (iii) since the p-part of the feasible region is a subset of the unit simplex and z is bounded above by the constraints. We can rule out (ii) since z = -1 and p = (1, 0, ..., 0) is a feasible solution. Thus, we know there must exist a solution to the finite linear program.


Uniqueness is not guaranteed for solutions to linear programs, but can easily be
checked. If the solution is not unique, then the set of solutions can be characterized by as
set of linear inequalities. Two-player zero sum games typically do not admit unique
solutions, and this is what we find. One of the primary objectives of this paper, however,
is to limit the set of political outcomes. For this purpose, it is sufficient simply find the
support of all mixed strategy equilibria.
As an example, let us take N = 101 voters whose ideal points are drawn from the triangle distribution. We consider a grid of J points by J points on [0,1] × [0,1]. We will consider J = 21, 41, 81. We present the results below.

Figure 8.1 Multidimensional Downsian Competition (J = 21). [Plot over [0,1] × [0,1] showing the voter ideal points, the Uncovered Set, and the support of the mixed strategy.]

Figure 8.2 Multidimensional Downsian Competition (J = 41). [Plot over [0,1] × [0,1] showing the voter ideal points, the Uncovered Set, and the support of the mixed strategies.]

Figure 8.3 Multidimensional Downsian Competition (J = 81). [Plot over [0,1] × [0,1] showing the voter ideal points, the Uncovered Set, and the support of the mixed strategies.]

8.8 Some Games of Incomplete Information


In this subsection, we apply the techniques we have learned to solve a number of games of incomplete information.

Example 1: Civil Courts

Consider the following game. A plaintiff suffers damages δ ~ F_δ, where δ is known only to the plaintiff. A defendant can make a settlement offer s ≥ 0. In the event of a settlement, the plaintiff receives utility s and the defendant receives utility -s. If the settlement is not taken, the case goes to trial, in which case the jury awards γδ to the plaintiff, where γ ~ F_γ. We assume that the defendant is a large company that is familiar with the jury pool, and thus knows γ. The plaintiff only knows the common distribution F_γ.
We let a(s; δ) ∈ {0, 1} denote the strategy of the plaintiff, which corresponds to which offers to accept, conditional on his information δ. The defendant's strategy is given by σ(γ), which represents the offer that the defendant makes conditional on his information. We will solve this game by applying Perfect Bayesian Equilibrium.
We let a*(s; δ, μ) denote the equilibrium strategy of the plaintiff and let σ*(γ, ν) denote the equilibrium strategy of the defendant. Here, μ denotes the plaintiff's beliefs about the jury pool and ν denotes the defendant's beliefs about the damages incurred. We can state the conditions for Perfect Bayesian Equilibrium as follows,

a*(s; δ, μ) = 1  ⟺  s ≥ E_μ[γ | s] δ

σ*(γ, ν) ∈ argmin_{s ≥ 0} E_ν[ 1{a*(s; δ, μ) = 1} s + 1{a*(s; δ, μ) = 0} γδ ]

ν(δ) = f_δ(δ)

μ(γ | s) = f_γ(γ) 1{σ*(γ, ν) = s} / ∫ f_γ(γ′) 1{σ*(γ′, ν) = s} dγ′   if   ∫ f_γ(γ′) 1{σ*(γ′, ν) = s} dγ′ > 0

We can substitute the third equation into the others to obtain the following conditions,

a*(s; δ, μ) = 1  ⟺  s ≥ E_μ[γ | s] δ

σ*(γ) ∈ argmin_{s ≥ 0} E_δ[ 1{a*(s; δ, μ) = 1} s + 1{a*(s; δ, μ) = 0} γδ ]

μ(γ | s) = f_γ(γ) 1{σ*(γ) = s} / ∫ f_γ(γ′) 1{σ*(γ′) = s} dγ′   if   ∫ f_γ(γ′) 1{σ*(γ′) = s} dγ′ > 0

Let us look for an equilibrium where σ*(γ) is one-to-one. In this case, μ places probability 1 on the true γ. Hence, we have

a*(s; δ) = 1  ⟺  s ≥ γδ

σ*(γ) ∈ argmin_{s ≥ 0} E_δ[ 1{s ≥ γδ} s + 1{s < γδ} γδ ]

We can substitute the first equation into the second to obtain

σ*(γ) ∈ argmin_{s ≥ 0} E_δ[ 1{s ≥ γδ} s + 1{s < γδ} γδ ] = argmin_{s ≥ 0} [ s F_δ(s/γ) + γ ∫_{s/γ}^∞ δ dF_δ(δ) ]

We can minimize the function s F_δ(s/γ) + γ ∫_{s/γ}^∞ δ dF_δ(δ) over s for any given γ to obtain σ*(γ), which suffices to solve the game. The first-order condition is given by

F_δ(s/γ) + (s/γ) f_δ(s/γ) - (s/γ) f_δ(s/γ) = F_δ(s/γ) = 0

Notice that when γ = 1, we have s = 0 as the unique solution. If both players are risk neutral, there is no need to bargain to avoid trial.
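We can confirm this numerically by grid search over offers, taking δ ~ Uniform(0, 1) as a hypothetical choice of F_δ (the distribution is an assumption made only for this check):

```python
import numpy as np

# Grid search for the defendant's optimal offer when delta ~ Uniform(0, 1).
# The objective is the expected payment
#   s * F_delta(s/gamma) + gamma * E[delta; delta > s/gamma].
def expected_payment(s, gamma, n=10_000):
    delta = (np.arange(n) + 0.5) / n        # quadrature nodes on (0, 1)
    accept = s >= gamma * delta             # plaintiff accepts iff s >= gamma*delta
    return np.mean(np.where(accept, s, gamma * delta))

grid = np.linspace(0.0, 2.0, 401)
for gamma in (0.5, 1.0, 2.0):
    vals = [expected_payment(s, gamma) for s in grid]
    print(gamma, grid[int(np.argmin(vals))])   # the optimal offer is s = 0
```

Under this uniform F_δ the expected payment is increasing in s for every γ shown, consistent with the first-order condition above.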

Example 2: Pricing Competition

Suppose that two firms produce a product at costs c1 and c2 . Each firm knows its
own cost, but the other firm's cost is only known to be drawn from the distribution F (c) .
Each firm must decide how much to produce, x1 ∈ [0, 1/2] and x2 ∈ [0, 1/2] , with total
output x = x1 + x2 . Demand for the product is given by 1 − p , where p is the price of
the product. The price is chosen to equalize demand and supply, so that p = 1 − x1 − x2 .
A firm's profit is then,

Π1 ( x1 ; c1 ) = ∫ (1 − x1 − x2*(c2) − c1) x1 dF (c2)

Π2 ( x2 ; c2 ) = ∫ (1 − x1*(c1) − x2 − c2) x2 dF (c1)

An equilibrium satisfies,

x1*(c1) = arg max_{x1 ∈ [0,1/2]} ∫ (1 − x1 − x2*(c2) − c1) x1 dF (c2)   for 0 ≤ c1 ≤ 1

x2*(c2) = arg max_{x2 ∈ [0,1/2]} ∫ (1 − x1*(c1) − x2 − c2) x2 dF (c1)   for 0 ≤ c2 ≤ 1

Notice that we have,

x1*(c1) = arg max_{x1 ∈ [0,1/2]}  x1 (1 − c1) − x1² − x1 ∫ x2*(c2) dF (c2)

x2*(c2) = arg max_{x2 ∈ [0,1/2]}  x2 (1 − c2) − x2² − x2 ∫ x1*(c1) dF (c1)

Notice that the first-order conditions imply that,

x1*(c1) = (1/2)(1 − c1) − (1/2) ∫ x2*(c2) dF (c2)

x2*(c2) = (1/2)(1 − c2) − (1/2) ∫ x1*(c1) dF (c1)

We can insert the first equation into the second to obtain,

x2*(c2) = 1/4 − (1/2) c2 + (1/4) E[c] + (1/4) E[ x2*(c2) ]

Taking an expectation of both sides yields,

E[ x2*(c2) ] = (1/3)(1 − E[c])

Finally, we can plug this into the above equation to get,

x1*(c1) = 1/3 − (1/2) c1 + (1/6) E[c] ,

x2*(c2) = 1/3 − (1/2) c2 + (1/6) E[c]

Hence, the optimal strategies only depend on the distribution of costs through its mean.
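As a quick sanity check on the algebra, the candidate x*(c) = 1/3 − c/2 + E[c]/6 can be plugged back into the first-order condition x*(c) = (1 − c)/2 − E[x*]/2 for an arbitrary cost distribution; the discrete support points and weights below are made up purely for illustration.

```python
# Verify that x*(c) = 1/3 - c/2 + E[c]/6 solves the fixed point implied by the
# first-order conditions x*(c) = (1 - c)/2 - E[x*]/2. The discrete cost
# distribution here is arbitrary and used only for illustration.

costs = [0.1, 0.3, 0.5, 0.9]
weights = [0.4, 0.3, 0.2, 0.1]   # probabilities, sum to 1

mean_c = sum(w * c for w, c in zip(weights, costs))

def x_star(c):
    return 1/3 - c/2 + mean_c/6

mean_x = sum(w * x_star(c) for w, c in zip(weights, costs))

# residual of the best-response condition at every support point
residual = max(abs(x_star(c) - ((1 - c)/2 - mean_x/2)) for c in costs)
print(residual)
```

The residual is zero up to rounding, and E[x*] comes out to (1/3)(1 − E[c]), matching the expectation computed above.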
Suppose, however, that we did not know this problem admitted an analytical
solution. Consider solving this problem by approximating it on a finite grid,

x1*(c1) = arg max_{x1 ∈ [0,1/2]} ∫ (1 − x1 − x2*(c2) − c1) x1 dF (c2)   for 0 ≤ c1 ≤ 1

x2*(c2) = arg max_{x2 ∈ [0,1/2]} ∫ (1 − x1*(c1) − x2 − c2) x2 dF (c1)   for 0 ≤ c2 ≤ 1

Placing this on a cost grid {c_1, ..., c_n} yields,

a_i = arg max_{a_j}  Σ_{k=1}^{n} (1 − a_j − b_k − c_i) a_j ΔF_k^2

b_i = arg max_{b_j}  Σ_{k=1}^{n} (1 − a_k − b_j − c_i) b_j ΔF_k^1

where a_i and b_i approximate x1*(c_i) and x2*(c_i) , and ΔF_k^m denotes the
probability that F_m assigns to grid point c_k .

Consider the case where F1 = F2 = Uniform(0,1) . The true solution is,

x1*(c1) = 5/12 − (1/2) c1 ,

x2*(c2) = 5/12 − (1/2) c2

We computed the solution using grid sizes of n = 21, 51, 101, 201 , and produced the
results below. Results were computed using a starting value of x1 = x2 = 0.5 and a
dampening factor of 0.1 (although a dampening factor of 1.0 seems to work as well).

[Figure: computed strategies x*(c) plotted against cost c for grid sizes n = 21, 51, 101, 201]
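The dampened fixed-point computation just described is short to implement. The sketch below iterates the unconstrained first-order conditions, x_i ← (1 − ω)x_i + ω[(1 − c_i)/2 − mean(x)/2], for uniform costs and compares the result to the analytical solution x*(c) = 5/12 − c/2; the midpoint grid and ω = 0.5 are our own choices.

```python
# Dampened fixed-point iteration for the symmetric quantity game with private
# costs c ~ Uniform(0,1): iterate x_i <- (1-w)*x_i + w*BR_i(x), where
# BR_i(x) = (1 - c_i)/2 - mean(x)/2 is the unconstrained first-order condition.
# The midpoint grid and the dampening factor w are our own choices.

n = 101
c = [(i + 0.5) / n for i in range(n)]   # midpoint cost grid, mean exactly 0.5
x = [0.5] * n                           # starting value
w = 0.5                                 # dampening factor

for _ in range(200):
    mean_x = sum(x) / n
    x = [(1 - w) * xi + w * ((1 - ci) / 2 - mean_x / 2)
         for xi, ci in zip(x, c)]

# analytical solution for uniform costs: x*(c) = 5/12 - c/2
err = max(abs(xi - (5/12 - ci/2)) for xi, ci in zip(x, c))
print(err)
```

The iteration is a contraction here (the update depends on the opponent only through mean(x) with coefficient 1/2), so it converges to the analytical solution to machine precision.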

Example 3: War with Simultaneous Offers

Consider a model where two countries, 1 and 2, must decide whether to fight a war.
Each country makes a proposal x_k ∈ [0,1] on how much of a pie to allocate to country 1.
In the event that x1 ≥ x2 , the offers can be reconciled and a mediator country
allocates the surplus so that country 1 receives λ x1 + (1 − λ) x2 and
country 2 receives 1 − λ x1 − (1 − λ) x2 . In the event that their offers cannot be reconciled
(i.e. x1 < x2 ), the countries must go to war to settle their differences. Let 0 < p < 1 denote
the probability that country 1 wins the war and let δ ≤ 1 denote the fraction of the surplus
that is left after the war. The winning country obtains the whole leftover pie in the event
of a war, of size δ . Each country must pay a cost c_k to fight the war, which is drawn from a
distribution F_k which has support on [0,1] . Hence, the utilities of each player in the event
of a war are δ p − c1 and δ (1 − p) − c2 .
Let x1*(c1) and x2*(c2) denote the optimal strategies of both players. We
conclude that the equilibrium satisfies the functional equations,

x1*(c1) = arg max_{x1 ∈ [0,1]} ∫ { 1{ x1 ≥ x2*(c2) } ( λ x1 + (1 − λ) x2*(c2) )
          + 1{ x1 < x2*(c2) } ( δ p − c1 ) } dF2 (c2)

x2*(c2) = arg max_{x2 ∈ [0,1]} ∫ { 1{ x1*(c1) ≥ x2 } [ 1 − λ x1*(c1) − (1 − λ) x2 ]
          + 1{ x1*(c1) < x2 } ( δ (1 − p) − c2 ) } dF1 (c1)

Playing around with the conditions will indicate that the model does not have an
analytical solution, even when both costs are assumed to be uniformly distributed.
In order to solve this problem numerically, we must somehow translate a
functional problem into a finite one. The first approach we can use is to approximate this
functional problem with a finite grid. Consider approximating this problem on a finite
grid of n points, {0, 1/(n−1), 2/(n−1), ..., 1} . We let a_i denote the strategies of country 1 and b_i
denote the strategies of country 2. We have the following two equations,
a_i ∈ arg max_{a ∈ {0, 1/(n−1), ..., 1}}  Σ_{k=0}^{n−1} [ 1{ a ≥ b_k } ( λ a + (1 − λ) b_k )
      + 1{ a < b_k } ( δ p − i/(n−1) ) ] ΔF_k^2

b_i ∈ arg max_{b ∈ {0, 1/(n−1), ..., 1}}  Σ_{k=0}^{n−1} [ 1{ a_k ≥ b } ( 1 − λ a_k − (1 − λ) b )
      + 1{ a_k < b } ( δ (1 − p) − i/(n−1) ) ] ΔF_k^1

where i/(n−1) is the cost at the i-th grid point and ΔF_k^m denotes the probability
that F_m assigns to the k-th grid point.

Notice further that we now have a finite fixed-point problem, which suggests a solution
method: (dampened) fixed-point iteration.
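Dampened fixed-point iteration replaces the plain update x ← T(x) with x ← (1 − ω)x + ωT(x), which trades speed for stability when T alone would oscillate or diverge. A minimal scalar sketch (the map cos(x), the starting point, and ω are purely illustrative):

```python
import math

# Dampened fixed-point iteration: x <- (1 - w)*x + w*T(x).
# The map T(x) = cos(x) has a unique fixed point near 0.739; with w = 1 this
# is plain fixed-point iteration, while smaller w slows but steadies the updates.

def dampened_fixed_point(T, x0, w=0.5, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_new = (1 - w) * x + w * T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

x = dampened_fixed_point(math.cos, x0=0.5, w=0.5)
print(x)
```

The same template applies to the game above: T maps the vector of grid strategies (a, b) to the best responses, and ω is the dampening factor.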


Let us try to solve this problem in the special case where p = 0.75 , λ = 0.5 ,
δ = 1 , and F1 = F2 = Uniform(0, 0.2) . When we started the fixed-point iterations from 0.5
with a dampening factor of 1, the algorithm converged to a point where both players offer to hold onto
the entire pie by themselves. When we used a dampening factor of 0.1, we obtained the following more
interesting equilibrium,

[Figure: computed equilibrium strategies x1 and x2 plotted against cost]
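The computation behind these results can be sketched directly. The code below is our own illustration: grid sizes, the iteration count, and the dampening factor are arbitrary choices, and the opponent's cost distribution is approximated by equal weights on an 11-point grid over [0, 0.2].

```python
# Dampened best-response iteration for the war game on a grid, with
# p = 0.75, lam = 0.5, delta = 1, and costs weighted uniformly on a grid over
# [0, 0.2]. If x1 >= x2 the mediator awards country 1 the share
# lam*x1 + (1-lam)*x2; otherwise war yields delta*p - c1 for country 1 and
# delta*(1-p) - c2 for country 2. All sizes below are illustrative choices.

p, lam, delta = 0.75, 0.5, 1.0
n_cost, n_offer, w = 11, 51, 0.1

costs = [0.2 * i / (n_cost - 1) for i in range(n_cost)]
offers = [j / (n_offer - 1) for j in range(n_offer)]

def br1(b, c1):
    # country 1's best proposal against country 2's strategy b (one entry per cost)
    def payoff(a):
        return sum(lam * a + (1 - lam) * bk if a >= bk else delta * p - c1
                   for bk in b) / n_cost
    return max(offers, key=payoff)

def br2(a, c2):
    # country 2's best proposal against country 1's strategy a
    def payoff(bj):
        return sum(1 - lam * ak - (1 - lam) * bj if ak >= bj
                   else delta * (1 - p) - c2 for ak in a) / n_cost
    return max(offers, key=payoff)

x1, x2 = [0.5] * n_cost, [0.5] * n_cost
for _ in range(200):
    new1 = [br1(x2, c) for c in costs]
    new2 = [br2(x1, c) for c in costs]
    x1 = [(1 - w) * o + w * v for o, v in zip(x1, new1)]
    x2 = [(1 - w) * o + w * v for o, v in zip(x2, new2)]

print(x1[0], x2[0])
```

Note that dampening mixes the previous iterate with a grid argmax, so the iterates need not lie on the offer grid; this is the price of the added stability.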

8.9 References

[1] Plott, Charles (1967). "A Notion of Equilibrium and its Possibility Under
Majority Rule." American Economic Review 57: 787-806.

[2] Chvatal, Vasek (1983). Linear Programming. Chapter 15.

[3] McKelvey, Richard D. (1992). "A Liapunov Function for Nash Equilibria."
Working Paper.

[4] McKelvey, Richard D., and Andrew McLennan (1996). "Computation of Equilibria
in Finite Games." Handbook of Computational Economics, Vol. 1.

[5] von Stengel, Bernhard (1999). "Computing Equilibria for Two-Person Games."
Handbook of Game Theory, Vol. 3.

[6] Banks, Jeffrey S., John Duggan, and Michel Le Breton (2002). "Bounds for Mixed
Strategy Equilibria and the Spatial Model of Elections." Journal of Economic
Theory 103: 88-105.

[7] Duggan, John, and Matthew O. Jackson (2006). "Mixed Strategy Equilibrium and
Deep Covering in Multidimensional Electoral Competition." Working Paper.
